Monday, September 01, 2003
We don’t need no stinking email
A variety of problems have plagued email systems recently. Sobig clogged mail servers and overwhelmed email clients. A variety of solutions have been proposed, such as better procedures, better user education, and more government regulation. Some have said no solution will ever work because the profit motive will always win out. If solutions like these worked, we would not have any crime. I think there is a fundamental flaw in the concept of email: it was developed in an environment that envisioned neither the profit motive nor people who like to cause trouble.

As I scan my sent mail folder, here are some of the things I do with email: communicate with co-workers, communicate with friends, conduct commercial transactions, transfer files, communicate with groups of people, and synchronize information between the different computing platforms that I use. As security becomes a more important concern, I note that my email is not encrypted, though most of it travels in SSL tunnels. Now that I am using Evolution as my email client, I can use PGP. Perhaps I should call it Novell Evolution.

There is a communication system that started from a different perspective and could be adapted to provide the kinds of communication I need without all of the problems email has. Instant messaging starts with the premise that you want to know who you communicate with. You have a list of buddies. Anonymous people can’t send you messages, just a request to talk to you. You can remove anybody from your list, temporarily or permanently. Normally you communicate with a person and exchange some piece of information, such as a screen name, before the conversation. The main advantage of IM is that you can immediately send a message or a file to somebody. Right now, IM is missing two things that email systems have: a way to send a message or a file to somebody who is not currently available, and a way to file the information you have received. ICQ has some offline facilities, I believe. My guess is that it would be easier to add the missing features to IM than to try to fix the email system. 2:13:58 PM
Friday, August 08, 2003
Keep your parents off the net!
I have not referenced other internet content before, but this is really funny. 9:05:02 AM
Wednesday, August 06, 2003
Copying DVDs
I just got a DVD+RW burner, so I can now make copies of DVDs. Since I have the Blockbuster Movie Freedom Pass, I have access to all of Blockbuster’s DVDs. I do not know if the Movie Freedom Pass is available everywhere, but we have had it in Salt Lake since the first of the year. Anyway, I have not found a reason to copy any DVDs yet. I asked myself why I would want to buy blanks, do the copying, and find a way to store a bunch of DVDs, and I could not come up with a good answer. If I want to see a particular DVD, I just pick it up the next time I am at Blockbuster. Given the time it would take to download a movie, and the cost and hassle of copying it, I am not sure that the movie studios are going to have the same problem the music industry is having. 1:00:06 PM
Monday, July 21, 2003
What Makes a Killer Application
I have been busy configuring my home server machine on Redhat Linux 9. In the process I have been thinking about the idea of a killer application: something that a bunch of people want, or need, or think that they need. When a lot of people want something, enormous resources get directed at making that something available, and competitors offer people a variety of choices. Right now, cell phones that take pictures and send them to other cell phones for viewing or storage are one candidate for a killer app. Another candidate is instant messaging on any internet-connected device, which could also be a cell phone. My candidate for the killer application is a personal internet server.

I have been using Network Solutions for my email for a couple of years. The impetus came from three forced switches of my home email address as companies bought other companies or sold my home DSL line to another company. Every internet service I had signed up for needed to have its email address switched. I went looking for a POP provider that would let me have my own domain name and also provide SMTP, so that I could use Outlook for my personal and business email. At the time, Network Solutions had the best deal. The problem I have had since Network Solutions and Verisign came together is that I could not get my two POP passwords and the password for managing my domains to stay set. I would change them via the web interface or through a customer service person, and they would work. A few days later the passwords would somehow be changed.

When I got real IP addresses, I saw a solution to my Network Solutions problem. I had an old Optiplex that would make a good network server. I loaded Redhat 9 on it, and presto, I had a potential network server. Linux has the potential to be a very robust server for a variety of services. It can be a web server. It can be an email server for the popular email clients, with services such as SMTP, POP, and secure IMAP, and it offers more security-minded alternatives such as Postfix. It can be an accurate time server, a secure remote login server, a domain name server, a local LAN address server (DHCP), a Windows file sharing server (Samba), a Quake server, a print server, a web cache server, a secure tunnel server, and a secure file transfer server. There are also a host of funky technical things like TFTP that could be used for a variety of purposes. The question for me was how to get the servers configured and working to do what I wanted done: Domain Name Service (DNS), secure IMAP, SMTP, network time, secure login, secure file transfer, and secure tunneling. The SSH program provides secure login, file transfer, and tunneling.

The big question for a killer app is how difficult it is going to be to make the application work. Unfortunately, UNIX documentation sucks. Microsoft realized that most people don’t want to read a manual to set up a program, so they invested a bunch of time making application configuration and use somewhat intuitive. They provide buttons or lists of the things that most people want to do. I have been a UNIX administrator on several different types of UNIX. The Linux community is in the process of creating easy-to-use tools that make the powerful Linux and UNIX software accessible to the non-geek community. The model is to go to something like a control panel, find the aspect of the machine you want to change, click a few buttons, and fill in a few boxes.
Setting up my Redhat Linux machine as a web server looks very simple: it is just a matter of clicking a few boxes during the install process. Network time was also easy; I just said I wanted it during the install and gave it a server name. SSH was a check box. DNS and email were another story. To get IMAPS up I needed to learn how to turn on services (control panel -> services). I also needed to learn how to add rules to the firewall (edit /etc/sysconfig/iptables). DNS required using a configuration tool (control panel -> DNS) and then editing the files it produces (/var/named) to actually make it work. The firewall rules also needed to be changed because I am hosting my own domain name.

The hard one was SMTP. Passing traffic from one site to another, called being an open relay, is not something you want to do unless you like your machine being used to deliver spam. I wound up tweaking a bunch of files, and I am still not sure which ones affect email delivery. I changed the list of machines allowed local net access (/etc/hosts.allow). I may have set up Postfix to enhance security (/etc/postfix/main.cf). I added to the list of machines that are allowed to relay email (/etc/mail/access). The firewall was already allowing incoming SMTP. The thing that made it start working was modifying the sendmail configuration (/etc/mail/sendmail.mc) to allow it to talk to all of the hosts on the internet.

So, continued user-friendliness in the Linux community could make very stable internet servers available to anyone with an old PC, a CD-ROM burner to get their neighbor’s copy of Redhat, an always-on connection, and a real internet address. Real internet addresses typically cost money right now. With IPV6, they should be free. 2:29:37 PM
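Since the open relay question caused me the most grief, here is a rough sketch of the kind of check I would run after touching any of those files. It is a minimal sketch in Python, assuming a hypothetical server name of mail.example.org and made-up test addresses; substitute your own host, and run it from a machine outside your LAN so the relay test reflects what the rest of the internet sees.

```python
# Quick sanity checks for a freshly configured home mail server.
# A minimal sketch: "mail.example.org" and the test addresses below are
# placeholders, not my real configuration.
import imaplib
import smtplib

HOST = "mail.example.org"  # hypothetical; use your own server name


def check_imaps(host: str) -> None:
    """Confirm that secure IMAP (port 993) answers with a greeting."""
    conn = imaplib.IMAP4_SSL(host, 993)
    print("IMAPS greeting:", conn.welcome)
    conn.logout()


def check_open_relay(host: str) -> None:
    """See whether the SMTP server will relay mail between two outside domains.

    A well-behaved server should reject the RCPT command with a 5xx code;
    a 250 here means the box is an open relay and spammers will find it.
    """
    smtp = smtplib.SMTP(host, 25, timeout=15)
    smtp.helo("relay-test.example.net")
    smtp.mail("outsider@example.net")
    code, message = smtp.rcpt("victim@example.com")
    smtp.quit()
    if code == 250:
        print("WARNING: server accepted relaying (open relay).")
    else:
        print("Relay refused as expected:", code, message.decode(errors="replace"))


if __name__ == "__main__":
    check_imaps(HOST)
    check_open_relay(HOST)
```

If the relay test prints a warning, the server said 250 to a recipient in a domain it does not host, which is exactly the behavior the spammers scan for.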
Sunday, June 29, 2003
Home Wireless Fixed and NAT Update. My home wireless network is now working; I am posting this blog item from it. The problem appeared to be some sort of NAT-related thing, so I went through the two-week exercise of getting static addresses. As soon as I had the static addresses configured, I tried the wireless to see what would happen. It acted the same way. Having static addresses ruled out a bunch of possible causes; all that was left was looking at bits. What I found was that the Linksys 1.010 firmware would, at random points in a TCP session, flip a bit in the source-port or flags field and recompute the checksum. That seems like pretty strange behavior for a device that claims to be a hub. So the problem was not the NAT on the DSL modem, but a NAT-like feature or bug in the Linksys. Firmware 1.01c behaves like a regular hub, and I can use it to post this blog. 7:35:26 PM
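For the curious, the checksum math is easy to reproduce. Below is a minimal sketch of the standard internet ones’-complement checksum (RFC 1071) over a made-up slice of a TCP header; the header bytes are invented for illustration. It shows why the symptom is so sneaky: once the device recomputes the checksum after flipping a bit, the mangled segment still looks perfectly valid to the receiving host, so the damage shows up as application weirdness rather than as checksum errors.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (RFC 1071) of a byte string."""
    if len(data) % 2:                      # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF


# A made-up 8-byte slice of a TCP header: source port 1035, dest port 25 (SMTP),
# and the start of a sequence number. Real checksums also cover a pseudo-header
# and the payload, but the principle is the same.
original = bytes([0x04, 0x0B, 0x00, 0x19, 0x12, 0x34, 0x56, 0x78])

tampered = bytearray(original)
tampered[0] ^= 0x01                        # flip one bit in the source-port field

print(hex(internet_checksum(original)))          # checksum over the original bytes
print(hex(internet_checksum(bytes(tampered))))   # a different value, equally "valid"
```

A receiver can only verify that the bytes match the checksum that arrived with them; it has no way to know the checksum was recomputed over already-corrupted bytes.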
No IPV4 Shortage? The discussion between proponents of IPV4 and IPV6 continues. This article discusses the idea that IPV4 is fine for the next 20 years and references other discussions of the topic, including articles that make the case that IPV6 is needed now. Current allocation policies chew up only 4 class A’s per year, and we have about 100 class A-sized blocks left. On the other hand, 3G cell phones will have most handsets using an IP address most of the time, with 1 billion predicted users, so that one application alone could use up 60% of the remaining address space. That also assumes one address per person. My cell phone and my laptop with a 3G card will both need an address and might communicate with each other. Maybe my PDA will need one too. In one sense both sides are right: we can stretch the IPV4 address space for a long time, but there are costs to the strategies that stretch it. Strategies like NAT take time to develop, manage, and debug. NAT makes some applications work, some applications not work, and other applications work in a different way. For me, changing the way internet applications work to “fix” the IPV4 address shortage is the main issue. If the 3G providers want to force customers to connect to other customers through a server, that is fine. If the only reason they are implementing servers is that they can’t get enough addresses to do what they really want to do, then I hope they provide an IPV6 alternative.

What I see happening is that the internet is being divided into separate groups. A few years ago I was working for a company that restricted internet access to whatever you could get via a proxy server. I like to think of people connected in this way as third-class internet citizens. My definition of third-class service is using application gateways, or implementing firewall policies that start out by blocking everything and then make it difficult or impossible to add useful services. I would like to point out that when you let anything at all go into or out of the internet, some smart person will find a way to get viruses, worms, and even file sharing into your network.

At the same time that my work offered third-class service, I bought a new service called DSL that offered first-class service: a real address, albeit not the same one each time, with no blocking or filtering applied. This let me do things like play Age of Empires on two home machines connected to the Gaming Zone server. Very cool. It didn’t let me do DNS on my real address without some fancy goofing around, so I could not have my own DNS or email server. Still and all, it was much better. If some new internet thing is developed, I can try it out, and firewall policy is up to me, which makes my life much easier. A little later, my ISP decided to do a mandatory downgrade of my service and move me into the realm of second-class internet citizenry: I got NAT’ed. This immediately broke my Gaming Zone fun. About a year later, somebody figured out how to exempt a single machine on your LAN from the port overloading, which allowed me to play with a single home machine. NAT is a very nasty idea. It changes basic assumptions about how TCP works. The thing I most object to with NAT is that in most implementations it is difficult to quickly determine whether it is causing problems. If I suspect that a firewall is breaking something, I can quickly turn it off and see if the problem goes away. Turning off NAT is neither simple nor quick.
To turn off NAT for my home DSL line, I needed to cancel service with my ISP, wait a week for them to process the cancellation, order a new DSL line from another provider that offers static addresses, and wait another week for it to be installed. Most of the costs of NAT are hidden and only revealed when you least want to pay them. At work, one of our schools found out that in order to make the out-of-state video call they wanted, they would need to switch from their private, supposedly NAT-able addresses to real addresses. Two network engineers spent two days finding all of the hidden dependencies between the two sites that needed to be tweaked before the real addresses could be made to work. But I digress.

Solutions like NAT were temporary measures to make the IPV4 address space last until the internet community could deploy a plan with enough addresses. NAT raises the bar for developing a new internet application. Take instant messaging, for example. The only practical way to implement it now is to put an application on each machine that connects to a central server. When that server is down, your instant messaging is down. This is a regular occurrence at work, and it is even more baffling when it affects some machines regularly and leaves others untouched. If the majority of the internet community were first-class citizens, somebody would design an IM system that is more resilient, more platform-independent, and gives more control to the users. Right now, as a Microsoft IM user, I am getting system broadcasts with sales messages which I think are connected to my IM installation. I will need to spend time tracking this down, evaluating possible solutions, and implementing something. One solution I have rejected out of hand is the ad, delivered by a system broadcast message, offering a system broadcast blocker for a mere $39.95. I have not found the place to turn off these messages in my IM client yet. My vote is for a system like IPV6 that allows everybody to be a first-class internet citizen. 10:31:05 AM
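As a sanity check on the class A arithmetic near the top of this post, here is a quick back-of-the-envelope calculation. The block count, burn rate, and 3G user figure are the rough numbers quoted above, not authoritative allocation data.

```python
# Back-of-the-envelope check of the class A arithmetic above.
addresses_per_class_a = 2 ** 24                 # a class A (/8) holds ~16.8 million addresses
remaining_blocks = 100
remaining_addresses = remaining_blocks * addresses_per_class_a

predicted_3g_users = 1_000_000_000              # one address per user, the simple case
fraction_used_by_3g = predicted_3g_users / remaining_addresses

burn_rate = 4                                   # class A blocks consumed per year today
years_left_at_current_pace = remaining_blocks / burn_rate

print(f"Remaining space: about {remaining_addresses:,} addresses")
print(f"3G alone would consume about {fraction_used_by_3g:.0%} of it")
print(f"At {burn_rate} blocks per year, the space lasts roughly {years_left_at_current_pace:.0f} more years")
```

The two views really are consistent: at today’s burn rate the space lasts a couple of decades, but a single new always-on application can change the burn rate dramatically.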
Monday, June 23, 2003
NAT is Evil
I am learning something about NAT. I am also learning something about blogging. For a while, I would write a blog entry and then wait for the outcome, so I haven’t been blogging for several months. Blogging is about documenting the process. Anyway, back to NAT. I have been having problems with my home network. I have a Dell desktop, a switch that calls itself a hub, and a wireless access point, and I have been having all kinds of trouble with an intermittent inability to do things on the network. Being an engineer by temperament, I took this situation as an opportunity to learn about my home network. Here are some of the things that I learned.
The reason that NAT is evil is that it breaks a fundamental design goal of TCP/IP: universal connectivity between any two nodes on the internet. A firewall also breaks the universal connectivity property of TCP/IP, but that is another blog entry. If there is universal connectivity and I can ping from one device to another, then all of the TCP/IP programs work; ping means that a device is connected. NAT breaks this property. NAT also wastes a bunch of time. It complicates the debugging process by introducing subtle differences in the way the network behaves. My original complaint was that email worked on my laptop connected via wireless, but that SSH, some web sites, and some other applications didn’t. The aha moment came when I powered off all of the network and computer components, powered them back on, and found that one of the “broken” web sites now worked, but email didn’t. The problem was port-specific. At this point I decided that becoming an expert on the vagaries of NAT was not a good use of my time. My new DSL provider will give me five real addresses. I have to spend two weeks with only cellular connectivity for my laptop, but I get to be a full member of the internet community at home. 10:21:29 AM
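In hindsight, a quick per-port probe would have saved me a lot of head-scratching. Here is a minimal sketch in Python; the host name and port list are placeholders, and the point is simply that with NAT (or flaky firmware) in the path, “is it up?” stops being a yes/no question and becomes a per-port one.

```python
# A quick per-port reachability probe. With NAT or flaky firmware in the path,
# "can I reach that host?" becomes a per-port question, so check each service
# separately instead of trusting a single ping.
import socket

HOST = "mail.example.org"                 # placeholder host to test
PORTS = {22: "ssh", 25: "smtp", 80: "http", 443: "https", 993: "imaps"}


def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for port, name in sorted(PORTS.items()):
    status = "open" if probe(HOST, port) else "unreachable"
    print(f"{name:>6} ({port:>4}): {status}")
```

If this had shown me “some ports answer, others don’t, and the set changes after a power cycle,” I would have started looking at the gear in the middle a lot sooner.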