
what is old is new again

I promised to write about congestion avoidance ... here it is. Not too long ago, I discovered jg's Ramblings, which is Jim Gettys' (of X Window System and W3C fame) blog. He has been writing quite a bit about bufferbloat, a condition that causes poor network performance due to buffers (device memory) that grow large without being properly managed. He actually reached a conclusion that had been reached many years ago by John Nagle, who argued in RFC 970 that even if you have infinite buffering capacity, you can still experience congestive collapse: queueing delay grows until every packet times out and is retransmitted before it can ever be delivered, so useful throughput drops toward zero.
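Nagle's point shows up even in a toy model: if packets arrive faster than the link can drain them, the standing queue (and so the delay every new packet sees) grows without bound, no matter how much buffer memory you throw at it. Here's a minimal sketch of that idea; the rates and step counts are invented purely for illustration, not taken from anyone's measurements:

```python
# Toy discrete-time queue: packets arrive faster than the link can send them.
# With an unmanaged (effectively unbounded) buffer, the backlog -- and the
# delay each arriving packet sees -- grows without limit. Illustrative only.

def queue_backlog(arrival_rate, service_rate, steps):
    """Return the queue length (in packets) after each time step."""
    backlog = 0.0
    history = []
    for _ in range(steps):
        backlog += arrival_rate                      # packets arriving this step
        backlog = max(0.0, backlog - service_rate)   # packets the link drains
        history.append(backlog)
    return history

# 12 packets/step in, 10 packets/step out: backlog climbs by 2 every step.
history = queue_backlog(arrival_rate=12, service_rate=10, steps=100)

# The delay a newly arriving packet sees is backlog / service_rate.
print(history[-1] / 10)   # keeps growing the longer you run it
```

Once that delay exceeds the senders' retransmission timeouts, every packet gets retransmitted while its original copy is still sitting in the queue, which is exactly the collapse RFC 970 describes. Dropping (or marking) packets early is what keeps the queue, and hence the delay, bounded.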

Since this is a personal interest of mine, I decided to look into his work a bit more closely. There are several mailing lists at bufferbloat.net, where I found, to my surprise, that Eric S. Raymond is a contributor. (I guess I should not be too surprised, since a lot of the work is being done on Linux, but I didn't think he was interested in this sort of problem or project.)

I would love to work on something like this, because during my last year at SRI and grad school, it was my favorite project. Unfortunately, I have not seen any job postings for this type of work. (Granted, I have seen postings for other types of performance analysis/tuning work. I've applied for those that match my background, but have either gotten rejections or heard nothing from those companies yet.) In my experience, most companies don't recognize this as a problem, or choose to pursue it in different ways (such as buying faster links or routers). There are, of course, many algorithms that run in devices that do queue management, packet marking to signal congestion, etc., but there are very few open positions. Also, since I have not done anything like this in nearly 20 years, I'm very rusty, as compared with people who work on it regularly.