Wednesday, May 6, 2009

A faster Internet?

The Internet is founded on a very simple premise: shared communications links are more efficient than dedicated channels that lie idle much of the time. And so we share. We share local area networks at work and neighborhood links from home. And then we share again: at any given time, a terabit backbone cable is shared among thousands of users surfing the Web and downloading videos. But there's a profound flaw in the protocol that governs how people share the Internet's capacity. The protocol allows you to appear polite even as you elbow others aside, taking far more resources than they do.

You might be shocked to learn that the designers of the Internet intended that your share of Internet capacity would be determined by what your own software considered fair. They gave network operators no mediating role between the conflicting demands of the Internet's hosts. The Internet's primary sharing algorithm is built into the Transmission Control Protocol, a routine on your own computer that most programs use whenever they send data. TCP is one of the twin pillars of the Internet, the other being the Internet Protocol, which delivers packets of data to particular addresses. The two together are often called TCP/IP.

Forcing the way!
Your TCP routine constantly increases your transmission rate until packets fail to get through. Then TCP very politely halves your bit rate. The cycle is known as additive increase, multiplicative decrease. Quite a name, isn't it? All the other TCP routines around the Internet behave in just the same way, in a cycle of taking, then giving, that fills the pipes while sharing them equally.
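To make the cycle concrete, here is a minimal sketch in Python of that increase-then-halve rule. The function name and the single boolean loss signal are simplifications for illustration, not real kernel code.

    # Sketch of TCP's congestion cycle: add one packet per round trip,
    # halve the sending window when congestion (packet loss) is detected.
    def update_window(cwnd_packets, loss_detected):
        if loss_detected:
            return max(1.0, cwnd_packets / 2.0)  # polite halving on loss
        return cwnd_packets + 1.0                # steady probing for more capacity

    # Example: a flow ramping up until the pipe overflows, then backing off.
    cwnd = 1.0
    for rtt in range(20):
        cwnd = update_window(cwnd, loss_detected=(cwnd >= 16))
        # cwnd climbs 1, 2, 3, ... then drops back to half once it hits the limit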

Fair play?
An equal bit rate for each data flow is likely to be extremely unfair by any realistic definition. It's like insisting that boxes of food rations must all be the same size, no matter how often each person returns for more or how many boxes are taken each time. But any programmer can simply run the TCP routine multiple times to get multiple shares. It's much like getting around a food-rationing system by duplicating ration coupons. This trick has always been recognized as a way to sidestep TCP's rules: the first Web browsers opened four TCP connections at once.
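A little back-of-the-envelope Python shows why opening extra connections pays off under equal-per-flow sharing. The link capacity and flow counts below are made-up numbers for illustration only.

    # Assumed scenario: a 100 Mb/s bottleneck, nine users with one TCP flow
    # each, and one user running four flows, as the early browsers did.
    capacity_mbps = 100.0
    single_flow_users = 9
    my_flows = 4
    per_flow_share = capacity_mbps / (single_flow_users + my_flows)
    print("each polite user gets about %.1f Mb/s" % per_flow_share)                  # ~7.7
    print("the four-flow user gets about %.1f Mb/s" % (per_flow_share * my_flows))   # ~30.8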

The solution!
There's a far better solution, according to Bob Briscoe. It would allow light browsing to go blisteringly fast while hardly prolonging heavy downloads at all. The solution comes in two parts. It begins by making it easier for programmers to run TCP multiple times, a deliberate break from TCP-friendliness. They set a new parameter, a weight, so that whenever your data comes up against other flows all trying to get through the same bottleneck, you get a share of the total capacity in proportion to your weight. The key is to set the weights high for light interactive usage, like surfing the Web, and low for heavy usage, such as movie downloading.
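One way such a weight could plug into the same increase-and-halve cycle is to make a flow behave like w ordinary TCP flows, in the spirit of earlier weighted schemes such as MulTCP. The sketch below illustrates that idea; it is an assumption for explanation, not Briscoe's exact mechanism.

    # Weighted variant of the congestion cycle: a flow with weight w grows by
    # w packets per round trip and gives back only a 1/(2w) fraction on loss,
    # so competing flows settle at rates roughly proportional to their weights.
    def update_weighted_window(cwnd_packets, weight, loss_detected):
        if loss_detected:
            return max(1.0, cwnd_packets * (1.0 - 1.0 / (2.0 * weight)))
        return cwnd_packets + weight

With weights like these, a browsing flow given weight 8 would briefly take roughly eight times the bandwidth of a weight-1 movie download sharing the same bottleneck, which is exactly the light-fast, heavy-slow split described above.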

Imagine a world where some Internet service providers offer a deal at a flat price but with a monthly congestion-volume allowance. Note that this allowance doesn't limit downloads as such; it limits only those that persist during congestion. If you used a peer-to-peer program like BitTorrent to download 10 videos continuously, you wouldn't bust your allowance so long as your TCP weight was set low enough. Your downloads would draw back during the brief moments when flows with higher weights came along. But in the end, your video downloads would finish hardly later than they do today.
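Here is one plausible way such an allowance might be metered, sketched in Python. Counting only congestion-marked bytes (for example, ECN-marked traffic), and all names and figures here, are assumptions for illustration, not a description of any particular ISP's system.

    # Sketch of a congestion-volume meter: only bytes sent while the network
    # is signalling congestion count against the monthly allowance.
    class CongestionVolumeMeter:
        def __init__(self, monthly_allowance_bytes):
            self.allowance = monthly_allowance_bytes
            self.used = 0

        def record(self, bytes_sent, congestion_signalled):
            if congestion_signalled:      # uncongested traffic is effectively free
                self.used += bytes_sent

        def over_allowance(self):
            return self.used > self.allowance

    # A low-weight BitTorrent flow backs off almost immediately when congestion
    # appears, so few of its bytes coincide with congestion signals and the
    # allowance is barely touched, even over many long downloads.
    meter = CongestionVolumeMeter(monthly_allowance_bytes=1_000_000_000)  # 1 GB, assumed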

References
  • http://www.cs.ucl.ac.uk/staff/bbriscoe/projects/refb/
  • www.spectrum.ieee.org