Monday 29 July 2013

Internet traffic jams, meet your robot nemesis

On an 80-core computer at the Massachusetts Institute of Technology, scientists have built a tool that might make networks significantly faster just by coming up with better algorithms.

The system, called Remy, generates its own algorithms for implementing TCP (Transmission Control Protocol), the framework used to prevent congestion on most networks. The algorithms are different from anything human developers have written, and so far they seem to work much better, according to the researchers. On one simulated network, they doubled the throughput.

Remy is not designed to run on individual PCs and servers, but someday it may be used to develop better algorithms to run on those systems, said Hari Balakrishnan, the Fujitsu Professor of Electrical Engineering and Computer Science at MIT. For now, it's churning out millions of possible algorithms and testing them against simulated networks to find the best possible one for a given objective.
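
As a rough sketch of that generate-and-test loop (the function names and scoring below are hypothetical, and the real system is far smarter than naive enumeration, as Balakrishnan notes further down), the overall shape in Python might be:

import random

def random_candidate():
    # Hypothetical: produce one candidate congestion-control rule set,
    # here just a vector of tuning parameters.
    return [random.uniform(0.0, 2.0) for _ in range(8)]

def simulate(candidate, network):
    # Hypothetical stand-in for a network simulation: return a score for
    # how well the candidate meets the chosen objective on this network.
    return sum(candidate) * network["capacity"] - network["base_delay"]

def search(simulated_networks, n_candidates=1_000_000):
    # Generate millions of candidates, test each against the simulated
    # networks, and keep the one that scores best for the objective.
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = random_candidate()
        score = min(simulate(candidate, net) for net in simulated_networks)
        if score > best_score:
            best, best_score = candidate, score
    return best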

IP networks don't dictate how fast each attached computer sends out packets or whether they keep transmitting after the network has become congested. Instead, each system makes its own decisions using some implementation of the TCP framework. Each version of TCP uses its own algorithm to determine how best to act in different conditions.
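
As a much-simplified illustration of that per-endpoint decision making, classic TCP senders follow an additive-increase, multiplicative-decrease pattern along these lines (a sketch, not any particular operating system's implementation):

class SimpleAimdSender:
    # Sketch of classic TCP-style congestion control: grow the sending
    # window slowly while transfers go well, cut it sharply on signs of
    # congestion. Real implementations add slow start, timeouts and more.

    def __init__(self):
        self.cwnd = 1.0  # congestion window, in packets

    def on_ack(self):
        # An acknowledged packet suggests the path has spare capacity:
        # additive increase, roughly one packet per round trip.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self):
        # A dropped packet suggests congestion: multiplicative decrease.
        self.cwnd = max(1.0, self.cwnd / 2.0)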

These implementations of TCP have been refined many times over the past 30 years and sometimes fine-tuned for particular networks and applications. For example, a Web browser may put a priority on moving bits across the network quickly, while a VoIP application may call for less delay. Today, there are 30 to 50 "plausibly good" TCP schemes and five to eight that are commonly used, Balakrishnan said.
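
One illustrative way to encode such a trade-off as a single objective (the form and weights here are assumptions, chosen only to show the idea) is to reward throughput and penalize delay:

import math

def objective(throughput, delay, alpha=1.0, delta=1.0):
    # Hypothetical scoring function: alpha weights throughput (what a bulk
    # Web transfer cares about), delta penalizes delay (what a VoIP call
    # cares about). Raising delta shifts the preferred behavior toward
    # low-latency sending.
    return alpha * math.log(throughput) - delta * math.log(delay)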

But up to now, those algorithms have all been developed by human engineers, he said. Remy could change that.

"The problem, on the face of it, is actually intractably hard for computers," Balakrishnan said. Because there are so many variables involved and network conditions constantly change, coming up with the most efficient algorithm requires more than "naive" brute-force computing, he said.

Figuring out how to share a network requires strategic choices not unlike those that cyclists have to make in bike races, such as whether to race ahead and take the lead or cooperate with another racer, said Balakrishnan's colleague, graduate student Keith Winstein.

"There's a lot of different computers, and they all want to let their users browse the Web, and yet they have to cooperate to share the network," Winstein said.

However, Remy can do things that human algorithm developers haven't been able to achieve, Balakrishnan said. For one thing, current TCP algorithms use only a handful of rules for how a computer should respond to performance issues. Those might include things like slowing the transmission rate when the percentage of dropped packets passes some threshold. Remy can create algorithms with more than 150 rules, according to the researchers.
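
The contrast, in sketch form (the field names, buckets and multipliers below are invented for illustration, not Remy's actual rule format), might look like this:

def hand_written_rule(loss_rate, cwnd):
    # A handful of human-chosen rules: back off when losses pass a
    # threshold, otherwise creep the window upward.
    if loss_rate > 0.05:
        return cwnd / 2.0
    return cwnd + 1.0

# A machine-generated controller can instead carry a large table of rules,
# each mapping a region of observed network state to a window adjustment.
# This toy table has 150 entries, echoing the "more than 150 rules" figure.
remy_style_table = {
    (loss_bucket, rtt_bucket): max(0.5, 1.2 - 0.1 * loss_bucket + 0.02 * rtt_bucket)
    for loss_bucket in range(10)
    for rtt_bucket in range(15)
}

def table_rule(loss_rate, rtt_ratio, cwnd):
    # Map the observed state onto coarse buckets and apply that rule's
    # window multiplier.
    key = (min(int(loss_rate * 100), 9), min(int(rtt_ratio * 5), 14))
    return cwnd * remy_style_table[key]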

