Internet traffic discovery points the way to more efficient networks


A recent discovery by researchers at Bell Labs, the R&D arm of Lucent Technologies, sheds new light on the nature of Internet traffic and could lead to more efficient routers and other network components.

Using new software programs to analyse and simulate data traffic in unprecedented detail, the researchers found that the ‘burstiness’ seen in traffic at the edges of the Internet disappears at the core.

Their surprising discovery – that traffic on heavily loaded, high-capacity network links is unexpectedly regular – may point the way to more efficient system and network designs with better performance at lower cost.

The Bell Labs research team of three statisticians and a computer scientist – Jin Cao, William Cleveland, Dong Lin, and Don Sun – analysed billions of packets on an individual basis, the way equipment in the network sees them.

Packet traffic has long been characterised as bursty, fluctuating between extremes – from trickles of packets with large time intervals between them to floods of closely spaced packets. On local-area networks at the edges of the Internet, traffic does swing between such extremes, like local automobile traffic that can alternate rapidly between sparse and bumper-to-bumper conditions.

For years, the industry has proceeded on the assumption that such behaviour would be reflected and greatly magnified on higher-capacity links where traffic from many local tributary networks flows together. What the Bell Labs team found at the core of the Internet, however, is more like traffic on an ideal highway – a steady, high-speed stream that can be full to capacity with few serious delays.

Conventional wisdom holds that as both the capacity and the speed of Internet links increase, packet networks must be engineered to accommodate ever wider swings between low and high traffic loads, and Internet routers need ever larger packet buffers to absorb increasingly variable queueing delays.

‘But the burstiness that you can see in individual traffic flows vanishes in large aggregate streams of Internet traffic,’ said Sun. ‘The intermingling of packets from many different flows smooths out the aggregate traffic.’ Traffic on a high-capacity link becomes more random, regular, smooth, and manageable when the numbers of users and simultaneous computer-to-computer connections go up.
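Sun's point about aggregation can be illustrated with a toy calculation (this is not the team's S-Net analysis; every function name and parameter below is invented purely for illustration): superpose many independent ON/OFF sources and the relative variability of the combined packet stream, measured by its coefficient of variation, falls as the number of flows grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def bursty_flow(n_bins, p_on=0.1, burst_rate=50):
    """One ON/OFF source: mostly silent, occasionally emitting a burst of packets.
    Returns the packet count in each time bin."""
    on = rng.random(n_bins) < p_on
    return np.where(on, rng.poisson(burst_rate, n_bins), 0)

def coefficient_of_variation(counts):
    """Standard deviation relative to the mean: a simple measure of burstiness."""
    return counts.std() / counts.mean()

n_bins = 10_000
for n_flows in (1, 10, 100, 1000):
    # Aggregate traffic is just the sum of the individual flows' per-bin counts.
    aggregate = sum(bursty_flow(n_bins) for _ in range(n_flows))
    print(f"{n_flows:5d} flows: CV of packets per bin = "
          f"{coefficient_of_variation(aggregate):.3f}")
```

Under these assumptions the coefficient of variation falls roughly as one over the square root of the number of flows, which mirrors, in toy form, the smoothing the researchers report in aggregate core traffic.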

During their initial study, the research team assembled hundreds of gigabytes of packet traffic data from high-speed networks at Bell Labs and five universities, using a unique software system called S-Net for measurement and analysis. ‘S-Net is our statistical software microscope,’ Cleveland said, ‘which has enabled us to find patterns not seen before in studies using less precise data-viewing instruments.’ The researchers tested and confirmed their results with another software system, called PackMime, which they created to run realistic simulations of packet traffic.

Further research will be needed to fully explore the implications for Internet engineering, but the broad outlines seem clear: more efficient sizing of packet buffers, higher average utilisation of link capacity, and greater flexibility in handling growth in Internet traffic.
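To see why smoother core traffic translates into smaller buffers and higher utilisation, here is a rough sketch (a toy discrete-time queue, not the team's PackMime simulations; all parameters are assumptions chosen only to make the contrast visible): feed a link two arrival streams with the same average load, one bursty and one smooth, and compare the backlog each builds up.

```python
import numpy as np

rng = np.random.default_rng(1)

def queue_backlog(arrivals, service_rate):
    """Discrete-time FIFO link: packets left waiting in the buffer after each bin."""
    backlog, history = 0, []
    for a in arrivals:
        backlog = max(backlog + a - service_rate, 0)
        history.append(backlog)
    return np.array(history)

n_bins = 50_000
mean_rate = 100        # average packets per bin, identical for both streams
service_rate = 110     # the link drains up to 110 packets per bin (~91% utilisation)

# Bursty arrivals, edge-of-network style: long silences broken by heavy bursts.
on = rng.random(n_bins) < 0.2
bursty = np.where(on, rng.poisson(mean_rate / 0.2, n_bins), 0)

# Smooth arrivals, core-of-network style: the same mean, near-Poisson spacing.
smooth = rng.poisson(mean_rate, n_bins)

for name, arrivals in (("bursty", bursty), ("smooth", smooth)):
    q = queue_backlog(arrivals, service_rate)
    print(f"{name}: mean load {arrivals.mean():.0f} pkts/bin, "
          f"99.9th-percentile backlog {np.percentile(q, 99.9):.0f} pkts")
```

With these assumptions the bursty stream forces a far larger buffer to avoid loss even though both streams carry the same average load; that buffering (and the idle headroom that goes with it) is the cost that smoother core traffic avoids.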

More information on the research, including papers on Internet traffic analysis and the team’s recent results, is available on the Web at http://cm.bell-labs.com/stat
