Swinburne, Google target data centre congestion

New networking algorithms to avoid packet loss.

Melbourne network engineers have partnered with Google in an attempt to address traffic congestion and packet loss in data centres.

The project, led by Professor Grenville Armitage of the Swinburne University of Technology, aims to control ‘microbursts’: sub-second traffic spikes that occur when multiple data streams suddenly converge on a wire.

Packets are typically lost in today’s data centres when multiple servers attempt to return data – for example, the results of a web search – to a single end-point simultaneously.

If the network switch closest to the end-point becomes congested, its buffer may overflow, causing packets to be lost, stalling the application.
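
To make the mechanism concrete, the toy model below shows how a burst of simultaneous replies can exceed a shallow per-port switch buffer; every figure in it is an invented illustration, not a measurement from the Swinburne or Google networks.

```c
/* Toy model of an "incast" microburst: many servers reply to one
 * end-point at once and the egress buffer of the last-hop switch
 * overflows. All figures are illustrative assumptions only. */
#include <stdio.h>

int main(void)
{
    const int servers        = 40;   /* simultaneous responders          */
    const int pkts_per_srv   = 8;    /* packets each server bursts       */
    const int buffer_pkts    = 128;  /* shallow per-port buffer          */
    const int drain_per_tick = 10;   /* packets the link drains per tick */

    /* All bursts arrive in the same tick: a classic microburst. */
    int arriving = servers * pkts_per_srv;            /* 320 packets     */
    int accepted = arriving > buffer_pkts ? buffer_pkts : arriving;
    int dropped  = arriving - accepted;
    int queued   = accepted;

    /* Drain the queue over subsequent ticks. */
    int ticks = 0;
    while (queued > 0) {
        queued -= drain_per_tick;
        if (queued < 0) queued = 0;
        ticks++;
    }

    printf("arrived=%d accepted=%d dropped=%d drain_ticks=%d\n",
           arriving, accepted, dropped, ticks);
    return 0;
}
```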

Lost packets may be particularly problematic in high-frequency trading environments in the financial services sector.

Professor Armitage said the so-called ‘Microburst Congestion Control’ (MCC) project with Google was an extension of Swinburne’s Cisco-funded ‘newtcp’ research into the TCP networking protocol.

For the newtcp project, which commenced in 2005, Swinburne researchers studied delay-based and loss-based methods of determining how quickly a sender can transmit data without congesting the network.

They developed a framework that was added to version 9.0 of the open source FreeBSD operating system last week, allowing FreeBSD users to choose which TCP congestion control method to use.
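
On a FreeBSD 9.0 machine the framework is exposed through sysctl; a minimal sketch of querying it, assuming the stock `net.inet.tcp.cc.algorithm` and `net.inet.tcp.cc.available` names shipped with that release, might look like this:

```c
/* Sketch: query FreeBSD's modular TCP congestion control framework.
 * Assumes the net.inet.tcp.cc.* sysctl nodes from FreeBSD 9.0;
 * compile and run on FreeBSD only. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char current[64], available[256];
    size_t len;

    len = sizeof(current);
    if (sysctlbyname("net.inet.tcp.cc.algorithm", current, &len, NULL, 0) == 0)
        printf("active algorithm:     %s\n", current);

    len = sizeof(available);
    if (sysctlbyname("net.inet.tcp.cc.available", available, &len, NULL, 0) == 0)
        printf("available algorithms: %s\n", available);

    /* Switching system-wide, e.g. to CUBIC, would write the same node:
     * sysctlbyname("net.inet.tcp.cc.algorithm", NULL, NULL,
     *              "cubic", strlen("cubic") + 1);
     * equivalent to running `sysctl net.inet.tcp.cc.algorithm=cubic` as root. */
    return 0;
}
```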

Prior to last week's release, FreeBSD only supported the loss-based NewReno method, under which data is transmitted slowly at first, with the speed increasing rapidly until packet loss is detected.

Lost packets are then retransmitted as the transmission speed drops, before it ramps up again until the next loss.
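
In very rough terms, that loss-based behaviour can be sketched as a toy congestion-window loop; the code below is a simplification in the spirit of NewReno, with made-up numbers, not the FreeBSD implementation.

```c
/* Toy loss-based congestion control: grow the window until a
 * (simulated) loss, back off, then climb again. Purely illustrative. */
#include <stdio.h>

int main(void)
{
    const int path_capacity = 100; /* packets in flight the path can hold */
    int cwnd = 1;                  /* congestion window, in packets       */
    int ssthresh = 64;             /* slow-start threshold                */

    for (int rtt = 1; rtt <= 30; rtt++) {
        if (cwnd > path_capacity) {        /* buffer overflows: loss      */
            ssthresh = cwnd / 2;           /* remember half the window    */
            cwnd = ssthresh;               /* back off, then climb again  */
        } else if (cwnd < ssthresh) {
            cwnd *= 2;                     /* slow start: rapid increase  */
        } else {
            cwnd += 1;                     /* congestion avoidance: slow  */
        }
        printf("rtt %2d  cwnd %3d\n", rtt, cwnd);
    }
    return 0;
}
```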

In the Linux community, NewReno has been replaced by CUBIC, a more aggressive congestion control algorithm that Armitage said suited high-speed transfers within research laboratories but caused high latency on home networks where multiple users share a single internet gateway.

While loss-based methods like NewReno and CUBIC react to traffic spikes and packet loss, alternative, delay-based methods monitor variations in network latency to pre-empt spikes and adjust transmission speed accordingly.
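
The contrast can be sketched in a few lines: a delay-based scheme watches round-trip time rather than waiting for drops. The sketch below is loosely in the style of TCP Vegas, with invented RTT samples and thresholds, and is not any of the algorithms under study.

```c
/* Toy delay-based congestion control: back off when measured RTT rises
 * above the baseline, before any packet is actually lost.
 * RTT samples and thresholds are invented for illustration. */
#include <stdio.h>

int main(void)
{
    /* Simulated RTT samples (ms): queueing delay builds, then clears. */
    const double rtt_ms[] = { 10, 10, 11, 14, 20, 28, 22, 15, 11, 10 };
    const double base_rtt = 10.0;   /* lowest RTT seen: "empty path"    */
    const double slack_ms = 3.0;    /* tolerated extra queueing delay   */
    int cwnd = 10;

    for (int i = 0; i < (int)(sizeof rtt_ms / sizeof rtt_ms[0]); i++) {
        if (rtt_ms[i] > base_rtt + slack_ms)
            cwnd -= 1;              /* delay rising: ease off early     */
        else
            cwnd += 1;              /* path looks idle: probe for more  */
        if (cwnd < 1) cwnd = 1;
        printf("sample %2d  rtt %2.0f ms  cwnd %d\n", i, rtt_ms[i], cwnd);
    }
    return 0;
}
```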

Armitage said delay-based algorithms had the potential to be "better-behaved" when sharing home internet gateways with interactive applications like VoIP or online games.

But machines using delay-based TCP would lose out to machines using NewReno or the more aggressive CUBIC if they were to run on the same network.

Armitage said Swinburne’s framework in FreeBSD 9.0 would allow the researchers and community to more easily develop and trial new TCP congestion control algorithms.

For the MCC project, the team planned to analyse anonymised data from Google’s production networks, test congestion management schemes and make recommendations to the search giant.

Results could be applied to Google’s systems within a year, he said, highlighting the value of such networking techniques as cloud computing becomes more pervasive.
