NC State-developed software can be used with existing network protocols and hardware

When it comes to WiFi networks, the key to boosting speed may not lie solely in adopting new, faster hardware and software protocols, but also in developing better software to balance loads when networks get overrun with traffic.

At 25 users the system showed a 400 percent gain in throughput, while at 45 users the system sped the network up 700 percent versus traditional networking software. Best of all, the researchers say their program plays nicely with existing protocols and network hardware without the need for an upgrade.

The only potential downside is that if all the access points in a region happened to be overloaded at once, the gains would likely be diminished. But for the common scenario where some areas are swamped while others sit underutilized, the dynamic prioritizing concept could offer a big step forward.

The researchers are presenting their work at the ACM CoNEXT 2012 conference in Nice, France. The paper's authors are Arpit Gupta (lead author), a Ph.D. student in computer science at NC State, Jeongki Min, a Ph.D. student at NC State, and Dr. Injong Rhee (senior author), a professor of computer science at NC State.

I've been working with Cisco wireless APs for years, and they have a dynamic load balancing feature that swaps channels constantly among multiple APs based on user demands and locations. When I enabled the feature on our university's wireless network, speeds jumped up significantly. How is this different from what NC State did?

I read the source link, and I read the DailyTech article, and I find that I'm coming to a completely different understanding than Jason did when he wrote the article.

What the source link describes is that when one access point serves a large number of users, significant performance issues arise, and NC State has proposed a solution to some of those issues.

It's not some sort of coordination between access points; rather, it's a means of dynamically giving the access point priority to transmit its backlog of data over the users within a given WiFi channel.
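A minimal sketch of that idea (the function name, step size, and cap are mine for illustration, not from the paper): scale the access point's channel-access priority with its downlink backlog, so a congested AP gets more airtime to drain its queue while clients still get a share.

```python
def ap_priority(backlog_packets, base=1, step=10, max_priority=10):
    """Illustrative only: grow the access point's transmit priority
    as its downlink queue backs up, capped so clients keep some airtime."""
    return min(base + backlog_packets // step, max_priority)

# A lightly loaded AP competes on roughly equal footing with clients...
print(ap_priority(5))    # 1
# ...but a 45-packet backlog earns it a bigger share of the channel.
print(ap_priority(45))   # 5
```

The cap matters: without it, a badly congested AP would starve the uplink entirely instead of just rebalancing it.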

Interestingly, the abstract in the source link specifies that the 400% increase was in "downlink goodput", not overall throughput, but that caveat wasn't listed anywhere else...

Goodput IS what you want to measure. Throughput refers, essentially, to the "number of bit transitions," including bits that are used to run the protocol as headers, packets that are dropped at the router for lack of buffer space, etc. Goodput refers to the throughput of REAL data: how many bytes per second of MY data do I see leaving my PC.
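To make the distinction concrete, here is a toy calculation (the packet sizes and retransmission count are invented for illustration): throughput counts every bit on the wire, while goodput counts only payload that actually arrived.

```python
def throughput_bps(total_bits, seconds):
    """All bits on the wire: payload, headers, and retransmissions."""
    return total_bits / seconds

def goodput_bps(payload_bits_delivered, seconds):
    """Only application payload that actually arrived."""
    return payload_bits_delivered / seconds

# 1000 packets, each 12,000 payload bits + 320 header bits, of which
# 50 were retransmissions (duplicate payload), all sent in 1 second.
packets, payload, header, retx, secs = 1000, 12_000, 320, 50, 1.0
total_bits = packets * (payload + header)
delivered_payload = (packets - retx) * payload

print(throughput_bps(total_bits, secs))      # 12,320,000.0 bps on the wire
print(goodput_bps(delivered_payload, secs))  # 11,400,000.0 bps of useful data
```

So a link can post an impressive throughput number while the goodput, the figure a user actually experiences, lags well behind it.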

ALL network measurements anywhere that are of interest to the public should be of goodput. The only time throughput should ever be mentioned is in technical papers dealing with modulation and protocols, where the target audience knows the difference and understands the relationship between the two.
