Internet Traffic – Gigaom
http://gigaom.com
The industry leader in emerging technology research (feed updated Mon, 19 Mar 2018 22:01:45 +0000)

Google Fiber: we don’t charge for peering, don’t have fast lanes
http://gigaom.com/2014/05/21/google-fiber-we-dont-charge-for-peering-dont-have-fast-lanes/
Wed, 21 May 2014 23:40:53 +0000

Google (S GOOG) used its Google Fiber internet access business Wednesday to chime in on the continuing debate around peering and internet fast lanes, and guess what: the company doesn’t use either. Fiber, which is slowly expanding its footprint, doesn’t have “any deals to prioritize (some content companies’) video ‘packets’ over others or otherwise discriminate among Internet traffic,” according to a blog post published Wednesday afternoon.

Google also said it doesn’t charge for peering; instead, it invites content providers and content delivery networks to colocate within its facilities to get their content closer to the end user. Google identified Akamai and Netflix as two companies that make use of colocation; Netflix has for some time tried to partner with ISPs and place its own Open Connect caching appliances within the ISPs’ networks.

From the blog post:

“We also don’t charge because it’s really a win-win-win situation. It’s good for content providers because they can deliver really high-quality streaming video to their customers. For example, because Netflix colocated their servers along our network, their customers can access full 1080p HD and, for those who own a 4K TV, Netflix in Ultra HD 4K. It’s good for us because it saves us money (it’s easier to transport video traffic from a local server than it is to transport it thousands of miles). But most importantly, we do this because it gives Fiber users the fastest, most direct route to their content.”

Of course, this was more than Google gloating about how fast Fiber is. The post also comes at a time when Netflix sees itself pressured to strike paid peering deals with companies like Comcast (S CMCSK) and Verizon (S VZ) to improve an otherwise subpar video streaming experience for those ISPs’ customers.

New OECD report shows that connected TVs won’t break the internet, but they will break business models
http://gigaom.com/2014/01/24/new-oedc-report-shows-that-connected-tvs-wont-break-the-internet-but-they-will-break-business-models/
Fri, 24 Jan 2014 14:14:48 +0000

Television has gone broadband, and that transition will continue over the next decade as more content finds its way onto IP networks as opposed to the old-school dedicated pay TV networks. As this transition has unfolded, new players like Netflix, YouTube and Hulu have entered the market while both ISPs and big content companies are trying to figure out how to protect their revenue and adapt to this new era.

Amid the confusion and competition comes an Organisation for Economic Co-operation and Development report full of data that digs into the peering debate currently causing customers of certain ISPs pain. This is an issue that, like an earworm, keeps popping up, and it drives anyone looking to consume internet-delivered content on a television nuts.

As I laid out on Wednesday, the issue concerns how internet giants interconnect. Right now some ISPs look at interconnection points as a source of potential revenue — a way to get a company sending large amounts of traffic through to the ISP’s end users to help offset the cost of maintaining and building the ISP network. Thus, they want to implement a paid peering model for companies that send a lot of traffic their way.

Paid peering is not a new concept, but it is relatively unusual, according to the OECD. In 2012, the group released a survey that showed that 99.5 percent of 142,000 interconnection points it studied peered with each other for free. The new study discusses the economics of peering in a way that’s hard to come by in on-the-record conversations with people who manage peering relationships. And it presents a compelling argument against the common rationale offered for paid peering by ISPs.

To carry 500 Mbit/s in one direction, a 655 Mbit/s or 1 Gbit/s connection is needed irrespective of whether the traffic in the opposite direction is 10, 100 or 499 Mbit/s. The same equipment would be needed in a network to carry traffic; as for the core of that network there is no asymmetric networking equipment, supporting higher speeds in one direction. Accordingly, there would be no difference in costs. A further consideration is that it is unclear how ISPs would make their traffic balanced, with most ISPs currently only supporting asymmetric up and download levels of, for example, 20:1 on ADSL2 networks. An alternative response from a content provider could be to strictly limit the traffic it sends to the ISP, such that it is always in balance. The result could be that traffic would dwindle to close to zero, because most of the traffic from an ISP consists of control messages controlling the flow of incoming data. When there is less incoming data there is also less outgoing data.

In short, the OECD is saying that the idea that web traffic can ever be balanced between two networks is a fallacy, given that end users consume far more than they create and broadband speeds are generally asymmetrical to reflect that reality. Plus, it asserts, implementing balanced traffic would be a catch-22: much of the outgoing traffic from an ISP exists to manage the flow of incoming traffic, so as incoming traffic drops, so would the outgoing traffic the application is trying to balance against.
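
The capacity arithmetic behind that argument is simple enough to sketch. Here is a rough Python illustration (the 1.3 headroom factor is an assumption, chosen to roughly mirror the report’s 655 Mbit/s example):

```python
def required_capacity(down_mbps, up_mbps, headroom=1.3):
    """A full-duplex link carries both directions, so provisioning is
    driven by the larger flow (plus headroom), not by the ratio
    between the two directions."""
    return max(down_mbps, up_mbps) * headroom

# 500 Mbit/s inbound needs the same circuit whether the return
# traffic is 10, 100 or 499 Mbit/s -- the cost is identical.
for up_mbps in (10, 100, 499):
    print(round(required_capacity(500, up_mbps)))  # 650 each time
```

The point of the sketch is that the reverse-direction volume never appears in the result, which is why the OECD argues the “unbalanced traffic” rationale has no cost basis.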

But before breaking down some of the economics and the question of balance, the report also shines a light on the question of motive — suggesting that it has changed over time.

It points out that a few years ago most of the peering arguments involved getting smaller traffic-generating sites to pay to interconnect, with the argument that it costs money to interconnect to smaller networks. In many ways the 2012 study was a rebuttal to that argument, showing that more network connections generated via settlement-free peering benefitted both smaller and larger providers (although the smaller providers did get the lion’s share of the benefit) as well as the internet at large by expanding its reach and influence while lowering costs for all participants. From the report:

Another justification used to not peer between two ASs, or that a network is flooded, is that the traffic is unbalanced. In many peering relationships and especially with connected television services, one network is likely to send more than it receives. What is unclear in this case is why the traffic from, for example, the content provider to the ISP has to be balanced. Historically, the case was made that networks that could not reach a 2:1 or 3:1 ratio were too small to peer with larger Tier 1 networks. With the advent of large content providers the argument has reversed, now end-user ISPs argue that networks exceeding a 2.5:1 ratio should pay for peering.
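
That ratio policy is easy to express, which may be part of why it is attractive despite the economics above. A toy Python version (the 2.5:1 threshold comes from the report; the traffic figures are illustrative):

```python
def settlement_free_ok(sent_mbps, received_mbps, max_ratio=2.5):
    """Peer for free only while the traffic ratio between the two
    networks stays under max_ratio, whichever direction is larger."""
    hi = max(sent_mbps, received_mbps)
    lo = min(sent_mbps, received_mbps)
    return hi / lo <= max_ratio

print(settlement_free_ok(800, 400))    # 2:1 ratio -> True
print(settlement_free_ok(2000, 400))   # 5:1 ratio -> False, ISP demands payment
```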

One way to get around paid peering is to use a Content Delivery Network or transit provider that already peers with an ISP. In the last three years content companies have tried to move away from that model as a way to cut their costs and get better control over the network. Google, Netflix and even Facebook are building out caching servers, trying to build their own CDN services that will carry their content to the “front door” of an ISP. Many ISPs welcome adding a server from one of these content companies as a way of cutting down their own transit costs for getting the content on their network. But some do not.

ISPs argue that they don’t want to support boxes from “every” provider of web content, and say that the boxes take up space and consume power. And that’s the tension that can hurt customers, as ISPs demand paid peering and content companies demand that ISPs use their content servers. Consumer pain has become a negotiating chip for internet giants to use in an attempt to squeeze in a bit more revenue or lower costs.

The report also covers data caps and threats to the connected-TV hardware business, and it rebuts the idea that over-the-top video will destroy broadband networks. It’s worth a read.

Five minute outage equals big dip in internet traffic
http://gigaom.com/2013/08/16/five-minute-outage-equals-big-dip-in-internet-traffic/

If there was any doubt about Google and its dominant reach on the web, the five-minute outage that took down all Google properties, including Gmail and YouTube, late on Friday proved it once and for all. GoSquared, which tracks web traffic, noted on its website that page views coming into its real-time tracking dropped by around 40 percent. According to Deepfield, an Ann Arbor, Mich.-based networking company, Google now accounts for nearly 25 percent of internet traffic on average. “I’d say the overall impact was modest since the outage mainly seemed to impact lower bandwidth (but arguably more critical) services like gmail,” said Craig Labovitz, founder of Deepfield, adding, “Specifically, the large volumes of Youtube traffic originating from distributed Google Edge Caches (GGC) do not appear to have been impacted in the same way.”

Fear of peers
http://gigaom.com/2013/06/18/fear-of-peers/
Tue, 18 Jun 2013 20:36:22 +0000

Two items from GigaOM today serve as good reminders that the over-the-top video user experience can be degraded by factors that are not normally talked about in the context of net neutrality and are not addressed by the FCC’s 2010 net neutrality regulations (currently being challenged in court by Verizon).

Both problems are the result of peering disputes among network operators. In the Verizon case, the ISP appears to be having a peering dispute with Cogent Communications, which provides wholesale bandwidth to Netflix. Cogent and Verizon peer with each other in about 10 places. Over the last year, however, as Netflix traffic has grown on Cogent’s network, the flow of bits across those peering points has become increasingly asymmetric, with Cogent dumping a lot more bits on Verizon than Verizon is shipping to Cogent. As with Comcast’s dispute with Level 3 in 2011 over the same issue, Verizon has apparently concluded that its free peering arrangement with Cogent is no longer equitable and wants to charge Cogent for the extra traffic. To make its point, Verizon apparently is letting its peering connections with Cogent degrade.

“This is a business model problem, not an engineering problem,” Cogent CEO Dave Schaeffer told Om.

In Time Warner Cable’s case, the MSO/ISP has been forced to issue a denial to allegations that it is intentionally throttling YouTube traffic.

The Internet is not as simple as one wire connecting a website’s servers to a customer’s home. Traffic originates in countless places, heading toward billions of end-user destinations. Each network that carries web traffic is itself a collection of a number of complicated technological and business relationships. As traffic flows from one area of the Internet to another, it passes through this network of technologies, agreements, and protocols and culminates in each particular user experience [snip]

Websites and other content providers make their own arrangements about how to get traffic to and from the Internet. And each participant in the Internet ecosystem makes its own decision about the formats and equipment to use, and each has its own budget. So the levels of quality vary greatly at the source as well as the network level…Delivery of video and other data over the Internet is a complex matter with many, many variables contributing to each particular end-user experience. But we can assure you that, at Time Warner Cable, we don’t throttle traffic.

Throttling traffic over the last mile, of course, particularly where that traffic could be seen as competitive with the ISP’s own content offerings, would be a straight-up violation of the FCC’s net neutrality regs. And even if the courts ultimately throw those regs out, an ISP throttling competing traffic would be inviting antitrust scrutiny from the Federal Trade Commission or the Justice Department.

But the current net neutrality regs, as well as much of the net neutrality discussion so far, stop at the end of the last mile. They do not address (and the FCC would likely lack statutory authority to regulate) the B2B dealings between last-mile ISPs and other network operators. Yet there’s clearly plenty of user-experience mischief that can be made upstream of the last mile.

Sorry internet: The Super Bowl still happens elsewhere
http://gigaom.com/2013/02/04/super-bowl-streaming-traffic/
Mon, 04 Feb 2013 19:05:14 +0000

CBS (s CBS) streamed Sunday’s Super Bowl online in its entirety, but most viewers still preferred to watch the game on their TV. Internet traffic was down roughly 15 percent during the game compared to an average Sunday evening, according to network management specialist Sandvine.

The company revealed on its blog Monday that the Super Bowl stream did account for over 3 percent of total network traffic Sunday evening. But that was more than outweighed by people who decided not to tune into Netflix (s NFLX) and other forms of online entertainment and to watch the game on their TV instead; there was no easy way to get the online stream onto a TV, as Sandvine noted:

“At Sandvine’s we’ve long maintained that the biggest screen is always the best screen to consume content, and for the Super Bowl it makes sense that most people would prefer to watch the game on their large HDTV. Since the only option to stream the game was via a web browser, getting the game streaming to their TV would have been a challenge for most people, so unsurprisingly viewers opted to tune in via their cable or satellite provider.”

One should probably add that the Super Bowl was also available free over the air for cord cutters with an antenna. Still, there were some noteworthy blips during the evening when people went online to stream.

Sandvine also noted that the availability of free streams may have an impact on people’s expectations, even if they don’t use them en masse just yet:

“Sandvine’s traffic statistics have shown continued growth in adoption of live streamed sports events, but for the time being it is no threat to replace viewing via traditional broadcast methods. It is clear however that live streaming is only going to get more popular, and if free streaming is being provided for the biggest television event of the year, then users will soon start expecting it to be offered for everything they watch.”

The shape of the internet has changed: It now lives life on the edge
http://gigaom.com/2012/09/13/the-shape-of-the-internet-has-changed-it-now-lives-life-on-the-edge/
Thu, 13 Sep 2012 19:39:31 +0000

A decade ago the internet had about 1.4 terabits per second of global capacity, while today it has 77 Tbps. But as the internet gets bigger, the way traffic moves back and forth across the “series of tubes” that make up the internet is changing. As a result of the growth in internet exchange points around the world and more people in more countries getting online, the internet is becoming truly global.

Instead of massive streams of data moving back and forth across entire networks each time people request a web page, a video or a digital download, data is getting sent to a content delivery network and kept at the edge of the network. Thus, when it’s called up by a user, it doesn’t have as far to go. But there are two significant things that are changing how the internet is “shaped,” for lack of a better term.

The 1999 Internet had a hub with two fat spokes.

First, the growth of Internet Exchange Points (IXPs) and caches means traffic patterns look more like a river flowing downhill to a reservoir than like millions of creeks spreading out to feed each user. Internet exchange points are giant data-center-like buildings where different networks connect and exchange traffic. Content can be cached in local IXPs or even further out at the edge of the network, in specific ISPs’ central offices.

Second, the growth of broadband access in the rest of the world means that places like Latin America and Africa, which used to depend on getting most of their bandwidth served from U.S. or European providers, are gradually beefing up their own supply of internet exchange points. They get content reservoirs too.
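
To see why those content reservoirs matter, consider how much long-haul traffic an edge cache removes. A back-of-the-envelope Python sketch (the total volume and the 80 percent hit rate are made-up illustrative numbers; the 98 percent cacheable share is an assumption in this sketch):

```python
def long_haul_tb(total_tb, cacheable_share, cache_hit_rate):
    """Traffic that still has to cross long-haul transit links after
    cache hits are served from a local IXP or ISP central office."""
    served_at_edge = total_tb * cacheable_share * cache_hit_rate
    return total_tb - served_at_edge

# If most traffic is cacheable and caches hit often, the long-haul
# share collapses even though total consumption is unchanged.
print(long_haul_tb(1000, 0.98, 0.8))   # ~216 TB instead of 1,000 TB
```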

More content but more caching as well.

Caching has been the norm for years for content such as movies and graphics-rich web pages, but as the basic delivery of bits became commoditized, players like Akamai (s akam) and Limelight, as well as newer companies like Edgecast and Fastly, sprang up to deliver newer types of content. Now even Facebook (s fb) is getting in on edge caching, joining Google (s goog), which has had edge network servers for a couple of years.

In a paper released yesterday, UK analyst firm Analysys Mason estimates that 98 percent of internet traffic now consists of content that can be stored on servers. This combined with deeper penetration of IXPs and caching means that the way traffic flows across networks is changing too. The paper was written to persuade governments that the proposed ITU regulatory changes would hinder the growth of the web, but the report is well worth reading as a way of understanding how the web has changed over time.

Those conclusions are also backed up by similar analysis from Craig Labovitz, who documented that roughly 45 percent of internet traffic today is content from CDNs. That analysis emphasized, however, how few companies control web traffic, while the Analysys Mason report focused on how deeply the internet has penetrated different areas of the world.

As an example, the Analysys report takes a close look at how connectivity has changed for Africa:

While in 1999, 70% of bandwidth from Africa went to the US, by 2011 this had fallen to just a few percent, and nearly 90% went to Europe. This does not mean that over time Africans began to rely almost exclusively on European content, but rather that much of the content originally from the US began to be stored on servers in Europe as providers began to build out their networks. This shows how traffic can shift in response to changes in bandwidth costs and local conditions, as Europe liberalized its telecom networks and IXPs developed to host the content, and demonstrates how in future similar shifts could localize traffic in Africa to further reduce latency and costs.

To bolster this example, the Analysys report notes that bandwidth to the U.S. has fallen from over 90 percent of total international connectivity in 1999 to just over 40 percent in 2011. And as the internet becomes far more global and content spends much of its time at the edge, it changes the way we should think about and regulate the web.

The internet is like a cockroach.

The report is focused on the looming ITU regulations, but the key point is one that was raised time and time again during the worries over the SOPA and PIPA legislation in the U.S.: the internet has no borders, and governments must recognize that. It’s like our monetary system, our food supply and myriad other complex ecosystems we depend on for our modern life. That’s why we’re seeing a rise of treaties and international bodies attempting to create rules governing these systems, because regulating the web in the U.S. alone is like trying to solve a cockroach infestation by fogging a single apartment in a multitenant building.

As the internet has advanced, it’s become exactly what it was supposed to: An interconnected series of networks that have organically grown to meet demand at the edge. Like cockroaches, it can survive in hostile conditions. But unlike roaches, it’s something most people want in their lives, so news of its growing resiliency and localization should be good news.

Tablets & TVs make online video go boom!
http://gigaom.com/2012/05/30/tablets-connected-tvs-video-consumption-data/
Wed, 30 May 2012 19:18:47 +0000

Online video accounted for more than half of all Internet traffic in 2011, and it’s only going to grow: Cisco (s CSCO) estimates that we will consume three trillion Internet video minutes worldwide per month by 2016. That means that the world will watch the equivalent of 833 days of video every single second!

The number of people using online video services will also grow dramatically. Worldwide, 792 million people used online video last year. By 2016, that number will roughly double, to 1.5 billion users.
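
Cisco’s headline number is easier to grasp after a quick unit conversion. A sanity check in Python (assuming a 30-day month; the 833-day figure matches Cisco’s more precise 1.2 million minutes per second, so the round three-trillion number lands a bit lower, in the same ballpark):

```python
minutes_per_month = 3e12              # Cisco's 2016 forecast
seconds_per_month = 30 * 24 * 3600    # 2,592,000 seconds in a 30-day month
minutes_per_second = minutes_per_month / seconds_per_month
days_per_second = minutes_per_second / (60 * 24)
print(round(minutes_per_second))      # ~1.16 million minutes of video each second
print(round(days_per_second))         # ~804 days of video each second
```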

Cisco released these projections Wednesday as part of its annual Visual Networking Index forecast, which also concluded that the world’s data consumption will reach 1.3 zettabytes by 2016 (for more details on that staggering number, check out Stacey Higginbotham’s write-up). But most intriguing about all that data, and the role video is playing, are the devices that are causing it.

TVs and game consoles make us watch longer

There are two major factors behind online video’s huge growth potential: Online video is finding its way to the living room TV set, and people are watching more videos on tablets. Cisco’s forecast shows that HD streams to the TV in particular will have a huge impact going forward. The amount of Internet video delivered to TVs already doubled in 2011, and it’s expected to grow sixfold by 2016. By then, online video delivered to TVs will make up six percent of all worldwide consumer Internet traffic.

Some additional data released by Ooyala today explains quite nicely why TV traffic is growing that much: Once online video reaches the TV, viewers start to stick around much longer. Completion rates for videos longer than 6 minutes are over 50 percent on connected TVs, but only around 25 percent on PCs.

Even more astonishing: 88 percent of all content consumed on connected TVs is longer than 10 minutes.

More devices will cause more traffic

At the same time, we are going to watch a lot more video on mobile phones and tablets. Mobile video traffic will grow 18-fold from 2011 to 2016, and the number of worldwide mobile users will reach 1.6 billion, a six-fold increase over 2011 levels. Altogether, nearly a third of all Internet traffic will come from devices other than the PC in 2016.
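
Those multiples translate into striking annual rates. A quick Python check of the compound annual growth implied by Cisco’s figures:

```python
def implied_annual_growth(multiple, years):
    """Compound annual growth rate implied by an N-fold increase."""
    return multiple ** (1 / years) - 1

# 18-fold mobile video traffic growth over 2011-2016 (five years):
print(round(implied_annual_growth(18, 5) * 100))   # ~78 percent per year
# Six-fold growth in mobile users over the same span:
print(round(implied_annual_growth(6, 5) * 100))    # ~43 percent per year
```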

Of course, all of that wouldn’t be possible if we all didn’t have more and more devices to watch all those videos. Cisco estimates that the number of connected devices per U.S. household is going to grow from 5.5 connected devices (excluding cell phones and anything accessing mobile phone networks) to 8.5 devices by 2016.
Disclosure: GigaOM has a commercial relationship with Ooyala for the delivery of its video content.

Traffic jams, ISPs and net neutrality
http://gigaom.com/2011/11/13/traffic-jams-isps-and-net-neutrality/
Sun, 13 Nov 2011 17:00:14 +0000

In the net neutrality debate, Internet Service Providers like AT&T (s T) and Verizon (s VZ) have said they need to charge content providers for prioritization so they can invest in improving infrastructure: faster internet service for all, they say.

But placing a price on prioritizing content creates an inherent disincentive to expand infrastructure. ISPs would profit from a congested Internet in which some content providers will be more than willing to pay an additional fee for faster delivery to users. Content providers like the New York Times (s NYT) and Google (s GOOG) would have little choice but to fork it over to get their information to end users. But end users would be unlikely to see the promised upgrades in speed. Those are some of the results of research we conducted on the Internet market.

Despite the fierce back-and-forth on net neutrality, there is a surprising lack of rigorous economic analysis on the topic. To change that, we built a game-theoretic economic model to address this question: Do ISPs have more incentive to expand their infrastructure capacity when net neutrality is abolished?

This is a key claim, used widely by ISP companies in arguing against maintaining a net neutral internet. The money from fees levied on content providers, they say, would be incentive to improve and expand infrastructure. In this argument, web surfers gain access to a faster internet.

But our analysis shows that if net neutrality were abolished, ISPs actually have less incentive to expand infrastructure.

Here is the intuition behind this result: Think of any road or highway you hate to drive on during rush hours. Say, I-5 in Seattle or the 495 loop in Washington, D.C. The highway is like the Internet, and the individual cars are the packets of data. The ISP is essentially the gatekeeper that controls the flow of cars on the highway.

If the ISP is allowed to snatch any car from the back of a very long line and put it in front of everybody else when the driver of the car pays a “priority delivery fee”, would the ISP have an incentive to keep the road congested, or, to expand the road capacity?

In this scenario, ISPs profit more when the roads are congested — if traffic is cruising, no one would feel the need to pay for faster service.
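
The highway analogy can be made concrete with a tiny scheduling simulation. In this sketch (our illustration, not part of the authors’ model) every packet needs one unit of service time, and the only change is that paid packets jump the queue:

```python
def avg_normal_wait(queue, paid_priority):
    """Average wait of non-paying ('N') packets when paid ('P') packets
    either jump ahead or everyone is served first-come-first-served."""
    order = list(range(len(queue)))
    if paid_priority:
        order.sort(key=lambda i: (queue[i] != 'P', i))  # payers go first
    start_time = {pkt: t for t, pkt in enumerate(order)}
    waits = [start_time[i] for i, kind in enumerate(queue) if kind == 'N']
    return sum(waits) / len(waits)

packets = list("NPNNPNPNNN")                           # 10 packets, 3 paid
print(avg_normal_wait(packets, paid_priority=False))   # FIFO baseline
print(avg_normal_wait(packets, paid_priority=True))    # higher: payers cut in line
```

Unsurprisingly, every unit of waiting a paid packet skips lands on a packet that didn’t pay.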

Currently, ISPs earn profits from attracting customers — mostly end users — who use their computers for things like blogging, tweeting, and downloading music and movies. For these people, speed is an asset they might be willing to pay for. That gives ISPs motivation to improve their service and better compete for users.

But in a non-neutral Internet, the dynamic would change. ISPs would be able to strike deals to give certain Web sites or services priority in reaching users. For sites and services that pay up, there’ll be less waiting when the Internet’s information superhighway gets jammed — their pages will load faster. Those who don’t pay will be essentially forced to sit in traffic.

To see how ISPs and content providers might act under these proposed circumstances, we developed a model that describes the interactions of an ISP, multiple content providers and end-users. We examined how content providers, ISPs, and consumers would fare under both the neutral and non-neutral regimes. The most unambiguous finding from the model is that incentive for ISPs to invest in infrastructure is higher under the neutral regime than under the alternative. This is the case because the non-neutral regime allows ISPs to profit from greater congestion, undermining their return on infrastructure expansion.
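
The mechanism can be reproduced with toy numbers (this is our illustration, not the authors’ actual model): congestion delay falls as capacity grows, subscribers value low delay, but priority fees rise with congestion, so fee revenue eats into the payoff from expansion.

```python
BASE, LOAD = 20.0, 0.8                 # illustrative constants
SUB_LOSS, FEE_GAIN, COST = 2.0, 1.0, 1.0

def delay(capacity):
    """Queueing-style delay: explodes as load approaches capacity."""
    return 1.0 / (capacity - LOAD)

def profit(capacity, neutral):
    # Priority fees are only collectable in the non-neutral regime,
    # and they grow with congestion (i.e., with delay).
    fees = 0.0 if neutral else FEE_GAIN * delay(capacity)
    return BASE - SUB_LOSS * delay(capacity) + fees - COST * capacity

def expansion_gain(neutral, c_now=1.0, c_new=1.5):
    """Extra profit from expanding capacity from c_now to c_new."""
    return profit(c_new, neutral) - profit(c_now, neutral)

# Expanding capacity pays off less once the ISP can sell priority:
print(round(expansion_gain(neutral=True), 2))    # ~6.64
print(round(expansion_gain(neutral=False), 2))   # ~3.07
```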

Without net neutrality, ISPs will likely be better off and content providers worse off. This finding mirrors the reality of the debate, where the two sides have squared off on opposite sides.

If the goal of public policy is to expand broadband availability and reduce congestion, decision-makers should look beyond the immediate winners and losers and focus on the long-term consequences of their choices. Eliminating net neutrality will put a damper on investment in the Internet infrastructure that is likely to power a great deal of future innovation and growth — not exactly a recipe for maintaining the United States’ position as the global technological and economic leader.

Hsing “Kenny” Cheng and Shubho Bandyopadhyay are professors at the University of Florida, and Hong Guo is a professor at the University of Notre Dame.

Today in Connected Consumer
http://gigaom.com/2011/05/17/68639/
Tue, 17 May 2011 14:36:33 +0000

Perhaps not surprising to many, but Netflix now makes up nearly a third of total Internet traffic in North America. While the debate about the impact of OTT streaming services on bandwidth and how ISPs may react has been raging for some time, this may give new fuel to fears that many wireline ISPs will eventually create caps or tiering. This same conversation has been happening in the UK since 2008, when eye-popping growth of the BBC’s iPlayer started pushing many to consider bandwidth caps.
Internet Keeps Growing! Traffic up 62% in 2010
http://gigaom.com/2010/10/06/internet-keeps-growing-traffic-up-62-in-2010/
Wed, 06 Oct 2010 11:45:04 +0000

Whether it’s Hulu, 85 million-plus daily tweets or millions of photos being uploaded to Facebook, Internet traffic keeps growing and growing. That’s not going to change any time soon, mostly because the Internet is now becoming a crucial part of our daily lives. In some parts of the world, it’s hard to escape the ‘net, so to speak. Soon, thanks to the mobile Internet revolution, a massive new majority is going to join the Internet.

Data from research firm TeleGeography shows that Internet traffic grew 62 percent in 2010, after logging a handsome 74 percent growth in 2009. The growth in traffic is coming from non-mature markets like Eastern Europe and India, where traffic growth between mid-2009 and mid-2010 was in excess of 100 percent. TeleGeography notes:

The regions experiencing the fastest growth in international Internet traffic between mid-year 2009 and mid-year 2010 were Eastern Europe and India/South Asia, where average traffic growth exceeded 100 percent, and the Middle East, where traffic rose just under 100 percent. Even relatively “mature” markets are still growing rapidly: western European international Internet traffic increased 66 percent, and the U.S. and Canada’s international Internet traffic climbed 54 percent.

This means the carriers, who added about 13.2 Tbps of new Internet capacity in 2010, will have to keep beefing up their networks. In comparison, carriers added 9.4 Tbps of capacity in 2009 and 6 Tbps in 2008. Compare that to 2002; we have indeed come a long way! (The chart below is from our archives.)
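
Growth rates like these compound quickly. A short Python check shows how fast capacity has to double to keep up (assuming the rate holds steady):

```python
import math

def doubling_time_years(annual_growth):
    """Years for traffic to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(round(doubling_time_years(0.62), 2))   # 62% growth: doubles in ~1.44 years
print(round(doubling_time_years(0.74), 2))   # 74% growth: doubles in ~1.25 years
```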

That said, the networks are not evenly divided. The capacity is still in abundance in larger, more mature markets, but less so in newer markets such as Africa. This will be changing soon, especially as we see deployment of new cables in those regions.

This new capacity in non-mature markets, when married to growth in wireless networks and the easy availability of cheap smartphones, is going to turn the Internet on its head. A good indication of this shift can be seen in the growth of mobile social networking in India. As TeleGeography notes:

The number of mobile social network users in India is expected to reach around 72 million by 2014, driven by the reduced cost of smartphones and the launch of 3G services, according to the latest research from Analysys Mason. The number of online social network users in India has grown by 43% to approximately 33 million unique users as of July 2010, with India emerging as the seventh largest market globally. According to the report, the increased number of social network users is driving the number of mobile social network users (around 10 million in 2009), representing around 2.2% of the total number of mobile subscribers.

This has to factor into Facebook’s future plans. Now imagine a repeat of this in Africa! You get the gist.