Friday, May 5, 2017

The big theme coming out of Facebook's recent F8 Developer Conference in San Jose, California was augmented reality (AR). Mark Zuckerberg told the audience that the human desire for community has weakened over time, and said he believes social media could play a role in strengthening these ties.

Augmented reality begins as an add-on to Facebook Stories, its answer to Snapchat. Users simply take a photo and then use the app to place an overlay on top of the image, such as a silly hat or a fake moustache, while funky filters keep users engaged and help them create a unique image. Over time, the filter suggestions become increasingly smart, adapting to the content of the photo - think of a perfect frame if the photo is of the Eiffel Tower. The idea is to make messaging more fun. In addition, geo-location data might be carried to the FB data centre to enhance the intelligence of the application, but most of the processing can happen on the device.

Many observers saw Facebook's demos as simply a needed response to Snapchat. However, Facebook is serious about pushing this concept far beyond cute visual effects for photos and video. AR and VR are key principles for what Facebook believes is the future of communications and community building.

As a thought experiment, one can consider some of the networking implications of real-time AR. In the Facebook demonstration, a user turns on the video chat application on their smartphone. While the application parameters of this demonstration are not known, the latest smartphones can record in 4K at 30 frames per second, and will soon be even sharper and faster. Apple's FaceTime requires about 1 Mbit/s for HD resolution (video at 720p and 30 fps), and this has been common for several years. AR will certainly benefit from high resolution, so one can estimate that the video stream leaves the smartphone on a 4 Mbit/s link (a guesstimate on the low end). The website www.livestream.com calculates a minimum of 5 Mbit/s of upstream bandwidth for launching a video stream with high to medium resolution. LTE-Advanced networks are capable of delivering 4 Mbit/s upstream with plenty of headroom, and WiFi networks are even better.
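As a rough sanity check on that 4 Mbit/s figure, one can scale the FaceTime 720p30 baseline by pixel rate. This is only a sketch under a crude assumption: real codecs compress more efficiently at higher resolutions, so linear scaling overstates the requirement.

```python
# Crude bitrate estimator: assume compressed bitrate scales linearly with
# pixel rate, calibrated on the ~1 Mbit/s FaceTime figure for 720p at 30 fps.
# (An assumption for illustration; modern codecs do better than linear.)

def estimate_mbps(width, height, fps, ref_mbps=1.0, ref_pixel_rate=1280 * 720 * 30):
    """Estimated stream bitrate in Mbit/s for the given resolution and frame rate."""
    return ref_mbps * (width * height * fps) / ref_pixel_rate

print(estimate_mbps(1920, 1080, 30))  # 1080p30 -> 2.25 Mbit/s
print(estimate_mbps(3840, 2160, 30))  # 4K30    -> 9.0 Mbit/s
```

Under this crude scaling, 4K at 30 fps would want around 9 Mbit/s, so a 4 Mbit/s link implies either sub-4K capture or more aggressive compression.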

To identify people, places and things in the video, Facebook will have to perform sophisticated graphical processing with machine learning. Currently this cannot be done locally by the app on the smartphone and so will need to be done at a Facebook data centre. So the 4 Mbit/s stream will have to leave the carrier network and be routed to the nearest Facebook data centre.

It is known from previous Open Compute Project (OCP) announcements that Facebook is building its own AI-ready compute clusters. The first design, called Big Sur, is an Open Rack-compatible chassis that incorporates eight high-performance GPUs of up to 300 watts each, with the flexibility to be configured in multiple PCIe topologies. It uses NVIDIA's Tesla accelerated computing platform. This design was announced in late 2015 and subsequently deployed in Facebook data centres to support its early work in AI. In March, Facebook unveiled Big Basin, its next-gen GPU server, capable of training machine learning models that are 30% bigger than those handled by Big Sur, thanks to greater arithmetic throughput and an increase in memory from 12 to 16 Gbytes. The new chassis also allows for the disaggregation of CPU compute from the GPUs, something Facebook calls JBOG (just a bunch of GPUs), which should bring the benefits of virtualisation when many streams need to be processed simultaneously. The engineering anticipates that increased PCIe bandwidth will be needed between the GPUs and the CPU head nodes, hence the new Tioga Pass server platform.

The Tioga Pass server features a dual-socket motherboard, with DIMMs on both sides of the PCB for maximum memory configuration. The PCIe slot has been upgraded from x24 to x32, which allows for two x16 slots, or one x16 slot and two x8 slots, making the server more flexible as the head node for the Big Basin JBOG. This new hardware will need to be deployed at scale in Facebook data centres. One can therefore envision the 4 Mbit/s video stream travelling from the user's smartphone via the mobile operator's network to the nearest Facebook data centre.

Machine learning processes running on the GPU servers perform what Facebook terms Simultaneous Localisation and Mapping (SLAM). The AI essentially identifies the three-dimensional space of the video and the objects or people within it. The demo showed a number of 3D effects being applied to the video stream, such as lighting/shading and the placement of objects or text. Once this processing has been completed, the output stream must continue to its destination: the other participants on the video call. Further encoding may have compressed the stream, but Facebook will still be burning outbound bandwidth to hand the video stream over to another mobile operator for delivery via IP to the app on the recipient's smartphone. Most likely, the recipient(s) of the call will have their video cameras turned on, and these streams will need the same AR processing in the reverse direction. Therefore, we can foresee a two-way AR video call burning tens of megabits of WAN capacity to/from the Facebook data centre.

The question of scalability

Facebook does not charge users for accessing any of its services, which generally roll out across the entire platform at one go or in a rapid series of upgrade steps. Furthermore, Facebook often reminds us that it is now serving a billion users worldwide. So clearly, it must be thinking about AR on a massive scale. When Facebook first began serving videos from its own servers, the scalability question was also raised, but this test was passed successfully thanks to the power of caching and CDNs. When Facebook Live began rolling, it also seemed like a stretch that it could work at global scale. Yet now there are very successful Facebook video services.

Mobile operators should be able to handle large numbers of Facebook users engaging in 4 Mbit/s upstream connections, but each of those 4 Mbit/s streams will have to make a visit to the FB data centre for processing. Fifty users will burn 200 Mbit/s of inbound capacity to the data centre, 500 users will eat up 2 Gbit/s, 5,000 users 20 Gbit/s, and 50,000 users 200 Gbit/s. If AR chats prove popular, a lot of traffic will be moving in and out of Facebook data centres, and one could easily envision a big carrier like Verizon or Sprint having more than 500,000 simultaneous users on Facebook AR. The real challenge would come if 10 million users decided to try this out on a Sunday evening - that would demand a lot of bandwidth that network engineers would have to find a way to support. Another point is that, from experience with other chat applications, people are no longer accustomed to economising on call length or the number of participants. One can expect many users to kick off a Facebook AR call with friends on another continent and keep the stream open for hours.
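The scaling arithmetic above reduces to a one-line function. This is a sketch assuming a flat 4 Mbit/s per upstream AR stream, as estimated earlier in the article; it also counts only the inbound direction.

```python
# Aggregate inbound bandwidth at the data centre for N simultaneous AR
# streams, each at an assumed 4 Mbit/s upstream.
STREAM_MBPS = 4

def aggregate_gbps(users, per_stream_mbps=STREAM_MBPS):
    """Aggregate inbound capacity in Gbit/s for `users` simultaneous streams."""
    return users * per_stream_mbps / 1000

for users in (50, 500, 5_000, 50_000, 500_000, 10_000_000):
    print(f"{users:>10,} users -> {aggregate_gbps(users):,.1f} Gbit/s inbound")
```

The 10 million-user Sunday evening scenario works out to 40 Tbit/s inbound, before counting the processed streams heading back out.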

Of course, there could be clever compression algorithms in play so that the 4 Mbit/s at each end of the connection could be reduced, while if the participants do not move from where they are calling and nothing changes in the background, perhaps the AR can snooze, reducing the amount of processing needed and the bandwidth load. In addition, perhaps some of the AR processing can be done on next gen smartphones. However, the opposite could also be true, where AR performance is enhanced by using 4K, multiple cameras per user are used on the handset for better depth perception, and the video runs at 60 fps or faster.

Augmented reality is so new that it is not yet known whether it will take off quickly or be dismissed as a fad. Maybe it will only make sense in narrow applications. In addition, by the time AR calling is ready for mass deployment, Facebook will have more data centres in operation with a lot more DWDM to provide its massive optical transport – for example the MAREA submarine cable across the Atlantic Ocean between Virginia and Spain, which Facebook announced last year in partnership with Microsoft. The MAREA cable, which will be managed by Telxius, Telefónica’s new infrastructure company, will feature eight fibre pairs and an initial estimated design capacity of 160 Tbit/s. So what will fill all that bandwidth? Perhaps AR video calls, but the question then is, will metro and regional networks be ready?
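For scale, one can set MAREA's design capacity against the per-stream estimate used in this article. This is a back-of-envelope sketch assuming 4 Mbit/s per stream and two streams per two-party call.

```python
# How many simultaneous AR streams could fill MAREA's 160 Tbit/s design capacity?
MAREA_TBPS = 160
STREAM_MBPS = 4  # assumed per-stream bitrate, per the article's estimate

streams = MAREA_TBPS * 1_000_000 / STREAM_MBPS  # Tbit/s -> Mbit/s, then divide
calls = streams / 2                             # two streams per two-party call

print(f"{streams:,.0f} simultaneous streams, or {calls:,.0f} two-party calls")
```

Forty million simultaneous streams would be needed to fill the cable, which suggests the metro and regional networks feeding it are the nearer-term bottleneck.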

It has been a very eventful start to the year for U.S. mobile operators. As the first-quarter financial reports have rolled in over the past two weeks, it is clear the two top-tier players, AT&T and Verizon, are increasingly under pressure from the No.3 and No.4 contenders, Sprint and T-Mobile US. All four operators offer virtually the same set of services, delivered to the same handsets with roughly equivalent levels of performance in most places. So it is no wonder the battle is now primarily about price.

After years of rather stagnant market positions and services, suddenly a lot is happening in the U.S. mobile market. While there have not yet been any moves this year to consolidate four players to three, nor a full commitment from a cloud company (such as AWS, Google or Microsoft) to enter and lead in wireless, the 2017 battlefield differs from 2016 in the following respects:

· The move to 'unlimited' data plans.

· A new regulatory climate and the pending super merger of AT&T and Time Warner.

One could also add the rollout of LTE-M to support the first significant wave of connected things, but that is a topic on its own.

The move to unlimited data plans

With reports of mounting subscriber losses to T-Mobile and Sprint, on February 12th Verizon unveiled an Unlimited mobile data plan for smartphones and tablets on its LTE network (the plan also covers HD video streaming, Mobile Hotspot, calling and texting to Mexico and Canada, and up to 500 Mbytes/day of 4G LTE roaming in Mexico and Canada, but subscribers may encounter throttling after 22 Gbytes of data usage on a line during any billing cycle). A day after Verizon announced its re-entry into unlimited mobile data plans, T-Mobile responded by adding HD video and 10 Gbytes of high-speed Mobile Hotspot data to its T-Mobile ONE unlimited plan. T-Mobile also introduced a new offer of two lines on T-Mobile ONE for $100.
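To put the 22 Gbyte soft cap in context, here is a hedged sketch of how long continuous streaming would take to reach it. Real usage is bursty, and the bitrates below are illustrative assumptions, not Verizon figures.

```python
# Hours of continuous streaming before hitting Verizon's 22 Gbyte soft cap.
CAP_GBYTES = 22

def hours_to_cap(mbps, cap_gbytes=CAP_GBYTES):
    """Hours of sustained streaming at `mbps` before reaching the cap."""
    cap_megabits = cap_gbytes * 8 * 1000  # Gbytes -> megabits (decimal units)
    return cap_megabits / mbps / 3600

print(f"{hours_to_cap(4):.1f} hours at 4 Mbit/s")  # roughly HD video quality
print(f"{hours_to_cap(1):.1f} hours at 1 Mbit/s")  # roughly SD video quality
```

At HD-video rates, the cap amounts to roughly 12 hours of sustained streaming per billing cycle before throttling may kick in.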

In turn, this was quickly followed by AT&T, which began offering a post-paid, unlimited mobile plan to consumers and business customers without requiring a DirecTV subscription. Under pressure from Steve Jobs, AT&T famously offered an unlimited data plan for the original iPhone, but in later years moved to tiered service. On the same day, Sprint fired back with a price cut that it says makes its service 50% cheaper than AT&T or Verizon. Sprint has been advertising heavily that its network quality/performance is 'within 1%' of the industry leader. The Sprint unlimited plan offers four lines for $22.50 each, with HD-quality video, 10 Gbytes of mobile hotspot per line and an iPhone 7 lease included.

While the 'unlimited' data plans do not make sense for all customers, they do point to a future market that is quite familiar. Over time, telecom carriers have been forced to drop per-minute charges for voice calling, then for long-distance voice calling, and then for SMS. Some of this can be attributed to over-the-top services like Skype and Messenger, but it is also the case that the overall carrying capacity of the network has increased so much that an additional text message imposes virtually no incremental burden. There is plenty of bandwidth available for these applications, and the cost of billing for each transaction may not be worthwhile - retaining customers is a higher priority than fine-grained billing. So it seems we are moving toward a market where mobile bandwidth consumption for the majority of subscribers will be 'unlimited', even if throttling occurs for some applications during peak hours or in busy locations.

New regulatory climate puts media partnerships in play

The arrival of the Trump administration was certain to bring changes to the FCC. This came quickly with the nomination and subsequent confirmation of Ajit Pai as chairman of the FCC. Within two weeks, the FCC's Wireless Telecommunications Bureau ended its investigation into wireless carriers' free data offerings. The special video bundles, which Pai noted to be popular with consumers, enable smartphone customers to view select content without consuming the data allowances on their mobile plans. Net neutrality advocates were concerned that such offerings gave preferential treatment to some content, placing other content providers at a competitive disadvantage. Mobile operators need no longer worry about FCC oversight here.

With the move to 'unlimited' data plans, mobile operators will be incentivised to cache preferred content as close to the subscriber as possible. AT&T’s DirecTV content partnerships could prove valuable here, while its pending acquisition of Time Warner, announced in October 2016 at a whopping transaction value of $108 billion, could be a game changer. Time Warner, formed in 1990 through the merger of Time Inc. and Warner Communications, encompasses several premium media properties including HBO, New Line Cinema, Turner Broadcasting System, The CW Television Network, Warner Bros., CNN, Cartoon Network, Boomerang, Adult Swim, DC Comics, Warner Bros. Animation, Castle Rock Entertainment, Cartoon Network Studios, Esporte Interativo, Hanna-Barbera Productions and Interactive Entertainment. It also owns 10% of Hulu.

Two other big regulatory events in Q1 made this the most significant quarter at the FCC in years: the FCC's Broadcast Incentive Auction (see below) and Ajit Pai's decision to reverse the Title II Net Neutrality rules adopted in 2015. Pai described the Title II rules as a 'regulatory mistake' that slowed telecom infrastructure spending in the U.S. by 5.6%, or $3.6 billion, between 2014 and 2016 for just the top 12 Internet service providers.

One of the key Net Neutrality principles was 'no paid prioritisation' for favoured content. With this out of the way, the regulatory environment would also tend to favour operators with media partnerships. No wonder Verizon's CEO Lowell McAdam was quoted in April as saying the company was open to the possibility of transformative transactions. Perhaps there will be other Time Warner-scale deals coming to the fore. However, one should not take it for granted that these mega mergers will clear all regulatory hurdles, even under the Trump administration. The issue could easily get ensnared in Trump's personal war against the media and 'fake news' companies.

There is a counter-argument to the idea that paid prioritisation will rule the market: the impact of the cloud companies has not yet been fully felt. Clearly, content and applications are consolidating onto the big clouds, each of which is highly motivated to ensure the best possible performance. AWS, for instance, runs its CloudFront content delivery network (CDN), which accelerates websites and video content. CloudFront currently has 85 locations, including 74 PoPs, and a long list of top-tier customers and brands. Even if a carrier such as AT&T develops a special set of HBO videos for its customers under a zero-rating plan, it would still have the business motivation to ensure excellent connections with the AWS PoPs.

Part 2 of this article will look at additional forces reshaping the U.S. mobile industry, including the $10 billion broadcast spectrum auction, the big FirstNet project, early moves in 5G, the shift to network virtualisation, and other trends.

According to market research firm IDC in its latest Quarterly Mobile Phone Tracker, 104.1 million smartphones were shipped to China in the first quarter of 2017, up 1% year on year, with the low growth in part due to the high inventory levels from the previous quarter.

IDC notes that the first quarter was also relatively quiet in terms of new products, with few launches aside from Huawei's new P10, P10 Plus and Honor V9, which, combined with strong momentum for the Honor brand, meant that Huawei reclaimed the top position by market share from OPPO.

The research firm finds that the top five smartphone companies have a dominant 70% share of the market, and predicts that this overall share will continue to increase, together with consolidation amongst the smaller companies, during the year. IDC does not expect any new smartphone companies to have a significant effect on the Chinese market.

In terms of vendor market share, for the first quarter IDC reports that Huawei led the China market with unit sales of 20.8 million and a share of 20.0%, up from 16.8% in the prior fourth quarter. OPPO was the second ranked supplier with sales of 18.9 million units and a share of 18.2%, up from 18.1% in the fourth quarter. The third placed vendor was vivo, with sales of 14.6 million units and a 14.1% share, down from 16.0% in the prior quarter.

The fifth ranked vendor was Apple with sales of 9.6 million units and a share of 9.2%, versus 11.0% in the fourth quarter, and Xiaomi was sixth with sales of 9.3 million units and a market share of 9.0%, compared with 7.4% in the prior quarter.
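As a consistency check on these figures, each vendor's unit sales divided by its reported share should imply roughly the same total market size, matching the 104.1 million units IDC reports for the quarter:

```python
# Cross-check the IDC vendor figures: units / share should approximate the
# quarter's total of 104.1 million units for every vendor listed.
vendors = {            # units shipped (millions), reported market share
    "Huawei": (20.8, 0.200),
    "OPPO":   (18.9, 0.182),
    "vivo":   (14.6, 0.141),
    "Apple":  (9.6,  0.092),
    "Xiaomi": (9.3,  0.090),
}

for name, (units, share) in vendors.items():
    print(f"{name}: implied market of {units / share:.1f}M units")
```

Every implied total lands within about a million units of the reported 104.1 million, so the ranked figures are internally consistent (allowing for rounding of the published shares).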

IDC notes that Android ASPs (average selling price) continued to increase, both sequentially and year on year, mainly due to the top Chinese smartphone companies Huawei, OPPO and vivo increasing their ASPs as consumers purchase flagship models. In addition, these key flagship models from Chinese suppliers offer upgraded specifications leading to higher prices. IDC cites Huawei's Honor 8, the OPPO R9s and vivo X9 as the most popular models from these companies in the quarter.

Commenting on the data, Tay Xiaohan, senior market analyst with the IDC Asia Pacific client devices team, said, "Despite a soft first quarter in China, the second quarter should pick up sequentially given not only JD.com's June promotions, but also activity around a number of new products such as vivo with its Y53, Xiaomi with its Mi 6, Meizu with its E2 and Gionee with its M6S Plus".

Nokia has announced a strategic Memorandum of Understanding (MoU) with China's Tianfu New Area Chengdu Administrative Committee under which the parties will collaborate on establishing a new digital city that will include the construction of data centre and related telecom infrastructure.

The new digital city project in the Chengdu prefecture of Sichuan province will also involve the deployment of a trial network for internet of things (IoT), the incubation of IoT applications and devices, big data and the deployment of an optical network to serve the Tianfu New Area.

Nokia noted that in December 2015 it announced plans to establish a global R&D centre in Chengdu that would focus on developing technology for areas including next generation telecom networks, IoT, big data and the cloud. The centre is now operational and houses several hundred R&D staff.

Nokia added that the digital city agreement for Tianfu New Area is the latest smart city engagement for the company worldwide, and highlights the company's strategy to expand its customer base beyond the traditional telecommunications market. In late 2016, Nokia released its Smart City Playbook, which is intended to define best practices for smart city projects.

Tianfu New Area, established in December 2011, is intended to create a modern urban area for residents, industry and commerce, with a focus on developing modern manufacturing and high-end service clusters. Tianfu New Area encompasses parts of Chengdu High-tech Zone, Longquanyi District, Shuangliu County, Xinjin County, Jianyang City, Pengshan County of Meishan City and Renshou County. The plans include the construction of the New Century Global Centre and Chengdu Tianfu International Airport.

Cisco Mexico announced that it has developed the Country Digitization Analytics Platform (CDAP) designed to support the implementation of Mexico Conectado, a program of the Mexican government's Secretariat of Communications and Transportation (SCT).

The Cisco CDAP platform is designed to provide the SCT with analytics information for usage of the initiative, in addition to raw data relating to the usage of the network.

Mexico Conectado is a program initiated by Mexico's federal government that is intended to guarantee citizens' constitutional right of access to broadband Internet service by addressing the digital divide in the country. The program was developed by the Mexican SCT and is being implemented through the Coordination of the Information and Knowledge Society (CSIC, or Coordinación de la Sociedad de la Información y el Conocimiento).

The key objective of Mexico Conectado is to extend broadband Internet access, free of charge, to low income populations via the deployment of more than 100,000 sites nationwide. The system is being implemented across Mexico, primarily in public locations such as schools, health centres, libraries, community centres, public parks and government buildings.

The Country Digitization Analytics Platform is designed to offer an open government analytics and intelligence platform and was developed by Cisco engineers leveraging the cloud-based functionality of Cisco Meraki technology in a multi-carrier and multi-service provider environment.

The CDAP works by collecting data from Mexico Conectado sites and converting it into relevant information that can facilitate measurement of the impact the country digitisation initiative is having. Specifically, the CDAP is designed to provide intelligence relating to the sustainability and social impact of the initiative, as well as to support its future fine-tuning.

The CDAP enables consolidation and/or correlation of data from a number of different management and use domains and transforms the data into analytics that can be used to measure key usability indicators for the country digitisation program. Analytics data provided includes the number of citizens using public Internet access, external/internal hotspot access distribution, bandwidth consumption and usage of government sites via public Internet.

Through the agreement, China Mobile's Open NFV Testlab will host the Enea NFV Core platform as part of the operator's Telecom Integrated Cloud (TIC) initiative. The work will specifically encompass validation of a range of NFV test cases, including virtualised CPE (vCPE), vBRAS, vEPC and vIMS, while also supporting development and integration within the Open Network Automation Platform (ONAP) project.

Enea NFV Core is a carrier-grade virtualisation software platform based on OPNFV and OpenStack. It is designed to enable the deployment and management of vCPE network functions in central offices and data centres utilising generic hardware platforms. The NFV Core is optimised for the vCPE use case and central office deployments and is designed to offer the performance, reliability and flexibility required in next generation telecom networks.

Noel Hurley, VP and GM, networking and servers, business segments group at ARM, noted, "ARM and its ecosystem of partners (are) committed to enabling OPNFV to bring efficient and cost-effective compute power for data networks… the network pipeline must be expanded at scale to support new computing across all markets in the most efficient way... the ecosystem delivers efficiency through integrated solutions that demonstrate the performance-per-watt, density and TCO provided by ARM technology".

Minneapolis-based Transition Networks, a provider of data network integration solutions and a company of Communications Systems, announced it is showcasing fibre-to-the-desk connectivity solutions designed for enterprise and government network applications during Dell EMC World 2017 in Las Vegas.

At the event, Transition Networks will introduce a range of new products for implementing fibre into Ethernet networks, including a fibre network interface card (NIC) with PoE+ port, plus fast Ethernet and Gigabit Ethernet fibre NICs that have been certified by Dell EMC, specifically the Scorpion-USB 3.0 Gigabit Ethernet fibre adapter and an 18-slot mini media converter chassis for consolidating copper-to-fibre media conversion equipment.

Transition Networks will showcase its family of Gigabit Ethernet fibre-to-the-desk NICs, which feature both fibre connectivity for linking to a PC and PoE+ ports for connecting to copper-based powered devices such as IP phones. The company will also demonstrate a government use case: a national deployment of a PoE security solution with an intelligence agency.

During Dell EMC World, Transition Networks is participating in the session 'Fiber-to-the-Desk Connectivity Solutions for Dell PCs and Wyse Thin Clients', which will cover its Gigabit Ethernet fibre-to-the-desk NICs. The discussion will also cover its mini stand-alone media converter and chassis options, designed to simplify physical layer applications, and ION platform, which is designed for more complex fibre integration needs.

Transition Networks' ION platform integrates copper and different types of fibre, and is designed to enable customers to extend networks and optimise existing infrastructure. The solution also supports management of connected network interfaces. Designed for enterprise data centres and core network applications, the ION platform offers flexibility via modular or stand-alone units with 1-, 2-, 6- or 19-slot chassis options.

Viavi Solutions, a supplier of network and service enablement solutions and optical security and performance products, has announced its CellAdvisor Base Station Analyzer provides support for the signal analysis required for narrowband Internet of Things (NB-IoT) connectivity.

The new test capability is designed to address the needs of service providers seeking to test the overlay IoT infrastructure that must co-exist with the traditional mobile communications network. Viavi is introducing the upgrade to the CellAdvisor following trials with Tier 1 global service providers and collaborations with network equipment manufacturers.

Viavi noted that the IoT will enable billions of smart devices to maintain low-speed and low-latency connectivity to mobile communications networks via narrowband signals. These networks will need to simultaneously support high-bandwidth applications such as video streaming using wideband signals, creating an inherent conflict between the two usage requirements. As a result, network operators require solutions to help manage these disparate connection types.

To meet this need, Viavi has developed software-based NB-IoT testing that can be licensed on existing CellAdvisor handheld instruments, which are in use with major carriers worldwide. The new software feature enhances the CellAdvisor solution, which supports RF over CPRI and BBU emulation, in addition to LTE testing and automated interference hunting to help improve operational efficiency.

With the new NB-IoT support, CellAdvisor measures the potential interference and performance impact the NB-IoT signal may have on the LTE wideband signal. It also verifies whether the signal has the reach and coverage required to serve the number of devices in the assigned geographic area, including accounting for criteria such as building penetration.

The capabilities of the CellAdvisor with NB-IoT support include analysis of signal power levels, digital demodulation and interference down to the single PRB (physical resource block) for the signal being measured. The measurements provide customers with in-depth data on how the network is operating in terms of performance, coverage and data traffic capacity, also identifying potential issues related to interference or intermodulation.