Medical Body Area Networks

Medical Body Area Network (MBAN) technology will provide a flexible platform for the wireless networking of multiple body transmitters used for the purpose of measuring and recording physiological parameters and other patient information or for performing diagnostic or therapeutic functions, primarily in health care facilities. This platform will enhance patient safety, care and comfort by reducing the need to physically connect sensors to essential monitoring equipment by cables and wires. As the numbers and types of medical radio devices continue to expand, these technologies offer tremendous power to improve the state of health care in the United States.

An MBAN is a little like a cellular wireless system in miniature, worn on a patient’s body. Sensors around the body monitor various functions, depending on the patient’s needs, and communicate their data to a central hub, worn by the patient or located close by. The hub aggregates data from the various sensors, and transmits those data using the health care facility’s network (possibly over Wi-Fi or Ethernet) to a central control point, from where the data are made available to the professional staff for interpretation and appropriate response.
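The hub-and-sensor architecture just described can be sketched in a few lines of Python. The class name, sensor identifiers, and JSON payload format below are illustrative assumptions for the sketch, not part of any MBAN specification:

```python
import json
import time

# Hypothetical sketch of an MBAN hub: sensors report readings, and the hub
# aggregates them into one message for backhaul over the facility's network.
class MbanHub:
    def __init__(self, patient_id):
        self.patient_id = patient_id
        self.readings = {}          # latest reading per sensor

    def receive(self, sensor_id, value, unit):
        """Called whenever a body-worn sensor reports a measurement."""
        self.readings[sensor_id] = {"value": value, "unit": unit,
                                    "ts": time.time()}

    def aggregate(self):
        """Bundle all current readings into one message for LAN backhaul."""
        return json.dumps({"patient": self.patient_id,
                           "readings": self.readings})

hub = MbanHub("patient-042")
hub.receive("ecg", 72, "bpm")
hub.receive("spo2", 98, "%")
message = hub.aggregate()   # forwarded over Wi-Fi or Ethernet to monitoring
```

In a real deployment the aggregated message would travel over the facility's Wi-Fi or Ethernet to the central monitoring station, as described above.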

Medical applications of BAN cover continuous waveform sampling of biomedical signals, monitoring of vital sign information, and low-rate remote control of medical devices. They can be broadly classified into two categories depending on their operating environments. One is the wearable BAN, which mainly operates on the surface or in the vicinity of the body, e.g., for medical monitoring. The other is the implantable BAN, which operates inside the human body, e.g., capsule endoscopes and pacemakers.

Wired Technologies

Wired technologies inevitably result in reduced patient mobility and increased difficulty and delay in transporting patients. Caregivers, in turn, can spend inordinate amounts of time managing and arranging monitor cables, as well as gathering patient data.

The introduction of MBAN represents an improvement over traditional medical monitoring devices (both wired and wireless) in several ways, and will reduce the cost, risk and complexity associated with health care. For example, a health care facility could monitor more patients, particularly in patient care areas where Wireless Medical Telemetry Service (WMTS) is not currently installed; an MBAN could be used outside the health care facility, e.g., within patients’ homes; and an MBAN could be used for both monitoring and therapeutic applications.

Wireless Technologies

Wireless Medical Telemetry Service (WMTS) in health care facilities has overcome some of the obstacles presented by wired sensor networks. Nonetheless, WMTS is an in-building network that is often used primarily for monitoring critical care patients in only certain patient care areas. The MBAN concept would allow medical professionals to place multiple inexpensive wireless sensors at different locations on or around a patient’s body and to aggregate data from the sensors for backhaul to a monitoring station using a variety of communications media.

Currently, there are multiple frequency bands available for different types of wireless medical device applications. The MedRadio service provides an umbrella framework to regulate the operation of both implanted and body-worn wireless medical devices used for diagnostic and therapeutic purposes in humans. MedRadio uses spectrum in the 401-406 MHz, 413-419 MHz, 426-432 MHz, 438-444 MHz, and 451-457 MHz bands, all on a secondary basis.

The Wireless Medical Telemetry Service (WMTS) allows for the transmission of patient-related telemetric medical information to a central monitoring location within a hospital or other medical facility. WMTS operates in the 608-614 MHz, 1395-1400 MHz, and 1427-1432 MHz bands on a primary basis. In addition, medical radio device manufacturers have for many years been allowed to market products which operate on a variety of frequencies on an unlicensed basis.

Dedicated Medical Body Area Networks (MBANs) Wireless Spectrum

The FCC recently took regulatory action (MBAN Joint Proposal) to dedicate spectrum for wireless monitoring sensors, or Medical Body Area Networks (MBANs), making the U.S. the first country in the world to dedicate spectrum specifically for wireless health devices. The FCC concluded that the only cost resulting from these new regulations is the risk of increased interference, and has minimized that risk by adopting rules that permit an MBAN device to operate only over relatively short distances and as part of a low-power networked system.

Initially, Aeronautical Mobile Telemetry (AMT) licensees and MBAN proponents strongly disagreed as to whether MBAN and AMT operations could successfully coexist in the same frequency band. The approach taken provides frequencies where an MBAN can coexist with existing spectrum users and engage in robust frequency re-use, which will result in greater spectral efficiency.

FCC MBAN Joint Proposal

This FCC Joint Proposal is a comprehensive plan that draws from both the existing MedRadio and WMTS rules to specify MBAN operational requirements for body-worn sensors and hubs. It includes a detailed set of requirements for MBAN management within a health care facility. It also proposes that MBAN use in the 2360-2390 MHz band be limited mostly to indoor use and subject to specific coordination procedures and processes to protect AMT users in that band, whereas MBAN use in the 2390-2400 MHz band could occur at any location and without coordination.

The Joint Proposal describes an MBAN as consisting of a master transmitter (hereinafter referred to as a “hub”), which is included in a device close to the patient, and one or more client transmitters (hereinafter referred to as body-worn sensors or sensors), which are worn on the body and transmit only while maintaining communication with the hub that controls their transmissions. The hub would convey data messages to the body-worn sensors to specify, for example, the transmit frequency that should be used.

The hub and sensor devices would transmit in the 2360-2400 MHz band. The hub would aggregate patient data from the body-worn sensors under its control and, using the health care facility’s local area network (LAN) (which could be, for example, Ethernet, WMTS or Wi-Fi links), transmit that information to locations where health care professionals monitor patient data. The hub also would be connected via the facility’s LAN to a central control point that would be used to manage all MBAN operations within the health care facility.

To protect AMT operations from harmful interference, the Joint Proposal would have the Commission designate an MBAN frequency coordinator who would coordinate MBAN operations in the 2360-2390 MHz band with the AMT frequency coordinator. The control point would serve as the interface between the MBAN coordinator and the MBAN master transmitters to control MBAN operation in the 2360-2390 MHz band. The control point would receive an electronic key, which is a data message that specifies and enables use of specific frequencies by the MBAN devices. The control point, in turn, would generate a beacon or control message to convey a data message to the hub via the facility’s LAN that specifies the authorized frequencies and other operational conditions for that MBAN.
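The electronic-key and beacon workflow above can be illustrated with a short sketch. The field names and message shapes are assumptions made for illustration; the Joint Proposal does not define a wire format:

```python
# Illustrative sketch of the control-point workflow: the MBAN coordinator
# issues an electronic key authorizing specific frequencies, and the control
# point translates it into a beacon/control message for a hub on the LAN.
def make_beacon(electronic_key, hub_id):
    """Turn a coordinator-issued key into a control message for one hub."""
    low, high = electronic_key["authorized_mhz"]
    # The coordinated portion of the band is 2360-2390 MHz.
    if not (2360 <= low < high <= 2390):
        raise ValueError("key must authorize frequencies within 2360-2390 MHz")
    return {
        "hub": hub_id,
        "tx_range_mhz": (low, high),
        "indoor_only": True,   # 2360-2390 MHz MBAN use is limited to indoors
    }

key = {"authorized_mhz": (2360, 2370)}   # as received from the coordinator
beacon = make_beacon(key, "hub-7")
```

The hub would apply these operating conditions to the sensors under its control, completing the chain from coordinator to control point to hub to sensor.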

MBAN – Licensed by Rule

To help encourage the development of MBAN devices and applications, the FCC decided not to require users to apply for and receive individual licenses from the FCC. Instead, all MBANs will be “licensed by rule,” which means that users will be deemed licensed as long as they abide by all technical and operational limitations.

However, MBAN operations will be permitted only on a secondary basis — users must not cause harmful interference to and must accept interference from the primary licensees in the band.

MBAN Power and Frequency Summary

The permitted MBAN operations depend on which portion of the band will be used:

MBANs using the 30 MHz in the 2360-2390 MHz band. MBAN operations in this band are restricted to indoor use in health care facilities. Users of this portion of the band will be required to register with an MBAN coordinator (discussed below; the coordinator will be selected at a later date) and to coordinate with primary licensees, if necessary.

MBANs using the 10 MHz in the 2390-2400 MHz band. In this band, MBAN operations can be used in any location, such as in a health care facility, in a patient’s home, or outdoors while the patient is in transit (e.g. ambulances). Users of this portion of the band will not be subject to registration and coordination requirements.

Power limits depend on the band: 1 milliwatt at 2360-2390 MHz, with a higher limit of 20 milliwatts at 2390-2400 MHz, in part to give patients greater mobility within their homes.
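The two-band rule set summarized above fits in one small function. This is a sketch of the rules as stated in this summary, not a substitute for the FCC's actual technical requirements:

```python
# Band rules as summarized above: 2360-2390 MHz is indoor, health-care-facility
# use with coordinator registration and a 1 mW limit; 2390-2400 MHz may be used
# anywhere, uncoordinated, at up to 20 mW.
def mban_rules(freq_mhz):
    """Return the operating constraints for an MBAN transmit frequency."""
    if 2360 <= freq_mhz < 2390:
        return {"indoor_only": True, "coordination": True, "max_mw": 1}
    if 2390 <= freq_mhz < 2400:
        return {"indoor_only": False, "coordination": False, "max_mw": 20}
    raise ValueError("outside the 2360-2400 MHz MBAN band")

assert mban_rules(2365)["max_mw"] == 1        # coordinated, indoor portion
assert mban_rules(2395)["coordination"] is False   # uncoordinated portion
```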

FDA Role in MBAN Functions

In the past, the Food and Drug Administration (“FDA”) has expressed concern about the potential for interference when health care providers rely on wireless medical devices. In a 2007 draft guidance document, Radio-Frequency Wireless Technology in Medical Devices, FDA commented that a quality system for devices that incorporate wireless technology should address potential concerns such as wireless quality of service, wireless coexistence, data integrity and security, and applicable EMC and telecommunications standards and regulations. As a result, there is likely to be increased collaboration between the FCC and the FDA as wireless medical devices enter the market. In particular, the FCC suggested that the FDA could play an important role in specifying whether MBANs may be used to perform functions that are life-critical or time-sensitive.

Wireless Coexistence

Although there is some overlap between electromagnetic compatibility (EMC) and wireless coexistence, differences exist. Wireless coexistence is the ability of one wireless system to perform a task in an environment where other systems that may or may not be using the same set of rules can also perform their tasks. EMC is the ability of a device to function properly in its intended electromagnetic environment without introducing excessive electromagnetic energy that could interfere with other devices. Manufacturers of electrically powered medical devices routinely test their equipment to applicable national and international consensus safety standards. EMC test results are often used to support safety claims to regulatory agencies such as FDA. Less well known are the issues and concerns associated with wirelessly enabled medical devices, although this is changing thanks to FDA’s guidance document on wireless medical devices.

At any given time, a typical home or hospital uses a number of wireless systems (e.g., IEEE 802.11a/b/g/n, or Wi-Fi; Bluetooth; ZigBee; cordless phones) operating on the same industrial, scientific, and medical (ISM) band. Given the increasing use of wireless technology, RF wireless medical devices and other wireless systems operating nearby can interfere with each other. If a collision between their respective transmissions occurs, data packets transmitted by medical devices could be delayed or blocked, potentially interfering with timely transmissions of critical data. Techniques such as retransmission and forward error correction might no longer be sufficient to overcome interference and spectrum congestion. Hence, methods to design and test wirelessly enabled medical devices for risks associated with coexistence of wireless technologies are essential for innovative, safe, and effective RF wireless medical devices.

In a similar vein, the same technology can provide effective solutions for personal entertainment as well. Medical body area networks will provide opportunities to expand these product features, delivering better healthcare and well-being for users, and will therefore create economic opportunity for technology component suppliers and equipment manufacturers.

Medical applications are critical applications and may be life critical. The requirements for medical Body Area Networks (BANs) include: robust links for bounded data loss and bounded latency, capacity for high density of patients and sensors, coexistence with other radios, battery life for days to months of continuous operation, and small form factors for body devices.

These requirements can be satisfied through the utilization of a number of techniques including error control techniques and adaptive repeat requests, low duty cycle and power management, and the development of more efficient, diverse antennas.
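Two of the techniques named above, automatic repeat request (ARQ) for bounded data loss and low duty cycling for long battery life, can be illustrated with a toy sketch. The loss probability, retry budget, and timing values are arbitrary assumptions chosen for the example:

```python
import random

random.seed(1)  # deterministic demo channel

def send_with_arq(packet, loss_prob=0.3, max_retries=4):
    """ARQ sketch: retransmit until acknowledged or the retry budget runs out.

    Bounding the retry count bounds both latency and energy spent per packet.
    """
    for attempt in range(1 + max_retries):
        delivered = random.random() > loss_prob   # simulated lossy channel
        if delivered:
            return attempt                        # retries used
    return None                                   # bounded loss: give up

def duty_cycle_power(on_ms, period_ms, active_mw):
    """Average power draw when the radio sleeps between short transmit bursts."""
    return active_mw * on_ms / period_ms

# A 1 mW radio that is awake 10 ms out of every second averages 0.01 mW,
# which is how a body-worn sensor can run for weeks on a small battery.
avg_mw = duty_cycle_power(10, 1000, 1.0)
```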

Existing standards have been designed for commercial applications with little or no consideration for life saving emergency scenarios. In particular, there is a need to ensure reliable communications by network devices such as sensors that are involved in emergency situations, while ensuring low power consumption.

Are you aware that America’s National Broadband Plan includes a recommendation that the FCC initiate a proceeding to ensure that all multichannel video programming distributors (MVPDs) install a gateway device or equivalent functionality in all new subscriber homes and in all homes requiring replacement set-top boxes, starting on or before Dec. 31, 2012?

To facilitate innovation and limit costs to consumers, the gateway device must be simple. Its sole function should be to bridge the proprietary or unique elements of the MVPD network (e.g., conditional access, tuning and reception functions) to widely used and accessible, open networking and communications standards. That would give a gateway device a standard interface with televisions, set-top boxes and other in-home devices and allow consumer electronics manufacturers to develop, market and support their products independently of MVPDs.

The following key principles apply:

A gateway device should be simple and inexpensive, both for MVPDs and consumers. It should be equipped with only those components and functionality required to perform network-specific functions and translate them into open, standard protocols. The device should not support any other functionality or components.

A gateway device should allow consumer electronics manufacturers to develop, sell and support network-neutral devices that access content from the network independently from MVPDs or any third parties. Specifically, third-party manufacturers should not be limited in their ability to innovate in the user interface of their devices by MVPD requirements. User-interface innovation is an important element for differentiating products in the consumer electronics market.

Similar to broadband modems, the proposed gateway device would accommodate each MVPD’s use of different delivery technologies and enable them to continue unfettered investment and innovation in video delivery. At the same time, it would allow consumer electronics manufacturers to design to a stable, common open interface and to integrate multiple functions within a retail device. Those functions might include combining MVPD and Internet content and services, providing new user interfaces and integrating with mobile and portable devices such as media players and computers. It could enable the emergence of completely new classes of devices, services and applications involving video and broadband.

To ensure a competitive market for set-top boxes, the open gateway device:

Should use open, published standards for discovering, signaling, authenticating and communicating with retail devices.

Should allow retail devices to access all MVPD content and services to which a customer has subscribed and to display the content and services without restrictions or requirements on the device’s user interface or functions and without degradation in quality (e.g., due to transcoding).

Should not require restrictive licensing, disclosure or certification. Any criterion should apply equally to retail and operator-supplied devices. Any intellectual property should be available to all parties at a low cost and on reasonable and non-discriminatory terms.

Should pass video content through to retail devices with existing copy protection flags from the MVPD.

Requiring that the gateway device or equivalent functionality be developed and deployed by the end of 2012 is reasonable given the importance of stimulating competition and innovation in set-top boxes, the extensive public record established in this subject area and the relatively simple architectures proposed to date.

The FCC should establish interim milestones to ensure that the development and deployment of a gateway device or equivalent functionality remains on track. In addition, the FCC should determine appropriate enforcement mechanisms for MVPDs that, as of Dec. 31, 2012, have not begun deploying gateway device functionality in all new subscriber homes and in all homes requiring replacement set-top boxes.

Enforcement mechanisms would be determined with public input as part of the rulemaking proceeding. They could include, for example, issuing fines against non-compliant operators or denying extensions of certain CableCARD waivers like those granted for Digital Transport Adapters (DTAs). The FCC could also reach agreements with operators to provide set-top boxes for free to new customers until a gateway device is deployed.

The FCC should establish up front the criteria for the enforcement mechanisms. The FCC may want, for instance, to grant small operators more time to deploy the gateway device to take account of unique operational or financial circumstances. Transparency in the criteria for the enforcement mechanisms will establish more regulatory certainty in the market and help limit the number of waiver requests.

The following is a summary of the recommendations across Networks, Devices and Applications that are contained within America’s Broadband Competition and Innovation Policies.

Networks

The federal government, including the FCC, the National Telecommunications and Information Administration (NTIA) and Congress, should make more spectrum available for existing and new wireless broadband providers in order to foster additional wireless-wireline competition at higher speed tiers.

The FCC and the U.S. Bureau of Labor Statistics (BLS) should collect more detailed and accurate data on actual availability, penetration, prices, churn and bundles offered by broadband service providers to consumers and businesses, and should publish analyses of these data.

The FCC, in coordination with the National Institute of Standards and Technology (NIST), should establish technical broadband performance measurement standards and methodology and a process for updating them. The FCC should also encourage the formation of a partnership of industry and consumer groups to provide input on these standards and this methodology.

The FCC should continue its efforts to measure and publish data on actual performance of fixed broadband services. The FCC should publish a formal report and make the data available online.

The FCC should initiate a rulemaking proceeding by issuing a Notice of Proposed Rulemaking (NPRM) to determine performance disclosure requirements for broadband.

The FCC should develop broadband performance standards for mobile services, multi-unit buildings and small business users.

The FCC should comprehensively review its wholesale competition regulations to develop a coherent and effective framework and take expedited action based on that framework to ensure widespread availability of inputs for broadband services provided to small businesses, mobile providers and enterprise customers.

The FCC should ensure that special access rates, terms and conditions are just and reasonable.

The FCC should ensure appropriate balance in its copper retirement policies.

The FCC should clarify interconnection rights and obligations and encourage the shift to IP-to-IP interconnection where efficient.

The FCC should move forward promptly in the open proceeding on data roaming.

Devices

The FCC should initiate a proceeding to ensure that all multichannel video programming distributors (MVPDs) install a gateway device or equivalent functionality in all new subscriber homes and in all homes requiring replacement set-top boxes, starting on or before Dec. 31, 2012.

On an expedited basis, the FCC should adopt rules for cable operators to fix certain CableCARD issues while development of the gateway device functionality progresses. Adoption of these rules should be completed in the fall of 2010.

Applications

Congress, the Federal Trade Commission (FTC) and the FCC should consider clarifying the relationship between users and their online profiles.

Congress should consider helping spur development of trusted “identity providers” to assist consumers in managing their data in a manner that maximizes the privacy and security of the information.

The FCC and FTC should jointly develop principles to require that customers provide informed consent before broadband service providers share certain types of information with third parties.

The federal government, led by the FTC, should put additional resources into combating identity theft and fraud and help consumers access and utilize those resources, including bolstering existing solutions such as OnGuard Online.

FCC consumer online security efforts should support broader national online security policy, and should be coordinated with the Department of Homeland Security (DHS), the FTC, the White House Cyber Office and other agencies. Federal agencies should connect their existing websites to OnGuard Online to provide clear consumer online security information and direction.

The federal government should create an interagency working group to coordinate child online safety and literacy work, facilitate information sharing, ensure consistent messaging and outreach and evaluate the effectiveness of governmental efforts. The working group should consider launching a national education and outreach campaign involving governments, schools and caregivers.

The federal government should investigate establishing a national framework for digital goods and services taxation.

Several months ago six of the world’s largest tech companies – Google, Microsoft, Facebook, Yahoo, Verizon and Deutsche Telekom – joined forces to form the Open Networking Foundation (ONF), which will advance the development of a new open source networking protocol called OpenFlow. What exactly is OpenFlow, and why would these huge companies throw their collective weight behind it?

The ability of a network operator to create custom functions applicable to its own network, and then apply those functions to switches from multiple vendors, is the true promise of SDN. OpenFlow allows a customer to programmatically control their network, over an industry-standard interface, using the same distributed system libraries and packages they use to orchestrate the rest of their infrastructure. The two key points here are ‘customer programmatic control’ and ‘industry standard’.

OpenFlow enables networks to evolve, by giving a remote controller the power to modify the behavior of network devices, through a well-defined “forwarding instruction set”. The growing OpenFlow ecosystem now includes routers, switches, virtual switches, and access points from a range of vendors.

In today’s packet networks, a router/switch is both the control element which makes control decisions on traffic routing, as well as the forwarding element responsible for traffic forwarding, and both these functionalities are tightly linked (Fig. 1a). Housing control and data functions in the same box makes routers complex and fragile, quite unlike the streamlined routers envisaged by the Internet pioneers. Today, a backbone router runs millions of lines of source code, and a plethora of features in software and hardware.

Transport networks are similar. While traditionally they have had a separation between a circuit switched data plane and a packet switched control plane, this control could reside within the box (Fig. 1b) or outside the box with proprietary interfaces (Fig. 1c). Additionally, out-of-box-control may not even be a distributed control plane, but more likely an Element Management System (EMS) / Network Management System (NMS) hierarchy. Those that desire the former are headed towards the same problems seen in packet switched networks today.

OpenFlow advocates a clean separation between the data plane and the control plane in packet or circuit networks (Fig. 2). Because the data plane is typically implemented in hardware, OpenFlow provides the control plane with a common hardware abstraction. A network (for example, an autonomous system) is managed by a network-wide operating system (e.g., NOX) running on multiple software controllers (Fig. 3), which controls the data plane using the OpenFlow protocol.
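The hardware abstraction OpenFlow exposes is essentially a flow table of match/action rules that a remote controller installs. The rule and packet representations below are simplified assumptions for illustration; the real protocol defines binary match fields and action structures:

```python
# Sketch of the OpenFlow data-plane abstraction: a flow table programmed by a
# remote controller, with table misses punted back to the controller.
class FlowTable:
    def __init__(self):
        self.rules = []                      # ordered (match, action) pairs

    def install(self, match, action):
        """Called by the controller to program the switch's forwarding."""
        self.rules.append((match, action))

    def forward(self, packet):
        """Data-plane lookup: the first matching rule decides the action."""
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"          # table miss: ask the controller

switch = FlowTable()
switch.install({"dst": "10.0.0.2"}, "output:port2")   # rule from controller
action = switch.forward({"src": "10.0.0.1", "dst": "10.0.0.2"})
```

The point of the separation is visible here: the switch holds no routing logic of its own, only the rules the controller chose to install.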

NOX is an open platform for developing management functions for enterprise and home networks. NOX runs on commodity hardware and provides a software environment on top of which programs can control large networks at Gigabit speeds. More practically, NOX enables the following:

Developers can add their own control software, and, unlike standard *nix-based router environments, NOX provides an interface for managing off-the-shelf hardware switches at line speed.

NOX provides a central programming model for an entire network – one program can control the forwarding decisions on all switches on the network. This makes program development much easier than in the standard distributed fashion.
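The central programming model can be sketched as one program computing rules for every switch. The three-switch topology and shortest-path-style policy below are illustrative assumptions, not NOX's actual API:

```python
# One central control program emits forwarding rules for all switches on a
# path, instead of each switch running its own distributed routing protocol.

# Adjacency of a hypothetical three-switch network: switch -> {neighbor: port}.
TOPOLOGY = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s3": 2},
    "s3": {"s1": 1, "s2": 2},
}

def program_path(path):
    """Emit a per-switch output rule for each hop of a chosen path."""
    rules = {}
    for here, nxt in zip(path, path[1:]):
        rules[here] = {"action": "output", "port": TOPOLOGY[here][nxt]}
    return rules

# Route traffic s1 -> s2 -> s3; every switch on the path gets its rule
# from the same central program.
rules = program_path(["s1", "s2", "s3"])
```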

In this video from last year’s Structure Conference, Nick McKeown of Stanford University explains the concepts behind OpenFlow and the ways it might change how networks are built and customized.

Open Networking Summit set to explore software-defined networking

For three days this October, a group of computer networking industry heavyweights and academic researchers will assemble at the Li Ka Shing Center at Stanford University for the Open Networking Summit, the first public industry event exclusively focused on a new paradigm known as software-defined networking (SDN) and OpenFlow.

The Open Networking Summit offers a day of hands-on tutorials plus two days of keynote and panel sessions featuring networking thought leaders and influential media.

Most industry participants are members of the Open Networking Foundation, a non-profit industry consortium whose mission is to standardize OpenFlow and advance software-defined networking.

“The Open Networking Summit will — for the first time — bring together the people who want to make SDN and OpenFlow succeed, revolutionizing the field of networking,” said Urs Hölzle, a Senior VP of Engineering at Google who is also Chairman of Open Networking Foundation board. “OpenFlow and SDN will accelerate innovation of the Internet infrastructure — just as the introduction of the PC revolutionized the computer industry in the 1980s.”

The Summit itself is the culmination of nearly a decade of close partnership between industry and academia to rethink networking.

“OpenFlow/SDN is a great example of technology transfer from a university to industry,” said Nick McKeown, a professor of computer science and electrical engineering and faculty director of Clean Slate Program at Stanford.

Starting in 2003, the National Science Foundation funded a series of programs to rethink the Internet architecture. One such program (known as the “100×100 Project for Clean Slate Design”) funded the research of Stanford PhD student Martin Casado. Working with professors at Stanford and Berkeley, Casado started to think how networks could be redesigned, as if from a clean slate, to be more secure, more dependable, and easier to manage. The key ideas from his work led to Software Defined Networking and OpenFlow — a new way to control network switches and routers.

With further funding from the National Science Foundation, the concepts blossomed into the networking substrate of the Global Environment for Network Innovation (GENI), a nationwide proof-of-concept project that now links nine U.S. universities, including Stanford, University of Washington, Indiana University, University of Wisconsin, Georgia Tech, Clemson, Rutgers, and Princeton.

GENI allows researchers to test new ideas at scale with real traffic. Software Defined Networking and OpenFlow became an important part of the GENI backbone, helping GENI to achieve its goal of experimentation at scale with real applications and users.

Soon thereafter, the networking community began to take note. A group of information technology giants — including Google, Facebook, Microsoft, Verizon, Deutsche Telekom, Yahoo!, HP, NEC, Dell, Juniper, among many others — embraced OpenFlow.

The Open Networking Summit will be a major milestone in broader acceptance of OpenFlow/SDN.

“NSF has a long history of investing in foundational computing research that has led to the transfer of knowledge from lab to practice,” said Farnam Jahanian, NSF Assistant Director for Computer and Information Science and Engineering. “We are thrilled to see that software defined networking and OpenFlow have become an important part of the GENI backbone, and we look forward to future innovations that build on these important investments.”

“The Open Networking Summit is an important part of the technology transfer of OpenFlow/SDN. Here we have a university research idea funded by government now becoming an important new direction for industry,” said McKeown. “Much of the credit for where we stand today rests with the National Science Foundation and its early intellectual backing and funding of these ideas.”

In short 4G Americas believes that “SMS-to-911 has significant limitations, not the least of which is substantial widespread modifications at PSAPs that state and local governments can ill afford. SMS-to-911 is simply not viable. The Commission should not propose that carriers implement SMS-to-911.”

4G Americas is a trade association dedicated to supporting the deployment of 4G mobile broadband technologies throughout the Americas.

4G Americas commends the Federal Communications Commission (“FCC” or “Commission”) for initiating the proceeding on Next Generation 911 (“NG911”) and for facilitating the transition of legacy 911 networks to NG911. 4G Americas agrees with the Commission that replacing today’s 911 “system with a broadband-enabled, IP-based 911 network will offer far more flexibility, resilience, functionality, innovation potential, and competitive opportunities than is presently possible.”1

Chris Pearson, President of 4G Americas, presented to the Commission’s Emergency Access Advisory Committee (“EAAC”) on August 12, 2011 and reviewed 4G Americas’ recent White Paper, Evaluation of Short-Term Interim Techniques for Multimedia Emergency Services. A copy of the MMES White Paper is attached for inclusion in the record of the Framework for Next Generation 911 Deployment proceeding, PS Docket No. 10-255.

4G Americas, its member companies and others are studying NG911 technology solutions in the Third Generation Partnership Project (3GPP) standards organization, with the leading technology appearing to be an Internet Protocol Multimedia Subsystem-based (IMS-based) Multimedia Emergency Systems (“MMES”). The 3GPP IMS-based MMES standard being developed will offer more flexibility, resiliency, functionality, and innovation than interim solutions being considered today and is consistent with the Commission’s goals.

The 3GPP IMS-based MMES will also facilitate emergency communications by persons who are deaf, deaf-blind, hard of hearing, or with speech impairments, consistent with the goals of the Twenty-First Century Communications and Video Accessibility Act. 4G Americas expects that the MMES being specified in 3GPP will be supported in the LTE and IMS environments. However, the 3GPP MMES being developed will take several more years to deploy.

Appreciating that the emergency services and disability communities want a more immediate, text-based NextGen 911 service, 4G Americas undertook an evaluation of the various interim technologies that might be a solution. 4G Americas investigated possible interim technologies that were actually supported by wireless networks and PSAPs. In the current budget environment, it does little good to study technologies that would require massive investments by PSAPs or require a complete overhaul of the existing emergency communications systems.

Moreover, an “interim” solution should be just that – available in the immediate term with little or no changes to end-user devices and networks, since the 3GPP MMES will be available in several years. Given that economic and temporal reality, the scope of 4G Americas’ evaluation was minimal impacts to end user devices, wireless infrastructure and PSAPs.

4G Americas’ evaluation found that none of the short-term interim techniques for MMES can be supported without a significant, costly development effort. As detailed more fully in the attached White Paper, the implementation of any “interim” technique for a short-term MMES solution will require significant resources and time to develop and deploy. Sources of funding for the development and deployment of any short-term technique have not been addressed at any level. Even if funding were available, all of the potential techniques evaluated in the White Paper have operational limitations that would impact the use of the technique as an interim short-term solution.

One of the many techniques evaluated by 4G Americas is SMS-to-911. Some proponents have suggested SMS-to-911 in part because of the growing use of SMS in the general population. In its Notice of Inquiry, the Commission asked a number of questions about “whether NG911 networks should be configured to support SMS emergency communications.”2

SMS is a store-and-forward service with no service or performance guarantees. No location information is provided, so the originating network may not accurately route the message to the correct PSAP. SMS is not a session-based protocol, so subsequent messages from the subscriber may be delivered to different PSAP call takers.3 In short, SMS-to-911 has significant limitations, not the least of which is the substantial, widespread modifications required at PSAPs that state and local governments can ill afford.4 SMS-to-911 is simply not viable.

The Commission should not propose that carriers implement SMS-to-911. Allowing industry to focus on the 3GPP standards process for MMES, rather than forcing industry into a costly retrofit of legacy 911 for only an interim period, is more consistent with President Obama’s recent Executive Order on Regulation and Independent Agencies.5

4G Americas believes the wisest course is to focus on development of NextGen 911 through 3GPP which will provide far more innovation, flexibility, resilience, and functionality than any “interim” solution under discussion.

The FCC conceded that there could be some issues with so many cooks. “While the operation of multiple database administrators may present some coordination challenges,” it said when it approved the initial nine, “we find it is in the public interest to have multiple parties developing business models for this new mechanism,” both now and as a test-bed for future sharing. “The value of this exercise extends beyond databases for the TV bands, as the Commission is also considering employing similar database approaches in other spectrum bands,” the commission said.

The commission decided one more wouldn’t hurt, although some broadcast engineers disagreed. Engineers for the Integrity of Broadcast Auxiliary Services (EIBASS) had argued against letting Microsoft into the group on the grounds that it had filed late, that it did not include relay and translator stations among those it would protect, and that Microsoft’s test at the NAB convention raised some questions about its effectiveness.

Even though Microsoft did not ask to be included in that group until three months after the FCC had chosen the administrators, and well over a year since it had first asked for volunteers, the commission said there was nothing in the rules that prevented the company from asking.

“We find that Microsoft has shown that it has the technical expertise to develop and operate a TV bands database. Moreover, as explained below, none of the concerns raised by any of the commenters in the record before us causes us to conclude that Microsoft is not capable of meeting all the requirements placed on database administrators by the Commission’s rules,” said OET in granting the approval.

The FCC approved the sharing of the so-called “white spaces” (broadcasters called them “interference zones”) in the TV spectrum by unlicensed devices, like laptops and other devices using Microsoft software, so long as a database was set up to keep track of what frequencies were actually available.

Google and Microsoft were among the companies pushing the FCC hardest to open up the “white spaces,” while broadcasters pushed back over potential interference to their new DTV signals. The tension between broadcasters and Microsoft over the issue goes back years. For example, there were some problems with early testing of a Microsoft device back in 2008 that prompted a war of words between the company and broadcasters.

All the databases must undergo a 45-day test period before they can go live.

Are you aware, or does the Telecommunications Provider or Vendor company that you work for know, that Telcordia has petitioned the FCC to reform or strike Amendment 70, to institute competitive bidding for Number Portability Administration, and to end the NAPM LLC’s interim role in the Number Portability Administration contract? If you click the link above, it will take you to the FCC web site, where you can read for yourself all the details of the order adopted May 16, 2011.

The FCC is proceeding with an open and transparent procurement process that will consider the advantages of having multiple LNPAs. This by all rights should ultimately benefit consumers and the overall telecommunications industry with competitive pricing, more innovative services and redundancy for what has become a mission-critical service.

After weighing responses to a March 8, 2011 Order and Request for Comment, which was itself a response to Telcordia’s petition to institute competitive bidding for the administration of number portability, the FCC adopted new procedures on May 16 for selecting new or additional administrators and re-inserted itself more definitively in the process.

The FCC will allow the North American Numbering Council (NANC) and the North American Portability Management LLC (NAPM) to continue to lead the selection process for new administrators, but at the urging of the National Association of State Utility Consumer Advocates (NASUCA) the Commission will take final approval authority over any resulting contracts. Telcordia, for its part, requested several changes to the order, some of which the FCC adopted, such as giving the commission approval authority over the RFP, RFI and Technical Requirements Document and designating the commission as arbiter in any disputes that arise in the recommendations process.

The ultimate result of an open bidding process may well be a multi-vendor solution to managing the databases for number portability. It has been a long wait for those hoping to see the FCC’s original intention of a multi-vendor solution realized. Other unrealized benefits that a multi-vendor environment might bring about include price competition and new features. “In a monopolistic environment one tends to emphasize stability over change and enhancements. You don’t have the competition to spur you on to do creative things. It’s human nature; it’s just how things work,” said Richard Jacowleff, president of Interconnection Solutions at Telcordia.

In 1997, in its Second Report and Order on Telephone Number Portability (FCC 97-289), the FCC required implementation of number portability enabling consumers to keep (or “port”) their local phone number when switching from one telecommunications provider to another by naming two LNPAs to provide NPAC services on a regional basis. In that order the FCC noted, “there are clear advantages to having at least two experienced number portability database administrators that can compete with and substitute for each other, thereby promoting cost-effectiveness and reliability in the provision of Number Portability Administration Center services.” However, when one of the selected LNPA vendors was unable to perform, implementation defaulted to the single remaining vendor, and many of the advantages of having multiple LNPAs were never realized.

Telcordia succeeded – with the help of others, such as NASUCA, other advocacy groups, and the carriers themselves – in opening the process, a first since the Second Report and Order in 1997 establishing the NPAC. Telcordia has reiterated what the FCC said in its original order, that “there are clear advantages to having at least two experienced number portability database administrators that can compete with and substitute for each other, thereby promoting cost-effectiveness and reliability in the provision of Number Portability Administration Center services.”

Reliability hasn’t been an issue – yet, said Richard Jacowleff, president of Interconnection Solutions at Telcordia. “They have had very good service, but still, it is one vendor and one set of infrastructure. If the NPAC is down for any period that will start to decay service for the entire United States,” he said. Besides, the requirements are getting more complex all the time, which leads to a higher probability of failure at some point. “The ecosystem in ’97 was just the carriers. Today those carriers are dwarfed by the other players like the content guys and other players who want access,” Jacowleff said.

Press Release:

Telcordia Applauds FCC’s Approach to Open Bid Process for Management of Number Portability Administration Center in U.S.

PISCATAWAY, NJ – May 18, 2011 – Telcordia, a global leader in number portability solutions, with deployments in 15 countries, today announced that it will participate in the procurement process ordered by the Federal Communications Commission (FCC) (FCC DA11-883) for the US Number Portability Administration Center (NPAC). In response to Telcordia’s Petition, the FCC Order calls for multiple vendors to compete to become Local Number Portability Administrators (LNPAs) to provide NPAC services. The process, as ordered, will include a request for information (RFI) and a request for proposal (RFP) issued by the FCC.

In 1997, in its Second Report and Order on Telephone Number Portability (FCC 97-289), the FCC required implementation of number portability enabling consumers to keep (or “port”) their local phone number when switching from one telecommunications provider to another by naming two LNPAs to provide NPAC services on a regional basis. In that Order the FCC noted, “there are clear advantages to having at least two experienced number portability database administrators that can compete with and substitute for each other, thereby promoting cost-effectiveness and reliability in the provision of Number Portability Administration Center services.” However, when one of the selected LNPA vendors was unable to perform, implementation defaulted to the single remaining vendor, and many of the advantages of having multiple LNPAs were never realized.

Telcordia is pleased that the FCC is proceeding with an open and transparent procurement process that will consider the advantages of having multiple LNPAs. This will ultimately benefit consumers and the overall telecommunications industry with competitive pricing, more innovative services and redundancy for what has become a mission critical service.

“We look forward to joining the open-bid process for number portability administration in the U.S.,” said Richard Jacowleff, President Telcordia Interconnection Solutions. “We are confident after completing the world’s largest number portability implementation in India that Telcordia can immediately contribute to the process by sharing our considerable knowledge base and technology expertise in this area.”

Telcordia has unrivaled experience in number portability and is credited with implementing rollouts in North America and around the world. For more information on NPAC and the FCC-ordered procurement process, visit www.telcordia.com/npac. For more information about Telcordia, visit www.telcordia.com.

Telcordia’s Primer On Their Petition to the FCC Regarding Number Portability and The Open-Bidding Contracting Process for Administering the Number Portability Databases

A Brief History of Local Number Portability
As defined on the Number Portability Administration website, local number portability (LNP) and wireless number portability (WNP) are “the ability of users of telecommunications services, to retain, at the same location, existing telephone numbers without impairment of quality, reliability, or convenience when switching from one telecommunications carrier to another.”

Local number portability was a revolutionary idea when it was first considered in the mid-1990s. Congress and the Federal Communications Commission (FCC) saw that successful implementation would go a long way in allowing competition in the telephone business — a key goal of the Telecommunications Act of 1996. Originally number portability was only mandated for wireline services; wireless number portability was implemented in 2004.

The FCC adopted the recommendation of the North American Numbering Council (NANC) – which is a federal advisory committee – that there be seven regional number portability databases, and “multiple database administrators to permit competition in both the initial and future competitive bidding and selection processes.” (FCC 97-289, at para. 36). The industry eventually agreed on an industry limited liability structure approved by the FCC for oversight, management and contracting with potential database administrators.

Originally, eight regional Limited Liability Companies (LLCs) were formed to represent the telephone companies in contracting with database administrators, one LLC for each of the original Regional Bell Operating Company regions and one for Canada. After an open competition, Lockheed-Martin Information Management Services and Perot Systems were selected as the initial database administrators, and each was granted a contract by two or more of the LLCs to provide services until 2003. But Perot Systems dropped out when its service could not be ready on time, and the LLCs that had contracted with Perot signed contracts with Lockheed-Martin instead. The seven U.S. LLCs eventually merged into a single entity, the North American Portability Management, LLC (NAPM).

In November 1999, Lockheed-Martin Information Management Services became NeuStar, Inc. NeuStar has continually been the LNP Administrator managing NPAC, the Number Portability Administration Center. NPAC is the system that manages the porting of telephone numbers from one local service provider to another.

A more detailed explanation of the history of Local Number Portability can be found on NPAC’s web page.

The Problem As Defined By Telcordia
The last open-bidding contracting process for administering the number portability databases was in 1997, when Lockheed (NeuStar) and Perot received five-year, non-exclusive contracts. Since that time, NeuStar has been given three contract extensions without any open process seeking competitive bids. The most recent contract extension, termed Amendment 57, extends the Master Agreement another four years through 2015 — meaning, if unchanged, the original database administrator will have been in place for 12 years since the intended expiration of the only contract awarded through an open competitive process. In return for this extension, Amendment 57 provides service providers a volume-based reduction in porting transaction rates.

But it also imposes financial penalties by instituting a per-transaction cost increase (amounting to about $30 million per year based on projected 2008 volume) for any attempt by NAPM to seek lower porting transaction rates from NeuStar, the Commission, or from a potential competitor. These penalty provisions impose the price increase if NAPM even merely issues a Request for Information regarding competitive alternatives, normally the most preliminary step in any contracting process.

Telcordia believes Amendment 57’s penalty provisions are anti-competitive and the antithesis of the FCC’s original intent that multiple vendors administer number portability in the country. It believes these penalty provisions violate the Telecommunications Act of 1996, FCC policy, and antitrust laws. It further believes these penalty provisions are unjust and unreasonable and contrary to the public interest.

Telcordia estimates that competition could bring savings of at least $60 million in 2008 alone on a contract that is expected to generate well over $300 million of revenue in 2008. These savings could total $240 million or more between now and January 1, 2012.

The Solution As Defined By Telcordia
The FCC has legal and regulatory authority over the number portability process. Telcordia has filed a petition with the FCC to raise the issue that its original intent — to have competition in the administration of number porting – is no longer being realized. Telcordia is requesting that the Commission use its authority to reform Amendment 57 by eliminating the financial penalty provisions and, because it has been a decade since there was an open bidding process, to direct NAPM to solicit competitive bids immediately.

While Telcordia believes that the introduction of competition could save approximately $60 million a year in the cost of number portability administration services, it may be that the savings could be even greater — but without an open process, no one will ever know.

It is important to note that Telcordia seeks no changes in the current system of Commission oversight of the number porting process but rather is raising the issue with the FCC so the Commission can re-establish an open bid process, as was the FCC’s original intent. The system, in which contract negotiation and administration rests with a neutral industry body with Commission oversight only when necessary, is well-designed. As the Petition demonstrates, if and when something goes awry it can always be brought to the Commission’s attention.

The provisions of Amendment 57 only became public in November 2006 and, following a review, are now ripe for consideration by the Commission. The Commission now can, and should, reform Amendment 57 by removing its penalty provisions; and should further exercise its oversight responsibilities and require NAPM to implement a formal procurement process so that carriers and the public can get the financial savings and other benefits that competition brings.

GLOSSARY OF TERMS — Number Portability

Amendment 57: Term given to the September 2006 contract extension between NAPM and NeuStar.

LEC: Local Exchange Carrier. The company, often part of a Regional Bell Operating Company (RBOC), that provides local telephone service. LECs also include independent local telephone companies and competitors of both the Bell Companies and independent telephone companies.

LNP: Local Number Portability. The ability of users of telecommunications services to retain, at the same location, existing telephone numbers without impairment of quality, reliability, or convenience when switching from one telecommunications carrier to another.

LSP: Local Service Provider. A company that provides basic local telephone service.

NANC: The North American Numbering Council. A Federal Advisory Committee that was created to advise the Federal Communications Commission on numbering issues and to make recommendations that foster efficient and impartial number administration. NANC meetings are generally held six times a year.

NANP: North American Numbering Plan. An integrated telephone numbering plan serving 19 North American countries that share its resources, such as area codes and local numbers. These countries include the United States and its territories, Canada, and most of the Caribbean nations. Regulatory authorities in each participating country have plenary authority over numbering resources, but the participating countries share numbering resources cooperatively.

NANPA: The North American Numbering Plan Administration. NANPA holds overall responsibility for the neutral administration of NANP numbering resources, subject to directives from regulatory authorities in the countries that share the NANP. NANPA is not a policy-making entity. In making assignment decisions, NANPA follows regulatory directives and industry-developed guidelines. NANPA’s responsibilities are defined in Federal Communications Commission (FCC) rules and in comprehensive technical requirements drafted by the telecommunications industry and approved by the FCC.

NAPM LLC: North American Portability Management Limited Liability Company. The jointly-owned legal entity that is responsible for oversight of the Number Portability Administration Centers (NPACs). While all certified LSPs can use the NPAC, only official members of the LLC have a vote in its business decisions. Members currently include the following North American carriers: AT&T, Qwest, Verizon, Sprint, T-Mobile, Citizens and Embarq.

NeuStar, Inc.: Formerly Lockheed Martin Information Management Services, NeuStar is a telecommunications and information technology company with headquarters in Sterling, VA. NeuStar is the North American Numbering Plan Administrator, under contract with the FCC.

NPAC: The Number Portability Administration Center, which supports the implementation of LNP, serving as the central mediation center for LNP activity.

NPAC SMS: NPAC Service Management System. The system used by the NPAC to manage number portability processes and information.

Sherman Act: The Sherman Anti-Trust Act. The oldest of all United States anti-trust laws, signed into law in 1890 by President Benjamin Harrison. It was the first U.S. government action to limit monopolies. The bill is named for its author, Senator John Sherman of Ohio.

Telcordia Technologies, Inc.: A global provider of telecommunications software and services for IP, wireline, wireless and cable networks. Telcordia states that it is the number one provider of number portability solutions worldwide. Telcordia is headquartered in Piscataway, N.J., with offices throughout the United States, Canada, Europe, Asia, and Central and Latin America.

Telecommunications Act of 1996: Signed into law on February 8, 1996, by President Clinton. It was the first major overhaul of telecommunications law in almost 62 years. It provided a pro-competitive, de-regulatory national policy framework designed to open local telecommunications markets to competition.

TN: Telephone Number.

WNP: Wireless Number Portability. The ability of users of wireless telecommunications services to retain an existing telephone number without impairment of quality, reliability, or convenience when switching from one telecommunications carrier to another.

Summary of the Proposed LNPA Selection Process

The Proposal — which is based on, and consistent with, the Commission’s rules and orders — reflects consensus support for the following LNPA selection process:

The FCC will reaffirm the following delegations of authority:

NANC is authorized to oversee the selection of one or more independent, non-governmental entities that are not aligned with any particular telecommunications segment to serve as the LNPA(s) and to make recommendations to the Commission regarding such selection; and

The NANC, in consultation with the NAPM, will use the selection process approved by the Commission.

The Commission or the Wireline Competition Bureau will select the LNPA(s).

Approval of the NANC selection(s) will occur through an action by the Commission or the Wireline Competition Bureau.

The NANC will establish an LNPA Selection Working Group (“SWG”) to oversee the selection process of the LNPA(s).

The SWG will be composed of, and open to, any individual who (a) is a NANC Member, NANC Alternate, or technical staff of a NANC Member company, association or governmental entity and (b):

i. does not have a conflict of interest, or the appearance of a conflict of interest, with any vendor or potential vendor;

ii. signs a non-disclosure agreement which prohibits (a) disclosure of confidential information to anyone who is not a member of the SWG or the NANC Chair and (b) the use of confidential information for any other purpose or in any other venue or hearing; and

iii. is not a potential vendor.

For reasons of confidentiality, the NANC will delegate the authority to reach consensus on behalf of the NANC to the SWG with respect to the request for information (“RFI”), request for proposals (“RFP”) and the technical requirements document (“TRD”).

Membership and participation in meetings is unrestricted, but each participating NANC Member company, association or governmental entity may exercise only one (1) vote on any given issue regardless of how many individuals associated with the NANC Member company, association or governmental entity are participating in the SWG. Decisions must be reached by consensus, which does not require unanimous consent, but is not reached if the majority of any affected industry segment disagrees with the proposed decision.
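The consensus rule described above (no unanimity required, but consensus fails if a majority of any affected industry segment disagrees) can be sketched concretely. The segment names and votes below are purely illustrative, not drawn from any actual NANC proceeding:

```python
def has_consensus(votes_by_segment):
    """Model of the SWG consensus test: votes_by_segment maps an industry
    segment name to a list of booleans, one vote per participating NANC
    Member entity (True = agrees, False = disagrees). Consensus does not
    require unanimous consent, but is not reached if a majority of any
    segment disagrees with the proposed decision."""
    for segment, votes in votes_by_segment.items():
        disagree = sum(1 for vote in votes if not vote)
        if votes and disagree > len(votes) / 2:
            return False  # a majority of this segment disagrees
    return True

# One dissenter in a three-member segment is not a majority: consensus holds.
ok = has_consensus({"wireline": [True, True, False], "wireless": [True, True]})
# Two of three wireline members disagree: consensus fails.
blocked = has_consensus({"wireline": [False, False, True], "wireless": [True]})
```

Note that under this rule a single large segment can block consensus even if every other segment is unanimously in favor, which is why unresolved issues are escalated to the FCC.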

The SWG members will elect three chairs for the SWG to administer the SWG activities and determine consensus when required.

FCC staff may attend any meeting of the SWG.

The NAPM LLC will utilize its Future of the NPAC Subcommittee (“FoNPAC Subcommittee”), which operates pursuant to the NAPM LLC Operating Agreement, to administer the selection process of the LNPA(s).

The SWG will work with, provide policy guidance as outlined by the FCC to, and oversee the technical work by, the FoNPAC Subcommittee.

The SWG and the FoNPAC Subcommittee will follow the LNPA vendor selection process set forth below:

The SWG will oversee the development of the draft RFI by the FoNPAC Subcommittee.

The FoNPAC Subcommittee will submit the draft RFI to the SWG for approval.

The SWG will review and either approve the draft RFI or suggest revisions to the draft RFI for the FoNPAC Subcommittee. The FoNPAC Subcommittee will consider all suggested revisions and work with the SWG to reach agreement regarding suggested revisions. The SWG will prepare a status report and submit the approved RFI to the NANC Chair.

The NANC Chair will submit the approved RFI to the FCC and will submit the SWG status report to the NANC.

Once the FCC publicly announces the release date of the RFI, the NAPM LLC may activate website software to receive public and vendor responses to the RFI.

The FoNPAC Subcommittee will review and analyze the RFI responses and present recommendations regarding the outline for the TRD and RFP to the SWG.

The SWG will review and approve the outline for the TRD and RFP or suggest revisions regarding NPAC policy issues and vendor qualifications selection criteria to be included in the TRD and RFP for the FoNPAC Subcommittee. The FoNPAC Subcommittee will consider all suggested revisions and work with the SWG to reach agreement regarding suggested revisions to the outline for the RFP.

The FoNPAC Subcommittee will draft the TRD and RFP and submit it to the SWG for review and approval.

The SWG will review and approve the TRD and RFP or suggest revisions regarding the TRD and RFP for the FoNPAC Subcommittee. The FoNPAC Subcommittee will consider all suggested revisions and work with the SWG to reach agreement regarding suggested revisions. The SWG will prepare a status report and will submit the TRD, RFP and status report to the NANC Chair.

The NANC Chair will submit the TRD and RFP to the FCC and the SWG status report to the NANC.

Once the FCC publicly announces the release date of the TRD and RFP, the NAPM LLC may activate website software to receive vendor responses to the TRD and RFP.

The FoNPAC Subcommittee will review and evaluate vendor responses to the TRD and RFP, and prepare a vendor selection recommendation to the SWG.

The SWG will review and evaluate the FoNPAC Subcommittee’s vendor selection recommendation. The SWG may approve the FoNPAC Subcommittee’s vendor selection recommendation or provide specific reasons for not approving the selection recommendation to the FoNPAC Subcommittee. The FoNPAC Subcommittee will consider this feedback and may revise its vendor selection recommendation.

The SWG will present the FoNPAC Subcommittee’s final vendor selection recommendation to the NANC.

The NANC will utilize a consensus process to approve the FoNPAC Subcommittee’s vendor selection recommendation or suggest specific reasons why the FoNPAC Subcommittee should consider an alternative recommendation, which the FoNPAC Subcommittee will consider and, if appropriate, revise its recommendation.

Upon consensus approval of the FoNPAC Subcommittee’s vendor selection recommendation, the NANC Chair will submit the recommended vendor(s) and evaluation report to the NANC for final approval, including the number of votes for each prospective vendor. The NANC will have final approval of the recommendation that will be transmitted to the FCC by the NANC Chair.

If the NANC does not achieve consensus approval, the NANC Chair shall inform the FCC, and forward the FoNPAC’s, the SWG’s, and the NANC’s evaluation information to the FCC.

Upon final approval of vendor(s) selection by the FCC, the NANC will disband the SWG.

FCC staff may attend any meeting of the FoNPAC.

The FCC will authorize the NAPM LLC to approve and oversee system design, development, industry testing and activation.

If the SWG is unable to reach consensus regarding any issue, the issue shall be referred for resolution to the FCC, subject to appropriate protections for confidential information.

Part 1 English Protocol Decodes of a SIP Soft Client Registering To a SIP Server

Have you ever wondered what an actual SIP registration looks like? What follows are protocol snapshots of some of the various DNS queries and registration messages that are exchanged between a Windows XP SIP VoIP soft client establishing communications with, and then registering to, a SIP server. The SIP server has the capability to act as a registrar, proxy or redirect server.

Below you will find various SIP methods in use, such as REGISTER, SUBSCRIBE, OPTIONS, etc. This is certainly not a complete list of every message that was exchanged during the registration. It is a representative sampling to give the reader a flavor of what transpires when a SIP VoIP client, such as the X-Lite soft client I used, registers to a SIP server to establish its online presence.

I intend to publish a part 2 showing the actual call setup (INVITE) and answer, etc., along with RTP video media packets. You should also expect to see some SDP messages; SDP (Session Description Protocol) describes the multimedia session end to end. So look forward to that post in the near future.

If you are interested in obtaining a copy of a Word document of this post plus all of the decoded protocol captured during this SIP registration, then please contact me at ngntechtalk@gmail.com.

The Session Initiation Protocol (SIP) is an IETF-defined signaling protocol widely used for controlling communication sessions such as voice and video calls over Internet Protocol (IP). The protocol can be used for creating, modifying and terminating two-party (unicast) or multiparty (multicast) sessions consisting of one or several media streams. The modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. Other applications include video conferencing, streaming multimedia distribution, instant messaging, presence information, file transfer and online games.

In November 2000, SIP was accepted as a 3GPP signaling protocol and permanent element of the IP Multimedia Subsystem (IMS) architecture for IP-based streaming multimedia services in cellular systems.

The SIP protocol is an Application Layer protocol designed to be independent of the underlying Transport Layer; it can run on Transmission Control Protocol (TCP), User Datagram Protocol (UDP), or Stream Control Transmission Protocol (SCTP). It is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP). The decodes that follow are structured as if you were looking at the stack itself from the Ethernet frames to the application protocol such as SIP running on top of the stack.
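Because SIP is text-based and HTTP-like, its messages are easy to dissect by hand. The sketch below parses the start line and headers of a hypothetical REGISTER request similar to those in the decodes that follow; the addresses, tags and Call-ID are made up for illustration.

```python
# Minimal sketch: split a SIP message into its start line, headers and body.
# The REGISTER text below is illustrative, not taken from the actual capture.
def parse_sip_message(raw):
    head, _, body = raw.partition("\r\n\r\n")   # headers end at the blank line
    lines = head.split("\r\n")
    start_line = lines[0]
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")    # split at the first colon only
        headers[name.strip().lower()] = value.strip()
    return start_line, headers, body

register = (
    "REGISTER sip:example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds\r\n"
    "From: <sip:alice@example.com>;tag=1928301774\r\n"
    "To: <sip:alice@example.com>\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 1 REGISTER\r\n"
    "Contact: <sip:alice@192.0.2.10:5060>\r\n"
    "Expires: 3600\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)
start, hdrs, _ = parse_sip_message(register)
print(start)            # REGISTER sip:example.com SIP/2.0
print(hdrs["contact"])  # <sip:alice@192.0.2.10:5060>
```

The same parser works on responses (such as the 200 OK messages later in this post), since only the start line differs between requests and responses.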

Shown below are frames 114 and 118 expanded. Frame 114 is a standard Domain Name System (DNS) SRV query (a service record query, defined in RFC 2782). The SRV resource record identifies the host(s) that will support particular services. Frame 118 is the response to that DNS query. Within the IP stack, DNS resides at the Application layer, as does SIP.
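For readers who want to work with SRV records programmatically, here is a minimal sketch that pulls the RFC 2782 fields (priority, weight, port, target) out of a zone-file-style record. The record text is hypothetical; a real client would obtain these values from a resolver.

```python
# Sketch: interpret a DNS SRV record per RFC 2782.
# Zone-file format: _service._proto.name TTL IN SRV priority weight port target
def parse_srv(record):
    fields = record.split()
    return {
        "name": fields[0],
        "priority": int(fields[4]),   # lower priority is tried first
        "weight": int(fields[5]),     # load-balancing weight within a priority
        "port": int(fields[6]),
        "target": fields[7].rstrip("."),
    }

srv = parse_srv("_sip._udp.example.com. 86400 IN SRV 10 60 5060 sipserver.example.com.")
print(srv["port"], srv["target"])   # 5060 sipserver.example.com
```

This is exactly the information a SIP client needs from frame 118: which host and port to send the REGISTER to.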

Notice that in query frame 114 immediately below, the field Domain Name System (query) is asking the question:

Frame 667 is one of the SIP registration requests and is shown below. Notice the SIP REGISTER method and all the data it contains that allow me to register, such as the Contact URI info. This information was manually populated by me in the VoIP soft client. I of course had to create an account and establish my credentials with the “cloud” SIP server before I could populate the soft client. This then allows me to register to that server and make VoIP multimedia SIP-based calls.

Frame 689 below is the SUBSCRIBE method being sent from the soft client to the SIP server. Frame 690 is the acknowledgment from the SIP server back to the soft client for frame 689 and asks for proxy authentication. Frame 692 is the soft client’s response; see the Proxy-Authorization: Digest field in frame 692.
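For the curious, the Digest value the soft client places in the Proxy-Authorization header is computed roughly as follows. This is a sketch of the classic RFC 2617 calculation without the qop extension; the username, realm, password and nonce below are made up for illustration.

```python
import hashlib

# Sketch of the RFC 2617 Digest response (no qop) a soft client computes
# when answering a 407 Proxy Authentication Required. Credentials are fake.
def md5hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    ha1 = md5hex(f"{username}:{realm}:{password}")  # secret half
    ha2 = md5hex(f"{method}:{uri}")                 # request half
    return md5hex(f"{ha1}:{nonce}:{ha2}")

resp = digest_response("alice", "example.com", "secret",
                       "SUBSCRIBE", "sip:alice@example.com", "b7c9036dbf35")
print(resp)  # 32-character hex digest carried in Proxy-Authorization
```

Because the server supplies a fresh nonce in its 407 challenge, the password itself never crosses the wire; only this one-way digest does.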

Frame 745 below is the SIP server sending the OPTIONS method to the soft client. The SIP OPTIONS method allows a UA to query another UA or a proxy server as to its capabilities. This allows a client to discover information about the supported methods, content types, extensions, codecs, etc., without “ringing” the other party.

Frame 746 is the acknowledgment back to the SIP server for frame 745. Frame 747 is the 200 OK from the soft client listing the methods it supports: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, NOTIFY, MESSAGE, SUBSCRIBE and INFO.
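Pulling the supported methods out of such a 200 OK is a short exercise. The sketch below assumes an illustrative response carrying the Allow header with the method list quoted above.

```python
# Sketch: extract the supported methods from the Allow header of the
# 200 OK that answers an OPTIONS request. The response text is illustrative.
ok_response = (
    "SIP/2.0 200 OK\r\n"
    "Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, NOTIFY, "
    "MESSAGE, SUBSCRIBE, INFO\r\n"
    "\r\n"
)

methods = []
for line in ok_response.split("\r\n"):
    if line.lower().startswith("allow:"):
        # Take everything after the first colon, then split on commas
        methods = [m.strip() for m in line.split(":", 1)[1].split(",")]
print(methods)  # ['INVITE', 'ACK', ..., 'INFO']
```

A calling application can consult this list before attempting, say, a SUBSCRIBE, rather than discovering an unsupported method by receiving a 405 error.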

The concept of the cloud and cloud services is persuading more and more users—large enterprises, small/medium-sized businesses (SMBs) and consumers—to lay down their IT burdens and move to higher ground. As the cloud-service concept steadily morphs into solid reality, service providers are staking out various positions in this new marketplace with an eye toward creating new revenue streams. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources.

There are three basic cloud service models: software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS). In the SaaS model the provider’s applications run on a cloud infrastructure and the user accesses applications from various client devices through a thin client interface such as a web browser (web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure: network, servers, operating systems, storage or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

In the PaaS model the consumer deploys onto the cloud infrastructure applications it has created or acquired using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure but does have control over the deployed applications and sometimes over the application hosting environment configurations.

IaaS provisions processing, storage, networks, and other fundamental computing resources on which the consumer can deploy and run software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and sometimes limited control of select networking components, for example, host firewalls.

Four deployment models

There are four different types of cloud and each has unique characteristics. In a private cloud the infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

In a community cloud the infrastructure is shared by several organizations and supports a specific community that has similar concerns, for example, mission, security requirements, policy, and compliance considerations. It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. Finally, in a hybrid cloud the infrastructure is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together by technology that enables data and application portability.

Corporations’ and small businesses’ environments are becoming increasingly complex and competitive. Consumers are also demanding newer and faster products and services, and this trend is forcing companies not only to deliver new products or services but to do so rapidly. This demand, which is global, is nudging businesses to become more agile.

The threat of voice traffic from VoIP is also very real. Suppliers are competing with telecoms operators to offer integrated communications packages, notably by developing new applications compatible with fixed, mobile handsets or computers (for example, Skype). The current expansion in permanent connections to the Internet via smartphones and broadband usages such as TV as part of triple-play and quad-play offerings or fixed-line and mobile Internet streaming has already resulted in the saturation of the existing collection and transfer networks.

Consequently, providers have to invest heavily to boost capacity on their collection and transport networks. In addition to market saturation in industrialized countries, prices and margins are also decreasing for fixed-line, broadband Internet and mobile telephony. Telecom carriers delivering new IT services to enterprises and SMBs to generate new revenue streams have become the biggest threat to system integrators and VARs; not only do they own the connectivity, but by offering IT services they are now in a position to offer an end-to-end solution.

VARs and system integrators are quickly figuring out new partnerships, and partner to partner partnering is becoming a new trend. As companies move from having in-house IT services to managed services options they are discovering that they can dramatically benefit from cloud services.

The cloud benefit is twofold: 1) they do not have to invest heavily in IT and HR resources and can allocate their internal resources to the organization’s mission-critical tasks, and 2) they can still efficiently deliver products or services to their customers. IT has always been an asset that has helped companies operate more efficiently. Now that global economies are more closely aligned, corporate IT has to adapt quickly to enable the rapidly evolving business processes needed by functional entities such as sales, marketing, finance or engineering. The integration of cloud computing technologies is one option to enable this flexibility.

As part of the National Broadband Plan (NBP), in March of 2010 the FCC made available a consumer-initiated online test of broadband speed. The purpose of the Consumer Broadband Test is to give consumers additional information about the quality of their broadband connections across their chosen ISPs’ networks and to increase awareness about the importance of broadband quality in accessing content and services over the Internet.

The Consumer Broadband Test has gathered data about how well the Internet is functioning, both generally and for specific ISPs at specific times. But the results of the software-based Consumer Broadband Test do not always capture the baseline connection quality provided by the consumer’s broadband service: the core connectivity between an ISP and its subscribers, rather than between the rest of the Internet and those subscribers. For instance, results of software-based tests can vary depending on the end user’s computer, the type of connection between the end user’s computer and the ISP’s network (e.g., the use of an in-home WiFi router may affect test results), the number of end user devices connected to a broadband service, and the physical distance of the end user from the testing server.

Additionally, there is no standard testing methodology for software-based broadband performance tests, and the Consumer Broadband Test therefore uses two alternative testing methodologies, which also affects the results. In order to assess the speed claims made by ISPs, and to see how particular activities – such as browsing the web or watching streaming video – are impacted by different speeds, the FCC decided to complement the more general Consumer Broadband Test with more consistent tests of the speed of broadband delivered to American homes.

Based on the foregoing, the major findings of the study included the following:

Actual versus advertised speeds. For most participating broadband providers, actual download speeds are substantially closer to advertised speeds than was found in data from early 2009 and discussed in a subsequent FCC white paper, though performance can vary significantly by technology and specific provider.

Sustained download speeds. The average actual sustained download speed during the peak period was calculated as a percentage of the ISP’s advertised speed. This calculation was done for the different speed tiers offered by each ISP.

Results by ISP: Upload speeds among ISPs ranged from a low of 85 percent of advertised speed to a high of 125 percent of advertised speed.

Latency. Latency is the time it takes for a packet of data to travel from one designated point to another in a network. Since many communication protocols depend upon an acknowledgement that packets were received successfully, or otherwise involve transmission of data packets back and forth along a path in the network, latency is often measured by round-trip time. Round-trip time is the time it takes a packet to travel from one end point to another, and for an acknowledgement of successful transit to be received back. In our tests, latency is defined as the round-trip time from the consumer’s home to the closest server used for speed measurement within the provider’s network.

o During peak periods, latency increased across all technologies by 6.5 percent, which represents a modest drop in performance.
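Latency as defined above can be approximated with nothing more than a timed TCP handshake. The sketch below is a rough stand-in for the methodology; the actual study used dedicated measurement servers inside each provider's network rather than an arbitrary host.

```python
import socket
import time

# Rough sketch of a round-trip-time measurement: time a TCP connection
# handshake to a server. This approximates latency as defined above;
# real test platforms measure against servers inside the ISP's network.
def tcp_rtt(host, port=80, timeout=3.0):
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass                                    # handshake complete on return
    return (time.monotonic() - start) * 1000.0  # milliseconds

# Example (requires network access):
# print(f"{tcp_rtt('example.com'):.1f} ms")
```

Note that a TCP connect actually involves one and a half round trips, so this slightly overstates the pure round-trip time; ICMP echo (ping) or a UDP echo protocol gives a cleaner number.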

Results by technology.

o Latency was lowest in fiber-to-the-home services, and this finding was true across all fiber-to-the-home speed tiers.

o Web browsing. In specific tests designed to mimic basic web browsing—accessing a series of web pages, but not streaming video or using videochat sites or applications—performance increased with higher speeds, but only up to about 10 Mbps. Latency and other factors limited performance at the highest speed tiers. For these high speed tiers, consumers are unlikely to experience much if any improvement in basic web browsing from increased speed–i.e., moving from a 10 Mbps broadband offering to a 25 Mbps offering.

o VoIP. VoIP services, which can be used with a data rate as low as 100 kilobits per second (kbps) but require relatively low latency, were adequately supported by all of the service tiers discussed in this Report. However, VoIP quality may suffer during times when household bandwidth is shared by other services. The VoIP measurements utilized for this Report were not designed to detect such effects.

o Streaming Video. Test results suggest that video streaming should work well across all technologies tested, provided that the consumer has selected a broadband service tier that matches the quality of streaming video desired. For example, standard video is currently commonly transmitted at speeds below 1 Mbps, while high quality streamed video might require 2 Mbps or more. Consumers should understand the requirements of the streaming video they want to use and ensure that their chosen broadband service tier will meet those requirements, including when multiple members of a household simultaneously want to watch streaming video on separate devices.
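The sizing advice in the streaming video finding reduces to simple arithmetic: sum the bitrates of the simultaneous streams and compare against the tier's speed. A sketch, using the figures quoted in the text (roughly 1 Mbps for standard video, 2 Mbps or more for high quality):

```python
# Sketch: check whether a broadband tier can carry a given mix of
# simultaneous video streams. Bitrates follow the figures in the text
# (standard video below 1 Mbps, high-quality video 2 Mbps or more).
def tier_supports(tier_mbps, stream_bitrates_mbps):
    required = sum(stream_bitrates_mbps)  # total Mbps for all streams at once
    return tier_mbps >= required

# Two high-quality streams (2 Mbps each) plus one standard stream (1 Mbps):
print(tier_supports(10.0, [2.0, 2.0, 1.0]))  # True
print(tier_supports(4.0,  [2.0, 2.0, 1.0]))  # False
```

In practice a household should leave headroom beyond this simple sum, since web browsing, VoIP and other traffic share the same connection.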

Chart 1 shows average download performance over a 24-hour period and during peak periods across all ISPs. Most ISPs delivered actual download speeds within 20% of advertised speeds, with modest performance declines during peak periods. As shown in Chart 2, upload performance is much less affected than download performance during peak periods. Almost all ISPs reach 90 percent or above of their advertised rate, even during peak periods.

In general, it was found that even during peak periods, the majority of ISPs were providing actual speeds that were generally 80 percent or better than advertised rates, though there was considerable variation among the ISPs tested, as shown in Chart 3. As noted previously, performance was also found to vary by technology. Results from a particular company may include different technology platforms (e.g., results for Cox include both their DOCSIS 2.0 and DOCSIS 3.0 cable technologies; results for AT&T include both DSL and U-Verse).

PERFORMANCE VARIATION BY ACCESS TECHNOLOGY

As shown in Chart 4, there is some variance in performance by technology during peak periods. DSL on average meets 82 percent of advertised download speed during peak periods, cable meets 93 percent and fiber-to-the-home meets 114 percent of advertised speeds. Upload performance is, as noted, generally better than download performance during peak periods, with all technologies delivering 95 percent or better of advertised upload speeds.
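The percentage figures above come from a straightforward ratio of measured peak-period speed to advertised speed. The sketch below reproduces those percentages with sample measurements; the advertised speeds and raw numbers are hypothetical, chosen only to match the quoted ratios.

```python
# Sketch of the percentage-of-advertised calculation behind the figures
# above. Sample numbers are hypothetical, picked to match the quoted ratios.
measured = {
    "DSL":   {"advertised_mbps": 10.0, "actual_peak_mbps": 8.2},
    "Cable": {"advertised_mbps": 10.0, "actual_peak_mbps": 9.3},
    "Fiber": {"advertised_mbps": 10.0, "actual_peak_mbps": 11.4},
}
pct = {tech: round(100.0 * v["actual_peak_mbps"] / v["advertised_mbps"])
       for tech, v in measured.items()}
print(pct)  # {'DSL': 82, 'Cable': 93, 'Fiber': 114}
```

Note that values over 100 percent, as with fiber, simply mean the provider delivers more than it advertises during peak periods.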

Download Peak Period Throughput

As shown in Chart 5, peak period performance varies by service tier among ISPs included in this study. Even during peak periods, the vast majority of service tiers offer performance levels approximately 80 percent or more of advertised speeds. Fiber-to-the-home services typically outperform other service tiers, offering performance levels approximately 115 percent of advertised rates during peak periods. Other ISPs are either close to or exceed advertised rates.

Upload Peak Period Throughput

With the exception of some fiber-to-the-home service offerings, consumer broadband services are typically offered with asymmetric download and upload rates, with the download rate typically many times faster than the upload rate. In general, the ratio of actual to advertised speed for upload performance is slightly superior to the ratio measured for download performance. Fiber-to-the-home services outperform cable and DSL in upload throughput, with many of the current services available on the market operating at symmetric speeds or speeds that are much closer to symmetric than those offered by their DSL and cable counterparts. On average, all technologies and speed bands deliver at least 84 percent of the advertised upload rate. Many cable service tiers exceed 100 percent of the advertised upstream rate. As with the downstream throughput results, fiber-to-the-home services continually deliver over 100 percent of the advertised upload speeds.

While not saying it in so many words, it looks like 4G WiMAX provider Clearwire is chucking that technology moving forward and joining the Long-Term Evolution (LTE) bandwagon in the United States.

The carrier now says it will add “LTE Advanced-ready” technology first to its 4G WiMAX markets following tests that showed download speeds exceeding 120 Mbps. According to the carrier, LTE Advanced is a 4G technical standard that calls for peak download mobile speeds of at least 100 Mbps. As such, Clearwire says its LTE network will be “LTE Advanced-ready, meaning that it will use an ultra-high-capacity spectrum configuration that is superior to the typical configuration of the slower, more capacity-constrained commercial LTE network designs in the United States of today.”

“Clearwire plans to raise the bar again for mobile broadband service in the United States,” says John Stanton, Clearwire’s chairman and interim CEO. “Our leadership in launching 4G services forced a major change in the competitive mobile-data landscape. Now we plan to bring our considerable spectrum portfolio to bear to deliver an LTE network capable of meeting the future demands of the market.”

Adds CTO Dr. John Saw, “This is the future of mobile broadband. Our extensive trial has clearly shown that our ‘LTE Advanced-ready’ network design, which leverages our deep spectrum with wide channels, can achieve far greater speeds and capacity than any other network that exists today. Clearwire is the only carrier with the unencumbered spectrum portfolio required to achieve this level of speed and capacity in the United States.”

He continues, “In addition, the 2.5 GHz spectrum band in which we operate is widely allocated worldwide for 4G deployments, enabling a potentially robust, cost-effective and global ecosystem that could serve billions of devices. We anticipate that the economies of scale derived from this global ecosystem will act as a catalyst for the development of thousands of low-cost devices and applications.”

Saw also took a swipe at beleaguered 4G startup Lightsquared (see related story in this issue) by noting, “And, since we currently support millions of customers in the 2.5 GHz band, we know that our LTE network won’t present harmful interference issues with GPS or other sensitive spectrum bands.”

And all it takes is money. Clearwire’s LTE implementation plan will need more financing, and company officials already are weighing the options, which could include a renewed effort to sell unused spectrum (although this has failed in the past). If it does secure deployment cash, Clearwire could go with Time Division Duplex (TDD) LTE technology, thus reusing its flexible all-IP network architecture and upgrading base station radios and some core network elements. This could save the carrier money as well.

Such an implementation would include using multicarrier, or multichannel, wideband radios that will be carrier-aggregation capable. “Carrier aggregation is a key feature of LTE Advanced that will enable Clearwire to further leverage its vast spectrum depth to create larger ‘fat pipes’ for deploying mobile broadband service,” it explains.

Clearwire is quick to reiterate that it will not leave its current WiMAX customers stranded in light of this new (but not unexpected) technology decision. The carrier’s WiMAX offering currently covers approximately 132 million people while serving 7.65 million retail and wholesale customers across some 110 WiMAX-enabled devices. It anticipates serving approximately 10 million 4G customers by year’s end.

Innovation

M2M – The Abbreviated Form of Machine-to-Machine

This document and blog were created by Jack Brown (a.k.a. ngntechtalk). M2M is a term used to refer to machine-to-machine communication, i.e., automated data exchange between machines. (“Machine” may also refer to […]
