“With this, a computer can actually do psychology, it can predict and potentially control human behaviour. It’s what the Scientologists try to do but much more powerful. It’s how you brainwash someone. It’s incredibly dangerous.

“It’s no exaggeration to say that minds can be changed. Behaviour can be predicted and controlled. I find it incredibly scary. I really do. Because nobody has really followed through on the possible consequences of all this. People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”

– Jonathan Rust, Director, Cambridge University Psychometric Centre.

So metadata about your internet behavior is valuable, and can be used against your interests. This data is abundant, and Target only has a drop in the ocean compared to Facebook, Google and a few others. Thanks to its multi-pronged approach (Search, AdSense, Analytics, DNS, Chrome and Android), Google has detailed numbers on everything you do on the internet, and because it reads all your (sent and received) Gmail it knows most of your private thoughts. Facebook has a similar scope of insight into your mind and motivations.

Ajit Pai, the new FCC chairman, was a commissioner last fall when the privacy regulations that were repealed last week were instituted. He wrote a dissent arguing that applying internet privacy rules only to ISPs (Internet Service Providers, companies like AT&T or Comcast) was not only unfair but ineffective in protecting your online privacy, since the “edge providers” (companies like Google, Facebook, Amazon and Netflix) are regulated by the FTC rather than the FCC, and they are currently far more zealous and successful at destroying your privacy than the ISPs:

“The era of Big Data is here. The volume and extent of personal data that edge providers collect on a daily basis is staggering… Nothing in these rules will stop edge providers from harvesting and monetizing your data, whether it’s the websites you visit or the YouTube videos you watch or the emails you send or the search terms you enter on any of your devices.”

True as this is, it would be naive to expect now-chairman Pai to replace the repealed privacy regulations with something consistent with his concluding sentiment in that dissent:

“After all, as everyone acknowledges, consumers have a uniform expectation of privacy. They shouldn’t have to be network engineers to understand who is collecting their data. And they shouldn’t need law degrees to determine whether their information is protected.”

So it’s not as though your online privacy was formerly protected and now it’s not. It just means the ISPs can now compete with Google and Facebook to sell details of your activity on the internet. There are still regulations in place requiring the data to be aggregated and anonymized, but experiments have shown effective anonymization to be surprisingly difficult.
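Why is anonymization so difficult? Latanya Sweeney famously showed that ZIP code, birth date and sex are enough to uniquely identify most Americans. Here is a toy sketch of the re-identification attack – the data and column names are invented, and pandas is used purely for convenience – joining an “anonymized” browsing log with a public record on those quasi-identifiers:

```python
import pandas as pd

# "Anonymized" browsing log: names removed, quasi-identifiers retained.
browsing = pd.DataFrame({
    "zip":        ["02138", "02139"],
    "birth_date": ["1955-07-31", "1962-01-12"],
    "sex":        ["F", "M"],
    "sites":      [["clinic.example"], ["casino.example"]],
})

# A public dataset (think voter rolls) carrying the same quasi-identifiers.
voters = pd.DataFrame({
    "name":       ["Alice", "Bob"],
    "zip":        ["02138", "02139"],
    "birth_date": ["1955-07-31", "1962-01-12"],
    "sex":        ["F", "M"],
})

# The join re-attaches names to the "anonymous" browsing histories.
print(browsing.merge(voters, on=["zip", "birth_date", "sex"]))
```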

Last week’s legislation was a baby-step towards what the big ISPs would like, which is to ‘own’ all the data that they pipe to you, charging content providers differentially for bandwidth, and filtering and modifying content (for example by inserting or substituting ads in web pages and emails). They are currently forbidden to do this by the FCC’s 2015 net neutrality regulations.

“Without Part 68, users of the public switched network would not have been able to connect their computers and modems to the network, and it is likely that the Internet would have been unable to develop.”

For almost all US consumers internet access service choice is limited to a duopoly (telco and cableco). On the other hand internet content services participate in an open market teeming with competition (albeit with near-monopolies in their domains for Google and Facebook). This is thanks to the net neutrality regulations that bind the ISPs:

No Blocking: broadband providers may not block access to legal content, applications, services, or non-harmful devices.

No Throttling: broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.

No Paid Prioritization: broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration of any kind — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing content and services of their affiliates.

If unregulated, ISPs will be compelled by structural incentives to do all these things and more, as explained by the FCC:

“Broadband providers function as gatekeepers for both their end user customers who access the Internet, and for edge providers attempting to reach the broadband provider’s end-user subscribers. Broadband providers (including mobile broadband providers) have the economic incentives and technical ability to engage in practices that pose a threat to Internet openness by harming other network providers, edge providers, and end users.”

It’s not a simple issue. ISPs must have robust revenues so they can afford to upgrade their networks; but freedom to prioritize, throttle and block isn’t the right solution. Without regulation, internet innovation suffers. Instead of an open market for internet startups, gatekeepers like AT&T and Comcast pick winners and losers.

Net neutrality simply means an open market for internet content. Let’s keep it that way!

Glancing through the FCC proposal, I find it does indeed look consumer friendly, and so it is likely to be fought tooth and nail by the broadband providers. I encourage you to write a supportive note to the FCC – either via the ACLU link above or directly with the FCC.

I got an email from the Heartland Institute today, purporting to give an expert opinion about today’s Net Neutrality ruling. The money quote reads: “The Internet is not broken, it is a vibrant, continually growing market that has thrived due to the lack of regulations that Title II will now infest upon it.”

This is wrong both on Internet history, and on the current state of broadband in the US.

It was the common carriage regulatory requirement on voice lines that first enabled the Internet to explode into the consumer world, by obliging the phone companies to allow consumers to hook up modems to their voice lines. It is the current unregulated environment in the US that has caused our Internet to become, if not broken, at least considerably worse than it is in many other countries:

Videos burn up a lot more bandwidth than written words, per hour of entertainment. The Encyclopedia Britannica is 0.3 GB in size, uncompressed. The movie Despicable Me is 1.2 GB, compressed. Consequently we should not be surprised that most Internet traffic is video traffic:

The main source of the video traffic is Netflix, followed by YouTube:

Internet Service Providers would like to double-dip, charging you for your Internet connection, and also charging Netflix (which already pays a different ISP for its connection) for delivering its content to you. And they do.

To motivate content providers like Netflix to pay extra, ISPs that don’t care about their subscribers could hold them to ransom, using network congestion to make Netflix movies look choppy, blocky and freezy until Netflix coughs up. And they do:

This example illustrates the motivation structure of the industry. Bandwidth demand is continuously growing. The two basic strategies an ISP can use to cope with the growth are either to increase capacity or to ration the existing bandwidth. The Internet core is sufficiently competitive that its capacity grows by leaps and bounds. The last mile to the consumer is far less competitive, so the ISP has little motivation to upgrade its equipment. It can simply prioritize packets from Netflix and whoever else is prepared to pay the toll, and let the rest drop undelivered.
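As a toy illustration of that rationing logic – all names and numbers here are invented – the gatekeeper’s choice fits in a few lines of Python: when demand exceeds the last-mile capacity, packets from senders who paid the toll go first and the rest are simply dropped:

```python
CAPACITY_PER_TICK = 3  # the congested last mile forwards 3 packets per tick

def forward(queue):
    # Paying senders are prioritized; whatever doesn't fit is dropped.
    ranked = sorted(queue, key=lambda pkt: pkt["paid"], reverse=True)
    return ranked[:CAPACITY_PER_TICK], ranked[CAPACITY_PER_TICK:]

queue = [
    {"src": "netflix", "paid": True},  {"src": "startup", "paid": False},
    {"src": "netflix", "paid": True},  {"src": "startup", "paid": False},
    {"src": "netflix", "paid": True},
]
delivered, dropped = forward(queue)
print([p["src"] for p in delivered])  # ['netflix', 'netflix', 'netflix']
print([p["src"] for p in dropped])    # ['startup', 'startup']
```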

One might expect customers to complain if this was happening in a widespread way. And they do:

Free market competition might be a better answer to this particular issue than regulation, except that this problem isn’t really amenable to competition; you need a physical connection (fiber ideally) for the next generation of awesome immersive Internet. Running a network pipe to the home is expensive, like running a gas pipe, or a water pipe, or a sewer, or an electricity supply cable, or a road; so like all of those instances, it is a natural monopoly. Natural monopolies work best when strongly regulated, and the proposed FCC Title II action on Net Neutrality is a good start.

Digital Rights Management

Unrelated but easily confused with Net Neutrality is the issue of copyright protection. The Stop Online Piracy Act, or SOPA, was defeated by popular outcry for being too expansive. The remedies proposed by SOPA were to take down websites hosting illegal content, and to oblige ISPs to block illegal content from their networks.

You might have noticed in the first graphic above that about 3% of what consumers consume (“Downstream”) online is “filesharing,” a.k.a. music and video piracy. It is pretty much incontrovertible that the Internet has devastated the music business. One might debate whether it was piracy or iTunes that did it in, but either way the fact of Internet piracy gave Steve Jobs a lot of leverage in his negotiations with the music industry. What’s to prevent a similar disembowelment of the movie industry, when a consumer in Dallas can watch a movie like “Annie” for free in his home theater before it has even been released?

The studio that distributes the movie would like to make sure you pay for seeing it, and don’t get a pirated copy. I think so too. This is a perfectly reasonable position to take, and if the studio was also your ISP, it might feel justified in blocking suspicious content. In the US it is not unusual for the studio to be your ISP (for example if your ISP is Comcast and the movie is Despicable Me). In a non-net-neutral world an ISP could block content unilaterally. But Net Neutrality says that an ISP can’t discriminate between packets based on content or origin. So in a net-neutral world, an ISP would be obliged to deliver pirated content, even when one of its own corporate divisions was getting ripped off.

This dilemma is analogous to free speech. The civilized world recognizes that in order to be free ourselves, we have to put up with some repulsive speech from other people. The alternative is censorship: empowering some bureaucrat to silence people who say unacceptable things. Enlightened states don’t like to go there, because they don’t trust anybody to define what’s acceptable. Similarly, it would be tough to empower ISPs to suppress content in a non-arbitrary but still timely way, especially when the content is encrypted and the source is obfuscated. Opposing Net Neutrality on the grounds of copyright protection is using the wrong tool for the job. It would be much better to find an alternative solution to piracy.

My thoughts on network neutrality can be found here and some predictions contingent on its loss here, so obviously I am disheartened by this latest ruling. The top Google hit on this news is currently a good story at GigaOm, and further down Google’s hit list is a thoughtful article in Forbes, predicting this result, but coming to the wrong conclusion.

I am habitually skeptical of “slippery slope” arguments, where we are supposed to fear something that might happen, but hasn’t yet. So I sympathize with pro-ISP sentiments like that Forbes article in this regard. On the other hand, I view businesses as tending to be rational actors, maximizing their profits under the rules of the game. If the rules of the game incent the ISPs to move in a particular direction, they will tend to move in that direction. Because competition is so limited among broadband ISPs (for any home in America there are rarely more than two options, regardless of the actual number of ISPs in the nation), they are currently incented to ration their bandwidth rather than to invest in increasing it. This decision is a push in that same direction.

Arguably the Internet was born of Federal action that forced a corporation to do something it didn’t want to do: without the Carterfone decision, there would have been no modems in the US. Without modems, the Internet would never have gotten off the ground.

Arguments that government regulation could stifle the Internet miss the point that all business activity in the US is done under government rules of various kinds: without those rules competitive market capitalism could not work. So the debate is not over whether the government should ‘interfere,’ but over what kinds of interference the government should do, and with what motivations. I take the liberal view that a primary role of government is to protect citizens from exploitation by predators. I am an enthusiastic advocate of competitive-market capitalism too, where it can exist. The structure of capitalism pushes corporations to charge as much as possible and provide as little as possible for the money (‘maximize profit’). In a competitive market, the counter-force to this is competition: customers can get better, cheaper service elsewhere, or forgo service without harm. But because of the local lack of competition, broadband in the US is not a competitive market, so there is no counter-force. And since few would argue that you can live effectively in today’s US without access to the Internet, you can’t forgo service without harm.

Accelerometers were exotic in smartphones when the first iPhone came out – used mainly for sensing the orientation of the phone for displaying portrait or landscape mode. Then came the idea of using them for dead-reckoning-assist in location-sensing. iPhones have always had accelerometers; since all current smartphones are basically copies of the original iPhone, it is actually odd that some smartphones lack accelerometers.

Predictably, when supplied with a hardware feature, the app developer community came up with a ton of creative uses for the accelerometer: magic tricks, pedometers, air-mice, and even user authentication based on waving the phone around.

Not all sensor technologies are so fertile. For example the proximity sensor is still pretty much only used to dim the screen and disable the touch sensing when you hold the phone to your ear or put it in your pocket.

So what about the user-facing camera? Is it a one-trick pony like the proximity sensor, or a springboard to innovation like the accelerometer? Although videophoning has been a perennial bust, I would argue for the latter: the you-facing camera is pregnant with possibilities as a sensor.

Looking at the Aberdeen report, I was curious to see “gesture recognition” on a list of features that will appear on 60% of phones by 2018. The others on the list are hardware features, but once you have a camera, gesture recognition is just a matter of software. (The Kinect is a sidetrack to this, provoked by lack of compute power.)

In a phone, compute power means battery-drain, so that’s a limitation on using the camera as a sensor. But each generation of chips becomes more power-efficient as well as more powerful, and as phone makers add more and more GPU cores, the developer community finds useful new ways to max them out.

Gesture recognition is already here on Samsung devices, and soon on every Android device. The industry is gearing up for innovation in phone-based computer vision with OpenVX from Khronos. When always-on computer vision becomes feasible from a power-drain point of view, gesture recognition and face tracking will look like baby-steps. Smart developers will come up with killer applications that are currently unimaginable. For example, how about a library implementation of Paul Ekman’s emotion recognition algorithms to let you know how you are really feeling right now? Or, in concert with Google Glass, so you will never again be oblivious to your spouse’s emotional temperature.
Update November 19th: Here’s some news and a little bit of technology background on this topic…
Update November 22: It looks like a company is already engaged on the emotion-recognition technology.

To get up to speed on the challenges and opportunities of WebRTC, and the future of real-time voice/video communications, Wirevolution’s Charlie Gold will be attending the WebRTC World Conference next month and writing about it here. The conference runs Nov 19-21 at the Convention Center in Santa Clara, CA.

At the conference we hope to make sense of the wide range of applications integrating WebRTC, and to relate them to integration opportunities for service providers. The applications range from standard video-enabled apps such as webcasting and security, to enterprise applications such as CRM and contact centers, to emerging opportunities such as virtual reality and gaming. Service providers can combine WebRTC with IMS and RCS, and can use it to manage network capabilities and end users’ quality of experience.

The conference is organized around four tracks: developer workshops, B2C, enterprise, and service providers.

The developer workshop topics include concepts and structures of WebRTC, implementation and build options, standardization efforts, signaling options, applications to the internet of things (IoT), codec evolution, monitoring and alarms, firewall traversal with STUN/TURN/ICE, and large-scale simultaneous delivery in applications such as webcasting, gaming, virtual reality (VR), and security.

Business and consumer applications sessions cover successful deployment strategies of use cases like collaboration and conferencing, call centers and the Internet of Things (IoT). Other sessions on this track cover security, device requirements and regulatory issues.

Service provider workshops include IMS value in a world of WebRTC, how to use WebRTC, deployment strategies, how to extend existing services and offer new services using WebRTC, using WebRTC to acquire new users, and understanding the network impact of WebRTC.

The enterprise track has additional sessions on integrating WebRTC into your contact center and websites (public, supplier, internal). These sessions cover details like mapping out your integration strategy between WebRTC and SIP, using a media server vs. direct media interoperation; and how to deploy a WebRTC portal.

In Wi-Fi, multipath used to mean the way that signals traversing different spatial paths arrive at different times. Before MIMO this was considered an impairment to the signal, but with multiple antennas this ‘spatial diversity’ is exploited to deliver the huge speed increases of 802.11n over 802.11g.

An interesting graphic from ABI research projects ongoing rapid growth in Wi-Fi in phones out to 2017, when the penetration will be approaching 100%. This doesn’t seem to be unrealistically fast to me, since the speed at which feature phones have been displaced by low-cost smartphones implies that by that time practically all phones will be smartphones, and since carriers are increasingly attracted to Wi-Fi for data offload from their cellular networks.

Enterprise mobility is one of the fastest growing areas of business, allowing companies to virtually connect with customers and employees from anyplace in the world. CIOs are facing more decisions than ever when it comes to managing their mobile workforce. Employees expect to be able to do their work on multiple platforms, from desktops and laptops to tablets and smartphones.

This session will dive into the various components of an enterprise mobility solution, provide best practices to ensure they are successful and explain how they integrate together to enable companies to grow their business. Topics will include: mobile enterprise application platforms, enterprise app stores, mobile device management, expense management, and analytics.

BYOD (Bring Your Own Device) has been in full swing for a couple of years now, and there’s no going back. Enterprises have adopted a policy of allowing users to use their own devices to access corporate networks and resources. With it comes the cost savings of not having to purchase as many mobile devices, and user satisfaction increases when they are able to choose their preferred devices and providers (and avoid having to carry multiple devices). But the benefits don’t come without challenges — the user experience must be preserved, security policies must accommodate these multiple devices and operating systems, and IT has to contend with managing applications and access across different platforms. This session looks at what businesses can do to mitigate risks and ensure performance while still giving their users the device freedom they demand.

Supported by Session Border Controllers (SBCs) and Unified Communications (UC), enterprises can enable workers to essentially carry their desk phone extensions and features with them, wherever they are working on any given day – via VoIP clients and other UC applications on smartphones, tablets, and other mobile devices. With rich UC application features such as call transfer, conference calling, corporate directory listings, and presence, workers can collaborate and communicate in real-time, increasing productivity by maintaining an always-on presence.

Enterprises that have embraced SBCs, and other components of UC security, are proving they can securely protect and extend communications to external parties, unlocking new ways of collaborating with clients, partners, distributed employees and the supply chain. This session will consider the Enterprise SBC as a means of satisfying security and privacy requirements, with signaling and traffic encryption, media and signaling forking, network demarcation, and threat detection and mitigation, enabling enterprises to capture the cost benefits of VoIP and UC, while maintaining essential security postures and access to multi-mobile communications across the network, anytime, anywhere.

802.11n claims a maximum raw data rate of 600 megabits per second. 802.11ac and WiGig each claim over ten times this – about 7 gigabits per second. So why do we need them both? The answer is that they have different strengths and weaknesses; to some extent they can fill in for each other, and while they are useful for some of the same things, they each have use cases for which they are superior.

The primary difference between them is the spectrum they use. 802.11ac uses 5 GHz, WiGig uses 60 GHz. There are four main consequences of this difference in spectrum:

Available bandwidth

Propagation distance

Crowding

Antenna size

Available bandwidth:

There are no free lunches in physics. The maximum bit-rate of a wireless channel is limited by its bandwidth (i.e. the amount of spectrum allocated to it – 40 MHz in the case of old-style 802.11n Wi-Fi), and the signal-to-noise ratio. This limit is given by the Shannon-Hartley Theorem. So if you want to increase the speed of Wi-Fi you either have to allocate more spectrum, or improve the signal-to-noise ratio.
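To put numbers on it, here is the theorem in a few lines of Python, computing the capacity ceiling for the channel widths discussed in this post; the 25 dB signal-to-noise ratio is my own illustrative assumption:

```python
import math

def shannon_capacity_mbps(bandwidth_hz, snr_db):
    """Shannon-Hartley: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10)) / 1e6

for label, bw in [("802.11n, 40 MHz", 40e6),
                  ("802.11ac, 160 MHz", 160e6),
                  ("WiGig, 2.16 GHz", 2.16e9)]:
    print(f"{label}: {shannon_capacity_mbps(bw, 25):,.0f} Mbps ceiling")
```

Width wins: at a fixed signal-to-noise ratio the ceiling scales linearly with bandwidth, which is why the huge channels at 60 GHz matter so much.
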
The amount of spectrum available to Wi-Fi is regulated by the FCC. At 5 GHz, Wi-Fi can use 0.55 GHz of spectrum. At 60 GHz, Wi-Fi can use 7 GHz of spectrum – over ten times as much. 802.11ac divides its spectrum into five 80 MHz channels, which can be optionally bonded into two and a half 160 MHz channels, as shown in this FCC graphic:

“Worldwide, the 60 GHz band has much more spectrum available than the 2.4 GHz and 5 GHz bands – typically 7 GHz of spectrum, compared with 83.5 MHz in the 2.4 GHz band.
This spectrum is divided into multiple channels, as in the 2.4 GHz and 5 GHz bands. Because the 60 GHz band has much more spectrum available, the channels are much wider, enabling multi-gigabit data rates. The WiGig specification defines four channels, each 2.16 GHz wide – 50 times wider than the channels available in 802.11n.”

So for the maximum claimed data rates, 802.11ac uses channels 160 MHz wide, while WiGig uses 2,160 MHz per channel. That’s almost fourteen times as much, which makes life a lot easier for the engineers.

Propagation Distance:

60 GHz radio waves are absorbed by oxygen, but for LAN-scale distances this is not a significant factor. On the other hand, wood, bricks and particularly paint are far more opaque to 60 GHz waves:

Crowding:

5 GHz spectrum is also used by weather radar (Terminal Doppler Weather Radar or TDWR), and the FCC recently ruled that part of the 5 GHz spectrum is now completely prohibited to Wi-Fi. These channels are marked “X” in the graphic above. All the channels in the graphic designated “U-NII 2” and “U-NII 3” are also subject to a requirement called “Dynamic Frequency Selection” or DFS, which says that if radar activity is detected on a channel the Wi-Fi device must stop using it.

5 GHz spectrum is not nearly as crowded as the 2.4 GHz spectrum used by most Wi-Fi, but it’s still crowded and cramped compared to the wide open vistas of 60 GHz. Even better, the poor propagation of 60 GHz waves means that even nearby transmitters are unlikely to cause interference. And with beam-forming (discussed in the next section), even transmitters in the same room cause less interference. So WiGig wins on crowding in multiple ways.

Antenna size:

Antenna size and spacing are proportional to the wavelength of the spectrum being used. This means that 5 GHz antennas would be about an inch long and spaced about an inch apart. At 60 GHz, the antenna size and spacing would be about a tenth of this! So for handsets, multiple antennas are trivial to do at 60 GHz, more challenging at 5 GHz.
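A quick back-of-the-envelope check, using the half-wavelength rule of thumb for dipole element size and spacing:

```python
C = 3e8  # speed of light, m/s

for f_ghz in (5, 60):
    wavelength_cm = C / (f_ghz * 1e9) * 100
    print(f"{f_ghz} GHz: wavelength {wavelength_cm:.1f} cm, "
          f"half-wave element ~{wavelength_cm / 2:.2f} cm")
# 5 GHz: ~3 cm elements (about an inch); 60 GHz: ~0.25 cm (a tenth of that)
```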

What’s so great about having multiple antennas? I mentioned earlier that there are no free lunches in physics, and that maximum bit-rate depends on channel bandwidth and signal to noise ratio. That’s how it used to be. Then in the mid-1990s engineers discovered (invented?) an incredible, wonderful free lunch: MIMO. Two adjacent antennas transmitting different signals on the same frequency normally interfere with each other. But the MIMO discovery was that if in this situation you also have two antennas on the receiver, and if there are enough things in the vicinity for the radio waves to bounce off (like walls, floors, ceilings, furniture and so on), then you can take the jumbled-up signals at the receiver, and with enough mathematical computer horsepower you can disentangle the signals completely, as if you had sent them on two different channels. And there’s no need to stop at two antennas. With four antennas on the transmitter and four on the receiver, you get four times the throughput. With eight, eight times. Multiple antennas receiving, multiple antennas sending. Multiple In, Multiple Out: MIMO. This kind of MIMO is called Spatial Multiplexing, and it is used in 802.11n and 802.11ac.
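Here is a toy demonstration of that free lunch, using the simplest possible equalizer (zero-forcing). Real receivers estimate the channel matrix from training symbols and use more robust math, but the principle is the same: the bounces make the channel matrix invertible, so the jumble can be undone:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1.0, -1.0])              # two different symbols, same frequency, same time
H = rng.normal(size=(2, 2))            # 2x2 channel: each antenna pair bounces differently
y = H @ x + 0.01 * rng.normal(size=2)  # what the two receive antennas hear: a jumble

x_hat = np.linalg.solve(H, y)          # the "mathematical horsepower": invert the channel
print(np.round(x_hat, 2))              # ~[ 1. -1.] -- both streams recovered
```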

Another way multiple antennas can be used is “beam-forming.” This is where the same signal is sent from each antenna in an array, but at slightly different times. This causes interference patterns between the signals, which (with a lot of computation) can be arranged in such a way that the signals reinforce each other in a particular direction. This is great for WiGig, because it can easily have as many as 16 of its tiny antennas in a single array, even in a phone, and that many antennas can focus the beam tightly. Even better, shorter wavelengths tend to stay focused. So most of the transmission power can be aimed directly at the receiving device. So for a given power budget the signal can travel a lot further, or for a given distance the transmission power can be greatly reduced.
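A sketch of the effect for a hypothetical 16-element line of antennas at half-wavelength spacing: choosing per-element phase delays so the signals add up at 30 degrees makes the array gain collapse at other angles:

```python
import numpy as np

n, spacing = 16, 0.5                   # elements; spacing in wavelengths
target = np.deg2rad(30)                # aim the beam here

d = np.arange(n) * spacing
weights = np.exp(-2j * np.pi * d * np.sin(target))  # per-element phase delays

for angle_deg in (0, 15, 30, 45):
    steering = np.exp(2j * np.pi * d * np.sin(np.deg2rad(angle_deg)))
    print(f"{angle_deg:2d} deg: relative gain {abs(weights @ steering) / n:.2f}")
# Full gain at 30 degrees, near zero elsewhere.
```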

You know from a previous post how 802.11n gets to 600 megabits per second. 802.11ac does just three things to increase that by 1,056%:

It adds a new Modulation and Coding Scheme (MCS) called 256-QAM. This increases the number of bits transmitted per symbol from 6 to 8, a factor of 1.33.

It increases the maximum channel width from 40 MHz to 160 MHz (160 MHz is optional, but 80 MHz support is mandatory.) This increases the number of subcarriers from 108 to 468, a factor of 4.33.

It increases the maximum MIMO configuration from 4×4 to 8×8, increasing the number of spatial streams by a factor of 2. Multi-User MIMO (MU-MIMO) with beamforming means that these spatial streams can be directed to particular clients, so while the AP may have 8 antennas, the clients can have fewer – for example 8 clients each with one antenna.

Put those factors together and you have 1.33 x 4.33 x 2 = 11.56. Multiply the 600 megabits per second of 802.11n by that factor and you get 600 x 11.56 = 6,933 megabits per second for 802.11ac.
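The same arithmetic in code, for the skeptical:

```python
mcs      = 8 / 6      # 256-QAM vs 64-QAM: bits per subcarrier per symbol
carriers = 468 / 108  # 160 MHz vs 40 MHz: data subcarriers
streams  = 8 / 4      # 8x8 vs 4x4: spatial streams

factor = mcs * carriers * streams
print(f"{factor:.2f}x -> {600 * factor:,.0f} Mbps")  # 11.56x -> ~6,933 Mbps
```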

Note that nobody does this yet, and 160 MHz channels and 8×8 MIMO are likely to remain unimplemented for a long time. For example Broadcom’s recently announced BCM4360 and Qualcomm’s QCA9860 do 80 MHz channels, not 160 MHz, and 3 x 3 MIMO, so they claim maximum raw bit-rates of 1.3 gigabits per second. Which is still impressive.

Maximum theoretical raw bit-rate is a fun number to talk about, but of course in the real world it will (almost) never happen. What’s more important is the useful throughput (raw bit-rate minus MAC overhead) and rate at range – the throughput you are likely to get at useful distances. Delivering throughput at range is very difficult, and it is where the manufacturers can differentiate with superior technology. For phone chips, power efficiency is also an important differentiator.

These chips are based on ARM’s big.LITTLE architecture. big.LITTLE chips aren’t just multi-core, they contain cores that are two different implementations of the same instruction set: a Cortex A7 and one or more Cortex A15s. The Cortex A7 has an identical instruction set to the A15, but is slower and more power efficient – ARM says it is the most power-efficient processor it has ever developed. The idea is that phones will get great battery life by mainly running on the slow, power-efficient Cortex A7, and great performance by using the A15 on the hopefully rare occasions when they need its muscle. Rare in this context is relative. Power management on modern phones involves powering up and powering down subsystems in microseconds, so a ‘rarely’ used core could still be activated several times in a single second.

The Cortex A15 and the Cortex A7 are innovative in another way, too: they are the first cores to implement the Virtualization Extensions of the ARMv7-A architecture – ARM’s first hardware support for virtualization.

Even without hardware support, virtualization on handsets has been around for a while; phone OEMs use it to make cheaper smartphones by running Android on the same CPU that runs the cellular baseband stack. ARM says:

Virtualization in the mobile and embedded space can enable hardware to run with less memory and fewer chips, reducing BOM costs and further increasing energy efficiency.

This application, running Android on the same core as the baseband, does not seem to have taken the market by storm. I presume because of performance. Even the advent of hardware support for virtualization may not rescue this application, since mobile chip manufacturers now scale performance by adding cores, and Moore’s law is rendering multicore chips cheap enough to put into mass-market smartphones.

So what about other applications? The ARM piece quoted above goes on to say:

Virtualization also helps to address safety and security challenges, and reduces software development and porting costs by man years.

In 2010 Red Bend Software, a company that specializes in manageability software for mobile phones, bought VirtualLogix, one of the three leading providers of virtualization software for phones (the other two being Trango, bought by VMware in 2008, and OK Labs).

In view of Red Bend’s market, it looks as if they acquired VirtualLogix primarily to enable enterprise IT departments to securely manage their employees’ phones. BYOD (Bring Your Own Device) is a nightmare for IT departments; historically they have kept chaos at bay by supporting only a limited number of devices and software setups. But in the era of BYOD employees demand to use a vast and ever-changing variety of devices. Virtualization enables Red Bend to add a standard corporate software load to any phone.

This way, a single phone has a split personality, and the hardware virtualization support keeps the two personalities securely insulated from each other. On the consumer side, the user downloads apps, browses websites and generally engages in risky behavior. But none of this impacts the enterprise side of the phone, which remains secure.

The FCC has proposed a date of 2018 to sunset the Public Switched Telephone Network (PSTN) and move the nation to an all-IP network for voice services. This session will explore the emerging trends in the Telco Cloud with case studies. Learn how traditional telephone companies are adapting to compete, and new opportunities for service providers, including leveraging cloud computing and Infrastructure as a Service (IaaS) systems that are being deployed with scalable commodity hardware to deliver voice and video services including IVR, IVVR, conferencing plus Video on Demand and local CDNs.

In related news, a group of industry experts is collaborating on a plan for this transition. The draft can be found here. I volunteered as the editor for one of the chapters, so the current outline roughs out some of my opinions on this topic. This is a collaborative project, so please contact me if you can help to write it.

As 4G mobile networks continue to be rolled out and new devices are adopted by end users, mobile video conferencing is becoming an increasingly important component in today’s Unified Communications ecosystem. The ability to deliver enterprise-grade video conferencing including high definition voice, video and data-sharing will be critical for those playing in this space. Mobile video solutions require vendors to consider a number of issues including interoperability with new and traditional communications platforms as well as mobile operating systems, user interfaces that maximize the experience, and the ability to interoperate with carrier networks. This session will explore the business-class mobile video platforms available in the market today as well as highlight some end-user experiences with these technologies.

The concept of visuals with voice is a compelling one, and there are numerous kinds of visual content that you may want to convey. For example, when you do a video call with FaceTime or Skype, you can switch the camera to show what you are looking at if you wish, but you can’t share your screen or photos during a call.

FaceTime, Skype and Google Talk all use the data connection for both the voice and video streams, and the streams travel over the Internet.

A different, non-IP technology for videophone service, called 3G-324M, is widely used by carriers in Europe and Asia. It carries the video over the circuit-switched channel, which enables better quality (lower latency) than the data channel. An interesting application of this lets companies put their IVR menus into a visual format, so instead of having to listen through a tedious listing of options that you don’t want, you can instantly select your choice from an on-screen menu. Dialogic makes back-end equipment that makes applications like on-screen IVR possible on 3G-324M networks.

Radish Systems uses a different method to provide a similar visual IVR capability for when your carrier doesn’t support 3G-324M (none of the US carriers do). The Radish application is called Choiceview. When you make a call from your iPhone to a Choiceview-enabled IVR, you dial the call the regular way, then start the Choiceview app on your iPhone. The Choiceview IVR matches the Caller ID on the call with the phone number that you typed in during app setup, and pushes a menu to the appropriate client. So the call goes over the old circuit-switched network, while Choiceview communicates over the data network. Choiceview is strictly a client-server application. A Choiceview server can push any data to a phone, but the phone can’t send data the other way, nor can two phones exchange data via Choiceview.

So this ITExpo session will try to make sense of this mix: multiple technologies, multiple geographies and multiple use cases for visual data exchange during phone calls.

First impression is very good. The industrial design on this makes the iPhone look clunky. The screen is much bigger, the overall feel reeks of quality, just like the iPhone. The haptic feedback felt slightly odd at first, but I think I will like it when I get used to it.

I was disappointed when the phone failed to detect my 5 GHz Wi-Fi network. This is like the iPhone, but the Samsung Galaxy S2 and Galaxy Nexus support 5 GHz, and I had assumed parity for the Razr.

Oddly, bearing in mind its dual core processor, the Droid Razr sometimes seems sluggish compared to the iPhone 4. But the Android user interface is polished and usable, and it has a significant user interface feature that the iPhone sorely lacks: a universal ‘back’ button. The ‘back’ button, like the ‘undo’ feature in productivity apps, fits with the way people work and learn: try something, and if that doesn’t work, try something else.

The Razr camera is currently unusable for me. The first photo I took had a 4 second shutter lag. On investigation, I found that if you hold the phone still, pointed at a static scene, it takes a couple of seconds to auto-focus. If you wait patiently for this to happen, watching the screen and waiting for the focus to sharpen, then press the shutter button, there is almost no shutter lag. But if you try to ‘point and shoot’ the shutter lag can be agonizingly long – certainly long enough for a kid to dodge out of the frame. This may be fixable in software, and if so, I hope Motorola gets the fix out fast.

While playing with the phone, I found it got warm. Not uncomfortably hot, but warm enough to worry about the battery draining too fast. Investigating this, I found a wonderful power analysis display, showing which parts of the phone are consuming the most power. The display, not surprisingly, was consuming the most – 35%. But the second most, 24%, was being used by ‘Android OS’ and ‘Android System.’ As the battery expired, the phone kindly suggested that it could automatically shut things off for me when the power got low, like social network updates and GPS. It told me that this could double my battery life. Even so, battery life does not seem to be a strength of the Droid Razr. Over a few days, I observed that even when the phone was completely unused, the battery got down to 20% in 14 hours, and the vast majority of the power was spent on ‘Android OS.’

So nice as the Droid Razr is, on balance I still prefer the iPhone.

P.S. I had a nightmare activation experience – I bought the phone at Best Buy and supposedly due to a failure to communicate between the servers at Best Buy and Verizon, the phone didn’t activate on the Verizon network. After 8 hours of non-activation including an hour on the phone with Verizon customer support (30 minutes of which was the two of us waiting for Best Buy to answer their phone), I went to a local Verizon store which speedily activated the phone with a new SIM.

Deciding on the contract, I was re-stunned to rediscover that Verizon charges $20 per month for SMS. I gave this a miss since I can just use Google Voice, which costs $480 less over the life of the contract.

The iPhone 4S even has some worse specifications than the iPhone 4. It is 3 grams heavier and its standby battery life is 30% less. The screen is no larger – it remains smaller than the standard set by the competition. On the other hand the user experience is improved in several ways: the phone is more responsive thanks to a faster processor; it takes better photographs; and Apple has taken yet another whack at the so-far intractable problem of usable voice control. A great benefit to Apple, though not so much to its users, is that the new Qualcomm baseband chip works for all carriers worldwide, so Apple no longer needs different innards for AT&T and Verizon (though Verizon was presumably disappointed that Apple didn’t add a chip for LTE support).

Since its revolutionary debut, the history of the iPhone has been one of evolutionary improvements, and the improvements of the iPhone 4S over the iPhone 4 are in proportion to the improvements in each of the previous generations. The 4S seems to be about consolidation, creating a phone that will work on more networks around the world, and that will remain reliably manufacturable in vast volumes. It’s a risk-averse, revenue-hungry version, as is appropriate for an incumbent leader.

The technical improvements in the iPhone 4S would have been underwhelming if it had been called the iPhone 5, but for a half-generation they are adequate. By mid-2012 several technologies will have ripened sufficiently to make a big jump.

We have been trying to come up with a solution to expand the capabilities of the iPhone so developers can write great apps for it, but keep the iPhone secure. And we’ve come up with a very. Sweet. Solution. Let me tell you about it. An innovative new way to create applications for mobile devices… it’s all based on the fact that we have the full Safari engine in the iPhone. And so you can write amazing Web 2.0 and AJAX apps that look and behave exactly like apps on the iPhone, and these apps can integrate perfectly with iPhone services. They can make a call, check email, look up a location on Gmaps… don’t worry about distribution, just put ‘em on an internet server. They’re easy to update, just update it on your server. They’re secure, and they run securely sandboxed on the iPhone. And guess what, there’s no SDK you need! You’ve got everything you need if you can write modern web apps…

But the platform and the developer community weren’t ready for it, so Apple was quickly forced to come up with an SDK for native apps, and the app store was born.

So it seems that Apple was four years early on its iPhone developer solution, and that in bowing to public pressure in 2007 to deliver an SDK, it made a ton of money that it otherwise wouldn’t have:

A web service which mirrors or enhances the experience of a downloaded app significantly weakens the control that a platform company like Apple has over its user base. This has already been seen in examples like the Financial Times newspaper’s HTML5 app, which has outsold its former iOS native app, with no revenue cut going to Apple.

Service providers can offer any product they wish. But consumers have certain expectations when a product is described as ‘Internet Service.’ So net neutrality regulations are similar to truth-in-advertising rules. The primary expectation that users have of an Internet Service Provider (ISP) is that it will deliver IP datagrams (packets) without snooping inside them, and without slowing them down, dropping them, or charging more for them based on what they contain.

“At a presentation to investors in London on May 10, analysts questioned where KPN had obtained the rapid adoption figures for WhatsApp. A midlevel KPN executive explained that the operator had deployed analytical software which uses a technology called deep packet inspection to scrutinize the communication habits of individual users. The disclosure, widely reported in the Dutch news media, set off an uproar that fueled the legislative drive, which in less than two months culminated in lawmakers adopting the Continent’s first net neutrality measures with real teeth.” – New York Times

Taking the analogy with the postal service a little further: the postal service charges by volume. The ISP industry behaves similarly, with tiered rates depending on bandwidth. Net neutrality advocates don’t object to this.

The postal service also charges by quality of service, like delivery within a certain time, and guaranteed delivery. ISPs don’t offer this service to consumers, though it is one that subscribers would probably pay for if applied voluntarily and transparently. For example, if I subscribed to 10 megabits per second of Internet connectivity, I might be willing to pay a premium for a guaranteed maximum delay on UDP packets. The ISP could then add value for me by prioritizing UDP packets over TCP when my bandwidth demand exceeded 10 megabits per second. Is looking at the protocol header snooping inside the packets? Kind of, because the TCP or UDP header is inside the IP packet; but on the other hand, it might be like looking at a piece of mail to see if it is marked Priority or bulk rate.
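To make concrete how shallow this kind of inspection is: in IPv4 the transport protocol is a single byte at a fixed offset in the packet header, so classifying TCP versus UDP never touches the payload. A minimal sketch:

```python
def transport_protocol(ip_packet: bytes) -> str:
    # The protocol field is byte 9 of the IPv4 header: 6 = TCP, 17 = UDP.
    return {6: "TCP", 17: "UDP"}.get(ip_packet[9], "other")

# A minimal 20-byte IPv4 header with protocol = 17 (UDP); other fields zeroed.
header = bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 17]) + bytes(10)
print(transport_protocol(header))  # UDP
```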

A subscriber may even be interested in paying an ISP for services based on deep packet inspection. In a recent conversation, an executive at a major wireless carrier likened net neutrality to pollution. I am not sure what he meant by this, but he may have been thinking of spam-like traffic that nobody wants, but that neutrality regulations might force a service provider to carry. I use Gmail as my email service, and I am grateful for the Gmail spam filter, which works quite well. If a service provider were to use deep packet inspection to implement malicious-site blocking (like phishing site blocking or unintentional download blocking) or parental controls, I would consider this a service worth paying for, since the PC-based capabilities in this category are too easily circumvented by inexperienced users.

Notice that all these suggestions are for voluntary services. When a company imposes a product on a customer who prefers an alternative, the customer is justifiably irked.

What provoked KPN to start blocking WhatsApp was that KPN subscribers were abandoning KPN’s SMS service in favor of WhatsApp. This caused a revenue drop. Similarly, as VoIP services like Skype grow, voice revenues for service providers will drop, and service providers will be motivated to block or impair the performance of those competing services.

The dumb-pipe nature of IP has enabled the explosion of innovation in services and products that we see on the Internet. Unfortunately for the big telcos and cable companies, many of these innovations disrupt their other service offerings. Internet technology enables third parties to compete with legacy cash cows like voice, SMS and TV. The ISP’s rational response is to do whatever is in its power to protect those cash cows. Without network neutrality regulations, the ISPs are duty-bound to their investors to protect the profitability of their other product lines by blocking the competitors on their Internet service, just as KPN did. Net neutrality regulation is designed to prevent such anti-competitive behavior. A neutral net obliges ISPs to allow competition on their access links.

So which is the free-market approach? Allowing network owners to do whatever they want on their networks and block any traffic they don’t like, or ensuring that the Internet is a level playing field where entities with the power to block third parties are prevented from doing so? The former is the free market of commerce, the latter is the free market of ideas. In this case they are in opposition to each other.

I have some deep seated opinions about user interfaces and usability. It normally only takes me a few seconds to get irritated by a new application or device, since they almost always contravene one or more of my fundamental precepts of usability. So when I see a product that gets it righter than I could have done myself, I have to say it warms my heart.

I just noticed a few minutes ago, using Chrome, that the tabs behave in a better way than on any other browser that I have checked (Safari, Firefox, IE8). If you have a lot of tabs open, and you click on an X to close one of them, the tabs rearrange themselves so that the X of the next tab is right under the mouse, ready to get clicked to close that one too. Then after closing all the tabs that you are no longer interested in, when you click on a remaining one, the tabs rearrange themselves back to the right size. This is a very subtle user interface feature. Chrome has another that is a monster, not subtle at all, and so nice that only stubborn sour grapes (or maybe patents) stop the others from emulating it. That is the single input field for URLs and searches. I’m going to talk about how that fits with my ideas about user interface design in just a moment, but first let’s go back to the tab sizing on closing with the mouse.

I like this feature because it took a programmer some effort to get it right, yet it only saves a user a fraction of a second each time it is used, and only some users close tabs with the mouse (I normally use Cmd-W), and only some users open large numbers of tabs simultaneously. So why did the programmer take the trouble? There are at least two good reasons: first, let’s suppose that 100 million people use the Chrome browser, and that they each use the mouse to close 12 tabs a day, and that in 3 of these closings, this feature saved the user from moving the mouse, and the time saved for each of these three mouse movements was a third of a second. The aggregate time saved per day across 100 million users is 100 million seconds. At 2,000 working hours per year, that’s more than 10 work-years saved per day. The altruistic programmer sacrificed an hour or a day or whatever of his valuable time, to give the world far more. But does anybody apart from me notice? As I have remarked before, at some level the answer is yes.
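The back-of-the-envelope arithmetic, spelled out (all the inputs are the guesses above):

```python
users = 100e6           # Chrome users
saves_per_day = 3       # tab closings per user per day where the feature helps
seconds_saved = 1 / 3   # per avoided mouse movement

total_seconds = users * saves_per_day * seconds_saved   # 100 million seconds/day
work_years_per_day = total_seconds / 3600 / 2000        # 2,000 work-hours per year
print(f"{total_seconds:,.0f} s/day = {work_years_per_day:.1f} work-years/day")  # ~13.9
```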

The second reason it was a good idea for the programmer to take this trouble is to do with the nature of usability and choice of products. There is plenty of competition in the browser market, and it is trivial for a user to switch browsers. Usability of a program is an accretion of lots of little ingredients. So in the solution space addressed by a particular application, the potential gradation of usability is very fine-grained, each tiny design decision moving the needle a tiny increment in the direction of greater or lesser usability. But although ease of use of an application is an infinitely variable property, whether a product is actually used or not is effectively a binary property. It is a very unusual consumer (guilty!) who continues to use multiple browsers on a daily basis. Even if you start out that way you will eventually fall into the habit of using just one. For each user of a product, there is a threshold on that infinite gradation of usability that balances against the benefit of using the product. If the product falls below that effort/benefit threshold it gradually falls into disuse. Above that threshold the user forms the habit of using it regularly.

Many years ago I bought a Palm Pilot. For me, that user interface was right on my threshold. It teetered there for several weeks as I tried to get into the habit of depending on it, but after I missed a couple of important appointments because I had neglected to put them into the device, I went back to my trusty pocket Day-Timer. For other people, the Palm Pilot was above their threshold of usability, and they loved it, used it and depended on it.

Not all products are so close to the threshold of usability. Some fall way below it. You have never heard of them – or maybe you have: how about the Apple Newton? And some land way above it; before the iPhone nobody browsed the Internet on their phones – the experience was too painful. In one leap the iPhone landed so far above that threshold that it routed the entire industry.
The point here is that the ‘actual use’ threshold is a razor-thin line on the smooth scale of usability, so if a product lies close to that line, the tiniest, most subtle change to usability can move it from one side of the line to the other. And in a competitive market where the cost of switching is low, that line isn’t static; the competition is continuously moving the threshold up. This is consistent with “natural selection by variation and survival of the fittest.” So product managers who believe their usability is “good enough,” and that they need to focus on new features to beat the competition are often misplacing their efforts – they may be moving their product further to the right on the diagram above than they are moving it up.

Now let’s go on to Chrome’s single field for URLs and searches. Computer applications address complicated problem spaces. In the diagram below, each circle represents the aggregate complexity of an activity performed with the help of a computer. The horizontal red line represents the division between the complexity handled by the user, and that handled by the computer. In the left circle most of the complexity is dealt with by the user; in the right circle most is dealt with by the computer. For a given problem space, an application will fall somewhere on this line. For searching databases, HAL 9000 has the circle almost entirely above this line; SQL is way further down. The classic example of this is the graphical user interface. It is vastly more programming work to create a GUI system like Windows than a command-line system like MS-DOS, and a GUI is correspondingly vastly easier on the user.
Its single field for typing queries and URLs clearly makes Chrome sit higher on this line than the browsers that use two fields. With Chrome the user has less work to do: he just gives the browser an instruction. With the others the user has to both give the instruction and tell the computer what kind of instruction it is. On the other hand, the programmer has to do more work, because he has to write code to determine whether the user is typing a URL or a search. But this is always going to be the case when you make a task of a given complexity easier on the user. In order to relieve the user, the computer has to handle more complexity. That means more work for the programmer. Hard-to-use applications are the result of lazy programmers.

The programming required to implement the single field for URLs and searches is actually trivial. All browsers have code to try to form a URL out of what’s typed into the address field; the programmer just has to assume it’s a search when that code can’t generate a URL. So now, having checked my four browsers, I have to partially eat my words. Both Firefox and IE8, even though they have separate fields for web addresses and searches, do exactly what I just said: address field input that can’t be made into a URL is treated as a search query. Safari, on the other hand, falls into the lazy programmer hall of shame.
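A sketch of that fall-back heuristic – my own rough approximation, not any browser’s actual code:

```python
from urllib.parse import urlparse

def interpret(text: str) -> str:
    """Navigate if the input plausibly parses as a URL, else search."""
    candidate = text if "://" in text else "http://" + text
    host = urlparse(candidate).netloc
    if host and " " not in text and ("." in host or host == "localhost"):
        return "navigate: " + candidate
    return "search: " + text

print(interpret("wirevolution.com"))  # navigate: http://wirevolution.com
print(interpret("lazy programmers"))  # search: lazy programmers
```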

This may be a result of a common “ease of use” fallacy: that what is easier for the programmer to conceive is easier for the user to use. The programmer has to imagine the entire solution space, while the user only has to deal with what he comes across. I can imagine a Safari programmer saying “We have to meet user expectations consistently – it will be confusing if the address field behaves in an unexpected way by doing a search when the user was simply mistyping a URL.” The fallacy of this argument is that while the premise is true (“it is confusing to behave in an unexpected way,”) you can safely assume that an error message is always unexpected, so rather than deliver one of those, the kind programmer will look at what is provoking the error message, and try to guess what the user might have been trying to achieve, and deliver that instead.

There are two classes of user mistake here: one is typing into the “wrong” field, the other is mistyping a URL. On all these browsers, if you mistype a URL you get an unwanted result. On Safari it’s an error page, on the others it’s an error page or a search, depending on what you typed. So Safari isn’t better, it just responds differently to your mistake. But if you make the other kind of “mistake,” typing a search into the “wrong” field, Safari gives an error, while the others give you what you actually wanted. So in this respect, they are twice as good, because the computer has gracefully relieved the user of some work by figuring out what they really wanted. But Chrome goes one step further, making it impossible to type into the “wrong” field, because there is only one field. That’s a better design in my opinion, though I’m open to changing my mind: the designers at Firefox and Microsoft may argue that they are giving the best of both worlds, since users accustomed to separate fields for search and addresses might be confused if they can’t find a separate search field.

I mentioned earlier that the Wi-Fi Alliance requires MIMO for 802.11n certification except for phones, which can be certified with a single stream. This waiver was for several reasons, including power, size and the difficulty of getting two spatially separated antennas into a handset. Atheros and Marvell appear to have overcome those difficulties; both have announced 2×2 Wi-Fi chips for handsets. Presumably TI and Broadcom will not be far behind.

The AR6004 can use both the 2.4 GHz and the 5 GHz bands and is capable of real-world speeds as high as 170 Mbps. Yet the firm claims its chip consumes only 15% more power than the current AR6003, which delivers only 85 Mbps. It will be available in sample quantities by the end of this quarter and in commercial quantities in the first quarter of next year.
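
Taking those claims at face value, the interesting number is energy per bit, not raw power draw. A quick worked comparison, assuming the quoted throughputs are sustained and power scales exactly as claimed:

```latex
\frac{E_{\mathrm{bit}}(\mathrm{AR6004})}{E_{\mathrm{bit}}(\mathrm{AR6003})}
  = \frac{1.15\,P / 170\ \mathrm{Mbps}}{P / 85\ \mathrm{Mbps}}
  = 1.15 \times \frac{85}{170}
  \approx 0.58
```

In other words, on these claims the new chip spends roughly 40% less energy per delivered bit, even though its absolute power consumption is higher.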

The 88W8797 from Marvell uses external PAs and LNAs, but saves space a different way, by integrating Bluetooth and FM onto the chip. The data sheet on this chip doesn’t mention as many of the 802.11n robustness features as the Atheros one does, so it is unclear whether the chip supports LDPC, for example.

Both chips claim a maximum 300 Mbps data rate. Atheros translates this to an effective throughput of 170 Mbps.

Of course, these chips will be useful in devices other than handsets. They are perfect for tablets, where there is plenty of room for two antennas at the right separation.

Martin Stanford of Sky TV’s “Tech Talk” does a video blog demonstrating HD Voice on a Nokia phone. There is no latency, indicating some kind of post-processing of the video, but it’s still a nicely done and illustrative demo of HD Voice.

Several things jump out at me. First, cable is faster than DSL, and wireless is the slowest. Second, again no surprise, urban is faster than rural. But the big surprise to me is the Verizon number. They have spent a ton on FiOS, and according to Trefis about half of Verizon’s broadband customers are now on FiOS. So according to these numbers, even if we supposed that Verizon’s non-FiOS customers were getting a bandwidth of zero, the average bandwidth available to a FiOS customer appears to be less than 5 megabits per second.
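
The arithmetic behind that inference is simple averaging, with f as the FiOS share of Verizon’s subscribers (about one half, per Trefis) and v̄ as Verizon’s measured overall average:

```latex
\bar{v} = f\,v_{\mathrm{FiOS}} + (1-f)\,v_{\mathrm{other}}
  \quad\Rightarrow\quad
  v_{\mathrm{FiOS}} \le \frac{\bar{v}}{f} = 2\bar{v}
  \qquad \left(f = \tfrac{1}{2},\ v_{\mathrm{other}} \ge 0\right)
```

So the sub-5 Mbps ceiling on the FiOS average follows whenever Verizon’s measured overall average is below 2.5 Mbps.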

Since FiOS is a very fast last mile, the bottleneck might be in the backhaul, or, more likely, in some bandwidth-throttling device. Whichever way you slice it, it’s hard to cast these numbers in a positive light for Verizon.

Update January 31, 2011: This story in the St. Louis Business Journal says that Charter, the ISP with the best showing in the Netflix measurements, is increasing its speed further, with no increase in price. This is good news. It is time that ISPs in the US started to compete on speed.

Contemplating the graphs, the lines appear to cluster to some extent in three bands, centered on 1.5 Mbps, 2 Mbps and 2.5 Mbps. If this is evidence of traffic shaping, these are the numbers that ISPs should be using in their promotional materials, rather than the usual “up to” numbers that don’t mention minimums or averages.

In today’s distributed workforce environment, it’s essential to be able to communicate with employees and customers across the globe both efficiently and effectively. Until recently, doing so was far more easily said than done: not only was the technology not in place, but video wasn’t accepted as a form of business communication. Now that video has burst onto the scene by way of Apple’s FaceTime, Skype and Gmail video chat, consumers are far more likely to pick video over voice – both at home and in their workplaces. But, though demand has never been higher, enterprise networks still experience a slow-down when employees attempt to access video streams from the public Internet, because IP video is often not provisioned properly. This session will provide an overview of the main deployment considerations so that IP video can be successfully deployed inside or outside the corporate firewall, without impacting the performance of the network, as well as how networks need to adapt to accommodate widespread desktop video deployments. It will also expose the latest in video compression technology in order to elucidate the relationship between video quality, bandwidth, and storage. With the technology in place, an enterprise can efficiently leverage video communication to lower costs and increase collaboration.

VBrick claims to be the leader in video streaming for enterprises. Radvision and LifeSize (a subsidiary of Logitech) are oriented towards video conferencing rather than streaming. It will be interesting to get their respective takes on bandwidth constraints on the WLAN and the access link, and what other impairments are important.

Mobility is taking the enterprise space by storm – everyone is toting a smartphone, tablet, laptop, or one of each. It’s all about what device happens to be the most convenient at the time, and the theory behind unified communications – anytime, anywhere, any device. The adoption of mobile devices in the home and their relevance in the business space has helped drive a new standard for enterprise networking, which is rapidly becoming a wireless opportunity: WiFi offers not only the convenience and flexibility of in-building mobility, but also networks that are much easier and more cost-effective to deploy than Ethernet. Furthermore, the latest wireless standards largely eliminate the traditional performance gap between wired and wireless and, when properly deployed, WiFi networks are at least as secure as wired ones. This session will discuss the latest trends in enterprise wireless, the secrets to successful deployments, and how to make the most of your existing infrastructure while moving forward with your WiFi installation.

The panelists are:

Shawn Tsetsilas, Director, WLAN, Cellular Specialties, Inc.

Perry Correll, Principal Technologist, Xirrus Inc.

Adam Conway, Vice President of Product Management, Aerohive

Cellular Specialties in this context is a system integrator, and one of their partners is Aerohive. Aerohive’s special claim to fame is that they eliminate the WLAN controller, so each access point controls itself in cooperation with its neighbors. The only remaining centralized function is the management. Aerohive claims that this architecture gives them superior scalability, and a lower system cost (since you only pay for the access points, not the controllers).

Xirrus’s product is unusual in a different way, packing a dozen access points into a single sectorized box, to massively increase the bandwidth available in the coverage areas.

So is it true that Wi-Fi has evolved to the point that you no longer need wired Ethernet?

Voice over WLAN has been deployed in enterprise applications for years, but has yet to reach mainstream adoption (beyond vertical markets). With technologies like mobile UC, 802.11n, fixed-mobile convergence and VoIP for smartphones raising awareness/demand, there are a number of vendors poised to address market needs by introducing new and innovative devices. This session will look at what industries have already adopted VoWLAN and why – and what benefits they have achieved, as well as the technology trends that make VoWLAN possible.

All three of these companies have a venerable history in enterprise Wi-Fi phones; the two original pioneers of enterprise Voice over Wireless LAN were Symbol and Spectralink, which Motorola and Polycom acquired respectively in 2006 and 2007. Cisco announced a Wi-Fi handset (the 7920) to complement their Cisco CallManager in 2003. But the category has obstinately remained a niche for almost a decade.

It has been clear from the outset that cell phones would get Wi-Fi, and it would be redundant to have dedicated Wi-Fi phones. And of course, now that has come to pass. The advent of the iPhone with Wi-Fi in 2007 subdued the objections of the wireless carriers to Wi-Fi and knocked the phone OEMs off the fence. By 2010 you couldn’t really call a phone without Wi-Fi a smartphone, and feature phones aren’t far behind.

So this session will be very interesting, answering questions about why enterprise voice over Wi-Fi has been so confined, and why that will no longer be the case.

Back in February 2009 I wrote about how Atheros’ new chip made it possible for a phone to act as a Wi-Fi hotspot. A couple of months later, David Pogue wrote in the New York Times about a standalone device to do the same thing, the Novatel MiFi 2200. The MiFi is a Wi-Fi access point with a direct connection to the Internet over a cellular data channel. So you can have “a personal Wi-Fi bubble, a private hot spot, that follows you everywhere you go.”

The type of technology that Atheros announced at the beginning of 2009 was put on a standards track at the end of 2009; the “Wi-Fi Direct” standard was launched in October 2010. So far about 25 products have been certified. Two phones have already been announced with Wi-Fi Direct built-in: the Samsung Galaxy S and the LG Optimus Black.

Everybody has a cell phone, so if a cell phone can act as a MiFi, why do you need a MiFi? It’s another by-product of the dysfunctional billing model of the mobile network operators. If they simply bit the bullet and charged à la carte by the gigabyte, they would be happy to encourage you to use as many devices as possible through your phone.

Wi-Fi Direct may force a change in the way that network operators bill. It is such a compelling benefit to consumers, and so trivial for the phone makers to implement, that the mobile network operators may not be able to hold it back.

So if this capability proliferates into all cell phones, we will be able to use Wi-Fi-only tablets and laptops wherever we are. This seems to be bad news for Novatel’s MiFi and for cellular modems in laptops. Which leads to another twist: Qualcomm’s Gobi is by far the leading cellular modem for laptops, and Qualcomm just announced that it is acquiring Atheros.

A slide from the webinar shows how network operators could charge by the type of content being transported rather than by bandwidth:

In an earlier post I said that strict net neutrality is appropriate for wired broadband connections, but that for wireless connections the bandwidth is so constrained that the network operators must be able to ration bandwidth in some way. The suggestion of differential charging for bandwidth by content goes way beyond mere rationing. The reason this is egregious is that the bandwidth costs the same to the wireless service provider regardless of what is carried on it. Consumers don’t want to buy content from Internet service providers, they want to buy connectivity – access to the Internet.

In cases where a carrier can legitimately claim to add value it would make sense to let them charge more. For example, real-time communications demands traffic prioritization and tighter timing constraints than other content. Consumers may be willing to pay a little bit more for the better sounding calls resulting from this.

But this should be the consumer’s choice. Allowing mandatory charging for what is currently available free on the Internet would mean the death of the mobile Internet, and its replacement with something like interactive IP-based cable TV service. The Internet is currently a free market where the best and best marketed products win. Per-content charging would close this down, replacing it with an environment where product managers at carriers would decide who is going to be the next Facebook or Google, kind of like AOL or Compuserve before the Internet. The lesson of the Internet is that a dumb network connecting content creators with content consumers leads to massive innovation and value creation. The lesson of the PSTN is that an “intelligent network,” where network operators control the content, leads to decades of stagnation.

In a really free market, producers get paid for adding value. Since charging per content by carriers doesn’t add value, but merely diverts revenue from content producers to the carriers, it would be impossible in a free market. If a wireless carrier successfully attempted this, it would indicate that wireless Internet access is not a free market, but something more like a monopoly or cartel which should be regulated for the public good.

Although phone numbers are an antiquated kind of thing, we are sufficiently beaten down by the machines that we think of it as natural to identify a person by a 10-digit number. Maybe the demise of the numeric phone keypad as big touch-screens take over will change matters on this front. But meanwhile, phone numbers are holding us back in important ways. Because phone numbers are bound to the PSTN, which doesn’t carry video calls, it is harder to make video calls than voice calls: we don’t have people’s video addresses so handy.

This year, three new products attempted to address this issue in remarkably similar ways – clearly an idea whose time has come. The products are Apple’s FaceTime, Cisco’s IME and a startup product called Tango.

In all three of these products, you make a call to a regular phone number, which triggers a video session over the Internet. You only need the phone number – the Internet addressing is handled automatically. The two problems the automatic addressing has to handle are finding a candidate address and then verifying that it is the right one. Here’s how each of those three new products does the job:

1. FaceTime. When you first start FaceTime, it sends an SMS (text message) to an Apple server. The SMS contains sufficient information for the Apple server to reliably associate your phone number with the XMPP (push services) client running on your iPhone. With this authentication performed, anybody else who has your phone number in their address book on their iPhone or Mac can place a videophone call to you via FaceTime.

2. Cisco IME (Inter-Company Media Engine). The protocol used by IME to securely associate your phone number with your IP address is ViPR (Verification Involving PSTN Reachability), an open protocol specified in several IETF drafts co-authored by Jonathan Rosenberg, who is now at Skype. ViPR can be embodied in a network box like IME, or in an endpoint like a phone or PC.
Here’s how it works: you make a phone call in the usual way. After you hang up, ViPR looks up the phone number you called to see if it is also ViPR-enabled. If it is, ViPR performs a secure mutual verification, using proof-of-knowledge of the previous PSTN call as a shared secret. The next time you dial that phone number, ViPR makes the call through the Internet rather than through the phone network, so you can do wideband audio and video with no per-minute charge. A major difference between ViPR and FaceTime or Tango is that ViPR does not have a central registration server. The directory that ViPR looks up phone numbers in is stored in a distributed hash table (DHT). This is basically a distributed database with the contents stored across the network. Each ViPR participant contributes a little bit of storage to the network. The DHT is organized by an algorithm called Chord, which describes how each node connects to other nodes, and how to look up information (a toy sketch of such a lookup follows this list).

3. Tango, like FaceTime, has its own registration servers. The authentication on these works slightly differently. When you register with Tango, it looks in the address book on your iPhone for other registered Tango users, and displays them in your Tango address book. So if you already know somebody’s phone number, and that person is a registered Tango user, Tango lets you call them in video over the Internet.
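
Back to the Chord lookup in the ViPR description for a moment. Here is a toy sketch: identities are hashed onto a ring, and the node responsible for a key is the first node clockwise from it. Real Chord uses 160-bit SHA-1 IDs, finger tables for O(log n) routing, and handles nodes joining and leaving; none of that is shown here, and the addresses and phone number are made up:

```python
import hashlib

RING_BITS = 16  # toy ring; real Chord uses a 160-bit SHA-1 space

def chord_id(value: str) -> int:
    """Hash a node address or phone number onto the ring."""
    digest = hashlib.sha1(value.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** RING_BITS)

# Four participating nodes, identified by (made-up) addresses.
nodes = sorted(chord_id(addr) for addr in
               ["198.51.100.1", "198.51.100.2", "203.0.113.7", "203.0.113.9"])

def successor(key: int) -> int:
    """The node responsible for a key is the first node clockwise
    from it on the ring, wrapping around at the top."""
    for node_id in nodes:
        if node_id >= key:
            return node_id
    return nodes[0]

key = chord_id("+14155551234")  # made-up E.164 number
print(f"directory entry for key {key} lives on node {successor(key)}")
```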

An interesting story from Bloomberg says that Ericsson is contemplating owning a wireless network infrastructure. Ericsson is already one of the top 5 mobile network operators worldwide, but it doesn’t own any of the networks it manages – it is simply a supplier of outsourced network management services.

The idea here is that Ericsson will own and manage its own network, and wholesale the services on it to MVNOs. If this plan goes through, and if Ericsson is able to stick to the wholesale model and not try to deliver services direct to consumers, it will be huge for wireless network neutrality. It is a truly disruptive development, in that it could lower barriers to entry for mobile service providers, and open up the wireless market to innovation at the service level.

[update] Upon reflection, I think this interpretation of Ericsson’s intent is over-enthusiastic. The problem is spectrum. Ericsson can’t market this to MVNOs without spectrum. So a more likely interpretation of Ericsson’s proposal is that it will pay for infrastructure, then sell capacity and network management services to spectrum-owning mobile network operators. Not a dumb pipes play at all. It is extremely unlikely that Ericsson will buy spectrum for this, though there are precedents for equipment manufacturers buying spectrum – Qualcomm and Intel have both done so.

[update 2] With the advent of white spaces, Ericsson would not need to own spectrum to offer a wholesale service from its wireless infrastructure. The incremental cost of provisioning white spaces on a cellular base station would be relatively modest.

The term “QoS” is used ambiguously. The two main categories of definition are, first, QoS Provisioning: “the capability of a network to provide better service to selected network traffic,” which means packet prioritization of one kind or another; and second, more literally, Quality of Service: the degree of perfection of a user’s audio experience in the face of potential impairments to network performance. These impairments fall into four categories: availability, packet loss, packet delay and tampering. Since the second sense is normally used in the context of trying to measure it, we could call it QoS Metrics as opposed to QoS Provisioning. I would put issues like choice of codec and echo into the larger category of Quality of Experience, which includes all the possible impairments to audio experience, not just those imposed by the network.
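
Of those impairment categories, packet loss and delay variation are straightforward to compute from a packet trace. A minimal sketch, assuming RTP-style sequence numbers and send/receive timestamps; the jitter estimator is the standard one from RFC 3550:

```python
def packet_loss(seq_numbers):
    """Fraction of packets lost, inferred from sequence-number gaps."""
    expected = max(seq_numbers) - min(seq_numbers) + 1
    return 1 - len(set(seq_numbers)) / expected

def interarrival_jitter(packets):
    """RFC 3550 jitter estimate. `packets` is a list of
    (send_timestamp, receive_timestamp) pairs in seconds."""
    j = 0.0
    prev_transit = None
    for sent, received in packets:
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0  # exponential smoothing per RFC 3550
        prev_transit = transit
    return j

# 20 ms packets, with a wobble on the third arrival:
pkts = [(0.00, 0.050), (0.02, 0.070), (0.04, 0.095), (0.06, 0.110)]
print(f"jitter estimate: {interarrival_jitter(pkts)*1000:.2f} ms")
print(f"loss: {packet_loss([1, 2, 4, 5]):.0%}")  # sequence 3 never arrived
```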

By “tampering” I mean any intentional changes to the media payload of a packet, and I am OK with the negative connotations of the term since I favor the “dumb pipes” view of the Internet. On phone calls the vast bulk of such tampering is transcoding: changing the media format from one codec to another. Transcoding always reduces the fidelity of the sound, even when transcoding to a “better” codec.

Networks vary greatly in the QoS they deliver. One of the major benefits of going with VoIP service provided by your ISP (Internet Service Provider) is that your ISP has complete control over QoS. But there is a growing number of ITSPs (Internet Telephony Service Providers) that contend that the open Internet provides adequate QoS for business-grade telephone service. Skype, for example.

But it’s nice to be sure. So I have added a “QoS Metrics” category in the list to the right of this post. You can use the tools there to check your connection. I particularly like the one from Voxygen, which frames the test results in terms of the number of simultaneous voice sessions that your WAN connection can comfortably handle. Here’s an example of a test of ten channels:

Aerohive claims to be the first example of a third-generation Wireless LAN architecture.

The first generation was the autonomous access point.

The second generation was the wireless switch, or controller-based WLAN architecture.

The third generation is a controller-less architecture.

The move from the first generation to the second was driven by enterprise networking needs. Enterprises need greater control and manageability than smaller deployments. First generation autonomous access points didn’t have the processing power to handle the demands of greater network control, so a separate category of device was a natural solution: in the second generation architecture, “thin” access points did all the real-time work, and delegated the less time-sensitive processing to powerful central controllers.

Now the technology transition to 802.11n enables higher capacity wireless networks with better coverage. This allows enterprises to expand the role of wireless in their networks, from convenience to an alternative access layer. This in turn further increases the capacity, performance and reliability demands on the WLAN.

Aerohive believes this generational change in technology and market requires a corresponding generational change in system architecture. A fundamental technology driver for 802.11n, the ever-increasing processing bang-for-the-buck yielded by Moore’s law, also yields sufficient low-cost processing power to move the control functions from central controllers back to the access points. Aerohive aspires to lead the enterprise Wi-Fi market into this new architecture generation.

Superficially, getting rid of the controller looks like a return to the first generation architecture. But an architecture with all the benefits of a controller-based WLAN, only without a controller, requires a sophisticated suite of protocols by which the smart access points can coordinate with each other. Aerohive claims to have developed such a protocol suite.

The original controller-based architectures used the controller for all network traffic: the management plane, the control plane and the data plane. The bulk of network traffic is on the data plane, so bottlenecks there do more damage than on the other planes. So modern controller-based architectures have “hybrid” access points that handle the data plane, leaving only the control and management planes to the controller device (Aerohive’s architect, Devin Akin, says, “distributed data forwarding at Layer-2 isn’t news, as every other vendor can do this.”) Aerohive’s third generation architecture takes the next step and distributes control plane handling as well, leaving only the management function centralized, and that’s just software on a generic server.

Aerohive contends that controller-based architectures are expensive, poorly scalable, unreliable, hard to deploy and not needed. A controller-based architecture is more expensive than a controller-less one, because controllers aren’t free (Aerohive charges the same for its APs as other vendors do for their thin ones: under $700 for a 2×2 MIMO dual-band 802.11n device). It is not scalable because the controller constitutes a bottleneck. It is not reliable because a controller is a single point of failure, and it is not needed because processing power is now so cheap that all the functions of the controller can be put into each AP, and given the right system design, the APs can coordinate with each other without the need for centralized control.

Distributing control in this way is considerably more difficult than distributing data forwarding. Control plane functions include all the security features of the WLAN, like authentication and admission, multiple VLANs and intrusion detection (WIPS). Greg Taylor, wireless LAN services practice lead for the Professional Services Organization of BT in North America says “The number one benefit [of a controller-based architecture] is security,” so a controller-less solution has to reassure customers that their vulnerability will not be increased. According to Dr. Amit Sinha, Chief Technology Officer at Motorola Enterprise Networking and Communications, other functions handled by controllers include “firewall, QoS, L2/L3 roaming, WIPS, AAA, site survivability, DHCP, dynamic RF management, firmware and configuration management, load balancing, statistics aggregation, etc.”

The communications market has been evolving to fixed high definition voice services for some time now, and nearly every desktop phone manufacturer now includes support for G.722 and other wideband codecs. Why? Because HD voice makes the entire communications experience a much better one than we are used to.

But what does it mean for the wireless industry? When will wireless communications become part of the HD revolution? How will handset vendors, network equipment providers, and service providers have to adapt their current technologies in order to deliver wireless HD voice? How will HD impact service delivery? What are the business models around mobile HD voice?

This session will answer these questions and more, discussing both the technology and business aspects of bringing HD into the mobile space.

Visual communications are becoming more and more commonplace. As networks improve to support video more effectively, the moment is right for broad market adoption of video conferencing and collaboration systems.

Delivering high quality video streams requires expertise in both networks and audio/video codec technology. Often, however, audio quality gets ignored, despite it being more important to efficient communication than the video component. Intelligibility is the key metric here, where wideband audio and voice quality enhancement algorithms can greatly improve the quality of experience.

This session will cover both audio and video aspects of today’s conferencing systems, and the various criteria that are used to evaluate them, including round-trip delay, lip-sync, smooth motion, bit-rate required, visual artifacts and network traversal – and of course pure audio quality. The emphasis will be on sharing best practices for building and deploying high-definition conferencing systems.

I will be moderating a session at ITExpo West on Monday, October 4th at 2:15 pm: “The State of VoIP Peering,” in room 304C.

Here’s the session description:

VoIP is a fact – it is here, and it is here to stay. That fact is undeniable. To date, the cost savings associated with VoIP have largely been enough to drive adoption. However, the true benefits of VoIP will only be realized through the continued growth of peering, which will keep calls on IP backbones rather than moving them onto the PSTN. Not only will increased peering continue to reduce costs, it will increase voice call quality – HD voice, for instance, can only be delivered on all-IP calls.

Of course, while there are benefits to peering, carriers have traditionally not taken kindly to losing their PSTN traffic, for which they are able to bill by the minute. But, as the adoption of IP communications continues to increase – and of course the debate continues over when we will witness the true obsolescence of the PSTN – carriers will have little choice but to engage in peering relationships.

This session will offer a market update on the status of VoIP peering and its growth, as well as the trends and technologies that will drive its growth going forward, including wideband audio and video calling.

This is shaping up to be a fascinating session. Rico can tell us about the hardware technologies that are enabling IP end-to-end for phone calls, and Mark and Grant will give us a real-world assessment of the state of deployment, the motivations of the early adopters, and the likely fate of the PSTN.

For now, all White Spaces devices will use a geolocation database to avoid interfering with licensed spectrum users. The latest FCC Memorandum and Order on TV White Spaces says that it is still OK to have a device that uses spectrum sensing only (one that doesn’t consult a geolocation database for licensed spectrum users), but to get certified for sensing only, a device will have to satisfy the FCC’s Office of Engineering and Technology, then be approved by the Commissioners on a case-by-case basis.

So all the devices for the foreseeable future are going to use a geolocation database. But they will have spectrum-sensing capabilities too, in order to select the cleanest channel from the list of available channels provided by the database.

Fixed devices (access points) will normally have a wired Internet connection. Once a fixed device has figured out where it is, it can query the database over the Internet for a list of available channels. Then it can advertise itself on those channels.

Mobile devices (phones, laptops etc.) will normally have non-whitespace connections to the Internet too, for example Wi-Fi or cellular data. These devices can know where they are by GPS or some other location technology, and query the geolocation database over their non-whitespace connection. If a mobile device doesn’t have non-whitespace Internet connectivity, it can sit and wait until it senses a beacon from a fixed whitespace device, then query the geolocation database over the whitespace connection. There is a slight chance at this point that the mobile device is using a licensed frequency inside the licensee’s protected contour. This chance is mitigated because the contour includes a buffer zone, so a mobile device inside a protected contour should be beyond the range of any whitespace devices outside that contour. The interference will also be very brief, since when it gets the response from the database it will instantly switch to another channel.
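
The resulting decision flow is easy to sketch. Everything below is a hypothetical placeholder – toy data and made-up function names, not the real FCC database protocol:

```python
# Toy model of the channel-selection flow described above; every name
# here is a hypothetical placeholder, not a real FCC database API.

GEO_DB = {  # location -> channels the database says are vacant there
    "dallas": [21, 27, 41],
    "rural_tx": [14, 21, 27, 35, 41, 48],
}

def query_geolocation_db(location):
    return GEO_DB.get(location, [])

def pick_channel(location, sensed_noise_dbm):
    """Choose a channel: the database decides what is *available*,
    and spectrum sensing merely picks the *cleanest* of those."""
    channels = query_geolocation_db(location)
    if not channels:
        return None  # no vacant channels at this location
    return min(channels, key=lambda ch: sensed_noise_dbm.get(ch, 0.0))

# A device with no non-whitespace link would first wait for a fixed
# device's beacon and query over the whitespace link itself (the
# brief interference risk described in the paragraph above).
noise = {21: -88.0, 27: -95.0, 41: -90.0}  # toy measurements in dBm
print(pick_channel("dallas", noise))  # -> 27, the quietest channel
```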

Nine companies have proposed themselves as geolocation database providers. Here they are, linked to the proposals they filed with the FCC:

Actually, a geolocation database is overkill for most cases. The bulk of the information is just a reformatting of data the FCC already publishes online; it’s only 37 megabytes compressed. It could be kept in the phone since it doesn’t change much; it is updated weekly.

The proposed database will be useful for those rare events where the number of wireless microphones needed is so large that it won’t fit into the spectrum reserved for microphones, though in this case spectrum sensing would probably suffice. In other words, the geolocation database is a heavyweight solution to a lightweight problem.

Clayton Christensen turned business thinking upside-down in 1997 with his book “The Innovator’s Dilemma” where he popularized his term “disruptive technology” in an analysis of the disk drive business. Since then abuse and over-use have rendered the term a meaningless cliche, but the idea behind it is still valid: well-run large companies that pay attention to their customers and make all the right decisions can be defeated in the market by upstarts that emerge from low-end niches with lower-cost, lower performance products.

PicoChip is following Christensen’s script faithfully. First it made a low-cost consumer-oriented chip that performed many of the functions of a cellular base station. Now it has added in some additional base station functions to address the infrastructure market.

“Traditional infrastructure makers now face the prospect of residential device economics moving up to the macrocell.”

– Rethink Wireless

TV White Spaces Second MO&O: A Second Memorandum Opinion and Order that will create opportunities for investment and innovation in advanced Wi-Fi technologies and a variety of broadband services by finalizing provisions for unlicensed wireless devices to operate in unused parts of TV spectrum.

Early discussion of White Spaces proposed that client devices would be responsible for finding vacant spectrum to use. This “spectrum sensing” or “detect and avoid” technology was sufficiently controversial that a consensus grew to supplement it with a geolocation database, where the client devices determine their location using GPS or other technologies, then consult a coverage database showing which frequencies are available at that location.

The Associated Press predicts that the Order will go along with the Internet companies, and ditch the spectrum sensing requirement.

Some of the technology companies behind the white spaces are fighting a rearguard action, saying there are good reasons to retain spectrum sensing as an alternative to geolocation. The broadcasting industry (represented by the NAB and MSTV) wants to require both. It will be interesting to see if the FCC leaves any spectrum sensing provisions in the Order.

Unlike its perplexing McAfee move, Intel had to acquire Infineon. Intel must make the Atom succeed. The smartphone market is growing fast, and the media tablet market is in the starting blocks. Chips in these devices are increasingly systems-on-chip, combining multiple functions. To sell application processors in phones, you must have a baseband story. Infineon’s RF expertise is a further benefit.

As Linley Gwennap said when he predicted the acquisition a month ago, the fit is natural: Intel needs 3G and LTE basebands, and Infineon has no application processor, so the two product lines don’t overlap.

Intel has been through this movie before, for the same strategic reasons. It acquired DSP Communications in 1999 for $1.6 Bn. The idea there was to enter the cellphone chip market with DSP’s baseband plus the XScale ARM processor that Intel got from DEC. It was great in theory, and XScale got solid design wins in the early smartphones, but Intel neglected XScale, letting its performance lead over other ARM implementations dwindle, and its only significant baseband customer was RIM.

In 2005, Paul Otellini became CEO; at that time AMD was beginning to make worrying inroads into Intel’s market share. Otellini regrouped – he focused in on Intel’s core business, which he saw as being “Intel Architecture” chips. But XScale runs an instruction set architecture that competes with IA, namely ARM. So rather than continuing to invest in its competition, Intel instead dumped off its flagging cellphone chip business (baseband and XScale) to Marvell for $0.6 Bn, and set out to create an IA chip that could compete with ARM in size, power consumption and price. Hence Atom.

But does instruction set architecture matter that much any more? Intel’s pitch on Atom-based netbooks was that you could have “the whole Internet” on them, including the parts that run only on IA chips. But now there are no such parts. Everything relevant on the Internet works fine on ARM-based systems like Android phones. iPhones are doing great even without Adobe Flash.

So from Intel’s point of view, this decade-later redo of its entry into the cellphone chip business is different. It is doing it right, with a coherent corporate strategy. But from the point of view of the customers (the phone OEMs and carriers) it may not look so different. They will judge Intel’s offerings on price, performance, power efficiency, wireless quality and how easy Intel makes it to design in the new chips. The same criteria as last time.

The non-discrimination paragraph is weakened by the kinds of words that are invitations to expensive litigation unless they are precisely defined in legislation. It doesn’t prohibit discrimination, it merely prohibits “undue” discrimination that would cause “meaningful harm.”

The managed (or differentiated) services paragraph is an example of what Ammori calls “an obvious potential end-run around the net neutrality rule.” I think that Google and Verizon would argue that their transparency provisions mean that ISPs can deliver things like FiOS video-on-demand over the same pipe as Internet service without breaching net neutrality, since the Internet service will commit to a measurable level of service. This is not how things work at the moment: ISPs make representations about the maximum delivered bandwidth, but don’t specify for consumers a minimum below which the connection will not fall.

The examples the Google blog gives of “differentiated online services, in addition to the Internet access and video services (such as Verizon’s FIOS TV)” appear to have in common the need for high bandwidth and high QoS. This bodes extremely ill for the Internet. The evolution to date of Internet access service has been steadily increasing bandwidth and QoS. The implication of this paragraph is that these improvements will be skimmed off into proprietary services, leaving the bandwidth and QoS of the public Internet stagnant.

Many consider the exclusion of wireless egregious. I think that Google and Verizon would argue that there is nothing to stop wireless being added later. In any case, I am sympathetic to Verizon on this issue, since wireless is so bandwidth-constrained relative to wireline that it seems necessary to ration it in some way.

The Network Management paragraph in the statement document permits “reasonable” network management practices. Fortunately the word “reasonable” is defined in detail in the statement document. Unfortunately the definition, while long, includes a clause which renders the rest of the definition redundant: “or otherwise to manage the daily operation of its network.” This clause appears to permit whatever the ISP wants.

So on balance, while it contains a lot of worthy sentiments, I am obliged to view this framework as a sellout by Google. I am not alone in this assessment.

My opinion on this is that ISPs deserve to be fairly compensated for their service, but that they should not be permitted to double-charge for a consumer’s Internet access. If some service like video on demand requires prioritization or some other differential treatment, the ISP should only be allowed to charge the consumer for this, not the content provider. In other words, every bit traversing the subscriber’s access link should be treated equally by the ISP unless the consumer requests otherwise, and the ISP should not be permitted to take payments from third parties like content providers to preempt other traffic. If such discrimination is allowed, the ISP will be motivated to keep last-mile bandwidth scarce.

Internet access in the US is effectively a duopoly (cable or DSL) in each neighborhood. This absence of competition has caused the US to become a global laggard in consumer Internet bandwidth. With weak competition and ineffective regulation, a rational ISP will forego the expense of network upgrades.

ISPs like AT&T view the Internet as a collection of pipes connecting content providers to content consumers. This is the thinking behind Ed Whitacre’s famous comment, “to expect to use these pipes for free is nuts!” Ed was thinking that Google, or Yahoo or Vonage are using his pipes to his subscribers for free. The “Internet community” on the other hand views the Internet as a collection of pipes connecting people to people. From this other point of view, the consumer pays AT&T for access to the Internet, and Google, Yahoo and Vonage each pay their respective ISPs for access to the Internet. Nobody is getting anything for free. It makes no more sense for Google to pay AT&T for a subscriber’s Internet access than it would for an AT&T subscriber to pay Google’s connectivity providers for Google’s Internet access.

When the iPhone came out it redefined what a smartphone is. The others scrambled to catch up, and now with Android they pretty much have. The iPhone 4 is not in a different league from its competitors the way the original iPhone was. So I have been trying to decide between the iPhone 4 and the EVO for a while. I didn’t look at the Droid X or the Samsung Galaxy S, either of which may be better in some ways than the EVO.

Each has stronger and weaker points, in both hardware and software. The Apple wins on the subtle user interface ingredients that add up to delight. It is a more polished user experience. Lots of little things. For example I was looking at the clock applications. The Apple stopwatch has a lap feature and the Android doesn’t. I use the timer a lot; the Android timer copied the Apple look and feel almost exactly, but a little worse. It added a seconds display, which is good, but the spin-wheel to set the timer doesn’t wrap. To get from 59 seconds to 0 seconds you have to spin the display all the way back through. The whole idea of a clock is that it wraps, so this indicates that the Android clock programmer didn’t really understand time. Plus when the timer is actually running, the Android cutely just animates the time-set display, while the Apple timer clears the screen and shows a countdown. This is debatable, but I think the Apple way is better. The countdown display is less cluttered, more readable, and more clearly in a “timer running” state. The Android clock has a wonderful “desk clock” mode, which the iPhone lacks. I was delighted with the idea, especially the night mode, which dims the screen and lets you use it as a bedside clock. Unfortunately when I came to actually use it, the hardware let the software down. Even in night mode the screen is uncomfortably bright, so I had to turn the phone face down on the bedside table.

The EVO wins on screen size. Its 4.3 inch screen is way better than the iPhone’s 3.5 inch screen. The “retina” resolution on the iPhone may look like a better specification, but the difference in image quality is imperceptible to my eye, and the greater size of the EVO screen is a compelling advantage.

The iPhone has far more apps, but there are some good ones on the Android that are missing on the iPhone, for example the amazing Wi-Fi Analyzer. On the other hand, this is also an example of the immaturity of the Android platform, since there is a bug in Android’s Wi-Fi support that makes the Wi-Fi Analyzer report out-of-date results. Other nice Android features are the voice search feature and the universal “back” button. Of course you can get the same voice search with the iPhone Google app, but the iPhone lacks a universal “back” button.

The GPS on the EVO blows away the GPS on the iPhone for accuracy and responsiveness. I experimented with the Google Maps app on each phone, walking up and down my street. Apple changed the GPS chip in this rev of the iPhone, going from an Infineon/GlobalLocate to a Broadcom/GlobalLocate. The EVO’s GPS is built-in to the Qualcomm transceiver chip. The superior performance may be a side effect of assistance from the CDMA radio network.

Incidentally, the GPS test revealed that the screens are equally horrible under bright sunshine.

The iPhone is smaller and thinner, though the smallness is partly a function of the smaller screen size.

The EVO has better WAN speed, thanks to the Clearwire WiMax network, but my data-heavy usage is mainly over Wi-Fi in my home, so that’s not a huge concern for me.

Battery life is an issue. I haven’t done proper tests, but I have noticed that the EVO seems to need charging more often than the iPhone.

Shutter lag is a major concern for me. On almost all digital cameras and phones I end up taking many photos of my shoes as I put the camera back in my pocket after pressing the shutter button and assuming the photo got taken at that time rather than half a second later. I just can’t get into the habit of standing still and waiting for a while after pressing the shutter button. The iPhone and the EVO are about even on this score, both sometimes taking an inordinately long time to respond to the shutter – presumably auto-focusing. The pictures taken with the iPhone and the EVO look very different; the iPhone camera has a wider angle, but the picture quality of each is adequate for snapshots. On balance the iPhone photos appeal to my eye more than the EVO ones.

For me the antenna issue is significant. After dropping several calls I stuck some black electrical tape over the corner of the phone which seems to have somewhat fixed it. Coverage inside my home in the middle of Dallas is horrible for both AT&T and Sprint.

The iPhone’s FM radio chip isn’t enabled, so I was pleased when I saw FM radio as a built-in app on the EVO, but disappointed when I fired it up and discovered that it needed a headset to be plugged in to act as an antenna. Modern FM chips should work with internal antennas. In any case, the killer app for FM radio is on the transmit side, so you can play music from your phone through your car stereo. Neither phone supports that yet.

So on the plus side, the EVO’s compelling advantage is the screen size. On the negative side, it is bulkier, the battery life is shorter, and the software experience isn’t quite so polished.

The bottom line is that the iPhone is no longer in a class of its own. The Android iClones are respectable alternatives.

We are half way through the year, so it’s time for another look at Wi-Fi phone certifications. Three things jump out this time. First, a leap in the number of Wi-Fi phone models in the second quarter of 2010. Second, the arrival of 802.11n in handsets, and third, Samsung’s market-leading commitment to 802.11n. According to Rethink Wireless, “Samsung’s share of the smartphone market was only about 5% in Q1 but it aims to increase this to almost 15% by year end.” Samsung Wi-Fi-certified a total of 73 dual mode phones in the first six months of 2010, three times as many as second-place LG with 23. In the 11n category, Samsung’s lead was even more dominating: its 40 certifications were ten times as many as either of the second-place OEMs.

Here is a chart of dual mode phones certified with the Wi-Fi Alliance from 2008 to June 30th 2010. We usually do this chart stacked, but side-by-side gives a clearer comparison between feature phones and smart phones. Note that up to the middle of 2009, smart phones outpaced feature phones, but then it switched. This is a natural progression of Wi-Fi into the mass market, but may also be exaggerated by a quirk of reporting: of HTC’s 17 certifications in the first half of 2010, it only categorized one as a smart phone.

The chart below shows the growth of 802.11n. It starts in January 2010 because only one 11n phone was certified in 2009, at the end of December. As you can see, the growth is strong. I anticipate that practically all new dual mode phone certifications will be for 802.11n by the end of 2010.

Below is the same chart sliced by manufacturer instead of by month. The iPhone is missing because it wasn’t certified until July, and the iPad is missing because it’s not a phone. With only one 802.11n phone, Nokia has become a technology laggard, at least in this respect. The RIM Pearl 8100/8105 certifications are the only ones with STBC, an important feature for phones because it improves rate at distance. All the major chips (except those from TI) support STBC, so the phone OEMs must be either leaving it disabled or just not bothering to certify for it.

The Wi-Fi certification process for Wi-Fi Direct is scheduled to launch by the end of 2010, but there are already two pre-standard implementations of the concept: My Wi-Fi, an Intel product that ships in Centrino 2 systems, and Wireless Hosted Network, which ships in all versions of Windows 7.

The Wi-Fi Direct driver makes a single Wi-Fi adapter on the PC look like two to the operating system: one ordinary one that associates with a regular Access Point, and a second acting as a “Virtual Access Point.” The virtual access point (Microsoft calls it a “SoftAP”) actually runs inside the Wi-Fi driver on the PC (labeled WPAN I/F in the Intel diagram below).

To the outside world the Wi-Fi adapter also looks like two devices, each with its own MAC address: one is the PC’s ordinary station interface, just as without Wi-Fi Direct, and the other is an access point. Devices that associate with that access point join the PC’s PAN (Personal Area Network).

Here’s a slide from one of Ozmo’s promotional presentations giving a comparison with Bluetooth and proprietary technologies:

The essence of Ozmo’s approach is low cost, multi-device, low bandwidth and low power consumption. Wi-Fi Direct has another use case that is high bandwidth, with no requirement for low power.

If you want to stream video from your PC to a monitor using traditional Wi-Fi (“infrastructure mode”) each packet goes from the PC to the access point, then from the access point to the TV, so it occupies the spectrum twice for each packet. Wi-Fi Direct effectively doubles the available throughput, since each packet flies through the ether only once, directly from the PC to the TV. But it actually does better than that. Supposing the PC and the TV are in the same room, but the access point is in a different room, the PC can transmit at much lower power. Another similar Wi-Fi Direct session can then happen in another room in the house. Without Wi-Fi direct the two sessions would have to share the access point, taking turns to use the spectrum. So we get increased aggregate throughput both from halving the number of packet transmissions, and from allowing simultaneous use of the spectrum by multiple sessions (if they are far enough apart).
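
The airtime arithmetic behind that claim, as a toy model (same rate on every hop, contention overhead ignored):

```python
def effective_throughput(link_rate_mbps: float, transmissions: int) -> float:
    """Every transmission occupies the shared channel, so goodput is
    the link rate divided by how many times each packet is sent."""
    return link_rate_mbps / transmissions

RATE = 100.0  # toy single-link rate in Mbps, not a measured figure

# Infrastructure mode: PC -> AP, then AP -> TV = 2 transmissions.
print(effective_throughput(RATE, 2))  # 50.0
# Wi-Fi Direct: PC -> TV = 1 transmission.
print(effective_throughput(RATE, 1))  # 100.0
```

Spatial reuse then multiplies the aggregate further, when sessions are far enough apart to transmit simultaneously.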

ABI came out with a press release last week saying that 770 million Wi-Fi chips will ship in 2010. This is an amazing number. Where are they all going? Fortunately ABI included a bar-chart with this information in the press release. Here it is (click on it for a full-size view):

The y axis isn’t labeled, but the divisions appear to be roughly 200 million units.

This year shows roughly equal shipments going to phones, mobile PCs, and everything else. There is no category of Access Points, so presumably fewer of those are sold than “pure VoWi-Fi handsets.” I find this surprising, since I expect the category of pure VoWi-Fi handsets to remain moribund. Gigaset, which makes an excellent cordless handset for VoIP, stopped using Wi-Fi and went over to DECT because of its superior characteristics for this application.

There is also no listing for tablet PCs, a category set to boom; they must be subsumed under MIDs (Mobile Internet Devices).

The category of “Computer Peripherals” will probably grow faster than ABI seems to anticipate. Wireless keyboards and mice currently use either Bluetooth or proprietary radios, but the new Wi-Fi Alliance specification “Wi-Fi Direct” will change that. Ozmo is aiming to use Wi-Fi to improve battery life in mice and keyboards two- to three-fold. Since all laptops, most all-in-one PCs and many regular desktops already have Wi-Fi built in (at least double the Bluetooth attach rate), this may be an attractive proposition for the makers (and purchasers) of wireless mice and keyboards. Booming sales of tablet PCs may further boost sales of wireless keyboards and mice.

The most convenient route between telephone service providers is through the PSTN, since you can’t offer phone service without connecting to it. Because of this convenience telephone service providers tend to consider PSTN connectivity adequate, and don’t take the additional step of delivering IP connectivity. This is unfortunate because it inhibits the spread of high quality wideband (HD Voice) phone calls. For HD voice to happen, the two endpoints must be connected by an all-IP path, without the media stream crossing into the PSTN.

For example, OnSIP is my voice service provider. Any calls I make to another OnSIP subscriber complete in HD Voice (G.722 codec), because I have provisioned my phones to prefer this codec. Calls I make to phone numbers (E.164 numbers) that don’t belong to OnSIP complete in narrowband (G.711 codec), because OnSIP has to route them over the PSTN. If OnSIP was able to use an IP address for these calls instead of an E.164 number, it could avoid the PSTN and keep the call in G.722.

Xconnect has just announced an HD Voice Peering initiative, where multiple voice service providers share their numbers in a common directory called an ENUM directory. When a subscriber places a call, the service provider looks up the destination number in the ENUM directory; if it is there, the directory returns a SIP address (URI) to substitute for the phone number, and the call can complete without going over the PSTN. About half the participants in the Xconnect trial go a step further than ENUM pooling: they interconnect (“peer”) their networks directly through an Xconnect router, so the traffic doesn’t need to traverse the public Internet. [See correction in comments below]

There are other voice peering services that support this kind of HD connection, notably the VPF (Voice Peering Fabric). The VPF has an ENUM directory, but as the name suggests, it does not offer ENUM-only service; all the member companies interconnect their networks on a VPF router.

Some experts maintain that for business-grade call quality, it is essential to peer networks rather than route over the public Internet. Packets that traverse the public Internet are prone to delay and loss, while properly peered networks deliver packets quickly and reliably. In my experience, this has not been an issue. My access to OnSIP and to Vonage is over the public Internet, and I have never had any quality issues with either provider. From this I am inclined to conclude that explicit peering of voice networks is overkill, and that if you have a VoIP connection all that is needed for HD voice communication is to list your phone number in an ENUM directory. Presumably the voice service providers in Xconnect’s trial that are not peering share this opinion.

Xconnect and the VPF only add to their ENUM directories the numbers owned by their members. But even if you are not a customer of one of their members, you can still list your number in a public ENUM directory, e164.org. This way, anybody who checks for your number in the directory can route the call over the Internet. Calls made this way don’t need to use SIP trunks, and they can complete in HD voice.
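
Mechanically, an ENUM lookup is just a DNS query for NAPTR records: reverse the digits of the E.164 number, join them with dots, and append the directory’s zone. A minimal sketch using the dnspython library – the phone number is a made-up example, and a real client would honor the NAPTR order and preference fields:

```python
import dns.resolver  # pip install dnspython

def enum_name(e164: str, zone: str = "e164.org") -> str:
    """Map +1-555-123-4567 to 7.6.5.4.3.2.1.5.5.5.1.e164.org."""
    digits = [c for c in e164 if c.isdigit()]
    return ".".join(reversed(digits)) + "." + zone

def lookup_sip_uri(e164: str):
    """Return a SIP URI for the number if the directory lists one."""
    try:
        answers = dns.resolver.resolve(enum_name(e164), "NAPTR")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # not listed: the call falls back to the PSTN
    for rdata in answers:
        if b"E2U+SIP" in rdata.service.upper():
            # The NAPTR regexp field looks like !^.*$!sip:user@host!
            return rdata.regexp.split(b"!")[2].decode()
    return None

print(lookup_sip_uri("+1-555-123-4567"))  # made-up number
```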

If you happen to have an Asterisk PBX, you can easily provision it to check a list of ENUM directories before it places a call.

On a recent extended trip to England, I discovered Stanza, an e-reader application for the iPhone. Not only did it demonstrate for me that the iPad will obsolete the Kindle, but also that the iPhone can do a pretty good job of it already.
Surprisingly, the iPhone surpasses a threshold of usability that makes it more of a pleasure than a pain to use as an e-reader. This is due to the beautiful design and execution of Stanza. The obvious handicap of the iPhone as an e-reader is the small screen size, but Stanza does a great job of getting around this. It turns out that reading on the iPhone is quite doable, and better than a real book in several ways:

It is an entire library in your pocket – you can have dozens of books in your iPhone, and since you have your iPhone with you in any case, they don’t take any pocket space at all.

You can read it in low-light conditions without any additional light source.

You can read it even when you are without your spectacles, since you can easily resize the text as big as you like.

It doesn’t cost anything. If you enjoy fiction, there is really no need to buy a book again, since there are tens of thousands of good books in the public domain downloadable free from sites like Gutenberg.org and feedbooks.com. Almost all the best books ever written are on these sites, including all the Harvard Classics and numerous more recent works by great authors like William James, James Joyce, Joseph Conrad and Philip K. Dick.

You can search the text in a book and instantly find the reference you are looking for.

It has a built-in dictionary, so any word you don’t know you can look up instantly.

It keeps your place – every time you open the app it takes you to the page you were reading.

You can make annotations. This isn’t really better than a paper book, since you can easily write marginal notes in one of those, but with Stanza you don’t have to hunt around for a pencil in order to make a note.

You don’t have to go to a bookstore or library to get a book. This is a mixed benefit, since it is always so enjoyable to hang out in bookstores and libraries, but when you suddenly get a hankering to take another look at a book you read a long time ago, you can just download it immediately.

All these benefits will apply equally to the iPad and the others in the 2010 crop of tablet PCs, which will also have the benefit of larger screens. But Stanza on the iPhone has shown me that good user interface design can compensate for major form factor handicaps.

The FCC has launched a broadband speed and quality test, presumably to gather information about the real state of broadband in the US. This is a great initiative and I encourage you to go and run the test.

I tried it myself, and it wouldn’t let me take the test because I put in an English postcode, since that is where I happen to be this week. So I put in my Dallas zip code instead and it ran the test. I hope that there is some check in there that compares the zip code to the geographical location of the tester’s IP address, and discards results that don’t match, or the data will presumably be worthless.

Something is wrong with broadband access in the US. It was ranked 15th in the world in 2008 on a composite score of household penetration, speed and price.

Google is setting out to demonstrate a better way, though other countries already offer such demonstrations. The current international benchmark for price and speed is Stockholm at $11 per month for 100 Mbps. There are similar efforts in the US, for example Utopia in Utah. One of the key features of these implementations of fiber as a utility is that the supplier of the fiber does not supply content, since this would impose a structural conflict of interest.

Google does supply content, so it will be interesting to see how it deals with this conflict. I doubt there will be any problems in the short term, but in the long term it will be very hard to resist the impulse to use all the competitive tools available; “Don’t be evil” isn’t a useful guideline to a long, gentle slope.

OK, it’s easy to be cynical, but at least Google is trying to do something to improve the broadband environment in the US, and it may be a long time before the short term allure of preferred treatment for its own content outweighs the strategic benefit of improved national broadband infrastructure. And this initiative will undoubtedly help to accelerate the deployment of fiber to the home, if only by goading the incumbents.

Rethink mentions iCall and Skype as beneficiaries of Apple’s decision to allow VoIP on the 3G data channel. Another notable one is Fring. Google Voice is not yet in this category, since it uses the cellular voice channel rather than the data channel, so it is not strictly speaking VoIP; the same applies to Skype for the iPhone.

According to Boaz Zilberman, Chief Architect at Fring, the Fring iPhone client needed no changes to implement VoIP on the 3G data channel; it was simply a matter of reprogramming the Fring servers to stop blocking it. Apple also required a change to Fring’s customer license agreement, requiring customers to use this feature only if their service provider permits it. AT&T now allows it, but non-US carriers may have different policies.

Boaz also mentioned some interesting points about VoIP on the 3G data channel compared with EDGE/GPRS and Wi-Fi. He said that Fring only uses the codecs built into handsets, to avoid the battery drain of software codecs. His preferred codec is AMR-NB; he feels the bandwidth constraints and packet loss inherent in wireless communications negate the audio quality benefits of wideband codecs. 3G data calls often sound better than Wi-Fi calls – the increased latency (100 ms additional round-trip according to Boaz) is balanced by reduced packet loss. 20% of Fring’s calls run on GPRS/EDGE, where the latency is even greater than on 3G; total round-trip latency on a GPRS VoIP call is 400-500 ms according to Boaz.
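To put a number on why narrowband makes sense here, a back-of-the-envelope calculation of AMR-NB’s on-the-wire bandwidth over RTP; the 20 ms packetization and bare IPv4/UDP/RTP headers are my assumptions, and real payload formats add a little framing overhead on top:

# On-the-wire bit rate of AMR-NB at its highest mode (12.2 kbps),
# packetized at one 20 ms codec frame per RTP packet.
codec_bps = 12_200
frame_ms = 20
pps = 1000 // frame_ms                       # 50 packets per second
payload_bits = codec_bps * frame_ms // 1000  # 244 bits per frame
header_bits = (20 + 8 + 12) * 8              # IPv4 + UDP + RTP = 320 bits
wire_bps = pps * (payload_bits + header_bits)
print(wire_bps / 1000)                       # ~28.2 kbps each way

Note that the headers alone cost more than the voice payload; a wideband codec at roughly twice the payload rate pushes the total toward 40 kbps, which is trivial for Wi-Fi but uncomfortable on a lossy, congested cellular data link.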

As for handsets, Boaz says that Symbian phones are best suited for VoIP, the Nokia N97 being the current champion. Windows Mobile has poor audio path support in its APIs. The iPhone’s greatest advantage is its user interface; its disadvantages are lack of background execution and lack of camera APIs. Android is fragmented: each Android device requires different programming to implement VoIP.

Well, the Apple iPad is out. Time will tell whether its success will equal that of the iPhone, the Apple TV or the MacBook Air. I’m confident it will do better than the Newton. The announcement contained a few interesting points, the most significant of which is that it uses a new Apple proprietary processor, the A4. Some reviewers have described the iPad as very fast and with good battery life; these are indications that the processor is power efficient. Because of its software similarities to the iPhone, the architecture is probably ARM-based, with special P.A. Semi sauce for power and speed. On the other hand, it could be a spin of the PWRficient CPU, which is PowerPC-based. In that light, it is interesting to review Apple’s reasons for abandoning the PowerPC in 2005. Maybe Apple’s massive increase in sales volume since then has made Intel’s economies of scale less overwhelming?

At CES last week Josh Silverman, Skype’s CEO, mentioned that Skype’s international voice traffic went up 75% in 2009. This has now been approximately confirmed by Telegeography, which puts Skype’s share of international voice traffic at 13%, up from 8% in 2008. That’s an increase of over 60% year on year.

Josh Silverman also mentioned that Skype was being downloaded at a rate of well over 300,000 downloads per day. Yes, per day. This number roughly matches CKIPE’s observation that Skype added 2.5 million new users in the 11 days after Christmas 2009: 300,000 a day over 11 days is 3.3 million downloads, consistent with 2.5 million new users once you allow for upgrades and repeat downloads.

When Android came out a couple of years ago, Matt Lewis of Rethink Wireless saw it as an opportunity to avoid the fragmentation that open source projects are prone to:

Google is not a handset OS company. Android is simply a means to an end – the end being to create a vast new expanse of real estate which Google can beam its advertising inventory to. This demands a level of consistency and interoperability from Android so that, regardless of who implements the platform on whichever device, application compatibility is maintained.

As for Android developers, many are angry that there is as yet no SDK for the Nexus One. This, in turn, has highlighted the issue of fragmentation: different OS releases and even different devices require different SDKs, with limited compatibility between apps written for the various versions. Until there is an SDK for Android 2.1, the latest OS upgrade, which so far runs only on the Nexus One, programmers cannot be sure their apps will work properly with the new handset.

At the HD Voice Summit in Las Vegas last week, Alan Percy of AudioCodes gave a presentation on the state of HD Voice deployment, citing three levels: announced interest, trials and service deployment.

At CES last week I stopped at a booth lined with LCD TVs from several different manufacturers, displaying 3D images to people wearing polarizing glasses. According to a person manning the booth, the technique used in these TVs is to encode the two images on alternate scan lines, with a polarizing filter attached to the screen, arranged in horizontal stripes, one stripe per row of pixels, each row polarized orthogonally to the adjacent ones. The effect was very good, roughly equivalent to cinema 3D.

The booth also had a DLP projector projecting 3D onto a screen. This required bulkier glasses. According to the person in the booth, the technique used here was to encode the left and right images in alternating frames, with shuttering in the glasses synchronized with the display, occluding the right eye when a left frame is showing, and vice versa. This flavor of 3D didn’t work for me – maybe my glasses were broken…
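For concreteness, here is a sketch in Python of how the two encodings the booth staff described might pack a stereo pair; the NumPy array shapes are mine, purely for illustration:

import numpy as np

# Two source views of a stereo pair, each height x width x RGB.
h, w = 1080, 1920
left = np.zeros((h, w, 3), dtype=np.uint8)
right = np.zeros((h, w, 3), dtype=np.uint8)

# Passive/polarized: interleave alternate scan lines. The screen's
# striped polarizer routes even rows to one eye and odd rows to the
# other, at the cost of half the vertical resolution per eye.
interleaved = np.empty_like(left)
interleaved[0::2] = left[0::2]
interleaved[1::2] = right[1::2]

# Active shutter: alternate whole frames at double the base frame
# rate; the glasses occlude the eye that should not see the current
# frame, in sync with the display.
frame_sequence = [left, right]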

50% of consumers say they would change their telephone service provider to get better sound quality, according to Tom Lemaire, Sr. Director of Business Development at Orange/France Telecom North America, speaking at the CES HD Voice Summit this week (Orange/France Telecom has the largest deployment of HD Voice of any traditional telco). Rich Buchanan, Chief Marketing Officer at Ooma, said at the same session that his surveys show that 65% of consumers would change their provider to get better voice quality.

Bearing in mind that we know from observation that consumers value both mobility and price above call quality, these survey numbers fall into the “interesting if true” category.

Lemaire and Buchanan also said that their logs show that the average call in HD lasts longer than the average narrowband call, though they didn’t give numbers.

The tablet wars are imminent, with Microsoft, Google and Apple breaking out their big guns. Here’s what you will be doing with yours later this year:

Internet browser of course: think iPhone experience with a bigger screen. It will be super-fast with 802.11n in your home, and somewhat slower when you are out and about, tethering to your cell phone for wide-area connectivity. You don’t need a cellular connection in the Internet Tablet itself, though the cellcos wish you would.

TV accessory: treat it as a personal picture-in-picture display. View the program guide without disturbing the other people watching the main screen. Use it for voting on shows like American Idol. Use it as a remote to change channels and set up recordings.

TV replacement: a 10 inch screen at two feet is the same relative size as a 50 inch screen at ten feet (the arithmetic is sketched after this list). Use it with Hulu and the other streaming video services.

VideoPhone: some Internet Tablets will have hi-res user-facing cameras and high definition microphones and speakers: the perfect Skype phone to keep on your coffee table. How about on your fridge door for an always-on video connection to the grandparents? Or in a suitable charging base, a replacement office desk phone.

Electronic picture frame: sure it’s overkill for such a trivial application, but when it’s not doing anything else, why not?

eBook reader: maybe not in 2010, but as screen and power technology evolve the notion of a special-function eBook reader will become as quaint as a Wang word processor. (Never heard of a Wang word processor? I rest my case.)

Home remote: take a look at AMX. This kind of top-of-the-line home control will be available to the masses. Set the thermostat, set the burglar alarm, look at the front door webcam when the doorbell rings…

Game console: look at all the games for the iPhone. Many of them will work on Apple’s iSlate from day one. And you can bet there will be plenty of cool games for Android, and even Windows-based Internet Tablets.

PND (personal navigation device) display: Google Maps on the iPhone is miraculously good, but it’s not perfect. The display is way too small for effective in-car navigation. It’s possible that some Internet Tablets will have GPS chips in them (GPS only adds a few dollars to the bill of materials), but for this application there’s no need. Tether it to your cell phone for the Internet connectivity and the GPS, and use the tablet for display and input only.
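As promised above, the TV-replacement arithmetic: the angle a screen subtends at the eye depends only on the ratio of size to distance, so a 10 inch diagonal at 24 inches matches a 50 inch diagonal at 120 inches.

from math import atan, degrees

def visual_angle_deg(size_inches, distance_inches):
    # Angle subtended at the eye by a screen of the given diagonal.
    return degrees(2 * atan(size_inches / (2 * distance_inches)))

print(visual_angle_deg(10, 24))    # 10" at 2 ft  -> ~23.5 degrees
print(visual_angle_deg(50, 120))   # 50" at 10 ft -> ~23.5 degrees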

2010 will be the year of the Internet Tablet. The industry has pretty much converged on the form factor: ten-inch-plus screen, touch interface, Wi-Fi connectivity. Still up in the air are the details that will provide differentiation, like cellular connectivity, cameras, speakers and microphones. Apple will jump-start the category, but there will quickly be a slew of contenders at sub-$200 price points.

Several technology advances have converged to make now the right time. Low-cost, low energy ARM processors like the Qualcomm Snapdragon have enough processing muscle to drive PC-scale applications, and their pricing piggy-backs on the manufacturing scale of phones. 802.11n is fast enough for responsive web-based applications and HD video streaming. LCD screens continue to get cheaper. Personal Wi-Fi networks enable tethering and wireless keyboards for when you need them.

This is also the perfect form factor for grade school kids. Once the screen resolutions get high enough, books will disappear almost overnight. No more backs bent under packs laden with schoolbooks. Just this.

The fall 2009 crop of ultimate smartphones looks more penultimate to me, with its lack of 11n. But a handset with 802.11n has come in under the wire for 2009. Not officially, but actually. Slashgear reports a hack that kicks the Wi-Fi chip in the HTC HD2 phone into 11n mode. And the first ultimate smartphone of 2010, the HTC Google Nexus One is also rumored to support 802.11n.

These are the drops before the deluge. Questions to chip suppliers have elicited mild surprise that there are still no Wi-Fi Alliance certifications for handsets with 802.11n. All the flagship chips from all the handset Wi-Fi chipmakers are 802.11n. Broadcom is already shipping volumes of its BCM4329 11n combo chip to Apple for the iTouch (and I would guess the new Apple tablet), though the 3GS still sports the older BCM4325.

There are several good reasons why Wi-Fi 802.11n hasn’t made its way into mobile phone hardware just yet. Increased power consumption is just not worth it if the speed will be limited by other factors such as an under-powered CPU or slow memory…

But is it true that 802.11n increases power consumption at a system level? In some cases it may be: the Slashgear report linked above says: “some users have reported significant increases in battery consumption when the higher-speed wireless is switched on.”

This reality appears to contradict the opinion of one of the most knowledgeable engineers in the Wi-Fi industry, Bill McFarland, CTO at Atheros, who says:

The important metric here is the energy-per-bit transferred, which is the average power consumption divided by the average data rate. This energy can be measured in nanojoules (nJ) per bit transferred, and is the metric to determine how long a battery will last while doing tasks such as VoIP, video transmissions, or file transfers.

For example, Table 1 shows that for 802.11g the data rate is 22 Mbps and the corresponding receive power-consumption average is around 140 mW. While actively receiving, the energy consumed in receiving each bit is about 6.4 nJ. On the transmit side, the energy is about 20.4 nJ per bit.

Looking at these same cases for 802.11n, the data rate has gone up by almost a factor of 10, while power consumption has gone up by only a factor of 5, or in the transmit case, not even a factor of 3.

Thus, the energy efficiency in terms of nJ per bit is greater for 802.11n.

Here is his table that illustrates that point [table not reproduced; source: Wireless Net DesignLine, 06/03/2008].
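The arithmetic is easy to reproduce, since energy per bit is just average power divided by average data rate. In the sketch below, the 802.11g numbers are the ones McFarland quotes; the 802.11n line simply applies his stated ratios (rate up ~10x, receive power up ~5x) rather than measured figures:

# Energy per bit = average power / average data rate.
# Handy unit fact: mW / Mbps == nJ per bit.
def nj_per_bit(power_mw, rate_mbps):
    return power_mw / rate_mbps

print(nj_per_bit(140, 22))           # 802.11g receive: ~6.4 nJ/bit
print(nj_per_bit(140 * 5, 22 * 10))  # 802.11n receive: ~3.2 nJ/bit

Per bit, 802.11n comes out roughly twice as efficient, which is exactly McFarland’s point.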

The discrepancy between this theoretical superiority of 802.11n’s power efficiency, and the complaints from the field may be explained several ways. For example, the power efficiency may actually be better and the reports wrong. Or there may be some error in the particular implementation of 802.11n in the HD2 - a problem that led HTC to disable it for the initial shipments.

Either way, 2010 will be the year for 802.11n in handsets. I expect all dual-mode handset announcements in the latter part of the year to have 802.11n.

As to why it took so long, I don’t think it did, really. The chips only started shipping this year, and there is a manufacturing lag between chip and phone. I suppose a phone could have started shipping around the same time as the latest iTouch, which was September. But 3 months is not an egregious lag.

The Rethink piece points out that with “3% of smartphone users now consuming 40% of network capacity,” the carrier has to draw a line. Presumably because if 30% of AT&T’s subscribers were to buy iPhones, they would consume 400% of the network’s capacity.

Wireless networks are badly bandwidth constrained. AT&T’s woes with the iPhone launch were caused by lack of backhaul (wired capacity to the cell towers), but the real problem is on the wireless link from the cell tower to the phone.

This provokes a dilemma - not just for AT&T but for all wireless service providers. Ideally you want the network to be super responsive, for example when you are loading a web page. This requires a lot of bandwidth for short bursts. So imposing a bandwidth cap, throttling download speeds to some arbitrary maximum, would give users a worse experience. But users who use a lot of bandwidth continuously - streaming live TV for example - make things bad for everybody.

The cellular companies think of users like this as bad guys, taking more than their share. But actually they are innocently taking the carriers up on the promises in their ads. This is why the Rethink piece says “many observers think AT&T - and its rivals - will have to return to usage-based pricing, or a tiered tariff plan.”

Actually, AT&T already appears to have such a policy, reserving the right to charge more if you use more than 5 GB per month. This is a lot, unless you are using your phone to stream video: it’s over 10,000 average web pages or 10,000 minutes of VoIP. You can avoid running over the cap by limiting your streaming videos and your videophone calls to when you are in Wi-Fi coverage. You can still watch videos when you are out and about by downloading them in advance, iPod style.
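The arithmetic behind those two figures, with my assumed per-page size and VoIP bit rate spelled out (roughly 500 KB per average page, and the ~64 kbps on-the-wire rate of a typical narrowband call):

cap_bytes = 5 * 1000**3               # 5 GB monthly cap

page_bytes = 500 * 1000               # assume ~500 KB per web page
print(cap_bytes // page_bytes)        # -> 10,000 pages

voip_bps = 64_000                     # assume ~64 kbps on-the-wire VoIP
print(cap_bytes * 8 / voip_bps / 60)  # -> ~10,400 minutes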

I just clicked on a calendar appointment enclosure in an email. Nothing happened, so I clicked again. Then suddenly two copies of the appointment appeared in my iCal calendar. Why on earth did the computer take so long to respond to my click? I waited an eternity - maybe even as long as a second.

The human brain has a clock speed of about 15 Hz. So anything that happens in less than 70 milliseconds seems instant. The other side of this coin is that when your computer takes longer than 150 ms to respond to you, it’s slowing you down.

I have difficulty fathoming how programmers are able to make modern computers run so slowly. The original IBM PC ran at well under 2 MIPS. The computer you are sitting at is around ten thousand times faster. It’s over 100 times faster than a Cray-1 supercomputer. This means that when your computer keeps you waiting for a quarter of a second, equally inept programming on the same task on an eighties-era IBM PC would have kept you waiting 40 minutes. I don’t know about you, but I encounter delays of over a quarter of a second with distressing frequency in my daily work.
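The scaling arithmetic, for the record (the 10,000x speedup is the rough figure from the paragraph above):

print(1 / 15 * 1000)           # one 15 Hz brain "tick": ~67 ms
delay_s = 0.25                 # a quarter-second stall today
speedup = 10_000               # modern PC vs. original IBM PC, roughly
print(delay_s * speedup / 60)  # the same stall then: ~42 minutes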

I blame Microsoft. Around Intel, the joke about performance was “what Andy giveth, Bill taketh away.” This was actually a winning strategy for decades of Microsoft success: concentrate on features and speed of implementation, and never waste time optimizing for performance because the hardware will catch up. It’s hard to argue with success, but I wonder if a software company obsessed with performance could be even more successful than Microsoft.

I wrote earlier about FCC chairman Julius Genachowski’s plans for regulations aimed at network neutrality. The FCC today came through with a Notice of Proposed Rule Making; the relevant documents (the NPRM itself, a summary presentation and a press release) are on the FCC website.

The NPRM itself is a hefty document, 107 pages long; if you just want the bottom line, the Summary Presentation is short and a little more readable than the press release. The comment period closes in mid-January, and the FCC will respond to the comments in March. I hesitate to guess when the rules will actually be released - this is hugely controversial: 40,000 comments filed to date. Here is a link to a pro-neutrality advocate. Here is a link to a pro-competition advocate. I believe that the FCC is doing a necessary thing here, and that the proposals properly address the legitimate concerns of the ISPs.