Location privacy is getting a fair bit of attention at the moment, and I was quite interested to read this blog about how informed users need to be about applications that use your smartphone’s AGPS to determine location information.

To briefly cover the basics of how location positioning uses AGPS on modern smartphones: a phone’s GPS module often needs “assistance data” to help it determine an accurate location “fix” to represent your position on a digital map. This assistance data is delivered over an internet connection (e.g. over 3G), which downloads additional positional information, known as “Assisted GPS” data, to the phone’s location software. The software uses it in conjunction with the standard GPS data the phone reads directly over radio from whichever satellites are “in view” overhead.

Generally four satellites are needed to determine a position accurate to about 50 metres. With AGPS, the phone’s software can achieve a Time To First Fix (TTFF) more quickly and potentially more accurately than by relying on GPS signals alone; in urban environments AGPS is crucial to providing a sound service experience. For assistance data to be effective, the smartphone’s location software must take signal strength readings and collect data from local WiFi networks as well as local cell towers. This data is uploaded over the Internet to the service provider’s AGPS server, which uses some highly sophisticated algorithms together with a stored base of geo-coded WiFi networks to calculate the user’s position accurately, usually to within 20 metres.
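As a rough illustration of the WiFi side of this, here is a toy weighted-centroid calculation in Python. The access points, coordinates and the RSSI-to-weight conversion are all invented for this example; real WPS algorithms are far more sophisticated, but the principle of combining geo-coded WiFi signals into a position estimate is the same.

```python
# Hypothetical geo-database of WiFi access points: BSSID -> (lat, lon).
# Illustrative coordinates only; a real WPS database holds millions of these.
GEO_DB = {
    "aa:bb:cc:00:00:01": (51.5010, -0.1420),
    "aa:bb:cc:00:00:02": (51.5014, -0.1432),
    "aa:bb:cc:00:00:03": (51.5006, -0.1428),
}

def estimate_position(scan):
    """Weighted-centroid position estimate from a WiFi scan.

    `scan` maps BSSID -> RSSI in dBm (e.g. -45 is strong, -90 is weak).
    Stronger signals (nearer APs) get a larger weight, so the estimate
    is pulled towards the access points we are closest to.
    """
    total_w = lat_acc = lon_acc = 0.0
    for bssid, rssi in scan.items():
        if bssid not in GEO_DB:
            continue  # AP not in the geo-database; ignore it
        # Convert RSSI (dBm, logarithmic) to a rough linear power weight
        weight = 10 ** (rssi / 10.0)
        lat, lon = GEO_DB[bssid]
        lat_acc += lat * weight
        lon_acc += lon * weight
        total_w += weight
    if total_w == 0:
        return None  # no known APs in view
    return (lat_acc / total_w, lon_acc / total_w)

fix = estimate_position({
    "aa:bb:cc:00:00:01": -50,   # strong signal: we are close to this AP
    "aa:bb:cc:00:00:02": -80,   # weak
    "aa:bb:cc:00:00:03": -75,
})
```

Because the weights are linear power values, the strongest access point dominates the estimate, which is roughly the behaviour you want from a simple centroid scheme.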

The interesting point about using WiFi network signals to provide assistance data is that a massive database of geo-coded WiFi networks is needed for it to work as an effective positioning service. This database needs to be centralised and in the control of a company providing a location service. With a comprehensive “geo-database” of referenced WiFi signals in place, it represents a hugely powerful positioning tool to support mapping and other mobile location services. The use of location services in city environments has become very important: there are now a multitude of location-based applications which depend on fast and accurate positioning, and WiFi Positioning Systems (WPS) have become the most important basis for supporting them. WPS has now become so efficient and accurate that in urban environments GPS is generally not required.

The two companies regarded as having the largest geo-databases of WiFi signals are Skyhook Wireless and Google. Both obtain this data by “war driving”, which involves slowly driving a specially equipped vehicle through every relevant street, picking up every WiFi signal, geo-coding it and storing it in a central database. Needless to say, collecting and formatting this raw geo-data is a hugely expensive affair, and deploying a service based on this technique has already brought Google and Skyhook into conflict.

As well as war driving, there is a less costly method of acquiring positioning data: the phone passively collects readings of local WiFi networks, records them to a database on the phone with a geo-coded reference, and uploads them as raw geo-data to a central server maintained by a service provider or device manufacturer.

Apple, Google and Nokia’s Ovi are probably the most recognizable names that use this technique. To use it effectively, these companies need exclusive control of the geo-coding API on the smartphone and actively prevent services other than their own from using it. The amount of data that is stored will depend on how often you use any type of location service on your smartphone. Gradually, over time, this technique builds a highly effective geo-database of WiFi networks, cell tower information and GPS co-ordinates.

The issue with this “crowd sourcing” technique is that it is much less well known, and the service providers that use it to obtain raw geo-data and build large geo-databases would rather you were not aware that you are actively assisting them, because you might start to question what the data is being used for and what the risk is of your privacy being violated.

Modern smartphones have become very powerful geo-coding devices, capable of storing historical data based on the different signals the device has recorded while in use. Combine this with the availability of so many WiFi networks whose signals can be instantly geo-coded, and the smartphone becomes the ideal geo-referencing medium which service providers of mapping applications are using to build and refine their central geo-databases.

Currently, there are no rulings or good-practice guidelines that instruct service providers on what they are not allowed to do with this data, and this is one key reason why fears arise over the potential abuse of user location data. The privacy concern with this type of location service is: when does actual geo-data become private information? Companies that deploy crowd-sourcing will swear blind it is only ever used to improve the quality of their service, and not to generate user “profiles” which monitor an individual’s movements.

Once the application has obtained the location information from the smartphone platform API, that data is pushed to a server in the control of the service provider. You can be assured that the data is accompanied by a date and time stamp, and will be referenced with some sort of user ID and stored for analysis at some later date. Does this overstep the imaginary line, or is it just good business practice to understand how customers are using the service?
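To make that concrete, here is a sketch of the kind of record that might be uploaded. The field names and structure are my own invention for illustration, not any provider’s actual format:

```python
import json
import time
import uuid

def build_location_report(user_id, lat, lon, accuracy_m):
    """Assemble the kind of payload a location service might push to
    its server: a position fix stamped with a time and a user ID.
    Purely illustrative; real providers define their own schemas."""
    return {
        "user_id": user_id,                    # some sort of user/device ID
        "timestamp": int(time.time()),         # date and time stamp
        "position": {"lat": lat, "lon": lon},  # the fix from the platform API
        "accuracy_m": accuracy_m,              # estimated accuracy in metres
    }

# A pseudonymous ID still lets the server link reports into a history,
# which is exactly where the profiling concern comes from.
report = build_location_report(str(uuid.uuid4()), 51.5007, -0.1246, 20)
payload = json.dumps(report)   # what gets pushed to the provider's server
```

The point of the sketch is that even this minimal record, accumulated over time against one ID, is enough raw material to reconstruct a user’s movements.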

User data is very different from user information. Data becomes information when it is manipulated to provide some new context on how the individual is behaving, which is effectively user profiling. When a company crosses this line without informing the individual concerned, we can say quite emphatically that the individual’s privacy has been violated. The issue is that we trust these companies not to go down this route, but in reality we are powerless to stop them. So is this in itself a cause for concern that warrants the need for control? Unfortunately, the nature of the technology implies there is virtually no basis for control.

If we want to control the availability of user data then we have to identify with whom the responsibility resides to ensure there is no risk of our privacy being violated. Identifying who this might be is not straightforward, since it could be down to any of the following:

1. The service provider that delivers the location service (such as Google Maps or Nokia’s Ovi Maps)
2. A 3rd party supplier of the raw geo-data that is used by the application to enable positioning (such as Skyhook)
3. The application store that initially sold you the service (such as Ovi or Android Market)
4. The mobile network operator that provided the smartphone
5. The mobile device manufacturer (such as Apple, Motorola or Nokia)
6. The mobile software platform (such as Android, Apple iOS or Symbian)
7. The user who downloads the service and accepts the End User License Agreement

As we can see, the basis for identifying responsibility is extremely fragmented, and the problem is exacerbated even further when you realise that each of these players will be governed or regulated differently according to the country in which they’re based.

Since comprehensive guarantees covering all parties are largely impossible, self-regulation is perhaps a good common-sense route forward to provide assurances at some level. The principal way in which self-regulation can help avert consumer mistrust is for major service providers to take the initiative and be entirely open about how user location data is used. The problem here is that no firm is leading by example and being open about the way user data is managed, largely because by being open a company potentially exposes itself to unwanted scrutiny.

By having no basis to protect user privacy through regulation, we instinctively look to the major service providers for assurances. The ultimate assurance would be more openness on their part about how this data is used, and specifically guarantees that personal profiling is not performed. However, rather than actually being more open, the service providers prefer to avoid explicitly talking about personal data, saying instead that they have adopted a dogma that would never allow them to behave in an “evil” way.

With no real assurances from the major service providers, we therefore have to establish our own understanding of the risks. This means we need better awareness of the technology that enables these services, because this technology has quickly become standardised across most types of mobile phone. In a technology sense, privacy violation comes down to the use of a specific API on the smartphone which will listen for, then geo-code, received WiFi signals without the user knowing anything about it. Access to this API by a third-party service is regulated by the platform provider, so some element of user protection is already in place, but this does not always result in full privacy protection. Mobile location services are becoming increasingly widespread and sophisticated, so if location privacy is an issue for the future then better general awareness today of how location technology is used is no bad thing, and will help users better understand the real risks of using mobile location services.

Google’s focus seems to be a mixture of content aggregation and content delivery, packaged with information services leveraged from the Internet. Its success depends crucially on whether or not we are prepared to merge our conventional viewing experience with our daily internet activity. This strategy cuts directly into current service providers’ pricing models, which bundle access and content as a single package, much like the approach of Virgin Media and British Telecom in particular. It would force the network providers back down the path of offering only network access and reduce their ability to bundle in content-based services.

Apple’s strategy, on the other hand, is services and hardware: a system which complements your standard viewing experience by allowing you to access and enjoy your personal content along with content you have purchased from iTunes. This is a more straightforward strategy and is clearly looking to identify with how content is being consumed today. The emphasis with Apple TV is on creating a personal network of easily accessible personal and purchased content, consumed under your control using devices manufactured by Apple.

Describing the two strategies in this way, you may think there is not a lot of crossover, but they fundamentally clash on a key technological issue that could greatly affect the success of one over the other: how the home WiFi network can be used as a dedicated network for sharing or distributing content to other devices around the home. Google seems to be pointing towards DLNA as the basis for allowing different home devices, such as the TV, the games console, the laptop and the mobile phone, to easily connect with one another. Apple, on the other hand, is very definitely not using DLNA, preferring its own AirPlay technology. Both technologies use your home WiFi network to control and distribute content between compatible devices, so they fundamentally work in the same way. Importantly, these forms of local connectivity allow you to consume content in different ways using different devices, which is important to both Apple’s and Google’s ambitions in your living room.

Although there is no clear endorsement of DLNA from Google, it is an open specification and would seem to fit Google’s need for a reliable service enabler that easily interoperates and shares content between different vendors’ devices, e.g. a mobile phone manufactured by Motorola, a TV set from Samsung, or a home PC from Sony. These are all companies that currently have a stake-holding in Android, so you would imagine they are firmly interested in the opportunity to cross-sell home entertainment hardware that can run Google TV over a WiFi home network. A potential issue for Google is that DLNA is based on Microsoft technology: DLNA uses UPnP as its primary networking protocol, and Microsoft also plays a large and important role within the DLNA organisation, which manages the certification of services that use DLNA. This may indeed not be an issue, since plenty of Microsoft-originated technology, UPnP included, has become ubiquitous across all forms of smartphone, but you somehow get the impression the stakes are higher when it comes to home entertainment.
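For a flavour of how UPnP-based connectivity works on a home network, here is a minimal Python sketch of SSDP, the discovery step DLNA devices use to find one another over WiFi. The M-SEARCH message format, multicast address and MediaServer search target come from the UPnP specifications; the helper names are my own.

```python
import socket

# SSDP (the UPnP discovery protocol) uses this well-known
# multicast address and port on the local network.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="urn:schemas-upnp-org:device:MediaServer:1"):
    """Build an M-SEARCH request asking DLNA media servers on the home
    WiFi network to announce themselves. The header names and the
    MediaServer search target are defined by the UPnP specifications."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        "MX: 2",                 # seconds devices may wait before replying
        f"ST: {search_target}",
        "", "",                  # terminating blank line
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=2.0):
    """Send the M-SEARCH over UDP multicast and collect any replies.
    Each reply is an HTTP-style response from a UPnP/DLNA device,
    including a LOCATION header pointing at its device description."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            replies.append((addr[0], data.decode("ascii", "replace")))
    except socket.timeout:
        pass  # no more devices responding within the window
    finally:
        sock.close()
    return replies

msg = build_msearch()
```

Running `discover()` on a network with a DLNA media server will return that device’s responses; the point here is simply that the whole discovery handshake rides over ordinary UDP on the home WiFi network, which is exactly the layer both Google’s and Apple’s approaches depend on.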

To what extent Google actually endorses DLNA as a complementary technology to Google TV remains to be seen but the fact remains that the battle for the living room is coming.

You might think there is nothing new in that statement, and in general mobile evolution has been vast and wide-reaching, but if you take a closer look, traditional mobile operators’ services haven’t progressed at the same pace, even since the full roll-out of 3G.

What has changed is the way we access and use the Internet on a mobile device. The Internet is home to a vast array of communications services, for which we need only a constant connection to get instant online presence, chat, video calling, content sharing and multimedia streaming. So on one side we have seemingly endless innovation and on the other, very little.

But this new evolution in mobile services based on Internet access has its drawbacks. There is no consistent user experience between the service providers, the clients could be better “integrated” with the mobile phone, there is virtually no customer support, service availability can be patchy, and very importantly, they can be vulnerable to security flaws.

Enabling mobile operators to host these new experiences in mobile multimedia and social networking, as well as integrating the services with mobile devices, is an important challenge, since if these services are to exist in a mobile network environment they must deliver on all the service facets expected of a mobile operator.

The challenge is to deliver a consistent user experience across different types of smartphones, each with different technical capabilities being used on different mobile networks. All in all, this means that service interoperability between different devices and across different networks can be an issue on more than one level and is therefore highly complex.

An initiative being led by the GSMA and supported by the industry’s major players, including Nokia, Sony Ericsson, France Telecom and Telefonica, has been addressing this for some time under a set of specifications known as Rich Communication Suite (RCS). These specifications define the path for the development and interoperability of mobile multimedia and data communication services such as mobile chat, enhanced phone book services, social networking, and content sharing across networks and devices. This will allow the next evolution in new mobile services, requiring presence, peer-to-peer media streaming or content sharing, to really take off.

Before we can get to this stage, however, interoperability remains the key challenge, and Nokia and Symbian both see open source development as the route to effective interoperability and long-term development. Consequently, Nokia have contributed to the Symbian platform a number of enablers important for RCS, such as full IMS functionality and peer-to-peer media streaming, to help improve and encourage interoperability between RCS services operating on Symbian.

So, what does this mean? Quite simply, anyone can go onto the Symbian developer website and take the code contributed by Nokia, build on it and contribute to its development in open source. As a working example of this, Nokia and Neusoft are collaborating on Symbian’s RCS development plan for S^4 and will be showcasing their RCS services at SEE 2010 this year. This open working ethos is especially important with projects like RCS that require the collaboration of many for the benefit of interoperability and service advancement across the industry.

OMA DiagMon
With over 4 billion GSM and 3G users worldwide, wireless operators spend millions of dollars on post-sale services. The consumer base is growing rapidly, and managing and servicing mobile phones is becoming more complex as new hardware and software capabilities are added.

What does it offer?

The Diagnostics Management Object (DiagMon) V1.0 Enabler [DiagMon-TS] supports the following functionality:
1) Diagnostics Policies Management: Support for the specification and enforcement of policies related to the management of diagnostics features and data.
2) Fault Reporting: Enable the device to report faults to the network as soon as trouble is detected at the device.
3) Performance Monitoring: Enable the device to measure, collect and report key performance indicator (KPI) data as seen by the device, for example on a periodic basis.
4) Device Interrogation: Enable the network to query the device for additional diagnostics data in response to a fault.
5) Remote Diagnostics Procedure Invocation: Enable management authorities to invoke specific diagnostics procedures embedded in the device to perform routine maintenance and diagnostics.
6) Remote Device Repairing: Enable management authorities to invoke specific repair procedures based on the results of diagnostics procedures.
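As a loose sketch of how two of these functions, fault reporting and performance monitoring, might look from the device side, here is a toy Python agent. This is purely illustrative: the real DiagMon enabler is defined in terms of the OMA DM protocol and management objects, not an API like this, and every name below is invented.

```python
import time

class DiagMonClient:
    """Toy device-side diagnostics agent (illustrative only; the real
    enabler works over the OMA DM protocol and management tree)."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.outbox = []   # reports queued for upload to the DM server

    def report_fault(self, component, description):
        # Fault Reporting: queue a trouble report as soon as it is detected
        self.outbox.append({
            "type": "fault",
            "device": self.device_id,
            "component": component,
            "description": description,
            "timestamp": int(time.time()),
        })

    def report_kpis(self, kpis):
        # Performance Monitoring: periodic key performance indicators
        self.outbox.append({
            "type": "kpi",
            "device": self.device_id,
            "kpis": kpis,
            "timestamp": int(time.time()),
        })

client = DiagMonClient("device-001")
client.report_fault("radio", "repeated network registration failures")
client.report_kpis({"dropped_calls": 2, "avg_rssi_dbm": -71})
```

The value for the operator is that both kinds of report arrive without the user ringing a contact centre, which is the proactive-repair scenario described below.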

How can you benefit from DiagMon?

OMA DiagMon enables management authorities to proactively detect and repair problems even before users are impacted, and to determine actual or potential problems with a device.

Network effect for OMA DM based technology: the number of OMA DM clients is growing and operators are investing in servers, in turn driving more demand for phones with the client already bundled onto them.

For end users this means a better experience in terms of remote problem solving, reducing the need to visit customer contact centres to a minimum.

518M is the number of subscribers China Mobile currently has. To put that in perspective, that’s over 60% more than Vodafone‘s 323M (the next biggest), and over two and a half times that of Telefonica, the third biggest operator in the world. That’s a pretty significant market for anyone who has anything to do with mobile, and don’t forget that it’s purely a domestic market.

The news that senior representatives from China Mobile came to London to visit Symbian and put pen to paper on an MoU between our two organisations is certainly not insignificant.

What, however, has that got to do with connectivity (more than any other operator, that is)? The answer is TD-SCDMA, a 3G telephony standard that was, until now, one of the lesser-used parts of the 3GPP specification. I say “until now” because the licensing regulators in China have assigned their three national operators three different telephony standards: China Unicom has FD-WCDMA (3G as we know it in Europe, Japan and most of the world); China Telecom has CDMA2000 (3G as used widely in the US); and China Mobile, yes, you’ve guessed it, TD-SCDMA.

So what is TD-SCDMA? Like other UMTS technologies it uses a Wideband CDMA mechanism to communicate, but whereas the predominant UMTS technology separates the uplink and downlink channels by transmitting them on different frequencies (Frequency Division), China Mobile’s radio technology separates them in time instead (Time Division). Additionally, the “S” in TD-SCDMA means that the radio signals are transmitted synchronously between the mobile and base station, to allow for better rejection of interference from other users. There are a number of pros and cons to using this mechanism, but overall these differences aren’t what’s relevant. What is relevant is that they mean a whole new set of modems and software for mobile devices.

So what does it mean for Symbian? Well, fortunately, the Symbian architecture was pretty well thought out from the start to enable maximum flexibility for future technologies. Let’s face it, 3G didn’t even exist when the telephony subsystem (ETel) was first conceived.

I’ve been in Beijing and Shanghai in the last few weeks following up with a number of companies working with Symbian and TD-SCDMA, including T3G (a subsidiary of one of our board members, ST-Ericsson) and Nokia’s TD team, who not long ago launched the first open TD-SCDMA device, running Symbian. On Friday they announced a long-term partnership for TD-SCDMA on Symbian.

So, Symbian is ready for TD-SCDMA – what’s next? Well we’re looking to get contributions of the telephony adaptation code from a number of TD-modem makers so OEMs can easily integrate TD-SCDMA into their devices, ready to target those 518M subscribers.

IMS is one of those really interesting technology enigmas. Everyone expects IP-based mobile network infrastructure to eventually replace the circuit-switched technology, but for as long as IMS has been around the perception seems to be that it has never quite delivered. I think there are several reasons for this, the main one being that mobile network operators do not need to make the change. They would gain no real competitive advantage in doing so, since conventional non-IP services such as voice and SMS remain cost efficient, highly reliable and very profitable.

What IMS has suffered from is a lack of context in relation to real services. SIP based VoIP alone does not differentiate itself sufficiently from conventional voice to warrant the level of investment in infrastructure: IMS is a risk to operators rather than an opportunity. This lack of service context is something the main proponent of IMS, the GSMA, has recently addressed with a set of new specifications around Rich Communications Suite (RCS).

RCS is a set of IP services defined in the context of IMS, providing operators with a service-based approach to deploying IMS and delivering enhanced communications services. These services are designed to be future oriented in the sense that they are defined by how communications have begun to evolve. For example, social networking, instant messaging, and content and media sharing are all services that are now very familiar to everyone, but they remain outside the realm of conventional mobile communications since we use different machines, service providers and clients to run them. RCS seeks to marry all these services and provide the mobile network operator with the opportunity to develop them and, most importantly, brand them. By branding services that manage social media and content sharing, operators have a clear basis to compete with Apple and Google in particular, which are starting to threaten the operators’ position as primary service providers of next-generation services and to reduce them to mere access providers: the so-called “dumb pipe” scenario.

What RCS promises is a basis on which conventional mobile services are enhanced and enriched. For example, the Enhanced Address Book (EAB) definition combines presence with conventional contacts management and can relay social media information such as status updates. With presence in place, instant messaging can be enabled along with content or media file sharing. All these services can be underpinned by conventional circuit-switched voice, meaning the operator does not have to deploy RCS at the same time as SIP-based VoIP, so there is no dependency on operators switching off their mainstay communications services.

So what does this mean for Symbian? Potentially it could be significant. RCS is currently being deployed using standalone clients, and it seems there are a few announcements lined up for Mobile World Congress next February. Our contribution plan already includes MSRP, which is a key enabler for RCS. Availability at source of the core enablers as part of the Symbian platform means service providers have an easier, more cost-effective basis on which to deploy the services. Over the next few months I am going to get into more detail about where RCS is going and will be using this blog to share my thoughts. In the meantime, please take a look at our roadmap and feel free to comment below or email me at richardc[at]symbian[dot]org if you have a particular interest in making RCS happen.

If providers of digital maps needed any more reminding that high-granularity geo-data is becoming more and more free to access, it came with the recent announcement that the UK-based, Government-owned Ordnance Survey plans to make its content freely available as an on-line mapping service sometime next year. The Ordnance Survey‘s work goes all the way back to the Napoleonic wars, and it is one of the largest producers of maps in the world, so you can imagine the level of detail that will soon be available to browse on-line.

This announcement, however interesting it is to map and navigation content providers, pales in significance next to Google’s recent announcement that it will offer turn-by-turn navigation for free as part of Google Maps.

This declaration leaves providers of navigation services in no doubt about Google’s new disruptive intentions. Navigation is by far the main breadwinner in a $2 billion mapping market, yet the geo-data content that enables navigation is no longer a significant asset by itself, and the value of licensing this content is falling fast as competition gets stiffer.

The question for the larger established providers of high-end mobile navigation services is how to adapt their services and compete in a market where users are now less willing to pay. Making the geo-data content more accessible is one option to consider. An open development strategy can reduce development and integration costs and it allows application developers to be more productive with fewer restrictions on licensing.

I am not suggesting that geo-data should be open source, but the basis on which geo-data is used within an application environment could be. Furthermore, in the context of The Symbian Foundation, it soon will be open source!

Just as Google is delighting its users by offering free navigation, the same opportunity exists for other content providers wanting to offer their version of navigation on Symbian. The Symbian platform now makes available a Map and Navigation Framework to any content provider looking to easily develop on-device navigation services. This framework is a Symbian initiative specifically designed to enable rapid service development for any content provider wishing to leverage the Symbian platform for high-grade navigation services.

The market value of navigation is shrinking in the sense that fewer people are prepared to pay for the service, but usage of the service is most certainly set to increase, massively so. Geo-data providers have the opportunity to explore new business models based on making their content more freely available under more relaxed, flexible licensing. To facilitate this, Symbian has established a content delivery framework within the platform that is open to all application developers and content providers. This is not about content providers offering a public API to allow the development of simple widget-based applications that run in the browser; those services are not always going to be competitive enough. This new framework is designed to allow content providers to work with developers and build on-board or embedded functionality which complements widgets.

Symbian’s Map and Navigation Framework extends the range of services that content providers offer application developers. Currently, virtually all content providers allow runtime access to their content via a public API, which is great for allowing developers to render standard content within an application. The Symbian Map and Navigation Framework gives content providers an additional option: a basis on which their developer community can build new services in application environments other than widgets. It means application developers can more easily build working prototypes without needing to negotiate content licensing separately. Making the content accessible over a pre-integrated framework as part of the device allows more developers to build better applications with that content.

Making the content available and accessible based on flexible licensing options can encourage a different range of applications to be developed using the content. It’s this type of model that potentially allows providers of geo-data to work more extensively with different application providers with the open source nature of Symbian being the main catalyst.

Our role is to shape and manage the direction of the connectivity technologies in Symbian and their interaction with the rest of the platform. Our goals are to build Symbian into the most complete, competitive and open platform available and to ensure that a large and healthy community thrives around it. How do we actually do that?

We work with the Symbian community to understand their needs and drive the technology forward.

We work with members who are interested in contributing, helping them through the process.

We derive the roadmaps and set out the strategy for the connectivity technologies.

We engage with the community to help foster their ideas and see them realised in the platform.

We are all about engagement with the rest of the community, so please hook up with us to discuss the future of the platform. You can find us in the forums and mailing lists, or contact us directly.