Archive

GiffGaff is probably the most innovative solution I have seen in months from an established provider (Telefonica / O2). It uses the community and social networks for support (social CRM), sales and marketing. By doing so it saves costs and passes the savings on to its subscribers as price reductions, prepaid discounts and hard cash.

The idea is great and it really uses the latest social networks. Unfortunately GiffGaff is (still) limited to prepaid and pure telecom services (SMS, calls & data). It would be great if they could also offer long-tail services.

I have been looking into virtualization, but what I mostly find is operating-system-level virtualization. What I am looking for are application, integration and data-store virtualization solutions. Google’s App Engine and Oracle’s JRockit Virtual Edition come closest to what I mean by application virtualization. Why do you need an operating system if you could virtualize your application directly? It would save resources and would be more secure. My ideal solution lets developers write applications and run them on a virtual application server that can scale them horizontally over multiple machines. Each application runs in a sandbox, so a badly written or insecure application exhausts its own resources without being able to impact other applications. We would need a similar solution for integration. Both would need out-of-the-box support for multi-tenancy, in which either each tenant gets a separate instance or multiple tenants share one instance if the software supports it. Integration should be separated from the application logic, and so should data storage.
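A minimal sketch in Python of the per-tenant resource accounting such a virtual app server could do. All class and method names here are hypothetical, and a real sandbox would enforce CPU and memory limits at a much lower level; this only shows the isolation property: a runaway app exhausts its own quota without touching its neighbors.

```python
class Sandbox:
    """Toy per-tenant resource accounting: an app that exhausts its
    quota is suspended without affecting other tenants."""
    def __init__(self, quota_units):
        self.quota = quota_units
        self.used = 0
        self.suspended = False

    def charge(self, units):
        if self.suspended:
            raise RuntimeError("app suspended: quota exhausted")
        self.used += units
        if self.used > self.quota:
            self.suspended = True
            raise RuntimeError("app suspended: quota exhausted")


class VirtualAppServer:
    """Hosts many tenant apps; each runs against its own sandbox."""
    def __init__(self):
        self.apps = {}

    def deploy(self, name, quota_units):
        self.apps[name] = Sandbox(quota_units)

    def run(self, name, cost, work):
        self.apps[name].charge(cost)
        return work()


server = VirtualAppServer()
server.deploy("well-behaved", quota_units=100)
server.deploy("runaway", quota_units=10)

print(server.run("well-behaved", 5, lambda: "ok"))
try:
    server.run("runaway", 50, lambda: "never reached")
except RuntimeError as e:
    print(e)  # the runaway app is suspended...
print(server.run("well-behaved", 5, lambda: "still ok"))  # ...others keep running
```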

Integration is key because the virtual applications could run on a public cloud but would still have to interact with on-site systems. Extremely high throughput, security, multi-tenancy and resistance to failure are key. One API can be linked to multiple back-office systems, or to different versions of them. Different versions of an API can be linked to the same back-office system to prepare applications before a major back-office upgrade.
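The versioned linking described here can be sketched as a routing table in the integration layer. All API and backend names below are made up:

```python
# Hypothetical routing table for an integration layer: one API can be
# backed by several back-office systems, and several API versions can
# point at the same system to stage a migration.
ROUTES = {
    ("customer-api", "v1"): "crm-legacy",
    ("customer-api", "v2"): "crm-legacy",   # v2 staged against the old CRM
    ("billing-api",  "v1"): "billing-a",
    ("billing-api",  "v1b"): "billing-b",   # same API, alternate backend
}

def route(api, version):
    """Resolve an (API, version) pair to a back-office system."""
    try:
        return ROUTES[(api, version)]
    except KeyError:
        raise LookupError(f"no backend for {api}/{version}")

# When the back-office upgrade lands, flip v2 to the new CRM in one place,
# without touching any application code:
ROUTES[("customer-api", "v2")] = "crm-next"
print(route("customer-api", "v2"))  # crm-next
```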

A distributed multi-tenant data store should hold all the end-user and application data, ideally in a schema-less manner that avoids having to migrate data whenever the schema changes.
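A toy illustration of the schema-less idea: records of different shapes coexist and readers tolerate missing fields, so no migration step is needed. An in-memory dict stands in for the distributed store; the tenant/key layout is only illustrative.

```python
# In-memory stand-in for a distributed multi-tenant, schema-less store.
store = {}

def put(tenant, key, doc):
    store[(tenant, key)] = dict(doc)

def get(tenant, key):
    return store[(tenant, key)]

# An old-shape and a new-shape record live side by side; adding the
# "locale" field required no ALTER TABLE and no data migration.
put("tenant-a", "user:1", {"name": "Ann"})                     # old shape
put("tenant-a", "user:2", {"name": "Bob", "locale": "en-GB"})  # new shape

# Readers simply tolerate the missing field.
for key in ("user:1", "user:2"):
    doc = get("tenant-a", key)
    print(doc["name"], doc.get("locale", "default"))
```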

All these virtual elements should be managed by automated scaling and a highly distributed administration that lets applications grow or shrink based on demand, ensures integration links are always up and re-established if they fail, stores data without practical limits, etc. But there is more. The administration should allow deploying different versions of the same application or integration, with step-wise migration to new versions and fast roll-backs.
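The step-wise migration and fast roll-back could look roughly like the canary pattern below. The traffic-splitting logic is a deliberate simplification of what a real deployment system does; all names are invented:

```python
import random

class Deployment:
    """Toy step-wise migration: route a growing share of traffic to a
    new version, with instant roll-back by dropping the candidate."""
    def __init__(self, stable):
        self.stable = stable
        self.candidate = None
        self.share = 0.0  # fraction of traffic sent to the candidate

    def deploy(self, candidate, share):
        self.candidate, self.share = candidate, share

    def rollback(self):
        self.candidate, self.share = None, 0.0

    def pick(self):
        """Choose which version serves the next request."""
        if self.candidate and random.random() < self.share:
            return self.candidate
        return self.stable

d = Deployment("app-v1")
d.deploy("app-v2", share=0.1)   # 10% canary; raise share step by step
# ... suppose error rates spike on v2, so:
d.rollback()
print(d.pick())  # every request is back on app-v1
```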

Why do we need all this?

The first company to have such elements at its disposal will have an enormous competitive advantage in delivering innovative services quickly. It can launch new applications quickly and scale them to millions of users in hours. It can integrate diverse sources and make them universally available for re-use by multiple applications. It can store data without an army of DBAs for every application. It can try out new features and quickly scale them up or kill them. In short, it can innovate on a daily basis.

The Googles of this world understood years ago that a good architecture is a very powerful competitive weapon. There is a valid trend to offshore technical work. However, technical work should be separated into extremely high-value work and routine work. Never offshore the high-value work. And never assume that because the resources are expensive, the work must be high-value. Defining and implementing this innovation architecture is extremely high-value. Writing applications on top of it is routine, at least starting from application number five.

Launching thousands of services in a long-tail marketplace might not be as hard as it used to be. Supporting millions of users across these thousands of services definitely is. Technology does not seem to be the limiting factor for the telco long tail; support and monetization are.

What support is needed?

Consumers as well as small, medium and large enterprises have different support needs. For simplicity, let’s focus on small and medium businesses. Hundreds of thousands of them exist in most countries. Their IT skills are basic at best: no dedicated IT staff, just a helpful colleague, if that. Time and resources are in short supply.

Before reaping the benefits of any long-tail service, people have to learn about what is being offered: product awareness. Once the product is purchased, they need help with configuration and customization, product training, product integration, consultancy, product questions, etc. Finally, when things go wrong, they need rapid workarounds and bug fixes.

Traditionally telecom operators have used sales teams, help desks and support organizations to offer the more basic types of support. Scaling these organizations up to provide the items listed above is often not possible, and even if it were, it would be economically unviable.

Why is long tail support different?

Google, among others, promotes a services-based marketplace inside its Google Apps Marketplace. Although a step in the right direction, it will not resolve all the issues.

These long-tail services could be an answer for established brands and for the more straightforward support tasks like product training. However, a developer who builds a cool app on a Sunday afternoon and is surprised to find that 50,000 companies have downloaded it by Monday is not able to offer any reasonable support.

What do small mom-and-pop support services need?

Specialization and economies of scale are two key factors. The “lucky developer” has specialized skills in application development. But does he or she know how to integrate a corporate single sign-on solution into the app? Probably not. And while the developer is helping one company, he or she has no time to help another one.

So our “lucky developer” will need people with additional skills, and a way to increase his or her bandwidth.

Option 1: Community Support

By offering tools on the marketplace for an online support community to build around this “lucky app”, companies can help one another instead of repeatedly asking the developer the same type of questions. Some communities have demonstrated that they offer faster and better support than most commercial support organizations. However, there is a problem here: bug fixes can only be provided by the “lucky developer”. (S)he can choose to open-source the application code, but that would very likely allow others to quickly copy and extend the app and destroy all market advantage.

Option 2: Commercial Product Support

The “lucky developer” can foresee potential success and hire an external company that gets trained on the app and is able to resolve most of the bugs: a trusted third party that holds an escrow agreement with the “lucky developer” and can take over development in case something happens to him or her.

However, this takes time and will only happen for apps that grow steadily towards success, not for an overnight craze.

Some tools could be beneficial here: version control to share proprietary code with authorized third parties so they can generate patches, and, for a deployed application, access to a mechanism to test and deploy an updated version. Standardized CRM solutions and multi-channel helpdesk access can also offer a unified, high-quality service, even for one-person support companies.

Option 3: Commercial Specialized Services

Even if a third-party company gets trained on a product, customers will still demand specialized services that are outside the scope of product support. Examples could be security audits, SLAs about service availability, integration support & consultancy, performance benchmarking, commercial volume discounts & pricing, marketing, legal support, etc.

By itself this can be a totally new services marketplace in which both the “lucky programmer” and his or her customers can contract these services.

Tools differ completely depending on which service is offered, so standardized tools are difficult. Most likely they would appear as SaaS offerings from third parties.

Option 4: Reputation

Bringing together community support, commercial product support and commercial specialized services is not enough by itself. All these tools are of no use without one key ingredient: reputation.

If a security expert who has found security holes in some of the most famous Internet sites certifies your application, then your application gains a reputation for being safe. The higher the expert’s reputation, the higher the fees the “lucky programmer” probably has to pay. So not everybody will be able to afford the best, especially in the beginning. But then again, companies with a top reputation might sometimes offer their services for free to those “lucky programmers” likely to get them free press.

The same is true for buyers. If a generally trusted SLA validation authority certifies that a service was up 99.9999% of the time over the last 24 months, you will probably buy it over a slightly cheaper service with no reputation for reliability. Likewise, you will want to buy bug-fixing support from an organization that met a very tough SLA over the last 24 months and has all its customers raving about it.
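As a quick sanity check on what such a figure implies, 99.9999% availability over 24 months leaves a downtime budget of barely a minute:

```python
# Downtime budget implied by 99.9999% availability over 24 months
# (approximating 24 months as 2 * 365 days).
seconds_in_24_months = 2 * 365 * 24 * 3600   # 63,072,000 s
budget = seconds_in_24_months * (1 - 0.999999)
print(round(budget, 1))                      # ~63.1 seconds in two years
```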

With the world looking more at XML, SOAP and REST these days, it perhaps feels unnatural to think binary again. However, with Protocol Buffers [Protobuf], Thrift, Avro and BSON being used by the large dotcoms, thinking binary feels modern again…

How can we apply binary to telecom? Binary SIP?

SIP is a protocol for handling sessions for voice, video and instant messaging. It is a text-based protocol whose syntax resembles HTTP. Setting up a SIP session requires a lot of communication between different parties. What if that communication were replaced by a binary protocol based, for instance, on Protocol Buffers? Google’s Protocol Buffers can dramatically reduce network load and parsing time, by a factor of 10 to 100 compared to regular XML.

Performance – faster parsing and a lower network load mean that more can be done with less. One server can handle more clients.

Scalability – distributing the handling of SIP sessions over more machines becomes easier if each transaction can be handled faster.
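As a rough illustration of the size difference, compare a (made-up) SIP request with a hand-rolled binary framing of the same information. This is not the actual Protocol Buffers wire format, just a toy encoding in the same spirit (small numeric tags, fixed-width numbers, length-prefixed strings):

```python
import struct

# A made-up SIP INVITE start-line and a few headers as plain text...
text = (b"INVITE sip:bob@example.com SIP/2.0\r\n"
        b"Via: SIP/2.0/UDP host.example.com;branch=z9hG4bK776asdhds\r\n"
        b"Max-Forwards: 70\r\n"
        b"CSeq: 314159 INVITE\r\n\r\n")

def field(tag, payload):
    """Length-prefixed binary field: 1-byte tag, 1-byte length, payload."""
    return struct.pack("!BB", tag, len(payload)) + payload

# ...versus the same information packed into binary fields:
binary = (struct.pack("!BIB", 1, 314159, 70) +   # method id, CSeq, Max-Forwards
          field(2, b"bob@example.com") +         # request URI user/host
          field(3, b"z9hG4bK776asdhds"))         # Via branch parameter

print(len(text), len(binary))  # the binary form is several times smaller
```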

Disadvantages:

No easy debugging – SIP is human-readable, hence debugging is “easier”. In practice, however, tools could be written that make binary debugging workable too.

Syncing client & server – client and server libraries need to be in sync, otherwise parsing fails. Protocol Buffers ignores fields it does not know, so there is some freedom for an old client to connect to a newer server, or vice versa.

Firewalls/Existing equipment – a new binary protocol cannot talk to existing equipment directly. A SIP-to-binary-SIP proxy is necessary.
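The unknown-field tolerance mentioned above works roughly like this. The tag-length-value codec below is a simplified stand-in for the real Protocol Buffers wire format: an old parser that only knows tags 1 and 2 can still read a newer message that also carries tag 3.

```python
import struct

def encode(fields):
    """Encode (tag, value) pairs as tag-length-value records."""
    out = b""
    for tag, value in fields:
        out += struct.pack("!BB", tag, len(value)) + value
    return out

def decode(data, known_tags):
    """Decode TLV records, silently skipping any unknown tags."""
    result, i = {}, 0
    while i < len(data):
        tag, length = struct.unpack_from("!BB", data, i)
        i += 2
        if tag in known_tags:      # known field: keep it
            result[tag] = data[i:i + length]
        i += length                # unknown field: skip its payload
    return result

# A "newer" message with an extra field (tag 3)...
msg = encode([(1, b"alice"), (2, b"bob"), (3, b"new-feature")])
# ...still parses fine with an "older" client that only knows tags 1 and 2:
print(decode(msg, known_tags={1, 2}))  # {1: b'alice', 2: b'bob'}
```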

It would be interesting to see whether a binary SIP prototype combined with the latest NoSQL data stores could compete with commercial SIP/IMS equipment on scalability, latency and performance.

UPDATE: There is a new social graph player that implements Pregel on Hadoop: Giraph

Lately there is a lot of talk about graph databases and their main applications, such as social graphs. Google’s Pregel and the bulk synchronous parallel model are also important hints. Building on the mobile social graph idea, I am evaluating different graph databases. For revenue-sharing engagements, cost is critical, so truly open-source solutions are preferable to expensive licenses.

On paper the most promising one was Neo4J. After running some tests with it, however, I discovered a quite important limitation: there is no thread-safe remote API. In a multi-threaded solution you run into problems when updating relationships between nodes: under stress, one thread will try to update a relationship while another thread holds the lock.

Sones has a very restrictive open-source version, so it is not really useful.

OrientDB looks very promising for some applications but is not really built to execute complex graph algorithms like large-scale PageRank.

Infogrid is extremely complex, with a lot of individual components that are all in different stages of development. However, there are some promising aspects.

Hama is one of the most promising options technology-wise, but until you can actually store data in Hadoop and quickly manipulate large sets of matrices, it is unusable for the moment. However, having a group like Apache behind it, and more importantly an Apache license, should make it the best option, especially for businesses that want to evaluate graph databases and don’t want to spend fortunes on licenses or open-source their complete solution when it is only a minor part of a larger one.

FlockDB is (still) very rough around the edges. It might fit Twitter’s needs, but most other users would want partitioning over multiple servers to be transparent and would want to traverse a graph.

In short, there is no real solution yet, only a lot of promises. Although commercial options exist, there are too few big ongoing graph projects in telecom to justify expensive licenses. Telecom is not a mature graph market yet; it is just starting, and graph databases are used on side projects only. Since graph databases are an infrastructure element, an open-source, business-friendly license is preferable. Money can still be made via consultancy, support, administrative tools and a revenue-sharing marketplace for re-usable algorithms. It is now more important to be the market leader in this developing market than to have the highest sales volume in a niche market.

Why is a graph database important to telecom?

If I call you and you call me, then we have a relationship. If I am the key “connector”, “maven” or “salesman” (see The Tipping Point) among my friends or business contacts, then I would be the perfect marketing target. Unfortunately, RDBMSs are not good at finding those profiles among millions of subscribers.
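As a toy illustration of the kind of question a graph answers, here is naive connector detection over a handful of made-up call records: count distinct ties per subscriber and pick the best-connected one. A real deployment would run PageRank-style algorithms over billions of edges; this only shows the shape of the problem.

```python
from collections import Counter

# Made-up call records: each pair is one caller/callee relationship.
calls = [("ann", "bob"), ("ann", "carl"), ("ann", "dee"),
         ("bob", "carl"), ("dee", "eve")]

# Degree per subscriber: how many relationships each person appears in.
degree = Counter()
for a, b in calls:
    degree[a] += 1
    degree[b] += 1

# The top "connector" is the best marketing target.
print(degree.most_common(1))  # [('ann', 3)]
```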

This is an open invitation to join forces and build tomorrow’s architecture: preferably Apache-licensed, extremely scalable (billions, not thousands) and with support for complex algorithms.

Google has changed very little in its basic architecture building blocks over the years. Everything runs on top of the Google File System and Bigtable. Except for Google Instant, which reverses the usual Map-Reduce usage, new services have been reusing the existing architecture.

Similar observations can be made for the rest of the main players. So why is it that Telecom operators have not invested in one architecture to launch multiple services? No idea.

One architecture for VAS

The concept is simple. Create one common architecture. This architecture should have multiple components:

An asset exposure layer – applications can re-use network assets and are isolated from internal complexities

Presentation layer – facilitate mobile GUI and Web 2.0 development

Application Engine – allows applications to run and focus on business logic instead of scaling and integration

Continuous Deployment – instead of monthly big-bang deployments, incremental daily or weekly releases are possible, even hourly like some dotcoms.

Unified Administration – one place to know what is happening both technically and business-wise with the applications.

Long-Tail Business Link – all business and accounting transactions for customers, partners, providers, etc. are centralized.

etc.

Having such an architecture in place would allow telco innovations to be brought to market at least ten times faster. Application and service designers could focus on business logic and nothing else. Administrators would have one platform to manage instead of a puzzle of systems. Integrations would have to be done once, against a common integration layer.

Building such an architecture should be done dotcom-style, not via a telco RFQ. Only by running iterative projects that bring the components together can you build an architecture that is really used, rather than a side project that takes on a life of its own.

It even makes sense to open-source the architecture. A telco’s business is not building architectures, so a common platform started by one operator would benefit the whole industry. It would even give a competitive advantage to the telco that started it, since it would know the architecture better than any competitor. Of course, for this to happen, a telco has to recognize that its future major competitors are not the neighboring telcos but the global dotcoms…

Disclaimer

All the contents of the Blog, EXCEPT FOR COMMENTS AND QUOTED MATERIAL, constitute the opinion of the Author, and the Author alone; they do not represent the views and opinions of the Author’s employers or supervisors, nor do they represent the views of any organizations, businesses or institutions the Author is a part of.

The Author is not responsible for the content of any comments made by the Commenter(s).

While we have made every attempt to ensure that the information contained in this Blog has been obtained from reliable sources, the Author is not responsible for any errors or omissions, or for the results obtained from the use of this information. All information in this Blog is provided "as is", with no guarantee of completeness, accuracy, timeliness or of the results obtained from the use of this information, and without warranty of any kind.