Archive

In a video posted on YouTube in January 2011, PhD student [now Dr., not surprisingly] Zdenek Kalal shows off his doctoral thesis: Predator. Predator is a computer vision algorithm that shows how much this nascent field has matured in just a few years.

First you define the object you want to track. In this case Zdenek selected his face.

Afterwards the face is automatically recognized, even when the head is turned sideways.

Finally, it is even possible to pick out a face among many others in a photo.
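The detection step can be illustrated with a toy sketch. This is not Kalal's actual TLD/Predator algorithm [which combines tracking, learning and detection], just a minimal template-matching example using the sum of absolute differences, with a made-up 4x4 grayscale "frame":

```python
def sad(patch, template):
    """Sum of absolute differences between two equally sized patches."""
    return sum(
        abs(patch[r][c] - template[r][c])
        for r in range(len(template))
        for c in range(len(template[0]))
    )

def detect(image, template):
    """Slide the template over the image; return the (row, col) of the best match."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = sad(patch, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

# A 4x4 grayscale "frame" containing the 2x2 "face" template at (1, 2)
frame = [
    [0, 0, 0, 0],
    [0, 0, 9, 8],
    [0, 0, 7, 9],
    [0, 0, 0, 0],
]
face = [[9, 8],
        [7, 9]]
print(detect(frame, face))  # -> (1, 2)
```

Real detectors of course work on video frames and learn the appearance model over time; this only shows the "find the selected patch again" idea.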

Computer Vision is one of those domains that has been underutilized by most, except of course for Facebook, Google, etc. However, in an age where people are moving from voice to video chat and even continuous live broadcasting, anyone who wants to add extra value for end-users, customers or advertisers should be looking at the possibilities of computer vision. Imagine what is possible if you combine a Kinect or Leap with Predator: an online advertiser's and secret service's paradise.

Cloudify, from the scalability experts GigaSpaces, is still in its early stages. Unlike Google App Engine, Azure, Heroku, etc., this PaaS focuses on the application life cycle rather than on being a "transparent" application server and database. Its main focus is automating application and service deployment, monitoring, autoscaling, etc. The closest competitor would be Scalr.
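To make the autoscaling part concrete, here is a toy sketch of the kind of rule such a platform evaluates. The thresholds and function name are hypothetical illustrations, not Cloudify's actual API:

```python
def autoscale(instances, cpu_percent, min_instances=2, max_instances=10,
              scale_up_at=80, scale_down_at=20):
    """Toy autoscaling rule: add an instance under high load,
    remove one under low load, always staying within the configured bounds."""
    if cpu_percent > scale_up_at and instances < max_instances:
        return instances + 1
    if cpu_percent < scale_down_at and instances > min_instances:
        return instances - 1
    return instances

print(autoscale(3, 95))  # -> 4 (scale up under load)
print(autoscale(3, 10))  # -> 2 (scale down when idle)
print(autoscale(2, 10))  # -> 2 (never below the minimum)
```

The value of a PaaS is that this loop, plus the provisioning it triggers, runs automatically across whatever cloud you deploy on.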

Unlike Scalr, Cloudify's focus is on Cloud-neutrality. Cloudify does not rely on specific Amazon services for scalability but instead aims to be a neutral Cloud platform. The advantage is that any Cloud, be it private or public, can be used, and scenarios like hybrid clouds with Cloud bursting from private to public cloud become possible. The deep understanding of large-scale architectures in a company like GigaSpaces is a strong indication that Cloudify will scale in the future.

Cloudify is still missing some important functionality like security, multi-tenancy, integration with lower-level automation frameworks [e.g. Chef and Puppet], complex upgrade management [e.g. rolling upgrades, MySQL schema upgrades, A/B testing of new features, etc.], etc. However the roadmap points towards most of these items.

Software architects should understand the possibilities that Cloudify, Scalr, etc. bring. With a reusable automation framework, companies can spend more development and operations time on bringing new business features and less on reinventing the wheel.

A lot of people are talking about home automation, M2M in cars, etc. However there is a simpler solution than investing thousands of euros to automate everything. What if a new standard were developed that combined UPnP, REST and WiFi and were embedded in most consumer appliances, cars, etc.? The idea is simple: allow devices to be discovered [UPnP], connected to your home network [WiFi], and allow them to expose their main functionality [REST].

What would be the big deal?

At the moment you can connect your SmartTV to your home network and download a mobile app that will discover your television and allow it to be controlled remotely. This is all well and good. However it keeps limiting the consumer to one app per device. The real difference would come if every device exposed a very simple API [REST] to integrate with. Ideally there would be standard APIs with the minimum common functionality per type of device, for instance for cars, fridges, ovens, radios, etc.
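Such a "minimum common API" could be tiny. Here is a sketch, using only the Python standard library, of a hypothetical oven exposing its state over REST; the resource path and fields are made up for illustration, not part of any existing standard:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

# Toy device state for a hypothetical oven
oven = {"type": "oven", "power": "off", "target_celsius": 0}

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/oven":
            body = json.dumps(oven).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Start the device's embedded web server on a free local port
server = HTTPServer(("127.0.0.1", 0), DeviceHandler)
Thread(target=server.serve_forever, daemon=True).start()

# Any app on the network could now read the device state
url = "http://127.0.0.1:%d/oven" % server.server_port
response = json.loads(urlopen(url).read())
print(response)  # -> {'type': 'oven', 'power': 'off', 'target_celsius': 0}
server.shutdown()
```

One generic app could talk to every appliance that speaks this kind of API, instead of one vendor app per device.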

What type of use cases are possible?

At home people could turn the oven on and get a notification on their mobile when it has warmed up, or even when the pie is ready. Parents could get alerted when their children leave the fridge open.

On the road, your car could talk to your iPad and the entertainment system could be driven from it. New apps could be downloaded and installed in the car.

At work people could vote on the temperature of the air-conditioning. The Coke machine could be linked to your PayPal account so you would not have to carry coins any more.

Many more use cases are possible. However easy integration [REST], auto-discovery [UPnP] and connectivity [WiFi] are the basics…
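The auto-discovery part already exists today: UPnP devices announce themselves via SSDP, a simple multicast HTTP-like protocol. Here is a sketch that builds the standard M-SEARCH discovery request; actually sending it [commented out] requires a network with multicast connectivity:

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="ssdp:all", mx=2):
    """Build the SSDP M-SEARCH request that UPnP uses for device discovery."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: {}:{}\r\n"
        'MAN: "ssdp:discover"\r\n'
        "MX: {}\r\n"
        "ST: {}\r\n"
        "\r\n"
    ).format(SSDP_ADDR, SSDP_PORT, mx, search_target)

message = build_msearch()
print(message)

# Sending it on a real network (needs multicast connectivity):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.settimeout(3)
# sock.sendto(message.encode(), (SSDP_ADDR, SSDP_PORT))
# data, addr = sock.recvfrom(1024)  # each responding device replies with its location
```

Every responding device answers with the URL of its description document, which is exactly the hook a REST-style device API could plug into.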

Twitter has a Real-Time Analytics solution, Rainbird, that could easily become as important as Hadoop. They have talked about open sourcing it but so far have not done so.

This post is an open invitation to Twitter to open source Rainbird and accelerate Real-Time Analytics adoption in the world. Hadoop has changed thousands, if not millions, of companies. Rainbird could do something similar.
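Twitter's public talks describe Rainbird as hierarchical, time-bucketed counters on top of Cassandra: each event increments a counter at several time granularities so that any resolution can be read back instantly. Here is a toy in-memory sketch of that idea, not Twitter's actual code:

```python
from collections import defaultdict
from datetime import datetime

class Counters:
    """Toy in-memory sketch of time-bucketed counters (Rainbird-style idea)."""
    GRANULARITIES = {
        "minute": "%Y-%m-%d %H:%M",
        "hour": "%Y-%m-%d %H",
        "day": "%Y-%m-%d",
    }

    def __init__(self):
        self.counts = defaultdict(int)

    def increment(self, key, when, by=1):
        # Write once per granularity so any resolution can be read back in O(1).
        for name, fmt in self.GRANULARITIES.items():
            self.counts[(key, name, when.strftime(fmt))] += by

    def get(self, key, granularity, when):
        fmt = self.GRANULARITIES[granularity]
        return self.counts[(key, granularity, when.strftime(fmt))]

c = Counters()
t1 = datetime(2012, 5, 1, 14, 30)
t2 = datetime(2012, 5, 1, 14, 45)
c.increment("clicks:example.com", t1)
c.increment("clicks:example.com", t2)
print(c.get("clicks:example.com", "minute", t1))  # -> 1
print(c.get("clicks:example.com", "hour", t1))    # -> 2
print(c.get("clicks:example.com", "day", t1))     # -> 2
```

The real system distributes these counters across a Cassandra cluster, which is what makes it interesting at Twitter scale.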

In order to gather people around this subject, I am proposing that you include #TWOSRB in your tweets. #TWOSRB stands for Twitter please Open Source RainBird.

For the last year and a half, Telruptive has focused on trying to save operators from becoming bit pipes and, with that, trying to save employment in the telecom industry. This has been a major limitation on the type of blog posts that could be published. Starting today, Telruptive's focus is being extended. Any innovation, disruptive technology or business practice that has to do with communication between people as well as machines is valid. Communication is not seen as pure telecommunication but in its widest interpretation: moving information between one or more parties.

Why is saving operators no longer a priority?

Nothing in the last year and a half suggests that most operators will avoid becoming bit pipes. Most operators will either become bit pipes, consolidate or worse. Telecom solution providers will either shrink, consolidate or worse. Only truly innovative operators will have a chance to be active outside of communication infrastructure, and unfortunately there are very few of those. LTE will seriously disrupt the operators' monopoly on voice calls. iMessage, WhatsApp and similar services have already crossed the tipping point and are disrupting the SMS business. The operators' answer has been nothing, or too little too late. The telecom industry resembles the Titanic more each day. It was once the most luxurious cruise ship of its time, but disruptive icebergs are making it sink. Instead of building lifeboats with material found on board, the operators seem to have decided to play music and await what happens.

Telruptive wants to inform innovators about new ways of communicating, new disruptive technologies they should use, new disruptive business models they should implement, etc. Innovators can be operators or telecom solution providers, but also dotcoms or people not linked to the telecom industry at all. This is what Telruptive will focus on in the future.

Innovation is a high-risk activity. You invest in something with the only certainty being that you know (some of) your costs and none of your future revenues. Traditional wisdom tells managers to focus on a business case: if the business case is more positive than the alternatives and gives a good return on investment, then you should invest. However this approach is flawed when dealing with innovative projects. There is no reference point for calculating future revenues. Yes, you can "guesstimate" and make nice assumptions. However no business case would have indicated that you should invest in a 23-year-old who put photos of his fellow students online. A few years later that photo page is worth many billions. For every positive example there is, unfortunately, a long list of failures.

The solution: focus on incremental innovation. Or not?

Nokia would be the best example of this strategy. You make the best hardware platform and relatively simple software, and make sure people can reliably make calls and send messages. Every investment decision had a positive ROI and positive margins. Unfortunately Nokia's stock is close to junk status.

Can you make a business case for highly innovative projects?

Yes, you can make a business case. Costs especially can be estimated, and some high-level revenue estimates can be made. As long as this business case is used to validate whether the project is economically viable, there is no problem. The major problem arises when this business case is compared with incremental innovation projects or investments in the core business: the outcome will always be negative. Disruptive innovations tend to go after lower-margin business with inferior offerings that often cannibalize the core business. Over time the disruptive innovation moves up the value ladder until it can substitute the core business. Unfortunately the Innovator's Dilemma, in which you attack your core business and substitute it with an inferior-margin business, is difficult for conventional managers to accept. Some companies have excelled at this. The best example is Amazon, which is seeing its core business of book sales threatened by electronic books. Its answer has been to sell e-book readers and tablets below hardware cost, with the idea of dominating the electronic book market by offering a total solution for easily buying books.

Ostrich techniques

The technique used by most companies when faced with disruptive innovation attacks is to consider them inferior and to ignore them. Unfortunately, over time these solutions substitute the existing offerings. This process is happening right now: e.g. SMS versus WhatsApp, LBS versus mobile phone location, calls versus Skype, etc.

Unlike incremental innovation, being first in the market with a disruptive innovation is key, because the winner takes most of the market. Number two can still take some market share, but number three is no longer profitable. Examples: Google Search/AdWords/YouTube, Facebook, LinkedIn, Twitter, etc.

The worst strategy for operators is the ostrich technique, because implementing LTE will offer disruptive innovators all the tools they need to offer voice services over the top.

Discovery-driven planning versus business cases

In 1995, Harvard Business Review introduced discovery-driven planning. The idea has been successfully implemented by venture capitalists. You do not give money to a new venture to develop a new product, launch it and expand globally in one go. You give money to develop a prototype in a few months. If this goal is met, you give money to validate the prototype with early adopters, and so on.

Operators should start using discovery-driven planning to introduce disruptive innovations. Employees, partners, customers, etc. can "complain" about inefficiencies in the current offerings. The most urgent "inefficiencies" are selected, for instance via voting. Afterwards small innovation groups, made up of experts from different domains, are formed to find paper solutions for these "inefficiencies". These paper-based solutions are presented to selected early adopters. Via continuous feedback the solution can be designed, the future price determined, the costs estimated and a high-level business case made. Early adopters are asked to find beta users. If a certain number of beta users express interest in the solution, the team receives funding for a prototype.

Beta users are able to see the prototype come to life and to give continuous feedback. The prototype should evolve from paper to a real service in as few months as possible [2-6]. Afterwards the beta users get a limited amount of time to start subscribing to the real service and to extend the number of beta users. If a certain threshold is reached within a certain time frame, the beta product gets the next investment round. This investment round brings the product from closed beta to public launch. The last stage is expansion: if the public launch is successful, the final round of funding allows the service to expand, e.g. across all markets of the operator.

Any idea/service that does not make a stage gets killed. The complete disruptive innovation program should get a budget and should be initially independent from the core business. Direct support from the CEO and other senior executives is a must. Business cases are used to set prices, etc. but not to compare disruptive innovations with core business investments.

Quagga might remind a small minority of people of an extinct African zebra. However Quagga is also the name of an open source project that focuses on the future of networking. It is one of the projects being boosted by Google's push for Open Source Networking. Google has joined hands with the Internet Systems Consortium to found Open Source Routing, which focuses on bringing open source solutions for OpenFlow, software-defined networking and other technologies needed in today's Webscale networking. Google is also pushing the ALTO protocol in order to improve quality of service for P2P and, more importantly, content delivery networks.

Google's dream is to do with networking what it did with servers: buy cheap commodity hardware and build resilient systems through software. This strategy conflicts directly with companies like Cisco and Juniper, which focus on expensive proprietary hardware solutions. Google is trying to find cheap hardware on which to install Open vSwitch and similar software.
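The core idea of software-defined networking, a controller pushing match/action rules into dumb, cheap switches, can be sketched in a few lines. This is an OpenFlow-inspired toy flow table, not the real protocol or any vendor's API:

```python
class FlowTable:
    """Toy, OpenFlow-inspired flow table: match packet fields, apply an action."""
    def __init__(self, default_action="flood"):
        self.rules = []            # (priority, match_dict, action); highest priority wins
        self.default = default_action

    def add_flow(self, priority, match, action):
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda rule: -rule[0])

    def apply(self, packet):
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return self.default

# A controller would install rules like these over the network
table = FlowTable()
table.add_flow(10, {"in_port": 1}, "output:2")
table.add_flow(20, {"in_port": 1, "dst": "10.0.0.9"}, "drop")

print(table.apply({"in_port": 1, "dst": "10.0.0.5"}))  # -> output:2
print(table.apply({"in_port": 1, "dst": "10.0.0.9"}))  # -> drop (more specific rule wins)
print(table.apply({"in_port": 3, "dst": "10.0.0.5"}))  # -> flood (no rule matched)
```

The commercial shift is exactly this: the matching logic lives in commodity software [Open vSwitch], while the controller that decides the rules is where the innovation and the margin move to.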

Telecom operators and solution providers would be wise to evaluate participating in the Open Source Routing effort. Verizon is one of the pioneers in trying out OpenFlow and the benefits it can bring carriers. Expect a lot of innovation in the coming months from companies without a big brand; examples could be Big Switch, Fastly, Pica8, etc.

Disclaimer

All the contents of the Blog, EXCEPT FOR COMMENTS AND QUOTED MATERIAL, constitute the opinion of the Author, and the Author alone; they do not represent the views and opinions of the Author's employers or supervisors, nor do they represent the views of organizations, businesses or institutions the Author is a part of.

The Author is not responsible for the content of any comments made by the Commenter(s).

While every attempt has been made to ensure that the information contained in this Blog has been obtained from reliable sources, the Author is not responsible for any errors or omissions, or for the results obtained from the use of this information. All information in this Blog is provided "as is", with no guarantee of completeness, accuracy, timeliness or of the results obtained from the use of this information, and without warranty of any kind.