I am not going to ask how you are doing. For everyone in the Amazon Web Services ecosystem, the last 24 hours have been brutal. But I’d like to share my perspective with you and offer a couple of suggestions:

I believe that in the long run this will be a positive day for the cloud computing movement. Naysayers seeking evidence to avoid the cloud have new ammunition, those hyping the cloud are experiencing its limitations, and the leading cloud provider – your company – is learning from this major outage the importance of being humble and cooperative.

I also believe that the way AWS behaves needs to change. You built the leading infrastructure-as-a-service provider with a level of secrecy typical of a stealth startup or a dominant enterprise software platform vendor. It works for Apple – they deliver a complete, integrated value chain. But that is not your position in the cloud ecosystem. Today’s outage shows that secrecy doesn’t and won’t work for an IaaS provider. Compete on scale and enterprise readiness – and part of readiness is being open about your internal architectures, technologies and processes.

Our dev-ops people can’t read tea leaves to figure out how to organize our systems for performance, scalability and, most importantly, disaster recovery. The difference between “reasonable” SLAs and five-nines is the difference between improvisation and the complete alignment of our respective operational processes. My ops people were ready at 1:00 am PT to start our own disaster recovery, but your status updates completely failed to indicate the severity of the situation. We relied on AWS to fix the problem. Had we had more information, we would have made a different choice.

This brings me to my last point: communication. Your customers need a fundamentally different level of information about your platform. There are some very popular web sites that try to reverse-engineer the way AWS operates. These secondary sources – based on reverse engineering and conjecture – provide a higher level of communication than we get directly from the AWS pages. We live in the Twitter, Facebook, Wikipedia and Wikileaks days! There should not be communication walls between the IaaS, PaaS, SaaS and customer layers of the cloud infrastructure.


NetBeans was the first extensible Java IDE platform with plug-ins back in 1999. Systinet had a product that was actually called Web Application & Services Platform (WASP). But both NetBeans and Systinet were “only” what my investor Marc Andreessen calls Platform Level 2:

This is the kind of platform approach that historically has been used in end-user applications to let developers build new functions that can be injected, or “plug in”, to the core system and its user interface.

My goals for GoodData are different. I want to build a platform that becomes a one-stop shop for BI developers, architects and users: a BI platform that provides a set of APIs through which developers can define their own models, schemas, queries, metrics, reports and dashboards. The highest level of Marc’s platform taxonomy: Platform Level 3. But here I need to quote Marc again:

Level 3 platforms are much harder to build than Level 2 platforms.

Yes. Building GoodData – a multi-tenant, scalable and open BI platform – was not easy (and we are not finished yet), but the possibilities are endless. We opened the platform to developers only a few months ago, and today we are announcing a number of partnerships that are only possible because of GoodData APIs. We call this program Powered by GoodData, and it is available to all developers and architects who need access to BI functionality.

Access to BI functionality… let’s stop here for a moment. Accessing BI functionality in the world of enterprise software usually meant a build-versus-buy/OEM decision (the third option – open source BI – is as complex as the build option and as expensive in the long term as the buy option). But now GoodData gives our partners a completely new option for accessing BI functionality: an API call.

Instead of building, managing and operating a data warehousing and BI stack, our partners rely on our cloud-based service to deliver that functionality for them. And we are delivering BI functionality to hundreds of companies and thousands of users via API calls. It is Tuesday afternoon and our BI platform has served 1,218,689 REST API calls this week. Now that’s what I call a new way to access BI functionality. And what Marc would call a Level 3 platform.
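What “BI as an API call” looks like in practice can be sketched in a few lines. Everything below – the host, endpoint path, metric name and token – is hypothetical and for illustration only; it is not GoodData’s actual API:

```python
# Hypothetical sketch: asking a cloud BI service to compute a metric via REST.
# The host, path, metric name and token are illustrative placeholders.
import json
import urllib.request

BASE_URL = "https://api.example-bi.com"  # placeholder host, not a real service


def build_metric_request(project_id, metric, api_token):
    """Build (but do not send) a POST request asking the service for a metric."""
    payload = json.dumps({"metric": metric}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/projects/{project_id}/metrics",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_metric_request("demo-project", "revenue-by-region", "secret-token")
print(req.full_url)  # https://api.example-bi.com/projects/demo-project/metrics
```

One HTTPS request replaces the entire build-versus-buy/OEM decision the paragraph above describes – that is the whole point.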

I’ve always found the BI industry’s fascination with elitism a throwback to the old days of IT. It seems that most of the industry calls users with no access to their tools “the masses”. And it gets worse. Bloggers from Endeca half-jokingly call them “the angry mobs”, and SAP has BI for “the rest of us”. All these labels describe a business user who doesn’t have the time or skills to operate a complex BI solution designed for electrical engineers (who go by the name of IT). BI has a penetration rate of 10%, and everybody else is “the rest of us”.

It’s not just BI. The telco industry thinks their customers reside in “the last mile” – as far away from what’s important (the core of the network) as you can get. Shouldn’t their customers be in “the first mile”? And now BI is adopting the same “last mile” language, and the intent is the same: “keep my business users as far away as possible so I can focus on the core of my BI system.”

Making BI accessible to the “angry mobs” contradicts the BI industry’s quest for ever more complexity and hype. Petabyte warehouses, data visualization, social media analytics, predictive clustering and corporate performance management are the current industry buzzwords. Press releases and PowerPoint charts are full of names like Pig, Hadoop and Hive. These trends and tools were designed for the select few, not for the average business user.

My vision for GoodData was always very different. Our goal is to get rid of the convoluted BI value chain. We are using the economics of the cloud to offer a service that can be used by a business audience. I am on a personal mission to support “the masses against the classes” and to build BI that is not a dumbed down version of an expensive, complex and brittle enterprise solution. I’ve always believed that the enterprise data warehouse is the place where data goes to die, leaving the poor business users with Excel spreadmarts.

This is why we just announced a fully integrated and free service: GoodData for Zendesk. Every Zendesk Plus+ customer gets free analytics from us, and the setup time is less than 5 minutes. And why free? We actually believe in what Dan Vesset wrote back in 2004: that once our users get a taste of what they can do with it, they will start demanding more and more information and analytics. GoodData is BI for the business user. Something the elitist industry will call BI for “the masses”, “angry mobs” or even “the rest of us”…


The Innovator’s Dilemma by Clayton M. Christensen is my favorite business book – its main idea (disruptive technologies serve new customer groups and “low-end” markets first) was the guiding principle of all my startups. The best part is that even though everybody can read about the power of disruptive technologies, there is no defense against them. Vendors can’t help themselves. They study The Innovator’s Dilemma, pay Christensen to speak to their managers, but their existing customer base and “brand promise” prevent them from releasing products that are limited, incomplete or outright “crappy.” That’s what makes them disruptive. And industry analysts seem to be the only hi-tech constituency that has either never read Christensen or is still in absolute denial about it. It makes sense: a book claiming that “technology supply may not equal market demand” is heresy for people who spend their lives focused primarily on the technology supply side.

Christensen argues that vendors no longer develop features to satisfy their users, but just to maintain their price points and maintenance charges (can you name a new Excel feature?). But in many cases vendor decisions are driven more by industry analysts and their ever-longer feature-list questionnaires. The criteria for inclusion in the Gartner Magic Quadrants and Forrester Waves seem to be copied straight from Christensen’s chapter “Performance Oversupply and the Evolution of Product Competition”. Analysts are the best supporters that startups can have: they are paid by the incumbents to keep them on a path of “performance oversupply”, making them vulnerable to the very young vendors those same analysts have not “approved”!

Forrester BI analyst Boris Evelson gives us a great example of this point in his blog post “Bottom Up And Top Down Approaches To Estimating Costs For A Single BI Report”. While Boris is a super smart BI analyst, he somehow failed to observe that his price point of $2,000 to $20,000 per report opens a huge space for economic disruption of the BI market. Anybody interested in the power of disruptive technology in BI should listen to a recent GoodData webinar with Tina Babbi (VP of Sales and Services Operations at TriNet). Tina described how the economics of cloud BI enabled her to shift TriNet’s sales organization “from anecdotal to analytical”. This would not be possible in the luxury-goods version of BI, where each report costs thousands. Fortunately, Tina is paying less per year for a “sales pipeline analytics” service delivered by GoodData than the established vendors would charge for a single report.

I hope Boris’ blog post will appear in a future edition of The Innovator’s Dilemma as a textbook example of how leading analysts failed to recognize that established products are being pushed aside by newer and cheaper products that, over time, get better and become a serious threat. And with friends like Forrester and Gartner, the incumbents don’t really need young and nimble enemies…


Peter Yared recently wrote a BusinessWeek guest blog post called “Failure of Commercial Open Source Software.” Not surprisingly, his post caused a lot of angry replies from people who work for COSS companies. “The emperor is not naked,” they argued.

I believe that the COSS emperor is openly naked. And the discussion shouldn’t be about whether COSS is a complete or a partial failure just because there are a few successful exits that Peter neglected to mention. At the end of the day, Peter’s comment that “selling software is miserable” is true. Every sales rep involved in selling COSS would agree (I’m interviewing many of them now). Selling COSS is no easier than selling any other form of software.

Any company using the word “open” should be able to explain the true cost of delivery (this is one of Peter’s points). And there is an obvious litmus test for the openness of COSS companies, one that I would call “open pricing”: COSS companies should openly publish their price lists and clearly mark what’s free and open and what’s paid and closed. Otherwise OSS is just a bait-and-switch leading to the familiar proprietary software tactic of customer lock-in – the very thing OSS was supposed to get rid of in the first place.

Let’s take a look at some COSS companies in the Business Intelligence space. The bait and switch is in full swing here:

We announced GoodData pricing earlier today, and I would argue that we are a more open company than any of the companies listed above. Our customers know exactly what service they get and how much it will cost.

We stick to our company motto: GoodData = BI – BS. And there is a lot of BS going on in the COSS space. It may actually be its biggest failure.

Full disclosure: I have been a big believer in open source since we open-sourced NetBeans more than 10 years ago.


A long time ago I came to the conclusion that “independent industry analyst” was an oxymoron. But the willingness to sell independence for cash reached a new low with TDWI’s New SaaS Business Intelligence Portal. Please visit the link and see if there is any trace of independence left…


Moore’s Law, as I read it, says that a computer system’s performance/price ratio will double every two years. And that was very much my expectation when GoodData started using Amazon Web Services almost two years ago. But I had to wait until today to see Moore’s Law at work: Amazon announced a 15% drop in EC2 prices. The price of the small Linux instance had been constant at $0.10 per hour for the last two years – now it will be $0.085.
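The gap between expectation and reality is easy to quantify. Under that reading of Moore’s Law – performance/price doubling every two years, i.e., the price of the same instance halving – the numbers from this post work out as follows:

```python
# Expected vs. actual EC2 pricing, using the figures from this post.
# The halving assumption reflects the performance/price-doubles-every-
# two-years reading of Moore's Law used above.
old_price = 0.10   # USD/hour, small Linux EC2 instance two years ago
new_price = 0.085  # USD/hour, after the announced 15% cut

expected_price = old_price / 2  # doubled performance/price => price halves
actual_drop = (old_price - new_price) / old_price

print(f"expected price after two years: ${expected_price:.3f}/hour")
print(f"actual price: ${new_price:.3f}/hour ({actual_drop:.0%} drop vs. 50% expected)")
```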

15% in two years – not exactly the exponential improvement in the performance/price curve that I expected. I started to wonder why. Here are my two explanations – I believe the second is more likely:

1. AWS prices were set way too low two years ago to attract developers; Moore’s Law helped the price catch up with the real cost of running the cloud.

2. AWS is a monopoly and Moore’s Law does not apply.

What? Cloud and monopoly? Isn’t utility computing a perfect example of a fiercely competitive commodity market where the price curve is shaped only by supply and demand? What would Nick Carr say? Unfortunately not. As much as we read about different cloud providers, AWS is the only real provider of “infrastructure as a service” in town. If you don’t want to be locked in to proprietary Python or .NET libraries, there is not much choice.

Until we see the performance/price of AWS double every two years, we should keep wondering about monopolistic pricing.


Back in the good old days of enterprise software, we did not need to worry about our customers. We delivered bits on DVDs – it was up to the customers to struggle with installation, integration, management, customization and other aspects of software operations. We collected all the cash upfront and took another 25% in annual maintenance. Throwing software over the wall… that’s how we did it. Sometimes almost literally…

I now live in the SaaS world. My customers only pay us if we deliver a service level consistent with our SLAs. We are responsible for deployment, security, upgrades and so on. We operate the software for our customers and deliver it as a service.

But there now seems to be a new way to “throw software over the wall” again. Many software companies have repackaged their software as Amazon Machine Images (AMIs) and relabeled it as SaaS or cloud computing. It’s so simple, it’s so clever: dear customer, here is the image of our database, server, analytical engine, ETL tool, integration bus, dashboard and so on. All you need to do is go to AWS, get an account and start those AMIs. Scaling, integration and upgrades are your worry again. Welcome back to the world of enterprise software…

The AMI is the new DVD, and this approach to cloud computing is the worst thing that could happen to SaaS. And SaaS in my vocabulary still means Software as a Service…


Terry Pratchett once wrote that “gravity is a habit that is hard to shake off”. We could say the same about the financials of SaaS BI companies. As much as startups in this field would like to shake off their bad economics, reality always catches up. We’re seeing one SaaS BI startup after another go out of business. Back in June it was LucidEra, and earlier this week Blink Logic ceased operations. But anybody who even briefly looked at Blink Logic’s financials (it was a public company) shouldn’t be surprised by this event.

Why do so many attempts to marry BI and SaaS fail? The problem is that SaaS BI sounds simple… simple enough to take an existing BI asset (integration engine, open source analytical engine, columnar database, dashboarding, even domain expertise and consulting) and just host it! All it takes is VMware or an AWS account, a web server, and Flash or JavaScript. Some people call this a paradigm shift; I call it window dressing. LucidEra was essentially a restarted Broadbase, Blink Logic was once called DataJungle, PivotLink recently changed its name from SeaTab, Cloud9 Analytics has a secret history as Certive, Success Metrics morphed into Birst. I could go on…

Why do SaaS BI companies have bad economics? It’s an attractive market – one of the last few open spaces in software. BI requires dealing with lots of data, lots of compute power and many users. SaaS + BI seems obvious. But truthfully, it’s such a difficult opportunity that it requires a new approach, yet everybody is taking shortcuts. SaaS BI isn’t just hosted BI, just as email is not just better faxing and wikis are not just simplified Microsoft Word. Some time ago I wrote a case study on how my former company, NetBeans, was able to successfully compete against giants like Symantec, Borland and IBM; that case study is very relevant to our SaaS BI discussion.

The SaaS BI paradigm shift needs to be truly transformational in order to succeed – something that will get BI above the 9% adoption flatline it’s been stuck at for years. Not everybody gets this. One of the best analysts in this space, Boris Evelson, wrote a blog post earlier this week focused on the differentiation of SaaS BI startups. His first question is VC backing: is the firm backed by a VC with a good track record in the information management space? But LucidEra was very well funded by leading VCs. The correct question Boris should have asked is: are the backers of the company funding innovation? Do they understand that it takes three years to become an overnight success?


It’s not a shock to state that cloud computing will disrupt the business model of commercial software. But how will it affect the open source movement?

The rise of open source is clearly linked to the rise of the web. Buy a commodity piece of hardware, download the source code of any of thousands of open source projects, and start to “scratch your own itch”. My Linux box will communicate with your Linux box as long as we stick to some minimal set of protocols. The web is loosely coupled, and software can be developed independently, bazaar style.

It’s not quite as straightforward in the cloud. Clouds are also composed of thousands of commodity PCs, but the cloud operator manages the overall architecture and deployment – power supply, cooling, hypervisors, security, networks and so on. We don’t rely on a minimal set of protocols in the cloud. On the contrary, the cloud is defined by fairly complex, high-level APIs. Even though the actual cloud OS may come from the open source domain, the tightly coupled nature of the cloud prevents users from modifying the cloud software.

There’s a lot of talk today about setting up private clouds with an open source cloud OS, but the idea of private clouds is simply a delusion. Since the owner of a private cloud has to purchase all the required hardware upfront, private clouds don’t provide the main benefit of cloud computing: elasticity. Other people claim that clouds are not compatible with the open source movement, or call the idea outright “stupidity”.

I see two possible solutions to this problem:

Benevolent dictator: Leading cloud providers (Amazon, Google, MSFT) open-source their complete stacks. This means they would let the community inspect the code, fix bugs, suggest improvements and define a clear roadmap, similar to the Linux roadmap. It would also require a benevolent dictator to manage the evolution of the cloud. Given the level of investment required to build and operate a cloud, I don’t believe this is a likely scenario.

The new PC: The open source community accepts the cloud as the new HW/OS platform. Instead of building apps on top of x86 platforms (Wintel, Mac…), open source applications would be built on top of Amazon Web Services or Google App Engine APIs. And these apps would handle the portability of data so that data doesn’t get locked in the cloud.

At the end of the day, cloud computing equals utility, and utility creates stability. And a stable set of APIs, protocols and standards is a great place for open source to flourish. The best open source projects grew on top of stable standards: MySQL on SQL, Linux on x86, Firefox on HTTP/HTML. I wonder what the most important OSS to grow on top of the cloud will be…