May 2009

May 13, 2009

This week, Microsoft announced they will enter the technology space inhabited by my company, StreamBase. Naturally, I got a flurry of questions, ranging from: "Is Microsoft's entrance going to crush your business?" to "Could Microsoft entering your market be validating?"

The question behind these questions is: "How can a company with less than 100 people compete with Microsoft?"

First, let's start with some background: what is complex event processing, or CEP? CEP is an enterprise software technology - big companies buy it to do big things. CEP is kind of like a database, but turned on its head for real-time decision-making. For example, Orbitz uses CEP to monitor their business in real time - when there's a problem, Orbitz knows about it in milliseconds, which means they can do something about it before it's too late. And they can change the logic for detecting and responding to problems on the fly. Databases, in contrast to CEP, are designed to help a company look at what already happened, not what's happening now. Clearly the database is an essential tool that can inform future decisions. But knowing what's happening now helps identify and seize business opportunities that exist for just a short time, like fixing your problem on Orbitz before you leave and go to Travelocity. CEP is increasingly becoming a critical tool for the enterprise.

The most extreme, and most common, use of CEP is in the capital markets - firms like Deutsche Bank, JP Morgan, and RBC use CEP to automate their trading decisions. The financial markets produce millions of events every second, and opportunities to make money last, in some cases, for minutes - in others, for only milliseconds. The systems that identify and exploit those opportunities are under constant development. In trading, rapid development and instant action aren't luxuries; they directly impact revenue and competitive differentiation.

So, can Microsoft dominate CEP? In his groundbreaking book Intrapreneuring, Gifford Pinchot researched the successes and failures of innovative efforts in large U.S. corporations. At a macro level, he found that big companies, statistically, are terrible at innovation:

"You might think that larger organizations would be stodgy in thinking up new ideas, but, because of a wealth of management talent, would be very good at executing them. It turns out, however, that just the reverse is true: our large organizations are producing large numbers of good ideas but generally are unable to implement them."

This, of course, is why the venture capital industry is alive and well - innovation naturally comes from small firms. However, Pinchot went on to describe that when big companies act like little companies, they can innovate. But they must create small groups, accept and expect failure, let these groups be autonomous, hire different kinds of people, compensate them differently, manage them differently, and get close to customers. All these attributes fly in the face of traditional big company culture.

But Microsoft has pulled off innovation before - are they doing it here according to Pinchot's findings? It's a CEO's job to watch for competitive threats, and I see no evidence to suggest Microsoft is approaching CEP in a way that portends success. Nimble moves by Microsoft could include: recruit top visionary CEP engineers (they haven't); create a small, independent group (they didn't - the product is coming from the SQL Server group, which has hundreds of engineers); get out and talk to customers directly and co-develop with visionary partners (we see no evidence of this); recruit entrepreneurs and empower them to break down corporate structures (Microsoft didn't do this either).

So the physics that would make an entrepreneur worry just aren't there at Microsoft's CEP effort.

Which leads me to my current answer: the fact that Microsoft is trying to get into CEP is a big, important endorsement of CEP. And since we think we have the best CEP product available (as many who have evaluated us have found), we invite the competition. If Microsoft's endorsement makes corporations more aware of CEP, it's better for StreamBase.

May 08, 2009

As the ponytail (and the tiny bio above) implies, my posts will focus on technical details such as language semantics, general programming issues, compiler design, or anything else I find nifty.

Bob Hagmann from Aleri recently posted a great piece on designing and debugging CEP apps; unfortunately, he leaves out what that process of debugging might actually look like. While some languages force you to spelunk through printf output or trace files to track down a bug, I have found that the returns from an integrated debugger are immeasurable (and greatly magnified by good design). StreamBase ships with a visual debugger built into its development environment, so I will expand on Bob's post with a concrete example using the StreamBase visual debugger. Consider the following sample application, which continuously calculates Bollinger Bands for stock quotes (for those playing along at home, this sample ships with StreamBase and can be loaded with "File -> Load StreamBase Sample...").

This app actually calculates Bollinger Bands in two different ways, one using a simple moving average and the other using an exponential moving average. For simplicity, the two paths are separate branches of the main app, allowing each path to be debugged separately. Within each path, the calculation is divided into a first step, where the moving average and standard deviation are calculated, and a second step, where the intermediate results are used to calculate the Bollinger Bands. This structure lets us use the StreamBase debugger to inspect the intermediate results and verify their values.
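For readers unfamiliar with the math, here is a minimal sketch, in plain Python rather than StreamBase code, of the two calculations: the middle line is a moving average (simple or exponential), and the bands sit k standard deviations above and below it. The window size, the value of k, and the use of a windowed deviation in the EMA variant are illustrative assumptions, not necessarily exactly what the shipped sample does.

```python
from collections import deque
import statistics

def bollinger_sma(prices, window=20, k=2.0):
    """Bollinger Bands over a simple moving average.

    Yields (middle, upper, lower) once the window is full.
    """
    buf = deque(maxlen=window)
    for p in prices:
        buf.append(p)
        if len(buf) == window:
            mid = statistics.mean(buf)
            sd = statistics.pstdev(buf)  # population std dev over the window
            yield (mid, mid + k * sd, mid - k * sd)

def bollinger_ema(prices, window=20, k=2.0):
    """The same bands, but with an exponential moving average as the middle line."""
    alpha = 2.0 / (window + 1)       # standard EMA smoothing factor
    buf = deque(maxlen=window)       # a window is still needed for the deviation term
    ema = None
    for p in prices:
        ema = p if ema is None else alpha * p + (1 - alpha) * ema
        buf.append(p)
        if len(buf) == window:
            sd = statistics.pstdev(buf)
            yield (ema, ema + k * sd, ema - k * sd)
```

The two generators mirror the two branches of the sample app: each consumes the same price stream and emits a band tuple per tick once enough history has accumulated.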

With the app paused at a breakpoint, we can inspect the fields of the current tuple, or of tuples at any earlier point in the active data flow, making it easy to check transformations and invariants that need to be maintained. The StreamBase debugger also lets the user modify the value of fields in the current frame before resuming execution, providing a simple way to test downstream components with hand-crafted data.

Of course, in an application this simple, one doesn't even need a debugger or careful design. But when debugging a larger application, like a smart order routing system, organizing your code into staged transformations, where each stage is a submodule, can be invaluable.

This sort of design not only allows you to step into each module and debug its inner workings separately, but it also provides clean abstraction barriers for unit testing. And once you have unit tests for each stage and a debugger to help you fix regressions, you will be free to refactor, optimize, and extend your application with impunity.
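To make the unit-testing point concrete, here is a sketch in plain Python, with invented stage names and fields rather than StreamBase APIs, of how factoring logic into small stages makes each one testable with hand-crafted tuples:

```python
# Two tiny stages of a hypothetical quote-processing pipeline. The stage
# names and fields are illustrative; the point is that each stage is a
# pure function you can exercise in isolation.

def enrich_mid(quote):
    """Stage 1: add a mid-price field to a raw quote."""
    return {**quote, "mid": (quote["bid"] + quote["ask"]) / 2.0}

def flag_wide_spread(quote, max_spread=0.05):
    """Stage 2: mark quotes whose bid/ask spread exceeds a threshold."""
    return {**quote, "wide": (quote["ask"] - quote["bid"]) > max_spread}

# Unit tests feed each stage hand-crafted tuples, mirroring what the
# debugger lets you do interactively by editing fields in the current frame.
def test_enrich_mid():
    assert enrich_mid({"bid": 10.0, "ask": 11.0})["mid"] == 10.5

def test_flag_wide_spread():
    assert flag_wide_spread({"bid": 10.0, "ask": 11.0})["wide"]
    assert not flag_wide_spread({"bid": 10.0, "ask": 10.04})["wide"]
```

Because each stage is a pure function over a tuple, a regression caught by one of these tests points directly at the submodule to open in the debugger.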