In addition to these great overviews explaining where OSGi stands today, Eclipse has announced two very notable projects worth following: Project Gemini, which aims to provide implementations of the standards developed by the OSGi Enterprise Expert Group, and Project Virgo, which was made possible through the donation of SpringSource dm Server to the Eclipse Foundation. IBM and JBoss are also making great strides in exposing the virtues of OSGi to their enterprise customers, while Paremus continues to innovate with cool products like Nimble. And let's not forget about Apache Aries, iPOJO, and Sun's GlassFish. Of course, the notable list goes on.

The Early Adopters

With all the progress we've made over the past 18 months, we're left with two nagging questions:

Is OSGi ready for the enterprise?

Is the enterprise ready for OSGi?

Of course, the answer to each is, "It depends!" Let me elaborate through reference to the technology adoption lifecycle.
When OSGi in the Enterprise was published roughly 18 months ago, I have no doubt that we were in the Innovators phase. The hype surrounding OSGi was magnificent and vendors were working feverishly to leverage it. Yet the dearth of tooling, lack of third-party modules, and scarcity of available resources meant that adopting OSGi imposed a significant burden; leveraging OSGi in the enterprise was simply not feasible. This is why, 18 months ago, OSGi was not a viable enterprise technology. Yet it was easy to recognize the potential, as well as the momentum, which is why I made the following recommendations:

Include OSGi in the Infrastructure Roadmap

Monitor Product Evolution and Market Penetration

Design for Modularity Now

Today, I have no doubt that we've crossed over to the Early Adopters phase. I am starting to hear many more success stories from developers who are leveraging OSGi. This is because platforms expose the capabilities of OSGi to developers, tools have emerged that ease development, and many third-party frameworks have been "OSGi-ified". Each of these advancements makes it easier and more viable to develop applications that leverage OSGi. More than ever, we must heed the recommendations above. OSGi continues to make its mark. If you're an early adopter, OSGi is ready and waiting for you.

Crossing the Chasm

Yet the most significant hurdle looms - crossing the chasm. There are several possible routes for OSGi, so let's examine a few scenarios. First, some background.

JSR 294 is the standard that will define the module system for Java SE 7. Project Jigsaw leverages JSR 294 as part of the OpenJDK project to create a simple module system for JDK 7. Officially, Jigsaw is not part of Java SE 7. However, Jigsaw is going to be the reference implementation (RI) for Java SE 7, implying Jigsaw is a logical choice to become the JSR 294 RI. The OSGi Alliance does have members on JSR 294 (i.e., Peter Kriens) to help ensure compatibility of JSR 294 with OSGi. If you're interested, here's a perspective from the middle of last year following JavaOne. Now, a few things could potentially happen from here that affect the future of OSGi and Jigsaw.

In the first scenario, JSR 294 really is dead. Recently, JSR 294 was marked as inactive. Alex Buckley commented that this was only because of a process flaw in the JCP - a JSR is marked inactive if it hasn't produced an early draft for 18 months - and he assured the community that JSR 294 is alive and well. Yet possibly it's a sign that Oracle isn't behind the project, especially since they have leveraged OSGi in some of their products, though they have been quiet about OSGi since they announced their acquisition plans. With the death of JSR 294 goes the death of Jigsaw. So Jigsaw dies, Oracle throws their weight behind OSGi, and OSGi becomes the runtime module system on the Java platform.

In the second scenario, JSR 294's inactive status truly is a snafu in the system. The JCP delivers JSR 294, for which Jigsaw is the RI. Momentum builds around Jigsaw and it becomes part of Java SE 7 implementations. OSGi is relegated to niche markets (Eclipse RCP, home appliances).

In the third scenario, the second plays out, yet there are flaws in the specification and Jigsaw's implementation that prevent its adoption. Neil highlights a few of these. In the end, OSGi is proven to address these shortcomings, and it emerges as the de facto standard module system.

In the fourth scenario, JSR 294 does in fact ensure compatibility between Jigsaw and OSGi. They can be used interchangeably. Everyone goes home happy. Doubtful!

Try playing out your own scenario. Either way, all paths lead to a module system on the Java platform, and OSGi certainly has a significant head start.

But What If...

Wait a minute, though. Imagine that Java 7 never gets launched as a JSR and is instead a proprietary implementation from Oracle/Sun, who try to monetize Java 7. The other vendors decide to boycott Java 7 and the world sticks with Java 6. Let fragmentation of the Java platform commence. A dire scenario indeed.

Yet a plausible scenario, as well. There is no JSR for Java SE 7, meaning there is no specification and no technology compatibility kit (TCK). A recent article in SD Times points out that there have been no new JSR submissions since Oracle announced their decision to buy Sun. Likewise, no JSRs have entered early draft review. So what exactly are Oracle's plans? Well, it's possible we're about to find out. On January 27th, Oracle is hosting a webcast where they intend to announce their plans. And as Mitch points out, Ellison certainly doesn't want any distractions hanging around as the big race approaches.

So it may come down to this: the future of OSGi, Jigsaw, and modularity on the Java platform hinges on the direction Oracle intends to take Java technology going forward. Of course, their decision will have a tremendous impact on the entire Java ecosystem. We should have direction sometime soon. Time to set sail!

January 06, 2010

The term "disruptive technology" is much-abused in analyst writing, so let's remind ourselves what it really means. Clayton Christensen and Joseph Bower coined the term in their 1995 Harvard Business Review paper [1]. They point out that products tend to improve incrementally until, at some point, their high-end capabilities exceed what customers actually need. At this point, a disruptive technology may enter the market and offer a new and different value proposition. Disruptive technologies offer "a different package of attributes from the one mainstream customers historically value". This often means less leather trim and walnut dash inlays, but a raw performance that makes the impossible possible. Christensen and Bower use the evolution of the hard disk drive from 14" to 8" to 5.25" and then to 3.5" to illustrate points of disruption:

Each of these new architectures initially offered the market substantially less storage capacity than the typical user in the established market required… But the disruptive architectures created other important attributes – internal power supplies and smaller size (8"); still smaller size and low-cost stepper motors (5.25"); and ruggedness, lightweight, and low-power consumption (3.5"). From the late 1970s to the mid-1980s, the availability of the three drives made possible the development of new markets for mini-computers, desktop PCs, and portable computers, respectively.

So, bored with reading endless, 'X of the decade' reviews in newspapers over the holidays, I decided to join in with a list of technologies that have disrupted application development over the last ten years. Here’s my list (in no particular order):

Spring Framework

Ruby on Rails

Eclipse

Amazon Web Services

JBoss Application Server

Open Source Databases, such as MySQL

Apache Ant

JUnit/xUnit

Each of these disruptors is detailed in a table embedded in these slides.

What's interesting in retrospect is the bias in my list towards open source innovations. I didn't set out with that in mind. My colleague Kirk Knoernschild posits that this is because:

Developers creating tools to "scratch their own itch" tend to hit the mark better than vendors trying to invent a problem that needs solving.

Which brings us back to another Christensen concept: the "innovator's dilemma". Vendors tend to create products for their existing customers; disruptive innovation tends to create new markets outside an existing customer base, or a new demand for products with different attributes (often a lower price point). While some of these technologies were disruptive on price point (e.g. JBoss, MySQL), many were not – some offered just better functionality (Spring, Ruby on Rails), others were brand new (JUnit, Amazon Web Services).

The Jolt Awards are another reference worth returning to for spotting trends in application development technologies. I would say the Jolts award sustaining, rather than disruptive, technologies: they tend to be more vendor-centric than the open source bias in my list. That Eclipse and Hibernate won the 'languages and development environment' and 'libraries, frameworks and components' awards respectively in 2004, and again in 2005, was unusual.

There are many other disruptors I wanted to include that do not fit alongside these technologies: disruptive architectures like REST, and development practices, especially agile methods. These other disruptors will make a fine topic for another article.

July 02, 2009

Lord Phillips, this is an unexpected pleasure. We're honored by your presence.

PHILLIPS

You may dispense with the pleasantries, Thomas. I'm here to put you back on schedule.

Thomas turns ashen and begins to shake.

KURIAN

I assure you, Lord Phillips, my men are working as fast as they can.

PHILLIPS

Perhaps I can find new ways to motivate them.

KURIAN

I tell you, this middleware will be operational as planned.

PHILLIPS

The Emperor does not share your optimistic appraisal of the situation.

KURIAN

But he asks the impossible. I need more men.

PHILLIPS

Then perhaps you can tell him when he arrives.

KURIAN (aghast)

The Emperor's coming here?

PHILLIPS

That is correct, Thomas. And he is most displeased with your apparent lack of progress.

KURIAN

We shall double our efforts.

PHILLIPS

I hope so, Tom, for your sake. The Emperor is not as forgiving as I am.

<fades>

Commander Kurian will be relieved that it seems the Oracle Fusion Death Star will be operational as planned, as Anne reports here. So let's award the middleware team 10 out of 10 for execution. Ramming acquisitions into a (excuse the pun) coherent set of products is tough engineering and marketing work.

Product strategy is a different matter. Burton Group's position on heterogeneity (not homogeneity) as a driver for an application platform strategy is well known. We see opportunities for Rebel incursions.

Also, technically, I don't (yet) see any influence of the BEA microkernel architecture surfacing in other parts of the suite. This is a pity, because promoting a modular architecture like OSGi really helps developers adopt the parts of the stack they need and remove the bits they don't. The WSO2 Carbon project and the Paremus Service Fabric are pioneers of this type of architecture.

July 01, 2009

By invitation, I went to a special NDA analyst briefing last week on the Oracle 11g announcements. As I tweeted at the time, I found it very refreshing to hear a strong and definitive strategy underlying the announcements. It's a four-part strategy:

Complete: Oracle wants to be a one-stop shop. You can buy everything you need: applications, software infrastructure, tools, databases, management, and hardware infrastructure from Oracle.

Integrated: All components of the complete platform are designed (or perhaps refitted) to work with each other.

Best-of-breed: Each component in the integrated, complete platform is a credible, competitive product in its own right.

Hot-pluggable: The environment is standards-compliant, so, if you desire, you can replace a best-of-breed Oracle component with a comparable standards-compliant component from another vendor.

Yes, the Oracle environment is complete, integrated, and composed of (for the most part) standards-based, best-of-breed offerings. But if you take advantage of the "hot-pluggability" feature, you break the "integrated" benefits of the environment, which derive from the common development and management systems (JDeveloper and Enterprise Manager) - Oracle has deliberately limited the scope of these products to work only with Oracle-supplied platform components.

As alluring as the one-stop shopping strategy is, organizations must learn to just say "no". The reality is that no one has an entirely homogeneous environment. Oracle claims that Enterprise Manager supports end-to-end business process monitoring, but the concept breaks down if the process includes a .NET service or a third-party COTS application. A better solution is a management strategy that embraces diversity.

Diversity in IT systems is a fact of life. The trend toward heterogeneity is only going to increase as organizations take advantage of cloud computing or implement ebusiness collaboration systems.

As for the specifics of the announcements: Two things I really liked:

Integration of TopLink and Coherence -- you can now use the Coherence distributed data caching system within TopLink

Integration of Collaxa and Fuego runtime engines -- a single engine can now run both BPEL scripts and BPMN models

Not long ago, the decision to use Java or .NET for enterprise development was often filled with polarizing discussions. Argumentative points advocating the merits of one platform over the other were often based on misinformation and pseudo-facts. Today, many organizations have development teams that use both Java and .NET, and the argument is surfacing again. The decision today, however, is based on different criteria than before.

March 05, 2009

In the economic realm, economists often refer to 'structural change'. 'Structural change' occurs when long-term game rules have been fundamentally altered and truisms based on the old model no longer apply (e.g. 'real estate value never decreases', 'over the long term, the stock market always rises in value', 'my children will experience more opportunity', 'owning is better than renting'). If I propose that the economic and cultural underpinnings of the world around us are radically changing and that old models no longer provide good guidance, you may not be surprised. Because the underlying economic mechanics have changed (i.e. investment yield, counterparty risk, credit liquidity, personal consumption habits), economic bailouts based on existing models are not as effective (much to everyone's chagrin). We are forced to re-build the economy and meet the new long-term realities imposed by structural changes that occurred during the last twenty years (i.e. financial industry market participation, credit expansion, credit risk management and hedging, asset valuation). To be effective, participants must recognize a new model is in force.

Shouldn't a structural change within Information Technology be recognized today? Cloud computing proponents think so... Disruptive forces (i.e. commoditization, tightening economics, industrialization, business practice acceptance, and a resurgent desire for trust and credibility) are creating a new reality. Organizations can abide by old model rules or gain advantage by adapting.

The normal IT response is to adopt an 'architecture bailout' focusing on products and technology. But many 'architecture bailouts' (i.e. object-oriented programming, component-based design, event-driven architecture, and service-oriented architecture) are pursued as incremental, bolt-on practices. Rarely do organizations transform people, process, and technology to match the realities imposed by structural change. In 'What is IT Transformation, Really?', Brian Watson of CIO Insight furthers the discussion with this call to action: "[Position] your IT shop as the “internal consultant of choice,” versus available consulting or advisory services". How many of us know the cost of a compute hour, the cost of storage per GB, the cost of web service development, or the cost of our application platform? To determine which aspects of IT to shift into the Cloud, we need to fully understand the economics of IT, not just the architecture and technology.

Many Cloud computing messages target IT business model adaptation (i.e. pay-as-you-go, externalizing IT services, service consumption) instead of another architecture bailout (i.e. grid, virtual machines, autonomic computing, SOA). The conversation is shifting in the right direction, and Chris Howard is leading with research proposing a process to determine when to engage internal resources (core) or disengage the task and rely on external service providers in the Cloud.

Today's economic players are proposing a transformational response and stepping out of their comfort zone to lead. Meeting the structural changes imposed by Cloud computing will require significant action by IT leadership. A recent blog post by Mike Rollings states that incremental responses will often be ineffective and that leadership is critical. Jack Santos has an excellent blog entry on leadership and collaboration (the embedded 2-minute video is uplifting).

December 09, 2008

Mark recently blogged about the importance of modularizing the JDK, and mentions this is a primary goal of Java 7. As Mark points out here, the Java Kernel and Quickstarter of JDK 6u10 (now 11) are a step in the right direction in terms of reducing download and startup time. But really they're little more than a stopgap solution driven by the necessity of JavaFX. Modularizing the JDK is a step in the right direction, but bringing modularity to the Java platform is a long-term solution that is overdue.

The difference between a modular JDK and a module system on the Java platform is significant. While a modular JDK might be more efficient, a module system on the Java platform allows development teams to modularize their applications, something they stand to realize significant value from. These benefits include the ability to more easily maintain and extend an application, increased reusability of application modules, and increased manageability of applications. Without a module system, attempting to modularize large software systems is virtually impossible. Those who have experienced development with and without OSGi understand this.
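To make the difference tangible, here is a toy, framework-free sketch of the service-registry idea that OSGi formalizes (the Greeter contract and the registry below are hypothetical stand-ins, not the OSGi API): consumers bind to an interface and look an implementation up at runtime, so a module can be replaced without recompiling its clients.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a service registry. In OSGi, modules (bundles) publish
// services against an interface and consumers discover them at runtime,
// rather than hard-wiring a concrete class at compile time.
public class ServiceRegistryDemo {

    // The service contract one "module" exports.
    interface Greeter {
        String greet(String name);
    }

    // A minimal stand-in for the framework's service registry.
    static final Map<Class<?>, Object> REGISTRY = new HashMap<>();

    static <T> void register(Class<T> type, T impl) {
        REGISTRY.put(type, impl);
    }

    static <T> T lookup(Class<T> type) {
        return type.cast(REGISTRY.get(type));
    }

    public static void main(String[] args) {
        // "Module A" registers an implementation.
        register(Greeter.class, name -> "Hello, " + name);
        System.out.println(lookup(Greeter.class).greet("OSGi")); // prints "Hello, OSGi"

        // Swap the implementation at runtime: no restart, and the consumer
        // code above is untouched. This is the decoupling OSGi enables.
        register(Greeter.class, name -> "Bonjour, " + name);
        System.out.println(lookup(Greeter.class).greet("OSGi")); // prints "Bonjour, OSGi"
    }
}
```

In a real OSGi framework the registry is managed by the BundleContext, lookups are version-aware, and services can come and go dynamically - but the decoupling principle is the same.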

Unfortunately, it appears Sun might drop the ball. Java 7 isn't scheduled to hit the streets until 2010. Given that JSR 277 is no longer part of Java 7, one has to wonder if the Java Module System is already dead in the water. OSGi continues to gain traction in the marketplace, and is currently being leveraged by every single major platform vendor. Even Sun's very own GlassFish V3 Prelude builds atop OSGi. It's time to give up on JSR 277 and embrace OSGi as the standard module system for the Java platform. It's here today, and it's proven.

November 06, 2008

Data centres must be more dynamic, especially in today's rapidly changing markets and economies, but organizations can only change as fast as their infrastructure allows.

According to Drue, the Dynamic Data Centre is all about having an agile, fluid IT infrastructure that can automatically change to meet business demands.

What’s on servers in data centres? It turns out that application servers take up a decent percentage of the server estate. At the financial services firm I worked for, nearly 50% of servers in the data centres were dedicated to running application servers, and this figure is fairly typical. Drue asked me how OSGi fits into the picture of improving operational agility, so I’m going to start answering that by looking at how we deliver the Java applications hosted on those servers.

Enterprise Java program design has matured to an art form. We create loosely-coupled components and carefully layer them on top of each other. Then, we do the artless bit: the packaging. We stuff all the components and all their dependencies into one monolithic archive file for deployment into an application server. Kirk Knoernschild describes the problems with packaging and deploying Java applications this way in his “OSGi in the Enterprise” overview:

Small changes, even if isolated to specific areas of an application, require packaging and redeploying the entire application and often a restart of the JVM instance hosting the application. Transitive dependencies among JAR files when adopting new frameworks are difficult to manage. Homegrown frameworks must be deployed with each application that uses them. Dealing with the problems of multiple class paths is a burden when troubleshooting runtime errors. Fighting through the problems encountered when different JAR files contain the same fully qualified class is painful. The amalgamation of each of these issues, combined with many others that tend to surface, reveals a packaging and deployment model that is broken.

That sounds bad. And it is bad. It’s certainly a maintenance and operations nightmare. And it’s not going to give us the agility we need to move and change those applications dynamically around the data centre. It’s like designing and creating an incredibly elaborate layer cake, then throwing all of it into a big sack. If we want to replace any of it, we have to take it all out, add a different kind of chocolate and build it all again.

OSGi gives us a way to package and deploy each of the layers and tastes separately, more like a bento box than a squashed cake. If you don’t like chocolate mousse, fine - swap it for a lemon tart. It doesn’t affect the taste of the rest. If the crème brûlée has turned, you don’t need to return the whole box for a new brûlée. If the chef demands you eat the tiramisu with the apple sorbet, that’s easy. And if we’re sharing the dessert and we both want vanilla ice cream, but you want the Madagascar variety, that’s okay - we can have our own versions.

Sure, so OSGi gives us a better way of packaging and deploying Java applications. So what?

Well look behind this and it is actually a big deal. OSGi solves some of the thorniest Java deployment and operations problems, namely:

Eliminates application dependency problems - no more guesswork, no more tracking down classloader problems in production. OSGi requires explicit declaration of dependencies between modules.

Minimizes footprint - application servers and their hosted services can be “right-sized” by allowing administrators to build the minimal Java application platform. Only the bundles with the explicitly required capabilities will be installed. The platform will dynamically adapt by loading capabilities on demand.

Solves versioning problems - multiple versions of the same software module can be deployed within the same JVM instance.

Enables hot deployment - modules can be deployed and updated within a running system without restarting the application or the JVM.
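To ground the first and third points, here is a minimal sketch of the metadata an OSGi bundle carries in its MANIFEST.MF; the bundle and package names are hypothetical, but the headers are the standard OSGi ones:

```text
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.billing
Bundle-Version: 1.2.0
Export-Package: com.example.billing.api;version="1.2.0"
Import-Package: org.apache.commons.logging;version="[1.1.0,2.0.0)"
```

Every dependency is declared explicitly, with an acceptable version range, and the framework resolves each bundle's imports against what other bundles export - which is how multiple versions of the same module can coexist in one JVM without classpath guesswork.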

Solving these problems breaks down barriers to Java application mobility and supports changing the workload dynamically as demanded by Drue.

Platform vendors are all busy rebuilding their application servers based on OSGi. There are two flavours of OSGi fit-out you can see them doing: retrofit and ground-up. You can roughly interpret that as “OSGi for me” and “OSGi for everyone”. SpringSource’s Adrian Colyer characterises the “OSGi for me” vendors as saying:

I happened to use it internally but there's no real way you could tell; it's just my private way of partitioning out my internals

The ground-up, “OSGi for everyone” platforms, of which SpringSource dm Server and Paremus’ Infiniflow are good examples, are not only building an “a la carte application server”, but exposing the full capabilities of OSGi to enterprise applications.

Is OSGi ready for the enterprise, and, perhaps more importantly, is the enterprise ready for OSGi? Kirk Knoernschild answers these key questions in his “OSGi in the Enterprise” overview and in recent blog posts: here and here. Certainly the enterprise data centre is ready for technologies that enable fluid and adaptable configuration of workloads.

September 10, 2008

Over on the Executive Advisory blog, Chris Howard discusses Monday's outage at the London Stock Exchange (LSE). The TradElect platform, built on .NET technology, was down for seven trading hours. Knowing all I do about the demands of developing and operating trading systems, I’d be slow to throw stones at an entire platform and quick to murmur “glad it wasn’t me”. Chris raises an interesting point about the choice of platform for trading applications:

A surprising amount of .NET infrastructure underlies sophisticated trading applications worldwide, both on exchanges and within Financial Services companies.

One contributing factor is that trading platforms are often built from the outside in: the GUIs are established first, and design decisions made at the front end drive the entire architecture. Initial financial product development is always done on spreadsheets. In many cases, portfolio and risk management remains on spreadsheets far longer than is prudent (or legal). For their interfaces, traders like familiarity – front office applications are typically modeled to appear tabular, like spreadsheets. Traditionally Microsoft has the upper hand in the richness of its GUI widgets, and of course great integration with Excel.

This is a landscape that Microsoft is looking to exploit heavily with its Silverlight technology, and an important battleground for Adobe to try to gain a foothold in with AIR.

Front office developers are skilled .NET practitioners, and will tend to trust the technology when it comes to building out infrastructure. That trust, and the investment in development skills, are huge factors in platform selection, despite the pleas of technical architects to turn to Java for high-volume, mission-critical systems. It’s not that there’s no Java, just less than you’d think. In fact, Java is still looked on with some suspicion in the front office for latency-sensitive applications – because of its reputation for being interpreted (it can’t be faster than my C code) and for stopping the world to collect garbage. Of course this is almost entirely unfounded in this age of real-time JVMs with predictable GC and JVMs that optimize reactively based on runtime profiling data. Again, in a pinch, developers will stick to the tried and trusted. C and C++ will decay away slowly, half-life by half-life.

There are winners and losers in any event like this. The clear winners in this case are the competing multilateral trading facilities Turquoise and Chi-X, and dark liquidity pools. Trading organizations with smart order routing would have fared better – certainly brokers in a post-RegNMS/MiFID world could have sniffed out liquidity for some securities at alternative venues. Turquoise can’t do too much laughing – it had its own 80-minute outage last week, handling a fraction of the volume of the LSE.

I guess the other Superplatform players will believe that these outages strengthen their claims to be more suitable for mission-critical environments. I’m doubtful. There are too many other potential sources of the problem – weaknesses in change management or testing strategies, for example. If anything, what is strengthened is the case for effective IT governance.

If schadenfreude is your thing, there’s plenty offered in this story. Clara Furse, the LSE chief executive, headlined the FT's letters page on Monday, the day of the outage. “We have been operating an electronic trading engine since the introduction of SETS in 1997, and we recently introduced TradElect to ensure we remain at the cutting edge,” she wrote. “The emergence of new trading platforms should test the attractiveness of our services.” Indeed.

As Andrew Hill chuckles in the FT, “The higher the horse, the harder the fall.”

July 01, 2008

Oracle just briefed the world on its product roadmap and the integration of its BEA products. The good news is that there were no surprises. It's nice to know that adults are piloting the ship and making rational decisions.

Oracle will maintain the WebLogic brand name. Oracle will release rebranded versions of the BEA products within 100 days. It will deliver converged products over time.

Oracle WebLogic Server (WLS) will be the strategic application server going forward. Key features from OC4J will be added to WLS, including TopLink and the SCA container. Thomas Kurian indicated that the next major release will adopt an OSGi kernel.

Oracle Service Bus (formerly known as AquaLogic Service Bus [ALSB]) will be the strategic ESB going forward. Key features from Oracle ESB will be added to ALSB.

Oracle BPEL Process Manager remains the strategic BPEL engine. Oracle will develop a new converged BPM system based on Oracle BPA and AL BPM.