Monday, April 28, 2008

We’re currently taking Google Apps for a test drive and Dave had some fun trying out the Salesforce.com integration over the weekend. I was really surprised to hear today that when he filed support issues with Salesforce.com he got no response until this morning. Not even an automated reply! I find it very strange that a company that sells itself on being there when you need it wouldn’t have support on the weekends. I must admit it took us a lot of effort to figure out the infrastructure and processes for scalable 24x7 support at MuleSource, so I can understand why Salesforce.com might have avoided it given their sales model.

Friday, April 18, 2008

SUN has all the right ingredients to make a PaaS (Platform as a Service) play:

They know hardware

They have a great operating system, Solaris

They have the Java platform and mindshare

They even have a great database offering, MySQL

Given all this, plus the potential reach SUN has into Java communities and commercial organisations, they should be leading the PaaS movement. Oh wait, SUN does have an offering. It’s called Network.com and not many people have heard of it.

SUN may have a lot under its belt but one thing they lack is the marketing engine to get their message out. This might be because for the last 8 years SUN’s messaging has been all over the map. I like SUN (did you know they are the world’s largest Open Source company? Me neither) but they need help. Apart from their hardware business, I don’t think many people know what SUN does or what their business model is.

I’d love to see SUN dust off Network.com and create a PaaS offering. They should base it on Solaris, Java and MySQL, outsource their PR and marketing for this project and dive head-first into the PaaS game.

Finally, Fring have released a VoIP/Chat application for the iPhone. You can connect using your Skype, GTalk, Twitter, Yahoo, MSN, ICQ and generic SIP provider accounts and start talking. It even supports Skype Out. Fringing cool! We had some problems testing Skype voice yesterday but it's still a beta.

Monday, April 14, 2008

I must confess I didn’t fully get REST until recently. Given the amount of interest around REST, most people understand the basics:

It’s an architectural style, not a technology in itself; rather, it is a set of well-defined guidelines or rules for building scalable applications utilizing HTTP.

HTTP verbs such as POST, GET, PUT and DELETE are used to communicate desired behaviour to the server.

Each of these verbs has a well-defined CRUD action associated with it, i.e. POST = Create, GET = Read, PUT = Update, DELETE = Delete.

URIs are used to represent resources to act upon. The information that appears in URIs usually identifies Nouns such as http://myhost.com/people/{person} or http://myhost.com/people/{person}/addresses.
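To make the verb/noun pairing above concrete, here is a minimal sketch of a resource handler that dispatches HTTP verbs to CRUD actions. The `people` resource and the status strings are purely illustrative; no particular framework is assumed:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: one resource, four verbs, four CRUD actions.
public class PeopleResource {
    private final Map<String, String> people = new HashMap<>();

    // Dispatch an HTTP verb on /people/{person} to the matching CRUD action.
    public String handle(String verb, String person, String body) {
        switch (verb) {
            case "POST":   people.put(person, body); return "201 Created";   // Create
            case "GET":    return people.containsKey(person)
                                  ? people.get(person) : "404 Not Found";    // Read
            case "PUT":    people.put(person, body); return "200 OK";        // Update
            case "DELETE": people.remove(person);    return "204 No Content";// Delete
            default:       return "405 Method Not Allowed";
        }
    }
}
```

The point of the sketch is that the client never names an operation; it names a resource and lets the verb carry the intent.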

What I didn’t get until recently is how you build applications in a RESTful way. This was because my mind was still set in RPC mode. I was trying to map RPC/WS calls to REST, and you quickly start to think there aren’t enough verbs in HTTP to perform all tasks. The real problem was that in order to build a RESTful architecture you need to think in terms of Resource Oriented Architecture (ROA). That is, you need to leave behind the urge to define interactions in terms of ‘what you want to do’ and shift focus to ‘what resource you want to act upon’.

For example, you may have a Web Service that has a login() method. This is an action; it’s something you want to do. How does this map to a resource? A resource needs to be defined where login() becomes an action on the resource. In this case it would be a UserSession resource. This may not be immediately obvious coming from the WS/RPC world, since the UserSession resource didn’t previously exist.
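A sketch of that shift, with made-up names: login() becomes “create a UserSession resource” (POST /sessions) and logout becomes “delete that resource” (DELETE /sessions/{id}).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative only: the login() action re-expressed as CRUD on a
// UserSession resource rather than a remote procedure call.
public class UserSessions {
    private final Map<String, String> sessions = new HashMap<>();

    // POST /sessions -> create the session resource, returning its id
    public String create(String user) {
        String id = UUID.randomUUID().toString();
        sessions.put(id, user);
        return id;
    }

    // GET /sessions/{id} -> read the session resource (null if none)
    public String get(String id) {
        return sessions.get(id);
    }

    // DELETE /sessions/{id} -> destroy the session, i.e. log out
    public boolean delete(String id) {
        return sessions.remove(id) != null;
    }
}
```

Once the session is a resource, the four standard verbs turn out to be enough; no login or logout verb is needed.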

I am still getting to grips with some of the detail of ROA but I am becoming a big fan. I love the fact that RESTful proponents are very particular about REST terminology (apologies for any faux pas I might have made here); the power of REST seems to lie in the acceptance of a core set of principles, with no room for ambiguity or bending the rules. This also means that REST is not suited to all architectures, but it is certainly a powerful architectural style that should be in every architect’s toolbox.

Sunday, April 13, 2008

Amazon Web Services (AWS) amazes me. Not only have they built a fantastic platform, but it also seems such an odd direction for a book and DVD company. I have often mused about how Amazon went in this direction by imagining the board meeting when the suggestion came up:

Jeff:

We’re doing great. Book sales are through the roof, our recently launched DVD service is taking off and we even have a community marketplace that is getting traction.

Investor:

Yes, the numbers look great. What’s next?

Jeff:

Well, I think we should develop a compute cloud and build a set of Web Services around it to allow developers to build applications on our technology.

Investor:

Hmmm... I don’t get it. What about selling electronics?

Jeff:

No, I like the cloud idea.

Talking with Dave the other day gave me some insight as to how Amazon really stumbled on this interesting direction (though I still like my boardroom conversation). Basically, Amazon has a boatload of compute power available for redundancy and to cope with peaks such as Christmas. About 6 years ago the economy was on a downturn, Amazon sales were not doing as well as hoped and their stock started going south. Jeff Bezos, who has a reputation for having some crazy ideas, decided to back the Elastic Compute Cloud (EC2) project since Amazon already had the infrastructure and know-how for building hugely scalable systems, and they had a bunch of hardware that was doing nothing.

It was a gutsy move for Amazon since EC2/AWS would not have been a short-term revenue generator, and I doubt it is having a huge effect on Amazon’s earnings 5 years later. However, few would deny that many in the industry are now looking into compute clouds and SaaS with great interest. It’s just a matter of time.

Every now and again I get into a discussion about why I decided not to adopt JBI for Mule. Admittedly, the topic has cropped up less and less over the past 2 years as people realize that just because something is called a standard doesn’t make it the best solution. However, now and again there is a die-hard fan who pushes JBI as the only way. It’s been a few years since I read the JBI specification, but here goes.

When JBI (Java Business Integration, a.k.a. JSR-208) first started cropping up it generated some interest since it was an attempt to standardize an area of application development where there were a lot of moving parts and complex problems to solve. Of course I looked at the spec early on since Mule was a platform operating in the integration/ESB space.

My initial reaction was mixed since I felt there was scope for standardization in integration, but the scope of JBI seemed to intrude into the problem space much further than was required. Integration problems are varied, complex and different for every organization because the technologies, infrastructure and custom applications in organizations are always different. Furthermore, these environments are effectively immutable. This is a key point since many proprietary integration solutions (EAI brokers, ESBs, SOA suites) assume that an organization is either creating a green-field application or can rip-and-replace pieces of their infrastructure to make way for the Vendor X way of doing things.

Mule was designed around the philosophy of “Adaptive Integration”. What this means for Mule users is that they can build best-of-breed integration solutions because they can choose which technologies to plug together with Mule. It also means that they can leverage existing technology investment by utilizing middleware that was purchased in the past from other vendors. When talking about integration or SOA, I think the piece with the most value is the glue between systems. That said, it is vitally important that this glue is as flexible as possible in order to be useful for a wide range of integration and SOA scenarios. This is one area where I think JBI went wrong.

JBI attempts to standardize the container that hosts services, the binding between services, the interactions between services, and how services and bindings are loaded. It sounds like a great idea, right? The problem occurs when the APIs around all these things make assumptions about how data is passed around, how a service will expose its interface contract and how people will write services. These assumptions manifest themselves as restrictions, which as we know are very bad for integration. These assumptions include –

XML messages will be used for moving data around. For some systems this is true, but for most legacy systems XML didn’t exist when they were built, and they use a variety of message types such as Cobol CopyBook, CSV, binary records, custom flat files, etc.

Data transformation is always XML-based. Transformation is hugely important for integration, so to assume all transforms will be XML and use the standard javax.xml.transform library was a huge limitation.

Service contracts will be WSDL. This might seem like a fair assumption, but again it’s very XML-centric, and WS-centric too. We know that back- and middle-office integration is no place for Web Services. What about other ways of defining service contracts, such as Java annotations, WADL or Java interfaces?

No need for message streaming. There is no easy way to stream messages in JBI. It just wasn’t a consideration. Some vendors have worked around the API to support streaming, but that defeats the purpose of having an API.

You need to implement a pretty heavy API to implement a service. This means the people writing your services need to understand more about JBI than is necessary. Mule has always believed that a service can be any object, such as a POJO, an EJB session bean or a proxy to another component. This means Mule does not force developers to know anything about Mule itself; they only write logic relevant to the business problem at hand. It’s worth noting that vendors have also found workarounds in JBI to allow developers to deploy POJO services, but it quickly starts looking like JBI is working against them.

It’s not actually that clear what a service engine is in JBI. JBI seems like a container of containers, but what about services? Do I need to host an EJB container inside JBI and then have an EJB session bean as my service? Do I write a service engine per service? It seems both may be valid, but I never thought either was optimal.
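To make the POJO point concrete, this is the kind of plain class a POJO-friendly container can host as a service. The class and method names are made up for illustration; the point is that nothing in it depends on JBI, Mule or any messaging API:

```java
// A plain POJO service: pure business logic, no container API to implement.
// Names are illustrative, not from any real project.
public class PriceService {

    // Apply a percentage discount to a price.
    public double applyDiscount(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return price * (100 - percent) / 100.0;
    }
}
```

A container hosting POJOs can expose applyDiscount over JMS, HTTP or anything else without the developer ever touching a middleware interface, which is the contrast being drawn here with JBI’s heavier component API.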

There are other issues as well. The JBI specification is obfuscated with lots of new jargon. For a new developer, the sheer amount of knowledge required to get started with JBI is a little overwhelming. This could be said of any middleware, but I think JBI is one of the worst for new developers to grasp.

The JBI specification does talk about the roles of the different types of users that will interact with a JBI system. This is a good approach, but in reality it was difficult to follow, primarily because it seems JBI was designed without really thinking about how the developer would work with it. Why? Well, when you get a load of vendors (though not Oracle or BEA) to sit around and design an integration system, they will design a system around the way they see the world. This is exactly what vendors did before JBI too, and we were often not happy with the products we were given. JBI seems to be a “standard” written by middleware vendors for middleware vendors.

This “vendor view” of the world is one of the main reasons Open Source has done so well. Traditionally, Open Source has been written by developers much closer to the problem being tackled. These developers can deliver a better way of solving the problem using their domain knowledge, their experience and the need for something better. This was the ultimate goal of Mule, and given the success of the project I believe that goal has been realized, with the caveat that things can always be improved (which we continue to do).

Also, I think the far-reaching scope of JBI affects its re-usability across vendors. By their nature, vendors need to differentiate themselves in order to compete. Because JBI attempts to define how everything should work, vendors have to build in features and workarounds that go beyond the specification to differentiate their service container. This breaks the re-use story, since using a JBI Binding Component in one container doesn’t mean it will behave the same way in another container.

This leads me to my final point, which is that one of the big selling points of JBI, and standards in general, is to promote re-use. But I don’t think we’ve seen much re-use from JBI. Where is the library of Service Engines and Binding Components? I know Sun have started porting their J-Way connectors to JBI, but nobody supports them. If you look at the JBI implementations, each has written its own JMS, FILE, HTTP, FTP, etc. Binding Components… not exactly what I’d call re-use.

One thing that concerns me about the JCP is that when something is released through this process, people label the resulting work a standard. This is a little strange to me since I think of a standard as something that is defined when the problem domain being addressed is fully understood. For example, TCP/IP is a real standard because it states a well-defined protocol for exchanging information over a network. Its scope is very clear, the protocol is precise and it deals with a single problem. JMS, on the other hand, is not really a standard; it’s a standard API defined ultimately by two vendors (IBM and TIBCO) who made sure that it suited their needs, not necessarily the needs of Java messaging. This is why JMS seems to have some quirks to it and the API is heavy (though improved in JMS 1.1).

The fact that JSR-312 is called JBI 2.0, not JBI 1.1, means that the forces behind JBI 1.0 realize the flaws and are looking to make amends. All those who badgered me in the past about adopting JBI in Mule may now want to retract some of their overzealous statements, since JBI 1.0 is already looking obsolete. I am glad I did not pollute Mule with JBI.

I had some good conversations with Peter Walker (co-lead) about joining the expert group for JSR-312. I appreciated the community spirit of Peter reaching out, but I felt the direction wasn’t compelling enough; it seemed to be going in a similar direction to JBI 1.0. We talked about OSGi and SCA support and how these may be woven into the specification. While I think this is a good idea, I don’t think it’s up to a standards committee to define how to build a server. Again, the scope seemed too broad.

So, should we just give up? No, of course not. But I think we should re-align our sights to target the particular areas that make sense to standardize. From my experience there are two areas where standardization would make sense: configuration and connectors. This is because these are the pieces that either the user or other systems need to interact with.

Service Component Architecture (SCA) addresses the XML configuration piece and I do think this is a step in the right direction (I’d like to see the same done for programmatic configuration). The issue with SCA right now is people still think it’s a bit Web Services centric and the configuration is quite verbose. Another major issue is that SCA hasn’t received the adoption required to be a front-runner.

On the connectors front we do have JCA (the J2EE Connector Architecture), but anyone who’s read the JCA spec may wonder how on earth a bunch of smart people could come up with such a recalcitrant API.

I’d like to see a compelling connector architecture come out of the JCP process. Focusing on this specific area may promote middleware vendors and other independent software houses to build re-usable connectors that we could all use.

Tuesday, April 8, 2008

There has been a lot of talk about OSGi in the last 18 months with many vendors and open source projects adopting OSGi at various levels. For many, OSGi is a new technology that is sometimes difficult to grasp since its applicability is not always obvious.

OSGi stands for Open Services Gateway Initiative and was born out of efforts from the embedded Java world some 8-9 years ago. In a nutshell, OSGi is a dynamic module system for Java. It provides a class-loading framework and a framework for managing the relationships between modules, known as bundles. The driving force for OSGi is to provide a secure, pluggable system so that bundles can be hot deployed to the OSGi container. The container provides good isolation between bundles, so no bundle can interfere with another unless that bundle explicitly exposes its services to the others. This means you can avoid all the class-loading tomfoolery we have with JEE, where you deploy your application and then get cryptic, non-standard messages about conflicting versions of Log4J on your classpath.

This isolation also means you can have different versions of the same bundle available in the container. That might sound like a bad thing, but in a well-defined environment such as the OSGi container it’s a very good thing. Imagine you have an SOA with a few services running. At some point the API for a public service changes. You have loads of clients out there relying on this service. Rather than just redeploy the new service and have all your clients hammering your support because the service no longer works, you can deploy the new service and keep the old one. Over time you can retire the old service once your clients have switched. The point is that you have a much cleaner migration, and your clients and support team will appreciate it.
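As a rough sketch of how this side-by-side versioning works, it is driven by bundle manifest metadata; the bundle and package names below are made up. A provider bundle exports a versioned package:

```
Bundle-SymbolicName: com.example.orders-provider
Bundle-Version: 2.0.0
Export-Package: com.example.orders;version="2.0.0"
```

A consumer built against the old API declares something like Import-Package: com.example.orders;version="[1.0.0,2.0.0)", so the framework keeps wiring it to the 1.x bundle while new clients resolve to 2.0.0. Both versions stay installed until the old one is retired.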

Recently, I did an OSGi ‘fireside’ talk with Rod Johnson at TSSJS Las Vegas this year. I thought it was bizarre that we had a room full of people who wanted to know more about OSGi, since it’s something that should really be invisible to most users, in the same way Java class-loading is (or at least should be). We analogized OSGi to anti-lock brakes on a car. Brakes are generally a good thing in a car and we all need them. Anti-lock brakes are very clever and provide a lot of value to our driving experience. However, they are not very interesting on their own; they are something we just expect to be there at our disposal when driving.

OSGi isn’t interesting on its own. It’s what OSGi enables that becomes interesting to consumers. Hot deployment of bundles, relationship management between bundles and bundle versioning become much more interesting once made available within the framework or platform on which you build your applications, such as the application server, Mule, Spring or even the JDK (i.e. JSR-277).

The point is that by next year I doubt there will be any more fireside chats about OSGi, since nobody will care. Instead I hope we will be talking about designing and enabling applications to take advantage of OSGi.

Monday, April 7, 2008

At the recent OSGR event (yes, that’s Open Source Goat Rodeo), Zack from MySQL gave everyone a pair of MySQL boxer shorts. These shorts, much like open source software, provided most of the functionality you’d want: elasticated waist, 2 leg holes, privacy. However, I noticed that there was no hole, you know… for frontal access. I was told that the hole was available in the enterprise version, but that I could look to the community to see if they had already created a hole. Of course I could always create my own hole, but then I’d have to maintain it.

Personally, I would always go for the enterprise version since I like good support from my underwear.

Left to our own devices we deployed the product in ways it wasn’t designed for; we really should have gotten help from the vendor.

Sunday, April 6, 2008

This is how conferences should be done. No AV problems, no PowerPoint slides to prepare, no panels, campfires or BOFs, and no sitting on corridor floors huddled around the power supply. Instead, this event was held in the mountains of Salt Lake City, with the focus on dialog and skiing. We were a small group of folks with a big presence in the Open Source and greater business community. The group included Matt Asay from Alfresco, Larry Augustin of VA Linux, SourceForge and XenSource fame, Jeff Borek from IBM, Fabrizio Capobianco from Funambol, Marc Fleury of JBoss fame, Lonn Johnston from PageOne, John Robb from Zimbra, Bryce Roberts from OATV and Zack Urlocker from MySQL.

Needless to say the conversation was fast flowing, forward thinking and thought provoking. We must definitely do it again next year.

Big thanks to Matt for not only organizing everything but also for the fantastic cherry pie he made with his own bare hands. It had the best crust I have ever tasted. Seriously.

So what about the Goat Rodeo, I hear you ask? Well, the name was coined over lunch on the last day, but I’m sure there will be a much stronger goat emphasis next year. Besides, the acronym OSGR sounds like a credible event.

We held our second MuleCon in San Francisco and what an event! We had around 250 attendees from all over the globe, with Norway, Turkey, Japan, the UK, Argentina, Sweden, Kuala Lumpur and Australia represented (that I can think of). Like last year we had customers presenting their experiences with Mule and MuleSource, technology partners talking about complementary products and of course the MuleSource team talking about our products and roadmap, with deep dive sessions and lots of Q and A. It was great to see the diversity of users and customers in terms of the size of the customer and their role in the organization. It was a real mixed bag of people and I think it worked really well; people really seemed to enjoy exchanging use cases, success stories and challenges. Though I didn’t have time to attend many sessions, I did catch glimpses of a few. Some that stuck in my mind were –

Scripps Network (the guys behind HGTV, FoodTV etc.) gave a good case study presentation about how they have used Mule to help them build rich client applications. What was really interesting was seeing the progress Scripps have made since they came to MuleCon last year, when we held an “Open Architecture” session using them as a use case. By the way, their use of Mule is a little different from most since they are actually scheduling programming content, including HD, so they really need a platform that can route very large messages in a reliable way.

Eugene from LeapFrog gave us a good insight into how to architect Resource Oriented Architectures (ROA) using Mule and demonstrated how this is being used at LeapFrog. I found this session particularly interesting since it sparked some ideas about how we could combine the recently released Mule RESTpack with the patterns Eugene used, to help Mule users build ROA.

Jahan More of U1 Technologies gave a good presentation on high performance messaging. U1 is a little-known JMS vendor right now that has a very fully featured JMS implementation which goes well beyond the JMS spec (in a good way). I was pretty impressed with what their Ambrosia JMS server can do, and we are currently in the process of certifying it with our QA infrastructure for use with Mule. Those who know me know I am a big fan of AMQP and see it making huge in-roads at all levels of the organisation in the future, but I still see a lot of value in having a solid JMS implementation available for people today.

Dan Diephouse gave a great drill-down of Mule Galaxy, our registry and runtime governance product. We saw a lot of interest in this, primarily because as the SOA projects in organisations mature, more people are faced with the issue of how to manage, discover and govern the increasing number of services available. Dan also spoke about our new RESTpack and the new Web Services support in Mule 2.0.

We ran two panels and both provided lively discussion. The first day’s panel, “Are they Smarter than a 5th Grader”, discussed for the most part the impact and adoption of Open Source in the enterprise. The session was hosted by Michael Coté from RedMonk, with Larry Augustin from Augustin Ventures, Matt Asay from Alfresco, Jason Maynard from Credit Suisse and Dave Rosenberg. M. di Paolantonio provides a good walk-through.

The second panel was hosted by Michael again, with John Davies, Eugene Ciurana, John Rowell and John Gardner. Unfortunately I missed this one, but M. di Paolantonio covered it.

We had a number of excellent technology partners presenting. Uri Cohen from GigaSpaces gave a good overview of the different strategies for combining Mule and GigaSpaces to build highly available, scale-out solutions. Alexis Richardson from CohesiveFT (who I’ve blogged about before) gave an excellent demonstration of using Mule and RabbitMQ deployed to Amazon EC2 using CohesiveFT’s Elastic Server On Demand (ESOD). Why would you do this? Well, you get a great scaling story when you deploy Mule and RabbitMQ together on EC2 clouds, and ESOD makes this very easy; a step in the SaaS direction. John Davies talked about high-performance message processing using Artix Data Services, which seemed to be well received by the slice of our audience from the financial sector.

For those who missed it, Mule 2.0 was released during MuleCon. The Mule 2.0 team did a great job of providing a drill-down session on what is new in Mule 2.0. There was a lot of buzz from customers and users wanting to get going with Mule 2.0. For the last session of the day we decided to combine my session with the “Developer campground”. We kicked off with a demonstration of Mule hot deployment, with some oohs and ahhs from the crowd. People were very keen to see when they could get their hands on this stuff. We need to do a lot more QA and tooling around it before it’s ready for our customers, but I imagine we might provide an early access release later this year.

Other members of our team presented the new Mule IDE that works with Mule 2.0. I heard lots of great comments about the progress we've made. Also, our business transaction monitoring product, Mule Saturn, got an introduction. This product is in beta at the moment but seemed to be something our customers really want.

I was overwhelmed by the response to this year's event. If you missed it this year, make sure you don't miss it next year. We will be posting all of this year's content on our website.

Tuesday, April 1, 2008

MuleCon has started and is going very well so far; we have a great turnout of over 200 people. Rory de la Paz from Biogen is giving a presentation about his experiences with Mule, and one of his slides is a pretty funny comparison of Mule and WLI. He uses the "sideways bike" versus the "single-speed bike" analogy. Before I go on, you need to see the sideways bike in motion.

Here the Sideways bike is WLI because:

It adds complexity to the solution

It does get you places... somehow

It gets you odd stares

On the flip side he described Mule as a Single-speed bike because:

It is simplicity in motion

It does get you places... fast

It gets you nodding stares

It is easy to fix and maintain

Personally, I am more of a car man, and Mule has been described as the Porsche 911 of ESBs by Burton.