Peter Kriens discusses OSGi

Bio: Peter Kriens is currently the OSGi Director of Technology. He has worked extensively for major companies like Intel, Ericsson, Motorola, Adobe, IBM, and Nokia. In 1998 he started working with the precursor to the OSGi Alliance. He has been heavily involved in all major releases of the OSGi specifications. He currently manages the OSGi technical work and gives workshops as the OSGi evangelist.

My name is Peter Kriens; I am Dutch, I grew up in Holland. I went to the electronics school there, I studied electronics. I started out building newspaper systems, and I spent about 10 years in Sweden working for Ericsson in the GSM management system area, later at Ericsson Research in Stockholm. From there I became the OSGi expert for Ericsson, and later I was hired by the OSGi Alliance to do their technical work. I have been part of the OSGi specification work since 1998. I started my own company in 1990: I was tired of building newspaper systems, I was doing a lot of management work and I wanted to work with technology all the time, so I started my own consultancy company, and that went off like a rocket.

It's really a big surprise for me -- well, surprise is a big word. We started by making something for the home automation market. When we started with OSGi in 1998, the assignment that we got from the business people was "build a system for home automation", because at that time, 1998, the Internet hype was going straight up. The business people saw this opportunity; I was working at Ericsson at that time. They saw that the telephone system was a nice basis for extending into home automation: that people would control their lights, their doors, their television, all those kinds of things that you can automate in a home. This was a very interesting area to be in, especially for big companies like Ericsson and Nortel. They started this group that was going to standardize it, because the operators didn't want to use anything Ericsson specific. We had built the E-box at Ericsson, but they didn't want a proprietary model, so they said "standardize it".

We came together to develop a standard specifically for that market, and if you look at that market there are some interesting problems in it. One of them was that we had this triad model, where you have the operator, the service provider and the end user. If you have that model, it means you are going to have code running in a home that is not just from the operator but also from the service providers. So you need to provide an execution environment where these different applications can work together; more importantly, they have to collaborate. It's not sufficient to just run them in the same system; they need to be able to use each other's services. One of the key problems in home automation is that there are a zillion different network standards. If you want to make an application that controls the lights, you don't really want to know what kind of network standard is used to control the low-level on and off state of those lights; you need higher-level abstractions. So that was the problem we were trying to solve: multiple providers of software that need to be able to work together in a transparent way, whose components will likely meet each other for the first time in the box, because you are going to get a zillion different combinations to support and you can't test all of them a priori.

Trying to solve that problem, we came up with a modular system where you package your applications in what we call bundles, which are Java JAR files, together with a service registry. That is how those things are able to find each other dynamically and how they connect to each other to find the different services. The service layer is an intermediary: for example, if you have an abstraction for a light, then the bundle that provides the interface to the network would register a light service, and the one that wants to control the light would talk to that light service. But that means you could have multiple applications controlling the lights and multiple applications providing network interfaces, so it becomes a meeting point. One of the key things there is the dynamicity: if somebody plugs in a new interface or a new light, or somebody with a Bluetooth telephone walks into the home, you want the service representing that device to be available immediately, without having to reboot. This was 1998; you probably remember Windows 95 and Windows 98, where any change you tried to make immediately required you to reboot the system. Our mantra at that time was "no reboot".
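The register/find/bind idea can be sketched in a few lines of plain Java. This is a toy illustration of the pattern only, not the real OSGi API (the framework does this through `BundleContext.registerService()` and service listeners); the `Light` interface and all names are invented for the example.

```java
import java.util.*;
import java.util.function.Consumer;

// Toy in-VM service registry illustrating the OSGi pattern; not the real API.
public class RegistrySketch {

    /** Hypothetical device abstraction, standing in for a "light service". */
    public interface Light { void setOn(boolean on); }

    /** Minimal dynamic registry: register, look up, and notify on change. */
    public static class ServiceRegistry {
        private final Map<Class<?>, List<Object>> services = new HashMap<>();
        private final List<Consumer<Class<?>>> listeners = new ArrayList<>();

        public <T> void register(Class<T> type, T service) {
            services.computeIfAbsent(type, k -> new ArrayList<>()).add(service);
            for (Consumer<Class<?>> l : listeners) l.accept(type); // react now: "no reboot"
        }

        public <T> void unregister(Class<T> type, T service) {
            List<Object> list = services.get(type);
            if (list != null && list.remove(service))
                for (Consumer<Class<?>> l : listeners) l.accept(type);
        }

        @SuppressWarnings("unchecked")
        public <T> List<T> lookup(Class<T> type) {
            return (List<T>) new ArrayList<>(services.getOrDefault(type, List.of()));
        }

        public void onChange(Consumer<Class<?>> listener) { listeners.add(listener); }
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        // The "driver bundle" registers a light backed by some network protocol.
        Light kitchen = on -> System.out.println("kitchen light " + (on ? "on" : "off"));
        registry.register(Light.class, kitchen);
        // The "application bundle" finds every light, knowing nothing of the protocol.
        for (Light l : registry.lookup(Light.class)) l.setOn(true);
    }
}
```

The point of the sketch is the meeting-point role: the driver and the application never reference each other, only the shared `Light` interface and the registry.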

And if you look at OSGi, one of the most elegant things in it is that everything is dynamic: if you change the configuration, it's done on the fly; if you install a new bundle, a new application, it is installed on the fly, its services are immediately available to anybody that wants to use them, and when you uninstall it, it is removed from the system and the other bundles have to adapt. The dynamicity is one of the most elegant parts. Then there is the modularity: service providers, and also operators, can package their application in a single file and deploy it on the system. And on top of that there is the service layer, which allows these bundles to communicate with each other in an indirect way; they are able to find and bind to each other dynamically.

It's pure Java, and basically what we've done is abstract the classloader layer. A lot of applications today start to muck around with the classloader for extensibility, so we just provide a layer that allows you to install a JAR file into the JRE, into the OSGi environment. You start up the JRE, you start up the OSGi framework in it, and then you need to get an initial bundle installed; that bundle has an API to install other bundles, and those bundles can install other bundles, and so on.

My long story before was about the way that OSGi got started: when you look at the problem of being able to work with code from different parties and let them collaborate in one system, that's actually the problem we are facing today in most enterprise software as well. How do you get the components from all these different departments and all these different open source groups running together in the VM in a secure and reliable way? If you want to do that you need a modularity standard, you need something that clearly delineates the parts that you are building. OSGi has been doing that because of the earlier home automation problem, and that's exactly the problem you see in the market today. On top of that, you need to be able to update the software on the fly, because restarting big servers takes more and more time; being able to update parts of the code on the fly saves time there as well.

Plus the dynamics is something that people really like. There are several demonstrations now where people make web-based applications and can plug in a new bundle that adds extra features to the web pages of the application. This modularity, being able to extend your applications without having to have everything running at once -- these were key requirements in 1998 for home automation, and they have become common. That's what people want: it needs to react immediately, that's the dynamicity, and you need modularity to let different teams work together and make it all work in the end on the system.

Last week I was at EclipseCon, which was really nice because there are a lot of small and big companies starting to use it, and what you see is that people really like it because it forces them to be disciplined during development. And because of that discipline it's really nice to be able to use it everywhere -- basically everywhere you can run Java, you can run OSGi. For me it's not a surprise that you can implement OSGi in about 250k. So what are we talking about? That's peanuts in today's world. Because of that small size you can run it everywhere, but it has a big impact, because if you want to use it you must work in a disciplined way. John Wells from BEA said it rather nicely last week. In his presentation he said: "we thought we were working modularly -- we were disciplined, we were doing the right thing -- and we were so surprised when we moved to OSGi, because we found out we had never been working modularly. We had all these dependencies, all these links to all kinds of subsystems, and we never noticed it because it was all on the classpath, and as long as you didn't use it you didn't run into the problem."

Of course, everybody that uses Java knows that you can get unexpected surprises when you are not really careful. OSGi forces you to be disciplined about the modularity; you can't use anything unless you declare that you use it, which means the deployment phase is a lot easier because you don't get all these nasty little surprises at the end -- you get them up front, as the developer. You get much cleaner modules from your developers than you were used to. I think that's the key to why OSGi is so applicable over the whole range of computing where you can use Java. That's why you see it in mobile phones (Nokia is running OSGi), you see it on mainframes (we actually know that IBM is running it on mainframes), you see it in WebSphere, you see it in BEA, and JBoss is going to support it. The key thing is that it forces you to develop in a certain way, and once you do that you can deploy your components over the whole line of computing. Maybe not on the smallest machines, but everywhere above that.

If you look at the specifications, the core specification is pretty complete for the scope it tries to address, and that is modularity, the service layer, security and the execution environment. Those four areas are pretty well defined. Of course we will need extensions later, but I don't think there will be a big change in that. What will happen is that we'll need more services on top of OSGi. One of the key things, like what I said earlier about the light and the protocols you need to implement, is that you would like to standardize the common needs in a service layer so that different people can implement it and different people can use it. For example, say you have a bundle that contains a number of web services and needs to use a SOAP engine; there are different SOAP engines on the market with different requirements. It would be nice if you could standardize that, so the bundle programmer has to write as little as possible and another bundle provides the web service engine and does all the network traffic and serialization stuff.

Then not every bundle has to say "I'm talking to this engine, I'm talking to that engine", with the result that they don't really work together. Standardizing those kinds of things within OSGi, so that the bundle programmer has to do as little as possible -- I think there's a lot of work there. Another key thing is distribution. OSGi is an in-VM service-oriented architecture. SOA is usually seen as web services, communication going over the wire; in OSGi it's really procedure calls -- an OSGi service is a normal Java object, a POJO, with no special requirements, no interfaces that need to be implemented. The key thing in OSGi is that we try to standardize two things: we want to allow multiple implementations, and we want to let the bundle developer see a single way to solve a specific problem, while allowing different vendors to solve it in different ways. For example, the OSGi HTTP service abstracts how you use an HTTP server: it could be Jetty, it could be Tomcat, it could be a proprietary web server, but as a bundle programmer I have a standardized interface -- I put a servlet in my bundle and there's a standardized way to get it onto the web server. We have about 20 services in that area at the moment, which I've written up in the compendium, an 800-page book accurately describing what they do, and I think there's a lot of room for extension there. The core, I don't think so; that is pretty stable and will remain, except for a few details.
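The division of labor described here -- one standardized interface, many vendor engines behind it -- can be shown with a minimal plain-Java sketch. All names below are invented for illustration; this is not the real `org.osgi.service.http.HttpService` API, just the shape of the idea.

```java
import java.util.*;

// Sketch of the "one interface, many vendors" idea behind services like
// the OSGi HTTP service. Invented names; not the real compendium API.
public class HttpServiceSketch {

    /** Stand-in for a standardized service interface. */
    interface SimpleHttpService { void register(String alias, String servlet); }

    /** "Vendor" engine A -- in a real system this might wrap Jetty. */
    static class EngineA implements SimpleHttpService {
        final Map<String, String> routes = new HashMap<>();
        public void register(String alias, String servlet) { routes.put(alias, servlet); }
    }

    /** "Vendor" engine B -- might wrap Tomcat or a proprietary server. */
    static class EngineB implements SimpleHttpService {
        final List<String> log = new ArrayList<>();
        public void register(String alias, String servlet) { log.add(alias + "->" + servlet); }
    }

    /** Bundle code, written once; it never learns which engine it got. */
    static void bundleActivate(SimpleHttpService http) {
        http.register("/hello", "HelloServlet");
    }

    public static void main(String[] args) {
        EngineA a = new EngineA();
        EngineB b = new EngineB();
        bundleActivate(a);   // the same bundle code works against either engine
        bundleActivate(b);
        System.out.println(a.routes.get("/hello")); // HelloServlet
        System.out.println(b.log);                  // [/hello->HelloServlet]
    }
}
```

In the real framework the bundle would obtain `SimpleHttpService` from the service registry rather than by construction, which is what makes the vendor swap invisible.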

I was very interested in 277 when it came out; I tried very hard to get on board because of the experience of working in this area for 8 years. I thought I would be a very welcome guest on JSR 277. Unfortunately I was told that there were already too many people on board -- the group was 14 people at the time, currently I think it's about 25 -- and I tried to contact some people directly that I knew might help me get on board, but I didn't get on, which of course was a very bad sign. There were two people on it who were knowledgeable about OSGi, Richard Hall and Glyn Normington, but they didn't have that kind of background or history. When you develop something like OSGi you run into a lot of problems; we've gone through 4 releases, and if you look at the spec you see there's a lot of maturity in it, a lot of details that we learned the hard way, and they are not always obvious. It would have been really nice to be able to help other people that want to go there; plus, of course, from an OSGi point of view it was important that 277 would not make OSGi impossible. 277 has the advantage that they can make changes to the VM and to the language.

We would have loved to be able to make changes to the language, so I thought there could be good synergy. But again, I was denied access to the expert group. I have been pinging them all the time, and they came out with a draft about six months ago, and that was a big disappointment. They added maybe one thing that we had not addressed because it was outside our scope: a repository. The repository is really interesting and there are a lot of things you can do there, but there are also a lot of ways to screw that area up, and I think they are making a bit of a mess of it. You can see the lack of experience in that draft. I wrote an extensive blog analyzing it, and it was a feeling of disappointment: hey, if you get the chance to do it right, being able to change the language and the VM, then why not do it really right? They are before OSGi version 1 at the moment: no dynamicity, it's very static, and they make some very big mistakes. They try to link up the modules by allowing the modules themselves to participate, which sounds good: if you want to hook up modules, you have a piece of code that runs and finds the other modules that should be bound into the executable. That sounds attractive, but the moment you do that, the management system can't reason about it anymore. From a management point of view you have a disaster on your hands: you have to deploy something but you don't know what it is going to need, which in large systems is a no-no. You want a declarative model, like the OSGi manifest headers, so that you can analyze them and know whether it's going to run or not.
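The declarative model he contrasts with executable linking looks, in practice, like a few lines of bundle manifest. A hypothetical example with invented package names (the header names themselves are real OSGi R4 headers); a management system can read these dependencies without running any code:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.lightcontrol
Bundle-Version: 1.0.0
Import-Package: org.example.light;version="[1.0,2.0)"
Export-Package: org.example.control;version="1.0.0";uses:="org.example.light"
```

Because the dependencies are data rather than code, the framework (or an off-line tool) can decide a priori whether a set of bundles will resolve.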

There are several of those things: they don't take care of multiple versions, they don't check consistency. One of the problems is that if bundle A uses version 1 of a package and bundle B uses version 2 of that package, you don't want those bundles to talk to each other, because the moment they exchange objects of that package, the objects will come from different classloaders and you get a ClassCastException. We have extensive consistency checking in our spec so that you can't wire bundles together that don't share the same packages; we call it class space consistency. We have the 'uses' clause in the manifest header that allows us to calculate this a priori, when you install the bundle, instead of detecting it at runtime. Bundles that have different class spaces simply can't communicate with each other, which is perfectly fine for a lot of applications, and much better than letting them talk to each other and get ClassCastExceptions. Maturity is a big problem, and it's a pity, because they don't add any functionality that isn't already there in a better form. So I'm puzzled. And then we have 294. I don't know if you know the story, but 277 started to do the modularity for Java, and then they moved the language aspects out to another JSR, JSR 294; that was Gilad Bracha who came up with it. The only thing known about that part is a very confusing blog that he wrote about a year ago. I sent a response to that blog to find out "what do you mean with this, what do you mean with that", and he never replied; he just told everybody not to get confused by the syntax, which was basically all he posted. I'm very confused, I don't know why Sun is doing it, I wish they could get their act together; we from the OSGi side would like to work with the JCP.
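The failure mode behind class space consistency is easy to reproduce in plain Java, with no OSGi involved: load the same class through two different classloaders and the resulting runtime types are incompatible even though the names match. A minimal, self-contained demo (the `BundleLoader` here crudely simulates a bundle classloader by defining its own copy of the class instead of delegating):

```java
import java.io.IOException;
import java.io.InputStream;

// Why bundles with different class spaces must not exchange objects:
// the "same" class from two classloaders is two distinct runtime types.
public class ClassSpaceDemo {

    /** A trivial shared type, standing in for an exchanged package's class. */
    public static class Light {}

    /** Simulates a bundle classloader: defines its own copy of Light. */
    static class BundleLoader extends ClassLoader {
        BundleLoader(ClassLoader parent) { super(parent); }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(Light.class.getName())) {
                // Re-read Light's bytecode and define a private copy of the class.
                try (InputStream in = getResourceAsStream(name.replace('.', '/') + ".class")) {
                    byte[] bytes = in.readAllBytes();
                    return defineClass(name, bytes, 0, bytes.length);
                } catch (IOException e) {
                    throw new ClassNotFoundException(name, e);
                }
            }
            return super.loadClass(name, resolve); // everything else: normal delegation
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader app = ClassSpaceDemo.class.getClassLoader();
        Class<?> a = new BundleLoader(app).loadClass(Light.class.getName());
        Class<?> b = new BundleLoader(app).loadClass(Light.class.getName());
        System.out.println(a.getName().equals(b.getName())); // true: same name
        System.out.println(a == b);                          // false: different types
        Object o = a.getDeclaredConstructor().newInstance();
        System.out.println(b.isInstance(o));                 // false: a cast would throw
    }
}
```

The 'uses' directive lets the OSGi resolver rule out wirings that would put two such copies on the same communication path, before anything runs.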

This is an interesting question, because it is not a new thing; we did the same with JSR 232 for the mobile space. If you look at JSR 232, it was OSGi release 4.01 completely -- it basically was the whole of OSGi. I don't think it's rubber stamping. What we did with JSR 232, and are doing the same way with JSR 291, is that we allow the expert group (EG) in the JSR to put forward requirements. The EG is not doing design work, not doing development work; they do requirements gathering. We take those requirements and process them inside the OSGi organization, because of the IPR (intellectual property rights) issues around this.

The OSGi rules are more even-handed than the JCP rules about IPR, which allows us to have it both ways. We want to be in the JCP to show that we are collaborating with it, as a part of the Java family. Normally in the JCP -- and I'm not sure everybody is aware of this -- there is a single spec lead, a company that leads and holds most of the IPR, and Sun has a certain set of rights; the people in the EG do not have that many rights to the specification. In a way OSGi is the spec lead for those JSRs, which means that the IPR is more even-handed than it would be in the JCP alone. Now, what is the purpose of the JCP? I don't think it's a design club; I think it is a way to put things together so that you get a family of standards that allows Java programmers to write software efficiently and effectively. And that's exactly what OSGi is providing. If you look at the standard as it is today, the specification document is very well written, it's very thorough; I haven't heard anything bad about it so far. The JCP gets a high quality document outlining how it should be done, and it fits in the overall family of Java specifications. Isn't that what we want from the JCP? It's not supposed to be a design club, is it? I think it is ill suited to be a design club.

I think you already indicated it: they are extremely complementary. It started about a year ago, when I was contacted by Adrian Colyer, who told me that they had a lot of requests from their customers to support OSGi in their Spring platform. The first time we talked it was really "we have to support OSGi just like any of the other things we are supporting". And it has been really heartwarming to see what happened over the last year, because they started to put Spring on OSGi and discovered that it had all these things that Spring itself had stayed out of: class loading, dynamicity in the bean layer, in a way. Putting Spring on it was like a dream, because it looked like they were made for each other; there are still a lot of small details to iron out, but it fits like a glove. In Spring you connect beans to beans: you create beans and inject them into other beans, and you put the configuration of an application into your Spring configuration file -- that's where you configure how things fit together, how you wire them up. They had to do very little, basically link the service layer of OSGi into the bean layer, so you have an OSGi service as a bean that you can then inject into the other beans.
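In the Spring-OSGi work (later known as Spring Dynamic Modules), that linking is done in the Spring configuration file. Roughly like the sketch below, with invented package and bean names; the exact schema details may differ between versions:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/osgi
         http://www.springframework.org/schema/osgi/spring-osgi.xsd">

  <!-- Pull an OSGi service into the application context as an ordinary bean -->
  <osgi:reference id="light" interface="org.example.light.Light"/>

  <!-- Inject it like any other bean -->
  <bean id="controller" class="org.example.control.LightController">
    <property name="light" ref="light"/>
  </bean>

  <!-- Publish a bean back into the OSGi service registry -->
  <osgi:service ref="controller" interface="org.example.control.Controller"/>
</beans>
```

The bean code itself stays a POJO; only the configuration knows that some of its collaborators come from, or go to, the OSGi service registry.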

The dynamicity is a bit of a problem, because not all code can handle that; that's actually the biggest discussion in that group at the moment. But they are really complementary, and suddenly Spring had this nice delivery format: they could put all the different things that they had in bundles. It is a marriage made in heaven; they are very complementary, and together they form a much more powerful model. Also from an OSGi point of view we are very enthusiastic about Spring, because it adds a layer that we have never been that strong in. In OSGi systems, most of the time you do the configuration through the set of bundles that you install: you install these bundles, they discover each other automatically and bind to each other automatically, and that goes OK to a certain extent, but with more complicated applications you really need to say "that one needs to be coupled to that one". I think Spring is really providing that layer for us. It's going to be interesting in R5, because we have declarative services, which are a mini-Spring, very much towards the injection and the low-level part of what Spring is doing, and we have to see how they fit together and which party should do what. Interestingly, Interface21 has joined the OSGi Alliance and is now working on writing an OSGi specification for this work. So the Spring-OSGi combination will become an OSGi specification in R5, I think.

R5 will probably be one and a half to two years away. We've just started gathering the requirements: we had a workshop in Dublin, we will have a meeting in May, and the RFPs (requests for proposal), the requirement documents, are being written at this moment. It will take some time before they are specs, so one and a half to two years. But way before that, the Spring-OSGi work is already there, so anybody who wants to can play with it, and I know people that already use it in systems. You can download it and play with it, and I think Interface21 wants to put the JAR files in release 2.1, but you have to talk to them about their release schedule. So that should not be that far off.

Yes. We have been discussing this for quite some time now, and some of those ideas are gelling a bit more. One of the board members, Rob van den Berg from Siemens VDO in Holland, started a work group called Universal OSGi, which is supposed to bring the advantages of OSGi to other languages. Of course you can already run any VM-based language on OSGi; there's nothing special about that -- you can do it in Scala, you can do it in PHP, and I made a demo where they work together on the OSGi platform. The key question is: outside the VM, how are you going to communicate? If you look at the architecture of OSGi at the service layer, a service is just a normal Java object that you register under an interface -- you say "it's published under this interface" -- and other people can find it and talk to it. That model is really suitable to be addressed from the outside as well. Many people already use it to distribute services over the Internet, and it wouldn't be that hard to also distribute them within the same process, to applications written in other languages, or to another process on the same machine.

This is all very vague and preliminary, but we are really discussing it and looking for people to work on it: a way to bridge OSGi over the service layer. That means you would deploy your DLLs or shared libraries or executables as an OSGi bundle, just a JAR file with some extra metadata, on the OSGi framework, which would be the Java part. You deploy them, they are recognized as native code, they are installed as an executable or a shared library and linked appropriately, and when they start they are able to find each other and communicate through the service registry. It would look like you have multiple language support, but built from the same basic ingredients. And the nice thing is that your management system, which has always been very important for OSGi, remains the same: you deploy bundles, you start them, you stop them, you update them -- you have the same primitives -- and in fact it will be native code that is deployed.
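The existing manifest vocabulary already points in this direction: the Bundle-NativeCode header in the core specification declares platform-specific native libraries carried inside a bundle JAR. A hypothetical example with invented file and package names (Universal OSGi would extend this style of declarative metadata to whole executables, which is beyond what this header does today):

```
Bundle-SymbolicName: org.example.lightdriver
Bundle-Version: 1.0.0
Bundle-NativeCode: lib/linux-x86/liblight.so; osname=Linux; processor=x86,
 lib/win32-x86/light.dll; osname=WindowsXP; processor=x86
```

The framework selects the matching clause for the host platform at resolve time, so the same management primitives (install, start, stop, update) apply to bundles that carry native code.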

I think there is no clear answer. If you look at WebSphere, for example, from what I have seen of that architecture, OSGi is at the bottom; they don't expose the OSGi APIs to the application programmers at the moment. JBoss is doing it the other way around: they have the microcontainer, and that has support for the OSGi metadata; they support OSGi so that you can deploy bundles as applications. You also have the servlet bridge, where people have put together an OSGi framework that you deploy as a WAR file on a J2EE system, so you run OSGi inside a J2EE system. It's interesting to see that you can actually do that with the technology -- deploy a complete OSGi system inside a J2EE system -- which is tricky because there are quite a few singletons in the Java VM. We solved the URL singleton, for example, but JNDI has a singleton, which we will probably work on in the future as well. The technology is really flexible, so you can look at how you want to deploy it. If you just see it as something to clean up your application, but you want to deploy it on a J2EE system, you can do that. If you build infrastructure and want to expose the OSGi APIs, you can do that as well. It's a bit of a non-answer, but it is flexible and it depends. I don't think the technology itself puts a lot of constraints on you; on the contrary, things will work together better than if you try to solve it piece by piece with proprietary technology.

I think the choice that Eclipse made for OSGi was really interesting; I was part of that team. Jeff McAffer contacted me 3 or 4 years ago and said they were investigating a new runtime for Eclipse. We spent about a year talking about it, investigating JMX and all kinds of other subsystems, and in the end it became OSGi. That was of course a big thing, because suddenly it was on the disk of a million developers -- they didn't know it, but at least it was there. Having it spread so widely, in a program that is so big and extensive and used in so many different places and so many different ways, made it visible to a lot of people. I think that was one of the biggest things that happened to OSGi: suddenly this obscure standard for home automation became visible in the enterprise world, and that attracted a lot of attention. That was the tipping point for OSGi. With Spring-OSGi we see a similar thing: a lot of people are building enterprise applications with Spring, and they say "hey, we want to have those goodies as well".

We developed the mobile specification, which is inside the JCP as JSR 232, and it is currently getting some very interesting application in Sprint's systems. Sprint is rolling out a Wi-Fi network in the USA and has standardized on an OSGi layer as the application layer for that network, which means they will likely require all their phone manufacturers to support OSGi on their devices. Currently it's Nokia that has a commercial product they can ship the OSGi framework on, but this means that a lot of other phone manufacturers will also start shipping OSGi on their phones. If you take one step back and think about it, that is kind of fantastic: you can write functionality that runs on your back-end server, but you can also move this functionality seamlessly to your phone. For enterprise applications that become more and more intertwined with the client applications, it's really nice -- the Nokia phone runs eRCP (embedded Rich Client Platform), among other things -- and you can make a bundle that you move between the client and the server. So you don't have to make the choice a priori of where you put functionality; you can just write components that work in an OSGi environment. Instead of doing a remote procedure call, you can actually move some of the business logic to your client, or the other way around, whatever you want. And because of the remote management you can actually manage that stuff as well; you're not forced to do all that moving yourself. I think that's the real convergence happening in the market at the moment: your architecture becomes a lot more fluid by not having to decide a priori where you put your code, so you get more flexibility in the end. And you can put code where it is best suited without creating a huge deployment nightmare, because we have remote management.