About this blog

This blog features updates, opinions, and technical notes from Caucho engineers about Caucho products, the enterprise Java industry, and PHP.
Caucho Technology is the creator of the Resin Application Server and the Quercus PHP in Java engine. A leader in Java performance since 1998, Caucho is a Sun JavaEE licensee with over 9000 customers worldwide.

Archive for the ‘Uncategorized’ Category

If you recall, the first part of this series talked about running faster, developing faster, and spending less on hardware. The second part covered how this new project, ProjectX, enables faster development of services (SOA 2.0+, iSO2). In this post, we are going to focus on the persistence model.

Imagine a service that would otherwise need to scale horizontally to tens or even hundreds of machines now being able to handle 10x the load from a single machine (two or three machines if your service requires high availability, i.e., replication).

Reminder: ProjectX is a placeholder. We are still looking for a name. Feel free to send ideas for a name to sales@caucho.com.

Hardware! What is Hardware?

Hardware costs in the days of virtualization are a tricky subject. Hardware might just be EC2 instances for you. There is a lot of merit in horizontal scaling and cloud computing. But consider this: if you need 100 EC2 instances to provide the same SLA as 10 EC2 instances, then the cost of your “hardware” is 10x. There are many services out there that could scale a lot higher on a lot fewer instances using ProjectX. This is scaling without all of the headaches of database mapping, cache coherency, split brain, and the like.

The hardware is just part of the cost associated with the traditional approach to service development. The other cost is development time! That development time can take a lot of forms including database mapping, hiring DBAs, buying expensive DataGrids (and the consultants who can code them/manage them) and spending cycles fixing cache coherency issues – to name just a few. Simple services in the traditional model can be very expensive. We covered this quite a bit in the first and second segments of this blog series. These are very real costs.

The fact is, many services deal with scalability issues through caching and massive horizontal scale out. This makes even simple services very expensive to produce. Even in ProjectX there is room for horizontal scale out and caching, but it is not a foregone conclusion. Services are inherently stateful and own the data they are operating on. ProjectX does not preclude horizontal scaling.

If you read through the white papers of LMAX and commentary of the LMAX approach, you would see how they tackled this issue. In a nutshell, they replaced expensive hardware and software with commodity servers. As a result, they used a fraction of the servers and incurred a fraction of the cost to handle 60x to 90x the load. This is the potential of ProjectX.

You don’t have to learn a new way of developing. You can develop with POJOs, plain old Java objects, and get phenomenal results.

Core ProjectX Tech

At its core, ProjectX is a non-blocking system. It uses patterns similar to the Disruptor ring buffer internally. In fact, Resin 4, an application server from Caucho®, uses the same set of patterns. These patterns allow Resin 4 to serve pages faster than NginX and Apache httpd. In a very real sense, ProjectX is a way to expose this style of non-blocking development – the core of Resin – to the world of developers at large. Now you can write Java services that outperform services written in C.
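To give a feel for the pattern (this is illustrative only, not Resin or ProjectX internals), a minimal single-producer, single-consumer ring buffer looks like this:

```java
// Minimal single-producer/single-consumer ring buffer, illustrating the
// pattern only -- not actual Resin or ProjectX internals.
import java.util.concurrent.atomic.AtomicLong;

public class RingBuffer {
    private final Object[] slots;
    private final int mask;                 // capacity must be a power of two
    private final AtomicLong head = new AtomicLong(); // next slot to read
    private final AtomicLong tail = new AtomicLong(); // next slot to write

    public RingBuffer(int capacity) {
        if (Integer.bitCount(capacity) != 1)
            throw new IllegalArgumentException("capacity must be a power of two");
        slots = new Object[capacity];
        mask = capacity - 1;
    }

    /** Returns false instead of blocking when the buffer is full. */
    public boolean offer(Object item) {
        long t = tail.get();
        if (t - head.get() == slots.length) return false;  // full
        slots[(int) (t & mask)] = item;
        tail.set(t + 1);                                   // publish
        return true;
    }

    /** Returns null when the buffer is empty. */
    public Object poll() {
        long h = head.get();
        if (h == tail.get()) return null;                  // empty
        Object item = slots[(int) (h & mask)];
        head.set(h + 1);
        return item;
    }
}
```

The power-of-two capacity lets the index wrap with a cheap bit mask instead of a modulo, one of the mechanical-sympathy tricks the Disruptor papers describe.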

Resin 4 is the fastest application server on the planet, and it was built with many of the patterns that you find in ProjectX. Resin 7, which will be a Java EE 7-based application server, was built from the ground up with ProjectX. Resin 4 relies on BAM; ProjectX is the successor to BAM.

In a very real sense, ProjectX is not a technology developed in a vacuum. ProjectX is the technology that we used to build the best set of application services on earth. ProjectX is the fulfillment of a vision that we have worked on and evolved over many years. For example, ProjectX is the DNA of Resin 7 just like BAM was the DNA of Resin 4. The HTTP proxy cache service, the data grid service, the health systems, the JCache implementation, clustering, the fast messaging system, to name a few, were all rebuilt with ProjectX. What we have learned from building Resin 4, BAM and more are in ProjectX as well as Resin 7. ProjectX is a platform for developing services with a long history of engineering excellence.

You Don’t Have to Rip and Tear to Use ProjectX

ProjectX will be a separate project/product from Resin 7. For example, you will not have to switch from using Tomcat to use ProjectX. Just like you did not have to switch from Tomcat if you used Cassandra (written in Java) or MongoDB. You can use ProjectX with PHP, Ruby, RoR, Node.js, JavaScript, Tomcat, Python, Django, to name a few. In the same way you are able to use MongoDB and Redis from those languages and platforms.

Bottom line:

ProjectX allows any developer to tap into this framework to rapidly develop high-speed services.

You can easily access any ProjectX services over HTTP/JSON or WebSocket/JSON using the JAMP wire protocol (or use the open Hessian protocol that has been ported to most languages).

ProjectX Stores Operational Data

ProjectX comes with a lot of the same tools you see with MySQL and MongoDB. You can dump the data to JSON/YAML and you can import the data from JSON/YAML. For some, ProjectX will sit in their architecture where their NoSQL solution and Java REST services sit today. For others, ProjectX will sit where their RDBMS, Memcached, and Java REST services reside. You would expect this from a service engine that promotes developing stateful services.

With ProjectX, your query language is the Java collections API and the new streaming API. You can structure your objects in memory using custom data structures, or use the TreeMaps, TreeSets, HashMaps, and the like that come with Java. Very little, if any, of your code is dependent on ProjectX.
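As an illustration, a "query" in this model is just ordinary collections and streams code over in-memory objects. The Order and OrderService classes here are hypothetical examples, not a ProjectX API:

```java
// Hypothetical in-memory service state: orders keyed by id in a TreeMap.
// "Queries" are plain Java collections and streams -- no SQL, no mapping.
import java.util.List;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class OrderService {
    public record Order(long id, String customer, double total) {}

    private final TreeMap<Long, Order> orders = new TreeMap<>();

    public void add(Order o) { orders.put(o.id(), o); }

    /** All orders for one customer over a minimum total, in id order. */
    public List<Order> bigOrdersFor(String customer, double minTotal) {
        return orders.values().stream()
            .filter(o -> o.customer().equals(customer))
            .filter(o -> o.total() >= minTotal)
            .collect(Collectors.toList());
    }
}
```

Because the TreeMap iterates in key order, the "query" comes back sorted by id with no extra work, and none of this code depends on anything but the JDK.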

Exposing Your Service to the World

If you want to expose your service to the outside world, you merely add a @Remote annotation to your ProjectX object. When you do this, your ProjectX object becomes available over HTTP and WebSocket (objects are serialized with JSON or Hessian).

Your Java code can live in your Java classes. The service layer can be a very thin layer that exposes your Java logic to the ProjectX system.
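To make the thin-service-layer idea concrete, here is a sketch in plain Java. The @Remote annotation below is a placeholder we define ourselves for illustration; the real ProjectX annotation and its package have not been published:

```java
// Sketch of a thin service layer over plain Java logic. The @Remote
// annotation here is a placeholder defined for illustration -- the real
// ProjectX annotation lives in a package that is not public yet.
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GreeterService {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Remote {}   // stand-in for the ProjectX @Remote

    // Business logic stays in an ordinary Java class; the annotation is
    // the only hint that this object would be exposed over HTTP/WebSocket.
    @Remote
    public static class Greeter {
        private final Map<String, Integer> greetCounts = new ConcurrentHashMap<>();

        public String greet(String name) {
            greetCounts.merge(name, 1, Integer::sum);
            return "Hello, " + name + "!";
        }

        public int greetCount(String name) {
            return greetCounts.getOrDefault(name, 0);
        }
    }
}
```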

Storing Your Objects for Crash Recovery

When it is time for ProjectX to flip the log or your input ring buffer is empty, a method annotated with @OnCheckpoint is called. This simply means that the system wants you to sync your current state to permanent storage.

@OnCheckpoint
public boolean checkpoint() {
    save();
    return true;
}

In this method you just need to save the state of your service to disk. ProjectX provides a store similar to an LSM-style disk store (LSM is used by BigTable tablets, LevelDB, and similar data storage systems). The store serializes your Java objects to disk using Hessian. Hessian is a binary format similar to Google Protocol Buffers or Thrift, but more like a binary JSON. Hessian allows your objects to add and remove fields more fluently without hitting the serialization problems found with Java Object Serialization.

These properties give ProjectX many of the advantages associated with JSON and NoSQL for storage and object representation. Hessian does not require a strict schema. ProjectX does not require that you use our store. You can use any store you like: a data grid, a database, a flat file, or whatever you choose. But, the store comes with ProjectX and it is fast and replicated with backup and restore features.

The important thing to note is that you do not have to wait until the checkpoint method is called. You can periodically store things to the async store. Even on a commodity server with just one disk, you can get amazing performance, as a modern hard disk can sequentially read and write up to 300 MB per second. Additionally, an SSD can read and write up to 500 MB per second sequentially.
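As an illustration of periodic async persistence (not the ProjectX async store API, which is not shown in this post), a sketch using only JDK classes might look like this:

```java
// Sketch of periodic async persistence using only the JDK. A background
// thread snapshots the service state to a file; the real ProjectX async
// store would replace the file write here.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class CounterService {
    private final AtomicLong count = new AtomicLong();
    private final Path stateFile;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    public CounterService(Path stateFile, long periodMillis) {
        this.stateFile = stateFile;
        scheduler.scheduleAtFixedRate(this::checkpoint,
            periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    public long increment() { return count.incrementAndGet(); }

    /** Snapshot current state; sequential writes are cheap on modern disks. */
    public void checkpoint() {
        try {
            Files.writeString(stateFile, Long.toString(count.get()),
                StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public void shutdown() {
        scheduler.shutdown();
        checkpoint();  // final sync before exit
    }
}
```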

When your service starts up, it just reads its state from disk.

@OnStart
public boolean onStart() {
    read();
    return true;
}

Again you can read from the store or some other mechanism. You could even use plain Java Object InputStream with Java serialization, but there are many advantages to using the provided store for high availability, replication and read performance. There are also many mechanisms in ProjectX that allow you to do these operations asynchronously.
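A minimal sketch of the plain Java serialization option mentioned above – ProjectX is not involved here, and the provided store adds the replication and high availability that this sketch lacks:

```java
// Crash-recovery sketch using plain Java serialization, the fallback the
// post mentions. The provided ProjectX store adds replication and high
// availability on top of this basic save/read idea.
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class SerializableState {
    public static class ServiceState implements Serializable {
        private static final long serialVersionUID = 1L;
        public final Map<String, String> data = new HashMap<>();
    }

    /** Called from a checkpoint-style method: sync state to disk. */
    public static void save(ServiceState state, Path file) throws IOException {
        try (ObjectOutputStream out =
                 new ObjectOutputStream(Files.newOutputStream(file))) {
            out.writeObject(state);
        }
    }

    /** Called from a start-style method: read state back from disk. */
    public static ServiceState read(Path file)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(Files.newInputStream(file))) {
            return (ServiceState) in.readObject();
        }
    }
}
```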

Services in ProjectX can be remoted, but they can also be used internally. There is an API for creating and exposing services in ProjectX. If you are using Java EE and CDI, then it is a simple matter, as all ProjectX services are CDI injectable. If you are using Guice or Spring, there is a programmatic API that would be easy to integrate into those frameworks in order to make ProjectX objects usable there.

ProjectX is truly a services engine. Coding services with ProjectX allows you to readily and easily write scalable services. In an era where service-oriented development is a foregone conclusion, ProjectX is the next generation framework to get things done. ProjectX was developed by the same engineering team behind the fastest application server on the planet – the very one that powers millions of public websites. ProjectX is solid tech. The makers of ProjectX boast years of experience providing services, cloud computing and clustering. This is no new kid on the block.

The Vision of ProjectX

This is not our first time to the dance. We have been doing distributed systems for years. Our clustering has had replication and recovery for over a decade. Millions of websites already employ our software. ProjectX was forged for high-speed, high-volume projects. Its first use was developed with the aim of sending six million mobile messages in one second on EC2.

We have just scratched the surface of what ProjectX can do. ProjectX provides the following:

ProjectX is a services engine for rapidly building distributed systems for mobile and HTML 5 applications. ProjectX is the next evolution in the trend toward non-blocking systems – specifically, non-blocking systems that use principles of mechanical sympathy to optimize applications by writing code that takes advantage of the hardware’s multiple cores. Now, instead of spending millions on hardware and software that scales to tens of thousands of transactions per second, your team can develop software that scales to millions on commodity hardware that costs merely thousands.

Moreover, today your team can use some of the same ideas and patterns that are in LMAX Disruptor and Workday service architectures. In turn, your company can benefit from the ROI of in-memory data to develop faster and write services that are easy to scale.

ProjectX provides the service journal, replication, fast async store, and more. Your in-memory objects are the actual data. You can develop faster without the bottleneck of databases for synchronization. Your databases can go back to being used for reporting and long-term analytics. Your operational data can run as fast as possible and easily meet the demands of SLAs. The in-memory objects are true objects that hold the actual operational data of your service. ProjectX provides storage that happens asynchronously; the ProjectX store is crash recovery for your operational objects.

Stay tuned. Let’s roll up our sleeves. We are going to be posting some more examples and helpful tutorials shortly.

Interested in learning more about ProjectX and/or being part of the early access program? Please contact sales@caucho.com.

In our last segment we introduced the concept of ProjectX, our next-generation model for rapidly developing fast, scalable services that are exposed as REST and WebSocket services. Please remember that ProjectX is a placeholder name and the final name is yet to be determined.

In the same way NoSQL is an alternative way of thinking about data storage, ProjectX is a different way to think about writing services.

When you develop with ProjectX, you develop in a service-first manner, allowing you to spend your time writing objects instead of burning cycles on complex schema design and schema migration. Not to mention you’ll see a significant decrease in cache coherency issues.

How does it work?

If you recall, objects are data and logic. Your data is in your objects, and the objects that you have in memory have the data that you need. ProjectX just makes sure that those objects are backed up to disk. This allows you to write services that are in Java and only Java.

ProjectX and Non-Blocking RPC Services

Often when people think about services, they think about blocking RPC services like REST. ProjectX allows you to easily develop non-blocking RPC services, as well as allowing you to register callbacks and/or use one-way method calls. ProjectX allows these services to be consumed over HTTP/REST/JSON or WebSocket/JSON.

ProjectX has an open wire protocol, based on JSON, called JAMP that is easy to implement. You can call into any ProjectX service from any language. All that is required is for that language to have HTTP support and JSON support (so Ruby, Go, C#, Java, Python, JavaScript). If the language also has WebSocket support, then the conversation can be bi-directional and very efficient. (We also have HAMP, which is Hessian based. Hessian is a binary protocol which has been ported to many languages including ActionScript, Python, Java, C#, and others.)
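As a rough illustration of why a JSON-based protocol is easy to implement from any language, here is a generic method-call payload builder in Java. Note that this payload shape is invented for illustration and is not the actual JAMP framing; consult the JAMP specification for the real wire format:

```java
// Illustrative only: a generic JSON method-call payload of the kind a
// JAMP-style client might send over HTTP or WebSocket. This is NOT the
// actual JAMP framing -- see the JAMP specification for the real format.
public class JsonCall {
    /** Build a simple {"service":...,"method":...,"args":[...]} payload. */
    public static String payload(String service, String method, String... args) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"service\":\"").append(service)
          .append("\",\"method\":\"").append(method)
          .append("\",\"args\":[");
        for (int i = 0; i < args.length; i++) {
            if (i > 0) sb.append(',');
            sb.append('"').append(args[i]).append('"');
        }
        return sb.append("]}").toString();
    }
}
```

Any language that can build a string like this and POST it over HTTP (or push it down a WebSocket) can act as a client, which is the point the paragraph above makes.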

Object-Oriented Development Is Logic and Data Together

One of the original complaints about J2EE was that it pulled developers away from the object-oriented model. You ended up writing procedural code, and all of the data lived in the database. Services were stateless: they held the logic but maintained no state of their own, relying on third-party frameworks to map objects to a relational database.

Many frameworks like Spring, CDI, Hibernate and Guice, mitigated some of the early issues with J2EE and its lack of OO. Also, the NoSQL movement made the mapping of objects to database easier. But, inherently the majority of modern service development still typically splits the data into a database or some sort of data store, which separates the data from the service. In this common model, services do not own their data.

When you start using caches and DataGrids to speed up storage and retrieval from databases, you are trading the problem of latency for a new, more complex set of problems, including cache coherency issues and split brain. These are not easy issues to handle.

In the ProjectX model, services own their data and the objects are in memory. ProjectX enables service developers to back those objects to disk in the most efficient manner possible. The database is no longer used merely for data safety. You can still use a database for reporting, but now your operational data can exist purely in Java.

Did you know that a modern commodity hard disk can read/write up to 300 MB per second? If you are using SSD, sequential reads reach up to 500 MB per second. Phase-change memory and advances in Flash mean that these speeds will increase. If you add RAID level 0 support, this speed can increase by several multiples. ProjectX journaling and the data store take advantage of sequential writes to ensure data safety at top speeds. More details about this are in subsequent posts.

Using ProjectX is as easy as just using a few simple annotations. Your code will look like code written for a typical service in EJB 3, Spring or Guice. But with ProjectX you can avoid the common mistake of using the database as a synchronization mechanism.

Using the database as a synchronization mechanism is an anti-pattern that causes many performance and scalability problems in service development. Rest assured, ProjectX is a Java POJO approach to development. Your code can be completely annotation free, or, if you choose to use Java EE/CDI, you can use a few annotations for productivity. Your code base has very little to no direct tie to ProjectX. It is just Java. We don’t try to tie you to our platform.

The Real Expense of Abusing Caching

Using ProjectX also enables you to avoid the anti-pattern of duplicating all the data in the database – and every possible query of that data – in a data grid or data cache. By using a cache and adding a lot of complexity to your application, you may incur problems of cache coherency and split brain. If horizontal scaling and caching are the only hammer you have, every large-scale system looks like a nail. ProjectX gives you another tool.

It is very easy and, in my experience, very common to paint a project into a corner by abusing caching. Caching is the equivalent of applying a quick and dirty (as in dirty read) Band-Aid solution that can cause many operational and development issues down the road. Many have worked on projects that had 80 GB of data, yet the same data existed in so many cache layers that it consumed 12 TB of RAM. There are projects that solve all these issues with more horizontal scale out and more caching, and these projects can quickly become a vast waste of hardware and developer productivity – not to mention the near impossibility of properly invalidating a cache. Misapplying horizontal scale out and caching has wasted countless developer and operations-engineering years.

Using ProjectX does not preclude horizontal scale out and caching. But when you have services that are up to 10x – or as much as 100x – more efficient and don’t require a cache for all of their data, you reduce cache coherency issues and you need fewer server instances. It would not be uncommon to replace 10 to 100 servers running services written the traditional way with six to 12 servers using ProjectX. The ProjectX approach should also be 2x to 10x faster than normal service development (database, cache, Java REST lib, JPA, local cache, and distributed cache). Also, since you have fewer servers and fewer things to worry about (like cache coherency issues, which are some of the least fun things in the world to chase down and debug), your operations costs should be 2x to 10x lower as well.

ProjectX fully supports horizontal scaling. You can service many more requests/connections from your services. ProjectX is, in fact, a distributed system for service development. More about this will be covered in the next post.

Services Should Own Their Own Data

ProjectX allows the service to own its data, and ProjectX provides a fast storage mechanism for crash recovery. ProjectX allows your objects to be served out of memory.

In the ProjectX approach your operational data is your Java objects.

ProjectX provides journaling, replication and fast persistence. The emphasis is not on the persistence. The persistence is a foregone conclusion managed mostly by ProjectX for data safety. This feature allows you to focus on your business logic and derive real value from your services.

Do you want to focus on enhancing the business value of your service or on managing database mapping and cache coherency issues?

The Real Win: The Ability To Develop Faster and Streamline Your System

Just as NoSQL was built for horizontal scaling but found a home in the hearts of developers who wanted to avoid schema migration and wanted more productive, dynamic schema, ProjectX has big productivity wins as well. You don’t have to be the next Internet sensation to get benefits out of ProjectX. If you want to focus on providing business value instead of feeding complexity then ProjectX is a good fit for you.

We feel that once you start developing services with ProjectX, you will not want to develop them any other way. Instead of dumbing down distributed service development, we put the engineering rigor and computer science back into service development. You get to take full advantage of your distributed system. Ultimately, and most importantly, you get to focus on writing more collaborative, richer applications. Features that were once cost prohibitive, or could never be squeezed into the budget, are now easy to develop. ProjectX is a very practical, user-friendly way to create massively collaborative and rich applications. It makes nearly impossible development easy.

ProjectX makes sense both for enterprise applications and for mobile applications that need to send six million requests per second. ProjectX is simply a more productive way to build services.

Tune in next time when we show you some basic code examples from ProjectX.

The industry is changing at a rapid clip. There is a lot of convergence and it’s a new dawn for software development.

The number of devices that developers have to support has grown enormously – from smart phones, to glasses, to virtual servers. What I want to describe is a way to drastically speed up development time, reduce complexity, and reduce hardware costs.

But first, let’s talk a little bit about the trends in the industry.

The idea of an application server is becoming a thing of the past. Today, most server-side developers develop services – not applications. This is the trend. The new Web is no longer just a servlet engine, a database, and some JSP/HTML/CSS. Today, applications range from mobile apps to rich HTML 5, and the presentation logic is expected to live in the client. Users have come to expect a rich user experience. HTML 5 promises and delivers a very rich environment for writing applications. Companies that embrace this will deliver user-centric GUIs and be more successful than companies that do not. User Experience (UX) is finally the mantra, as it should be.

The Rise of NoSQL

The rise of NoSQL is really about the rise of tools that focus on data safety in contrast to relational databases. The emphasis is on horizontal scaling – potentially millions of clients’ data – and not forcing application data into a relational model. NoSQL, although originally built to support horizontal scaling, has found a home in the hearts of developers who just want to rapidly develop and rapidly iterate on their applications and not be forced to deal with the hassle of constant schema migration. Schema migration is a difficult process to manage and has historically slowed development down to a crawl.

While NoSQL’s claim to fame might be horizontal scaling, a larger selling point has been more dynamic schema. This has driven NoSQL from massively scaled uses to department-level applications that will never use the horizontal scaling features. It just works. It is easier than dragging a schema along, and it means fewer DBAs and Ops staff – and less trouble. Many confuse NoSQL with BigData. There are NoSQL solutions that can be used in BigData, but NoSQL is more about scalable, operational data.

The Rise of REST Services

In days gone by, SOA was a way to break up an application into reusable services. Against this backdrop, we see the rise of REST development with Java. More people are writing services and using service-oriented development, and more people should be talking about it. SOAP and XML are used less while REST and JSON are used more. The days of SOA and belly button lint inspection are gone. The days of writing services have just begun. Service-oriented development is a foregone conclusion. It has become synonymous with software development.

In the era of HTML 5 and mobile applications, the weight of the presentation logic has shifted back to the client. The service-oriented approach has been reborn and repurposed. HTML 5 apps and mobile apps are calling REST services. REST, along with JSON, has become the conduit of communication for mobile applications. REST with JSON is the common language of the Web. If you are doing REST, you are five times more likely to write that REST service in Java than any other language.

WebSockets – The New Communication Backbone

WebSockets are showing up in more places as well. WebSocket is the next-generation way to develop services for mobile and HTML 5 applications. WebSocket is part of HTML 5 and provides faster bi-directional communication without the request/response latency of REST over HTTP. HTML 5 is synonymous with WebSocket and IndexedDB. WebSocket is just baked in. Like REST, Java will dominate this space as well.

In-Memory Data – The Golden Goose

To handle load and develop more interactive applications, there has been a trend toward non-blocking systems that use principles of mechanical sympathy to optimize applications. This is done by writing code that takes advantage of the hardware’s multiple cores effectively. Now, instead of spending millions on hardware and software that scales to tens of thousands of transactions per second, teams have developed software that scales to millions on commodity hardware that costs only thousands.

From LMAX Disruptor to Workday, companies are finding that in-memory data is the fastest way to develop, deliver and scale modern applications. The basic idea is that service requests go through a journal and are replicated before the service is called. The data that is in-memory is the actual operational data. Storage and replication are now background tasks that occur in parallel with the service as much as possible. Storage is simply crash recovery. In-memory data is the actual data. Unlike the NoSQL model, your objects are your data and there is no database per se. Combining logic and data has another name: Object-oriented development.
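The journal-then-apply idea can be sketched in a few lines of Java. This is a toy, in-memory journal; a real system persists and replicates the journal before (or while) invoking the service:

```java
// Toy sketch of the journal pattern: requests are appended to a journal
// before the service applies them, so state can be rebuilt by replay.
// A real implementation persists and replicates the journal first.
import java.util.ArrayList;
import java.util.List;

public class JournaledCounter {
    private final List<Long> journal = new ArrayList<>();  // the "log"
    private long total;                                    // in-memory state

    /** Journal first, then apply -- the write path. */
    public long add(long amount) {
        journal.add(amount);       // durable + replicated in a real system
        total += amount;
        return total;
    }

    public long total() { return total; }

    /** Crash recovery: rebuild state by replaying the journal. */
    public static JournaledCounter replay(List<Long> journal) {
        JournaledCounter c = new JournaledCounter();
        for (long amount : journal) c.add(amount);
        return c;
    }

    public List<Long> journal() { return journal; }
}
```

Because the in-memory object is the data, storage reduces to appending to the log and, on restart, replaying it.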

This allows developers to focus on writing code and not worry about persistence or mapping as much. This is the next logical step in the NoSQL trend. This goes beyond NoSQL to no database, or rather no databases in the operational path. Services own their operational data. Think: “No more mapping. No more cache coherency issues. No more schema migrations. Sounds pretty good, doesn’t it?”

This is not to say you don’t have databases. You just don’t need databases to ensure that your operational data is safe. You can use databases for what they were meant for – reporting and offline analytics. The database no longer needs to be in the operational path. You no longer have to use your database for synchronization and turn it into a performance choke point.

This approach allows faster development time, as no database mapping or schema migration is required. You get the same data safety as you would get from a NoSQL or RDBMS, perhaps even more since the cost of data safety is less. Also since traditional architecture usually requires a lot of caching, it must deal with cache coherency issues. This new approach avoids that by allowing the services to own the operational data.

This allows companies to rapidly iterate, come up with their minimum viable applications, and focus on providing an awesome user experience rather than spending millions on infrastructure and slowing the development process to a crawl with schema migrations, cache coherency issues, and the like. This approach lets companies adopt the lean startup philosophy through simpler, more rapid iterations. As far as scalability goes, the same hardware can handle 10x to 100x the number of requests, so you have less vertical scaling to manage. To put it simply: do more with less.

A Service Engine Ready for the Masses!

Well, what about the programming model? Is this in reach of the everyday developer? How can I use this approach?

Enter stage left, ProjectX (our code name). ProjectX has its DNA in JAX-RS, EJB, Spring, etc. It is designed around the way that Java developers write services. It provides the benefits of this new model in a programming model that is familiar and friendly to developers. Instead of learning a new programming model or language, you program in Java.

I’m happy to announce, with the release of Resin 4.0.35, Resin will compile and run on a Raspberry Pi!

For the uninitiated, the Raspberry Pi is a credit-card-sized single-board computer, very popular amongst tinkerers and hobbyists for its ease of use and low cost.

We’ve made a number of changes in recent releases to allow Resin to run on a Raspberry Pi. These included both Java fixes and compilation of Resin’s native libraries. Both Resin Pro and the Resin GPL Servlet Container will run with native optimizations enabled on the Raspberry Pi.

When attempting to diagnose application errors or performance issues, the single best tool Resin makes available is the Resin PDF Health Report. These reports are also the first things we request when addressing customer support questions.

Resin can generate two slightly different PDF health reports: Snapshot and Watchdog reports. A Snapshot Report captures a “snapshot” of Resin at the current point in time. A Watchdog Report aggregates as much information as is available about Resin at a previous point in time. Watchdog reports can also be thought of as “post-mortem” or “restart” reports, as they are usually generated immediately after an unexpected server restart.

Watchdog reports are usually generated automatically after any unexpected server restart. Check your logs directory for Watchdog-*.pdf files. However, you can always regenerate a Watchdog report from Resin-Admin on the Watchdog page:

Resin has the very convenient ability to import configuration from an HTTP URL. This feature, in combination with clever use of environment variables, can be a huge help in avoiding the issues caused by maintaining local copies of configuration files in a large environment. In this Wiki article I set up a webserver as a centralized configuration repository and show how to modify your local Resin configuration so that the same file can be used by any Resin instance in any environment.

We have recently run some performance benchmarks comparing Resin 4.0.29 versus NginX 1.2.0. These benchmarks show that Java-based Resin Pro matches or exceeds C-based NginX’s throughput.

Summary: Using industry-standard tools and methodology, the Resin Pro web server was put to the test against Nginx, a popular web server with a reputation for efficiency and performance. Nginx is known to be faster and more reliable under load than the popular Apache HTTPD. Benchmark tests between Resin and Nginx yielded competitive figures, with Resin leading with fewer errors and faster response times. In numerous and varied tests, Resin handled 20% to 25% more load while still outperforming Nginx. In particular, Resin was able to sustain fast response times under extremely heavy load while Nginx performance degraded. (See related press release.)

Scott and I just finished work on making sure the Seam-Booking example works on Resin. I posted a tutorial on deploying seam-booking in Resin at the Resin Wiki: Seam On Resin. It requires Resin 4.0.29, which is due in about a week.

The short answer is no. The longer answer is that Resin supports Hessian for remoting. And, in about 20 lines of code, you can expose all @Stateless/@Remote beans as remote services over the Hessian protocol using CDI and Servlet 3.0, which are part of Resin and part of the Java EE Web Profile (Resin 4 is a Java EE Web Profile certified application server).

Hessian (now Hessian 2) predates many other forms of remoting and is a wicked fast binary protocol (faster than CORBA, RMI, SOAP, XML-RPC, etc.; see the benchmarks at http://daniel.gredler.net/2008/01/07/java-remoting-protocol-benchmarks/). You could think of Hessian as a high-performance binary JSON. Hessian has been ported to many languages. Hessian is both a remoting framework and a flexible Java serialization framework.

You can expose any bean as a Hessian remote bean quite easily. Hessian has been around for 10 years and is very solid. (Both Hessian and Resin are developed and maintained by Caucho.)

The Resin 4 documentation does not have Hessian documentation yet, but Hessian usage has not changed in years. You can find a good tutorial on getting started with Hessian in the Resin 3 documentation (http://www.caucho.com/resin-3.0/protocols/hessian.xtp). I’ve tried these tutorial steps in Resin 4 and the tutorial works as advertised.

Resin 4 is Java EE Web Profile certified; as such, it does not support CORBA, EJB remoting, etc. However, Resin does support Contexts and Dependency Injection (CDI), which allows you to easily find beans with certain annotations. What follows is a simple example that finds all @Stateless beans that have @Remote interfaces and automatically exposes those beans as remote Hessian objects.