I have been running Tomcat 6.0.24 in a NetBeans development environment for months. Yesterday I started getting a "java.lang.NoClassDefFoundError: org/jboss/logging/Logger" message when I try to start Tomcat from the NetBeans Services tab. I'm not running JBoss, and to my knowledge I haven't made any changes to my computer. The server output is:

The idea was to provide multiple web applications running inside a Tomcat instance with configuration properties from config files located outside the applications' WAR files. I'm currently trying to provide these configuration properties through a global naming resource (defined in server.xml) that is consumed by the applications, in order to keep the configuration container-independent and to avoid direct file access from a web application. So far almost everything works perfectly, except that Tomcat apparently caches resources that are looked up via JNDI.

There is a way to get this to happen. It is not a cache as such; rather, the resource is either singleton or non-singleton.

Take the default example that ships with Tomcat:

    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />

With this configuration, getObjectInstance will only be called once. However, there is an attribute you can set to make sure that getObjectInstance is called every time a Context.lookup happens: singleton="false".

    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml"
              singleton="false" />

With singleton="false" in place, your own code in getObjectInstance decides on each lookup whether to return a new object or a cached one.
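As a minimal sketch of that idea, here is a hypothetical custom factory (the class name ConfigFactory, the TTL value, and the loadConfig helper are my own illustrations, not Tomcat API) that implements the standard javax.naming.spi.ObjectFactory contract. With singleton="false", Tomcat invokes getObjectInstance on every Context.lookup, so the factory itself can choose to refresh the object after a time-to-live expires:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.spi.ObjectFactory;

// Hypothetical factory: with singleton="false", Tomcat calls
// getObjectInstance on every Context.lookup, so this code decides
// whether to hand back the cached object or rebuild it.
public class ConfigFactory implements ObjectFactory {
    private static final long TTL_MS = 30_000; // refresh interval (assumption)

    private Object cached;   // last object we built
    private long loadedAt;   // when we built it

    @Override
    public synchronized Object getObjectInstance(Object obj, Name name,
            Context nameCtx, Hashtable<?, ?> environment) {
        long now = System.currentTimeMillis();
        if (cached == null || now - loadedAt > TTL_MS) {
            cached = loadConfig(); // e.g. re-read the external config file
            loadedAt = now;
        }
        return cached;
    }

    // Stand-in for reading the external configuration file.
    protected Object loadConfig() {
        return new java.util.Properties();
    }
}
```

The factory would be referenced from the Resource element's factory attribute in place of MemoryUserDatabaseFactory.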

I have observed that the Tomcat 7 process memory (private bytes) was initially 1.2 GB during startup and increased to 3.5 GB after running a 100-user test for 4 hours, while my server has only 4 GB of RAM. The private bytes were not released even after stopping the test. Could you please suggest any configuration changes that might help, or analyze the situation and provide your suggestions/solutions?

Lacking more specific information about the behavior you are seeing and about the environment you are using to run your tests, and since this is a testing environment, my suggestion would be to run your tests with a profiler attached to Tomcat (YourKit is an excellent profiler). The profiler will let you look for memory problems in your application.

That's not to say there is definitely a problem here. It is entirely possible that the heap could grow from 1.2 GB to 3.5 GB legitimately; it depends on your JVM options and the memory demands of your application.
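For reference, the usual place to bound Tomcat's heap is CATALINA_OPTS in bin/setenv.sh, which catalina.sh picks up on startup. The sizes below are placeholders only; they must be tuned against what the profiler shows for your application:

```shell
# bin/setenv.sh -- create this file if it does not exist.
# Example sizes only: cap the heap at 2 GB on a 4 GB machine,
# leaving headroom for the JVM's native (non-heap) memory.
export CATALINA_OPTS="-Xms1024m -Xmx2048m -XX:MaxPermSize=256m"
```

Note that private bytes measure the whole process, so native memory (threads, PermGen, direct buffers) can grow beyond -Xmx; capping the heap narrows down where the growth is coming from.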

With the Apache Tomcat 7.0.27 release, the Apache Tomcat team introduced a WebSocket implementation. In a previous post, we took a look at what the WebSocket implementation means, including the benefits and limitations it presents. Today, we will discuss specifically how WebSocket is implemented in Apache Tomcat 7.

Since WebSocket is a protocol carried over TCP after an initial HTTP handshake, you could effectively implement WebSocket on top of Tomcat's Comet implementation. A back port to Tomcat 6 has been suggested that does exactly that with very minor changes.
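To make that initial handshake concrete, here is a small self-contained sketch (the class name WsAccept is my own) of the server-side computation RFC 6455 mandates: the Sec-WebSocket-Accept response header is the Base64-encoded SHA-1 digest of the client's Sec-WebSocket-Key concatenated with a fixed GUID.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class WsAccept {
    // Fixed GUID defined by RFC 6455 for the opening handshake.
    private static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // Compute the Sec-WebSocket-Accept value for a given Sec-WebSocket-Key.
    static String accept(String secWebSocketKey) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] hash = sha1.digest(
                (secWebSocketKey + GUID).getBytes(StandardCharsets.US_ASCII));
        return Base64.getEncoder().encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        // Sample key from RFC 6455's own handshake example.
        System.out.println(accept("dGhlIHNhbXBsZSBub25jZQ=="));
        // prints s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    }
}
```

Once the server returns this header with a 101 Switching Protocols status, the HTTP exchange is over and both ends speak WebSocket frames over the same TCP connection.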

The Apache Tomcat team, however, decided to go with a more substantial implementation, with changes to the core of Tomcat's network and protocol code. The reason was memory use and scalability: if Tomcat can recycle the HttpServletRequest/Response objects after the initial handshake, each WebSocket connection takes up less memory in the Java heap. It also opens up the Tomcat container to other future protocols that use the HTTP Upgrade feature.

From an API standpoint, the WebSocket implementation is fairly straightforward. You really can only do two things:

My recommendation is that you run without the security manager. It is inherently complex to configure correctly, and it's not where you should start as a beginner.

Before you even consider running with the security manager turned on, evaluate whether you even need it. In my experience, 9 out of 10 companies do not, because they secure the environments where Tomcat runs instead. In shared environments with untrusted parties, such as a shared hosting environment, the security manager comes in handy, as it can protect against untrusted code.

So, to get started, turn off the security manager and get your application fully working. Note that the security manager is not a feature of Apache Tomcat; it is part of the Java Runtime Environment.

With the 7.0.27 release, the Apache Tomcat team introduced a WebSocket implementation. WebSocket has received a lot of hype and has been much anticipated by Tomcat users. Let's take a quick look at what WebSockets are, what benefits and limitations they have, and how they are implemented in Apache Tomcat 7.

What is a WebSocket?

WebSocket is considered the next step in the evolution of web communication. Over time, communication has evolved in steps to reduce the time and data throughput needed to update a user's browser. The evolution has looked a little like this:

Entire page reloads

Component reloads using AJAX processing

Comet communication

Long poll: similar to AJAX, but not holding a thread on the server

Bi-directional: two-way communication over the same TCP connection

Each of these steps had its benefits and challenges. Apache Tomcat 6 implements bi-directional communication over HTTP using its Comet processor. This implementation allowed for asynchronous, event-driven request processing as well as bi-directional communication, but it had a few limitations.

This release includes significant new features as well as a number of bug fixes compared to version 7.0.26. The notable changes include:

Support for the WebSocket protocol (RFC6455). Both streaming and message based APIs are provided and the implementation currently fully passes the Autobahn test suite. Also included are several examples.

A number of fixes to the HTTP NIO connector, particularly when using Comet.

Improvements to the memory leak prevention and detection code so that it works well with JVMs from IBM.

Working software is the primary measure of progress for software development teams. This is one of the principles of the Agile Manifesto and has led agile software teams to focus on implementing the most important features of a system early and efficiently. These teams usually provide frequent deployments of the software in order to receive feature validation from the business and to show project progress. The benefits are quick and frequent feedback for the developers and congruous applications for the business.

The practice of automated continuous deployment ensures that the latest checked-in code is deployed, running, and accessible to various roles within an organization. Project managers have a place to check on project progress, testers have a view into the latest builds, developers can see their modules working with the modules from other team members, and stakeholders can see how their requirements have been translated into working software. Tomcat and tc server easily integrate with continuous integration servers to allow agile teams to realize continuous deployment while utilizing a lean application server (another practice of agile teams). You can start practicing continuous deployment very quickly using Tomcat or tc server, Jenkins, and your source control system of choice.
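As one illustration of how a CI job can push a freshly built WAR to a running Tomcat 7 instance, the manager web application exposes a text interface suitable for scripting. The host, credentials, and WAR path below are placeholders for your own environment:

```shell
# Deploy (or redeploy) myapp.war via the Tomcat 7 manager text API.
# Requires a user granted the manager-script role in conf/tomcat-users.xml.
curl -u deployer:secret \
     -T target/myapp.war \
     "http://localhost:8080/manager/text/deploy?path=/myapp&update=true"
```

A Jenkins job can run this command as a post-build step, so every successful build lands on the server; update=true undeploys any previous version of the application at the same path first.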