
17 February 2015

When Red Hat redesigned JBoss after version 5, one of the key changes was to consolidate the configuration, previously spread across separate files in individual modules, into a single XML file. When you start JBoss by calling standalone.sh, that single XML file holds all the configuration it uses, which makes it much easier to track down any misconfiguration.

Clustering in standalone mode is very straightforward: simply edit the JGroups subsystem in the appropriate configuration file (standalone-ha or standalone-full-ha) as Andy Overton outlined in his previous blog. Provided they share the same configuration, the servers will discover each other and you're done. The downside comes when you have large clusters to manage and need to make the same configuration change in many places!

9 February 2015

Automation is useful in many fields, and installation of software is no different. Once the time has been put in to create an automated installation script, automation can save you a great deal of time by avoiding what can be a repetitive and time-consuming task. In this blog I'll describe how much of the installation process can be automated, and ways to make that automation reusable.

3 February 2015

I know that I said in my previous blog that I was doing a two-part series on how to put together a clustered web application running on WildFly on EC2. Having spent some more time hacking around, I've realised that if I was to dump everything else into one more blog it would be quite long-winded. As a result, this is now part two of three. This blog will be focusing on setting up and using Infinispan.

Recap

In the previous post, we talked about the basics of setting up a WebSocket application running on WildFly and ran through some of the front-end and back-end code for that. We also looked at how to tweak the JGroups configuration for WildFly's clustering on EC2.

Infinispan

Infinispan is a highly scalable and highly available in-memory key/value data store written in Java - and is the open source project behind JBoss Data Grid. Infinispan can be used as a distributed cache in front of an SQL database or as its own NoSQL data grid. When using it as a data grid, it is possible to configure Infinispan to persist data to disk-based data stores in order to ensure that data is not lost.

Infinispan is typically used in two different modes:

1. Library mode is where you use Infinispan within your own application's source code.
2. Client/Server mode is where you have Infinispan running separately from your application as its own server. This is the mode that we will use for our setup.

The Infinispan Server has four endpoints - or protocols - that clients can make use of. The protocols are:

1. HotRod
2. REST
3. Memcached
4. WebSocket

In this case, we will make use of the WebSocket protocol. Given that we have previously looked at setting up a WebSocket server on WildFly, it seems like a natural fit to now look at making use of a WebSocket client in Java to store our data.

Setup changes

In the first blog, I put together a diagram of the architecture set-up that I was intending to use. I did not include anything in there regarding how and where we were going to deal with our storage for the application.

This is an updated revision of the architecture:

In practice, we would expect multiple Infinispan instances running within our architecture to ensure the availability of our data. For this demo however, we will just be using a single server.

WebSocket requests

There isn't a large amount of documentation on making use of the WebSocket server for Infinispan - at least from a Java client - so it was a nice exercise to dig around the Infinispan server codebase and work out what our requests would have to look like.

We need to send a request (as a string) that has a JSON structure with some specific parameters:

The "opCode" parameter tells the server what operation you are attempting to perform - put/get/remove/notify are the options. This is so that the server knows which of its internal handlers to invoke in order to deal with your operation.

The "mime" parameter tells the server the MIME type that you are using. The server cannot deal with "application/json", so I used "text/plain".

The "cacheName" parameter tells the server the name of the cache you want to perform your operation on.

The "key" parameter states the key you wish to perform your operation on.

The "value" parameter is only specified for a put operation and will be the value that you wish to use.

Whenever we send a message to the server, we have to ensure that it follows this structure.
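As a minimal sketch of assembling those request strings (the class and method names here are mine, values are assumed to need no JSON escaping, and the exact wire format should be checked against your Infinispan version):

```java
// Helper for building Infinispan WebSocket request strings. The field names
// (opCode, mime, cacheName, key, value) follow the structure described above.
public class InfinispanRequests {

    public static String put(String cacheName, String key, String value) {
        return "{\"opCode\":\"put\",\"mime\":\"text/plain\""
                + ",\"cacheName\":\"" + cacheName + "\""
                + ",\"key\":\"" + key + "\""
                + ",\"value\":\"" + value + "\"}";
    }

    public static String get(String cacheName, String key) {
        return "{\"opCode\":\"get\",\"mime\":\"text/plain\""
                + ",\"cacheName\":\"" + cacheName + "\""
                + ",\"key\":\"" + key + "\"}";
    }

    public static String remove(String cacheName, String key) {
        return "{\"opCode\":\"remove\",\"mime\":\"text/plain\""
                + ",\"cacheName\":\"" + cacheName + "\""
                + ",\"key\":\"" + key + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(put("default", "user:1", "alice"));
        System.out.println(get("default", "user:1"));
    }
}
```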

InfinispanEndpoint class

In the application that runs on WildFly, we have put together an InfinispanEndpoint class. This class will be the one that deals with communicating with the Infinispan server.

Similar to the @ServerEndpoint annotation used on the server side, we use a @ClientEndpoint annotation for the Java client, applied at the class level. We also have the same method-level annotations available, such as @OnOpen, @OnClose and @OnMessage.

For starters, let's look at how we start the client. In this case, we are instantiating the client from our application code, and we will provide the WebSocket URI to connect to - so we are passing that as a parameter to our constructor.

For the other annotated methods, all that we are doing is logging messages in our application - apart from the onMessage() method, which is where we do a little bit more work. What we have done in addition is set up a separate MessageHandler interface; we can instantiate different instances in our Getter and Storer classes that will then deal with these messages appropriately. For now though, let's look at what we have to do within this endpoint class.

The important method to look at is sendMessage(), as this is the one that sends the message to the server (in testing, the WebSocket wouldn't always be open before we tried to send a message, so we pause for a short time first). The API for doing so is identical to the server-side one, because we are dealing with the same type of Session object!
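A sketch of the shape of such a class (class name and details here are illustrative, not the post's actual code; the real InfinispanEndpoint additionally wires in the MessageHandler described above). Instead of sleeping blindly before the first send, this version uses a CountDownLatch that sendMessage() waits on until @OnOpen has fired:

```java
import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;

// Illustrative JSR 356 client endpoint for talking to Infinispan.
@ClientEndpoint
public class InfinispanEndpointSketch {

    private final CountDownLatch opened = new CountDownLatch(1);
    private volatile Session session;

    public InfinispanEndpointSketch(URI uri) throws Exception {
        // Connects and performs the handshake; @OnOpen fires once complete.
        ContainerProvider.getWebSocketContainer().connectToServer(this, uri);
    }

    @OnOpen
    public void onOpen(Session session) {
        this.session = session;
        opened.countDown();
    }

    @OnMessage
    public void onMessage(String message) {
        // In the real class this hands off to an application MessageHandler.
        System.out.println("Server said: " + message);
    }

    public void sendMessage(String json) throws Exception {
        // Wait (bounded) for the connection to open rather than sleeping.
        if (!opened.await(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("WebSocket not opened in time");
        }
        session.getBasicRemote().sendText(json);
    }
}
```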

I also put together a separate client (running in a main() method) to test this functionality. This client puts a key/value pair, waits ten seconds and then tries to get the same key back. Once it gets the server response it simply dumps that message to the screen - this was also so that I could understand what the server responses looked like. Here is how I am handling the message:

Watching a test run, we can see how the messages that we send to Infinispan are constructed and then sent out, as well as the successful response from the server.

And there we go, we have now built a Java client to connect to the Infinispan WebSocket Server. If you are unclear on how some of the other internal client wiring works, I followed this thread on StackOverflow quite closely for some ideas. It quite neatly explains how to set things up.

In the next part, we will look at putting all of these parts together in a full application running on Amazon EC2.

23 January 2015

In this blog post I will provide a brief introduction to JASPIC and then take a walk through setting up a basic demo using JASPIC to secure a simple web application in GlassFish.

What is JASPIC?

JASPIC stands for Java Authentication Service Provider Interface for Containers. The original JSR for JASPIC was created back in 2002, but it wasn't completed until 2007, and it wasn't included in Java EE until Java EE 6 in 2009.

JSR 196 defines a standard service-provider interface (SPI) and standardises how an authentication module is integrated into a Java EE container.

It is supported by all the popular web containers and is mandatory for the full Java EE 6 profile.

It provides a message processing model and details a number of interaction points on the client and server.

A compatible web container will use the SPI at these points to delegate the corresponding message security processing to a server authentication module (SAM).

Walk-through

I will be using the following software whilst doing this walk-through. If you are using different versions then you may see different results.

There is one additional (rather ugly) step we need to take to make our app work. In order for GlassFish to accept the roles that our authentication module puts into the JAAS Subject, we have to map them to groups.

validateRequest - This is the main method of interest. In order to pass in the user and role I have just added them as servlet request parameters for testing purposes. This method extracts those values and then calls authenticateUser.

authenticateUser - NOTE - This method doesn't actually do any authentication! It simply takes the user and group, creates callback classes from them and passes them to the callback handler.
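Sketched in code, the pair of methods looks roughly like this (a simplified fragment, not the post's actual module: the remaining ServerAuthModule methods, error handling and the initialize() call that supplies the handler are omitted):

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.message.AuthStatus;
import javax.security.auth.message.MessageInfo;
import javax.security.auth.message.callback.CallerPrincipalCallback;
import javax.security.auth.message.callback.GroupPrincipalCallback;
import javax.servlet.http.HttpServletRequest;

// Simplified SAM fragment: pull user/group straight from request
// parameters (for testing only!) and hand them to the container's handler.
public class TestServerAuthModuleSketch {

    private CallbackHandler handler; // supplied by the container via initialize()

    public AuthStatus validateRequest(MessageInfo messageInfo, Subject clientSubject,
                                      Subject serviceSubject) throws Exception {
        HttpServletRequest request =
                (HttpServletRequest) messageInfo.getRequestMessage();
        String user = request.getParameter("user");
        String group = request.getParameter("group");
        authenticateUser(clientSubject, user, group);
        return AuthStatus.SUCCESS;
    }

    private void authenticateUser(Subject clientSubject, String user, String group)
            throws Exception {
        // No real authentication here: just populate the subject via callbacks.
        handler.handle(new Callback[] {
                new CallerPrincipalCallback(clientSubject, user),
                new GroupPrincipalCallback(clientSubject, new String[] { group })
        });
    }
}
```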

Although JASPIC is yet to take off, it's a good first step towards standardising security in web containers and avoids the need for each container to have its own proprietary solution. There is still the issue, though, of different containers using different deployment descriptors, which hinders the portability of apps.

21 January 2015

It was one of those rare occasions that I was in the office when my boss started a conversation with me that went along the lines of:

Steve: I’ve been thinking…
Alan: Yes? (I wonder where this conversation’s going)
Steve: Alan, you have been working with Chef for customer ‘X’ for about a year now.
Alan: Yes? (Still not sure where this conversation is going)
Steve: I know you have been working on deployments to large-scale JBoss EAP 5.x/6.x (standalone) clusters, but would Chef be a suitable tool to help provide continuous delivery of Oracle SOA Suite applications?
Alan: Give me a couple of days to review our WLST scripts used to deploy Oracle SOA applications (currently executed manually) to assess whether Chef can be used, and I’ll write a blog about it.

Background

I have been working with one of our customers for just over a year, helping them to achieve their goal of providing ‘Continuous Delivery’ for a number of their core business applications. Continuous Delivery focuses on automating all the build and deployment steps up to and including the UAT environments. The DevOps teams have been using git, Maven, Jira, Stash and Jenkins for a number of years, so have a well-established ‘Continuous Integration’ process, but lacked the tools to automate the deployment.

Chef is a software provisioning system developed by Opscode which allows infrastructure to be modelled as code on the Chef server, with clients installed on the servers to be managed (nodes). The Chef clients communicate with the Chef server to determine what changes need to be made to each node's configuration. The infrastructure is modelled using the following Chef objects:

Cookbooks contain Chef DSL/Ruby code in recipes, libraries and definitions to define what resources/operations need to be carried out on a node. The Ruby code references attributes defined in either the cookbook, roles, data bags or environments.

Data Bags are used to define global attributes that can be used by all code defined in the cookbooks.

Environments are associated with nodes managed by Chef and define the cookbooks, and any attributes, specific to that environment. The cookbook definitions can be pinned to a version, allowing different versions to be applied to the DEV, TEST and PROD environments.

Roles describe the purpose of a server and define the run-list and order of recipes to be applied to the node.

Nodes are registered with the Chef server and assigned one or more roles and an environment.

Chef provides a command line utility called knife, which is used to manage the Chef objects and environment. The knife ssh subcommand is used to invoke SSH commands (in parallel) on a subset of nodes, based on the results of a search query made to the Chef server. This allows you to invoke a Chef client run on nodes to provision/deploy an environment.

As an example, say we have an application that processes ‘Insurance Claims’ and comprises a web tier deployed to a 2-node cluster and a business logic tier deployed to a 5-node cluster.

In the above example, the following Chef roles would be created: ‘ClaimsUI’ and ‘ClaimsBusinessLogic’, each defining the run-list of recipes executed to deploy its component.

A Chef Environment ‘ClaimsDev’ would be created where the environment specific properties for the application would be defined, such as database connection details, web server urls etc. The nodes would be assigned the appropriate roles and environment, so for example, to deploy the Claims UI to the development environment, the following knife command would be issued:
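The embedded gist is missing from this extract, but based on the description that follows, the command would look something like this (the search query and user name come from the example; the flags are standard knife ssh options):

```shell
# Run chef-client (via sudo) over ssh, as user 'afryer', on every node whose
# role is ClaimsUI and whose environment is ClaimsDev:
knife ssh "role:ClaimsUI AND chef_environment:ClaimsDev" "sudo chef-client" -x afryer
```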

The knife ssh command queries the Chef server, returning a list of matching nodes in the Claims development environment that need the Claims web interface to be deployed. An SSH session is started on each node returned, as the user ‘afryer’, and runs the Chef client with sudo access. The Chef client connects to the Chef server, updates the attributes in the node’s hash map and executes the recipes defined in the ClaimsUI role’s run-list. The recipes read the attributes from the node’s hash map (defined in cookbooks/environments/roles) and perform the operations they define, deploying the Claims web component on the required nodes.

We now have a mechanism to deploy/provision applications to an environment on multiple nodes from a single command-line call. Configuring the knife tool on the server running Jenkins and executing a shell step from a Jenkins job to run the appropriate knife ssh command enables ‘Continuous Delivery’ to be achieved in the Dev, Test and UAT environments.

So can Chef be used to provision Oracle SOA Suite? Oracle SOA Suite is based on the Oracle Weblogic server which consists of an Administration Server and Managed Servers in a Domain. The Administration Server is used to configure Managed Servers in a Domain and deploy applications to them. The WebLogic Scripting Tool (WLST) is a command-line interface used to automate domain configuration, application deployment and configuration, see Oracle WebLogic Scripting Tool for more information. This still requires all the Oracle Weblogic binaries to be installed on the hosts in the Domain, but the majority of the configuration will be done via the Administration Server typically using WLST.

Chef has an execute resource that can run any OS script/command, which means that WLST scripts can be executed from a Chef client run. The properties required to drive the deployment would need to be modelled as attributes in Chef environments/cookbooks, which would be read from the node hash at runtime by the Chef recipes and passed to the required WLST scripts. Hence Chef would be able to install the WebLogic binaries on all the nodes in a Domain, perform any configuration on the individual Managed Servers, and execute any WLST scripts via the Administration Server.
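As an illustration of that pattern (the paths, attribute names and script name here are hypothetical, not from the customer's cookbooks), a recipe might run a WLST deployment script like so:

```ruby
# Hypothetical Chef recipe fragment: run a WLST script against the Admin
# Server, with connection details read from the node attribute hash.
execute 'deploy_soa_composite' do
  command "#{node['wls']['mw_home']}/oracle_common/common/bin/wlst.sh " \
          "deploy_composite.py #{node['wls']['admin_url']}"
  user 'oracle'
  not_if { ::File.exist?(node['wls']['deploy_marker']) } # simple idempotency guard
end
```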

In summary, Chef can deliver a solution that automates the provisioning/deployment of Oracle SOA Suite applications in a repeatable manner, significantly reducing deployment times. This lends itself to provisioning consistent configuration across multiple environments leading to a reduction in time spent trying to debug configuration errors which can occur with manual deployments. With careful design of the recipes in a granular way, it can also help to promote the building of standard environments quickly for new projects using a certified middleware stack. This enforces good practices across all projects, improving the quality of the systems delivered, reducing development and support costs.

In the next blog I’ll create a simple Chef cookbook to deploy and configure Oracle Weblogic SOA Suite in a development environment.

SOA Suite uses a database schema called SOAINFRA (collection of database objects such as tables, views, procedures, functions etc.) to store data required for the running of SOA Suite applications. The SOAINFRA (SOA Infrastructure) schema is also referred to as the ‘dehydration store’ acting as the persistence layer for capturing SOA Suite data.

What data does Oracle SOA Suite 11g (PS6 11.1.1.7) store?

Composite instances utilising the SOA Suite Service Engines (BPEL, mediator, human task, rules, BPM, OSB, EDN etc.) will write data to tables residing within the SOAINFRA schema. Each of the engines will either write data to specific engine tables (e.g. the CUBE_INSTANCE table is used solely by the BPEL engine) or common tables that are shared by the SOA Suite engines such as the AUDIT_TRAIL table.

A few examples of the type of data that is stored within the SOAINFRA schema:

Message payload (e.g. input, output)

Scope (e.g. variables)

Auditing (e.g. data flow timestamps)

Faults

Deferred messages (messages that can be recovered)

Metrics

Why do you need to purge Oracle SOA Suite 11g (PS6 11.1.1.7) data?

Data within the Oracle SOA Suite database can grow to substantial levels in a short space of time. Payload sizes and volume of data will have an impact on available disk space, which in turn will affect the performance of SOA Suite. For example, the EM console can often become slow to navigate, an increasing number of messages become stuck or require recovery, JTA transaction problems appear, and so on.

Purging itself can become challenging if the data has not been maintained, due to the large number of composite instances. Therefore, establishing a purge strategy and implementing it on a regular basis will help maintain the health of SOA Suite, keeping the environment running efficiently.

What are the purging options available for Oracle SOA Suite 11g (PS6 11.1.1.7)?

Oracle provides three options for purging Oracle SOA Suite 11g data:

EM Console: Within the Enterprise Manager console, ‘Delete with Options’ can be used to manually delete many instances at once; however, this may lead to transaction timeouts and is not recommended for large volumes.

Purge Script: This is the process of deleting instances that are no longer required using stored procedures that are provided with Oracle SOA Suite 11g out of the box.

Partitioning: Instances are segregated based on user-defined criteria within the database; when a partition is no longer required it is dropped, freeing the disk space.

Which data will be purged by the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

The purge script will delete composite instances that are in the following states:

Completed

Faulted

Terminated by user

Stale

Unknown

The purge script will NOT delete composite instances that are in the following states:

Running (in-flight)

Suspended

Pending Recovery

List of composite instance states that will be considered for purging by the purge script:

State  Description                                                            Purged?
0      Running                                                                no
1      Completed                                                              yes
2      Running with faults                                                    no
3      Completed with faults                                                  yes
4      Running with recovery required                                         no
5      Completed with recovery required                                       no
6      Running with faults and recovery required                              no
7      Completed with faults and recovery required                            no
8      Running with suspended                                                 no
9      Completed with suspended                                               no
10     Running with faults and suspended                                      no
11     Completed with faults and suspended                                    no
12     Running with recovery required and suspended                           no
13     Completed with recovery required and suspended                         no
14     Running with faults, recovery required, and suspended                  no
15     Completed with faults, recovery required, and suspended                no
16     Running with terminated                                                yes
17     Completed with terminated                                              no
18     Running with faults and terminated                                     no
19     Completed with faults and terminated                                   yes
20     Running with recovery required and terminated                          no
21     Completed with recovery required and terminated                        no
22     Running with faults, recovery required, and terminated                 no
23     Completed with faults, recovery required, and terminated               no
24     Running with suspended and terminated                                  no
25     Completed with suspended and terminated                                no
26     Running with faulted, suspended, and terminated                        no
27     Completed with faulted, suspended, and terminated                      no
28     Running with recovery required, suspended, and terminated              no
29     Completed with recovery required, suspended, and terminated            no
30     Running with faulted, recovery required, suspended, and terminated     no
31     Completed with faulted, recovery required, suspended, and terminated   no
32     Unknown                                                                yes
64     -                                                                      yes

How to install the Oracle SOA Suite 11g (PS6 11.1.1.7) purge script?

The following details will be required:

Database host details:

hostname (IP address)

username

password

SOA Database details:

SOAINFRA schema prefix

SOAINFRA schema password

Full path of the SOA Suite home folder

Full path of the directory where the Oracle purge script will write log information to (a folder on the database host)

‘DEV’ was the soainfra schema prefix used for the examples below.

Log into the Database host server.

Connect to the database as administrator using SQL*Plus:

sqlplus / as sysdba

Grant privileges to the soainfra (database) user that will be executing the scripts:
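The grant statements themselves are not shown in this extract. For the DEV prefix used in these examples, the privileges documented for running the purge (including the parallel variant, which schedules database jobs) would be along these lines:

```sql
-- Privileges needed by the purge scripts (DEV_SOAINFRA is the
-- <prefix>_soainfra user from the examples in this post):
GRANT EXECUTE ON dbms_lock TO dev_soainfra;
GRANT CREATE ANY JOB TO dev_soainfra;  -- required for the parallel purge jobs
```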

What is Looped purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)?

Looped purge is a single-threaded PL/SQL script that iterates through the SOAINFRA tables and deletes instances matching the parameters specified.

What is Parallel purging (Oracle SOA Suite 11g (PS6 11.1.1.7) purge script)?

Parallel purge is essentially the same as the looped purge, but is meant to be more efficient: it uses the dbms_scheduler package to spawn multiple purge jobs, each working on a distinct subset of the data. There are two parameters that can be specified in addition to those used by the looped purge. Parallel purge is designed to purge large data volumes hosted on high-end database nodes with multiple CPUs and a good I/O subsystem. A maintenance window should be used, as it requires a lot of resources.

Purge script parameters (each listed with its type; whether it is mandatory (M) or optional (O); its default; and whether it applies to the looped purge, the parallel purge, or both):

min_creation_date (timestamp; M; no default; both)
Beginning creation date for the composite instances.

max_creation_date (timestamp; M; no default; both)
Ending creation date for the composite instances.

batch_size (integer; O; default 20000; both)
Batch size used to loop the purge: how many instances are processed in one cycle before moving on to the next. The script reads from and writes to a temporary table as it loops. This is NOT how many records are deleted in one purge operation.

max_runtime (integer; O; default 60; both)
Expiration time, in minutes, at which the purge script will stop running.

retention_period (timestamp; O; default null; both)
Only used for BPEL instances, as a further level of filtering. The value must be greater than or equal to max_creation_date. Specify a retention period if you want to retain composite instances based on their modification date.

purge_partitioned_component (boolean; O; default false; both)
Allows the same purge to also delete partitioned data.

composite_name (string; O; default null; both)
The name of the SOA composite application. You can purge the instances of a specific SOA composite application and leave the instances of other composites unpurged, enabling you to purge certain flows more frequently than others due to high volume or retention period characteristics.

composite_revision (string; O; default null; both)
The revision number of the SOA composite application.

soa_partition_name (string; O; default null; both)
The partition in which the SOA composite application is included.

ignore_state (boolean; O; default false; both)
If set to true, all instances are purged regardless of state.

DOP (integer; O; default 4; parallel only)
The number of purge jobs to run at the same time. As a rule of thumb, the number of jobs should not exceed the number of CPUs on the node by more than one; for example, on a quad-core / 4-thread CPU RDBMS box it would be set to 3.

max_count (integer; O; default 1000000; parallel only)
The maximum number of rows to process (not the number of rows deleted). A temporary table is created, and purge jobs are then scheduled based on that data.
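To tie the parallel-only parameters together, here is a hedged sketch of a parallel run (the procedure name is the one documented for 11g, soa.delete_instances_in_parallel; the dates and values are illustrative only):

```sql
BEGIN
  -- Parallel purge: 3 jobs (suitable for a quad-core node), capped at
  -- 1,000,000 rows processed.
  soa.delete_instances_in_parallel(
    min_creation_date => to_timestamp('2010-01-01', 'YYYY-MM-DD'),
    max_creation_date => to_timestamp('2010-03-31', 'YYYY-MM-DD'),
    batch_size        => 20000,
    max_runtime       => 60,
    retention_period  => to_timestamp('2010-04-01', 'YYYY-MM-DD'),
    DOP               => 3,
    max_count         => 1000000);
END;
/
```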

We are required to delete all composite instances which were created between 1st June 2010 and 30th June 2010. In addition, there is a requirement not to delete instances that have been modified after 30th June 2010. The script must finish within an hour, as business hours resume shortly afterwards.
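The invocation itself is missing from this extract; using the documented looped-purge procedure, the scenario above would translate to something like:

```sql
BEGIN
  -- Delete instances created in June 2010, but retain any BPEL instance
  -- modified on or after 1st July 2010; stop after 60 minutes.
  soa.delete_instances(
    min_creation_date => to_timestamp('2010-06-01', 'YYYY-MM-DD'),
    max_creation_date => to_timestamp('2010-06-30', 'YYYY-MM-DD'),
    batch_size        => 20000,
    max_runtime       => 60,
    retention_period  => to_timestamp('2010-07-01', 'YYYY-MM-DD'),
    purge_partitioned_component => false);
END;
/
```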

This will in effect delete all composite instances where the creation time of the instance is between 1st June 2010 and 30th June 2010 and the modification date of the BPEL instances is less than 1st July 2010.

This blog has provided a basic understanding of the purge script contained within Oracle SOA Suite 11g (PS6 11.1.1.7).

A long term purging strategy needs to be implemented and in order to do so, a good understanding of the workings of the purge script is required along with an awareness of the issues related to the script.

Therefore, leading on from part 1, there will be a few more blogs covering the following: