With the rapid rise in the uptake of virtual assistants, we wanted to explore integrating the Oracle Digital Assistant (ODA) with Alexa, to expand the range of platforms we can offer our customers. The good news: with the latest release of the Node SDK for ODA by Oracle and the alexa-app SDK by Amazon, this can now be done very easily.

The integration is achieved by setting up a web server between Alexa and ODA that acts as middleware.

The next step is to associate this custom slot with your custom intent. To do this, click on CommandBot under the Intents menu and navigate down to Intent Slots to create a new slot.

Let's keep the Slot Name as command, click on the + button, and select MyCustomSlotType as the slot type, as shown below:

Finally, add a sample utterance to your intent. Click on CommandBot under the Intents menu, add {command} as a Sample Utterance, and click on the + button. This will pass the user input to our web server application code.

Click on Save Model.

After saving, build the model by clicking the Build Model button. Building the model may take a few seconds; you will get a message once it is successful.

2. Creating a web server app:

Our web server app is a Node.js app with an Express Server.

It requires 'alexa-app' and 'bots-node-sdk' node modules.

Before we begin, please download the sample code and perform an npm install. The sample code can be found in this repo: Alexa with ODA.

The sample code runs on port 5000. It is very helpful if you expose this port and obtain a public URL beforehand (I prefer using ngrok; if you are new to ngrok, refer to the official documentation).

Hint: Execute the command 'ngrok http 5000' in the downloaded ngrok folder and make a note of the 'https' URL it provides.

After you have the code locally, there are a couple of changes you need to make. To make them, we need to set up a webhook channel in ODA and obtain the connection parameters from there.

Now, let's see how it's done.

3. Creating a Webhook channel in ODA:

After you login to ODA, click on the burger menu icon in the top-left corner, expand Development and select Channels.

Click on create a new Channel.

Select Webhook from the Channel Type dropdown.

Provide the details for the Name and the Outgoing Webhook URI leaving the Platform Version default. You can also add an optional Description and change the Session Expiration.

The Outgoing Webhook URI is the public URL that we obtained in the previous step, suffixed with "/botWebhook/messages" (you can change this suffix in the sample code by editing the endpoint property at line 339 of service.js).
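For example, if ngrok gave you the (hypothetical) public URL https://abc123.ngrok.io, the Outgoing Webhook URI would be:

https://abc123.ngrok.io/botWebhook/messages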

After providing the details, click Create. If there are no errors, your channel should be created and you will be presented with a screen like this:

Preserve the Secret Key and the Webhook URL somewhere (we will need them in the final step). Enable the channel by clicking the toggle next to the Channel Enabled label. Route this channel to your required Digital Assistant or Skill from the dropdown next to the Route To label.

We are now just one step away from finishing up the integration.

The Final Step:

We are just left with linking the web server app that we've created with ODA.

To accomplish that, replace "YOUR_APP_ID", "YOUR_WEBHOOK_SECRET_KEY_FROM_ODA" and "YOUR_WEBHOOK_URL_FROM_ODA" in the metadata with your Alexa App ID (obtained from the Alexa Developer Console), the Secret Key and the Webhook URL, respectively.

The metadata is located at line 20 in the service.js file provided in the sample code:

Go to the developer console of your skill and click 'Build Model' if your skill is not already built.

After the skill is built, click Test from the console.

Make sure skill testing is enabled; if not, select Development from the dropdown.

Type something in the input box along with the invocation name of your skill to test. If everything is properly configured, you should see the response from your bot:

And, we are done!

A few restrictions with Alexa

To wrap up, it's worth pointing out a few restrictions we've noticed in working through our ODA-Alexa integration.

Messaging types in Alexa are limited:

ODA features interactive components like Lists, Cards, Webviews, etc. If one or more of these components are present in your conversation flow (BotML), it is not advisable to use Alexa as a channel.

Because Alexa is a voice-based channel, it is very obvious that we can't have graphical interactive components, unlike Facebook and other channels.

It is possible to show cards (called Home Cards) in Alexa, but these cards only show up in the Alexa application on a mobile phone/tablet and are not interactive, meaning that you can only present the user with some information (e.g. a weather forecast, an order summary) but can't have the user interact with Alexa through them.

Alexa also supports Progressive Responses (similar to a progress bar in GUI but with a vocal output).

For more information on Home Cards and Progressive Responses, please visit Amazon's official documentation. Even though there is no documentation around graphical cards, Alexa renders them in a completely different way, as we'll see below.

Now we'll compare the rendered messages in Alexa and Facebook, and note some of Alexa's shortcomings.

Multiple responses aren't received:

With the given SDK, Alexa processes only one response from the bot. Even if there are multiple responses from the ODA skill, Alexa renders only the first one. Check out the example below:

Expected Response:

Alexa's Response:

The second and third text messages are skipped in Alexa, making it unsuitable when there are multiple responses from the bot.

Cards are a bit fuzzy to respond to:

We are accustomed to seeing navigable cards in both the web and Facebook channels. Alexa renders these cards in a completely different way: it adds the prefix 'CARD' to the card title and the button, and then reads out all the card responses in the order of their appearance. To select a card you must tell Alexa the card name, to which Alexa responds with another prompt with the options available for the card (mostly 'view' and 'return').

Alexa opens and reads out the card data when you say 'view' and returns if you say 'return'. Take a look at the screenshots:

Cards in Facebook:

Cards in Alexa:

Selecting a card in other channels is simply a matter of clicking the button on that particular card, whereas in Alexa the card title must be given as input, which is not as convenient.

Lists aren't that bad in Alexa:

Lists seem to be working fine in Alexa. The available options are read out to the user and the user has to reply with any one of those options.

Test your patience with the FAQs (QnAs):

The "System.QnA" component is rendered as cards with the questions that are relevant to the user input, along with the answers. The user has to click on the card to view the full answer.

Alexa reads out all the matched questions prefixed with the word 'Card', just like it does with normal cards, as mentioned above. The user has to read the question back to Alexa (imagine how painful it would be if you go wrong - the whole conversation heads nowhere!). Given this, I feel it's better to avoid using FAQs in the Alexa channel.

Let's take a look at these screenshots:

QnA's in Facebook Messenger:

QnA's in Alexa:

When you choose to 'view':

When you choose to 'return':

Having seen all of these features and some of the shortcomings of Alexa as a channel, I personally feel it is the developer's responsibility to consider these before choosing the appropriate channels.

Alexa is definitely great for sending single and concise messages, and messages with prompts, but you might want to reconsider using Alexa if your bots are highly interactive (if they contain cards, webviews, etc).

Using Alexa as a channel where the messages contain private and confidential information (e.g. medical information, bank history) is not recommended.

Alexa is perfect when your messages require beautiful narration. It can even convey emotions that a user can connect to.

I would like to acknowledge a few people who contributed to this blog post, helping with reviews/testing/validation. Thanks to my colleagues from Rubicon Red, Nikhil Bansal and Sri Charan, for trying out the integration, and to Mr. Rohit Dhamija from Oracle Product Management for the awesome blog post that I used as a reference.

TL;DR: Using the rubiconxred/psm docker image with external secrets, we can securely interact with psm CLI without needing anything else installed on our machine. No python, no pip dependency downloads, no secrets stored in the image; the only dependency is docker.

This is Part #2 in our 3 Part series on Oracle PaaS Service Manager (PSM). In the first post we showed you how to locate and download the PSM CLI.

We can use the dockerised image in exactly the same way as we would use the native psm cli while avoiding all of the upfront pain (e.g. conflicts between the python dependencies of other CLIs).

After I shared this approach on our internal collaboration portal, a co-worker shared their frustration from earlier experiences:

I wish I had done the docker-first approach for the CLI tools, as I have been through hell with the CLIs for aws, psm and opc etc and their shared use of python!!

If this resonates with you, then now is the time to switch to running dockerised CLI tools. If you're just getting started with psm, save yourself the pain: if your machine uses python for anything else, you're probably gonna have a bad time. Do you want to have a bad time?

Beware of Imitations

I wrote this article because the other articles I had seen were giving really bad advice, such as instructing readers to bake their secrets into the docker image itself. This is not a good idea: all it would take is for someone to do a docker push to a public registry, and the world would have full access to your entire Oracle Cloud domain. Don't be a sucker!

Only prerequisite: docker.

You shouldn't need anything other than docker to run through this guide. If you don't have it installed, head over to the Docker Installation Guide; or, if you're on Linux/Mac, you can simply install it with the following:

curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh

Creating our psm wrapper

Establishing a dockerised setup for psm can be done by simply adding a docker wrapper to your PATH. Cool hey?!

Let's create a psm file on the PATH so it behaves exactly like the natively installed psm. We will take the contents below and put it in /usr/local/bin/psm (although it could be anywhere, so long as it's on the PATH).
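A minimal sketch of what the wrapper might look like is below. The exact docker run flags are my assumptions about how the image is invoked; the PSM_* variable names match the placeholders discussed next:

#!/bin/sh
# Hypothetical wrapper for the containerised psm CLI.
# The placeholder values are replaced in the next step.
docker run --rm -it \
  -e PSM_USERNAME="YOUR_USERNAME" \
  -e PSM_PASSWORD="YOUR_PASSWORD" \
  -e PSM_IDENTITY_DOMAIN="YOUR_IDENTITY_DOMAIN" \
  -e PSM_REGION="YOUR_REGION" \
  rubiconxred/psm psm "$@"

Remember to make the wrapper executable with chmod +x /usr/local/bin/psm.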

Finally, update the placeholders with the real values you want for PSM_IDENTITY_DOMAIN, PSM_USERNAME, PSM_PASSWORD and PSM_REGION. If you don't know how to find these, I'd recommend checking out Andrew Dorman's Getting Started with PSM guide. The identity domain in particular can be notoriously difficult to find.

Now, you're done. You can interact with psm as per normal.

psm help

Hold up, PSM what?

I know, I know, I couldn't help but jump to the solution. It's bad I know. So let's wind it all back... What is psm and why should I care?

The Oracle PaaS Service Manager Command-Line Interface (psm) is a useful tool for managing the lifecycle of various services in the Oracle Public Cloud. It's a thin wrapper over the various PaaS REST APIs to make it easier to perform cloud lifecycle automation from scripts.

So what can I do with it?

Well... Plenty of things! An execution of psm help will show that we can manage and automate the lifecycle of the following services (as at July 2018; I'm sure there are more to come).

How did you create your image?

Ok, so if you have read this far, good work. You might be wondering how you can create a secure docker image under your own namespace rather than relying on the one I have pre-built. I've uploaded the Dockerfile I used to Github for your convenience. To build your own image, simply clone the repository, download psmcli.zip locally using one of the approaches in the earlier post, place it in the same directory as the Dockerfile and execute a docker build.
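The build step might look something like this (the directory and file locations are illustrative):

cd psm-docker            # hypothetical directory of the cloned repository
cp /path/to/psmcli.zip . # psmcli.zip downloaded as per Part #1
docker build -t psm-cli .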

Now, to use your image, simply update your wrapper script to reference psm-cli instead of rubiconxred/psm. That's it!

Distributing your image for easy access anywhere

One of the nice things about the image is that it doesn't contain any secrets, and so it is safe to push to a Docker Registry. Once pushed to a registry, you or anyone (or at least anyone authorised, in the case of a private registry) can pull down the image anywhere it is needed.

Of course, you can skip this step altogether by using the pre-built rubiconxred/psm from Docker Hub.

If you do indeed want to use your own image repository on a docker registry, all that is needed is an active Docker Hub account. If you don't have one, you can sign up for free at https://hub.docker.com/

Step 1: Make sure you have performed a docker login first.

Step 2: If you are using the image tagged as psm-cli, be sure to first tag it with your namespace from Docker Hub.

docker tag psm-cli yournamespace/psm

Step 3: Push your image

docker push yournamespace/psm

Thanks!

I hope you found this useful. If you did, please share this post or leave a comment below.

Oracle PaaS Service Manager (PSM) is a command line tool that allows you to provision, configure, and manage the life cycle of Oracle PaaS platforms and products. It acts as a wrapper around the REST APIs, and makes it easier to interact with the PaaS services from the command line and scripts.

This is Part #1 in our 3 Part series on Oracle PaaS Service Manager (PSM). In this blog we will look at how to locate and download the PSM CLI. In subsequent blog posts we will show you how you can easily and securely Dockerise the PSM CLI, and how to use it.

To get started with PSM, there are two ways of downloading the CLI client:

Download from the Oracle Cloud User Interface

Download using the REST API

Download Via the Oracle Cloud User Interface

Log into the Oracle Cloud Console and click on the Menu icon in the top left

Select "Services" from the Menu

Select one of the PaaS Services displayed in the menu, for example Java, Database, Analytics, SOA or Integration. In our example we are going to select "Database".

From the service console of the PaaS Service that you selected, click on the round circle with your initials in it to open the menu. Select "Help" followed by "Download Center"

This brings up a dialogue displaying the items that you can download. In our example there is only PSM. Click on the download arrow to commence the download.

Download Via the REST API

We can download the psm CLI using curl and the REST API. Before you begin, you will need to locate your Oracle Cloud Account Identity Domain, and the region.

Locate the Oracle Cloud Account Details

Log into the Oracle Cloud User Interface

The lower portion of the Dashboard lists the services that you are currently using within your account via a series of tiles. Click on the menu icon of one of the tiles and select "View Details".

From the service details we can see the "Identity Service Id" (highlighted in the red box); this is our Identity Domain. To find our region we look at the "REST Endpoint", where we have highlighted the region value in yellow. Valid regions include "us", "aucom", and "europe".

Using curl with the REST API

We can set the values of our username, password, Identity Domain, and region as shell environment variables for use in our curl command.
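A sketch of what that might look like is below. The endpoint path follows Oracle's documented pattern for the psm CLI download, but treat the exact URL as an assumption and substitute your own values:

export PSM_USER="first.last@example.com"
export PSM_PASS="MyPassword"
export IDENTITY_DOMAIN="MyIdentityDomain"
export REGION="us"

curl -X GET -u "$PSM_USER:$PSM_PASS" \
  -H "X-ID-TENANT-NAME: $IDENTITY_DOMAIN" \
  -o psmcli.zip \
  "https://psm.$REGION.oraclecloud.com/paas/core/api/v1.1/cli/$IDENTITY_DOMAIN/client"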

Installing PSM CLI

Having successfully located and downloaded the psm CLI you can now proceed to installing it. Ensure that you have completed the installation and configuration of the prerequisite version of python before installing.

Keep an eye out for the next blog post which will cover using PSM CLI.


Over the past few months, we have been working with Oracle Integration Cloud (OIC). In this post, we’ll be looking at Integration Cloud Services (ICS) agent framework, which allows us to connect to our on-premise applications.

As not all applications run on the cloud, the agent framework can help us build hybrid cloud solutions, allowing on-premise and cloud applications to communicate with each other.

There are two types of agents:

Connectivity Agent:-

A gateway that provides a connection between on-premise apps and the ICS domain to exchange messages, without the need to open firewall ports.

Execution Agent:-

A self-managed, on-premise instance of ICS that enables better integration for systems within your organization and addresses security concerns about sensitive data residing in the cloud.

In this blog, we’ll focus on the Connectivity Agent.

For our use case, we will create an Integration that connects to an on-premise MySQL database where we’ll be storing the latest weather forecast.

This generates a basic integration flow with no source or target configured.

Step 2: Configuring Trigger and Invoke

Let's configure the source: drop the connection we created from the resource palette onto the flow.

Provide a name that defines your operation and an optional description.

The next page shows the available operations and port types; as our WSDL has only one operation, we can continue by clicking "Next".

As we have defined our operation to be one-way in our WSDL, we are presented with a page where we can choose to send a delayed response with custom headers, or no response at all. We'll select no response.

We won’t be adding any Headers, so skip to the summary page.

With our source configured, let’s now configure the target.

Drop Vivek_MySQL onto the flow and provide basic information such as a name and an optional description. The steps for configuring the adapter are:

a) We will be running pure SQL, so select that option from the drop-down.

b) Insert your SQL query and validate it.

c) We have to validate the query before proceeding to the next step; the error below is prompted if we don't validate.

d) Any errors in the validation are shown below.

Step 3: Map Data

After configuring the target, ICS automatically creates a mapper file where we can map data from Source to Target.

Click on "Map" Action to open the mapper window.

Map the required data elements by simply dragging and dropping, or apply functions to them by clicking on the necessary target elements.

Click Validate to check for errors and then close the wizard.

Now our flow should look like the below, with a trigger, an invoke and a mapper file.

Step 4: Adding a Tracking Variable

A tracking variable is mandatory to activate the integration.

Click on the menu icon and select Tracking.

A unique identifier must be provided, which helps us when tracking the integration flows.

We’ll select CityId.

Step 5: Activate

The agent must be running to activate the integration.

Save and activate the integration.

Activation provides us with an endpoint which we can use to test the integration.

Testing Our Integration:

We can use SoapUI to test our integration.

Create a new SOAP project and provide our integration's endpoint.

Enter the data along with the WSS Username Token and Timestamp. The username and password can be specified in the request properties bar.

Right-click on the request payload and click on "Add" to add the username token and timestamp values.

Run the test case to successfully trigger our instance.

Now we can see that the data is being populated to the DB Table.

We can track any specific instance by providing the tracking variable defined during creation of the integration.

Uses of the Agent Framework

The On-premise agent can be monitored through the User Interface in the ICS Console

No need to have a pre-built container to deploy the agent

It is not required to open inbound ports to access applications

It is not required to expose private SOAP-based web services

Monitoring and Control:-

The installation provides start/stop scripts which take username and password as inputs:

./startAgent -u=Username -p=Password

Status and Start/Stop actions can be monitored from the ICS Console.

With this, we have seen how to create, test and monitor an integration that connects to an on-premise application using the agent framework.

In Part #1 of this series, Removing SSH 'pem' files from Jump Boxes in AWS - Introduction, I described the problem of SSH management and started on a solution to restrict the spread of SSH key distribution across the servers in an environment.

In this post, we'll put this all together and show how the AWS Parameter Store can help us manage our SSH keys.

Uploading SSH keys to the AWS Parameter Store

As mentioned in the first document, it's not possible to just copy/paste SSH 'pem' keys (private keys) via the AWS Console - for some reason it translates the EOL characters into spaces. Instead, we'll need to do this via the command line - so let's see how to do this.

First, we need to change the IAM Role for the Instance where we will be running the command, to allow the Instance to 'put' values into the AWS Parameter Store.
In this case, I altered the non-prod jumpbox and gave it the 'RxR-NonProd-Parameter-Keystore-Manager' Role (but will change it back to Read Only after the put is done).
Please see the first document on how this is done.

An interesting note here - thanks to my colleague Craig Barr.
Craig pointed out to me that it's possible to dispense with the 'insert-pem.sh' file completely, due to a really neat trick that I wasn't aware of with the '--value' clause.
The whole 'insert-pem.sh' file can be replaced with a one-liner!

Yes - That works! Great. So we know that we can extract the key with just a command line.
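For completeness, the retrieval can also be done with a one-liner (again, the names are illustrative):

aws ssm get-parameter --name "/example/nonprod-key" --with-decryption \
  --query Parameter.Value --output text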

This is where things get tricky!

This is a good place to stop for a second and think about where we're at. We know that we can retrieve an SSH 'pem' file from the AWS Parameter Store. That's great!

But thinking about how we make connections via the execution of an SSH command: SSH wants a file that it can read with its '-i' parameter. It also wants that file to be 'chmod 600' - restricted, user read-only permissions.
This leads to an interesting conundrum, since we don't want to store, or leave behind, any 'pem' files or scripts on the jump boxes.

So what we want to do is :

Somehow call a script - one that must not remain on the jump box - that will extract the target SSH 'pem' file to the jump box and save it as a file

SSH uses that file to log in (using ssh -i <pem-file>)

That 'pem' file is then deleted as soon as possible - but the SSH connection must remain.

Then the script itself must not be left behind on the jump box.

The answer to all this is to go one step further with the AWS Parameter Store, and use it to also store the script we'll be needing!
Then, from the MobaXterm command line, we can execute a command that extracts the script from the AWS Parameter Store, which in turn sets up the SSH connection. As part of its execution, the script ensures that the downloaded SSH 'pem' key, as well as the script itself, is deleted!

So, to do this, we'll take the same approach as before when uploading an SSH 'pem' key - but this time upload our script instead.
The script looks like this:
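Here is a sketch of such a script, assuming the 'pem' parameter name and the target IP address are passed in as arguments (the details are illustrative):

#!/bin/bash
# connecter.sh - illustrative sketch only.
# $1 = name of the 'pem' parameter in the AWS Parameter Store
# $2 = target server IP address
MYFILE=/tmp/$1.pem
aws ssm get-parameter --name "$1" --with-decryption \
  --query Parameter.Value --output text > $MYFILE
chmod 600 $MYFILE
# Delete the pem file one second from now; the SSH connection remains open.
(sleep 1 && rm -f $MYFILE) &
ssh -i $MYFILE ec2-user@$2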

Note the line "(sleep 1 && rm -f $MYFILE) &" - the sub-shell command to remove the SSH 'pem' file after a one (1) second sleep. I needed to take this approach since attempting to append an 'rm' command after the SSH command would not work. That is, the 'pem' file would hang around on the jump box until the SSH command exited - not what we want (since any interim login to the jump box would be able to see the SSH 'pem' file!).

So, that takes care of the SSH execution and the removal of the 'pem' file - but what about the script itself, since we need to also remove it from the jump box after it has done its job?

The answer is to take the same approach - but this time apply the 'rm' within the execute string we send through to MobaXterm (sketched after the steps below):

We call AWS to extract the script 'connecter.sh', saving the extracted file as 'connecter.sh' and making it executable.

We then create a sub-shell command that will sleep for 1 second before deleting the script and make it a background task.

Without pause, we run the 'connecter.sh' script with two (2) parameters - the 'pem' file we'll be needing plus the target server IP Address.
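Putting those three steps together, the execute string might look something like this (a hypothetical sketch; the parameter names, key name and IP address are illustrative):

aws ssm get-parameter --name "connecter.sh" --with-decryption \
  --query Parameter.Value --output text > connecter.sh ; \
chmod +x connecter.sh ; \
(sleep 1 && rm -f connecter.sh) & ./connecter.sh nonprod-key 10.0.1.25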

And that's it!

When we connect using this methodology with MobaXterm, we SSH to our target server and there are no 'pem' or script files left over on the jump box - exactly what we want!

Reviewing it all ...

It's a good idea now to review what we have achieved with all this :

We have vastly reduced the set of 'pem' keys we need to store on the servers in the environment. Remembering that previously we had two options - (1) store SSH 'pem' files on the jump boxes, or (2) store the full set of SSH 'pem' files on the Windows server where MobaXterm runs (using the network settings / SSH gateway option) - we have now reduced all this to just two (2) SSH 'pem' files being needed on the Windows server: the non-prod SSH key and the prod SSH key.

All the vital SSH 'pem' files needed for connecting to both non-prod and prod servers are stored in one place - in the AWS Parameter Store, in an encrypted format. Much more secure! What's more, IAM Roles restrict which files the jump box servers are allowed to read - i.e. the non-prod jump box can only read non-prod files, and the prod jump box can only read prod files.

It makes SSH key management much easier. Since we know that there is only one source of truth for all the SSH 'pem' files, it becomes much easier for us to change / rotate SSH keys, since they are not stored on local servers. In these cases, all we need to do is generate new SSH 'pem' keys on a target server and then update the old 'pem' file in the AWS Parameter Store with the new key. Easy.

No artifacts are required - or left behind - on the jump boxes.
Again, much more secure.

The last thing I'll add here is: ensure that you have backed up / saved all your SSH 'pem' files to a very secure place - for example, Zoho Vault or 1Password.

I hope that this is useful to you in your work to secure your AWS environments.

During the work of a major Client Project, one issue that was continually on my mind was how we were using SSH 'pem' files on the non-prod and prod jump boxes.

What follows are the notes on my travels towards tightening up security on our AWS Environment. The mindset is towards making any 'hack' attempt as difficult as possible for an intruder. This isn't negative thinking as such, but rather taking a default position that someone 'has' gotten onto the system, so how can I make them waste as much time as possible whilst they are on it.

And of course, the more time they are on, the more opportunities there are for them to be discovered, etc.

Finally, I have broken up this discussion into two parts.
This is because it covers a lot of ground and some of the content is a bit technical (Part #2 will be posted up a little later).

Setting the Scene

Before we dive into SSH keys etc., it's worth taking a quick look at how these servers (jumpboxes, work servers) relate to each other.

The idea behind the above design is to provide as much 'segregation' as possible between the different environments.
Using SSH private keys (pem files) we can control each 'jump' required to log onto a target server.

A 'pem' file is an SSH private key, so it is a vital resource that should be heavily protected.
Having these sitting in the /home/ec2-user directory did not fill me with joy - especially since, security-wise, this is a gaping hole.

So, what can we do about it?

Being AWS, there is an EC2 service known as the 'Parameter Store'.
This area can be used for all sorts of things like passwords, SSH keys and other parameters.

So the perfect place to be is:

No SSH pem files on jump boxes

No support scripts etc. on jump boxes

No 'artifacts' left behind on jump boxes when a tunneling operation is performed

Overall, across the AWS server estate, a vast reduction in the number of SSH 'pem' files stored physically on servers. This includes not just servers like the 'non-prod' or 'prod' jump boxes, but also the Windows servers where MobaXterm (or its equivalent) runs.

Please note that although I centre on MobaXterm for this discussion, the same situation exists when using other tools - such as PuTTY.

The core issue is that these tools require the whole set of 'pem' files for accessing an environment to be present on the server where MobaXterm (or PuTTY) is located. If there are multiple of these servers (i.e. 2+ jump boxes, as is often the case), then multiple copies of these keys will be present. And on Windows, this is exacerbated even further if these 'pem' files are stored not in a 'public' place on the Windows box, but in private user directories - e.g. c:/users/<username>/pems.

Very quickly, the control of critical SSH keys becomes a thorny issue, especially for security-conscious Clients.
To find a good solution, we'll need to work with both our tooling (MobaXterm) as well as the environments on AWS.

Looking at MobaXterm, although it allows us to execute a command when we log onto a jump box, it does not allow us any flexibility when first connecting to that jump box.

Let's take a look at that now:

Notice how for (1), we can only specify an SSH private key? We don't have the ability to execute a script or command that would 'get' that key for us - unfortunately.
I have written to MobaXterm and asked them to integrate AWS Parameter Store access into the application.
Hopefully they will do something like this in the future.

But given what we have, the best we can do at the moment is:

LIMIT the number of SSH keys we store on our Windows server (where MobaXterm runs)

Somehow not have any critical information stored on the intermediate jump boxes themselves.

As it happens, we can use the AWS Parameter store to achieve (2), and that is what I'm going to discuss next.

Using the AWS Parameter Store

What we want to do is use the AWS parameter store as our secure storage area for the SSH keys and (believe it or not!) an execution script that actually creates the SSH connection.

So, what is the AWS Parameter Store?

If you look in the AWS console in EC2 and scroll down the left hand side services, it appears near the bottom of the list:

If we Click on 'Parameter Store', this is what we'll see:
(I've created a few example entries since I did not want to show anything real!)

Creating a new entry is very easy - but I found a real gotcha when trying to add SSH 'pem' files: when you try to retrieve them, the carriage-return at the end of each line in an SSH private key is returned as a space ' '!
Of course, this breaks the SSH private key - but there is a way around this, by inserting values via the 'awscli' command on the Linux command line. More on that later.

So, we can store parameters etc.
But how can we allow access to the values in the AWS Parameter Store?

Read on ...

Allowing access to the AWS Parameter Store

What we need to do is allow our jumpboxes - non-prod and prod, to access the AWS Parameter Store values - but only for the values relevant to that server. For example, we only want non-prod to have access to non-prod keys, and prod to have access to prod keys.

The first thing we need to do is create an AWS Service Endpoint. An Endpoint is, essentially, a way for users of an AWS provided service to access that service. In this scenario, we need to have access to the AWS System Manager.

To set this up, you need to go to the VPC Service and then select 'Endpoints'.

As an example, lets create an endpoint here for the Systems Manager. By clicking on 'Endpoints', you'll be presented with a page like this (sorry for the size - I wanted to fit the whole page since it describes the situation better):

You can see that in this case I have chosen the AWS SSM Service. Also note that you need to place it in the correct VPC and subnets, as well as applying the correct security groups.

After creating it, you'll end up with an Endpoint that looks like this :

So now we have an Endpoint, how are we going to allow an Instance like a non-prod jump box to access it?

It's time to do some IAM magic!

What we need to do is create some specific IAM Roles and then associate them with Policies that will allow the access.
So, what I did was create four (4) new Roles:

There are two Roles for each jump box - one for 'read-only' and the other for 'Manager'.

Read-only is pretty self-explanatory - an Instance can only READ the values for, say, non-prod if it has the 'RxR-NonProd-Read-Parameter-Store' Role associated with it.
The 'Manager' Role allows 'put' operations - which is important, since we'll need this to upload the SSH 'pem' files.
In fact, it will be the only way we will be able to do this - since trying to do this via the Console doesn't work (as described earlier).

For each Role, there is a user-defined Policy - let's take a look at the Policy for the 'RxR-NonProd-Read-Parameter-Store' Role.
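A read-only policy along these lines would do the job (a sketch only; the region, account ID and parameter path are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter",
        "ssm:GetParameters"
      ],
      "Resource": "arn:aws:ssm:ap-southeast-2:123456789012:parameter/example/*"
    }
  ]
}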

The 'RxR-NonProd-Parameter-Keystore-Manager' Role is slightly different, since it needs to grant more powerful access:

With that done, the last thing we need to do to allow the jumpboxes access to the Parameter Store is to attach the appropriate Role to the Instances (i.e. AWS servers).

For example, looking at the non-prod jumpbox, we need to grant it the IAM Role : 'RxR-NonProd-Read-Parameter-Store' so it can access the AWS Parameter store at, say, /example, via the System Manager Endpoint we defined earlier:

And now attach the Role:

And taking a look at the Instance, we can now see that the Role is attached :

And that's where I'll leave Part #1.

In Part #2 of this series, I'm going to show how you can do the following :

Upload SSH 'pem' files into the Parameter Store via the command line.

Read those same files back again.

Explain how using this approach, we can download and run scripts from the AWS Parameter Store that in turn download SSH 'pem' files from the AWS Parameter Store - leaving no trace whatsoever that they had ever run or downloaded anything!

Oracle Fusion Middleware environments often consist of multiple servers distributed across many physical hosts. To provision new environments we have the power of Rubicon Red MyST Studio to automate the creation and deployment of platforms both large and small, but there still remains a level of manual effort to validate that everything is working as expected. One of the most basic checks is to review the server logs for errors. Given the size and complexity of some environments, this task can be time consuming and things are likely to be missed, making it a perfect candidate for automated assistance.

In this blog we will look at how we can automate checking for errors from all of the WebLogic Server Log files using the WebLogic REST APIs. The blog contains the following sections:

WebLogic 12.2.1 REST APIs

WebLogic 12.2.1 introduced a range of REST APIs for configuring, monitoring, deploying, and administering the WebLogic Server. For example, you can:

Create, start, and stop servers

Configure JDBC data sources

Create and configure JMS resources

Create users

Create partitions

And much more

These REST APIs provide us with the power and flexibility to quickly and easily incorporate the WebLogic Server administration into other processes and applications. It also simplifies the creation of automation scripts and tools to assist with repeatable and reliable management of the WebLogic Server farm.

WebLogic Log Inspector

Using the REST APIs we have created the WebLogic Log Inspector that interrogates the running servers to read the log files, searching for any error messages, and collating them into a central report for review and investigation. The WebLogic Log Inspector is written in python, making it widely portable and easily understood.

Below is an example execution of the WebLogic Log Inspector utility.

In addition to outputting the high-level statistics to the command line, the utility also generates an HTML report with the detailed error messages categorised by log. Below is an example of the errorReport.html

Using the REST APIs to interrogate log files

Before we begin to use the REST APIs, it is worth noting that the APIs are enabled by default, but may be disabled if desired. To check if the APIs are enabled, log into the WebLogic Server Console (e.g. http://hostname:7001/console) and navigate to

The Domain node in the "Domain Structure" navigation tree

Click on "Advanced" at the bottom of the "Configuration" -> "General" tab

Scroll down to find "Enable RESTful Management Services"

With the REST services enabled, the first step of the utility is to access the WebLogic Server management API for servers

http://servername:7001/management/wls/latest/servers
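For example, with curl (the credentials are placeholders):

curl -s -u weblogic:welcome1 \
  -H "Accept: application/json" \
  http://servername:7001/management/wls/latest/servers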

The response from this request details all of the servers within the domain, along with some basic server information such as the memory usage and server state, as well as a uri for each server to query further. The utility iterates over these uris to query all running servers.

To illustrate the APIs the utility uses, we will follow the API path for the AdminServer all the way to accessing the ServerLog messages. From the previous response message we are going to follow the uri for the AdminServer

Again, we see that the response contains some basic information about the server memory usage, as well as a uri that allows us to find the list of logs the AdminServer writes to.

We then query the logs uri (http://192.168.1.152:7001/management/wls/latest/servers/id/AdminServer/logs) and receive the following response, which contains a list of server logs (e.g. ServerLog, HTTPAccessLog, DomainLog, and DataSourceLog) and a uri to access each log's contents.

We can then query the server for the ServerLog using the following uri: http://192.168.1.152:7001/management/wls/latest/servers/id/AdminServer/logs/id/ServerLog

The response provides a structured representation of the server log, containing all of the log events. The Log Inspector interrogates the log entries and records any with the severity of “Error”, before generating the errorReport.html with the high-level statistics and detailed error messages.
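A rough command-line approximation of what the utility does for a single log is below; the 'severity' field name is an assumption about the shape of the JSON response:

curl -s -u weblogic:welcome1 -H "Accept: application/json" \
  "http://192.168.1.152:7001/management/wls/latest/servers/id/AdminServer/logs/id/ServerLog" \
  | grep -o '"severity": "Error"' | wc -l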

Using this approach, our simple utility makes it very quick and easy to scan through all of the server log files and identify any errors or issues that need investigating. I am sure there are many more scenarios in which the WebLogic Server Administration REST APIs will make it simple to automate and integrate administration tasks.

During a recent engagement, I faced a few issues while creating a database adapter for a stored procedure in OSB 12.2.1.2. I wanted to share the detailed steps to create the database adapter, the issues I encountered and their resolution, in the hope that this may help other people if they come across similar issues.

As SOA Suite 12c uses JDeveloper as the development environment for both SOA and OSB, it is worth noting that it is now possible to create the database adapter in the OSB project itself. The unified development environment provides a consistent approach for both SOA and OSB, unlike the previous approach, where we needed to import the JCA and corresponding files in Eclipse.

This article will cover:

Steps to create 12c Database Adapter for Stored Procedure

Issues faced

Resolution to the issues

Steps to Create Database Adapter

Step 1: Open the Database Adapter Configuration Wizard, name your DB Adapter and specify the location. The default location is the Resources folder in the OSB project for the wrapper, JCA, WSDL, etc., with the exception of the business service, which defaults to the Service Directory location.

Step 4: You can leave the schema as Default or choose to select it. The schema name will automatically be included in a few files that are generated as part of the DB Adapter creation process. In subsequent steps, we will need to make sure that we revise or modify the schema name. This is important to note because during development we might use a local or temporary database, which we need to change later on for deployment purposes. Also, there could be instances where the schema name has environment-specific keywords in it, which we need to change as we promote our code through different environments.

As shown below, select the stored procedure.

Step 5: This is the important step. There are two fields where we need to take care: the wrapper procedure name and the overwrite checkbox.

The procedure name needs to be chosen carefully - it cannot include any "." (fullstops/periods). I discovered this by trial and error when I used "." in the stored procedure name - it complained, but did not point to the actual error. Instead, it creates the SQL create/drop wrapper package files but no JCA, XSD or WSDL, and errors out saying 'Failed to create adapter'.

The workaround I used on our project was to use an underscore ( _ ) in the procedure name - for example "osb_procedurename" - and this worked fine.

Also, if you are recreating or updating the adapter, be sure to check the overwrite option for existing wrapper package.

Step 6: Click Next twice. The DB Adapter creation is finished, and it will create the following files under the Resources folder:

Step 7: By default, Oracle will generate all the files in the Resources folder. However, if required, based on your project guidelines, you may like to move the artifacts to their respective folders using the move option (right-click on the artifact and choose Move), as it will update the references of dependent artifacts automatically. In the screenshot below, you can see an example of the guidelines that we followed in our project for separating the JCA, WSDL and XSD files into their respective folders.

Additional (optional) Step 8: This is an important step, as this is where we will modify or remove the schema name, per your development environment requirements. That is, when we created the adapter in Step 4, the auto-generated files had the schema name attached by default. However, it is common for schema names to differ between environments in the development lifecycle - such as DEVMDS, SITMDS and PRODMDS, respectively. Hence, we can't simply deploy our code to the Test environment using the default development schema, as it will generate a "Schema Not Found" error. There could also be other situations where the schema name is different, and therefore it is good practice to remove the schema name.

To do this, you will need to edit the following four files that have the schema name attached to database resources, such as the procedure name.

Open the schema file and remove the schema name

Open the JCA file and remove the schema name

Open the SQL file and remove the schema name

Open the Business Service and remove the schema name property

Additional (optional) Step 9: At the time of development, we ran into a couple of additional issues, where we had to make some minor updates to the automatically generated scripts.

In the auto-generated SQL file, we needed to remove the "REPLACE" from the TYPE ... AS OBJECT statement, as shown below for TYPE "TIF_ ..":

In some cases you may see the following error while running the adapter: ORA-06531: Reference to uninitialized collection. In that case, you need to change the text as follows in the auto-generated SQL file.

Source text (generated text in the file): IF aSqlItem.COUNT>0 THEN

Target text (manually changed in the file): IF aSqlItem IS NOT NULL AND aSqlItem.COUNT>0 THEN

See the example below:

I hope you find these tips handy if you are working with stored procedures in OSB 12.2.1.2.

During my recent engagements working with SOA Suite 12c, I have realized the strength of the XQuery library feature (introduced in OSB 12c) to simplify SOA 12c implementations by avoiding copying code in various places, and instead centralizing it. In this article we will discuss the following:

Overview of XQuery library

Effective Use Cases for XQuery library

Working Example: "Storing customer specific business logic"

Without using XQuery lib

Using XQuery lib - detailed steps involved

Summary

Overview of XQuery library

Creating an XQuery library is a new feature introduced in SOA Suite 12c. It provides the ability to reuse XQuery functions and avoid repeatedly coding the same logic, as the logic can now be centralized in one place. This is a very handy feature and can be used to strengthen and simplify solutions implemented on the SOA Suite platform.

Often the XQuery library feature may be somewhat underestimated, and therefore in this article I wanted to focus on some of the use cases where I have found it to be very effective.

Effective Use Cases for XQuery library

Storing customer specific business logic that is required repeatedly within the solution. For example, we can store business validation as a set of regular expressions and reuse them across all services.

Creating a common XML wrapper. For example, providing a consistent and common approach for generating a particular header or type of envelope that is required when calling out to a third-party system, or to interact with services or other integrations. Instead of repeatedly creating the envelope or header in every invocation of the third-party interface, we can use an XQuery library function to generate the envelope or header and simply call it during the service- or operation-level transformation in XQuery.

Centralizing mappings in the XQuery library for reuse at a particular service level, in the event of having the same Enterprise Business Object (EBO) structures used in many components, such as services or operations.

And finally, the very common use case of writing time and date conversion functions, or any other calculative function.

An important note: unlike an XQuery transformation, an XQuery library can't be tested standalone, as it doesn't define a function signature and declare variables. This means that if the logic placed in the XQuery library requires testing, you need to copy the content to an XQuery transformation and test against that.

Working Example: "Storing customer specific business logic"

Now we will walk through an example using the XQuery library with the first use case described above, "Storing customer specific business logic". In my recent engagement we had a requirement to centralize the business validations. These business validations are in the form of regular expressions and are heavily used in all services and operations for data integrity.

So we had two elements, typeType and typeValue. typeType can have a value such as Phone, Email, ABN or Organisation, and corresponding to each type there is a different set of validations to perform on typeValue. I have illustrated both approaches, that is, with and without the XQuery library, to compare how effective using the XQuery library can be.

Pseudo code without using XQuery lib

Check the element type: if typeType equals Phone, then compare the phone value (typeValue) against the regular expression confirming it is a valid phone number; else if typeType equals Email, then compare the email value (typeValue) against the corresponding regular expression, and so on.

You can see the implementation below.

Pseudo code when using XQuery lib

All the logic can be placed in the XQuery and the XQuery library functions. We simply need to call the XQuery in our service/operation (or any stage) with an assign activity, passing both elements, typeType and typeValue; the XQuery in turn uses the XQuery library functions, which hold the regular expressions, for the business validation. The result of the XQuery will be a boolean: true if validation passes, false otherwise.
Raising an error when validation doesn't pass has been intentionally kept out of the XQuery, to maintain the modularity of the XQuery.

Here are the steps to accomplish this:

Step 1:

Select the following from the gallery to create an XQuery library function.

Step 2:

Enter the XQuery library name, target namespace, function name, and input and output parameters, as marked in the screenshot below. In the subsequent screenshot, I have shown the actual values used while creating the XQuery library function for the Email Validator.

Step 3:

Clicking OK in the above step will generate the following file. Then place the regular expression validation for email in it, as shown below:

Similarly, we can create other business validations for phone, ABN and Organisation.

Step 4:

Now we will create the XQuery named cmn.util.businessValidator.xqy, with two inputs for typeType and typeValue and a boolean output. This is the XQuery where we import the library so we can use the XQuery functions, such as EmailValidator, which we created in Step 3. The first screenshot shows how to select XQuery File, and the second where to import the library.

Step 5 :

Browse to the XQuery library module

Step 6 :

You can see the library has been imported as a module and a namespace has been assigned. The function now appears in the right-hand pane and can be used as a normal function by calling the function name, along with the namespace.

Step 7:

Now jump to the stage of the service where you would like to do business validation. It's as simple as calling the XQuery with typeType and typeValue, as shown below. If we compare this with the implementation without the XQuery library, you can see the difference: using the XQuery library is neat, handy and easily maintainable, as the core logic lies in one place rather than being scattered throughout different locations in the services.

Also, please note we can place the XQuery library in the MDS, which allows the XQuery functions to be used across projects and further extends the benefits.

In this article we have seen how, by using the XQuery library, we can avoid rewriting code at various places in the service, which is hard to maintain. Instead, we can place the code in the XQuery library and expose it as an XQuery function wherever required.

I hope you have found this article useful and it will guide you to use some of these additional use cases, in your own engagements.

In previous blogs, I took you on the journey to containerise a seemingly monolithic application called MedRec, which runs on the WebLogic Application Server. Since then, the strangely familiar, yet fictitious, development team behind the MedRec application have realised that the Physician and Patient components of the system are together starting to look more like a big ball of mud. It's becoming increasingly difficult to ship features to the Physicians without having to disrupt the Patients user base and vice versa. Further amplifying the situation, the teams are under pressure to increase the value of their offering through the interoperability of health wearables and the establishment of Public APIs. To address these concerns, the development team has been split into two Agile teams:

Physician Services

Patient Care

"Gather together those things that change for the same reason, and separate those things that change for different reasons." - Robert C. Martin

The teams are designed to be able to deliver autonomously and the team members are excited about what this may bring. Whilst it will take some time to chip away at the big ball of mud, the teams are seeing the restructure as an opportunity to deliver independently and focus on the technology that is fit for purpose rather than trying and failing to come up with an organisational standard.

The OpenAPI Specification

The OpenAPI Specification (formerly known as the Swagger Specification) provides machine-readable interface files for describing, producing, consuming and visualizing RESTful web services. Combined with the Swagger tooling and the associated API designer ecosystem, such as apiary.io, the OpenAPI Specification makes it easy to generate interactive and discoverable API documentation on the fly, all from an easy-to-read YAML file, as well as to generate code stubs from the definitions.

At MedRec, one thing both agile teams agreed on is the need to adopt the OpenAPI Specification for API definition. Independently, each team will use the Swagger tooling combined with apiary.io to perform top-down design of their APIs.

Polyglot Microservices

The Physician Services development team are keen to utilise NodeJS for their API implementation. They have been building prototypes with it in any time they can get away from the Java development of the MedRec application, and they want to take this to the next level. They believe it will help move the Physician microservice forward with faster, more stable and more valuable innovations to come.

The Patient Care development team are less driven to move away from Java; after all, they had a big part in the redesign from J2EE to Spring. They won't be moving Patient Care to a different programming language any time soon, but they do see the importance of moving to an API-first architecture, where the APIs are designed and delivered to provide the agility to support multiple user experiences such as Web, Mobile, Partner Portal, Wearable Device and so on.

Fortunately, the choice of language doesn't really matter anymore for the MedRec organisation, as long as the outcome is delivered. The teams are empowered to deliver independently and interconnect only by APIs when it makes sense to do so. A positive side effect of adopting the OpenAPI Specification amongst the teams is that it allows them to generate stubs in whatever language they see fit. Let's explore this in detail by first installing Swagger locally and creating two simple microservice prototypes - one in NodeJS, the other in Spring-flavoured Java.

We are going to create an API from scratch using the top-down approach. To follow along with these steps you should have swagger installed locally. We can install it through npm as follows:

npm install -g swagger

Alright let's do this thing!

Creating our initial NodeJS project using Swagger

We start by creating a new Swagger project for the Physician Services

swagger project create physicians

You can see we are presented with an option to choose the framework that we wish to use. In our case we will choose the express option.

When we complete this step, it will generate a working hello world application for us with the following structure.

Editing our API using the Swagger Web Editor

Using the editor is fairly straightforward. We make changes on the left and see our documentation on the right being automatically generated on the fly. Pretty neat!

Where did it save to? Since we previously created a Swagger project, our changes to the YAML via the editor will end up in the file api/swagger/swagger.yaml. We can of course edit this file manually using any text editor, but without the Swagger Editor we can't see the generated API documentation in real time.

If you are playing along at home, you can copy and paste our updated swagger.yaml from this link on github into the left hand panel in the Swagger Editor.

Once we have done this, we are ready to test our APIs - and the neat thing is, we can do that straight from the Swagger Editor without writing a single line of code.

Testing our APIs using Mock Services

One really nice thing about Swagger is that we can get it to generate mocks without any coding required. To do this, all we need to do is exit the project we previously started and restart it with the following:

swagger project start -m

The -m indicates we want mocks.

Now that we have mocks, we can use the Swagger Editor page to interact with our API directly. Give it a go!

Let's click on Try this operation under GET /physicians. As this is a GET request we don't need to provide any payload; just click Send Request. Look ma, a JSON payload!
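You can also hit the mock from the command line (the swagger-node express scaffold listens on port 10010 by default; adjust if yours differs):

curl http://localhost:10010/physicians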

The Physician Services team are delighted! By hardly lifting a finger they already have a working mock API that they can share with key stakeholders for feedback and they can even deliver it to the UI developers so they can get a jump start on development of the minimal viable product for the new user experience for Physicians.

Writing the initial NodeJS implementation code

Let's add the crypto package. We will need this to generate an ID for new Physician records.

npm install crypto --save
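For illustration, ID generation can be as simple as the following (a minimal sketch; the sample code's exact approach may differ):

// generate a random hex ID for a new Physician record
const crypto = require('crypto');
const id = crypto.randomBytes(16).toString('hex');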

Let's create the file config/db.js with the contents from this link. This gives us a basic implementation with no data persistence; it simply stores changes in memory until we are ready to wire it up to a database.

Let's now create another file at api/controllers/physician.js with the contents from this link. This file implements all of the operations that back our new Physicians RESTful API.
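To give a feel for the shape of a swagger-node controller, here is a minimal hypothetical operation (the function name and db helper are illustrative, not the exact sample code):

// api/controllers/physician.js (sketch)
const db = require('../../config/db');

// an operationId of 'getPhysicians' in swagger.yaml routes here
function getPhysicians(req, res) {
  res.json(db.find());
}

module.exports = { getPhysicians };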

Testing our implementation using the Swagger Editor

It's the home stretch for the first Physicians API prototype. Let's restart this again, this time without the mocks. We want to test the real implementation.

swagger project start

From the Swagger Editor, have a go at creating, updating and deleting records. You should see that they are persisted and updated in memory within the NodeJS application.

There you have it, a NodeJS API for physicians built top down. All we need now is some real data persistence. We'll look at that in a later blog :)

Generating our Java project for the Patients Microservice

Meanwhile in the Patient Care camp, the team have created their own RESTful API using a similar approach, but they don't want the implementation to be in NodeJS. No worries! Swagger provides a language-agnostic code generation framework. All we need to do is install the swagger-codegen tool.
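One common way to install it on macOS is via Homebrew (on other platforms you can grab the CLI jar from Maven Central instead):

brew install swagger-codegen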

Multi-language code generation

When you run swagger-codegen without any arguments, it displays a list of all of the supported languages.

Code Generation of JAX-RS

The Patient Care team have decided to settle on JAX-RS 2.x with CXF as the REST framework. They can generate their server-side API stubs with a single command. Cool huh?!

swagger-codegen generate -i swagger.yaml -l jaxrs-cxf -o patients

Closing Remarks

In less than an hour, both of our Agile teams have built their own RESTful APIs with a minimum viable implementation. Independently, the teams can share their interactive API documentation hosted in Swagger with interested parties for feedback, and the frontend and backend developers are empowered to take the implementation to the next level.

I hope you found this use case / tutorial useful and get a chance to try it out on one of your own projects. Rapid API prototyping is a great skill to have in your back pocket, and with OpenAPI/Swagger we can be sure that whatever implementation we wish to use, we are covered.

TL;DR I will show you that the MedRec sample application for WebLogic can be used for deploying Java artifacts and configuring WebLogic resources on first boot of a WebLogic Docker image. We can do this with a 5 line Dockerfile and a medrec.py script which customises the WebLogic domain. This example is available to play with on Github at this link.

In this post, we will:

Discuss what the official WebLogic image does on first-boot, followed by options for configuring custom WebLogic resources in a WebLogic Docker container at the first-boot stage.

Show an end-to-end example of Application Deployment and WebLogic Configuration using the MedRec Sample Application running on WebLogic in a Docker container

Discuss the drawbacks of data that is managed outside of containers. We will show that we can specifically seed the data for the MedRec application outside Docker and discuss how it could be improved to better suit the Docker deployment models.

What happens on first boot of the official WebLogic Docker image

An official WebLogic container actually ships without any WebLogic domain, making it easy to extend an official WebLogic image with your own custom WebLogic domain. That said, if you do not override any defaults, then on boot the container will create an empty domain for you with some default settings. This is nice because it means you can simply run docker run -d -p 7001:7001 container-registry.oracle.com/middleware/weblogic:12.2.1.2 and you'll immediately have a working WebLogic domain with an admin console that you can log in to at http://localhost:7001/console.

You can perform basic customisations when using the official image by specifying environment variables. For example:
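A hedged example of what this can look like (the exact variable names differ between image versions, so treat these as illustrative and check the image documentation):

docker run -d -p 7001:7001 \
  -e DOMAIN_NAME=medrec_domain \
  -e ADMIN_PASSWORD=welcome1 \
  container-registry.oracle.com/middleware/weblogic:12.2.1.2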

Customising WebLogic on Docker first-boot

Now what if you wanted to do some serious customisation, like adding a JDBC Data Source or some JMS Queues or Topics? How would you best achieve that with this setup?

One way to achieve this is to write some custom WLST; another way is to use MyST. Shortly, we'll look at all these options in detail but first let's dive a bit deeper into what happens on the first boot of a WebLogic Docker Container based off the official WebLogic image.

On boot of the official WebLogic image, it will always run createAndStartEmptyDomain.sh (as long as you don't override the ENTRYPOINT). This script automatically creates an empty domain from scratch by calling out to create-wls-domain.py, then starts the Admin Server. For subsequent boots, the script sees that the domain already exists, so it proceeds to simply start it without re-creating it.

Option 1: Customising using WLST

Any WebLogic domain can be configured in Online or Offline mode using the WebLogic Scripting Tool (WLST) which is a Jython based interpreter for running automated WebLogic commands. Online mode assumes the server is running and Offline mode assumes it is not. The majority of common WebLogic resources such as JDBC and JMS can be configured in Offline mode.

For the WebLogic boot on Docker, the offline mode of WLST is very handy because it means we can configure WebLogic resources at the same time the domain is automatically created, before we start it up. That means that when the WebLogic domain boots, it will already have our custom WebLogic resources such as JDBC, JMS, Work Managers and so on.

So how do we do it? One simple way is to take a copy of create-wls-domain.py and then add our custom offline WLST at the end of the file. Specifically, we can add our custom WLST code after closeTemplate() but before exit().
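As a rough sketch, the kind of offline WLST we might splice in for a JDBC data source looks like the following (resource names, URL and targeting are illustrative only, not taken from the sample code):

# re-open the domain just written by the template script in offline mode;
# domain_path is assumed to match the path used in create-wls-domain.py
readDomain(domain_path)
cd('/')
create('MedRecDS', 'JDBCSystemResource')
cd('/JDBCSystemResource/MedRecDS/JdbcResource/MedRecDS')
create('myJdbcDriverParams', 'JDBCDriverParams')
cd('JDBCDriverParams/NO_NAME_0')
set('DriverName', 'org.apache.derby.jdbc.ClientDriver')
set('URL', 'jdbc:derby://localhost:1527/medrec;create=true')
cd('../..')
create('myJdbcDataSourceParams', 'JDBCDataSourceParams')
cd('JDBCDataSourceParams/NO_NAME_0')
set('JNDIName', 'jdbc/MedRecDS')
# target the data source at the Admin Server and persist the change
cd('/')
assign('JDBCSystemResource', 'MedRecDS', 'Target', 'AdminServer')
updateDomain()
closeDomain()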

If we extend the official image by copying in our customised create-wls-domain.py, it will be used instead of the default one. To do this we can have a simple Dockerfile like this:
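A minimal sketch (the COPY destination matches where the default script lives in the official image, as referenced later in this post):

FROM container-registry.oracle.com/middleware/weblogic:12.2.1.2
COPY create-wls-domain.py /u01/oracle/create-wls-domain.py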

When we build any image based on the above Dockerfile it will create a WebLogic domain on first boot with all of the customisations that we want. Pretty neat right?

Option 2: Using MyST for Docker container configuration automation

The previous option we showed requires some degree of low-level scripting knowledge. If you want to avoid this altogether another option is to use MyST where you can declaratively define the configuration to be included in your WebLogic Docker Container using the MyST Studio web-console. You can learn more about this approach in our Continuous Configuration Automation video series.

MedRec - An end-to-end example

As we showed in a previous blog, you can easily deploy WebLogic applications on first-boot by including them in the autodeploy directory of the domain home. After the domain is created, the server will notice any applications sitting in the autodeploy directory and deploy them automatically on first-boot. Let's use the MedRec monolithic application to build on that concept and extend it to also configure the Derby data source required for the MedRec database.

We can find copies of these files in a typical WebLogic Server installation with the sample apps included (it's an option at install time). These sample apps do not exist in the WebLogic installation that ships with Docker because it is intentionally kept light. You can obtain these files from a WebLogic installation containing the sample apps and copy them to the directory where our Dockerfile lives.
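Based on the description that follows, the five-line Dockerfile looks roughly like this (the sed injection on the last line is a plausible reconstruction; the authoritative version is in the linked Github repo):

FROM container-registry.oracle.com/middleware/weblogic:12.2.1.2
COPY medrec.ear physician.ear $DOMAIN_HOME/autodeploy/
COPY medrec.py /u01/oracle/
ADD seed /u01/oracle/seed/
RUN sed -i "66r /u01/oracle/medrec.py" /u01/oracle/create-wls-domain.py

Each line is explained below: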

1: We pull down the official WebLogic Image. For this to work make sure you have logged into the Oracle Container Registry and accepted the terms in the last 24 hours. Details on this are described here.

2: We copy the medrec.ear and physician.ear to the $DOMAIN_HOME/autodeploy/ directory so they will be automatically deployed

3: We copy our medrec.py which has our WLST offline automation script for creating the MedRec data source into the container at /u01/oracle/

4: We add the seed directory which contains medrec-domain.jar and medrec-data-import.jar to our container at /u01/oracle/seed/. We are going to use these jars later to seed the data for the MedRec application.

5: This is a bit of a hack, but it works nicely! Because I know that create-wls-domain.py already exists in the official image, I can use this line to inject the contents of medrec.py at line 66 of the existing file. Bear in mind that for different versions of the official image, the line to inject at may change. Of course, if you want to follow the previous approach where you have a create-wls-domain.py with your WLST customisations already included, you can simply do COPY create-wls-domain.py /u01/oracle/create-wls-domain.py and that will work just as well. In that case, you wouldn't need lines 3 and 5, just the additional COPY.

A note on managing data in containers

A great pattern for managing data required by web applications running in containers is to have the required data seeding handled by the application itself. This can significantly simplify the deployment process because it allows the application to be in control of seeding missing data. We will discuss this approach in more detail in a later blog where we look at the benefits of Liquibase for automated database change management and how it can be used from a web application to bring any state of the database into alignment with the version of the application being deployed.

As we do not have such a facility for our MedRec application (it hasn't been built to seed its data on boot), we will have to do this manually. If you recall, in our Dockerfile we included some jars at /u01/oracle/seed. We are going to connect into our Docker container using docker exec and use these jars to seed our data into the in-memory Derby database that runs by default on the WebLogic Docker container.
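The connection step looks like this; the exact seeding commands for the two jars are in the sample repo (the java invocation below is an assumption, shown only for illustration):

docker exec -it <container-id> bash
# inside the container, run the seeding jars from the seed directory
java -jar /u01/oracle/seed/medrec-data-import.jar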

After we have run this, from the MedRec app we can click on Login in the top-right corner, then under Patient we can log in with fred@golf.com and a password of weblogic. If we log in successfully, we should see a screen similar to the one below, and then we will know that the data has been seeded correctly.

I hope you have found this post useful! Stay tuned for future posts where we break this monolithic MedRec application into Polyglot Microservices.

In a previous blog post, I wrote about the new Decision Modeling capability introduced in Oracle Process Cloud Service. The earlier post provided an introduction to DMN and a usage scenario along with a working prototype.

The Decision Model and Notation (DMN) standard released by the OMG is a very powerful framework providing an implementation and modeling specification for solving many complex operational and strategic decisions in an organization. In this second blog post of the series, I will put the DMN engine in Oracle PCS through a litmus test by implementing a complex decision modeling use case.

The modeling approach shared in this blog is based on the principle of “Inherent Simplicity” which says that every situation, no matter how complex it initially looks, is exceedingly simple. I chose this use case as it shows how principles of decision modeling can allow us to break a layered problem into simpler and easy to implement fragments.

At the end of the post, I will also provide a link to where you can download the sample application. Buckle up, read on and enjoy!

Problem Statement

The problem statement was shared on the Decision Management Community website in February 2017 and an abstract of it is presented here:

Your decision model should determine potential fraud during an online product booking process. It’s based on an automatically calculated Fraud Rating Score that should be less than 200 to allow automatic booking. Fraud Rating Score depends on the following factors:

If Booked Product Type is POST-PAID-HOTEL add 5 to the score

If Booked Product Type is INTERNAL-FLIGHT add 100 to the score

If Booked Product Type is INTERNATIONAL-FLIGHT add 25 to the score

If Booked Product Type is CAR add 10 to the score

If Booked Product Type is PRE-PAID-HOTEL add 5 to the score

If there were no previous orders from this customer add 100 to the score

If Number of Orders from this customer between 1 and 10 including bounds add (100 – Number of Orders * 10) to the score

If Customer has previous disputes add 190 to the score

Solution

Decision Requirements Diagram

The problem statement can be broken down into different decision elements each producing outcomes which can roll up to provide the final interpretation. Since we are leveraging Decision Modeling and Notation to solve this, the complete solution is presented in form of both a decision model diagram and a decision logic representation.

To evaluate the overall scenario as part of this use case, the decision model is broken down into decisions and sub decisions as shown below:

The decision requirements diagram shows how the desired outcome can be achieved by combining modular sub decisions explained here:

Determine Booking Fraud is the top-level decision that asserts a Boolean flag of true|false as a final interpretation of the sub decisions. The flag is based on the Compute Fraud Score decision value (true for a fraud score greater than or equal to 200, false otherwise)

Check Active Disputes is a sub decision that iterates over all Dispute elements of a booking request and if any dispute is valid asserts a constant fraud score (190 in this case)

Check Past Orders is a sub decision that iterates over all Order elements of a booking request and if there are any past orders, asserts a calculated fraud score (100 – Number of Orders * 10)

Calculate Product Type Fraud Score is a sub decision that loops through all Products in a booking request and, based on the product type, assigns a fraud score. It also rolls up a sum of the fraud scores assigned for each product type.

The Compute Fraud Score sub decision invokes the above sub decisions and sums up the evaluated fraud score from each.

Input Data Definition

In order to implement the decision logic for this scenario, we would need to create a root level Booking request input data type. The data structure for the decision model is depicted below:

A Booking can have one or more Products. Each product has a unique productId. A product element also has a sub-element representing the productType.

A Booking can have one or more Orders. Each order has a unique orderId. The bookingState sub element represents if it is a current order or a completed one.

A Booking can have one or more Disputes. Each dispute has a unique disputeId. The disputeState sub-element represents whether it is a confirmed or an active dispute. Active disputes are currently under investigation.

Decision Logic Level

As I stated in my previous blog, a decision requirements diagram does not convey how the decision is implemented. This is done through decision logic elements. The overall decision model is broken down into individual decisions that are modular with the top level decision determining if the booking is a fraudulent one or not. The sub decisions use different types of boxed expressions to compute a fraud score for every scenario covered in the use case.

I started the solution implementation in Oracle PCS by creating a new Decision Model application, then added input data elements based on the structure of the Booking element described earlier.

The final implemented decision logic was aligned to the decision requirement diagram. Here is a teaser of the final decision model showing the input data elements and the individual decision logic elements.

The top level decision and each of the underlying decisions is explained in more detail in the following section:

Decision 1 - Check Active Disputes

Create a decision in the decision model editor using the configuration option provided here.

The expression uses an If-Then-Else construct to check whether the count of Dispute elements in the Booking request is greater than 0. If a dispute is found (a positive count), the decision assigns a fraud score of 190; otherwise the fraud score is 0.
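In FEEL terms, the logic boils down to something like this (illustrative; element names follow the data structure defined earlier):

if count(Booking.Disputes) > 0 then 190 else 0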

Decision 2 - Check Past Orders

Create a decision in the decision model editor using the configuration option provided here.

The conditional expression checks if the count of Order elements in the Booking request is greater than 0. If previous orders are found (a positive count), it uses the number of orders to determine the fraud score using the formula below.

100 – Number of Orders * 10

The higher the number of previous orders, the lower the fraud score. For example, if a booking has 9 previous orders, the fraud score would be 10, as opposed to a fraud score of 90 if a booking has 1 past order.

Decision 3 - Calculate Product Fraud Score

Create a decision in the decision model editor using the configuration option provided here.

This is a tricky sub decision to implement. There can be multiple products within a booking request, and a fraud score has to be assigned based on the product type of each product.

The sub decision is implemented as a function (a boxed expression that creates a parameterized logic to apply multiple times with different parameter values). The function accepts productType as an input and evaluates the fraud score based on a unique hit policy decision table.
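The rows of the decision table follow directly from the problem statement; with a unique hit policy, exactly one row can match a given productType:

productType            | Fraud Score
-----------------------|------------
POST-PAID-HOTEL        | 5
INTERNAL-FLIGHT        | 100
INTERNATIONAL-FLIGHT   | 25
CAR                    | 10
PRE-PAID-HOTEL         | 5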

Needless to say, this function has to be invoked from another sub decision by passing the appropriate parameter value.

Decision 4 - Loop Through Products

Create a decision in the decision model editor using the configuration option provided here.

The sub decision is implemented as a Friendly Enough Expression Language (FEEL) expression that loops over the Products in a booking and returns a list of fraud scores, one per product. This sub decision invokes the parameterized Calculate Product Fraud Score function, passing the product type for each product in the loop.
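An illustrative FEEL iteration for this (names match the model described above):

for product in Booking.Products return Calculate Product Fraud Score(product.productType)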

Decision 5 - Compute Fraud Score

Create a decision in the decision model editor using the configuration option provided here.

This sub decision is again implemented as a FEEL expression that sums all the product scores determined by the previous decisions. It uses a summation function to calculate the sum of all product type fraud scores retrieved from the Loop Through Products decision, and adds the results of the Check Past Orders and Check Active Disputes decisions.
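Sketched in FEEL, the roll-up looks roughly like:

sum(Loop Through Products) + Check Past Orders + Check Active Disputes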

Decision 6 - Determine Booking Fraud

Create a decision in the decision model editor using the configuration option provided here.

The conditional expression compares the computed fraud score to the 200 threshold. If it is greater than or equal to 200, it asserts true; otherwise false.

This completes the implementation of the decision logic.

Creating a Decision Service

A decision model can expose multiple decision services, which in turn are composed of a combination of decision functions. I created a single decision service with an operation determineBookingFraud by associating the top-level decision Determine Booking Fraud with it. After the decision model is deployed to the infrastructure, the decision service is available as a REST endpoint.

Testing the Final Decision Model

The following section shows how the overall solution is unit tested. It also shows how the decision model is exposed as a decision service REST operation which can accept a JSON request and assert an outcome. But before that, there is another powerful feature in Oracle PCS that I want to talk about: design-time unit testing of the decision model.

Unit Testing – Testing for Fraud

Request

Click on the blue arrow icon on your decision model to access the unit testing console. This will open a dialog that allows entering input data based on the defined data type. Let us say that for a fraudulent booking scenario, we enter the following data elements:
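A request shaped like the one below would produce the scores shown in the next section (field values are illustrative: one hotel product, one past order and one active dispute):

{
  "Booking": {
    "Products": [ { "productId": "P1", "productType": "POST-PAID-HOTEL" } ],
    "Orders":   [ { "orderId": "O1", "bookingState": "COMPLETED" } ],
    "Disputes": [ { "disputeId": "D1", "disputeState": "ACTIVE" } ]
  }
}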

Response through Sub Decision Chains

When the decision model is executed, all the sub-decisions execute too. The results of the decision functions for the above request would be:

The total score from the different combinations of products, orders and disputes in this scenario is:

Products: 5
Orders: 90
Disputes: 190
Total: 285

The calculated fraud score is 285, which is greater than the threshold value of 200, and hence the booking is flagged as fraudulent.

Summary

This post offered a glimpse of the true power of the Decision Model and Notation standard to model and implement complex real-world decisions. It also provided a preview of the different types of boxed expressions available in the PCS decision modeling engine and how to create and combine them to build complex decisions.

In the next blog post in this series, I will show how to work with lists and collections in Decision Model through another use case. If you have any comments, suggestions or feedback, please do not hesitate to share it here.

TL;DR WebLogic applications can be deployed to Oracle Container Cloud using a separate application image and volume mapping to the official WebLogic image for Docker at runtime, as shown in this 2 minute video.

But have we delivered any real value by pushing an empty containerised WebLogic Server or Oracle Database instance to the Cloud? Yes, it may excite the inner nerd in some of us, but it's also a bit like manufacturing a school bus that never delivers children to the school. What we really need to deliver is valuable running applications, not just an empty chassis.

So how can we create a WebLogic Docker image with Java Enterprise Edition (JEE) applications and resources?

Should we bake our own image from scratch including the apps?

Should we extend the official image to include the apps?

Or, should we volume map our apps at deploy-time?

All of the above are viable options each with their own Pros and Cons.

Let's explore them.

Bake our own 😟

Building new images in Docker is not difficult for those comfortable with shell scripting. We can create a Dockerfile FROM an existing image such as oraclelinux:7-slim, indicate the files we want to COPY into our image, set some ENV variables, RUN the various shell commands we need to configure it, and finally define the ENTRYPOINT for our container when it is booted.
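Assuming we've written such a Dockerfile, we build the image with a tag that matches the run command below:

docker build -t oracle/weblogic .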

And then boot the image locally with:

docker run -d -p 7001:7001 oracle/weblogic

When our container is booted our sample.war will be automatically discovered in the autodeploy directory and deployed so that we can access it from http://localhost:7001/sample

Now, it's not so hard to bake your own image in this way, but you do need to know a bit about the inner workings of automating an Oracle WebLogic installation, and that's just a simple example. What about running in Production Mode, using the WebLogic Node Manager, or configuring WebLogic resources? If you know what you're doing you are free to bake your image any way you please, but then again, if someone's already solved your problem, do you need to reinvent the wheel?

Extend the official image 😐

Oracle provide a Container Registry from where you can download official WebLogic Docker images. Once you have authenticated and accepted the terms, you can use the image as the basis for building derivative images containing your applications and resources.

Let's look at an example.

├── Dockerfile
└── sample.war

Our Dockerfile can be nice and lean. Simply, we copy our sample.war to the autodeploy directory of the DOMAIN_HOME. That's it:
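A minimal sketch of that Dockerfile (assuming the DOMAIN_HOME environment variable provided by the official image):

FROM container-registry.oracle.com/middleware/weblogic:12.2.1.2
COPY sample.war $DOMAIN_HOME/autodeploy/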

As you can see, just by using the official image, there are fewer files required to get up and running with a WebLogic Docker image containing your application(s). There is no need for an installer, a response file or even a configure script, because that's already done in the official image.

Again, we can build and run our image in the same way we did when we baked our own.

Distributing our images 😲

When it comes to distributing our images to the Oracle Container Cloud Service, we cannot rely on the Oracle Container Registry as it is read-only and does not accept custom-built images. Instead we must push these images to a writable container registry such as Docker Hub or Amazon ECR. That's when the baking and extending approaches start to hurt a little.

Consider we push either the baked or extended image to Docker Hub. In either case, we're going to need to push the whole image (including WebLogic Server) to the registry. That's at least 1.22 GB. This may leave you thinking:

"Why don't I just pull down the container from Oracle Container Registry and deploy my application to it using the Admin Console or a WLST script after it has been booted?"

And there is the trap, the temptation to return to the "old way" is strong. But you don't have to do it this way...

The power of the Docker layered file system 🙂

Docker images are made up of a number of file system layers with each change building on top of the previous one. In fact, every command of our Dockerfile builds a unique image layer. This provides an amazing capability for image reuse. If we have two images based off an official Oracle WebLogic image (or even another custom image) we can be guaranteed that it will reuse the same image in both derivative images. This means that not only do we reduce the download required to run the containers based off a common base image, but we reduce the upload effort as we only ever need to upload an image once to a container registry such as Docker Hub or our own private registry.

What if we could ship our application changes as increments to the Oracle Container Cloud without needing to push the WebLogic Server image to our personal Container Registry? You can...

Lightning fast deployments 😀

Did you know you can create a Docker image that contains only your Java Application(s) and then map it at deploy-time to the WebLogic Server image?

This is a super useful way to deploy Java applications to the cloud in seconds. You can deliver any application while avoiding a dreaded push of the whole WebLogic Server instance for each change. With the volume mapping approach, any lightweight data container can reuse the publicly available Oracle WebLogic image at deploy-time without having to be bundled with it.

Let's create a new Dockerfile to build an image that contains nothing more than our JEE application(s).
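A minimal sketch of such a Dockerfile (the autodeploy path is assumed to match the default DOMAIN_HOME of the official WebLogic image):

FROM scratch
COPY sample.war /u01/oracle/user_projects/domains/base_domain/autodeploy/
VOLUME /u01/oracle/user_projects/domains/base_domain/autodeploy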

Notice we are building our image FROM scratch, meaning that we aren't basing it on any other image, and that we are creating a VOLUME for our autodeploy directory. This will allow us to map it into another WebLogic Server image, provided the DOMAIN_HOME matches up with that in our application image.

Let's build that image.

docker build -t rubiconred/wls-sample-app .

Once the image is built we can use docker-compose to test it out locally before we try it on the Oracle Container Cloud Service. Let's create a docker-compose.yml file as below:
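A hedged reconstruction of that compose file (the service names are illustrative; the key idea is the volumes_from mapping between the application container and the WebLogic container):

version: '2'
services:
  wls-app:
    image: rubiconred/wls-sample-app
  weblogic:
    image: container-registry.oracle.com/middleware/weblogic:12.2.1.2
    volumes_from:
      - wls-app
    ports:
      - "7001:7001"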

After a few minutes our image will be downloaded from the Oracle Container Registry (provided we accepted the terms and did a docker login prior to running docker-compose; this process is documented in detail here).

Data Container Benefits

Data containers as demonstrated in the previous example are a great way to ship small applications when the alternative of bundling a whole application with the stack that runs it is not feasible. This can be the case when you have no private Docker Registry available, when you are concerned about breaching the EULA by distributing your application with WebLogic Server or when you want to avoid a redundant push of data that is already available in the official image anyway. Fortunately, you don't need to worry about any of these concerns when you are using data containers and dynamically linking them to an official WebLogic Docker image at deploy-time.

Data Container Limitations

You will have seen in the previous example that I built the application Docker image FROM scratch and it worked nicely with docker-compose. To dig a bit into what happened under the covers: the data container started and its volume was mapped to the WebLogic Server container. After this, the data container shut down because it had no command or entrypoint defined that would substantiate its need to keep running. That's fine, though, as we don't need it to be running once the data volume has been mapped to the underlying WebLogic Server instance.

Unfortunately, it will not necessarily work the same in all Container Cloud solutions. That is because production solutions may see a container shutting down immediately after a start as problematic and attempt to restart it. This is known as 'flapping containers'.

The thing with data containers is that they are expected to shut down immediately as part of the happy path. At the time of writing, the Oracle Container Cloud Service does not appear to specifically support this scenario, but there is a pretty neat workaround to get this going. We can add a command to our docker-compose.yml that executes sleep, so the data container stays running and Oracle Container Cloud Service is happy. So, let's make two adjustments to get our stack working on the Oracle Container Cloud Service.

First, we will update our application image to be based FROM busybox instead of scratch. This ensures the sleep command is available at runtime. Busybox is a tiny operating system, so our image will still be very small. Both adjustments are sketched below.
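A sketch of both adjustments (paths and sleep duration are illustrative):

FROM busybox
COPY sample.war /u01/oracle/user_projects/domains/base_domain/autodeploy/
VOLUME /u01/oracle/user_projects/domains/base_domain/autodeploy

And in docker-compose.yml, the application service gains a command entry so the container keeps running:

  wls-app:
    image: rubiconred/wls-sample-app
    command: sleep 1000000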

Setting up our Registry in Container Cloud

We're almost ready to run our WebLogic Application Stack on the Oracle Container Cloud Service (OCCS). Let's jump over to the Registries section and enter our details for the Oracle Container Registry. This will allow OCCS to be able to pull down the official Oracle WebLogic Image at deploy-time. If we are hosting our Application Image on the public Docker Hub, we don't need to do anything else for that to be pulled down, but if we are instead using a private registry for our Application Image, we will also want to add this private registry and our credentials under Registries as well. For more detailed instructions on setting up the registries in OCCS you may be interested in my previous blog on the subject.

Up and Running on the Cloud in 2 Minutes 😍

Now that our OCCS instance is able to pull down both the official WebLogic image and our application image, we can simply copy and paste the stack definition from docker-compose.yml into the Advanced section of our stack. Let's click Deploy and voilà!

We have pushed our locally tested WebLogic Java application(s) to the Oracle Container Cloud Service in a few minutes. We are running in the cloud the exact same containers we tested on our local workstation. Pretty neat, right?

Want to see that in action?

I recently delivered a 2 minute tech tip for the OTNArchBeat where I step through the details using a Java application for Medical Records. I hope you find it useful.

Rubicon Red MyST is the most advanced and intuitive DevOps solution for Oracle Middleware. MyST 5.5 is a major release specifically focused around Application Release Automation, Role-based Access Control and Event Auditing. It also comes with additional installation options making it easier than ever to get a fully functional DevOps for Oracle Middleware stack up and running. In this blog, we will dive into the new features of MyST 5.5.

The MyST 5.5 release comes with Source Code Introspection and a built-in Artifact Property Registry allowing the automatic discovery and management of properties from a single role-based management control console. At deploy-time service endpoints can be discovered by querying the Artifact Property Registry.

Source Code Introspection

Whether you're using MyST's Automated Build Server or a third-party CI server, MyST now performs source code introspection during automated builds, allowing customisable settings to be discovered automatically.

The MyST Plugin for Jenkins auto-discovers property references within the source code, where a property is an expression following the common property-reference syntax, e.g. ${some.property}. These properties are reported for each automated build from the CI server so that MyST automatically manages the association between given versions of an artifact and its customisable properties.

Artifact Property Registry

Through the MyST console, we can see the relationships between Artifacts and the common properties they share. This can provide some powerful insight. We can ask questions such as "If we change the Production Siebel endpoint, which Artifacts will be impacted?"

In addition to the property relationships, we can provide meaningful descriptions and default values for properties directly from within the console.

Promotion Stage Models

At deploy-time MyST will automatically determine which properties need to be defined for the artifacts in a given deployment. This will generate a Stage Model which can be updated with the environment-specific values. This model is persisted at the platform instance-level so we only need to enter the values for the first deployment and they will be reused for subsequent deployments (unless of course you want to change them).

Application Release Approvals

At deploy-time, users can be granted Approver and Deployer role permissions for manual promotion stages within a Release Pipeline. These role permissions integrate seamlessly with the existing Role Based Access Control framework.

Role Based Access Control 2.0

In addition to the Application Release Automation enhancements, MyST 5.5 ships with a complete overhaul of the Role Based Access Control capabilities, including:

New Intuitive User Experience for managing role permissions.

Role Based Access Control for Application Release Automation including Release Pipelines, Pipeline Templates, Artifacts and Application Blueprints.

Finer-grained Role Based Access Control for resources such as the Platform Instance to allow for the design of roles at a much more granular level with restricted permissions.

A clearer and more consistent user experience, so that only controls which are operational for the given user's role are displayed. Previously, users who were prevented from executing certain controls based on their role could still see the controls, even though they would be disallowed from choosing them.

Simpler requests for the necessary grants, thanks to an improved authorization failure message in the case of insufficient permissions.

A focus on value-added capabilities through the removal of unused ones, such as Bulk Activation and Deactivation of Roles, where the old approach is no longer needed thanks to the new user experience.

Removed Obsolete Role Permissions such as the ability to create and delete Continuous Delivery Profiles.

Application Release Automation resources can be grouped into Workspaces allowing for Multi-tenancy.

Comprehensive Event Auditing

As organisations continue to manage all automated tasks from the MyST Studio console, it can be used as a powerful tool for change auditing. MyST 5.5 comes with a number of enhancements including:

A complete audit of all events, including failed events, irrespective of whether the event was a data retrieval or a manipulation.

Authorised users can see which user performed which action including each login and logout operation.

Comprehensive troubleshooting information by capturing the complete error details in the audit log for all failed requests.

Drill down on each event to see low-level HTTP interactions for complete traceability.

Simplified installation

MyST supports direct integration with an organisation's existing Version Control System, Continuous Integration (CI) Server and Artifact Repository to provide a seamless release management experience across a best-of-breed landscape.

But what if an organisation does not have an existing CI Server or Artifact Repository?

In MyST 5.5, the installer now has options to include an embedded MyST Build Server for Continuous Integration and a Maven-compliant Artifact Repository for storing and managing Artifacts built on the MyST Build Server so organisations have everything they need in one place. Simply, we can run the installer and at the click of a button we will have a fully functional and self-sufficient MyST Stack ready for Automated Platform Lifecycle Management and Application Release Automation.

Summary of MyST 5.5 features

As we have shown, MyST 5.5 is a major release with many exciting new features including:

Application Release Automation 2.0

Release Management Approvals

Auto-discovery of properties from code

Dynamic lookup of service endpoints and operation settings

Centrally managed Artifact Property Registry with the ability to discover all artifacts that use common properties (and vice-versa).

Role Based Access Control 2.0

Usability First

Finer-grained control

Comprehensive Event Auditing

Search, filter and drill down on all events

MyST Studio Installer 2.0

Batteries included for a fully functional DevOps for Middleware stack.

Want to learn more about MyST?
Why not take our free MyST TestDrive for a spin?

Whether you like it or not, organisations worldwide are using customisable commercial off-the-shelf software products to deliver the Enterprise Integration solutions that underpin the systems businesses rely on every day. Whether you're filling out a pre-populated web form or receiving an automated notification email, that data securely crossed the network at some point, likely originating from one or more disparate systems. Middleware, Integration and APIs are that invisible glue: unknown to many, yet a constant source of change for any organisation.

In this blog, we will look at the Release Management challenges organisations may face for Middleware, Integration and APIs in general, followed by some of the specifics for Oracle SOA Suite and related Middleware. Finally, we will introduce the driving principles behind Application Release Automation using MyST, the market-leading DevOps for Oracle Middleware software, and show how organisations are using it to release valuable business outcomes while controlling delivery cost and risk.

But first...

What's in an integration release?

How is it different to the release of a standalone web application?

Integration releases most commonly require different settings on a per-environment basis. For instance, in a development environment, a Customer Relationship Management (CRM) Integration may point to a cloud-based Siebel backend. In production, it may point to an on-premise Siebel backend. We may also have monitoring and security settings which differ on each environment. We are in-a-sense deploying a slightly customised version of our code with different endpoints and settings for each environment so...

How do we ensure that we are promoting and testing the same codebase from Development through to Production?

How do we ensure that a developer doesn't accidentally promote an incomplete piece of new code to a controlled environment when intending to only change the endpoints?

How do we ensure a release engineer understands what endpoints to customise and which version of the code to promote?

And most importantly, how do we ensure that we are testing in a way that leaves no nasty surprises by the time we get to production?

But first, let's dig a little deeper into how Integration Release Management is typically done within an Enterprise.

Integration Delivery in an Enterprise

The evolution of a piece of integration code, from conception to release, will likely involve interactions and collaboration among a number of individuals, or even a number of teams. For instance:

Solution Architects and Business Analysts (BAs) may work with Business Stakeholders to formalise the business-case, produce requirements and ideally develop an initial set of acceptance test cases.

Testers will work with the BAs and Developers to flesh out the Testing Strategy and execute the Exploratory and Automated Testing according to the plan.

Developers will write the source code for the solution based on the requirements and in alignment with the acceptance criteria. If they are not the ones responsible for Operations they should at least be working with those that are to ensure the solution is production-ready. They may also write a suite of unit tests and potentially system and integration tests to:

prove their code is working

protect against regression

document the code's behaviour

provide design guidance

support future refactoring

Operations (Ops) will require an understanding of the deployment process and the operational settings within the codebase so that endpoints, security and monitoring can be applied at deploy-time, and so that the solution can be supported and maintained in production. It goes without saying they should be working closely with Developers.

At release-time, Managers and other so-called "Gate Keepers" will want to ensure the preconditions are met and that the release was successful. They may ask questions before a release like "What components are affected by this change?" and "Are all the tests passing?"

After the release, most likely the Stakeholders will want to make changes. So who should be informed to make the change? Is it an Ops change, a code fix or a new set of requirements to be driven by BAs?

When the necessary inter-personal interactions are unclear and performed in an ad hoc manner, the release process can be painful and error-prone even for small, close-knit teams. When we add the Enterprise dimension, we may be talking about large, bureaucratic and siloed teams, so the problem doesn't get any easier, and we're not just talking about a simple web application. And what about all of the end-system owners? We can't leave them out of the picture.

It has been said and shown by many startups and enterprises that DevOps culture can play a part in addressing the communication gaps within an organisation (in particular, Dev, QA and Ops) to avoid wasteful (non-value added) tasks in the delivery lifecycle.

But of course, to be completely effective, there is no "DevOps" without automation and for the highest business value, automation itself should be seen across the whole delivery chain not just development and release but everything in between. We should be using automation to help form the contract of understanding between developers and operations and visualising everything that is important to stakeholders in the delivery lifecycle. Automation is not the solution in and of itself but the enabler for better informed and collaborative teams to be more productive. This results not just in personal fulfilment for employees but deep organisational productivity through the elimination of avoidable waste.

The automation underpinning a Continuous Delivery Pipeline can help teams make decisions faster by providing them with the insight they need and reducing the reliance on guesswork, snooping and politics.

An Automated Pipeline can visualise a single change from a developer's commit to a version control system through to its deployment to a production system, while managing the approvals in between.

Automation can empower the release engineer by auto-discovering the operational variables to be applied per deployable artifact and prompting for them on release if they are not set.

Automation can help to discover implicit and explicit dependencies between components within large Enterprise Integration landscapes to allow change impact to be well defined.

Build and Deploy Concepts

SOA, Microservice and general Middleware solutions tend to wire services together at deploy-time rather than bundling them as libraries in a monolithic application. To support this notion of customising endpoints and other operational settings at deploy-time each technology has a concept of a "plan" or "customisation" file.

Typically, there will be one file per environment, and it will contain all of the required customisations to be applied to an artifact for the given environment. In the Oracle Fusion Middleware stack, the corresponding plan concepts for the various Artifact types have different names but fundamentally achieve the same objective.

SOA Composites use per-environment Configuration Plans

Oracle Service Bus Projects use per-environment Customization Files

Java and WebLogic Applications use per-environment Deployment Plans

Without these solutions, developers would need to build a unique artifact for each environment, which would be highly error-prone due to its complexity and the higher risk of making a mistake, such as forgetting to replace a property. However, it should be said that even the per-environment, per-artifact customisation strategy has its own weaknesses.

Welcome to Configuration Plan Hell

If we consider a Customer service capability that needs to be deployed to DEV, TST, UAT, STG and PRD, we may end up with a number of configuration files, one per environment, looking like this:
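For illustration (the file names are hypothetical), the pile of per-environment plans might look like:

CustomerService_configplan_DEV.xml
CustomerService_configplan_TST.xml
CustomerService_configplan_UAT.xml
CustomerService_configplan_STG.xml
CustomerService_configplan_PRD.xml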

What happens when we evolve our code base? Do you regenerate each plan again from scratch? What happens if we forget to do this for one environment? Now it's out of sync. Or what if we need a new environment? Do we take a copy of production and edit it for the new environment? We may well do that, but we must be very careful, as forgetting to change a setting may result in your non-production environment pointing directly at production!

What happens if you have to do something as simple as changing a single endpoint? If it is referenced by multiple artifacts, that could be a lot of files to change and keep consistent.

These kinds of patterns, or dare I say anti-patterns, of per-environment, per-artifact customisation can lead to undesirable workarounds. The most common being:

Avoiding customisation files altogether

Custom coding workarounds

Let's look at these.

Anti-pattern #1: Avoiding configuration plans altogether

Here, Developers and Administrators ditch Configuration Plans altogether and change the code with the environment differences on a per-deploy basis when deploying directly from the JDeveloper IDE. This is a slippery slope, as it leaves no way of repeating what was done consistently across multiple releases against an environment. The runtime solution may become unpredictable, as it may accidentally point to the wrong endpoints or environments.

Anti-pattern #2: Custom coding workarounds

It's no surprise that coders like to solve problems with code. In an attempt to avoid configuration plan hell, developers may build a custom solution which constructs a unique artifact per environment. They may create their own properties file per environment and take the environment name as a parameter. This approach may work for a while, until someone makes a mistake like deploying the dev artifact to the test environment. Whatever happened to promoting a single artifact to every environment?

The MyST antidote

By now it should be clear that building an artifact per environment can introduce major application release risk through inconsistency and complexity. MyST release management avoids the need to build artifacts per environment and guarantees release certainty, consistency and reliability. This is achieved through a number of driving principles:

Every artifact built is a potential release candidate.

Every artifact is packaged with operational variables to be defined at deploy-time on a per-environment basis.

Every artifact should be connectable to other integration endpoints at deploy-time without a need to rebuild the artifact per-environment.

Other unique settings per environment such as monitoring, security and performance tuning parameters should also be definable without a need to rebuild the artifact.

Common environment-specific values are version-controlled centrally and looked up at deploy-time.

MyST utilises the customisation file type available to each technology (e.g. Configuration Plans for SOA Suite, Customization Files for OSB). The file can be packaged within the artifact itself so it doesn't drift from the code it is designed to support, and it can contain references to operational variables.

Below is an example of a generic OSB Customization file with ${app.stock.host} and ${app.stock.port} property references that are replaced at deploy-time.
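A hedged sketch of what such a fragment can look like (the real file generated by OSB is considerably more verbose, and the project path shown is illustrative):

<cus:customizations xmlns:cus="http://www.bea.com/wli/config/customizations"
                    xmlns:xt="http://www.bea.com/wli/config/xmltypes"
                    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <cus:customization xsi:type="cus:EnvValueCustomizationType">
    <cus:envValueAssignments>
      <xt:envValueType>Service URI</xt:envValueType>
      <xt:location>StockApp/BusinessServices/StockBS</xt:location>
      <xt:value>http://${app.stock.host}:${app.stock.port}/stock</xt:value>
    </cus:envValueAssignments>
  </cus:customization>
</cus:customizations>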

Where the out-of-the-box customisation technologies fall short, you can define XPath, Property Reference or Find and Replace Rules against file patterns through a powerful MyST customisation plan. Can't change a service account with an OSB customization file? No problem! Create an XPath rule and apply that at deploy-time.

Containerized. Releasable. Portable.

Now that any given source artifact can be deployed to any environment by decoupling the runtime settings (such as external endpoints, security and monitoring) from the code, it's easy to compose portable solutions that run on-premise or in the cloud, on our local development environment or in production. We get away from the "works on my machine" mentality and ensure we are not wasting our time testing anything less than a production release candidate. We can consistently test and promote our code artifacts from Development to User Acceptance Testing and ship to Production at any time, without a unique and untested release ceremony every time we do so.

In this post, we've shown first-hand how the MyST antidote for Oracle Middleware can reduce risk and cost in the delivery lifecycle, and how you can benefit from those principles today.

If you'd like to learn more about MyST for Application Release Automation (ARA), you can check out our summary of the MyST 5.5 release here.