Amis Blog

The topic of quickly creating an Oracle development VM is not new. Several years ago, Edwin Biemond and Lucas Jellema wrote several blogs about this and gave presentations on the topic at various conferences. You can also download ready-made VirtualBox images from Oracle here and, specifically for SOA Suite, here.

Over the years I have created a lot (probably 100+) of virtual machines manually. For SOA Suite, the process of installing the OS, the database, WebLogic Server and SOA Suite itself can be quite time consuming and boring when you have already done it so many times. Finally my irritation passed the threshold at which I needed to automate it! I wanted to easily recreate a clean environment with a new version of specific software. This blog is a start: provisioning an OS and installing the XE database on it. It might seem like a lot, but this blog contains the result of just two days of work, which indicates it is easy to get started.

I decided to start from scratch and first create a base Vagrant box using Packer, which in turn uses Kickstart. Kickstart is used to configure the OS of the VM, such as the disk partitioning scheme, the root password and the initial packages. Packer makes using Kickstart easy and allows easy creation of a Vagrant base box. After the base Vagrant box was created, I could use Vagrant to create the VirtualBox machine, configure it and do additional provisioning, such as (in this case) installing the Oracle XE database.

If you just want a quick VM with the Oracle XE database installed, you can skip the Packer part. If you want the option to create everything from scratch, you can first create your own base image with Packer and use it locally, or use the Vagrant Cloud to share the base box.

Oracle provides Vagrant boxes you can use here. Those boxes have some default settings. I wanted to know how to create my own box to start from, in case I, for example, wanted to use an OS not provided by Oracle. I was presented with three options in the Vagrant documentation; using Packer was presented as the most reusable option.

Packer

‘Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.’ (from here) Download Packer from HashiCorp (here).

Avast Antivirus, and maybe other antivirus programs, does not like Packer, so you might have to temporarily disable it or tell it that Packer can be trusted.

virtualbox-iso builder

Packer can be used to build Vagrant boxes (here) but also boxes for other platforms such as Amazon and VirtualBox. See here. For VirtualBox there are two so-called builders available: start from scratch by installing the OS from an ISO file, or start from an OVF/OVA file (a pre-built VM). Here I of course chose the ISO file, since I want to be able to easily update the OS of my VM and do not want to create a new OVF/OVA file for every new OS version. Thus I decided to use the virtualbox-iso builder.

ISO

For my ISO file I decided to go with Oracle Linux Release 7 Update 4 for x86 (64 bit), which is currently the most recent version. In order for Packer to work fully autonomously (and make it easy for the developer), you can provide a remote URL to a file you want to download. For Oracle Linux there are several mirrors that provide this; look one up close to you here. You have to update the checksum in the template file (see below) when you update the ISO image to a new OS version.

template JSON file

In order to use Packer with the virtualbox-iso builder, you first need a template file in JSON format. Luckily, samples for these have already been made available here. You should check them, though. I made my own version here.
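To give an idea of what such a template contains, here is a minimal virtualbox-iso sketch with a Vagrant post-processor. The ISO URL, checksum, credentials and file names below are placeholders and assumptions, not the values from my actual template:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "guest_os_type": "Oracle_64",
      "iso_url": "http://mirror.example.com/OracleLinux-R7-U4-Server-x86_64-dvd.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "REPLACE_WITH_ISO_CHECKSUM",
      "http_directory": "http",
      "boot_command": [
        "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
      ],
      "ssh_username": "root",
      "ssh_password": "Welcome01",
      "ssh_wait_timeout": "30m",
      "shutdown_command": "shutdown -P now"
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "output": "ol74.box"
    }
  ]
}
```

The boot_command types the Kickstart location at the installer boot prompt; Packer serves the ks.cfg file from the http_directory over its built-in HTTP server.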

Kickstart

In order to make the automatic installation of Oracle Linux work, you need a Kickstart file. Such a file is generated automatically when you perform a manual installation; it ends up at /root/anaconda-ks.cfg. Read here. I’ve made my own here in order to get the correct users, passwords, installed packages and swap partition size.
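A minimal Kickstart sketch looks like the following. The password, timezone and partition sizes here are example values (not the ones from my actual file):

```text
install
cdrom
lang en_US.UTF-8
keyboard us
rootpw Welcome01
timezone Europe/Amsterdam --utc
bootloader --location=mbr
clearpart --all --initlabel
part /boot --fstype=xfs --size=500
part swap --size=4096
part / --fstype=xfs --size=1 --grow
reboot

%packages
@core
%end
```

The rootpw line must match the SSH credentials in the Packer template, since Packer uses that account to connect after installation.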

After you have a working Kickstart file and the Packer template ol74.json, you can kick off the build with:

packer build ol74.json

Packer uses a specified username to connect to the VM (present in the template file). This should be a user created in the Kickstart script. For example, if you have a user root with password Welcome01 in the Kickstart file, you can use that one to connect to the VM. Creating the base box will take a while, since Packer first downloads the ISO file and then performs a complete OS installation.

You can put the box remote or keep it local.

Put the box remote

After you have created the box, you can upload it to the Vagrant Cloud so other people can use it. The Vagrant Cloud free option offers unlimited free public boxes (here). The process of uploading a base box to the Vagrant Cloud is described here. You first create a box and then upload the file Packer has created as the provider.

After you’re done, the result will be a Vagrant box which can be used as base image in the Vagrantfile. This looks like:
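For example, the relevant part of the Vagrantfile could look as follows. The box name is an assumption; substitute the name under which you published your box on the Vagrant Cloud:

```ruby
Vagrant.configure("2") do |config|
  # Reference the base box published on the Vagrant Cloud
  config.vm.box = "yourname/ol74"
end
```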

Use the box locally

Alternatively you can use the box you’ve created locally:
vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

You of course have to change the box location to match your environment.

And use ol74 as box name in your Vagrantfile. You can see an example of a local and remote box here.

If you have recreated your box and want to use the new version in Vagrant to create a new Virtualbox VM:
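Assuming the box was added under the name ol74, replacing it could look like the following. Note that vagrant destroy throws away the existing VM, so only do this for a machine you can afford to rebuild:

```shell
vagrant box remove ol74
vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box
vagrant destroy -f
vagrant up
```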

You now have a clean base OS (well, relatively clean: I added a GUI) and you want to install stuff on it. Vagrant can help you do that. I’ve used a simple shell script to do the provisioning (see here), but you can also use more complex pieces of software like Chef or Puppet. These are of course better suited in the long run to also update and manage machines. Since this is just a local development machine, I decided to keep it simple.

These files can be downloaded here, except for the oracle-xe-11.2.0-1.0.x86_64.rpm.zip file, which can be downloaded here.

Oracle XE comes with an rsp file (a so-called response file), which makes automating the installation easy. This is described here. You just have to fill in some variables, like the password and port. I’ve prepared such a file here.
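The response file itself is small. A sketch with example values (the parameter names follow the XE response file; the values here are assumptions you should change):

```text
ORACLE_HTTP_PORT=8080
ORACLE_LISTENER_PORT=1521
ORACLE_PASSWORD=Welcome01
ORACLE_CONFIRM_PASSWORD=Welcome01
ORACLE_DBENABLE=true
```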

The Java 9 SE specification for the JDK does not contain the JSON-P API and libraries for processing JSON. In order to work with JSON-P in JShell, we need to add the libraries – that we first need to find and download.

I have used a somewhat roundabout way to get hold of the required jar-files (but it works in a pretty straightforward manner):

This will download the relevant JAR files to the subdirectory target/dependencies.
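The Maven step I used boils down to something like the following, assuming a pom.xml that declares the JSON-P API and an implementation as dependencies:

```shell
mvn dependency:copy-dependencies -DoutputDirectory=target/dependencies
```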

3. Copy JAR files to a directory – that can be accessed from within the Docker container that runs JShell – for me that is the local lib directory that is mapped by Vagrant and Docker to /var/www/lib inside the Docker container that runs JShell.

4. In the container that runs JShell:

Start JShell with this statement that makes the new httpclient module available, for when the JSON document is retrieved from an HTTP URL resource:

jshell --add-modules jdk.incubator.httpclient

5. Update the classpath from within JShell

To process JSON in JShell – using JSON-P – we need to set the classpath to include the two jar files that were downloaded using Maven.
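One way to do this is to pass the class path when starting JShell (the jar file names below are assumptions; use whatever Maven put in the dependencies directory):

```shell
jshell --class-path /var/www/lib/javax.json-api-1.1.jar:/var/www/lib/javax.json-1.1.jar
```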

Start a container based on the openjdk:9 image, exposing its port 80 on the docker host machine and mapping folder /vagrant (mapped from my Windows host to the Docker Host VirtualBox Ubuntu image) to /var/www inside the container:
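The command for this could look as follows; the image and paths are as described, and the exact flags are my reconstruction of the original command:

```shell
docker run -it -p 8080:80 -v /vagrant:/var/www openjdk:9 /bin/sh
```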

Create Java application with custom module: I create a single Module (nl.amis.j9demo) and a single class nl.amis.j9demo.MyDemo. The module depends directly on one JDK module (httpserver) and indirectly on several more.

The root directory for the module has the same fully qualified name as the module: nl.amis.j9demo.

This directory contains the module-info.java file. This file specifies:

which modules this module depends on

which packages it exports (for other modules to create dependencies on)

In my example, the file is very simple – only specifying a dependency on jdk.httpserver:
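Based on that description, the module-info.java would look roughly like this:

```java
// module-info.java for module nl.amis.j9demo
module nl.amis.j9demo {
    requires jdk.httpserver;
}
```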

The Java class MyDemo has a number of imports. Many are for base classes from the java.base module. Note: every Java module has an implicit dependency on java.base, so we do not need to include it in the module-info.java file.

This code creates an instance of HttpServer – an object that listens for HTTP requests at the specified port (80 in this case) and then always returns the same response (the string “This is the response”). As meaningless as that is – the notion of receiving and replying to HTTP requests in just a few lines of Java code (running in the OpenJDK!) is quite powerful.
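A minimal sketch of such a class (my own reconstruction of what is described above, without the package declaration, not the exact code from the module):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class MyDemo {

    // Create and start an HttpServer that answers every request to /apps
    // with the same fixed string
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/apps", exchange -> {
            byte[] body = "This is the response".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(80); // port 80, as used in this article; requires root inside the container
    }
}
```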

The traditional equivalent with a classpath for the JAR file(s) would be:

java -classpath lib/nl-amis-j9demo.jar nl.amis.j9demo.MyDemo
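For reference, the module-path way of launching the same application (my assumption of the modular form, using the module and class names from the article) looks like this:

```shell
java --module-path lib -m nl.amis.j9demo/nl.amis.j9demo.MyDemo
```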

Because port 80 in the container was exposed and mapped to port 8080 on the Docker Host, we can access the Java application from the Docker Host, using wget:

wget 127.0.0.1:8080/apps

The response from the Java application is hardly meaningful. However, the fact that we get a response at all is quite something: the ‘remote’ container based on openjdk:9 has published an HTTP server from our custom module that we can access from the Docker Host with a simple HTTP request.

Jlink

I tried to use jlink to create a special runtime for my demo app, consisting of the required parts of the JDK and my own module. I expect this runtime to be really small.

The JVM modules, by the way, are located in /docker-java-home/jmods in my Docker container.
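The jlink invocation I would expect to use for this is sketched below; the module names come from the article, while the output directory and trimming flags are assumptions:

```shell
jlink --module-path /docker-java-home/jmods:lib \
      --add-modules nl.amis.j9demo \
      --output /opt/j9demo-runtime \
      --no-header-files --no-man-pages --compress=2
```

The resulting directory contains only java.base, jdk.httpserver, the custom module and their dependencies, which is why the runtime can be a fraction of the size of a full JRE.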

the OpenJDK has all previously exclusive commercial features from the Oracle (formerly Sun) JDK – this includes the Java Flight Recorder for real-time monitoring/metrics gathering and analysis,

Java 9 will be succeeded by Java 18.3, 18.9 and so on (a six-month cadence) – a much quicker evolution while maintaining quality and stability

Jigsaw is finally here; it powers the coming evolution of Java and the platform, and it allows us to create fine-tuned, tailor-made Java runtime environments that may take less than 10-20% of the full-blown JRE

Java 9 has many cool and valuable features besides the modularity of Jigsaw – features that make programming easier, more elegant, more fun and more lightweight

One of the objectives is “Java First, Java Always” (instead of: when web companies mature, then they switch to Java); having Java enabled for cloud, microservice and serverless workloads is an important step in this

Note: during the JavaOne keynote, Spotify presented a great example of this pattern: they have a microservices architecture (from before it was called microservices); most services were originally created in Python, with the exception of the search capability; due to scalability challenges, all Python-based microservices have been migrated to Java over the years. The original search service is still around. Java not only scales very well and has the largest pool of developers to draw from, it also provides great runtime insight into what is going on in the JVM

I have played around a little with Java 9, but now that it is out in the open (and I have started working on a fresh new laptop – Windows 10) I thought I should give it another try. In this article I will describe the steps I took from a non-Java-enabled Windows environment to playing with Java 9 in JShell – in an isolated container, created and started without any programming, installation or configuration. I used Vagrant and VirtualBox – both were installed on my laptop prior to the exercise described in this article. Vagrant in turn used Docker and downloaded the OpenJDK Docker image for Java 9 on top of Alpine Linux. All of that was hidden from view.

The steps:

0. Preparation – install VirtualBox and Vagrant

1. Create Vagrant file – configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that image as well as a Docker Container with OpenJDK 9

2. Run Vagrant for that Vagrant file to have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container

3. Connect into VirtualBox Docker Host and Docker Container

4. Run jshell command line and try out some Java 9 statements

In more detail:

1. Create Vagrant file

In a new directory, create a file called Vagrantfile – no extension. The file has the following content:
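A sketch of such a Vagrantfile is shown below. The box name and container settings are assumptions; the essence is the Docker provisioner that pulls and runs the openjdk:9 image on the Ubuntu VM:

```ruby
Vagrant.configure("2") do |config|
  # Ubuntu box that will act as the Docker host
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "forwarded_port", guest: 8080, host: 8080

  # Install Docker on the VM, pull the OpenJDK 9 image and run a container
  config.vm.provision "docker" do |d|
    d.pull_images "openjdk:9"
    d.run "j9",
      image: "openjdk:9",
      args: "-v '/vagrant:/var/www'",
      cmd: "tail -f /dev/null" # keep the container alive
  end
end
```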

It is configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that VB image as well as a Docker Container based on the OpenJDK:9 image.

2. Run Vagrant for that Vagrant file and have it spin up the VirtualBox VM, install Docker into it, pull the OpenJDK image and run the container:

3. Connect into VirtualBox Docker Host and Docker Container

Using

vagrant ssh

to connect into the VirtualBox Ubuntu Host and

docker run -it openjdk:9 /bin/sh

to run a container and connect into the shell command line, we get to the environment primed for running Java 9:

At this point, I should also be able to use docker exec to get into the container that was started by the Vagrant Docker provisioning configuration. However, I had some unresolved issues with that – the container kept restarting. I will attempt to resolve that issue.

4. Run jshell command line and try out some Java 9 statements

JShell is the new Java command line tool that allows REPL-style exploration – somewhat similar to, for example, Python and JavaScript (and even SQL*Plus).

Here is an example of some JShell interaction:

I tried to use the new simple syntax for creating collections from static data. Here I got the syntax right:
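For example, the Java 9 convenience factory methods for immutable collections (my own snippet, similar in spirit to what I tried in JShell):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionFactories {
    public static void main(String[] args) {
        // Compact, immutable collections from static data (new in Java 9)
        List<String> languages = List.of("Java", "Kotlin", "Scala");
        Set<Integer> primes = Set.of(2, 3, 5, 7);
        Map<String, Integer> ports = Map.of("http", 80, "https", 443);

        System.out.println(languages.size());      // 3
        System.out.println(primes.contains(5));    // true
        System.out.println(ports.get("https"));    // 443
    }
}
```

The resulting collections are immutable; calling add or put on them throws an UnsupportedOperationException.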

It took me a little time to find out the exit strategy. Turns out that /exit does that trick:

In summary: spinning up a clean, isolated environment in which to try out Java is not hard at all. On Linux – with Docker running natively – it is even simpler, although even then using Vagrant may be beneficial. On Windows it is also quite straightforward – no complex sys admin stuff required and hardly any command line things either. And that is something we developers should start to master – if we do not do so already.

Issue with Docker Provider in Vagrant

Note: I did not succeed in using the Docker provider (instead of the provisioner) with Vagrant. Attempting that (cleaner) approach failed with:

“Bringing machine ‘j9’ up with ‘docker’ provider…
The executable ‘docker’ Vagrant is trying to run was not
found in the %PATH% variable. This is an error. Please verify
this software is installed and on the path.”

I have looked across the internet and found similar reports, but did not find a solution that worked for me.

It is quite hard to choose a feature to write about. There are so many to talk about, and almost every day another favorite of the month. Sliding time windows. The Oracle Developer Community – well, that is us: all developers working with Oracle technology, sharing experiences and ideas, helping each other with inspiration and solutions to challenges, making each other and ourselves better. Sharing fun and frustration, creativity and best practices, desires and results. Powered by OTN, now known as ODC, where we can download virtually any software Oracle has to offer and find resources – from articles and forum answers to documentation and sample code. This article is part of the community effort to show appreciation – to the community and to the Oracle Developer Community (organization).

And the WayBack Machine is just one of many examples of timelines – the presentation of data organized by date. We all know how pictures say more than many words, and how tables of data are frequently much less accessible to users than to-the-point visualizations. For some reason, data associated with moments in time has always held special interest for me. As do features that are about time – such as Flashback Query, 12c Temporal Database and SYSDATE (or better yet: SYSTIMESTAMP).

To present such time-based data in a way that reveals the timeline and the historical thread that resides in the data, we can make use of the Timeline component that is available in:

Note that in all cases it does not take much more than a dataset with a date (or date-time) attribute and one or more attributes to create a label and perhaps to categorize. A simple select ename, job, hiredate from emp suffices.

I have not seen many sessions on SaaS and business applications at Oracle OpenWorld. Yet SaaS is becoming increasingly important. The number of SaaS applications – or at least the number of functions that standard, readily available applications can perform – is growing rapidly, as is the availability to any organization of SaaS functions that support a large portion of their business processes. The main challenge of corporate IT departments is going to shift from creating IT facilities to support the business processes to enabling SaaS applications to provide that support – by mutually tying these applications together through integration and mashups, as well as embedding them in authentication, authorization, data warehousing, scanning, printing, enterprise content management and other enterprise IT facilities.

Business applications not only support many more niche functions and allow fine-tuning to an organization’s way of doing things, they also become much smarter and more proactive. Smart business applications apply machine learning to help humans focus on the tasks that require human attention, and automatically handle the cases that fall within the boundaries of normal action.

Some simple examples:

Marketing – who to send email to

Sales – who to focus on

Customer Service – recommend the next step when dealing with a calling customer

Oracle is permeating AI into its business apps (AI Adaptive Apps), also leveraging its Data as a Service offering, with 3B consumer profiles in DaaS and records on over $4 trillion in spending.

Oracle offers “a full suite of SaaS offerings”:

(although they clearly do not yet all have ideal mutual integration, similar look & feel and perfect alignment)

During the Keynote by Thomas Kurian at Oracle OpenWorld 2017, an extensive demo was presented of how consumer activity can be tracked and used to reach out and make relevant offerings – as part of the B2C Customer Experience (see https://youtu.be/cef7C2uiDTM?t=47m35s )

For example – web site navigation behavior can be tracked:

and from this, a profile can be composed about this particular user:

By comparing the profile to similar profiles and looking at the purchase behavior of those similar profiles, the AI powered application can predict and recommend purchases by the user with this profile.

Here follow a number of screenshots that indicate the insight in customer interest in products – and the effects of specific, targeted campaigns to push certain products

Information can be retrieved using REST services as well:

Recommendations that have been given to customers can be analyzed for their success. Additionally, the settings that drive these recommendations can be overridden – for example, to push stock of a product that has been overstocked or is at end of line:

The Supervisory Controls allow humans to override the machine learning based behavior:

In his keynote on October 3rd during Oracle OpenWorld 2017, Thomas Kurian stated that the vision at Oracle around analytics has changed quite considerably. He explained this change and the new vision using this slide.

All kinds of data, all kinds of users, many more ways to present and visualize and machine generated insights to complement human understanding.

The newly launched Analytics Cloud supports this vision.

Zooming in on Data Preparation:

And from cleansed and prepared data – create machine learning models that help classify and predict, use conventional (charts) and new presentation forms (personalized, context-sensitive, rich chat, notifications, maps) and allow users to collaborate around findings from the data.

Thomas Kurian threw in the Autonomous Data Warehouse as an intermediate or final destination for prepared data, or even for the findings from that data.

The keynote continued with a demo that made clear how a specific challenge – monitor social media for traffic on specific topics and derive from all messages and tweets which player was most valuable (and has the largest social influence) – could be addressed.

The initial data load is presented for the new Social Data Stream project on the Prepare tab. The Analytics Cloud comes with recommendations (calls to action) to cleanse (or “heal”) and enrich the data. Among the potential actions are correcting zip codes, extracting business entities from images, completing names and enriching the data by joining it to predefined data sets such as players, locations, team names, etc.

The initial presentation of the data is in itself a rich exploration. Analytics Cloud has already identified a large number of attributes, has analyzed the data and presents various aggregations. (This has clear undertones of Endeca.) At this point, we can work on the data to make it better – cleaner, richer and better suited for presentations, conclusions and model building.

Images can be analyzed to identify objects, recognize scenes and even find specific brands:

After each healing action, new recommendations for data preparation may be presented.

Here are two examples of joining the data sets to additional sets:

and

Some more examples of the current status of the data preparation:

Here is the Visualize tab – where users can edit the proposed visualizations and add new ones. The demo continued to show how a new KPI could be added through a mobile app, using voice recognition.

That should result in notifications being sent upon specific conditions:

Notifications can take various forms – including visual but passive alerts on a dashboard or active push messages on messenger or chat channel (Slack, WeChat, Facebook Messenger), SMS Text Messages, Email.

I have played a little with Oracle’s Data Visualization Cloud and it is really fun to be able to turn raw data so quickly into nice and sometimes meaningful visuals. I do not pretend to grasp the full potential of Data Viz CS, but I can show you some simple steps to quickly create something good looking and potentially really useful.

In this article, I start with two tables in a cloud database – with the data we used for the Soaring through the Clouds demo at Oracle OpenWorld 2017:

As described in the earlier article, I have created a database connection to this DBaaS instance and I have created data sources for these two tables.

Now I am ready to create a new project:

I select the data sources to use in this project:

And on the prepare tab I make sure that the connection between the Data Sources is defined correctly (with Proposed Acts adding fact – lookup data – to the Albums):

On the Visualize tab, I drag the Release Date to the main pane.

I then select Timeline as the visualization:

Next, I bring the title of the album to the Details section:

and the genre of the album to the Color area:

Then I realize I would like to have the concatenation of Artist Name and Album Title in the Details section. However, I cannot add two attributes to that area. What I can do instead is create a Calculation:

Next I can use this calculation for the details:

I can use Trellis Rows to create a Timeline per value of the selected attribute, in this case the artist:

It is very easy to add filters – that can be manipulated by end users in presentation mode to filter on data relevant to them. Simply drag attributes to the filter section at the top:

In a recent article I discussed how to programmatically fetch a JSON document with information about sessions at Oracle OpenWorld and JavaOne 2017. Yesterday, slide decks for these sessions started to become available. I analyzed how the links to these downloads were included in the JSON data returned by the API. Then I created simple Node programs to tweet about each of the sessions for which the download became available

and to download the file to my local file system.

I added provisions to space out the tweets and the download activity over time – so as not to burden the backend of the web site and not to be kicked off Twitter for being a robot.

The code I crafted is not particularly ingenious – it was created rather hastily in order to share with the OOW17 and JavaOne communities the links for downloading slide decks from presentations at both conferences. I used the npm modules twit and download. This code can be found on GitHub: https://github.com/lucasjellema/scrape-oow17.

The documents javaone2017-sessions-catalog.json and oow2017-sessions-catalog.json contain details on all sessions – including the URLs for downloading slides.

Almost done. Tomorrow, Thursday, is not expected to be a day full of new and exciting stuff. Today, Wednesday, was a mix for me of ‘normal’ content, like sessions about migrating to Oracle Enterprise Manager 13.2 (another packed room), and a very interesting session about the Autonomous Database. Just a short note about a few sessions (including the Autonomous Database, of course).

As mentioned, sessions with ‘normal’ content – in this case migrating a database of 100 TB in one day, with Mike Dietrich – are quite popular. We may almost forget that most customers are thinking about the cloud but at the moment are just focused on keeping the daily business running.

A session about Oracle Enterprise Manager, about upgrading to 13c (a packed room), is quite rare these days. Two years ago there were a lot of presentations about this management product; this year, close to none. I’m very curious to know what happens after 2020. Oracle Management Cloud is coming rapidly. But… Oracle uses Enterprise Manager quite heavily in the public cloud, so it is not expected to disappear that fast. Here are the timelines:

At the end of the day, a session was planned about the most important announcement of Oracle OpenWorld: the preview of the Autonomous Database.

Quite peculiar, at the very end of the day, in a room that was obviously too small for the crowd.

A few outlines. The DBA is still needed; only the general tasks are disappearing:

The very rough roadmap.

This Data Warehouse version is already available in 2017; it was technically ‘easier’ to accomplish. The OLTP autonomous database poses more challenges.

Day 3 began with a very smooth and interesting keynote by Thomas Kurian, full of flawless, wonderful demos. The second keynote, by Larry Ellison, happened in the afternoon; I couldn’t attend as I had a product management meeting, but I heard there were some hiccups. Besides the keynotes it was a day full of good information and surprising stuff, with at the end: the Oracle Database Appliance X7-2 series. Another short note about Oracle OpenWorld. As said, the day began with the keynote by Thomas Kurian.

A slide of the six journeys to the cloud – almost the key theme of the whole OpenWorld: the Journey to the Cloud. My special interest, by the way, as I am interested in engineered systems, is the first: optimizing your on-premises datacenter.

Cool stuff is the right word for it, I think: chatbots, Smartfeed, Connected Intelligence, social media analysis, analytics, IoT.

The technology behind these demos spans a dazzling number of new and existing cloud services. Too much for now, I’m afraid.

Machine learning was the big keyword in all these demos, I think.

Serverless with Kafka and Kubernetes Cloud Service:

There’s a new cost estimator to calculate how many Universal Credits the various services will cost.

Another announcement: Blockchain Cloud Service for secure inter-connected transactions.

The announcement in the afternoon: Oracle Management and Security Cloud, with machine learning and Management Cloud. Larry Ellison talked about the severity of data hacks and information stealing while data centers get increasingly complicated and systems are harder to patch: “We’ve got to do something. It must be an automated process.”

In a product management session on IaaS, several price comparisons with AWS were made at a detailed level: how cheap is Oracle compared to AWS?

Announcement of SLAs

In the session on the Oracle Database Appliance, of course, the X7-2 series. The summary – SE for all models, 12.2.0.1 support:

The new HA:

KVM is a new deployment option, and the way forward; Oracle VM on the ODA is slowly disappearing.

I know, this is just a scratch on the surface of all the new things that are happening…

On Monday, Oracle OpenWorld really starts: a whole lot of general sessions with announcements and strategic directions at the product/service level. In this post I’ll try to summarize the highlights of the day. Quite hard, as there are a whole lot of interesting sessions which overlap – always the feeling I’m missing something. And, as a surprise, a very interesting session at the very end of the day.

It started with the keynote by Mark Hurd. No big news, unless you count the revisiting of his predictions of a year ago. This one holds true:

– By 2025 the number of corporate-owned data centers will decrease by 80%

Just a quick note about day 1 at Oracle OpenWorld. This Sunday is traditionally filled with presentations by user groups, customers and product management, and, at the end of the day, the welcome keynote. The presentations contain no really exciting news; that will have to wait until the keynotes. But some observations can be made already.

The big news at the keynote – the Autonomous Database – was not that big anymore: Larry Ellison announced it a week ago, and I did a wrap-up yesterday.

Quite a good summary of the keynote has been written by Business Insider, and here are the highlights on video.

The most important slide of the keynote regarding the changing role of the DBA is included in this post: less time on administration, more time on innovation. Oracle 18c requires no DBA, is highly available, and autotunes queries.

So what about the presentations I attended on Sunday? Just a few observations (very limited in scope, of course):

– The phrase ‘Single Pane of Glass’, which was used for Enterprise Manager 13c a while ago, is now being used for Oracle Management Cloud (OMC). The context and scope are however quite different: OMC is strategically meant for monitoring and managing a complete hybrid cloud environment, including Azure, Amazon and the on-premises environment.

– The word ‘management’ has been inserted into the slides about OMC – not just monitoring anymore. What are the consequences for Oracle Enterprise Manager?

– Security is on the agenda

– Machine Learning is trending.

– Some topics are barely present at this OpenWorld: Oracle Enterprise Manager, Exa-systems (not the cloud services), the WebLogic platform, the Oracle Database Appliance – in short, a lot of hardware and on-premises management. And when hardware is mentioned, it is just a step towards the final goal: the cloud. With one big exception: the Oracle Cloud Machine for Cloud@Customer.

Oracle OpenWorld 2017 starts tomorrow, and as a platinum partner of Oracle, we – AMIS Services – are obliged to keep ourselves and our customers informed about the roadmap of Oracle products.

And of course translate this into added business value for our customers. In short: what to pick from all the coming announcements, new features, cloudiness and so on. All the arrows point towards the Oracle Cloud products, of course, but first we’ll have to find the answers to two questions: ‘why should a customer go to the cloud?’ and, if yes, ‘what role does the Oracle Cloud play in this?’

This week will be – as in previous years – full of announcements. It started early this year with the pre-announcement by Larry last week about the ‘autonomous database’, Bring Your Own License to the Cloud and Universal Credits.

As a reminder, here is a wrap-up of Larry Ellison’s presentation:

– Lowest price for IaaS. That means the same price as Amazon, but faster, thus cheaper.

– Highest rate of automation in PaaS, which means the lowest TCO. The goal is to guarantee a 50% lower TCO than Amazon.

– The ‘autonomous database’, based on machine learning, will be available in December. This should eliminate human labor (the DBA).

– A Service Level Agreement of 99.995%, which amounts to about 30 minutes of planned and unplanned downtime a year.

– Bring Your Own License for PaaS: 94% cheaper than the old price. This should lower the threshold to use the Oracle Cloud.

– It is becoming possible to buy Universal Credits, no longer tied to a specific Cloud product.

Oracle has made some pretty smart moves, which will influence products and topics like:

– Engineered Systems. In general: is there still a business case for buying on-premises hardware?

– The role and future of the DBA. DBAKevlar wrote a nice blog about this.

– Life Cycle Management of the platform. How to cope with the management of a hybrid environment, including these new autonomous databases, and containers?

– Databases. There will be a lot of new features in Oracle Database 18c.

Tomorrow it will be the start of an exhausting week, and I am not even presenting, nor an ACE or Developer Champion! Just a mortal visitor, attending a lot of presentations, keynotes, meetings with product management, networking events, appreciation events and dinners with Oracle representatives.

Today – Wednesday 27th of September – saw close to 50 people gathering for the OAUX (Oracle Applications User Experience) Strategy Day. Some attendees joined from remote locations on three continents, while most of us had assembled in the UX Spaces Lab at Oracle’s Redwood Shores HQ – equipped with some interesting video and audio equipment.

Some important themes for this day:

The key message of Simplicity, Mobility and Extensibility continues. Simplicity means a user experience that is to the point: only drawing a user’s attention to relevant items, only presenting meaningful data, and allowing a task to be handled most efficiently. In order to achieve this simplicity, quite a bit of smartness is required: user context interpreted by smart apps leads to a simple UX, with chat, voice input, conversational UIs and fully automated processes at the pinnacle. Machine learning is at the heart of this smartness, deriving information from the context and presenting relevant choices and defaults based on both context and historical patterns.

Enterprise Mobility is a key element in the user experience – with a consistent experience that is nevertheless tailored to the device (one size does not fit all) and the ability to start tasks on one device and continue with them on a different device at a later point in time. The experience should be light on data: only show the absolutely essential information.

The latest Oracle Cloud Applications Release – R13 – has some evolution in the UX and UI.

There is a move away from using icons for navigating and interacting with the application, towards search & notifications. The ability to tailor the look & feel (theming, logo, heading, integrating external UIs) has improved substantially.

Conversational UI for the Enterprise is rapidly becoming relevant. Conversational UI for the enterprise complements and replaces the current web & mobile UI for quick, simple mini-transactions and smart capture. The OAUX team discerns four categories of interactions that conversational interfaces are initially most likely to be used for: Do (quick decisions, approvals, data submission), Lookup (get information), Go To (use a conversation as the starting point for a deep-link, context-rich navigation to a dedicated application component) and Decision Making (provide recommendations and guidance to users).

Some examples of conversational UIs: low-threshold user-to-system interaction for simple questions, requests, actions and submissions.

Jeremy Ashley introduced the term JIT UI – just in time UI: widgets (buttons, selection lists) that are mixed in with the text based conversational UI (aka chat) to allow easy interaction when relevant; this could also include dynamically generated visualizations for more complex presentation of data.

The OAUX team makes an RDK (Rapid Development Kit) available for Conversational UI – or actually the first half of the RDK, the part that deals with designing the conversational UI. The part about the actual implementation will follow with the launch of the Oracle Intelligent Bot Cloud Service and associated technology and tooling. This new RDK can be found at https://t.co/m7AuSBJw5J. It contains many guidelines on designing conversations: how to address users, and what information and interaction to provide.

Another brand-new RDK is soon to be released for Oracle JET – aligned with JET 4.0, which is to be released next week at Oracle OpenWorld 2017. This RDK supports development of Oracle JET rich client applications with the same look and feel as the R13 ADF-based Oracle SaaS apps. Assuming that there will be a long period of coexistence between ADF-based frontends and Oracle JET-powered user interfaces, it seems important to be able to develop an experience in JET that is very similar to the one users are already used to in the existing SaaS applications.

Additionally, the JET RDK will provide guidelines on how to develop JET applications. These guidelines were created in collaboration between the SaaS foundation and development teams, the JET product development team and the OAUX team. They are primarily targeted at Oracle’s own development teams that embrace JET for building SaaS App components, and at other developers creating extensions on top of Oracle SaaS. However, these guidelines are very useful for any development team that uses JET. The guidance provided by the RDK resources – as well as potentially the reusable components provided as part of the RDK – embodies best practices and the intent of the JET team, and provides a relevant head start to teams that would otherwise have to reinvent the wheel.

Here is a screenshot of the sample JET application (R13 style) provided with the RDK:

Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The image below (from here) illustrates the TCP/IP model and how its different layers map to the OSI model. When, in the elaboration below, I talk about the application and transport layers, I mean the TCP/IP model layers, and more specifically HTTP.

The two main reasons why you might want to employ keystores are that:

you want to enable security measures on the transport layer

you want to enable security measures on the application layer

Almost all of the methods/techniques mentioned below require the use of keys, and you can imagine that the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called, and also how outgoing calls identify themselves.

You might think transport layer and application layer security are two completely separate things. Often they are not that separate, though. The combination of transport layer and application layer security has some limitations, and often the same products/components are used to configure both.

Double encryption is not allowed. See here: ‘U.S. government regulations prohibit double encryption’. Thus you are not allowed to do encryption on the transport layer and the application layer at the same time. This does not mean it is technically impossible, but you might encounter some product restrictions since, you know, Oracle is a U.S. company.

Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security (HTTPS in this case) is used, and it is also used to configure application level security. You see this more often: a single product that covers both transport layer and application layer security. API gateway products such as Oracle API Platform Cloud Service, for example, do the same.

Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, confidentiality and reliability.

On the application level you can achieve similar feats (authentication, integrity, security, reliability), however often more fine-grained, for example on user level or for a specific part of a message instead of on host level or for the entire connection. Performance is usually not as good as with transport layer security, because the checks that need to be performed can require actual parsing of messages, instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

Authentication by using security tokens, such as:

SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST.

JSON Web Tokens (JWT) and OAuth are also examples of security tokens

Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.

Digest authentication can also be considered. With digest authentication, a username-password token is created which is sent using WS-Security headers.

Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by:

signing. XML Signature can be used for SOAP messages and is part of the WS-Security standard. Signing can be used to achieve message integrity.

encrypting. Encryption can be used to achieve confidentiality.
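For the digest authentication mentioned above, the WS-Security UsernameToken profile defines the password digest as Base64(SHA-1(nonce + created + password)). A minimal sketch in Python (the helper function name is mine, not from any Oracle API):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def password_digest(password: str, nonce: bytes, created: str) -> str:
    """WS-Security UsernameToken: Base64(SHA-1(nonce + created + password))."""
    raw = nonce + created.encode("utf-8") + password.encode("utf-8")
    return base64.b64encode(hashlib.sha1(raw).digest()).decode("ascii")

# A fresh random nonce and a UTC creation timestamp, as the profile expects.
nonce = os.urandom(16)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
print(password_digest("secret", nonce, created))
```

The server recomputes the same digest from its stored password and the transmitted nonce/created values, so the cleartext password never travels over the wire.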

Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. The table below summarizes the main differences:

JKS

There are JKS keystores. These are Java keystores which are saved on the filesystem. JKS keystores can be edited using the keytool command, which is part of the JDK. There is no direct support for editing JKS keystores from WLST, the WebLogic Console or Fusion Middleware Control. You can use WLST, however, to configure which JKS file to use. For an example, see here.

Keys in JKS keystores can have passwords, as can the keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map oracle.wsm.security under the keys keystore-csf-key, enc-csf-key and sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys, for example by putting them on shared storage.
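A sketch of creating those CSF entries with WLST (online, after connecting to the Admin Server); the aliases and passwords are examples, and by OWSM convention the ‘user’ field of sign-csf-key and enc-csf-key holds the key alias:

```python
# WLST (online) -- run after connect('weblogic', '<password>', 't3://adminhost:7001')
# Keystore password entry; the 'user' value is not significant for this key.
createCred(map="oracle.wsm.security", key="keystore-csf-key",
           user="owsm", password="KeystorePass1", desc="OWSM keystore password")
# Signing and encryption keys; 'user' holds the key alias, 'password' the key password.
createCred(map="oracle.wsm.security", key="sign-csf-key",
           user="orakey", password="KeyPass1", desc="signing key")
createCred(map="oracle.wsm.security", key="enc-csf-key",
           user="orakey", password="KeyPass1", desc="encryption key")
listCred(map="oracle.wsm.security", key="sign-csf-key")
```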

KSS

OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database, in an OPSS schema which is created by running the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be configured to use either policies or passwords to determine whether access to keys is allowed. OWSM does not support KSS keystores protected with a password (see here: ‘Password protected KSS keystores are not supported in this release’), thus for OWSM the KSS keystore should be configured to use policy-based access.

KSS keys cannot be configured to have a password, and using keys from a KSS keystore in OWSM policies thus does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts, or even through a REST API (here). You can, for example, import JKS files quite easily into a KSS store with WLST using something like:
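A sketch of such an import using the OPSS KeyStoreService from WLST; the stripe/keystore names owsm/keystore, the alias, passwords and file path are examples, so check the OPSS documentation for the exact parameters in your release:

```python
# WLST -- connect to the Admin Server first:
# connect('weblogic', '<password>', 't3://adminhost:7001')
svc = getOpssService(name='KeyStoreService')
# Import a JKS file into the policy-protected KSS keystore owsm/keystore.
svc.importKeyStore(appStripe='owsm', name='keystore',
                   password='KeystorePass1',     # password of the source JKS
                   aliases='orakey',             # comma-separated aliases to import
                   keypasswords='KeyPass1',      # matching key passwords
                   type='JKS', permission=true,  # permission- (policy-) based access
                   filepath='/tmp/identity.jks')
svc.listKeyStoreAliases(appStripe='owsm', name='keystore',
                        password='', type='Certificate')
```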

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Let us translate this to Oracle SOA Suite and WebLogic Server.

Transport layer
Incoming

Keys are used to achieve TLS connections between different components of the SOA Suite, such as Admin Servers, Managed Servers and Node Managers. The keystore configuration for these can be done from the WebLogic Console for the servers, and manually for the Node Manager. You can configure identity and trust this way, as well as whether the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.

Keys are also used to allow clients to connect to servers over a secure connection (in general, so not specific to communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no Node Manager is involved here).
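The identity/trust split described above is not WebLogic-specific. As a minimal sketch, here is the same one-way versus two-way TLS idea expressed with Python's ssl module (the .pem file names are hypothetical placeholders):

```python
import ssl

# Server side: present an identity certificate and, for two-way SSL,
# require the client to present one as well.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED             # demand a client certificate
# server_ctx.load_cert_chain("server-identity.pem")    # identity (key + certificate)
# server_ctx.load_verify_locations("trusted-cas.pem")  # trust (which clients to accept)

# Client side: verify the server against trusted CAs; for two-way SSL it
# additionally loads its own identity key pair.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = True
# client_ctx.load_cert_chain("client-identity.pem")
# client_ctx.load_verify_locations("trusted-cas.pem")
```

The commented-out calls are where identity and trust stores would be wired in; in WebLogic the equivalent wiring happens in the server's keystore and SSL configuration.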

Outgoing
Composites (BPEL, BPM)

Keys are used to achieve TLS connections from the SOA Suite to different systems. The SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean. See the image below. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword, with the user being the same as the key alias from the keystore to use. In addition, components also need to be configured to use a key to establish their identity. In the composite.xml, the property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite.
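As a sketch, such a property could look like this on a reference binding in composite.xml; the reference name, WSDL identifiers and endpoint URL are made-up examples:

```xml
<reference name="ExternalService">
  <interface.wsdl interface="http://example.com/external#wsdl.interface(ExternalPortType)"/>
  <binding.ws port="http://example.com/external#wsdl.endpoint(ExternalService/ExternalPort)"
              location="https://remote.example.com/services/ExternalService">
    <!-- enable outgoing two-way SSL for calls made through this reference -->
    <property name="oracle.soa.two.way.ssl.enabled">true</property>
  </binding.ws>
</reference>
```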

Setting SOA client identity store for 2-way SSL

Specifying the SOA client identity keystore and key password in the credential store

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The blog here nicely describes the locations where to configure the keystores and keys. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.

To summarize the above:

Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword in, for example, the setDomainEnv script. These override the WebLogic Server configuration but not the OWSM configuration (the application layer security described below). Thus if you specify, for example, an alternative truststore on the command line, this will not influence HTTP connections going from SOA Suite to other systems, even when message protection (using WS-Security), which uses keys and checks trust, has been enabled. It will influence HTTPS connections, though. For more detail on the above, see here.
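As a sketch, such overrides could be appended to setDomainEnv.sh (or, less intrusively, setUserOverrides.sh); the paths and passwords are examples:

```shell
# Append JVM-wide keystore overrides to the server start options.
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStore=/u01/keystores/trust.jks"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStoreType=JKS"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.trustStorePassword=TrustPass1"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.keyStore=/u01/keystores/identity.jks"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.keyStoreType=JKS"
JAVA_OPTIONS="${JAVA_OPTIONS} -Djavax.net.ssl.keyStorePassword=KeystorePass1"
export JAVA_OPTIONS
```

Keep in mind that putting keystore passwords in a startup script exposes them to anyone who can read the file, which is one reason to prefer the domain configuration where possible.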

Application layer

Keys can be used by OWSM policies to, for example, achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM runtime does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default since 12.1.2 is kss://owsm/keystore, and this can be configured from the OWSM Domain configuration. See above for the difference between KSS and JKS keystores.

OWSM keystore contents and management from FMW Control

OWSM keystore domain config

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys, no CSF passwords are required, since OWSM does not support KSS keystores with a password and KSS does not provide a feature to put passwords on keys.

Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally
This is only the tip of the iceberg

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is a wide range of options and do’s and don’ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage, and thus I have not provided much detail. If you want to learn more about how to achieve good security on your transport layer, read here. To configure two-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up into a wide range of possible implementation choices.

Different layers in the TCP/IP model

If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layers. It also helps if you use different security zones and divide your network so that your development environment cannot accidentally access your production environment, or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized that using and configuring keys and keystores can be quite complex. The reason is that for nearly every purpose of a key/keystore, configuration in a different location is required. It would be nice if that were all; however, sometimes configuration overlaps, such as the truststore configuration of WebLogic Server, which is also used by SOA Suite. This feels inconsistent, since for outgoing calls, composites and the Service Bus use entirely different configurations. It would be nice if this could be made a bit more consistent, and as a result simpler.