
Can I get a show of hands – whose spine shudders at the sound of their own phone ringing? If your hand is up, chances are a component of your role (or a role in days gone by… the scarring can be permanent) involves operations. Day or night, it’s that dread associated with wondering “What now?”. A few years back, enterprises started outsourcing the problem of supporting key business systems to third-party services, and while this reduced the quantity of calls, it only served to increase their quality – now when the phone rings at 3am, you know things are bad. Real bad.

Over the past week, Oracle has soft-launched a range of new services that leverage the capabilities of our Dyn investment to offer a significant enhancement to the native Edge management capabilities of our second generation cloud. These services include:

Traffic Management Steering Policies

Health Checks (Edge)

Web Application Firewall

I’ll reserve my discussion on the Web Application Firewall for a later post, but what I’d like to discuss today is Traffic Management, and how it can be leveraged to deploy, control and optimise globally dispersed application services for your Enterprise.

In November 2018, I had the privilege of attending the Australian Oracle User Group national conference “#AUSOUG Connect” in Melbourne. My role was to conduct video interviews with as many of the speakers and exhibitors at the conference as I could. Overall: 10 interviews over the course of the day, 90 minutes of raw footage, 34 short clips to share and plenty of hours reviewing and post-editing to capture the best parts.

In a previous blog, I explained how to provision a Kubernetes cluster locally on your laptop (either as a single node with minikube or a multi-node using VirtualBox), as well as remotely in the Oracle Public Cloud IaaS. In this blog, I am going to show you how to get started with Oracle Container Engine for Kubernetes (OKE). OKE is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud on Kubernetes.

I recommend using OKE when you want to reliably build, deploy and manage cloud-native applications. Oracle takes full responsibility for provisioning the Kubernetes cluster and managing its tiers (control plane); you simply choose how many Kubernetes worker nodes you want and then deploy your Kubernetes applications there. Best of all, Oracle does not charge for the managed control plane – you pay only for the primitive IaaS that you use to run your application.

For the purpose of this demonstration I am going to show how to:

Provision an OKE cluster

Configure kubectl locally, so that you can run commands against your OKE cluster, e.g. deploy your application.

Finally, I am going to show you how to deploy a microservice into your OKE cluster.

For the purpose of this demonstration, I built a microservice earlier in a previous blog. It is a containerised NodeJS application called apis4harness that allows you to interact with OCI API resources. In particular, it can list, start and stop Oracle Autonomous Data Warehouse (ADW) instances.

Provision OKE Cluster

Nowadays, we rarely do things manually. We want to automate most tasks, especially when provisioning environments, and provisioning OKE clusters is no exception. I recommend using the Terraform Oracle Cloud Infrastructure Provider to version control the provisioning configuration of your Kubernetes cluster, together with the networking configuration, compute, etc.

How to use Terraform to automate the provisioning of OKE clusters is a topic I am saving for another blog. For the purpose of this blog, I want to show how simple it is to spin up a new OKE cluster with just a few clicks, without involving Terraform.

Select “Quick Create” for now – by default, a new Virtual Cloud Network (VCN) will be created for you, including 2 new subnets for LBaaS and 3 worker node subnets.

Create Node Pool: Select a “shape” that works for you. See info about available shapes here.

Type how many nodes you want to have per subnet.

Public SSH Key: Paste your Public SSH key.

Leave Kubernetes dashboard and Tiller (Helm) enabled

When done, click Create.

Within minutes, you will have a working Kubernetes cluster ready to go! Simple, huh?

Click on it and get familiar with its worker nodes, IP addresses, subnets, etc.

You can SSH into the worker nodes using your private key. Also make sure to observe the networking configuration that was created automatically. For example, click on Menu > Networking > Virtual Cloud Networks.

Congratulations, your Kubernetes cluster is ready. In the next section, we will install kubectl, which is a CLI for running commands against your Kubernetes clusters.

Install Kubectl

In this section, we are going to install kubectl and configure it to point to your Kubernetes cluster, so that we can interact with it.

I am trying to summarise and simplify the steps here, but if you need more information at any point in time, refer to the official documentation.

First, we need to choose a platform to run kubectl. This is the place from which you are going to run CLI commands against your Kubernetes cluster, for example to deploy your microservices or in general to monitor and manage the state of your cluster. Make sure you satisfy these minimum requirements.

Normally, I build this platform as a build server VM, also in the cloud, so that it can be shared among my fellow DevOps colleagues. In previous blogs, I have explained how to achieve this. However, this time, for simplicity, your build server can also be a Vagrant VM running on your laptop.

For the purpose of this demonstration, I have prepared a Vagrant box in the same APIs 4 Harness project.
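To bring the box up, clone the project and run vagrant up from its root directory – a minimal sketch, where the repository URL is a placeholder you should substitute with the actual apis4harness repo location:

git clone <apis4harness-repo-url>
cd apis4harness
vagrant up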

Note: Give it some time the first time. It will download the Ubuntu Box and install all dependencies. Subsequent times will be much faster.

Once it finishes, as per the bootstrap process, your Vagrant VM is going to come with all necessary components installed, like Docker Engine, so that you can build your containerised app as a Docker image.

Vagrant ssh into it.

vagrant ssh

Move into your host auto-mounted working directory.

cd /vagrant

Now, let’s install OCI CLI, so that we can grab the OKE cluster Kubeconfig easily:
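A minimal sketch of the install step, using Oracle’s published installer script (assuming a Linux/Ubuntu box):

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"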

Leave all defaults and respond with Y to all requests to install extra libraries (e.g. Python), modify the PATH or update the CLI, in case you had installed older versions of OCI CLI before.

Before using the CLI, you must create a config file that contains the required credentials for working with Oracle Cloud Infrastructure:

oci setup config

Note: The command prompts you for the information required for the config file and the API public/private keys. For simplicity, let the setup dialog generate an API key pair and create the config file.

Be ready to enter the manager user’s OCID and tenancy OCID, and leave the rest as default values. If you need help finding the parameters, read this reference or feel free to drop me a question via LinkedIn.

By default, it will write a Config file into /home/vagrant/.oci/config

Since we let it generate a new key pair, we need to upload the generated public key to the manager user in the OCI console. Keys are written by default to /home/vagrant/.oci
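To display the generated public key so you can copy and paste it into the console (the filename shown is the setup default):

cat /home/vagrant/.oci/oci_api_key_public.pem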

Now let’s retrieve the Kubeconfig from our OKE cluster. To access the kubeconfig for your cluster, run the following commands:
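A sketch of those commands, assuming a Linux build server – install kubectl, then let the OCI CLI generate the kubeconfig (copy your cluster OCID from the OKE console page):

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id <your-cluster-ocid> --file $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config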

That’s it, kubectl is properly installed and configured to point to your OKE cluster, even though you are running it locally on a Vagrant Box on your laptop and your OKE cluster is somewhere around the world miles away.

Test kubectl by retrieving the version of your Kubernetes cluster. Also retrieve all running services:

kubectl version

kubectl get services --all-namespaces

If you are using RBAC, you need to run the following command to grant enough privileges to your OCI admin user (the one you chose to associate the public key):
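Based on the OKE documentation, the grant is a cluster role binding like this – the binding name is arbitrary, and you substitute your user’s OCID:

kubectl create clusterrolebinding my-cluster-admin --clusterrole=cluster-admin --user=<your-user-ocid>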

The next step is to containerise the app. The interesting lines of the project’s Dockerfile (reconstructed after this list) are:

Line 8: Sets the working directory within the new Docker image (creating it and changing the current directory to it).

Line 9: Copies all the local directory content (i.e. the “APIs 4 Harness” app files) into the working directory.

Line 13: Runs “npm install” to retrieve all the “APIs 4 Harness” NodeJS app dependencies declared in package.json.

Line 14: Documents the port on which the “APIs 4 Harness” app is configured to run.

Line 15: Sets the command to run when the image is started; in this case, running the “APIs 4 Harness” NodeJS app (as indicated in package.json).
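For reference, a hedged reconstruction of such a Dockerfile – the base image and the padding/comment lines are assumptions, arranged so the line numbers referenced above land on the corresponding instructions:

FROM node:8

# Hypothetical header lines; the original file’s
# exact metadata is unknown, padded so that the
# numbering in the notes above matches
LABEL app="apis4harness"

WORKDIR /usr/src/app
COPY . .

# Install the NodeJS dependencies declared
# in package.json
RUN npm install
EXPOSE 3000
CMD [ "npm", "start" ]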

As for the “APIs 4 Harness” NodeJS app, I tried to keep it extremely simple. It exposes the following APIs (example calls follow the list):

Note: The actual code is at router > routes > services.js

Get all existing ADW instances:

GET: /services/adw

Get an existing ADW instance by ID:

GET: /services/adw/{ocid}

Start an ADW instance by ID:

POST: /services/adw/{ocid}?action=start

Stop an ADW instance by ID:

POST: /services/adw/{ocid}?action=stop
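For instance, once the app is running locally on port 3000 (see the next sections), you could exercise them like this – the OCID is a placeholder:

curl http://localhost:3000/services/adw
curl -X POST "http://localhost:3000/services/adw/<adw-ocid>?action=start"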

The other file that you might want to have a look at is the actual NodeJS descriptor, package.json.

It is quite self-explanatory, but just pay close attention to:

dependencies -> body-parser, express, js-yaml, http-signature, jssha – These are the modules installed at “docker build” time, as defined in the Dockerfile (RUN npm install).

scripts -> start: node app.js – This is what is executed at “docker run” time, as defined in the Dockerfile (CMD npm start).
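For orientation, a minimal sketch of what that descriptor might look like – the version ranges are assumptions, not the project’s exact pins:

{
  "name": "apis4harness",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "body-parser": "^1.18.0",
    "express": "^4.16.0",
    "http-signature": "^1.2.0",
    "js-yaml": "^3.12.0",
    "jssha": "^2.3.0"
  }
}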

First, let’s test your Application locally

Before jumping in and deploying your application into Kubernetes, it is a good idea to run and test it locally, just using Docker run.

Create a directory called ssh

mkdir /vagrant/ssh

Place your private key inside. You will need to reference it in the next steps. By default I called it id_rsa_pri.pem – you can change the name accordingly.

Use setEnv_template as a reference and create a new file; call it setEnv. In there, set the properties of your OCI environment. If you need help finding the parameters, read this reference or feel free to drop me a question via LinkedIn.

Note: Remember that the public key fingerprint comes from importing a PEM public key into the user that you wish to use to invoke the OCI APIs.
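A hypothetical sketch of setEnv – the variable names here are illustrative, so match whatever setEnv_template actually declares (note that docker run --env-file expects plain KEY=value lines, with no export keyword):

OCI_TENANCY_ID=ocid1.tenancy.oc1..aaaa...
OCI_USER_ID=ocid1.user.oc1..aaaa...
OCI_KEY_FINGERPRINT=aa:bb:cc:dd:...
OCI_PRIVATE_KEY_PATH=/vagrant/ssh/id_rsa_pri.pem
OCI_REGION=us-ashburn-1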

Ok, now that everything is clear, let’s build our Docker image. Since we already added the ubuntu user to the docker group during the bootstrap of this Vagrant box, let’s switch to the ubuntu user.

sudo su ubuntu

Build the docker image:

docker build .

Note: Notice the last dot “.”

Give it some time the first time, as it has to pull the node image from Docker Hub first (~200MB).

As the Docker build process moves across the steps, you will be able to see the progress in the console.

At the end it will show you the id of your final Docker image. Make a note of it, as you will need it later when tagging your image.

Let’s quickly test that our new Docker image works well. For this, let’s run the image using its id as a reference. The command goes like this:

docker run --env-file setEnv -p 3000:3000 -it [Image_ID]

Note: -it runs the container in interactive mode, which means that you can stop it later with Ctrl+C.

For example:

docker run --env-file setEnv -p 3000:3000 -it c26c58862548

This will run a Docker container from our Docker image and start the “APIs 4 Harness” NodeJS App. It maps port 3000 inside the container to port 3000 on our host.

The provided Vagrant box is configured by default with NAT and Port-Forwarding on port 3000:3000, so you can open a browser on your host machine and go to localhost:3000 – You should be able to see the “APIs 4 Harness” Swagger UI.

Feel free to play with the APIs to confirm that you can list all ADW instances, as well as stop and start individual ADW instances.

Now that we know that our Docker image works as intended, let’s move on to the next section and push it to Docker Hub.

Push your APIs 4 Harness App Docker Image to Docker Hub

Now that we have created our Docker image and briefly tested it, let’s proceed to push it into an image registry. We could use OCI-R, which comes included with OKE, but for now let’s use a Docker Hub repository. For this, I assume that you already have a Docker Hub account and that you have created a repository. For example, I created one called apis4harness – notice that Docker Hub repos are always prefixed with your Docker Hub username, so you might choose the same name if you like.

Go back to the terminal window where you built your Docker image using the ubuntu user. Ctrl+C in case you are still running the container from the last section.

In the terminal, first we need to set the Docker Hub login details.

docker login

Then enter your username and password when requested.

Tag the Docker image:

docker tag [Image_ID] [DockerHubUsername]/[DockerHubRepoName]:[Tag]

For example:

docker tag c26c58862548 cciturria/apis4harness:1.0

Note: You could’ve tagged your Docker image at “docker build” time by using -t [DockerHubUsername]/[DockerHubRepoName]:[Tag]

If you can’t remember your Docker image ID, you can type docker images

Then finally, Docker push the image:

docker push [DockerHubUsername]/[DockerHubRepoName]

E.g.

docker push cciturria/apis4harness:1.0

Give it some time as it uploads your compressed docker image into your specified Docker Hub repository.

After a few minutes, your docker image will appear in your Docker Hub specified repo.

Run your APIs 4 Harness App Docker Image in Kubernetes

Once your docker image is in a Docker repository, like OCI-R or Docker Hub, we can easily pull it and run it on Kubernetes.

Applications in Kubernetes run within the concept of “pods”, which are logical runtime groupings of the Docker containers that make up a whole application. In our case, the “APIs 4 Harness” NodeJS app will be just one Docker container. Pods are defined in YAML files.

Go back to your Vagrant Box, if not already there.

Important: If you are still under user ubuntu, switch back to user vagrant

Before executing the provision.sh script, you need to set the application environment properties. For this, use the template /vagrant/deploy/kubernetes/apis4harness-dpl.yaml_sample to create a new file /vagrant/deploy/kubernetes/apis4harness-dpl.yaml (a hedged sketch of the finished file follows the list below). In this file, at the end:

Set the Docker image tag name (e.g. XXX/apis4harness:1.0)

Set all the OCI properties that you used in setEnv while testing the microservice locally with Docker run.
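For orientation, a minimal sketch of what the finished apis4harness-dpl.yaml might contain – the structure assumes a standard Kubernetes Deployment, and the env key names are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apis4harness
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apis4harness
  template:
    metadata:
      labels:
        app: apis4harness
    spec:
      containers:
      - name: apis4harness
        image: cciturria/apis4harness:1.0   # your [DockerHubUsername]/[DockerHubRepoName]:[Tag]
        ports:
        - containerPort: 3000
        env:
        - name: OCI_TENANCY_ID              # repeat for each property from setEnv
          value: "ocid1.tenancy.oc1..aaaa..."

Once the file is ready, the provision.sh script (or a direct kubectl apply) can deploy it:

kubectl apply -f /vagrant/deploy/kubernetes/apis4harness-dpl.yaml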

I am thrilled with Oracle’s Gen 2 Cloud Infrastructure architecture, where Oracle completely separates the Cloud Control Computers from the User Code, so that no threats can enter from outside the cloud and no threats can spread from within tenants.

Obviously, with more security comes more coordination, especially at the moment of invoking OCI resource APIs. Luckily, Oracle did a good job of providing a simple-to-use CLI and SDK (see here for more information).

For the purpose of this blog, I built a simple NodeJS application that helps demystify the security aspect of invoking OCI APIs. Check this link for examples of running similar code across other Programming Languages.

My NodeJS application manages OCI resources in order to:

List ADW instances

Stop an ADW instance

Start an ADW instance

I started this NodeJS application to list, start and stop ADW resources. However, I designed it so that it can easily be extended to invoke any other type of OCI resource.

I containerised this application with Docker, to make it easier to ship and run.

Single sign-on delivers a number of really important benefits. Firstly, the user experience is much smoother and more seamless, as users don’t get prompted for multiple passwords and don’t have to remember even more passwords. More importantly, single sign-on eliminates the need to manage multiple stores of identities, which can be a big overhead for administrators and can sometimes open up additional risks. Finally, an enterprise-wide identity solution can often provide additional capabilities that can be leveraged by your Oracle Cloud Infrastructure.

Earlier today I was given a challenge by my colleagues. Recently Oracle released the Autonomous Data Warehouse, and we have a lot of excitement from customers, partners and internal folk alike. This excitement is driving a lot of innovation right now, but it also brings some challenges. The last thing we want is the Marketing team messing with Finance’s resources. How do we make sure different teams don’t step on each other’s toes?
