platform – Red Hat Developer Blog
https://developers.redhat.com/blog
Insights and news on Red Hat Developer tools, platforms and more
Customizing an OpenShift Ansible Playbook Bundle
https://developers.redhat.com/blog/2018/05/23/customizing-an-openshift-ansible-playbook-bundle/
Wed, 23 May 2018

Today I want to talk about Ansible Service Broker and Ansible Playbook Bundle. These components are relatively new in the Red Hat OpenShift ecosystem, but they are now fully supported features available in the Service Catalog component of OpenShift 3.9.

Before getting deep into the technology, I want to give you some basic information (quoted below from the product documentation) about all the components and their features:

Ansible Playbook Bundles (APB) are a method of defining applications via a collection of Ansible Playbooks built into a container with an Ansible runtime with the playbooks corresponding to a type of request specified in the Open Service Broker API specification.

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

So the ASB (Ansible Service Broker) is the intermediary between the APB (Ansible Playbook Bundle) and a third-party user who would like to consume the service offered through the Ansible playbook on OpenShift.

Linking these two components, the OpenShift Service Catalog is able to offer access to these deployment/configuration bundles to OpenShift users through the OpenShift web console and its API. This opens up an entire world of possibilities from an OpenShift perspective.

Please note: You need a fully configured and functional OpenShift 3.9 cluster before continuing. Minishift and the CDK do not, at the moment, ship with the Service Catalog and Ansible Service Broker enabled. Please check the project documentation.

Getting Started with an APB

Starting with OpenShift 3.9, you won't need any additional configuration or deployment to get the Ansible Service Broker and the Service Catalog working. They are set up by the OpenShift installer at the first installation/update.

So, the first thing you need to do is let the Ansible Service Broker search the OpenShift default registry for container images. To achieve this, edit the configmap used by the Ansible Service Broker and update the whitelist:

$ oc edit configmap broker-config -n openshift-ansible-service-broker

In the configmap, add a whitelist rule for the OpenShift registry similar to the one already set up for the Docker Hub registry:
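As a rough sketch (the exact keys can vary between broker versions, so check your own configmap), the registry section could end up looking something like this, with a local_openshift entry whose whitelist matches image names ending in -apb:

registry:
  - type: dockerhub
    name: dh
    url: https://registry.hub.docker.com
    org: ansibleplaybookbundle
    tag: latest
    white_list:
      - ".*-apb$"
  - type: local_openshift
    name: localregistry
    namespaces:
      - openshift
    white_list:
      - ".*-apb$"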

As you can see, the scaffolding command creates a description file called apb.yml for metadata and parameters that need to be requested from users of the playbook bundle (a sketch of the command and the generated layout follows this overview). The metadata will be used for displaying the item in the Service Catalog, while the parameters will be used to prompt users of the bundle to supply the necessary configuration details. We'll take a look and customize it in the next section.

Then it creates a Dockerfile for building the final container and, of course, a Makefile that defines how to build the container and push it to the OpenShift internal registry.

Finally, you’ll find the two key directories containing—guess what?— Ansible “playbooks” and “roles.” These directories contain pre-built playbooks for provisioning and de-provisioning and a skeleton for a custom role you may want to build.
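The scaffolding command itself isn't preserved in this copy of the post. Assuming the standard apb CLI, it would look roughly like the following, together with the layout it generates (the bundle name and exact file list are illustrative):

$ apb init mariadb-test-apb

mariadb-test-apb/
    apb.yml
    Dockerfile
    Makefile
    playbooks/
        provision.yml
        deprovision.yml
    roles/
        provision-mariadb-test-apb/
            tasks/
                main.yml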

The generated provisioning playbook is really simple (a sketch follows this description). It runs against localhost (connection: local), and it executes two pre-defined roles: ansible.kubernetes-modules and ansibleplaybookbundle.asb-modules. These two roles set up the basic actions for letting your container communicate with the current OpenShift platform and its underlying Kubernetes layer.

Finally, the playbook will execute a custom role, provision-mariadb-test-apb. This role is basically empty; you should fill it with your code!
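Here is a rough sketch of what the generated playbooks/provision.yml typically looks like; treat it as illustrative, since your generated file may differ slightly:

- name: mariadb-test-apb provision playbook
  hosts: localhost
  gather_facts: false
  connection: local
  roles:
    # helper modules for talking to OpenShift/Kubernetes from inside the APB container
    - role: ansible.kubernetes-modules
      install_python_requirements: no
    - role: ansibleplaybookbundle.asb-modules
    # the custom role you are expected to fill in
    - role: provision-mariadb-test-apb
      playbook_debug: false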

Customizing Your First APB for Connecting to a Remote Host

As mentioned in the introduction, you will not use the standard behavior for your APB. Instead, you’ll make it connect to an external host for installing and configuring MariaDB.

First, you need to edit the apb.yml file to add some metadata and variables that you’ll use later in the playbooks:
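The apb.yml from the original post isn't reproduced here. As a hedged sketch, a customized spec with a target host, an SSH key, and a MariaDB password as parameters might look like this (all parameter names are illustrative assumptions):

version: 1.0
name: mariadb-test-apb
description: Install and configure MariaDB on a remote host
bindable: false
async: optional
metadata:
  displayName: MariaDB on a remote host (APB)
plans:
  - name: default
    description: Provision MariaDB on an external host over SSH
    free: true
    metadata: {}
    parameters:
      - name: target_host
        title: Target host (IP or FQDN)
        type: string
        required: true
      - name: ssh_private_key
        title: SSH private key for the target host
        type: string
        display_type: textarea
        required: true
      - name: mariadb_root_password
        title: MariaDB root password
        type: string
        display_type: password
        required: true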

Inspecting the playbook, you'll see that you first add the host (taken from the variable) to the inventory, and then you set up the SSH private key for connecting to the remote host. To accomplish this, I use a single shell command instead of splitting the work across all the available Ansible modules.
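A sketch of those two tasks, assuming the illustrative parameter names from the apb.yml above and paths inside the APB container, could look like this inside the localhost play:

  tasks:
    - name: add the target host to the in-memory inventory
      add_host:
        name: "{{ target_host }}"
        groups: target

    - name: set up the SSH private key with a single shell command
      shell: |
        mkdir -p ~/.ssh
        echo "{{ ssh_private_key }}" > ~/.ssh/id_rsa
        chmod 600 ~/.ssh/id_rsa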

Then in the second playbook, you connect to the remote host to configure the firewall and MariaDB.
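Again as an illustrative sketch (package, service, and user names assume a RHEL/CentOS target reachable as root), the second play could look roughly like this:

- name: configure firewall and MariaDB on the remote host
  hosts: target
  remote_user: root
  tasks:
    - name: install the MariaDB server
      yum:
        name: mariadb-server
        state: present

    - name: open the MariaDB port on the firewall
      firewalld:
        port: 3306/tcp
        permanent: true
        immediate: true
        state: enabled

    - name: start and enable MariaDB
      service:
        name: mariadb
        state: started
        enabled: true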

And finally, you can push the APB into the registry. But before proceeding, you should be logged in to OpenShift as an admin user with a valid token. The user system:admin doesn't have a token by default, so create an additional user and give it the cluster-admin role.
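One possible sequence, assuming a user named admin already exists through your identity provider and that you use the apb CLI (the push may instead be a make target from the generated Makefile):

$ oc adm policy add-cluster-role-to-user cluster-admin admin   # run this as system:admin
$ oc login -u admin
$ oc whoami -t                                                 # verify a token is available
$ apb push                                                     # build the image and push it to the internal registry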

Sometimes, some nodes can end up with an outdated version of your APB in the Docker cache, so you might see different behaviors if you did multiple builds and pushes. You can manually log in to the various OpenShift nodes and clean up the outdated Docker images (you may want to use Ansible from a bastion host for doing that).
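For example, an ad-hoc Ansible command run from the bastion host could remove the stale image from every node; the inventory group and the image reference are illustrative placeholders:

$ ansible nodes -b -i /etc/ansible/hosts -m command -a "docker rmi <registry>/openshift/mariadb-test-apb"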

That’s all!

Feel free to ask if you have any questions!

About Alessandro

Alessandro Arrichiello is a Solution Architect for Red Hat Inc. He has a passion for GNU/Linux systems, which began at age 14 and continues today. He works with tools for automating enterprise IT: configuration management and continuous integration through virtual platforms. He’s now working on a distributed cloud environment involving PaaS (OpenShift), IaaS (OpenStack) and processes management (CloudForms), container building, instance creation, HA services management, and workflow builds.

OpenShift 3.6 – Release Candidate (A Hands-On)
https://developers.redhat.com/blog/2017/07/28/openshift-3-6-a-hands-on-the-release-candidate/
Fri, 28 Jul 2017

Hi, Everybody! Today I want to introduce you to some features of OpenShift 3.6 while giving you the chance to have a hands-on experience with the Release Candidate. First of all: It's a Release Candidate and the features I'll show you are marked as Tech Preview, so use them for testing purposes ONLY! We cannot […]

Docker-Machine: A Virtual Machine with docker installed! – “Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers” – https://docs.docker.com/machine/install-machine/.

Virtualization software (VirtualBox/Libvirt/KVM/Xhyve).

Enough RAM for running a 4GB (or any other Minishift-like) virtual machine.

Please Note:

If you haven't used the OpenShift client (oc) binary before, it's not so hard: just unpack it, place it somewhere in your PATH, and run it.

If you haven't installed docker-machine before, just follow the how-to provided in the link above: it will be easy!

Depending on the virtualization layer you'll use, you may need to install/configure an appropriate driver to let docker-machine work with it. Here are some examples:

In the following steps, I'll use commands for my Libvirt/KVM driver: sorry, Mac/Windows users! But you can easily adapt the commands to your driver, don't worry! So, be ready to edit the commands when you see "-kvm-" options!

All the commands can run as a standard user: we don’t need super powers!
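The docker-machine command from the original post was lost in this copy. With the KVM driver it would look roughly like the following; the boot2docker ISO URL is a placeholder and the flag values are illustrative:

$ docker-machine create -d kvm \
    --kvm-cpu-count 2 \
    --kvm-memory 4096 \
    --kvm-boot2docker-url <minishift-boot2docker-iso-url> \
    --engine-insecure-registry 172.30.0.0/16 \
    openshift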

In the previous command, we're creating a virtual machine named "openshift", starting from the Minishift boot2docker image, with some infrastructure configuration (CPU/RAM) and, most importantly, with OpenShift's insecure registry subnet (172.30.0.0/16) configured.
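The next command was also lost in this copy. Assuming oc cluster up against that docker-machine on OpenShift 3.6, it would be along these lines (hostname, routing suffix, and data directory are illustrative):

$ oc cluster up \
    --docker-machine=openshift \
    --public-hostname=<vm-ip> \
    --routing-suffix=<vm-ip>.nip.io \
    --host-data-dir=/var/lib/origin/data \
    --use-existing-config \
    --service-catalog=true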

In the previous command, we just set the public hostname for our OpenShift platform and its wildcard DNS for routing apps, plus some options for making it persistent and, finally, the option for setting up the brand new Service Catalog.

Once the OpenShift platform starts, we need to log in as system:admin and then grant unauthenticated access to the template service broker API in order to use it with the Service Catalog:
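One way to do that is shown below; the cluster role name is an assumption and may differ between releases, so double-check the 3.6 documentation:

$ oc login -u system:admin
$ oc adm policy add-cluster-role-to-group system:openshift:templateservicebroker-client system:unauthenticated system:authenticated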

If you now log into the interface, you should see a bunch of brand new templates available!

The ones ending with "(APB)" are the Ansible Playbook Bundle templates!

PLEASE READ: One more step is required. Because some of the containers used by APB templates require "root" permissions, we need to enable the ANYUID Security Context Constraint for every authenticated user (alternatively, you may restrict it to the 'developer' user):

$ oc adm policy add-scc-to-group anyuid system:authenticated
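If you prefer the narrower grant mentioned above, something like this should work instead:

$ oc adm policy add-scc-to-user anyuid developer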

That's all folks! Enjoy your OpenShift 3.6 RC and don't forget: use it ONLY for testing purposes!

Ciao,

About Alessandro

Alessandro Arrichiello is a Solution Architect for Red Hat Inc. He has a passion for GNU/Linux systems, which began at age 14 and continues today. He works with tools for automating enterprise IT: configuration management and continuous integration through virtual platforms. He's now working on a distributed cloud environment involving PaaS (OpenShift), IaaS (OpenStack) and process management (CloudForms), container building, instance creation, HA services management, and workflow builds.

The CoolStore Microservices Example: DevOps and OpenShift
https://developers.redhat.com/blog/2017/05/16/openshift-and-devops-the-coolstore-microservices-example/
Tue, 16 May 2017

An introduction to microservices through a complete example

Today I want to talk about the demo we presented @ OpenShift Container Platform Roadshow in Milan & Rome last week. The demo was based on JBoss team's great work available on this repo: https://github.com/jbossdemocentral/coolstore-microservice

In the next few paragraphs, I'll describe in detail the CoolStore microservices example and how we used it to create a great and useful example of DevOps practices.

We made some edits to the original project to simplify the concepts during the demo (we just edited some usernames and a timeout). You can browse our repo at the following link, but keep an eye on the original repo too: https://github.com/alezzandro/coolstore-microservice

PLEASE NOTE: we used the "stable-ocp-3.5" branch, so please refer to it when looking at the repo.

The whole CoolStore project has been set up in an OpenShift environment using the setup script available in the repository. The environment is split across the following projects:

CoolStore PROD: Project that will hold the application’s production environment.

CoolStore TEST: Project that will hold the application’s testing environment.

Inventory Integration: Project that will hold the integration tests environment for Inventory microservice.

Let’s demo

First of all, we introduced the audience to the "CoolStore TEST" project with all the running pods for every microservice defined previously in the architecture. It was an opportunity to walk through the various OpenShift entities involved in the project: Pod, DeploymentConfig, Service, and Route.
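If you want to explore those same entities from the command line, something like this works (the namespace name here is an assumption):

$ oc project coolstore-test
$ oc get pods,dc,svc,routes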

Then we introduced the "CI/CD" project with its running services.

Then we started browsing the running Gogs service GUI and the manager‘s repo: “coolstore-microservice”.

On the Gogs’ side, we have two pre-configured users:

manager: the user that holds the main code repository. This user is responsible for the code that will run in the production environment. Think of it as a development manager or code reviewer.

developer: the user that holds his own copy of the code repository. This user is responsible for patching/fixing the code and submitting pull requests against the team's repo for the changes.

The next step of the demo was the process of forking the repo, cloning it, committing a fix, and then creating a pull request, as described in the following schema.
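From the developer's point of view, that flow boils down to a handful of git commands; the URL, branch name, and commit message here are illustrative:

$ git clone http://gogs-cicd.<apps-domain>/developer/coolstore-microservice.git
$ cd coolstore-microservice
$ git checkout -b inventory-fix
# ...edit the Inventory service code...
$ git commit -am "Fix inventory availability check"
$ git push origin inventory-fix
# then open a pull request from the developer's fork against the manager's repo in the Gogs UI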

During the demo, we also showed the developer user connecting to OpenShift and creating a new project for hosting the builds/tests for his upcoming fix.
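On the command line that would simply be (the project name is illustrative):

$ oc login -u developer
$ oc new-project inventory-dev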

Just before the "commit phase," we also demonstrated how to create a webhook for the Inventory BuildConfig that automatically starts the local developer's build upon each commit (this step, of course, is optional, as one may prefer to trigger the build of the code manually).
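To wire that up, you typically grab the generic webhook URL exposed by the BuildConfig and paste it into the repository settings in Gogs; for example (BuildConfig and project names assumed from above):

$ oc describe bc/inventory -n inventory-dev | grep -A 1 "Webhook Generic"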

We introduced the concept of Source-to-Image (s2i) and how it works under the hood in OpenShift, helping the developer in the code build process.

Before the pull request, we also checked that the fix we applied was actually working, using the integrated Swagger interface for the Inventory microservice.

We finally logged in to the git server as the "manager" user to approve the incoming developer's pull request.

Jenkins’ pipelines, now.

Back in OpenShift, we then introduced the pipeline concept, its usage, and how the inventory pipeline moves the applied fix into production in a better and safer way.

The Inventory-Pipeline will:

Take the team’s code repository and start a new Build in the Inventory-Integration project that was set up by the “operation” user.

Deploy and run tests on the "Inventory-Integration" project. This step overlaps a bit with what we did in the "developer" self-service environment. However, note that in theory you can have multiple fixes/features coming from many developers (each working in his own environment), and the integration environment is used to test that everything plays nicely together.

Promote the container image (that we just built) to the "CoolStore TEST" project and run tests on it.

Deploy the container image in "CoolStore PROD" without switching the live production traffic (the OCP route) to it (i.e., a "silent" prod deploy).

Wait for “Go Live”.

Switch traffic to the new running container by moving the production route to the service pointing to the latest version of the service.

As you can see, we've set up two versions of the running Inventory container following the principles of the Blue/Green deployment model. Using this deployment model, one version of our application is running live and the other is idle, ready to be switched on in the case of a new deployment. The advantages include zero downtime, testing your software in a real environment, and potentially rolling back easily to a previous version, if necessary.
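To give a feel for how such a pipeline can be expressed, here is a heavily simplified Jenkinsfile sketch driving the same steps with plain oc commands. The stage layout, the project names (inventory-integration, coolstore-test, coolstore-prod), and the inventory-green DeploymentConfig/Service names are illustrative assumptions, not the exact pipeline from the repo:

node {
  stage('Build in Inventory-Integration') {
    sh "oc start-build inventory -n inventory-integration --follow"
  }
  stage('Promote image to TEST and test') {
    // promote the freshly built image and redeploy it in the TEST project
    sh "oc tag inventory-integration/inventory:latest coolstore-test/inventory:latest"
    sh "oc rollout latest dc/inventory -n coolstore-test"
  }
  stage('Silent deploy to PROD (green)') {
    // deploy the green version without touching the production route
    sh "oc rollout latest dc/inventory-green -n coolstore-prod"
  }
  stage('Go Live') {
    input message: 'Switch production traffic to the green deployment?'
    // move the production route to the green service
    sh "oc patch route/inventory -n coolstore-prod -p '{\"spec\":{\"to\":{\"name\":\"inventory-green\"}}}'"
  }
}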

In order to complete the process, we ran the "inventory-pipeline," then tested the "Inventory-Green" version just deployed in prod (as idle) through a temporary route. Finally, we approved the Go-Live by clicking on the manual input step required by the Jenkins pipeline.

Defend your application using the Circuit Breaker

The last demonstration was on the Circuit Breaker features provided by JBoss Fuse Integration Service (FIS) and Netflix OSS Hystrix library.

In the production project, we also deployed two containers from the Netflix OSS project.

You’ll find below a snippet of code showing how this library (hystrix) can be used in a Camel route (on JBoss FIS).
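The original snippet isn't preserved in this copy. A minimal Camel Java DSL sketch of the same idea would look like the following; the endpoint URIs and the fallback string are illustrative, not the exact route from the demo:

import org.apache.camel.builder.RouteBuilder;

public class InventoryRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:getInventory")
            // protect the downstream call with a Hystrix circuit breaker
            .hystrix()
                .to("http4://inventory:8080/api/availability")
            // static fallback returned when the circuit is open or the call fails
            .onFallback()
                .transform().constant("Inventory unavailable")
            .end();
    }
}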

After a quick introduction, we demonstrated that after shutting down the live Inventory container (by scaling it down to 0 pods), the Inventory service was unavailable but the whole application kept running, simply reporting the message "Inventory unavailable." That's because in our Camel/Hystrix route, we defined a static fallback (a string) in the case of service unavailability.

Looking at the Hystrix console, the circuit for the Inventory service is "open."

That’s all!

Thank you so much if you joined OpenShift Container Platform Roadshow in Milan/Rome and see you soon!

About Alessandro

Alessandro Arrichiello is a Solution Architect for Red Hat Inc. He has a passion for GNU/Linux systems, which began at age 14 and continues today. He works with tools for automating enterprise IT: configuration management and continuous integration through virtual platforms. He's now working on a distributed cloud environment involving PaaS (OpenShift), IaaS (OpenStack) and process management (CloudForms), container building, instance creation, HA services management, and workflow builds.

About Giuseppe

Giuseppe Bonocore is a Solution Architect for Red Hat Inc., working on topics like application development, JBoss Middleware, and Red Hat OpenShift. Giuseppe has more than 10 years of experience with open source software in different roles. Before joining Red Hat, Giuseppe worked in technical leadership roles on many different international projects, deploying open-source-based projects all across Europe.

How to Get Developers to Adopt Your Product
https://developers.redhat.com/blog/2017/04/20/how-to-get-developers-to-adopt-your-product/
Thu, 20 Apr 2017

Recently, I participated in a focus group where developers were asked to discuss how they make technology adoption decisions. Even "the big guys" seem unsure of how to get developers to notice and adopt their products. So, in this post, I'm going to try to reduce our learning and adoption process down to some concrete steps. The truth is, we don't just pick up tools, components, libraries, or languages just to complete a particular task or project. In truth, any technology we adopt has to help us do one or more of three important jobs. The more of these jobs your product can do, the more likely developers will pick it up and stick with it.

Job #1. Staying ahead of the curve.

As developers, we're constantly navigating a meteor shower of new frameworks, platforms, and libraries hurled at us at blinding speed. Somehow, we are driven and expected to possess a "technology radar," a sense of what the next big platform or paradigm is. When the business need for a new technology arises, we must be ready to leverage that new technology or fall behind other developers who are.

So, how does a developer sense which way the technological winds are blowing? We are ordinary mortals (however reluctant we may be to admit it), and the mechanisms by which we perceive "the next big thing" are often products of habit and evolution rather than conscious choice. Here are a few:

Influencers. Everyone is promoting something, but we are most likely to listen to those who have already earned our trust or gained our rapport. These can be highly esteemed thought leaders (e.g. Martin Fowler) or senior colleagues and friends.

The Availability Heuristic. We tend to accept the familiar thing as the truth. So when we seek to read up on some hot new web framework or programming language, we may well choose to catch up on the first one that comes to mind.

These processes can lead us to determine which platform, language or tool to get to know in order to “stay up to date.” It’s important to note here that the desired outcome of “staying up to date” is not mastery of the tool being studied. Rather, the desired outcome is an assessment of the features and developer experience of the software, as well as its fitness for any potential development uses.

Thus, for staying up to date, a developer need not even employ the technology itself. Rather, he or she may most immediately rely on the tool's website, documentation, tutorials, and examples. Therefore, it is crucial for technology products to reveal their value up front. An application example showing a real-world scenario reveals the value of a programming language much more quickly than a lengthy tutorial that goes through a high-level overview, installation instructions, and a contrived "hello world"-like program.

Once we've assessed the value proposition of the tool, we may become curious enough to investigate it more extensively, or we may file away what we've learned until it's time for job #2.

Job #2: Getting something into production.

Most developers won’t get to leverage every hot new technology in their work life. But occasionally, many of us do get to choose a platform or a technology to use in our day jobs, delivering value to our employers and customers. If the job at hand cannot be done with a technology already known to us, we are forced to repeat Job #1 at an accelerated pace. The unconscious permeation by influencers, social proof, and availability becomes replaced by intentional research in this scenario. The ability of the candidate product to reveal its value up front becomes even more crucial.

However, chances are we‘ve already come across a technology that’s a candidate for the job at hand.

Most likely, it’s a technology we’ve already used in a production-bound project. Such a technology has a gargantuan home court advantage. The greater the pressure, the more likely we are to err on the side of the familiar.

When the job calls for a technology in a category we haven’t used yet for a production-bound job, that home court advantage goes to the technology we’ve become familiar with in the process of “staying up to date.” So, if one candidate technology has become familiar to us through influencers, social proof, and accessibility while another has not, our preference toward the familiar technology will be disproportionate to its objective merits.

Once we’ve selected a technology for a production-bound job, our experience and outcomes with that technology serve to determine whether the technology is “rehired” for the next production-bound job. The totality of these experiences can play a part in determining whether the technology is adopted for the ultimate and most glorious job of all, job #3.

Job #3. Keeping us happily employed.

Why do so many engineers, teams, and enterprises still learn and use Java, all while whining continuously over its verbosity, type erasure, mammoth footprint, and more issues? The Tiobe index, which tracks programming language adoption, shows Java well ahead of the herd through most of the last 15 years. I’ve already discussed the attraction of familiarity, but 15 years is enough time for any technology to become significantly ubiquitous and familiar. And the next two runners up, C and C++, are even older.

So why are there so many Java developers? Because it "runs everywhere?" Possibly, but most of the time Java applications run on the enterprise backend, which can usually be made to conform to tight specifications. What most of us will not readily acknowledge – but know to be true – is that our most important job as engineers is not to deliver value to an employer or a customer. An engineer's most important job is to get and to keep a job. Java has a proven track record of keeping engineers, including this engineer, happily employed for many years. This is why, for all the latest-and-greatest languages that have come since, Computer Science graduates today come into the workforce knowing Java.

How did Java get to this lofty position? In the enterprise, IBM, BEA, and others marketed proprietary J2EE solutions aggressively to the enterprise. Broad enterprise adoption created a large developer ecosystem, which gave rise to tooling and libraries that have kept Java developers sufficiently effective and therefore content to maintain the ecosystem.

Here's an even more striking example of the power of enterprise adoption: Salesforce. Some may struggle with the notion of resting one's career on a closed, proprietary cloud platform. However, according to an IDC analysis, from the end of 2015 through 2020, the Salesforce ecosystem will generate 1.9 million jobs among Salesforce customers. A 2014 breakdown by Business Insider found that "Salesforce Architect" was the most valuable skill to have on a tech resume, in terms of impact on salary. There are entire recruiting firms dedicated specifically to placing candidates with Salesforce skills. Despite the risk of subjecting one's career to vendor lock-in, the demand for Salesforce in the enterprise has led to its adoption by hundreds of thousands of developers.

Enterprise adoption is not a necessary condition for developer adoption, but it is a sufficient condition. So when a technology is effectively marketed to and reliably supported for the enterprise, developer adoption follows.

If you build it, will they come?

No technology, however great, is adopted in a vacuum. To be adopted widely, a technology needs to register on developers’ technology radars, be adopted by business, and work effectively and reliably in production. Excelling at all three of these jobs may be a tall order, but when this miraculous confluence happens, the entire tech world takes notice.

For a framework for building enterprise Java microservices, visit WildFly Swarm.