Blog Articles

Andrew Block

Recent Posts

Red Hat Summit provides an experience for every type of attendee: attending as many presentations as possible to glean best practices in open source technology, visiting booths in the Partner Pavilion to see how vendors are enabling open source solutions (or to snag as much swag as possible), or joining hands-on labs and training sessions to gain practical experience with experts on hand to provide guidance. 2017 was my fourth Red Hat Summit, and at each of my prior appearances I had the opportunity to participate in both traditional breakout sessions and demonstrations at the Red Hat booth in the Partner Pavilion. One aspect of Red Hat Summit I had yet to experience was one of the many hands-on labs available to attendees. For those unfamiliar with them, a hands-on lab at Red Hat Summit is a two-hour, instructor-led course that lets attendees test drive many popular open source tools and technologies, giving them firsthand experience with many of the concepts mentioned during Summit. What I did not anticipate going in was the amount of coordination and hard work needed for the labs to flow seamlessly. Since attendees may not appreciate this effort either, this write-up is intended to provide insight into what it takes to put together and execute a successful hands-on lab at Red Hat Summit.

Note: This article describes the functionality found in the Red Hat Container Development Kit 3.0 Beta. Features and functionality may change in future versions.

In a prior article, Adding Persistent Storage to the Container Development Kit 3.0, an overview was provided for utilizing persistent storage with the Red Hat Container Development Kit 3.0, the Minishift-based solution for running the OpenShift Container Platform from a single developer machine. In that solution, persistent storage was applied to the environment by pre-allocating folders and assigning Persistent Volumes to those directories using the HostPath volume plugin. While this provided an initial entry point into how persistent storage could be utilized within the CDK, a number of issues limited the flexibility of the approach:

Directories must be created manually on the file system to store files persistently.

Persistent Volumes must be created manually and associated with the previously created directories.

The primary theme in these limitations is the manual creation of storage-related resources. Fortunately, OpenShift has a solution that can automate the allocation of these resources using a storage plugin that is common in many environments.
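To make the contrast concrete, here is a rough sketch of how a developer requests storage once automated provisioning is in place: rather than pre-creating a directory and a Persistent Volume, only a claim is submitted and the platform supplies the backing volume. The name and size below are illustrative assumptions, not taken from the original article.

```yaml
# Illustrative PersistentVolumeClaim; OpenShift binds it to an
# automatically provisioned volume when a provisioner is configured.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # hypothetical size
```

Once the claim is bound, any pod that references it receives the storage without anyone having touched the file system by hand.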

Note: This article describes the functionality found in the Red Hat Container Development Kit 3.0 Beta. Features and functionality may change in future versions.

The Red Hat Container Development Kit (CDK) provides an all-in-one environment to not only build and test Docker containers, but to make use of them on Red Hat OpenShift Container Platform; all from a single developer's machine. From its inception, the CDK used Vagrant as the provisioning platform. Starting with version 3.0, the CDK makes use of Minishift as the underlying provisioner. The transition to the Minishift-based CDK 3.0 reduces the number of dependencies that must be installed and configured: only a hypervisor such as VirtualBox or KVM is now required.
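With a hypervisor in place, bringing up the environment is reduced to a couple of commands. The invocation below is a sketch based on the CDK 3.0 Beta workflow and may differ between releases:

```
# One-time setup of the CDK artifacts (CDK 3.0 Beta)
$ minishift setup-cdk

# Start a local OpenShift cluster on the hypervisor
$ minishift start
```

Compare this with the Vagrant-based releases, which required Vagrant itself plus plugins to be installed and kept in sync before the environment could start.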

In an earlier article, Debugging Java Applications using the Red Hat Container Development Kit, we discussed how developer productivity can be improved by remotely debugging containerized Java applications running in OpenShift and the Red Hat Container Development Kit. Not only does remote debugging provide real-time insight into the operation and performance of an application, it also reduces the cycle time a developer faces while working through a solution. Included in the discussion were the steps necessary to configure both OpenShift and an integrated development environment (IDE), such as the Eclipse-based Red Hat JBoss Developer Studio (DevStudio). While the majority of these actions were automated, several manual modifications, like configuring environment variables and exposing ports, needed to be completed to enable debug functionality. Through advances in the Eclipse tooling for OpenShift, most if not all of these manual steps have been eliminated, enabling a streamlined process that offers even more functionality out of the box.

Red Hat JBoss Developer Studio Integration

Enhancements made in Red Hat JBoss Developer Studio now provide full lifecycle support of the Red Hat Container Development Kit, including starting and stopping the underlying Vagrant machine, which eliminates the need for the user to execute commands inside a terminal. To start the CDK from within DevStudio, open an existing or new workspace, then open the Servers view by navigating to Window -> Show View and selecting Servers on the menu bar. With the view now open, right-click inside it and select New -> Server, then, under the Red Hat JBoss Middleware folder, select Red Hat Container Development Kit. Keep the default host name of localhost, optionally choose a name to represent the CDK connection, and select Next. On the next dialog, two items must be configured before the CDK can be used:

Containerization technology is fundamentally changing the way applications are packaged and deployed. The ability to create a uniform runtime that can be deployed and scaled is revolutionizing how many organizations develop applications. Platforms such as OpenShift also provide additional benefits such as service orchestration through Kubernetes and a suite of tools for achieving continuous integration and continuous delivery of applications. However, even with all of these benefits, developers still need to be able to utilize the same patterns they have used for years in order for them to be productive. For Java developers, this includes developing in an environment that mimics production and the ability to utilize common development tasks, such as testing and debugging running applications. To bridge the gap developers may face when creating containerized applications, the Red Hat Container Development Kit (CDK) can be utilized to develop, build, test and debug running applications.

Red Hat’s Container Development Kit is a pre-built container development environment that enables developers to create containerized applications targeting OpenShift Enterprise and Red Hat Enterprise Linux. Once the prerequisite tooling is installed and configured, starting the CDK is as easy as running the “vagrant up” command. Developers immediately have a fully containerized environment at their fingertips.

One of the many ways to utilize the CDK is to build, run, and test containerized applications on OpenShift. Java is one of the many languages supported on OpenShift, and Java applications can run in a traditional application server, such as JBoss, as well as in a standalone fashion. Even as runtime methodologies change, being able to debug running applications to validate functionality remains an important part of the software development process. Debugging a remote application in Java is made possible by the Java Debug Wire Protocol (JDWP). By adding a few startup arguments, the application can be configured to accept remote connections, for example from an Integrated Development Environment (IDE) such as Eclipse. In the following sections, we will discuss how to remotely debug an application deployed to OpenShift running on the CDK from an IDE.
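For reference, the JDWP startup arguments mentioned above generally take the following shape; the port number is only an example, and where the option is set (command line, JAVA_OPTS, or an image-specific environment variable) depends on how the application is launched:

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787
```

With server=y the JVM listens for an incoming debugger connection on the given port, and suspend=n lets the application start without waiting for a debugger to attach.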

With recent changes to the .NET ecosystem, developers of popular languages such as C# now have the ability to develop and deploy .NET applications across multiple platforms, including OS X and Linux. This is made possible by .NET Core, a modular implementation of the .NET Framework capable of supporting both web and console applications.

Aside from opening up opportunities to a new pool of potential developers, .NET Core also enables these applications to take advantage of certain technologies they were previously restricted from. One of these technologies in particular is Linux containers. Containerized .NET applications can now benefit from a wide range of features built into containerization technologies such as Docker, including a rapid application deployment cycle and portability across machines, whether physical, virtual, or hosted in a cloud environment.
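As an illustration of how little is needed to containerize such an application, a Dockerfile for a published .NET Core app can be as small as the sketch below. The base image tag and assembly name are assumptions for the example, not prescriptions:

```dockerfile
# Hypothetical image tag and assembly name; adjust to your build output.
FROM microsoft/dotnet:1.1-runtime
WORKDIR /app
COPY out/ .
ENTRYPOINT ["dotnet", "app.dll"]
```

Building and running this image behaves the same on a developer laptop, a virtual machine, or a cloud host, which is exactly the portability benefit described above.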

In addition, since these applications are running in containers, they are also eligible to be run in OpenShift, Red Hat’s Platform as a Service product. OpenShift provides the flexibility to manage and run containerized applications while also providing the tools necessary for developers to be productive. It is this ecosystem that gives developers the flexibility and freedom to easily build and deploy applications, without needing to be concerned about the underlying infrastructure.

Cloud-based technology offers the ability to build, deploy, and scale applications with ease; however, deploying to the cloud is only half the battle. How cloud applications are monitored is a paramount concern for operations teams.

When issues arise, teams and their monitoring systems must be able to detect, react to, and rectify the situation. CPU, system memory, and disk space are three common indicators used to monitor applications, and are typically reported by the operating system.

However, for Java applications – which we'll be focusing on in this article – most solutions tap into JMX (Java Management Extensions) to monitor the Java Virtual Machine (JVM). Applications leveraging Java xPaaS middleware services on OpenShift have built-in functionality for monitoring and managing their operation.
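To give a concrete sense of what JMX exposes, the snippet below queries the JVM's platform MXBeans in-process; these are the same beans a remote JMX console reads over the wire. It is a generic illustration, not tied to any particular xPaaS image:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.RuntimeMXBean;
import java.lang.management.ThreadMXBean;

public class JvmMetrics {
    public static void main(String[] args) {
        // Heap usage: the same numbers a JMX console plots over time.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.println("heap used (bytes): " + heap.getUsed());

        // Live thread count, useful for spotting leaks or runaway pools.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + threads.getThreadCount());

        // Uptime of this JVM in milliseconds.
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        System.out.println("uptime (ms): " + runtime.getUptime());
    }
}
```

A monitoring agent or the middleware image's built-in tooling reads these same attributes remotely, typically over an exposed JMX port or, on OpenShift, through the image's management interfaces.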

Monday kicked off the second edition of the DevOps Enterprise Summit in San Francisco. Over the course of the three-day event, more than 100 speakers will share their experiences with lean principles and continuous delivery with over 1,000 attendees from around the world. The first keynote of the day featured Heather Mickman and Ross Clanton from Target, who provided an update to their presentation at the inaugural event last year on the ongoing transformation of IT engineering at the retail giant. Where a pattern of offshoring work or leveraging contractors had once produced lower-quality results, Target has committed over the past four years to reinvent its IT organization and build a core group of engineers from within; all to the tune of over $1 billion. Their efforts during this time evolved over the course of four primary initiatives: