Create your workspace using your stack, embedding the JEE project hosted in a Git repository.

For this second part, we’ll start configuring the workspace by adding some helpful settings and commands for building and running a JBoss EAP project. We’ll then see how to use the local JBoss EAP instance for deploying and debugging our application. Finally, we’ll create a factory so that we can share our work and offer an on-demand, preconfigured development environment to anyone who needs to collaborate on our project.

Configuring your JBoss EAP workspace

In the previous article, we ended up with a workspace that was configured for Java but had some missing dependencies. An extra step is usually necessary: indicating that you’re dealing with a Maven project. This has to be done only once, by the user who set up the workspace. To do so, go to Project > Update Project Configuration and enable Maven under the JAVA section. Once that is done, an additional External Libraries item appears in your project tree. You can now open Java files and play around with code navigation, Java completion, and so on.

You should now be able to launch your first build command. Open the Commands Palette using Run > Commands Palette or the Shift+F10 shortcut. You’ll see the build command was defined when you created the workspace and you may double-click it to run it.
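The generated build command is typically a plain Maven invocation along these lines (the exact command depends on how the workspace was created; the project path here is an assumption to adjust):

```shell
# Package the application; the WAR ends up under the project's target/ directory.
# /projects/<your-project> is a placeholder for your actual project directory.
mvn clean install -f /projects/<your-project>/pom.xml
```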

After a few seconds, you’ll see the successful build in the build command’s dedicated console.

Nice! You can now start modifying code and do some refactoring. We’re able to edit code, compile it, and package it, but let’s see how to test it locally within our JBoss EAP instance.

Adding some JBoss EAP commands

Let’s start by adding a new command for starting the JBoss EAP instance that is included within our stack image. Looking just above the project tree view, you’ll find an icon on the right that allows you to open the commands management view. You’ll see that commands are categorized into BUILD, TEST, RUN, DEBUG, DEPLOY, and COMMON goals. In the RUN section, create a new Custom command that you’ll call start-eap and add the command below:
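A plausible version of this start command, assuming the stack image installs JBoss EAP under /opt/eap (adjust the path to your stack’s layout):

```shell
# /opt/eap is an assumed install path for the EAP distribution in the stack image.
# Binding to 0.0.0.0 lets the workspace's server route reach the instance
# from outside the container.
/opt/eap/bin/standalone.sh -b 0.0.0.0
```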

You can now launch this command through the Command Palette or through the blue Run arrow on the menu bar. The command is executed in its own console, and you should see output like the following, indicating that your JBoss EAP 7.1 instance is up and running.

Now let’s deploy our application to the running instance. For that, let’s create a new command within the DEPLOY section and call it copy-war. Add the command below and execute it.
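A plausible version of this command, assuming the project is checked out under /projects and EAP is installed under /opt/eap (both paths are assumptions, and <your-project> is a placeholder):

```shell
# Copy the Maven-built WAR into EAP's hot-deployment folder; the deployment
# scanner picks it up automatically within a few seconds.
cp /projects/<your-project>/target/*.war /opt/eap/standalone/deployments/
```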

This copies the previously built WAR archive into our JBoss EAP instance’s deployments folder, and the instance should hot-deploy it within a few seconds. You may now want to check your application and play with it. Just to the right of the command console, click the + button and choose Servers. This opens a new view displaying the URLs corresponding to the different servers attached to your workspace. Remember the eap server we declared in the stack configuration? This information is used by Red Hat CodeReady Workspaces to create a new OpenShift route that allows you to access your deployed application!

Just copy and paste the URL into your browser and you should see our test application live.

So far, we have only created simple commands that deploy a packaged WAR, but you can also define commands that let you work with an exploded directory structure and hot-reload JSPs and static resources. For example, I use the following build-dev command to initialize a directory structure within the deployments folder of the JBoss EAP instance:
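A sketch of such a build-dev command, assuming Maven, a project under /projects/<your-project>, and EAP under /opt/eap (paths and artifact names are assumptions). The .dodeploy marker file tells the EAP deployment scanner to deploy the exploded directory:

```shell
# Build the project, then copy the exploded WAR into EAP's deployments
# folder and mark it for deployment.
cd /projects/<your-project> && mvn clean package -DskipTests
mkdir -p /opt/eap/standalone/deployments/ROOT.war
cp -R target/<artifact-name>/* /opt/eap/standalone/deployments/ROOT.war/
touch /opt/eap/standalone/deployments/ROOT.war.dodeploy
```

After this initial setup, JSPs and static resources copied into the exploded ROOT.war directory are picked up without a full redeploy.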

Debugging

Red Hat CodeReady Workspaces tooling can also be used for debugging your application. To do that, create a new command as usual, within the DEBUG section. Let’s call it start-eap-debug and give it the following command, including the debug flag and the port 8000 we used within our stack definition:
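Assuming the same /opt/eap install path as before, standalone.sh’s --debug flag starts the server with the debug agent listening on the given port:

```shell
# Start EAP with remote debugging enabled on port 8000, matching the port
# declared in the stack definition. /opt/eap is an assumed install path.
/opt/eap/bin/standalone.sh -b 0.0.0.0 --debug 8000
```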

Now start the JBoss EAP instance in debug mode. Before starting it up again, you may want to stop the running instance: look for the start-eap running process in the EXEC menu bar at the top and click the blue square. Once your instance is launched in debug mode, you have to start a debug session within the IDE. Before doing so, use the Edit Debug Configurations item in the Run menu to configure a connection to a remote JBoss EAP instance using port 8000, as shown below.

You can now start a debug session through the Run > Debug > Remote EAP menu item. The IDE connects to localhost:8000 and switches to the debug perspective. Open a Java class such as the /src/main/com/openshift/service/DemoResource.java file and click line 44 to place a breakpoint. Now go to the browser tab hosting your app and click the Log Info button; you should see the debug session start in the workspace and fill up the Frames and Variables panels.

Sharing your work with a factory

Setting up everything was not that hard but it takes a little time and can be error-prone. Red Hat CodeReady Workspaces offers the concept of a factory in order to be able to reproduce and duplicate a workspace configuration. Using factories, you can easily onboard new collaborators for your project by making everything available with a single click!

Let’s create a factory for our workspace. From the Red Hat CodeReady Workspaces dashboard, choose the Factories item in the left vertical menu, then give your factory a name and select the workspace you want to use as a basis. Choose CREATE and then explore the factory properties in the detail screen:

The most important attributes of a factory are its URLs, which can be used for launching a new workspace embedding all the configuration and commands we added to the original workspace. A URL may be combined with nice badges to offer instant access for any README or wiki page.

Just copy and paste one of the URLs into a browser tab or click a badge, and you’ll see a nice crane animation building your own workspace on demand, allowing you to quickly start collaborating on a new project.

Now that your collaborator’s workspace is up and running, she can start coding and easily contribute pull requests to your original source code repository. But I’ll leave that topic for a later article.

Get started!

We have seen through this tour how Red Hat CodeReady Workspaces allows you to configure a development environment and easily replicate and distribute it across your organization. The embedded cloud/browser-based IDE provides everything you need to quickly start collaborating on projects while providing security through centralization of source code and authenticated access. Red Hat CodeReady Workspaces gives you greater security and faster onboarding, and it ensures your code works on all your developers’ machines too.

Best of all, it’s easy to sign up for the beta. Visit the product page to get the code and everything you need to know about the product.

Eclipse Che 7 is Coming and It’s Really Hot (4/4)
https://developers.redhat.com/blog/2018/12/21/eclipse-che-7-is-coming-and-its-really-hot-4-4/ (Fri, 21 Dec 2018)

Eclipse Che 7 is an enterprise-grade IDE that is designed to solve many of the challenges faced by enterprise development teams. In my previous articles, I covered the main focus areas for Eclipse Che 7, the new plugin model, and kube-native developer workspaces. This article explains security and management of Eclipse Che 7 in enterprise deployment scenarios.

Enterprise Grade Cloud IDE

Eclipse Che has gained a great deal of interest in large enterprises that are moving to containers and want to standardize the developer workspace and remove intellectual property (source code) from hard-to-secure laptops. There are a number of features needed in order to make Che a simple-to-manage tool for these large and often private environments. Organizations want to secure workspaces, deploy them on new infrastructure, and make it easier for teams to collaborate while maintaining developer autonomy.

For those reasons, we are working on a number of different facets to make Eclipse Che easier to run and simpler to administer and manage.

Eclipse Che 7 — timing?

There is A LOT coming with Eclipse Che 7. We spent a lot of time redefining the project’s foundations for the future, making it more enjoyable to use, easier for large enterprises to adopt, and better able to support its community’s growth.

We are all very excited about this new version. In the following weeks, you’ll be reading more about the new capabilities and how they have been built. EclipseCon Europe was a great event where we were able to unveil a lot of the work we’ve been doing. Now it is time to share it with a broader audience.

It’s available today: when you create a new workspace from the latest Eclipse Che release, you can select Che 7 stacks. You can test it now, and you can post feedback or report bugs — those are always helpful and valuable!

Eclipse Che 7 early beta will be available in February with GA-level Che 7 planned for March.

Eclipse Che 7 is Coming and It’s Really Hot (3/4)
https://developers.redhat.com/blog/2018/12/20/eclipse-che-7-is-coming-and-its-really-hot-3-4/ (Thu, 20 Dec 2018)

With a new workspaces model and full “dev-mode” for application runtimes, Eclipse Che becomes the first kube-native IDE!

In Part 1 of this series, I highlighted the main focus areas for Eclipse Che 7. Part 2 covered the new plugin model. This article explains the different changes that have been introduced for Che workspaces, in order to provide full “dev-mode” capabilities on top of application runtimes by sidecaring developer tooling.

Kubernetes native IDE

This new version of Eclipse Che makes it the first Kubernetes native IDE.

Developers using Eclipse Che work with containers directly in their developer workspaces. Che workspaces provide a “dev mode” layer on top of the containers used in production, adding IntelliSense and IDE tooling.

The work on Workspace.Next allows Che to use bare application definitions (a Docker image, a Compose file, or a list of Kubernetes resources) without the need to patch them to inject the IDE services. With Workspace.Next, IDE tools are microservices packaged in their own sidecar containers, bringing their own dependencies and keeping the application’s containers untouched. The execution of IDE tools is isolated from each other and from the application’s containers. Each IDE tool now gets its own lifecycle, the ability to be easily upgraded or switched, and, coming soon, its own scalability mechanism.

Eclipse Che 7 is Coming and It’s Really Hot (2/4)
https://developers.redhat.com/blog/2018/12/19/eclipse-che-7-is-coming-and-its-really-hot-2-4/ (Wed, 19 Dec 2018)

With a new plugin model and compatibility with VS Code extensions, Eclipse Che is on fire! In my last blog post, we highlighted the main focus areas of Eclipse Che 7. This blog post provides a deep dive on the new plugin model of Eclipse Che 7.

New Plugin Model

Eclipse Che is a great platform to build cloud-native tools. For Eclipse Che to be successful in its mission, it requires a strong extensibility model with an enjoyable developer experience for contributors.

In the past, Eclipse Che’s extensibility was focused on white-labelling use cases. ISVs were able to customize Eclipse Che, building their own version by completely customizing it and distributing it to their own audiences. While that extensibility approach has been great for many partners, it has always been seen as complex, with a technology stack (especially GWT in the IDE) that resulted in a non-optimal developer experience. The lack of dynamic extensibility also forced a Che plugin to be packaged in a “Che assembly” in order to make it available to end users. There was no way to quickly build a plugin, package it so that it could be installed in a running Che, and make it available without rebuilding all of Che.

To address these issues we’ll be phasing out the GWT-based IDE in favour of another open Eclipse Foundation IDE project: Eclipse Theia. As introduced earlier, Eclipse Theia is a framework to build web IDEs. It is built in TypeScript and will give contributors a more enjoyable experience with a programming model that is more flexible and easier to use, and makes it faster to deliver their new plugins.

Our main goal is to provide a dynamic plugin model. In Che, a user shouldn’t need to worry about the dependencies needed for the tools running in their workspace — they should just be available when needed. This means that a Che plugin provides its dependencies, its back-end services (which could be running in a sidecar container connected to the user’s workspace), and the IDE UI extension. By packaging all these elements together, the user’s impression is that Che “magically” provided language services and the developer tooling they need for their workspace.

VSCode Extensibility Compatibility

There is one more important aspect of the plugin model: we want to rationalize the effort for a contributor who wants to build a plugin and distribute it to different developer communities and tools. For that purpose, we have introduced a plugin API into Eclipse Theia that allows compatibility with the extension points from VS Code. As a result, it becomes much easier to bring an existing plugin from VS Code onto Eclipse Che. The main difference is in the way the plugins are packaged: on Eclipse Che, the plugins are delivered with their own dependencies in their own container.

See the video on the SonarSource VSCode plugin:

In order to expose these plugins and make them consumable, we will build a plugin marketplace. This will be open to the community, but it will also allow private Che installs behind firewalls to create their own in-house marketplace with only the plugins that are appropriate for their users. Today, the plugins live in a plugin registry hosted in a GitHub repository.

Self Hosting

Building plugins for Che must also be a fun experience, and turnarounds must be as fast as possible in the developer inner loop (the time between introducing a change and seeing/debugging the result). We needed to improve on our previous GWT-based IDE, so we built a complete Hosted Mode that allows Che contributors to build Che directly from Che. It provides the complete lifecycle: from creating a new plugin, to coding it, to debugging it. The team building this new capability is already using it, and they love it. They also feel more productive than in the past.

See this video on plugin development for Eclipse Che:

Try Eclipse Che 7 Now!

Want to give the new version of Eclipse Che 7 a try? Try the following:

Eclipse Che 7 is Coming and It’s Really Hot (1/4)
https://developers.redhat.com/blog/2018/12/18/eclipse-che-7-coming-part-1/ (Tue, 18 Dec 2018)

A better plugin model, a new IDE, and kube-native workspaces: Eclipse Che is on fire!

With this article, I am starting a series of articles highlighting the new capabilities which will be introduced with Eclipse Che 7. This article provides an overview of the areas of focus for Eclipse Che 7 as well as its new IDE and ability to use different IDEs such as Jupyter.

Intro

What a year for Eclipse Che! Release after release, Eclipse Che gets better and better thanks to the engagement of the community and your feedback.

As an open source project, the core values of Eclipse Che are to:

Accelerate project and developer onboarding: As a zero-install development environment that runs in your browser, Eclipse Che makes it easy for someone to join your team and contribute to a project.

Remove inconsistencies between developer environments: No more: “but it works on my machine….” Your code works (or doesn’t) exactly the same way in everyone’s environment.

Provide built-in security and enterprise readiness: As Eclipse Che becomes a viable replacement for VDI solutions, it must be secure and it must support enterprise requirements such as role-based access control (RBAC) and the ability to remove all source code from developer machines.

At the beginning of 2018, we shipped Eclipse Che version 6.0. That was a major milestone, adding capabilities needed by developer teams and enterprises that wanted to benefit from shared and rationalized developer environments. You can read more in the release notes for Eclipse Che 6.0.

A few months ago, we announced during CheConf 18.1 the beginning of a new journey and a new chapter for Eclipse Che version 7. Seeing the interest from enterprises already using Eclipse Che and from the community that is building cloud-native applications, we organized the Che roadmap into 4 main areas:

IDE.next: Updates to the editor to increase the joy of development.

Plugins: Features to drive further growth in the Che ecosystem.

Workspace.next: IDE tools running as microservices in containers to improve the fidelity between developer workspaces and production environments.

Enterprises: Features to support large scale use of Che.

IDE.Next

We have integrated Eclipse Theia into Che to replace the GWT based IDE. Eclipse Theia has the foundation required to help us to enrich Eclipse Che.

Here is a small video showing the new IDE:

Only a few capabilities are shown in this video and there are a lot more to come. The most exciting ones are:

Monaco-based editor: a blazing-fast and responsive editor, CodeLens, and much more

However, there is a substantial feature gap between Eclipse Theia and our current Che IDE. Most of this year has been spent adding needed features to Theia so that it can fully replace the current IDE. The Eclipse Che contributors have spent more than five years building web IDEs in the cloud. So when we decided to switch to Eclipse Theia, we naturally wanted to make good use of that experience to make the new IDE really substantial. And enterprise grade.

We’ve been working hard to bring:

Debug Adapter Protocol

Language Server Protocol

Commands

Preferences

Keybindings

Textmate Support

Security

In the following months, that new IDE will become the default IDE for your workspaces.

Different IDEs for different use cases

There is one more thing. Che will still provide a default web IDE for workspaces, but we also did important work in order to decouple the IDE so that it is possible to plug a different IDE into Che workspaces. There are a lot of cases where the default IDE will not cover the use cases of your audience, or you might have stakeholders who are using a dedicated tool that covers their needs instead of using an IDE. In the traditional Eclipse IDE world, that was done with RCP applications.

With Eclipse Che 7, you’ll be able to plug any tool you want into a Che workspace:

It can be based on Eclipse Theia (which is a framework for building web IDEs), such as the popular Sirius on the web; see the YouTube video.

Check out Red Hat CodeReady Workspaces for Red Hat OpenShift (Beta)

Built on the open-source Eclipse Che project, Red Hat CodeReady Workspaces provides developer workspaces, which include all the tools and the dependencies that are needed to code, build, test, run, and debug applications. The entire product runs in an OpenShift cluster hosted on-premises or in the cloud and eliminates the need to install anything on a local machine.

Red Hat Summit 2018: Trends in cloud-native development
https://developers.redhat.com/blog/2018/04/25/red-hat-summit-2018-future-cloud-native-development/ (Wed, 25 Apr 2018)

At Red Hat Summit 2018, learn about the top trends shaping the future of modern application development. You’ll find out how service mesh and serverless computing are continuing the evolution that started with the move to microservices architecture. Hear Burr Sutter and Brad Micklea discuss the 10 major changes that are poised to reshape the developer tools market for years to come. Gain insight as Red Hat CTO Chris Wright shares his views about how serverless, AI, and blockchain are likely to influence the future of technology.

Session Highlights:

Red Hat’s developer tools group is focused on creating compelling experiences for developers working with containers and serverless technologies. In this session, you will learn about the 10 major changes that we believe will reshape the developer tools market in the next 10 years and how our four product goals will help Red Hat customers and Red Hat OpenShift developers thrive in this new world. We’ll conclude with a demo of our cloud-native developer tooling for Red Hat OpenShift.

Attend this session where Red Hat CTO Chris Wright will provide insights into Red Hat’s research, innovation efforts, and direction related to emerging technologies, such as serverless, artificial intelligence (AI), blockchain, and more. Learn how innovative projects and technologies are evolving to help you plan and refine your IT strategies and roadmaps.

In this fireside chat, we’ll interview Clayton Coleman (Chief Engineer for OpenShift) and Brandon Philips (previously CTO of CoreOS, acquired by Red Hat) on their long term view on platforms and where we’ll be taking Kubernetes and OpenShift in the future. Clayton and Brandon have led development for most of the major technologies that power the Linux Container ecosystem today (Kubernetes, Open Container Initiative, etcd and more). Come prepared to ask questions and learn about the past, the present, and the future.

The first generation of microservices was primarily shaped by Netflix OSS and implemented through numerous Spring Cloud annotations scattered throughout your business logic. The next generation of microservices will use sidecars and a service mesh. In this session, we’ll give you a taste of Envoy and Istio, two open source projects that will change the way you write distributed, cloud-native Java applications on Kubernetes.

Then we’ll show you the power of Serverless architecture. Serverless is a misnomer; your future cloud native applications will consist of both microservices and functions, often wrapped as Linux containers, but in many cases where you, the developer, ignore the operational aspects of managing that infrastructure.

In this session, we start off building a Function-as-a-Service (FaaS) platform with Apache OpenWhisk deployed on OpenShift. With OpenShift being the de facto platform for cloud-native Java applications, we’ll explore further to see how to make cloud-native Java applications (a.k.a microservices) complement the serverless functions.

For years, the future of software architectures has been described as a proliferation of lightweight, cloud-connected devices. Many organizations are adopting microservices architecture (MSA) alongside more traditional ones to realize this future, where modular systems can be created more quickly and managed on a scale that exceeds earlier approaches. And yet, we continue to move forward. An exciting future that includes serverless is emerging. Serverless is a major shift in the way developers will build and deliver software systems by further insulating them from infrastructure concerns.

Each approach offers its own set of benefits and challenges. The reality is that most organizations will have a mixture of architectures, platforms, tools, and processes for the foreseeable future. How should you be thinking about the evolution of your application architecture and the platforms that support it?

If you do it right, developers will be able to exploit complex logic and large datasets from a variety of sources to build applications that they could never have imagined just a few years ago. Attend this session to learn what existing and emerging technologies Red Hat is exploring in this area and which ones you should be considering to tie it all together.

Microservices, data streaming, and serverless computing are trendy, and for good reason. These technologies have evolved to provide economical solutions to modern problems, and have been enabled by technical innovations, such as cloud platforms and an increased demand for data, and experiment-driven business models.

However, simply following trends is not a recipe for success. To adopt new technologies and architectural patterns efficiently, developers and architects must understand how they evolved in the larger context of messaging and event-driven architecture.

Join us for an overview of different styles of messaging and event-driven architecture, ranging from enterprise integration to event-driven microservices, data streaming, and serverless computing. We will show you how these technologies came into existence, how they evolved from each other, and what problems they solve. To keep things practical, we will show you how to build and run them on Red Hat OpenShift with Red Hat portfolio components.

Red Hat Summit 2018: Learn how other developers are producing cloud-native applications
https://developers.redhat.com/blog/2018/04/17/red-hat-summit-2018-learn-developers-producing-cloud-native-applications/ (Tue, 17 Apr 2018)

Want insights into how other organizations are building cloud-native applications and microservices? At Red Hat Summit 2018, developers from a number of different companies will be sharing their stories in break-out sessions, lightning talks, and birds-of-a-feather discussions. Learn how they solved real business problems using containers, microservices, API management, integration services, and other middleware.

In this session, you’ll hear from Sabre and USAA, participants of the OpenShift service mesh early access program. They’ll share their experiences and expectations along with the most important lessons they learned and expected next steps.

Editor’s note: If you aren’t familiar with the concept of a service mesh and how Istio can help you build resilient microservices, check out:

We will give an overview of the backend application landscape powering BMW’s Connected Car backend offering, along with the challenges and solutions found while migrating it to a cloud-ready containerized platform such as OpenShift. It is primarily Java EE-based, comprising about 300 applications and services, and is developed in a decentralized fashion involving BMW staff and partners. Cloud-nativeness is important to address the massively growing scaling and elasticity demands of new vehicles and services, as well as the ability to use hybrid cloud scenarios to address new markets. We will discuss softer aspects, such as a changing developer workflow, knowledge management, and education approaches in a decentralized environment, as well as technical solutions, such as a tool-supported migration factory approach, compliant access to log data for partners, and integration of existing application performance monitoring tools. In addition, we will explain the impact on the infrastructure and its management approaches to support the effort.

SIA (Società Interbancaria per l’Automazione) and Red Hat will share the experience of developing and putting into production the instant payments platform, which serves all European banks. This system is capable of processing 27 million payments daily with an average round trip of 20 milliseconds, and it has 99.999% availability.

In this session, we’ll cover how we made it blazingly fast, and at the same time preserved all data consistency and functional requirements needed by a payment solution, thanks to Red Hat OpenShift, Red Hat JBoss Data Grid, and Red Hat JBoss Fuse Integration Services.

BP has spent the last two years building a solid Red Hat OpenShift platform to run business-critical functions. Core to the business is the BP Integration Layer, which connects every part of BP’s energy trading systems and requires reliability.

BP’s CTO of Oil Trading will talk about how the company built its Integration Layer with Red Hat OpenShift, Red Hat JBoss AMQ, and software-defined storage using Red Hat Gluster Storage.

Deutsche Bank rolled out a minimally viable Red Hat OpenShift platform at the end of 2016. By the Fall of 2017, that platform had grown across two geographies spanning nine clusters and hosting a thousand projects and many thousands of pods.

In this session, we will compare the initial vision for the platform with what was delivered and highlight the top lessons learned. We will discuss the nontechnical challenges and solutions used to roll out new technology like Red Hat OpenShift at a large scale. Top technical challenges and pain points and their corresponding solutions will be discussed in detail.

Swiss Railways operates a substantial Red Hat OpenShift hybrid cloud installation, hosting many thousand containers. Introducing microservices at scale and moving to hybrid container infrastructures introduces a new set of challenges. What about security, life cycle, dependencies, governance, and self-service with thousands of services on a hybrid environment?

To handle the enormous growth of APIs, an API management platform based on 3scale by Red Hat on-premises and Red Hat Single Sign-On (SSO) was built, integrating internal and external IdPs. The solution is portable, scalable, and highly available, and all processes are automated and available as self-service. The platform is in production, serving multiple critical internal and external APIs and targeting 100K+ API calls per second.

In this session, you will learn about the benefits of building a fully automated self-service API management and SSO platform in a distributed, hybrid environment, how we approached the project, what challenges we faced, and how we solved them.

In this session, Bell Canada shares its journey transforming legacy applications into new modern applications. Find out how they integrated their legacy pipeline and software into a new microservices-based architecture to merge, integrate, rebuild, and refresh the customer experience. Learn how they instantly add new features in a heartbeat, as well as build, test, and deploy with a combination of agile and waterfall methods.

For 25 years, InComm has been the market leader in the prepaid and payments industry. Driven by application modernization and integration requirements, InComm has embarked on a strategic initiative to migrate from monolithic integration technologies to a more agile integration technology stack based on Red Hat OpenShift, Red Hat 3scale API Management, and Red Hat JBoss Fuse.

In this session, they will discuss the migration from their legacy environment and how they now use an agile integration stack to rapidly develop, test, and deploy new integration applications that support business-critical needs.

Red Hat Summit 2018: Speakers on the forefront of Cloud-Native application development
https://developers.redhat.com/blog/2018/04/13/red-hat-summit-2018-speakers-cloud-native-app-dev/
Fri, 13 Apr 2018

May 8th – 10th at Red Hat Summit 2018 in San Francisco, you’ll get to see, hear, and meet speakers who are working on the forefront of cloud-native application development. Some are core developers working on Red Hat products or in the upstream open source communities. A number of speakers have published books on topics such as microservices and integration. Others are working directly with developers at Red Hat customer sites, helping those organizations efficiently move to cloud-native application development. The speakers include:

Brad Micklea is the director of product management for the developer tools group at Red Hat and project lead for Eclipse Che.

He is focused on building software portfolios that enable developers to build better software more quickly and easily. He came to Red Hat through the Codenvy acquisition where he ran all customer-facing aspects of the business.

Burr Sutter (@burrsutter) is Red Hat’s Director of Developer Experience and a lifelong developer advocate, community organizer, and technology evangelist. He is a featured speaker at technology events around the globe—from Bangalore to Brussels and Berlin to Beijing (and most parts in between). A Java Champion since 2005 and former president of the Atlanta Java User Group, Burr founded the DevNexus conference—now the second largest Java event in the U.S.—with the aim of making access to the world’s leading developers affordable to the developer community. When not speaking abroad, Burr is also the passionate creator and orchestrator of highly interactive live demo keynotes at Red Hat Summit, the company’s premier annual event.

Claus Ibsen (@davsclaus) is a senior principal software engineer at Red Hat, working primarily as project lead on Apache Camel. Claus has been a full-time developer on Apache Camel for the past 9 years and is the author of the first and second editions of Camel in Action.

Claus is very active in the open source communities, where he helps others, blogs, records videos, writes, and tweets as well.

Steven Pousty is a dad, son, partner, and Director of Developer Advocacy for Red Hat Middleware. He goes around talking about cool technology that sometimes involves Red Hat technology. He can teach you about Java, Python, PostgreSQL, MongoDB, some JavaScript, Docker, and Kubernetes. He has deep subject-area expertise in GIS/spatial, statistics, and ecology. He has spoken at over 75 conferences and run over 50 workshops, including Monktoberfest, MongoNY, JavaOne, FOSS4G, CiscoLive, Fluent, DevNation, Where2.0, and OSCON. Before OpenShift, Steve was a developer evangelist for LinkedIn, deCarta, and ESRI. Steve has a Ph.D. in Ecology. He likes building interesting applications and helping developers create great solutions. He can be bribed with offers of bird watching or fly fishing trips!

Keynote speaker and doer of many things, Jen Krieger (@mrry550) is Chief Agile Architect at Red Hat. Most of her 20+ year career has been in software development, holding many roles throughout the waterfall and agile lifecycles. At Red Hat, she leads a department-wide adoption of DevOps methodologies focusing on CI/CD best practices. Most recently, she worked with the Project Atomic and OpenShift teams—two of the company’s leading projects—to help establish strong working relationships while the organization scaled rapidly. Now, Jen is guiding teams across the entire company toward agility in a way that respects and supports Red Hat’s commitment to open source.

John Osborne is the Lead OpenShift Architect for Red Hat Federal. He has been at Red Hat for 4 years with a strong focus on Kubernetes and DevOps. Before his arrival at Red Hat, he worked at a start-up and then spent 7 years with the U.S. Navy, developing high-performance technologies using JBoss Middleware and deploying them to several mission-critical areas across the globe. He has a strong background in all phases of the software development lifecycle. He holds a B.S. in Computer Science, an M.B.A., and an M.S. in Software Engineering.

Clement Escoffier (@clementplop) is a principal software engineer at Red Hat. He has had several professional lives, from academic positions to management. Currently, he is working as a Vert.x core developer. He has been involved in projects and products touching many domains and technologies such as OSGi, mobile app development, continuous delivery, and DevOps. Clement is an active contributor to many open source projects such as Apache Felix, iPOJO, Wisdom Framework, and Eclipse Vert.x.

James Falkner (@schtool) is a senior technical product manager with Red Hat Middleware, dedicated to open source and Red Hat’s open computing philosophy. His career spans 20 years in the software industry, taking on roles up and down the software stack—from firmware and operating systems to cloud infrastructure—and most recently helping customers, partners, and the open source community with application development, focusing on Linux containers and modern app architectures.

Stian Thorgersen is a principal software engineer at Red Hat, an engineering lead on Red Hat Single Sign-On, and the community project lead on Keycloak. Prior to joining Red Hat, Stian was the lead developer at Arjuna Technologies working on Agility, a cloud federation platform.

Red Hat Summit 2018: Getting Started with Modern Application Development
https://developers.redhat.com/blog/2018/04/11/red-hat-summit-2018-getting-started-with-modern-application-development/
Wed, 11 Apr 2018

Are you interested in writing cloud-native applications? Want to learn about building reactive microservices? Would you like to find out how to quickly get started with Vert.x, WildFly Swarm, or Node.js in the cloud with Red Hat OpenShift Application Runtimes? Are you an Enterprise Java developer looking to try new programming paradigms?

To learn about modern application development, join us at Red Hat Summit 2018 for sessions such as:

Session Highlights

Developers are being asked to learn a lot in a short period of time. They are moving from monolithic architectures to microservices, from application servers to container platforms, from one application runtime to another, and from an agile methodology to DevOps. This can introduce a lot of complexity.

Red Hat OpenShift Application Runtimes combines WildFly Swarm, Spring Boot, Eclipse Vert.x, and Node.js into a single product that makes developing with these runtimes a natural experience on OpenShift.

In this session, we’ll show you how developers can become rapidly productive by following a prescriptive path provided by Red Hat OpenShift Application Runtimes.

This hands-on lab on cloud-native apps will introduce the key concepts of modern application development using microservices runtimes and frameworks. In this lab, you’ll learn how to use the microservices runtimes included in Red Hat OpenShift Application Runtimes—such as Spring Boot, WildFly Swarm, and Vert.x—to build a cloud-native application. We’ll also share how to automate build, configuration management, and deployment of your cloud-native apps using the application life-cycle management capabilities of Red Hat OpenShift.

JavaScript has always played an important role in the browser, and now its use in enterprise server-side development has exploded with Node.js. Its reactive architecture and lightweight design make it an ideal technology for the containerized microservices architectures you’ve been hearing so much about.

What does this mean for your enterprise? Where does it fit, and how can Red Hat OpenShift Application Runtimes help you benefit from this technology while still using a Platform-as-a-Service model?

We’ll answer these questions and more as we demonstrate how quickly you can set up a non-trivial, enterprise-grade Node.js application on Red Hat OpenShift. We’ll explore how to integrate with other open source technologies, such as Istio, and discuss strategies for your Node.js development and deployment pipeline, including canary and blue/green deployment strategies.

This session presents how to develop reactive microservices on Red Hat OpenShift. The reactive movement proposes a way to build distributed systems, infusing asynchrony at the heart of the application. Reactive microservices are more responsive, robust, and interactive. They efficiently use the CPU and memory, making them perfectly suited for the cloud and containers.

However, becoming reactive is challenging. How do you exchange messages, handle concurrent requests asynchronously, process streams, and develop asynchronous code?

The reactive facet of Red Hat OpenShift Application Runtimes offers everything you need to build such a system. Based on Eclipse Vert.x—a toolkit for building reactive distributed systems—it enables the development of reactive microservices on top of OpenShift. Vert.x combines an asynchronous execution model, reactive eXtensions, and a thrilling ecosystem. It’s also incredibly flexible—whether it’s an API gateway, a sophisticated web application, or high-volume event processing, Vert.x is a great fit.

Modern applications are data intensive and deal with large volumes of data from a variety of heterogeneous sources. Use cases are becoming more complex as well—for example, combining IoT, analytics, and traditional enterprise applications in a unified data pipeline. In this scenario, data is flowing continuously. How do you handle this data? How do you deal with a large number of concurrent clients sending data continuously to your application? How can you manage heterogeneous, ever-changing data?

In this session, we’ll share how reactive data pipelines provide a resilient and elastic backbone to face the data flow and get the job done. We’ll present how applying reactive principles to data pipelines provides a flexible, responsive way to integrate data ingestion and processing scenarios in a microservices-based architecture. This solution integrates Red Hat OpenShift Application Runtimes—specifically its reactive facet, Vert.x—Red Hat AMQ, and Apache Kafka.

What if there was a way you could take advantage of the latest microservices architectures by using many of the developers and skills you already have? In this session, we’ll show you how with Eclipse MicroProfile and Red Hat’s implementation, WildFly Swarm. We will discuss all the cool features it allows you to easily use, such as fault tolerance and metrics, and we will explain current roadmap plans.

We will also include a demo that showcases what’s possible with Eclipse MicroProfile, utilizing the existing specifications and built with WildFly Swarm as the implementation. We will develop a simple microservice that integrates metrics, health checks, configuration, fault tolerance, open API, tracing, and type-safe REST clients. By the end of the session, attendees will have a better understanding of Eclipse MicroProfile and how to develop to it with WildFly Swarm.

Don’t miss Red Hat Summit 2018

Red Hat Summit 2018 is May 8th – 10th in San Francisco, CA at the Moscone Center. Register early to save on a full conference pass.

18 Recorded Sessions on Cloud Native Development – from Red Hat Summit
https://developers.redhat.com/blog/2017/05/31/18-session-recordings-cloud-native-development-from-red-hat-summit/
Wed, 31 May 2017

As I mentioned prior to Red Hat Summit, there was a whole lot of activity around the complementary aspects of microservices, containers, open source, and cloud, so I’ve assembled this recorded set of sessions on the topic of Cloud Native Development. Enjoy!