Cloud Native CI/CD with Jenkins X and Knative Pipelines

Summary

Christie Wilson and James Rawlings explain the CI/CD challenges in a cloud native landscape, and show how Jenkins X rises to them by leveraging open source cloud native technologies like Knative Pipelines. They demo a GitOps based Jenkins X workflow, showing how simple Jenkins X makes it for developers to stage and deploy changes on demand.

Bio

Christie Wilson is a software engineer at Google, currently leading the knative build-pipeline project. Over the past ten years she has worked in the mobile, financial and video game industries.
James Rawlings is a co-creator of the open source project Jenkins X and works for CloudBees, where he aims to help developers and teams move to the cloud.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Rawlings: Hello.

Wilson: Hello. Hey, James, I'm really sorry to interrupt, but I'm actually not going to be able to give this talk with you today.

Rawlings: Wait. Why not? What's wrong?

Wilson: Well, I've been working on this pull request and I'm having some trouble and I think I need to skip the talk and work on it.

Rawlings: Okay, what's the problem with this?

Wilson: The integration tests are failing. And I thought everything was fine, but the CI isn't working, and I don't know what's going on.

Rawlings: Can you run it locally?

Wilson: That's what I thought, so I looked into it. I figured out the end-to-end tests are where the problem is. I found the script that runs the end-to-end tests and then I ran it, but this started happening.

Wilson: I installed container diff, I found some more dependencies I was missing, then I ran the script again, and this started happening.

Rawlings: Catfactory-production. That kind of looks like you're trying to push an image to a production Docker registry.

Wilson: Yes, I probably shouldn't be pushing to the production registry, but it's hard-coded in the script. So I manually changed the script myself, and I'm going to have to remember to clean that up. By the way, I should probably remember to uninstall all those dependencies before I'm done. But anyway, then I ran into this.

Rawlings: That script, it looks like it was trying to connect to a cluster there.

Wilson: So the end-to-end tests need to deploy to a Kubernetes cluster and then they run tests against the cluster. But we hard-coded the staging cluster in here, so now I'm going to have to figure out how to make my own Kubernetes cluster, and I'm going to have to make the scripts use that. That's all going to take me a really long time. I think you have to go ahead and present without me.

Rawlings: Ouch. We can do better than this, right? It's 2019, we're all cloud native. Christie, have you looked at Jenkins X, or the project that you're working on, Tekton Pipelines?

Wilson: Yes, that's a really good point. I think both of our projects could probably help us out here. Maybe we should introduce ourselves before we keep going.

Rawlings: My name is James Rawlings. I work on an open source project called Jenkins X. That's me on the left, for people at the back there. I've been working the last few years on creating software to help developers become more productive.

Wilson: I'm Christie Wilson. I'm a software engineer at Google. I'm also the one on the left there. Over the years, I've worked in a lot of different industries. I've worked in mobile, I've worked in finance, I've worked in AAA games, but I always end up gravitating towards work that has to do with tests and CI/CD. And Google has been no exception. So right now I'm really excited to be leading the Tekton Pipelines project, which was formerly called Knative Pipelines.

Rawlings: We're here today, and we're very excited to be here talking to you about Jenkins X and Tekton Pipelines, and how we've been putting these two projects together.

Wilson: I think that this could probably help me out with the CI problems that I was having at the beginning. I really wasn't looking forward to trying to reverse-engineer all those bash scripts. And I think the types of problems that I was running into are the same types of problems that lots of engineers and companies are running into. Maybe we can show how we could help with that.

Rawlings: Cloud native, that's interesting, but what is cloud native?

Wilson: That's a good point. Cloud native is one of these terms that we kind of throw around a lot, but we don't really talk about what it means very often. So maybe we can turn to the CNCF, or the Cloud Native Computing Foundation, and see what they say it is. So their definition is, "Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container and dynamically orchestrating those containers to optimize resource utilization." That's kind of a mouthful, but I think we can break it down. Applications and infrastructure that are cloud native are open source, the architecture is microservices that are in containers, those containers are dynamically orchestrated, and resource utilization is optimized.

Containers

Rawlings: Containers are the building blocks. Can you tell us a bit more about what containers are?

Wilson: Yes. At a high level, a container is a unit of software that contains an application's binary and then all of its dependencies and configuration that it needs to run. Then when the container actually executes, it shares an operating system with other containers, but the containers themselves run in their own isolated processes.
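To make that concrete, a container image is typically described by a Dockerfile. This hypothetical example packages a small Node.js application together with all of its dependencies (the base image and file names are illustrative, not from the talk):

```dockerfile
# Start from a base image that already contains the Node.js runtime
FROM node:10-alpine
WORKDIR /app
# Install dependencies into the image so consumers don't have to
COPY package.json .
RUN npm install
# Copy the application code itself
COPY . .
# The command the container runs when it starts
CMD ["node", "index.js"]
```

Anyone who runs the resulting image gets the runtime, the dependencies, and the code in one unit, without installing anything on the host.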

Rawlings: I can see there are going to be some benefits here. Using containers makes it super easy for developers to package, deploy, and deliver their software. Another benefit, say compared to virtual machines, is the startup time of a container. There's a significant improvement in those start times, which is all about faster feedback for developers. Also, compared with virtual machines, because you're not actually including an OS in all of this, it's using fewer resources, which is then friendlier for your monthly cloud bills as well.

Wilson: There are some applications for CI/CD here. You might remember the very first problem that I was running into is I couldn't run the bash script because there were dependencies I was missing. But if instead of being a bash script it had been a container, then I could have just run the container and it would have had everything I needed right away.

If you need to put a lot of containers together, this is where Kubernetes comes in, and this is what cloud native often ends up meaning for most of us. We use containers as our building blocks, and then we take care of the dynamic orchestration and the optimized resource utilization using Kubernetes.

Kubernetes

Kubernetes lets us orchestrate many containers, but then beyond that, it also abstracts managing the underlying hardware from managing the containers themselves. So this gives us benefits, like if the machine that's running your container goes down, it can just automatically be rescheduled to another machine.
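As a sketch of what that orchestration looks like in practice, a minimal Kubernetes Deployment declares how many copies of a container should run, and Kubernetes keeps that many running, rescheduling them onto healthy machines if a node goes down (the names and image below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catfactory
spec:
  replicas: 3            # Kubernetes keeps 3 copies running, rescheduling as needed
  selector:
    matchLabels:
      app: catfactory
  template:
    metadata:
      labels:
        app: catfactory
    spec:
      containers:
      - name: catfactory
        image: example.com/catfactory:0.0.1   # hypothetical image
        resources:
          requests:      # lets the scheduler optimize resource utilization
            cpu: 100m
            memory: 128Mi
```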

Rawlings: Again, you can start to see some benefits here: there's some standardization emerging around a cloud implementation. Every major cloud provider has a solution around Kubernetes, and a lot of them actually have managed offerings as well, so you don't have to maintain this and manage upgrades of it yourself. That really starts to be appealing, especially when you think about application portability. You're reducing the risk of vendor lock-in, which obviously has benefits.

Some of the other benefits we're seeing as well: because of that standardization, the level of innovation around open source projects on a stable platform is pretty phenomenal. It's a very exciting time to be in that community, which we can all be a part of. Also, when we talk about, say, serverless models, we can start to benefit on the resource side as well. In the CI/CD space, we only want to be running builds for the time we actually need them, so we spin up builds and then throw them away afterwards, and we're not incurring any extra costs. Then there are also approaches for extending Kubernetes, using microservice controllers, individual applications running on their own release cycles, which is a nice way to extend Kubernetes, too.

Wilson: But for all the benefits that Kubernetes gives us, there are some other challenges as well. And some of these are the same challenges that we already had. So even though containers make my life easier because I can run them easily myself, I still have to build and distribute them somehow. Some things are just harder. If we take all of our monolithic applications and break them down into microservices, then suddenly we're dealing with a lot of tiny pieces, and we have to manage the dependencies between all of them. This is the landscape that we're dealing with, and this is what cloud native has meant for us. James, maybe you can talk a little bit about what cloud native has meant for Jenkins and how that led to Jenkins X.

Jenkins

Rawlings: Perfect. On Jenkins itself, just a brief slide about some of the history. Jenkins has had tremendous success as an open source project. It's quite phenomenal actually. It was created in 2004 by Kohsuke in the form of Hudson, with an open source community that formed around it and helped to improve it. It's been quite amazing. It's estimated that there are over 200,000 Jenkins installations running, with around 15 million Jenkins users. That is astonishing.

But there are some challenges that we recognize. A lot of technology changes and advancements have happened since then, and there are some challenges we want to recognize, too. Right now, Jenkins has a single point of failure. If you want to perform some maintenance or you're installing a new plugin, then you have to reboot the JVM, or it might not restart, for example, and then you're going to miss any webhook events for git triggers, and you actually miss builds. So, kind of not great.

Some of the success of Jenkins also comes down to the plugin model, which is brilliant but can be overwhelming. There are thousands of plugins that you can install, but that also means there are thousands of plugins that you shouldn't install, because they all get installed into the Jenkins master, requiring more memory and potentially causing more conflicts as well. Again, on the memory side, we're incurring extra costs on our cloud bills. Running a JVM in the cloud is going to cost you a bit of money.

Another challenge we've seen, because of the success, is scaling the number of jobs. When a pipeline runs in Jenkins, it's actually executed on the Jenkins server, on the Jenkins master, which also means that your teams can create a job, maybe a bad job, and that can have a negative impact on the other teams in your organization if they're sharing that Jenkins master. But that's okay. We recognize these problems, and it's not all doom and gloom, because that also leads us into Jenkins X, where we can look at some of those advancements in technology and the architectural approaches the industry is learning from.

Jenkins X

Jenkins X is building on those successes of Jenkins. It's aiming to create a developer experience for Kubernetes. Now, we recognize Kubernetes is not the easiest thing to get started with, but we want to help developers get onto Kubernetes and start iterating and getting feedback as fast as possible. So Jenkins X is looking to create this developer experience, but it's not just for modern cloud native workloads. One of the misconceptions, I think, when we talk to people is that it's just for deploying to Kubernetes. You can actually run your traditional workloads on there as well, and get some of the other benefits that we're adding in.

With Jenkins X, you can create or import existing applications onto Kubernetes using CI/CD. It's actually going to automatically set all of this up for you. We're hopefully going to see a live demo shortly demonstrating this. It's got the notion of environments. And that's really key because with our pipelines that we're actually developing, we are deploying out to different services or different cloud providers, but within Kubernetes, we can actually have these environments.

Also, preview environments. Now, this kind of blew my mind when we added it in. Very exciting. We actually use this ourselves. It's the idea of temporarily spinning up an environment, deploying a proposed change from a pull request into it, and then actually having that running, so you can start collaborating on it, right? Where it gets interesting (James was talking in an earlier session about this) is maybe [inaudible 00:12:11] even some routing, some kind of experimental traffic. But the idea is being able to experiment and get rapid feedback on pull requests. Also, just to note, we automatically set up git repositories for deploying to those environments, so that all your changes are traceable, and no change can go into an environment unless it's gone through a git repository approval.

There's a new extensibility model, which we're very excited about. This allows people to add and extend in a plugin-like way. If you have commercial or open source apps, you can add them in and extend the platform as well, using recommended practices around microservices, event-driven architectures, and extending Kubernetes using controllers and operators. Then, of course, this last bullet point leads us on very nicely. We're hugely excited because we can support pluggable pipeline engines: static Jenkins servers, maybe one-shot Jenkins, and now, very nicely, Tekton Pipelines, which we're tremendously excited about.

Tekton Pipelines

Wilson: Tekton Pipelines is all about taking the brains of CI/CD, putting it onto Kubernetes, and taking advantage of everything that Kubernetes has to offer. To give you a little bit of history, in early 2018, the Knative Build and Knative Serving projects were created. Knative Serving was all about an open source Kubernetes-based serverless solution. In order to take care of the source-to-deployment piece of that, Knative Build was created. People got really excited about being able to build images on Kubernetes, but they quickly found that they wanted more. They wanted to do things like run tests before they built the images, and they wanted to plug these things together in more complicated pipelines. So that's where Tekton Pipelines came from, which was previously called Knative Pipelines.

One of the big features of Tekton Pipelines is portability. The goal is that we have created an API spec that any CI/CD vendor could comply with. So this means that I can write pipelines for my project, and then I can use them with a variety of vendors or use them with multiple tools. We also want to add types into CI/CD systems, and we want our pipelines to be decoupled. I can take a pipeline and I can run it against my own infrastructure, or I can take pieces of that pipeline and I can run them on their own.

We want to support deploying to Kubernetes very well, but we also want to support other, completely different deployment targets as well, like maybe mobile. A lot of people are pretty excited about this. At the moment, we have regular contributions from developers from Google, CloudBees, Pivotal, IBM, Red Hat, and more. If any of you are interested in contributing, we've put a lot of work into making it really easy to ramp up as a contributor. So please, join us.

Custom Resource Definitions

Now, particularly if you're interested in contributing, but also if you just want to use this, you might be wondering, how does this all work? So the key to how Tekton Pipelines works, is it's implemented as Kubernetes CRDs. So CRD stands for custom resource definition. Kubernetes out-of-the-box comes with a number of types, like deployments, pods, and services, but it also has a model that lets you add your own types and extend Kubernetes itself. To do this, you create your own resource types, and then you create processes called controllers or operators which operate on those resources.
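Registering a new type is itself done with a Kubernetes manifest. Roughly, it looks like the sketch below; the group and version shown are what Tekton used around the time of this talk, but treat the details as illustrative rather than a definitive reference:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # the name must be <plural>.<group>
  name: tasks.tekton.dev
spec:
  group: tekton.dev
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Task
    plural: tasks
```

Once this is applied, `kubectl get tasks` works just like it does for built-in types, and the project's controller watches for these objects and acts on them.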

What CRDs did we add for Tekton Pipelines? I'll start with the most basic building block. So the most basic building block is something we call a Step. And what this is it's actually a container spec, which is an existing Kubernetes type. This is how you say what image you want to run and how to run it. What arguments to use, environment variables, all of that. The first new type we added is called a Task. So a Task lets you put multiple steps together, those steps or containers will run in sequence on the same Kubernetes node.
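A Task manifest might look something like this. It's a hedged sketch against the early v1alpha1 API; the task name, images, and commands are illustrative:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: test-and-build
spec:
  steps:
  # each step is a standard Kubernetes container spec
  - name: run-tests
    image: node:10            # the image carries all the dependencies
    command: ["npm"]
    args: ["test"]
  - name: build-image
    image: gcr.io/kaniko-project/executor
    args: ["--dockerfile=Dockerfile"]
```

The steps run in order on the same node, so a later step can use files produced by an earlier one.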

The next new type is a Pipeline. A Pipeline puts tasks together. You can run them in any order you want. You can run them in parallel, you can run them sequentially. The Pipeline will also let you do more complicated things, like you can take the output of one task and provide it as an input to another task, even though they run on different nodes.
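A Pipeline then wires Tasks together. The ordering mechanism and exact field names here are a sketch of the v1alpha1 API, not a definitive reference, and the deploy Task is hypothetical:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-test-deploy
spec:
  tasks:
  - name: test-and-build
    taskRef:
      name: test-and-build     # references a Task defined separately
  - name: deploy
    taskRef:
      name: deploy-to-cluster  # a hypothetical deploy Task
    runAfter:
    - test-and-build           # enforce ordering between tasks
```

Tasks without ordering constraints between them are free to run in parallel.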

Both of those, pipelines and tasks, are things that you define once but you use again and again. So to actually invoke these, there are two new types we added, called PipelineRun and TaskRun, which will actually invoke the Pipeline or the Task. Since both of these need runtime information about what infrastructure to use, we added another type called the PipelineResource. So this is all the runtime information that you need to actually invoke your pipelines and tasks. Altogether there are five new types of CRDs. We have Task and Pipeline which you define once and use again and again, and then you invoke those using TaskRuns, PipelineRuns, and PipelineResources.
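Invoking the pipeline then looks roughly like this: a PipelineResource supplies the runtime details (here, my own fork rather than production infrastructure), and a PipelineRun binds it to the Pipeline. Again, a hedged sketch of the v1alpha1 types with illustrative names:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-source
spec:
  type: git
  params:
  - name: url
    value: https://github.com/example-org/qcon1  # illustrative repo, not production
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: build-test-deploy-run-1
spec:
  pipelineRef:
    name: build-test-deploy
  resources:
  - name: source
    resourceRef:
      name: my-source
```

Because the resources are bound at run time, the same Pipeline definition can run against staging, production, or a developer's own cluster.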

If we go back to the problem that I was having at the beginning of this talk, there were a few different things that were going wrong. So the first thing is that I was missing dependencies that I needed. Next, the CI that I was using was relying on production infrastructure that I couldn't and probably shouldn't access. Then there was kind of a meta-problem, which is that when I went into this and tried to run it, I didn't know that I was going to run into any of these problems up front.

We can take a look at what this could look like if we are using Tekton Pipelines. First of all, as soon as I went to the repo that I was working with, I should be able to find a YAML file that defines the pipeline that the repo uses. From there I can see exactly which tasks it's using and what the steps are in those tasks. The very first problem I had about dependencies, I wouldn't have that problem because we would be using images to execute the actual logic, and they come with all of their dependencies.

The next problem I was having about the production infrastructure, that is runtime information, so I would be able to provide that with my own PipelineResources. And then the meta-problem of not knowing any of this up front- to give you a glimpse of what an actual pipeline definition looks like, the very first thing that the pipeline has is a declaration of all the resources that it needs, the PipelineResources. So I would know right away from looking at this that I'm going to be interacting with a git repo, that there's going to be an image involved that I'm probably going to be building and pushing, and that there's going to be a Kubernetes cluster that I'm probably going to need to be able to deploy to. So I can get all that ready and set it up before I even try to run the pipeline.

If you're interested in more examples, we have an examples directory in our repository that has examples of pipelines and tasks. So, James, you've been adding support for Tekton Pipelines to Jenkins X. How has that been going?

Jenkins X + Pipelines

Rawlings: It's going very well. I just want to say that this is our own cloud native journey. It's an ever-evolving journey that we're taking part in. We're very open and we want to tell people how we are doing this, ensuring that we are relevant in the cloud. And not just relevant, but also efficient, using the cloud well.

This is an evolution of CI/CD, leveraging those cloud capabilities. One of the other projects worth mentioning is a project called Prow. Prow is many things. To sum it up very briefly, it's a highly scalable webhook event handler with which you can trigger many other things, which fits nicely with Tekton Pipelines, being able to trigger builds. It comes from the Kubernetes ecosystem; it's used on every single Kubernetes repository. We'll see in the demo some of the developer experience, which becomes very consistent across your repos. We've added that in as well.
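To give a flavor of how Prow triggers builds, a presubmit job (one that runs on every pull request) is declared in Prow's configuration. This is a rough, hypothetical sketch; the repository, job name, and image are made up:

```yaml
presubmits:
  example-org/qcon1:
  - name: unit-tests
    always_run: true       # trigger on every pull request webhook
    spec:                  # an ordinary Kubernetes pod spec
      containers:
      - image: node:10
        command: ["npm", "test"]
```

Prow reports the result back to the pull request as a status check, which is the experience you see on the Kubernetes repositories themselves.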

One of the other initiatives we're working on with the Jenkins X project is the Next Gen Pipeline initiative. This is something that we want to leverage. With all those 15 million Jenkins users, we're very aware that we need to make sure they're familiar with what we're doing here. For example, we are creating a syntax at the moment that matches how people may be using pipeline Jenkinsfiles now: a YAML-based Jenkinsfile. There are going to be lots and lots of blogs and demos coming out over the coming weeks and months, but this is something we're very excited to be talking more about.

Again, dogfooding: Jenkins X is actually built using Jenkins X. Just last week we started moving our pipelines over to be Tekton-based, using the Next Gen Pipeline. We're already seeing huge benefits. The speed, reliability, and cost savings involved are actually dramatic. We'll have some actual metrics on this very soon and be open with our journey.

Demo

And with that, I think we're going to try our hand at a live demo. So everyone cross your fingers. Hopefully, the conference Wi-Fi holds out. Let's switch gears. We have a Jenkins X installation already running. To interact with Jenkins X, we have a client binary called jx. This is our main interaction with a Jenkins X installation. So I've created a cluster (it's going to be a little bit hard to see from the back, but at the bottom there, there's "tekton"). That's just because it's experimental at the moment; it will soon become the main default. We've already created a cluster on GKE.

One of the things we can see is the long list of commands jx allows us: creating clusters, getting build logs, creating quickstarts, importing applications. There's a huge list, and we have extensive docs on it. I'm just going to use another client very quickly to show you. I'm trying to remember how to type with everyone watching me: "kubectl get nodes." If you can see that there, we've got a Kubernetes installation using GKE. I've tried managing Kubernetes clusters myself; using a managed offering is much more preferable for me. So we've created a three-node cluster on GKE. What we're going to do is switch gears and create a new quickstart using Jenkins X, have that deployed, and see some of these builds actually running, fingers crossed at least. Let's do "jx create quickstart."

As we create our quickstart, we're actually going to create a new git repository, and we're going to automate all of the CI/CD setup for it. I'm going to create it using my GitHub details. Which organization am I going to create this application, or this git repository, in? This is a CI/CD world for teams; if you are creating a repo under your own user, it kind of makes it a little bit harder to manage. Using GitHub organizations is much easier. I'm going to create this in this other GitHub organization. Enter new repository name. Let's go for qcon1.

Now we're being asked which quickstart we want to add in. Jenkins X comes with some defaults. They're really kind of basic, hands held high, but we're looking for more. Let's see. Yes, I still do have a browser. That's brilliant. You can customize this for yourself, for your own quickstarts, for your own teams and organizations. We've just got a bunch of different languages. Today we're going to show you the node.js one. Let's have a little look at this. You can see it's just a real basic [inaudible 00:24:05] index.html and a JavaScript file here. But notice there's no Dockerfile, there's no pipeline, and there's no packaging in here that would let us deploy to Kubernetes.

Let's switch back to our wizard. Let's look for our node.js. Okay, there it is there. Now, do we want to initialize git? Yes, we do. This will do the initial commit. Let's do this. So we just detected that it was node.js, it was JavaScript. Phew, that worked. We've now created a quickstart locally, we've created a repository in my shared GitHub organization, and we've actually done language detection and recognized this was JavaScript. If we go and have a look at this: cd qcon... which was it, qcon1 or qcon? Qcon1, phew. There we go. Let's have a look at what that repo now looks like.

Remembering from our GitHub organization, you can see we've actually got some extra files in here now. We've got a jenkins-x.yml, which right now is just referring to a buildPack. We have the notion of buildPacks. I don't want to go too much into it because we're going to do lots and lots of blogging around this, and it's still very much a work in progress. But the idea is, again, a similar syntax of stages, the syntax that people know and love.
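As a rough sketch (the syntax was still work in progress at the time of this talk, so treat the commented extension as illustrative), the generated jenkins-x.yml can start out as little more than a buildPack reference, which pulls in a shared pipeline definition that you can later extend:

```yaml
# jenkins-x.yml (illustrative; the schema was still evolving)
buildPack: javascript
# later, you might override or extend the inherited pipeline, e.g.:
# pipelineConfig:
#   pipelines:
#     pullRequest:
#       build:
#         steps:
#         - sh: npm test
```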

We've also added a Dockerfile in here as well, because we detected it was a node.js application. We've also added some charts. These are Helm charts, which are a way of packaging your applications and deploying them onto Kubernetes. We've automated all of this and committed it into the git repository. We want to set everything up for you; we don't want to hide anything. When you start getting more familiar, you can start tinkering and playing, and at least you've got CI/CD set up so that you know when you break something.
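The generated charts directory follows the standard Helm layout. The chart metadata file looks roughly like this (the values shown are illustrative for the qcon1 quickstart, not copied from the demo):

```yaml
# charts/qcon1/Chart.yaml - chart metadata generated by Jenkins X
apiVersion: v1
name: qcon1
version: 0.0.1
description: A Helm chart for the qcon1 quickstart
```

Jenkins X bumps the chart version on each release, which is why we'll see 0.0.1 and then 0.0.2 appear in the staging environment later in the demo.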

Let's switch gears a second. Now, it's probably going to be a little bit hard to see, but you don't really need to read the characters on here. What I wanted to show you is that we're just watching, using kubectl, these CRDs that Christie was talking about. At the top here, these are PipelineRuns; that's the CRD for actually running the pipeline. Next, we've got TaskRuns. At the bottom, we've got Tasks, which we can go and have a little look at. And in here, that's actually our build pod. A pod is the unit that is actually running our build. That's a temporary pod that's spun up just for the purpose of this build. You can see that it's actually still running.

Let's go and have a look at one of those steps. It's never great showing lots of YAML, so hopefully this will be the last time we do it. But let's just go and have a look: "kubectl get task -oyaml", so we can actually see it. An admission: at the moment, we're doing a big bad thing, a mount for the Docker socket, but with Tekton Pipelines we're now quickly moving away from that and using Kaniko, which is another beautiful open source project. That's actually happening as we speak.

But you can start to see some of the steps that have been generated, that we're now using, and all using different images. Let me just see. Using an image. We've actually got node.js, we can actually build the image. These are just our individual steps, and we can start to actually have advanced pipelines of parallel steps and stages as well, using the cloud native approaches and design from the Tekton Pipelines project.

So that's automatically triggered. What we should be able to do is, let's go: "jx get applications." Here we can see (alongside the one I tried earlier) that we've got an application at version 0.0.1 in our staging environment. That's our very basic node.js quickstart.

Let's switch gears because we want to demonstrate some of the CI side of this as well. In the spirit of shifting left, we want to be pushing as much as we possibly can before we merge into master. Any reason why a change shouldn't be deployed to a production environment, you really want to find out as soon as possible, including security checks as well. Let's create a branch, "git checkout -b" branch. Next, we're going to update that index.html. Let's make a little change: Hello LONDON. I'm going to save that and commit it: "Update the welcome message." We're going to push that to our work-in-progress branch. It's a big screen and a tiny little terminal down in the corner. Sorry about that.

Let's go and have a look at that on GitHub. We've got a work-in-progress branch here. Let's review and create a pull request. You can see that's our change: Hello LONDON. Yes, we're happy with that. Let's create a pull request. Now, what we're actually going to see is that hook, which is one of Prow's microservices, a highly available webhook event handler, is receiving a git webhook event because the pull request has opened, and that has automatically triggered our CI. Again, this is familiar from all of the Kubernetes repositories.
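For reference, the GitHub webhook that hook receives when a pull request is opened is a JSON payload along these lines (heavily abbreviated, with illustrative field values):

```json
{
  "action": "opened",
  "number": 1,
  "pull_request": {
    "title": "Update the welcome message",
    "head": { "ref": "wip", "sha": "abc1234" },
    "base": { "ref": "master" }
  },
  "repository": { "full_name": "example-org/qcon1" }
}
```

Prow matches the repository and action against its configured jobs and creates the corresponding builds, which is why opening the pull request kicks off CI with no manual step.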

It also gives us a lot more control over our CI pipelines as well. You can actually have conversations, almost like ChatOps, on your pull requests, triggering different behavior. For example, Christie, this one might be a nice one for you: we can actually talk using comments on pull requests and trigger these different behaviors. We have a nice little picture of a cat. That's appropriate, right? Okay, phew. That's just demonstrating that we can actually start talking on our pull request using git events. You probably want to do something a little bit more serious, around maybe reviewing, triggering different sets of tests, or approvals as well.

But what you can see is (I'm not sure why my picture has popped up there) that our CI has run these tests. It's also created a temporary preview environment, built a version of that change, deployed it into the preview environment, and then come back and commented on the pull request. So we can see our proposed change deployed and running in a temporary environment. You can start collaborating together, whether that's test teams, product teams, marketing teams, however your organization is set up. Anybody that has a say in actually merging that into milestones, or whether it could go to production, can start collaborating here much earlier in the process, which really helps continuous delivery [inaudible 00:31:10] commit to master.

Just to finish off, we've got serverless Jenkins, maybe we'll rename that, but we've got a status of okay there. Let's go and approve this. Approve. This is, again, a familiar workflow for a lot of people now. We can go back to jx: "jx get build logs." One of the other things I wanted to highlight is that this is the same experience when using jx, whether you're using static Jenkins masters or these serverless, one-shot Jenkins servers. We should be able to get some pods. Let's have a look if that's merged. We should see. Here we go. We've got another PipelineRun. Interestingly though, we've still got the single Task as well. So we've created a new PipelineRun, but we're reusing the same Task, as Christie was mentioning before.
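The point about reuse can be sketched in Tekton terms: each build creates a fresh PipelineRun, but the referenced Pipeline and Task objects stay in the cluster unchanged. This is an illustrative fragment using the later `tekton.dev` API group (the demo predates the rename from Knative Pipelines), and all names are placeholders:

```yaml
# Hedged sketch: the second run of the same pipeline. Only this
# PipelineRun object is new; the Pipeline and its Tasks are reused.
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: demo-release-run-2    # placeholder name for build #2
spec:
  pipelineRef:
    name: demo-release        # existing Pipeline, which in turn
                              # references the existing Task(s)
```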

Let's go and have a look at the logs: "jx get build logs." We can select our second build. Here we go. We can actually see this pod, and we can see the logs. Just to hit the message home, there's no JVM involved in this. These pipeline steps are being executed in containers, as steps of a Tekton Pipeline. The gains in efficiency, speed, reliability, and resource usage are quite significant. We're very excited by this.
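The "no JVM" point follows from how a Tekton Task is defined: each step names its own container image, so every step brings exactly the runtime it needs and nothing more. The sketch below is illustrative; the task name, images, and registry path are placeholders:

```yaml
# Hedged sketch of a Tekton Task: each step runs in its own container,
# executed sequentially inside a single pod.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-and-push         # placeholder name
spec:
  steps:
    - name: run-tests
      image: node:10           # step 1: a Node.js container for tests
      command: ["npm", "test"]
    - name: build-image
      image: gcr.io/kaniko-project/executor  # step 2: build the image
      args: ["--destination=registry.example/demo:0.0.2"]
```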

We've built an image; these are all the different containers for the individual steps that are running. We can see the different containers here. That's actually created a pull request against our staging git repo, which has already been merged, because it's quite fast with Tekton Pipelines. There we go, version 0.0.2 is being deployed into our environment. Hopefully it should be there, or the deployment might still be happening. Applications. I might have to do that a couple more times whilst the deployment actually happens. It's still running there.
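In the GitOps model shown here, "promoting" version 0.0.2 is just a pull request that bumps a chart version in the staging environment's repository. The fragment below is a hypothetical sketch of what such an environment repo's Helm `requirements.yaml` might look like; the app name and chart repository URL are invented for illustration:

```yaml
# Hypothetical environment repo requirements.yaml: the promotion PR
# changes only the version line, and merging it triggers the deploy.
dependencies:
  - name: demo-app                              # placeholder app name
    version: 0.0.2                              # bumped from 0.0.1 by the PR
    repository: http://chartmuseum.example.com  # placeholder chart repo
```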

That's really what I wanted to show from the Jenkins X perspective: how we're embracing the advancements around containerization, and also the new technology and innovation that's happening around Kubernetes. I'll get the slides back up. That would work, by the way, I'm pretty convinced, honestly. Let's go back to "Present." Demo worked.

Try It Out

Wilson: Yes, that was pretty cool, James. I'm glad I stuck around for the talk instead of wrestling with my CI. I think we've seen a lot of things that hopefully people working with CI/CD in the future can keep in mind.

Rawlings: Yes, these are both open source projects which hugely welcome involvement, and any feedback as well. Jenkins X has quickstarts, including the Node.js quickstart we created with a single command. It is experimental. We're looking for as much involvement as possible to accelerate this, and there's a contribution guide as well.

Wilson: And for Tekton Pipelines, we have a quickstart tutorial if you'd like to try it out. Also if you're interested in contributing, please join us. You can take a look at the contributing guide. It'll help you ramp up on the project and also point you at issues that are good for starting out. It's worth saying that Tekton Pipelines is in really early days, and we're looking for a lot of feedback and use cases from people who are interested in using this, so we can add the right features that it needs going forward.

This is a really cool time for CI/CD. Cloud native technology has a lot to offer. We should be able, from this point on, to create really complex pipelines which are reproducible and also show us at a glance what they're up to. And that's it. Thank you very much.

Questions & Answers

Participant 1: One thing that was mentioned there was you get more visibility because you've got these resources, and they define exactly what tasks you run and all that. In a traditional Jenkins Pipeline, you could, for example, say, "Well, I've got maybe some shared code," or whatever else …

Rawlings: Sorry, what was that last bit?

Participant 1: In the traditional Jenkins Pipeline, you could see the entire Jenkins file and exactly what's being done in it. Or if you've got a shared pipeline, you can see all the different steps and the scripts involved in those. You've got complete visibility over absolutely everything. With tasks, you've now hidden the implementation behind that. So you can see what the pipeline does, but then you have to dig down another level to see what each of those tasks do, if one of them is doing something that you weren't expecting or it's misbehaving. Is there any way that you can get the visibility back from that kind of situation?

Rawlings: There are probably two sides to this. Certainly from a Jenkins point of view, we're putting a huge amount of effort into the Next Gen Pipeline initiative to ensure that you can still check in a Jenkins X YAML file that will be familiar to Jenkins Pipeline users. As it stands today, it's probably more the declarative side of things rather than shared pipelines, and that then translates into these resources. Jenkins X wants to support other approaches where people are more familiar, so if you have a task in a git repository, we should still be able to do that as well. I don't know if there'll be support for referencing a task in a git repository from Tekton?

Wilson: Well, one of the goals with Tekton Pipelines is that we want all of the code that's required for running your pipeline to be checked into your repo. You should be able to see the tasks that the pipeline is referencing inside of the repository. And in order to actually set the whole thing up and run it yourself, you need to be able to apply those tasks to your Kubernetes cluster. They have to be there. I think it's pretty likely that we'll end up with a library of reusable tasks, but I think the model is most likely going to be that you have to actually copy them into your repo before you can use them. So you should be able to see everything at a glance in one place.

Rawlings: One of the things with Jenkins X is that it was actually referencing just a build pack, because there is a maintenance overhead. What we found on a previous project I worked on, called fabric8, is that we had something like 200 or 300 repos in the GitHub organization, and we had to change all those Jenkins files. It was like, "No." So there's an element where the shared pipeline comes in, but then that's hard to manage, maintain, and test. The approach we're using in Jenkins X is build packs: you can still have, say, your organization's or your team's base build pack, but then override elements as well. That's maybe something the two projects can bounce ideas around on and see if we can iterate.
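The build pack idea described here can be sketched as a repo-local pipeline file that inherits shared defaults and overrides only what differs. This is a hedged illustration of the kind of Jenkins X YAML being discussed, not the project's exact syntax; the build pack name and step are placeholders:

```yaml
# Illustrative Jenkins X pipeline YAML sketch: inherit the team's shared
# build pack, then override one step for this repository only.
buildPack: javascript        # placeholder: shared base pipeline to inherit
pipelineConfig:
  pipelines:
    pullRequest:
      build:
        steps:
          - sh: npm test     # repo-specific override of the build step
```

The design trade-off Rawlings describes is exactly what this shape addresses: the shared logic lives in one maintained place instead of 200 to 300 copied Jenkinsfiles, while each repo keeps the ability to diverge where it must.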

Participant 2: I was going to say, is it bad that one of my takeaways is that you've made it much easier to get cat GIFs in GitHub?

Wilson: I think that that is the killer feature actually. And that cat GIF thing comes from Prow.