Use Case – Cloud-native container image build, test, deploy pipeline for Kubernetes with Artifactory

Abstract:

Ray Tsang and Kit Merker, Google. May 2016: Kubernetes is a powerful, open source container orchestration / cluster management tool that helps you manage and deploy your containerized applications and services on a cluster of machines. You can use Artifactory to store and manage all of your application's container images and deploy to Kubernetes.
Join this session to see how to set up a build, test, deploy pipeline using Jenkins and Artifactory. Once an image is ready to be rolled out, Artifactory can trigger a rolling-update deployment into a Kubernetes cluster without downtime – automatically!

Talk transcription:

[Kit and Ray — quickly switching] Well, you do it. And? Yeah.

[Kit] I have the power. There you go.

[Ray] All right. Well, morning everyone. Thanks for coming to this session. My name is Ray, I'm a developer advocate for the Google Cloud Platform. And I do two things. One is to bring some of the latest and greatest technology that Google has to offer to developers, and the other is that I love to hear your feedback and how you're using the technology today. So the best way to contact me is my Twitter handle, @saturnism.

[Kit] And I'm Kit. I'm a product manager on Google Cloud. I worked on the Kubernetes project in the early days and I've also worked on […] — on Container Engine, Container Registry; I'm kind of active in the container ecosystem. I'm working on Cloud Launcher now, which is our application marketplace, and building partnerships with companies like JFrog. So always — always trying to work the ecosystem.

[Kit] So what we're going to cover: first of all, we did a little poll. Not that many people are using Kubernetes or have heard of it, so we're going to give you a little taste of Kubernetes. We're going to give you a brief overview. And then we're going to jump into some actual working code and show you how that works, and then we're just going to, kind of, walk you through, give you a little tour.

[Kit] So, to start off, think about the old way of deploying. I think most people kind of get this, right? When you deploy, whether it's to a VM or to a bare metal machine, everything's kind of running together, right? This is the way that things are packaged up. You've got your executables, your configuration; you've got to make sure you don't have version conflicts between things. And you put it all kind of in one place, in an image, and you can deploy that. And managing the state of that can be tricky, right? You can have conflicts or changes, and you don't know the exact state of the system. You can move to what we call an immutable deployment with VM images, but the downside is they're kind of heavyweight; they take a long time to boot up.

[Kit] And the new way that everybody is so crazy about is containers, right? And the idea is that because you have this smaller package that just brings its own dependencies and can co-locate on the same VMs, it gives you a much more portable, much more reliable way to deploy code. And you can have things that have conflicting binaries sit side by side on the same physical host, and even the same VM host. And that's a really powerful thing for enabling developers to move more quickly. Let's see. Yeah. Okay.

[Kit] So, and this maybe — I don't know if this surprises you guys, just talking about Google, right. Inside of Google we have a system called Borg. I'm going to talk about it a little bit more. It runs containers. It has for about 10 years. And Google actually contributed cgroups into Linux, which enables the modern container. And so this is the infrastructure you see at all the, sort of, big internet-scale companies like Twitter and Facebook; they have a similar approach. But at Google we run everything in the same container-based system. And really — 2 billion containers a week. Right. It's the […] number. But we — you know, whether it's Gmail, search, YouTube, all of it is running on top of our container infrastructure inside the company.

[Kit] What it lets us do is actually decouple our engineers, who are writing code, from the guys who are building data centers. I think the easiest way to think about it is that building data centers takes a really long time: from, you know, real estate, to concrete, to everything else that goes into it. And writing features is a really fast process. If you put the two together you end up with either really upset people building data centers or really upset developers. So we've got to decouple it. And that's where the container orchestration platform comes into play.

[Kit] So as a developer, what do I do? Well first, this is how you build for Borg. This is what developers inside of Google would do for their Borg job. So I have this hello world job that I defined. And then I put it in a cell; 'ic' is just the name of a cell that we have. A cell is a 10-thousand-machine group that's inside of a data center. I don't know which machines, I don't know the names of the machines; I have a very short two-letter acronym to describe the machines I want. I tell it my binary, where it's located in source, and say, hey, run this. I'll tell it, you know, maybe I've got some arguments I need to define, like the port number, so I can give it some environment variables. And then I define my resource requirements for that job: what it needs to actually execute. So I'm going to define my CPU, my RAM, my disk, whatever I need for this thing to run successfully. Right? And then I'm also going to tell it a number of replicas. Right? But since we're Google we do more than five replicas; we'll probably do something more like 10 thousand replicas. There you go, that's more like it. And that's what you do, right. As a developer, that's my entire job config that puts it out into the world.

[Kit] And behind the scenes, here's what really happens. This is the basic architecture of Borg. And I'll show you the flow here. So first you have your binary and you stick it into a binary repository. Right. And then from there we set up the configuration in the Borgmaster. The Borgmaster talks to the scheduler and starts the process of scheduling the work it's been asked to run. And then it goes and finds a machine to go run it on. Borg handles actually placing those binaries onto the machine and pulls them in from the artifact repository. Okay. Pretty simple flow. But that's what that little description is doing. And behind the scenes, there are a lot of optimizations that go into running and scheduling it; that's sort of the magic there. And so the result is you get a bunch of hello apps — hello world apps running in your data center. Right?

[Kit] And now the next slide. Let me — actually, let me talk about Kubernetes. So Kubernetes. The first question we always get is what the hell happened with the naming. Who named this thing? Kubernetes means the helmsman of a ship. You know, the nautical theme that we saw with Docker kind of played into it a little bit. But that's what it is. And we call it, for short, K8S. So you might see that: K, 8, S. The eight represents the eight missing characters between the K and the S in Kubernetes.

[Kit] But this is — this is the reimagination of Borg. Right. The system that we built internally at Google that runs all of our application infrastructure. Kubernetes is a reimagination of it, rewritten in Go, 100 percent open source, and very fast growing, with a vibrant community. We've seen a bunch of companies making strategic bets on Kubernetes, and we also have the Cloud Native Computing Foundation, which was formed last year to take Kubernetes and other cloud native technologies and make sure they stay vendor agnostic and focus on the right things for developers. So I've been really proud to be a part of the Kubernetes project; it's an amazing community of people and I hope that you all try it out.

[Kit] But this might look familiar to you. Because this is Kubernetes. And if you remember the slide before, we basically just changed the names on stuff. It's very similar. But one other thing that I'll point out that we did, just for today, for this slide, is we actually have Artifactory off to the side there too. Right? So we replaced our binary store with Artifactory. I'm going to show you how that works in just a second. But it's the same idea: you take the container image and you push it into your container store. Right? You push your Kubernetes config file to the Kubernetes master. The master talks to the scheduler and starts the scheduling loop. And it goes and deploys the — pushes it out to a kubelet that's running on an actual VM in your environment. And then it pulls the binaries, deploys them to the machine, and kicks it off. Okay. Simple flow. The same exact flow that we use for every engineer at Google.
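To make that config-file step concrete, here is a minimal sketch of the kind of manifest a developer pushes to the master. Everything in it (names, image, resource numbers) is illustrative, and it uses the current apps/v1 API rather than whatever version the 2016 demo ran:

    # hello-world.yaml -- the same fields as the Borg job described
    # above: image, port, resource requirements, replicas.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
    spec:
      replicas: 5                  # desired number of copies
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: hello-world
            image: example/hello-world:1.0   # pulled from your registry
            ports:
            - containerPort: 8080
            resources:
              requests:
                cpu: 100m          # a tenth of a core
                memory: 128Mi

Running kubectl apply -f hello-world.yaml hands this to the master, and the scheduler takes it from there.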

[Kit] So with that, that’s all the slides basically. We actually have a few more slides and then we’re going to get into a demo. You’re going to take over here.

[Ray] Yeah. Sure.

[Kit] Okay.

[Ray] Okay. Very cool.

[Ray] So I'm going to show you a little demonstration here. And this is probably the most beautiful application I could ever build. I'm not a frontend developer; this is with Bootstrap. And it's also one of the best demo applications I ever built as well. Why? Because it actually has two of the best applications: one is a hello world and the other one is a guestbook. Right? You've probably seen both, but you've never seen both at the same time. Now, what this actually does: you enter your name and a message, right? And you press the button and it's going to say hello to you, of course, and it's going to persist the data, via another service behind the scenes, into a MySQL database.

[Ray] And it is actually using, you know, the microservices pattern, where you have the frontend that is decoupled from the hello world service and the guestbook service behind the scenes. And it is using JSON over HTTP, or REST, to communicate. And as you can see, it can get very complicated very quickly if you are deploying an application in the microservices architecture. Because rather than having a single package to deploy and manage, each one of these boxes is actually a separate component that you have to manage separately. Okay?

[Ray] And you really need tooling to help you do that. So here we have three components, but we also have persistent storage. We have Redis, which is an in-memory database that we're using for session replication. And we are using MySQL which, of course, needs to be able to persist data onto a persistent volume. So this is the application that I'm going to use for today's demonstration of how you can actually deploy into Kubernetes and manage your applications there. And also of how to, you know, create a continuous integration pipeline, so once you check in the code, it can propagate and perform build, and test, and promote in Artifactory, and finally deploy into your environment with what we call a rolling update.

[Ray] So let's see it. So here I have a cluster that's already set up, and this cluster is set up on Google Container Engine. That is our managed Kubernetes offering; however, just remember you can run Kubernetes anywhere. Right? You can run it in the cloud, you can run it on-prem. I personally run Kubernetes on a Raspberry Pi cluster and you can definitely do that too. But here, this is using the cloud. We have four nodes. These are four virtual machines, represented by the four boxes on the top. And I have a preview of one of the containers. Let me show you. It's a very simple Spring Boot application. Okay. This is one of them. The first one is hello world. And it uses a REST controller that returns hello to you. It's using Groovy and all that stuff. You can be — did I hear?

[Audience] Groovy.

[Kit] Yeah. Groovy.

[Ray] It's nice for live coding. I'm not doing that today, but that's what I usually do. Now, I can configure this application with another properties file. You'll see some of these things later. Okay, so I already built the application into a container, but I think when I did a poll, many people haven't really used Docker before. So I just want to show you what that looks like very quickly.

[Ray] It's very important to understand a little bit about it, because the way you build a container is by defining a Dockerfile. And the Dockerfile is really nice. It's kind of self-describing. It tells Docker, in this case, how you want to build the container. Now this one's very simple. It only has a few lines because I simplified a lot of the things. But let's look into one of the examples here. Hold on a second. Groovy onbuild. And let me see the Dockerfile here. I just want to show this very quickly. It's really nice because it's self-documenting. You specify all of the steps that you need to create this container image, which has not just your application but also the runtime that you need in addition, to be able to run your application. Right?

[Ray] So, I took the steps off of the internet and I put them into the Dockerfile, and that's going to create the image for me that has Java preinstalled and, in this case, Groovy and Spring Boot preinstalled for me.
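For reference, a Dockerfile in the same spirit might look like this. This is a minimal sketch, not the demo's actual file (the demo used a Groovy/Spring Boot "onbuild" base image); the base image and jar name here are assumptions:

    # Start from a base image that already has Java installed.
    FROM java:8
    # Copy the built Spring Boot fat jar into the image.
    COPY target/helloworld-service.jar /app.jar
    # The port the embedded server listens on.
    EXPOSE 8080
    # The command that runs when the container starts.
    CMD ["java", "-jar", "/app.jar"]

Then docker build -t helloworld-service:1.0 . turns that recipe into an image.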

[Ray] Now, because I have already built the container, how do I actually deploy it to a cluster of machines? Remember the slide that Kit was going through on how it actually works behind the scenes; this is what I'm going to do. Give me one second, let me get into the right directory. Spring Boot Docker. Kubernetes — examples Kubernetes one point two. All right. So very simply, if you have a container that you want to run in a cluster of machines, you can do this: kubectl run. Okay. Now, kubectl is a command line application that can interface with your Kubernetes cluster. Everything I do here is going to use this kubectl command line, but behind the scenes it actually makes API calls. So whatever I'm doing here right now, just remember you can actually make API calls from your tools as well, directly into Kubernetes behind the scenes.

[Ray] So what this command line is going to do is: kubectl run, then the name of the application, which I can name anything I want, and then dash dash image, which is the image I want to deploy. In this case I actually have a pre-built image that's on Docker Hub, but later we'll actually use the Artifactory registry as well. The point here is that you can use images from any registry. And then that's my image. That's pretty long. And then dash L. This is very important. This actually specifies a set of labels onto this application. Now in Kubernetes, labels are really, really important. Everything in Kubernetes can be labeled. What else do labels do?

[Kit] Are you asking me?

[Ray] Yeah.

[Kit] Labels — labels give you an open-ended key value pair that you can search. So everything in your app you can define, you know, different stages, different environments, everything, with labels. They're non-hierarchical, by design. Right? Everything's sort of a flat key value pair, and it lets you manage your application really easily. You just add a label to it and then you can use labels to refer to it later. And Kubernetes only refers to objects by labels. So, anyway.

[Ray] Yep. Very cool. So it's a key value pair and you can name it however you want. You can give it any types of values that you want. This is completely managed by you; you alone define it. We don't really care; Kubernetes doesn't really care how you want to label your stuff. But it is important that Kubernetes only selects by labels. Right. This is something that you can actually query against in the future: hey, give me all the applications that have labels like environment equals staging and version equals one point zero. Right? You can do that later.
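Assembled, the command Ray is about to run looks roughly like this; the image path is illustrative, not the exact Docker Hub name from the demo:

    # Run the image on the cluster, attaching a set of labels.
    kubectl run hello-world-service \
      --image=example/helloworld-service:1.0 \
      --labels="app=hello-world-service,version=1.0"

    # Later, select by those labels, for example:
    kubectl get pods -l "app=hello-world-service,version=1.0"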

[Ray] So this is what I’m going to do, I’m going to run this command line.

[Kit] Do it.

[Ray] Yeah. And that was really quick. What happened behind the scenes is that it took my image, it sent a configuration file, which was actually automatically generated for me, to the Kubernetes master, and then the master consulted the scheduler and it just started one application on one of the nodes. And if you look carefully at the bottom, in parentheses, it is running on the machine tpla. Now what's important here is that I never told Kubernetes where exactly to run it. I just said run it in this cluster. And it's going to figure out where to run this application.

[Kit] Yeah.

[Ray] But here we have two more boxes, right, on this screen. We have the box in gray, and the box in blue. Now, up until this point, we've talked about containers, but the box in gray is actually called a pod. Now, what is a pod, Kit?

[Kit] What is a pod?

[Ray] Yeah. Why is it a pod? Why isn't it just a container?

[Kit] Yeah. It's an interesting thing. So the reason why Kubernetes has pods is because sometimes you want code that has different dependencies to run together and work together. So a pod is a way of constructing a unit: if you think about the containers running on a machine, you can take multiple of them and say, these are a unit together. They can actually share localhost, and they share volumes. And they can be referred to by the same IP address. They work together. And they share a fate too: if one of the containers dies, the other one will die as well. And when they get scheduled, they always get scheduled on the same VM. The reason why this is super powerful is that you get to decouple your dependencies even at runtime. The canonical example of why pods are really powerful is if you had a static webserver and a system that can fetch the static data from an external system. You put those two side by side and those two systems can iterate separately, but they work together. They can be written in different languages and use different dependencies. So […].
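Kit's web-server-plus-fetcher example could be sketched as a single two-container pod manifest like this (names and images are hypothetical):

    # Two containers in one pod: they share the pod's IP, can talk over
    # localhost, share the 'content' volume, and are scheduled together.
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-fetcher
    spec:
      volumes:
      - name: content
        emptyDir: {}          # scratch volume shared by both containers
      containers:
      - name: web
        image: nginx          # serves the files the fetcher writes
        volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html
      - name: fetcher
        image: example/content-fetcher   # hypothetical sidecar image
        volumeMounts:
        - name: content
          mountPath: /data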

[Ray] And, you know, because our application has the frontend, the backend, two of those things, you might be thinking: maybe I can put both the frontend and backend into the same pod?

[Kit] No.

[Ray] No? No? Why not?

[Kit] Well, it's the wrong level of abstraction. What you want to do is put pieces that scale separately into separate pods. Right? Think about it: the containers in a pod are going to be scheduled on the same machine. If you put your frontend and backend in one pod, you're not going to be able to scale them independently, because you'll have to grow your backend along with your frontend, and they won't run on separate machines.

[Ray] Yeah.

[Kit] Yup.

[Ray] Very cool. And in order for you to be able to scale this, you need to use something in Kubernetes called a replication controller, or in this case, the concept of a […] deployment. And that is actually the box in blue. Right? And what does that do?

[Kit] Replication controller?

[Ray] Yep.

[Kit] Yeah. So a replication controller is a constant loop that tries to match the current state of the system to your desired state. Right? So we declared up front we want this many replicas. In this case, you needed one, right. You're going to go up to four?

[Ray] Yeah.

[Kit] From one to four. So right now it matches: there's one running, he asked for one. Now we're going to change our desired state to four, and the replication controller is going to notice in its loop that it doesn't match. And so it's going to fix the problem. You can scale it back down, you can change that. Actually, you know, this might be a good time to show what happens if we kill one of them.
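The scale-up Ray is about to do is a single declarative command; a sketch, with the deployment name assumed:

    # Declare a new desired state; the controller loop does the rest.
    kubectl scale deployment hello-world-service --replicas=4

    # Watch the reconciliation bring the new pods up.
    kubectl get pods -w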

[Ray] Oh no. Really?

[Kit] Let’s do that. Let’s see what happens if we kill one of them.

[Ray] Okay. So I'm going to scale this up to four, and you can see how easy this is. It actually figures out which machines have the capacity to run your job, and it just deploys it for you. Now, what do I need to do to kill one of these things? Do you want to kill it from the machine?

[Kit] Don't do it through the Kubernetes API, because we want to show what happens if it was, you know, some catastrophic event. So we should —

[Ray] Catastrophic. Somebody getting into your machine and killing the container. Okay.

[Kit] Yeah. Exactly, running out of memory.

[Ray] Okay.

[Kit] You guys all use the Google Cloud Platform console, by the way? Have you guys seen this? Oh, you've used it, good. […] used it. So, this is our product; we're real happy with it. But one of the cool features — you can just use one-click SSH to get into the box.

[Ray] Yep.

[Kit] You can find the pod name.

[Ray] So first, you can find out where this is running. So we have four pods running. Sorry.

[Kit] Your font’s too big.

[Ray] Yeah. It is, isn't it. I don't know how to make this smaller. There we go. Okay, so let's see here: get pods, dash o wide. So we have one running on wskh.

[Kit] Okay. Let’s go find it.

[Ray] Let’s find this […].

[Kit] You would never do this in real life. Right? This — we're — you know, going to the box and killing it. So yeah. You saw, Ray just did a one-click SSH. It gives us a browser with an SSH terminal. There's also a developer console that has all the Google Cloud tools in the browser as well. One click and you get access to the latest APIs, so sometimes when you're just browsing around and you don't have your dev box handy, you can just kind of click in and hack around on stuff. Which is really convenient. Okay, so we're in the box now.

[Ray] Okay.

[Kit] So.

[Ray] Here we go. So let me see here. Docker ps.

[Kit] Woah.

[Ray] There it — there’s quite a few things here. Let me see —

[Kit] Can you make it?

[Ray] — and find the Hello world. Hello world. And it is running right here. It’s running in Java in this container. So let me go ahead and kill it.

[Kit] And Docker kill. And then we’re going to kill that container. Okay. Now, what’s going to happen. Well, let’s just do it.

[Ray] Just do it. All right. It’s gone.

[Kit] Now do Docker ps again. Yeah let’s see what’s running.
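In terms of commands, the whole exercise on the node amounts to this (the container ID is a placeholder you would copy out of the docker ps output):

    docker ps | grep hello-world   # find the app's container ID
    docker kill <container-id>     # simulate the catastrophic event
    docker ps | grep hello-world   # seconds later, a replacement is running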

[Audience] […] unspecified nature.

[Kit] Yes, yes. Yeah. Some angry sys admin just went into Docker, killed the thing. So —

[Ray] Look, it already just got restarted, what happened?

[Kit] Already restarted, in about 10 seconds. So it's that fast, right. It notices and fixes the problem. Now in this sort of slightly contrived example, it's going to reschedule it in the same box. Actually, it rescheduled it on a different box. But normally, if there's room, it's going to put it back where it was. But this is happening all the time. Right? It's always running and looking for these restarts. This is the idea of cattle, not pets, if you're familiar with the concept: you let them die and you bring back new ones. You don't name them, care for and feed them, treat them specially, and give them medicine. You just kill them and get a new one. Right?

[Audience] That’s brutal.

[Kit] It’s a little ruthless. I’m sorry. Yeah.

[Audience]

[Ray] Okay. That was pretty cool. So Kubernetes really tries to keep your system up and running. But we have a problem here: these IP addresses associated with each of these pods, the gray boxes for your application, are ephemeral. Okay, that means they can come and go. They change; you don't really know what they're going to be. So how do I actually get to this application if that's the case? I mean.

[Kit] Almost like you need like a service endpoint with a load balancer.

[Ray] A load balancer? Right. That's what people usually do, right? You route your requests through the load balancer and it routes them to the different instances of your application behind the scenes. And how do we do that in Kubernetes? It's pretty simple. All you need to do is to — whoa — what just happened here.

[Ray] Kubectl expose. Okay. So I can expose this deployment right now. I give it the name of the application, the port I want to map to, and the port that I want the external audience to be able to access. And here I'm going to specify the type: LoadBalancer. And this one is special, because by default Kubernetes will give you an internal service with an internal IP. But, LoadBalancer, what does that do?

[Kit] Well, because all of these pods can do the same work, it doesn't matter which one a request goes to. Right? And so what the service represents is an abstraction around pods that can all fulfill the requests that come to it. Right? And so the Kubernetes service attaches to a load balancer in your cloud environment and will round-robin the work across all of those different pods that can fulfill the task. And actually, now, are we ready to sign the guestbook? Or what?
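The expose command Ray ran is roughly this; the port numbers are assumptions (Spring Boot's default is 8080):

    # Create a service backed by a cloud load balancer; it round-robins
    # across every pod matching the deployment's labels.
    kubectl expose deployment hello-world-service \
      --port=80 --target-port=8080 --type=LoadBalancer

    # The external IP appears here once the cloud has provisioned it.
    kubectl get services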

[Ray] Not yet. Not yet. It’s almost here.

[Kit] Almost there.

[Ray] Almost here.

[Kit] We’re going to ask you guys, by the way, to pull out your phones or your laptops and sign the guestbook. But please, you know, remember that we are going to put this video on the internet. So just be cool.

[Ray] Yeah. But the —

[Audience]

[Kit] We’ll blur it out if —

[Audience]

[Ray] I'll tell you what, I have done this with a live chat with high school students before, and they were really nice. So I'm just setting the bar here.

[Audience]

[Ray] Now, before you're able to sign the guestbook: you know, this is only one service at this moment. Okay. This is not the full thing. But I just want to show you that you can get to the service through the load balancer. Now, remember, my guestbook has multiple components: I have the frontend, the backend, the database, and all of that.

[Ray] Now, to save time, because we're kind of short on time, what I'm going to do is call a script. It's called the turn-up script. And turn-up is just a series of — oh, sorry — it's a series of kubectl commands. And I have checked in all of my application deployment descriptors into my Git repository. So I've got these files, and you can combine these files together if you want to; I just keep them separate because it's really easy for me to change and mix and match things. So what I'm going to do is run this turn-up script, which will in turn tell Kubernetes to deploy a lot of things. It's going to deploy MySQL with a persistent volume. It's going to deploy Redis. It's going to deploy the guestbook service. It's going to make sure my hello world service is still there. And […] the user interface as well.
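A turn-up script of the kind described might be nothing more than this; the file names are illustrative, not the actual layout of Ray's repository:

    #!/bin/bash
    # Hypothetical turn-up script: deploy every component of the guestbook.
    kubectl create -f mysql-volume.yaml        # persistent volume for MySQL
    kubectl create -f mysql.yaml               # the database itself
    kubectl create -f redis.yaml               # in-memory session store
    kubectl create -f guestbook-service.yaml   # backend persistence service
    kubectl create -f helloworld-service.yaml  # hello world service
    kubectl create -f ui.yaml                  # frontend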

[Ray] And as we're doing this, from here you can actually see it happening live. Right? As new services are being deployed, I can visualize this with this special visualizer, which is open source; you can get it from my GitHub. And all of a sudden, without doing a lot of work, because I've checked in descriptors for all of my applications describing the way I want to deploy them in this canonical Kubernetes model, I can deploy this anytime I want and anywhere I want, as long as the deployment target is a Kubernetes cluster. And it doesn't matter whether, again, you're running in the cloud or on-prem. The descriptor looks exactly the same.

[Ray] So I'm going to wait a little bit, because now MySQL, my guestbook service, everything's up and running. The only thing I'm waiting for right now is the actual external load balancer IP, and there it is.

[Kit] There it is.

[Ray] So, how many people think this will work? Thank you. Yeah.

[Kit] Oh yeah. Vote of confidence.

[Ray] Thanks for the confidence. Thank you.

[Kit] Let the record show that everyone raised their hands. Yeah.

[Ray] For those of you who are watching the video, nobody raised their hand. So let's see. Okay. So very cool. I have my IP address. This is what was provisioned for me by the load balancer, by Kubernetes. And you can actually go here and say something nice, if you want to. I'll try this myself. This is a make-or-break moment: whether Kit fires me or not all depends on whether it works.

[Kit] If only I had the power.

[Ray] Hello SwampUp. And let's see. Oh come on. There we go. Not bad.

[Kit] Oh we have someone. Pretty awesome demo. Nice.

[Ray] Thank you, thank you. Yeah. And this is actually persisting the data behind the scenes, in the MySQL database running on Kubernetes. Very cool? You like that so far? Yeah.

[Kit] All right. So I was promised continuous integration, Ray. So can you make some changes to the app or what?

[Ray] Yeah, yeah. Sure. Let me make a few changes here. First I'll show you, without continuous integration, how you can actually do this with Kubernetes. Okay.

[Ray] So, for example, if I had two different versions, or tags, of the same container, one updated and one older, how do I actually use Kubernetes to do a rolling update? Okay, that's one of the most common tasks that you're going to want to do. And in Kubernetes, if you just want to use it directly, I'm going to show you how it's done. So first I can get my deployment. And here, what I want is to update the hello world service. Okay. I'm going to do that. And it's really easy to do. I can kubectl edit, deployment, and the name of the application I want to edit. And what this is going to do is show you the actual descriptor that is stored inside of Kubernetes. Now, you can actually copy and paste these out. You can save this file into your Git repository so that you can always redeploy the same application with the same settings. And here, all I have to do is change the image I want to deploy. And watch very carefully.

[Ray] So I'm going to scroll down. Hold on a second. Where is it? Do a refresh. It's a little slow. So I'm going to do a refresh. I'm going to update this service right here. I'm going to update it to two point zero. Let me change that to two point zero. Maybe that'll work. And, here we go. Okay. Wow. That was quick. What just happened? Did you see that? Was that too fast? It actually just performed a rolling update on my application and replaced my service, one by one, okay, one instance at a time, with version two point zero.
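As commands, the rolling update Ray just did comes down to editing one field; the deployment, container, and image names here are assumptions, and newer kubectl versions add a non-interactive shortcut:

    # Open the live descriptor in an editor; change
    # spec.template.spec.containers[0].image to the :2.0 tag and save.
    # Saving is what triggers the rolling update.
    kubectl edit deployment hello-world-service

    # Newer kubectl versions can do the same non-interactively:
    kubectl set image deployment/hello-world-service \
      hello-world-service=example/helloworld-service:2.0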

[Kit] Yup.

[Ray] Now if I go back to my guestbook application here. And let me do a refresh.

[Kit] By the way, that's what we mean by immutable. Right? We didn't change what was running in the container. What we did was we said: bring a new one up, kill the old one. Bring a new one up, kill the old one. By the way, you can do rollbacks the exact same way. Right? You bring up the old one, bring down what was the new one, and it's the new new one. So you're only, always, pushing a container that's ready to run. You're not modifying the container that's running. You're not installing updates onto it. That's where you get this immutability and repeatability in the deployment.

[Ray] Yep. Exactly.

[Kit] Oh. Look at this. This guy’s. Okay. Yeah.

[Ray] Version two point zero. Yeah.

[Ray] That’s pretty good. Good messages so far. I hope it stays that way.

[Ray] Now let's say that this deployment was bad and you want to roll back. What do you do? You can do kubectl rollout. You can actually see a history of deployments if you want: hello world service. And you see what happened. And look, I have three revisions right now. And you can actually see which command line caused the system to change. Okay. If I want to roll back to any one of these revisions, I can do that. Or if I want to just go back one revision, I can do kubectl rollout undo on the deployment hello world service. And again, what's going to happen behind the scenes, oh look, it already rolled back. It's going to do another rolling update, rolling back the application. Just like that.
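The rollback commands, with the deployment name assumed:

    # Show the revision history, including the command behind each change.
    kubectl rollout history deployment/hello-world-service

    # Go back one revision...
    kubectl rollout undo deployment/hello-world-service

    # ...or jump to a specific revision from the history.
    kubectl rollout undo deployment/hello-world-service --to-revision=1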

[Kit] Yep. Yep.

[Ray] Now, what we have also done, using this kind of construct, is create a demo where we're using a Jenkins pipeline to do a few things. It's actually monitoring my GitHub repository right now. So, for example, this is what I'm going to do: I'm going to check out a different branch that this Jenkins pipeline is monitoring right now. So I'm going to do a git checkout demo. And I'm going to go ahead and update my user interface. Somebody from marketing told me that the white background doesn't work really well, so they recommended light blue.

[Kit] Can you — can you make it green?

[Ray] Green?

[Kit] Yeah.

[Ray] Green — yes. Let me do green.

[Kit] Yeah.

[Ray] But is there a specific shade of green.

[Audience]

[Kit] Green […]

[Ray] Green would be okay.

[Kit] You don’t have it memorized.

[Audience] […] our corporate green. Next time.

[Kit] Yeah. Next time.

[Ray] I'm going to commit this. I'm going to git add. I'm going to do a git commit: update background to green. And then I'm going to push it. This is actually pushing to my GitHub repository. Okay. And I set up Jenkins in such a way that — I'm using quite a few plugins here, and one of them is the GitHub plugin. And it's actually — why is my push not working? Is my internet down? Yeah. Maybe. Well, let's see if this goes through. Oh, here we go.

[Ray] Get off the YouTube videos guys. Come on.

[Kit] They’re busy signing the guestbook.

[Audience] They are chatting in the channel.

[Ray] Yeah. So I did a push. And this actually triggers, hopefully triggers, my pipeline. If I go here. And it's going to do a couple of things. Oh yeah, it's building already. Right. It's actually triggering my pipeline. It's downloading my latest application. It's building the containers. And if I go back to the UI pipeline here, these are the things that it's going to do at a high level: build the container, test it, okay — and also, by the way, once the container is built, we're actually pushing it into Artifactory. And the Artifactory instance is actually running on GCP as well.

[Kit] That's right. And as you guys may have heard, now you can run Artifactory Online on Google Cloud Platform. And we hope you do. And actually there's a free thirty-day trial, I'll show you later, that you can use with that. Here it is. Where the magic happens. GCP Artifactory. Is it almost done?

[Ray] Yeah. I need to enable auto refresh, but it's also going to push my container image into Artifactory. It will do some component testing, you know, for this specific component I just built. And if the component test is successful, then we use Artifactory to promote the image into the release registry.
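Under the hood, the push step is a standard Docker registry interaction with Artifactory; the hostname and repository names below are placeholders, and in the demo the push and promotion were driven by the Jenkins Artifactory plugin rather than typed by hand:

    # Tag the freshly built image for the Artifactory dev registry...
    docker tag helloworld-ui:2.0 docker-dev.example.com/helloworld-ui:2.0
    # ...and push it there.
    docker push docker-dev.example.com/helloworld-ui:2.0
    # After component tests pass, the image is promoted to the release
    # registry (here, via the Jenkins Artifactory plugin).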

[Kit] Where are you running this Jenkins and how did you deploy it, by the way?

[Ray] Oh yeah, this Jenkins is also running inside of Google Cloud Platform. And it is really cool, because I didn't have a lot of time to set this up myself. So what I did is I went to the console. And this is really cool. I went to this Cloud Launcher. And here, it's really nice: you can find a bunch of applications you want to deploy, and one of them happens to be Jenkins. And all I had to do was click on that, tell it the […] instance, how big the machine is, and it just provisioned everything for me. And subsequently I was able to install all the plugins I need to run this pipeline. Okay.

[Kit] Cool.

[Ray] Okay. So it seems like this pipeline has completed. And one of the really cool things is that, you know, not only is it calling Artifactory behind the scenes, doing the promotion of the Docker images when they're ready to be promoted; it also calls Kubernetes, with the kubectl command line in this case (there's also a Kubernetes plugin), and it actually did a rolling update for me. Now you may have missed it, but hopefully, again make it or break it, if I go back to here.

[Kit] Oh look at that. Green.

[Ray] We got green. Actually, somebody from Google behind the scenes did this for me. No, I'm kidding, it all went through the Jenkins pipeline. Pretty cool. Pretty cool.

[Kit] Pretty cool. Yeah.

[Ray] And I have just a little bit more time left. I'm going to go into one of these things specifically and show you what configuration I did. Let's see here.

[Kit] Actually, do you want to? Should we take questions? Or.

[Ray] Yeah, if you can take questions first. Yeah. Let’s do that.

[Kit] Let's do that. Yeah. Unless we can leave it kind of up, but. Actually, go to the final slide though, just real quick.

[Ray] The final slide. Yep. One second.

[Kit] All right. Yeah, so, there are two takeaways for you guys. Thanks so much, by the way, for your attention. We'll take some questions; that will be great. But: JFrog Artifactory, you can get a free trial and try it out on GCP. We'd love for you to try it. It'll be awesome. And then also, you can get a 300 dollar free trial on GCP, and so I would encourage you to take that free money and build something and try it out. It will be a lot of fun. So, anyway, thank you very much for your attention.

[Ray] Yeah. Thank you.

[Kit] All right.
