Use Case – Containerized Continuous Deployment with Mesosphere DCOS, JFrog Artifactory, and Jenkins

Abstract:

Roger Ignazio / Mesosphere, May 2016: This talk will present Mesosphere’s approach to improving developer agility with Jenkins and Artifactory. You’ll learn how to run multiple Jenkins instances on a single DCOS cluster, with each of them sharing a common pool of resources based on need, not static partitioning. You’ll also learn how to build, test, and deploy containerized applications using Jenkins, the CI solution of choice for velocity, and how we’re partnering with JFrog for artifact storage on DCOS.

Talk Transcription:

So, just to get started. My name is Roger Ignazio, and I’m a tech lead at Mesosphere, specifically working on CI and CD, both internally and for customers. I’m also the author of Mesos in Action, a book about the Apache Mesos project and pretty much the entire ecosystem. It predates the whole idea of DCOS and some of the work that Mesosphere has been doing in this space, but it also touches heavily on a lot of that work.

That gets me to some of the work we’ve been doing with JFrog on getting Artifactory on DCOS. Just to recap, we see four major components in a continuous delivery and continuous deployment pipeline, and obviously Artifactory plays a big role there. JFrog has an Artifactory plugin for Jenkins that allows us to resolve and publish artifacts from within a Jenkins pipeline, and we’ve included that in the turnkey Jenkins image that we have for DCOS. All of this is open source, by the way, so you can go on GitHub and check it out in the Mesosphere Jenkins Mesos repo. All that stuff is over there.
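To make that concrete, here is a minimal sketch of the kind of publish step the Artifactory plugin handles for you inside a Jenkins pipeline, expressed as a direct call to Artifactory’s deploy REST API. The hostname, repository, path, and credentials are placeholders, not values from the talk.

    import requests

    ARTIFACTORY_URL = "http://artifactory.example.com/artifactory"  # placeholder host
    REPO = "libs-release-local"                                     # placeholder repo
    TARGET_PATH = "com/example/myapp/1.0.0/myapp-1.0.0.jar"         # placeholder path

    def publish_artifact(local_file):
        """Deploy a single build artifact to Artifactory via an HTTP PUT."""
        with open(local_file, "rb") as fh:
            resp = requests.put(
                f"{ARTIFACTORY_URL}/{REPO}/{TARGET_PATH}",
                data=fh,
                auth=("jenkins", "password"),  # placeholder credentials
            )
        resp.raise_for_status()
        print("Deployed:", resp.json()["downloadUri"])

    if __name__ == "__main__":
        publish_artifact("build/libs/myapp-1.0.0.jar")

In a real pipeline the plugin also records build info alongside the upload; this only shows the artifact deployment itself.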

I think most of you are probably familiar with this particular deployment diagram from the JFrog website. Consider the different components you have: a load balancer, an Artifactory primary, an Artifactory secondary, and some sort of database and shared storage on the backend. These are all things you can run on any operating system, including DCOS, but where DCOS really starts to shine is when you have Artifactory HA deployed and your primary goes down. That’s something you want to get back up and running quickly, because a lot of background maintenance tasks happen on the primary instance. Sure, Artifactory will continue serving artifacts and you can continue storing artifacts, but recovering the primary is normally something you have to go and do yourself as a sysadmin or a developer. With DCOS, if the underlying machine fails or the instance crashes, Marathon sees that failure and reschedules it elsewhere on the cluster, updating any load balancer configs along the way.
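As a rough illustration of how that rescheduling works, the sketch below posts an app definition for an Artifactory primary to Marathon’s REST API; Marathon then restarts or reschedules the container whenever the task or its host dies. The endpoint, image, resources, and health check are illustrative assumptions, not the actual DCOS package definition.

    import requests

    MARATHON_URL = "http://marathon.mesos:8080"  # assumed Marathon endpoint

    # Illustrative app definition for an Artifactory primary.
    artifactory_primary = {
        "id": "/artifactory/primary",
        "cpus": 2,
        "mem": 4096,
        "instances": 1,  # Marathon keeps exactly one copy running
        "container": {
            "type": "DOCKER",
            "docker": {"image": "jfrog/artifactory-pro:latest"},  # assumed image
        },
        "healthChecks": [{
            "protocol": "COMMAND",
            "command": {"value": "curl -f http://localhost:8081/artifactory/api/system/ping"},
            "gracePeriodSeconds": 300,
            "intervalSeconds": 30,
            "maxConsecutiveFailures": 3,
        }],
    }

    # If the task or its host dies, Marathon relaunches it somewhere else on
    # the cluster to satisfy the declared instance count.
    resp = requests.post(f"{MARATHON_URL}/v2/apps", json=artifactory_primary)
    resp.raise_for_status()
    print("Deployment started:", resp.json().get("deployments"))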

I actually saw a demo of this from the JFrog guys last week, and I think the total downtime was about three minutes. If you get paged for an Artifactory instance going down, I don’t think most of us would even be able to check our email within three minutes, let alone actually have the problem resolved.

So, like I was saying before, with a single "dcos package install artifactory" command we can get a turnkey Artifactory deployment on DCOS, and that comes with the load balancer and the database. There’s still the NFS or S3 requirement for Artifactory’s shared storage, and I know that at Mesosphere we’re working pretty hard on the storage story around that. Stateful services aren’t a great user experience right now in a container-centric world. It’s something that DCOS is capable of doing, but there’s some manual setup involved and we’re trying to make that a little bit easier.
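For reference, that turnkey flow amounts to a couple of DCOS CLI calls; here is a small sketch that drives them from Python. The package names are the ones mentioned in the talk, while the options schema shown is purely an assumption.

    import json
    import subprocess
    import tempfile

    def dcos_package_install(package, options=None):
        """Install a package from the DCOS Universe, optionally with an options file."""
        cmd = ["dcos", "package", "install", package, "--yes"]
        if options:
            # Write the options to a temp file and hand it to the CLI.
            with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
                json.dump(options, fh)
            cmd.append(f"--options={fh.name}")
        subprocess.run(cmd, check=True)

    # Turnkey Jenkins and Artifactory, as described in the talk.
    dcos_package_install("jenkins")
    # The options schema below is illustrative only, e.g. pointing Artifactory
    # at an NFS mount for its shared storage requirement.
    dcos_package_install("artifactory", {"artifactory": {"host-volume": "/mnt/nfs"}})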

We’re still polishing the Artifactory HA package for production, but like I said, I saw the demo of it last week and hopefully we’ll have it added to the Mesosphere Universe shortly.

Otherwise, just to recap: we’ve been able to get turnkey Jenkins and Artifactory infrastructure running on top of DCOS. We have a container orchestrator for actually deploying those applications to some fleet of machines in production, or staging, or however you want to group them. We have service discovery for apps that need to connect to each other, like an Artifactory secondary connecting to the primary; that’s a service called Mesos-DNS, which is kind of like Consul in that it takes information from the Mesos cluster and publishes it via DNS so that services can find each other. We also have a layer 4 load balancer called Minuteman, which, like I said, does layer 4 load balancing across containers, and that’s completely distributed as well.
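Here is a minimal sketch of that service-discovery path: a task launched by Marathon gets a Mesos-DNS name of the form <app>.marathon.mesos, which any other task on the cluster can resolve with an ordinary DNS lookup. The app names below are assumptions, and the lookup only succeeds from inside the cluster.

    import socket

    def discover(app_name, framework="marathon"):
        """Resolve a Mesos-DNS A record for an app to one of its task IPs."""
        hostname = f"{app_name}.{framework}.mesos"
        ip = socket.gethostbyname(hostname)  # plain DNS lookup, no client library needed
        print(f"{hostname} -> {ip}")
        return ip

    # e.g. an Artifactory secondary locating the primary, or a build agent
    # locating the Jenkins master (app names are assumptions):
    discover("artifactory")
    discover("jenkins")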

But really, if you’re looking at DCOS as a container deployment platform, then as an application developer you want some sort of automated, first-class experience to get your application from source code through to production. By combining a lot of these technologies and piping them together, we’re able to do that.

Some of our future work is a way to visualize that whole process: being able to plug into GitLab or GitHub Enterprise, see your repository and your code all the way through to production, or let an authorized user actually do that deploy right from within the DCOS web interface. And we can do that just with the various APIs that are available in Artifactory, GitHub, Jenkins, et cetera.
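As a sketch of what that future pipeline view could be built from, the snippet below stitches together the GitHub, Jenkins, and Artifactory REST APIs to report a commit, its last build result, and the artifacts it published. All hostnames, job names, and repository names are placeholders, and the actual DCOS web interface integration described above remains future work.

    import requests

    GITHUB_API = "https://api.github.com"
    JENKINS_URL = "http://jenkins.example.com"                       # placeholder
    ARTIFACTORY_URL = "http://artifactory.example.com/artifactory"   # placeholder

    def pipeline_status(owner, repo, branch, job, artifact_name):
        """Collect one commit-to-artifact row from the three REST APIs."""
        commit = requests.get(
            f"{GITHUB_API}/repos/{owner}/{repo}/commits/{branch}").json()
        build = requests.get(
            f"{JENKINS_URL}/job/{job}/lastBuild/api/json").json()
        found = requests.get(
            f"{ARTIFACTORY_URL}/api/search/artifact",
            params={"name": artifact_name}).json()
        return {
            "commit": commit["sha"][:8],
            "build": build["result"],  # e.g. SUCCESS / FAILURE
            "artifacts": [r["uri"] for r in found.get("results", [])],
        }

    print(pipeline_status("example-org", "myapp", "master", "myapp-build", "myapp"))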

So with that, thanks for listening. We’ve got a little bit of time for questions if anybody has any. Thanks.
