Everyone, really quickly before we begin, let's do a sound check. If you can hear my voice, can you type "Yes" into the questions or the chat box? Awesome, it's a bunch of yeses, so welcome. It's great to see some new faces in this room, as well as a bunch of returning faces.


Welcome to Easy Continuous Deployment You Can Trust. We've got Brian Kendzior here, Founding Engineer of Solano Labs, and Troy Presley, who is the Solutions Engineer at Apica. They're going to give us a really exciting presentation here. If you have any questions, please ask them throughout the presentation, and we will get to them at the end of the show. We've also got Jorge here who I think is going to be the moderator on the Solano Labs side, so he's going to say hi here, too.


Hey, everyone. Welcome to the webinar.


Awesome. Without further ado, let's get going. Welcome. [Next 00:01:52].


Excellent, thank you for the introduction, Whitney. I'm Brian, again, as she mentioned, founding engineer over at Solano Labs, and today we're going to be talking to you guys about continuous deployment, the differences between continuous deployment and continuous delivery, and really how continuous deployment can help your software engineering teams deliver faster, more robust, and safer code. With me is Troy, who's going to be helping us out on the load testing side of things.


Hi, everybody. Like Brian said, my name's Troy, and I look forward to a great webinar.


Awesome, so let's jump right in. We have a bit of an agenda here. First we're going to talk about continuous deployment and exactly what that term means. I know it can be kind of confusing with all these different buzzwords jumping around in the DevOps space, so we're going to go over continuous deployment, then we're going to do a quick demo showing you how easy it is to build a pipeline using continuous deployment, and then again going over some more terms around continuous deployment and how to make your builds safer and better tested. Then we're going to have a chance for questions and a few resources for you guys to continue learning. Like Whitney said, if you have any questions throughout the webinar, I am going to be jumping back and forth in between a few different tabs doing the demo, so if you need me to slow down, if you need a little bit more explanation, I'm definitely happy to answer those questions as we go.


So, the journey to continuous deployment: there are two terms that get thrown around in the deployment and delivery space, and I want to clear up the confusion between continuous delivery and continuous deployment. Continuous delivery is having all of your testing and your builds happen in a pipeline, but deploying and actually pushing your code and your product to production is a manual process. You have a DevOps engineer who pushes a button, does a bunch of manual load testing, may do some manual testing, may give it to QA engineers to make sure the deployment was successful. As opposed to that, there's continuous deployment, where all of those steps are automatic. That can be scary to some people, especially people who are used to doing deployments manually and being able to do all of the sanity checks and make sure everything works on their own, using their own system. But with continuous deployment, as you'll see throughout this presentation, you can automate most of those manual steps, take human error out of the equation when you're talking about verifying and deploying, and you'll actually end up deploying faster.


What do you need to be confident that continuous deployment is going to work for your organization? Your worries are: is the deployment going to push out broken code? Is the deployment not going to work? Is it going to put out stuff that I'm not ready to put out? One of the things that you need in a continuous deployment pipeline is for it to be fast. You need a reason to do continuous deployment, and the biggest reason is to increase the speed at which you can put out your new features and the speed at which bug fixes reach customers. A lot of times there's a huge disconnect between your developer making a bug fix and when that bug fix is going to make it to production. You know, is it going to be merged into a bunch of different things? Is there going to need to be a bunch of different CI that happens? Maybe there's a failure of another feature that's going out at the same time, and that bug fix can't reach the customer because maybe there's a daily release, and there's a few things in that daily release that are broken, but the bug fix is fine.


So you want it to be fast and you want things to be able to go out potentially as individual changes so you're not getting backed up by other people's code changes. You also want it to be clean, so you want to be able to deploy cleanly into whatever infrastructure you have. In this case we're talking about AWS instances, and you also want it to be clear, so a lot of times a problem within an engineering organization is the deployment process is muddled, and you need to talk to five or six people to figure out all of the different manual processes that happened to get your code to production. With the continuous deployment pipeline, developers can see and visualize every single step along the way to a deployment, so they can have clear and easy reasons for why the deployment didn't work or potentially how long it's going to take before their code hits production. 


I would actually also add to what Brian's saying here. I think it wraps all these concepts together, is what you're really kind of on a high level going for is consistency, right? You want to bring consistency of the effort of going from producing code to getting it out into the world. That's what these systems exist to do, and each of the little bullet points here, like making it fast, making it clean, making it clear, it's absolutely true. Then wrapping all that together is like turning your process into a consistent thing, and the last thing ... the worst thing that can happen to consistency is to add human judgment in there. The more human judgment you can remove from the process and the more automated you can make that, the more consistent your releases are going to be.


Absolutely, yeah, it is a very good point. Obviously, to get consistency, you need to be able to trust the tools that you're using, so that is going to be where we bring in CI and where we bring in load testing and where we bring in the tool that we're going to be talking about that chains all this together, which is AWS CodePipeline. So we need to be able to make all of these checks, and you need to make sure that it's a tool that you can trust, and AWS CodePipeline is a very robust pipelining tool that allows you to string together multiple different steps, build, test, deploy, and it's all controlled in Amazon, and it brings together a lot of different integrations. Solano Labs has an integration. Apica has an integration and allows you to kind of just drag and drop, more or less, the solutions that you need to make your deployment pipeline.


Without further ado, let's get to the demo. What I'm going to go ahead and do is show you guys the AWS CodePipeline that we're going to be building. Very briefly, the source step is going to be pulling your source code, the build step is going to be running your CI tests and your load tests, and then the deploy step is going to be deploying your code to these AWS instances. To show you guys how easy it is to create one of these pipelines, I'm going to go ahead and actually build a pipeline right here.


You have to give your pipeline a name. You choose a source provider, so where your code is actually coming from and how the pipeline gets the code. I'm going to be using GitHub and connecting it to my GitHub account, which I've already been logged into, so really quickly and easily I can go ahead and select a GitHub repository. This repository is public, and we will be posting a link at the end of the slides so you guys can look at it if you have any questions. The next piece is the build provider, and we're going to be using Solano CI as the build provider. I'm going to connect to Solano CI, which actually uses the GitHub OAuth integration, so it drops right in, and I'm going to click Connect to register this source build with Solano CI. 


Then you have to choose a deployment provider, so we're going to be using AWS CodeDeploy, which is a tool that allows you to manage deployments over multiple AWS instances, and we'll talk about that a little bit later. You have to give an IAM role to the pipeline. This is the AWS CodePipeline IAM role, which gives access to an S3 bucket, and that S3 bucket is essentially used to pass the different code blocks in between each pipeline step, and just like that, I have a pipeline. 


I'm going to quickly disable the pipeline so that it's not running while I'm trying to explain this, but the next thing we're going to do is add the Apica load test step. Here we're going to demonstrate that code pipeline doesn't need to be linear, so what I just created here was a source step, a build step, and a demo step or a deploy step, and that's all linear. If the CI step doesn't pass, then it won't go to the deploy, but what you can also do is add a parallel step, so you can actually run your tests quite a bit faster. Here for the parallel step, I'm going to be adding the Apica load test and going to connect to Apica here, so quickly grab Login Credentials, and Troy, if you could kind of talk through setting up the code pipeline integration.


Yeah, absolutely. Just like what Brian did to connect up to Solano Labs, basically what he's doing is entering his credentials; you do have to have an account with Apica to do this. We're talking about adding load testing, so Solano's going to do your unit testing and that kind of testing; this is now load testing we're talking about, so you do have to have an account. But once you've got an account, you can basically choose the script and the kind of testing settings that you want very quickly from the dropdowns.


We already had a settings preset for this demonstration for CodePipeline, and we already have a script that's going to hit the demo application. You can see when you selected the settings that this test is going to run 100 users for three minutes. One thing I want to point out about load testing as part of a continuous deployment solution: a lot of people, when they think about load tests, are going to think about running thousands or tens of thousands or millions of users against your application, but when you're talking about doing deployments and testing to gate those deployments, smaller numbers are actually still very relevant. If you were to run a 50-to-100-user load test, you can get the results of those load tests over time after each deployment, and you're still getting a lot of relevant information you can use, both for historical tracking of what the performance of your application has done and also to decide whether to continue with that deployment or not, so just pointing that out.


The other thing I wanted to point out before we move on is right down under Thresholds: the other setting you can do in here is you've got the ability to set up thresholds, and that's how we're going to talk about stopping a deployment based on certain metrics. If you go ahead and click on Add Threshold, you have a couple of different kinds of thresholds you can choose. Right now you've got a failed loop percentage, which is basically the percentage of attempts to run the script that have failed, and you also have the average response time, which is going to be your performance metric during the load test. If you wanted to, you can add one or both of these thresholds and actually send back a failed mark to AWS CodePipeline, which is going to tell it to stop that deployment.
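The gating logic Troy is describing is simple to reason about: compare each configured threshold against the measured metric, and report a failure back to the pipeline if any limit is exceeded. Here's a minimal Python sketch of that decision; the metric and threshold names are made up for illustration and are not Apica's actual API:

```python
def should_block_deploy(metrics, thresholds):
    """Return True if any configured threshold is exceeded.

    metrics:    measured results of the load test run
    thresholds: optional gating limits; an empty dict means the
                load test is just a meter and never blocks.
    """
    checks = [
        # (threshold key, metric key): failed loop % and average response time
        ("max_failed_loop_pct", "failed_loop_pct"),
        ("max_avg_response_ms", "avg_response_ms"),
    ]
    return any(
        t_key in thresholds and metrics[m_key] > thresholds[t_key]
        for t_key, m_key in checks
    )
```

With no thresholds configured, the check never blocks the deploy, which matches the meter-only mode where the load test just records history.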


But if you don't want to have that as a gating factor, you can leave the thresholds completely off, so you can go ahead and just remove that. You can leave the thresholds off, and at that point what you're doing is just adding a meter to your deployment pipeline; we're going to capture that information and allow you to see it later. If you don't have any thresholds, then we're not actually going to stop the deployment. That's just based on what you want to do and your comfort level with automating your specific pipeline, so yeah, you can go ahead and continue with those settings.


Okay, the last thing that you need to control or add into the Apica step is your input artifact. CodePipeline uses the concept of artifacts to pass pieces of code along through the pipeline, so MyApp is actually the artifact that was created by the GitHub source, and that is being passed both into Solano CI and Apica. Different steps can also have output artifacts, so in the case of Solano CI, there is an output artifact: we're going to make a modification to the input artifact during the CI step and actually pass that to deploy, so what's actually being deployed isn't 100% the code that was from GitHub; it's the code that Solano CI modified. You can obviously change that. You can make the deployment be exactly what came from GitHub if you would like.


So I added the action and I'm going to save the pipeline changes, and there, in less than five minutes, you have a deployment pipeline with two, three, four different products that you're now chaining together to have a successful and better tested deployment. Just to save you guys from having to [wave 00:15:54] through the pipeline, I have a pipeline that I ran a day ago. We'll use this pipeline to go through and explain exactly how each of the steps ran and dive a little bit deeper into the pipeline.
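Under the hood, a pipeline like the one just clicked together is stored as a JSON definition that you can inspect with the AWS CLI. The sketch below shows the rough shape only; the names, role ARN, bucket, and especially the third-party provider strings are illustrative guesses, not copied from the demo. Note the two build-stage actions sharing the same runOrder, which is how CodePipeline expresses the parallel CI and load test steps:

```json
{
  "pipeline": {
    "name": "MyDemoPipeline",
    "roleArn": "arn:aws:iam::123456789012:role/AWS-CodePipeline-Service",
    "artifactStore": { "type": "S3", "location": "codepipeline-demo-artifacts" },
    "stages": [
      {
        "name": "Source",
        "actions": [{
          "name": "GitHub",
          "actionTypeId": { "category": "Source", "owner": "ThirdParty", "provider": "GitHub", "version": "1" },
          "outputArtifacts": [{ "name": "MyApp" }],
          "runOrder": 1
        }]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "Solano-CI",
            "actionTypeId": { "category": "Build", "owner": "ThirdParty", "provider": "Solano", "version": "1" },
            "inputArtifacts": [{ "name": "MyApp" }],
            "outputArtifacts": [{ "name": "MyAppBuild" }],
            "runOrder": 1
          },
          {
            "name": "Apica-LoadTest",
            "actionTypeId": { "category": "Test", "owner": "ThirdParty", "provider": "Apica", "version": "1" },
            "inputArtifacts": [{ "name": "MyApp" }],
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [{
          "name": "CodeDeploy",
          "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "CodeDeploy", "version": "1" },
          "inputArtifacts": [{ "name": "MyAppBuild" }],
          "configuration": { "ApplicationName": "MyApp", "DeploymentGroupName": "DemoFleet" },
          "runOrder": 1
        }]
      }
    ]
  }
}
```

Notice that the deploy stage takes MyAppBuild, Solano CI's output artifact, rather than the raw source, which is what lets the CI step hand modified code on to the deployment.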


The first step, like I said, was a GitHub step, and with GitHub, you can set it up so that every single time a commit is made to a specific branch, it runs the pipeline. If you have a pull request workflow and you're using the master branch as production, every single time you make a pull request to master, you will get an event that gets sent to the pipeline, and it will build. As you see, if I click on the git hash that it sends, it takes me to the GitHub page and to my repository. Very quickly, what this repository actually is is a very simple static webpage being served by Apache, and as you can see, there's this index.html, and it has this placeholder text within it. Within the Solano step, we're actually going to replace this placeholder text with a link to an S3 resource that we're deploying, so just remember that we've got this placeholder text coming into the pipeline. We'll show you what it looks like once it gets out of the pipeline.


The other bits that I would like to show you about the GitHub repo: we have two different files in here that control both the Solano step and the CodeDeploy step. The first is the solano.yml, a YAML file that allows us to control different things about how your CI build is run. We have the ability to set language versions, to specify which tests to run, and also to run pre- and post-build hooks to set up your CI environment or tear it down at the end. The other script of note is the appspec.yml, which is the file that controls AWS CodeDeploy, and this file essentially gives you the ability to write scripts that run at different points in the deployment. So there is the ApplicationStart hook, which basically tells it how to start your new application; in this case, because it's an Apache web server, all it does is start up Apache. ApplicationStop similarly stops Apache, but there are multiple hooks that you can get into to basically script your deployment.
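As a rough illustration, an appspec.yml for a setup like this might look something like the following. The file paths, script names, and timeouts here are hypothetical; only the ApplicationStop and ApplicationStart hooks reflect what the demo actually describes:

```yaml
version: 0.0
os: linux
files:
  # Copy the static site into Apache's document root
  - source: /index.html
    destination: /var/www/html
hooks:
  ApplicationStop:
    - location: scripts/stop_apache.sh    # e.g. stops the httpd service
      timeout: 60
  ApplicationStart:
    - location: scripts/start_apache.sh   # e.g. starts the httpd service
      timeout: 60
```

Other hooks such as BeforeInstall, AfterInstall, and ValidateService are available in the same format if you need to install packages or verify the deployment.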


Going back, we're going to jump into the Solano CI step. After the GitHub step pulled the code, it packaged it into the input artifact MyApp and passed it both to Solano CI and Apica at the same time. These two builds will be running at the exact same time so that you can get to your deployment faster.


Just to point for anybody, if it wasn't clear, we chose to put them in parallel, and of course, you can choose to do them in serial if you had wanted to. It's not a requirement that they're in parallel, but it can save time if the tests are not mutually ... if they don't overlap with each other.


Yeah, absolutely, so you can run them in parallel. You can run things in serial, and you can actually make dependency graphs with AWS CodePipeline, so very robust tool if you ... can basically take any use case that you need for your deployment.


Jumping into the CI stuff, we're going to click on the details page, and it will bring us to the Solano CI web interface. As you can see, this is running an AWS CodePipeline build, and if I click here it will actually bring us back to the pipeline. What's more interesting here are the test results. As you noticed, we had that configuration file; that is what's displayed right here, and we changed the Ruby version. As you can see, we used Ruby 2.1.5 and Bundler version 1.10.9, and all of those language versions are selected at the bottom. We went through and ran a bunch of different unit tests, so obviously you want to be running your CI builds as you're developing, but as a final gate you want to make sure that all of your tests are passing before you go and do a deployment. Here we're running multiple unit tests just to make sure that all of the code is in working order, and we actually split out each of your tests individually with the printed results.


The other interesting thing that we're doing here, as I mentioned earlier: we were changing that placeholder text as part of the deployment. If we go to the post-worker hook ... Nope, not the post-worker hook, the post-build hook. One of the things that we were doing is actually uploading a file to S3, so as you can see I uploaded this file to S3, found that location, and actually put it into the index.html file, so we replaced the placeholder with this URL. This is just to demonstrate that these steps can actually modify your code as it goes through the pipeline. When we go and look at the actual deployed application, you'll be able to see that there was actually a modification made.
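That placeholder swap in the post-build hook boils down to a find-and-replace on the HTML before the artifact is handed to the deploy step. A minimal Python sketch of the idea follows; the file name, placeholder token, and S3 URL are hypothetical stand-ins, not the demo's actual values:

```python
from pathlib import Path

def inject_asset_url(index_path, asset_url, placeholder="_placeholder_"):
    """Rewrite the HTML file in place, pointing the placeholder
    link at the freshly uploaded S3 asset."""
    page = Path(index_path)
    page.write_text(page.read_text().replace(placeholder, asset_url))
```

A post-build hook would upload the asset first, capture its URL, and then call something like this before the output artifact is packaged.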


The other thing to note with Solano CI, basically the thing that brings value to most of our customers, is the fact that we have unlimited parallelism. This build, as you can see, ran about 13 different tests. Each of these was parallelized intelligently; it looks like it ran on two separate workers because this is a pretty simple build, but the number of workers that you can use and the amount of parallelism is not limited with Solano CI and our architecture, so you can get extremely fast test results using Solano CI.


A lot of our customers come to us with hour-and-a-half, two-hour-long test times, and what we like to strive for is getting their tests to pass within coffee time: your deployment shouldn't take any longer than getting up, boiling some water, making some coffee, and coming back to your desk. There's too much context switching happening if you have two-hour-long builds. If you have builds that take any longer than 15 minutes, a lot of times you're not going to run a build every single time you make a commit because it's just too expensive. Here you can see that this test took about two minutes total with all the different steps, and this is a ridiculously fast build, so the nice thing is that our CodePipeline is going to keep on moving. It's not going to get bogged down with a two-hour-long test.


While that CI test was running, we were actually running an Apica load test, and Troy, I think I'm going to pass the presentation over to you so you can kind of dive into what that load test looked like and show us the Apica product. 


All right, sounds good. We're on a different screen now, but if you had clicked the Details section on the Apica load test, it would basically take you over here to your Apica account. The first thing you're going to look at is our continuous integration screen here, which is set up to show you the metrics that come out of AWS CodePipeline specifically. In this case, I've already highlighted the AWS CodePipeline deploy step that we had created. What you get then is a history, so like I said before, if you choose not to gate your deployments off the results of the load test, you can still come in here and look at historical performance. In this case, there were two points on this date because it was run twice, so you can see that there was a performance difference on the two different deploys, but you get a historical perspective of the performance on each deploy, and it's going to keep adding points as you go through deployments, so you can see what the effect of different deployments was.


This is just looking at the session time, but you have the ability to come in here and look at a bunch of different metrics. You get your transaction rate, your response time, your network throughput, your stability, which is going to be your [failure 00:25:20] rate, so you can [inaudible 00:25:21], and your completed loops per minute and your page views per minute. Some of these are actually just mathematical equivalents shown in a different way, but it allows you to keep the data relative to how you're used to consuming it, and you can really look at what's most relevant to you when you're running these load tests. Like I said, the performance-based metric and the failure-based metrics are exposed back to CodePipeline, so you can use that to actually stop the deploy if it exceeds the thresholds that you're looking for.


In addition to that, you can go into any one of these runs ... uh-oh. I'm showing a connection lost on my screen. Am I still hearing anything?


I can hear you. 


Okay, good. Sorry, GoToMeeting was trying to tell me I had lost connection there, but what I was saying is that when you come into this and then look at the results, you can click on any one of the results and actually come down and see further details broken down into ... If this was a multi-stage type test, you can see the individual page times for each of the steps in your test, so you can get a little bit more detail. Then you can continue to go down and look at the details on these tests.


You can also, of course, at any time ... Now we're looking at the specific details of that particular run, so you can flip around in here and look at different ... You've got your time series graphs showing you that particular run and what it did over the time. This is a three-minute test, but that's completely configurable. You can do as short or as long a run as you like with as many users as you'd like. Again, this was a 100-user test running for three minutes, so you can see that timeline over that time series. Then you can get down as detailed as you want.


A lot of the times when you're doing these kind of ... Sorry, I'm losing myself here ... For the AWS CodePipeline, you're really just going to be interested in the historical look at these graphs. Again, that's very easy to get to by just hitting the details there. I will pass it back to you, Brian, to go to the next slide.


Excellent, so after both the CI and the load tests ran, the next step that we're going to be dropping into is the actual deployment. For this deployment, we used AWS CodeDeploy. There are other deployment providers in CodePipeline. I believe you can use ECS, but we're going to use CodeDeploy. I showed you that appspec.yml file that told CodeDeploy exactly how to deploy the code, so what it's going to do, it's going to stop Apache and start Apache. It's pretty easy, but if you'd like, you can go ahead and configure multiple different steps to potentially move files around, to do different checks, essentially whatever you'd like.


So we're going to go into the deployment that happened. I set up more or less a demo fleet, and I have three total instances in this demo fleet. You can set up multiple different ways of deploying. This is the CodeDeploy default deployment, which is one instance at a time: it will deploy to a single instance, make sure that worked, then deploy to the next instance, make sure that worked, and so on. There are other deployment configurations where you can deploy to all the instances at once, or half of them at a time. It's very configurable, to be able to essentially do a deployment, make sure it works, and then go and do more deployments as you're comfortable with it.


There is also the ability to enable rollbacks, so if your deployment does not succeed, meaning it doesn't hit the threshold of the total number of instances that need to succeed, you can actually do an automatic rollback. As you can see here, we have a list of all the different revisions that we've built, and this pipeline has been around for a while, so we actually have a history of all the different revisions that have gone through the pipeline. You can go ahead and basically redeploy an old version of the code really quickly.


If we go into the actual deployment itself that ran here, as you can see, here are the three instances. You can see the deployment config, which was one at a time, and then the minimum healthy hosts, which means that two out of the three instances need to have a successful deployment in order for the entire deployment to be successful. Then that bit that I showed you earlier that you can flip on and off, you can actually trigger the redeployment of the previous code if minimum healthy hosts isn't satisfied.
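The one-instance-at-a-time strategy with a minimum-healthy-hosts gate can be sketched in a few lines of Python. This is just an illustration of the decision logic, not CodeDeploy's actual implementation:

```python
def rolling_deploy(instances, deploy_one, min_healthy):
    """Deploy to instances one at a time, stopping early once the
    minimum-healthy-hosts requirement can no longer be met.

    deploy_one(instance) -> bool reports whether that deploy succeeded.
    """
    succeeded, failed = [], []
    for i, instance in enumerate(instances):
        if deploy_one(instance):
            succeeded.append(instance)
        else:
            failed.append(instance)
            remaining = len(instances) - i - 1
            # Even if every remaining instance succeeds, can we still
            # reach min_healthy? If not, stop (and roll back, if enabled).
            if len(succeeded) + remaining < min_healthy:
                return {"status": "Failed", "succeeded": succeeded, "failed": failed}
    status = "Succeeded" if len(succeeded) >= min_healthy else "Failed"
    return {"status": status, "succeeded": succeeded, "failed": failed}
```

With three instances and a minimum of two healthy hosts, as in the demo, one failed instance still lets the deployment succeed, while a second failure halts it.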


Within each of these instances, you can actually go and see how long each of the deployment steps took. The only two that I defined were the application stop and application start, so stop Apache, start Apache, but you also have the ability to hook into all these different hooks, potentially installing packages, validation, to make sure that your deployment actually worked the way that you wanted it to. Like Troy and I said earlier, you can run things in parallel. You could also add a load test after your deployment, right? You could do a load test after the deployment happens to make sure that everything's going as expected. You could run CI tests after the deployment. It's a very robust tool.


Let's actually go to one of the instances where it deployed, go to the public IP, and see the deployed website. This is the index.html that I was showing you earlier, and the one interesting bit is you saw on GitHub how this link was actually a placeholder, underscore, placeholder, underscore. In the CI step we actually changed that, so if I go and inspect the source, you can see that the A class caption now has an href, which is an asset that we deployed using the Solano CI step. That kind of demonstrates the power of having a pipeline where you can pass your code between the steps and actually make modifications throughout the pipeline. There are lots of things that you can add in AWS CodePipeline, including Lambda triggers and multiple different integrations, so really the sky's the limit for what you can do with these pipelines.


It's amazing. They've made it simple enough that you can set it up in five minutes and really not have to do anything complicated, but they gave you enough power to add as many steps and do as much parallelism as you need and to alter your code all the way through it, and the power to really do anything you need to from the beginning to the end, while not making it so complicated that you can't get it set up very quickly.


Yeah, exactly. Now that we've gone through the demo, let's talk a little bit more about continuous deployment. Hooray, we've achieved it, right? In less than five minutes of setup, we have a continuous deployment pipeline. Again, why do you want to do continuous deployment? You want to be able to deploy fast. You want to be able to iterate faster, and as a developer, I love when my code goes out. Instead of just doing a daily deployment, if you could do hourly deployments, or even a deployment every single time you merged to master, and you could do a safe deployment and actually trust your deployment pipeline, that's really awesome. It gives your developers the feeling of a lot of power: every single time they make a change, it's going to go out to production, and they don't have to wait a week. And like Troy mentioned earlier, there's the ability to remove human error from the equation. If you can automate all of these different steps that you do manually, you're going to have a much more robust system.




And then also clarity, right? You saw the CodePipeline UI. Everybody on your team can have access to that UI. You can put it up on a dashboard, and in a single view a project manager can see, "Okay, this code has been pushed to GitHub. Why isn't it on production right now?" They can see, "Oh, wait, it didn't pass the CI step," or "Wait, the load test shows something messed up the way the web server's working, and now response times are up 50%." It's really easy for people to see what code is on production and potentially why code isn't on production, as opposed to having to come to your desk and ask you why the deployment hasn't gone out, and then you have to explain all of the various reasons it couldn't have gone out.


It's also very great. It depends on the complexity of the pipeline you put together, but if you end up having very long pipeline chains, you can also graphically see, if you have multiple releases going through the pipeline at the same time, where each one is, where it's stalled, and a lot of information about what's going on. CodePipeline does support multiple releases coming through the pipe at the same time, so it's very interesting when you start having these vast releases, seeing where things are getting hung up, and having the power to really track that stuff in one UI.


Yeah, and so the key to implementing these pipelines ... you could do this yourself, right? I'm sure you could write a bunch of batch scripts that all pipe together and try to create this pipeline, but really what you need is a partner, somebody like AWS, to build the tool that everybody is going to use, that is going to be robust, and that you can trust. The nice thing is that Amazon has worked with partners like Solano and Apica to bring in tons and tons of different integrations, so whatever tools you use or want to use, you can probably drop them in really quickly.


The other piece of how to make that pipeline work, and how to trust the pipeline, is obviously having those integrations that you trust: being able to do your continuous integration to make sure that your code is ready to release, to do that load testing to make sure that you have consistent performance and that you're actually testing real-world scenarios as opposed to just the developer's use case and the developer's happy flow through the product, and then obviously automated deployments, so that once all of these tests pass, you can deploy to your production servers, and everything is automated and clean.




All right, well, here are some of the resources. I think this is the final slide, so if there are questions, we'd like to answer them now. We have the GitHub repo that was used up here. We also have the Solano Labs docs around AWS CodePipeline and the integration, the Apica docs and product page about AWS CodePipeline, and then the Amazon product page about CodePipeline. Obviously, we'd love it if you guys would like to use Solano Labs or Apica; there are links to the free trials, and hit us up on Twitter. If you have any more questions that you don't want answered right now, or that may be a little bit more complicated, please send them to the emails below.