Businesses are beginning to awaken to the benefits of cloud computing, and it shows: Gartner predicts the worldwide public cloud services market will grow 18% in 2017. According to IDC, spending on cloud computing is expected to grow more than six times faster than overall IT spending between 2015 and 2017.

The use of cloud computing gives businesses many benefits, from greater agility to widespread cost savings. But this is just the beginning. The explosion in cloud usage is also having a particular effect on enterprise architecture: infrastructure as code (IAC) and software development practices are converging through the use of CI/CD and Git.

While IAC is common practice, immutable IAC (IIAC) is less well known and more contentious. To combine software development best practices with the benefits of CI/CD, these reservations must be addressed.

It’s not just about delivering infrastructure

Given the speed at which companies are deploying and introducing new solutions, poorly performing or badly designed cloud infrastructure is the last thing a business wants to come up against.

The benefits of Infrastructure as Code and CI/CD have been well documented and discussed. Treating IT infrastructure as software helps organisations make changes to it rapidly and easily, while keeping those changes safe and reliable. Meanwhile, CI/CD builds and tests every change automatically, so organisations can rely on their infrastructure and catch problems before they reach production.

But many companies find themselves overwhelmed by jargon. Behind all the acronyms and best practices, it is easy to forget that infrastructure delivery today is really just a software delivery problem.

The immutable infrastructure contention

IIAC is the practice whereby every deployment replaces resources rather than updating them in place. This is often referred to as the principle of ‘cattle, not pets’ and can be achieved via pre-baked machine images or Docker images.

IIAC can often be contentious: it is difficult (and usually impractical) to have a ‘pure’ immutable infrastructure in which no configuration happens at runtime. In most circumstances some configuration must be done at runtime, which means the system is never entirely immutable. Configuration management systems such as Chef are therefore still needed, prompting the counterargument that the immutable step can be skipped altogether.

Immutable infrastructure also works best when the applications on top are stateless: application state should be stored outside the containers/instances, since upgrades replace running instances.

So why do we want it?

First of all, immutable infrastructure eliminates most configuration drift: each and every deployment is a brand-new machine image or Docker container. Secondly, the images themselves become tested artefacts and natural rollback points. Updating in place can often have unintended consequences!

Finally, a properly configured image is much faster to start up, which speeds up auto-scaling and recovery. Recovering from a misbehaving virtual machine or Docker container becomes a simple matter of replacing it.

There are other, indirect benefits, such as naturally promoting blue/green deployment. Immutability also encourages centralised logging, since each deployment replaces running instances and destroys any logs stored on them.

Why is Git different?

There are many source control systems out there, for example CVS, TFS and SVN. Ignoring Git’s distributed nature for a moment, the main features that set Git apart are the pull request and protected branch mechanisms offered by Git hosting platforms such as GitHub and Bitbucket. A pull request groups the changes for a completed feature or task, and each pull request can be assigned to reviewers for peer review.

The protected branch function enhances this further by only allowing pull requests that have passed peer review to be merged into the mainline branches. At a minimum, pull requests add ‘another pair of eyes’ before code is merged. There are further benefits, however: a set of review guidelines means the resulting code is more uniform, and thus more maintainable; reviewers gain implicit knowledge of what other people are doing, promoting knowledge sharing; and responsibility for review, update and merging is clearer.

Why Chef?

At KCOM, we see Chef as the stronger choice of configuration management software (CMS).

Idempotence of our automation scripts and tasks is a key requirement; we find Chef makes it easy to write recipes that can be run together with overlapping interests, or run multiple times, with little chance of undesirable consequences.
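As a sketch of what that idempotence looks like in practice, consider a minimal Chef recipe (the package, template and service names here are illustrative, not from a real cookbook). Each resource declares a desired state, so re-running the recipe is a no-op once that state has been reached:

    # Each resource declares desired state; Chef only acts when the
    # current state differs, so repeated runs are safe.
    package 'nginx' do
      action :install                      # no-op if already installed
    end

    template '/etc/nginx/nginx.conf' do
      source 'nginx.conf.erb'              # rendered from the cookbook
      mode   '0644'
      notifies :reload, 'service[nginx]'   # fires only when content changes
    end

    service 'nginx' do
      action [:enable, :start]             # no-op if already enabled/running
    end

Running chef-client twice produces the same result as running it once, which is exactly the property an automated pipeline relies on.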

We prefer writing our recipes as procedural scripts in native Ruby, as opposed to working in a dedicated declarative language as in tools like Puppet; this is more flexible, more developer-friendly and integrates well with test-driven development tools.
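For instance, recipes can be unit-tested with ChefSpec before touching any real infrastructure. A minimal sketch, assuming a hypothetical ‘webserver’ cookbook whose default recipe installs and starts nginx:

    # spec/unit/recipes/default_spec.rb -- run with `rspec`
    require 'chefspec'

    describe 'webserver::default' do
      # Converge the recipe in memory; no real node is changed
      let(:chef_run) { ChefSpec::SoloRunner.new.converge(described_recipe) }

      it 'installs nginx' do
        expect(chef_run).to install_package('nginx')
      end

      it 'enables and starts the nginx service' do
        expect(chef_run).to enable_service('nginx')
        expect(chef_run).to start_service('nginx')
      end
    end

Because ChefSpec converges the recipe in memory, these tests run in seconds and slot naturally into a CI stage.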

Chef’s dependency management with Berkshelf is far simpler to work with than the verbose and often complicated approaches in other CMS tools, or the difficulty of managing dependencies across a series of highly coupled shell scripts. It also provides great support for build versioning, which is vital when designing an effective CI/CD pipeline. At KCOM, we deploy AWS OpsWorks for Chef Automate, which vastly simplifies the deployment and management required.
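A minimal Berksfile sketch (the cookbook names and version constraints are illustrative); dependencies declared in the cookbook’s metadata.rb are resolved from the Chef Supermarket:

    # Berksfile
    source 'https://supermarket.chef.io'

    metadata                        # resolve dependencies from metadata.rb

    cookbook 'nginx',     '~> 2.7'  # pessimistic version pinning
    cookbook 'logrotate', '~> 2.2'

The pessimistic (~>) constraints pin each dependency to a compatible version range, which keeps builds reproducible.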

Charles Richards, EMEA Business Partnerships, Chef, commented: “Chef is delighted to work with KCOM to bring automation and embedded compliance to their broad customer base. By deepening their skills in Chef, we believe KCOM is ideally placed to become a regional leader in digital transformation services. It is great that the KCOM team recognise the value of proving that the state of IT meets both industry best practices and regulations. We look forward to a long and successful partnership.”

Incorporating CI/CD with IIAC and Git

As an AWS Premier Consulting Partner, KCOM incorporates cost-effective AWS managed services to minimise effort and maximise efficiency.

As part of IIAC, the creation of images needs to be scripted so that it can run under a CI/CD pipeline. We’ve used AWS CodePipeline with either AWS CodeBuild or Jenkins as the build provider, and HashiCorp Packer with AWS OpsWorks for Chef Automate or Chef Solo to provision the images that run on AWS.
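As a rough illustration of the bake step, a cut-down Packer template that builds an AMI and provisions it with Chef Solo might look like the following (the region, AMI filter and run list are placeholders, not our production configuration):

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "eu-west-1",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "source_ami_filter": {
          "filters": { "name": "amzn-ami-hvm-*-x86_64-gp2" },
          "owners": ["amazon"],
          "most_recent": true
        },
        "ami_name": "webserver-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "chef-solo",
        "cookbook_paths": ["cookbooks"],
        "run_list": ["recipe[webserver::default]"]
      }]
    }

Running packer build against this template produces a new, timestamped AMI that the pipeline can test and promote.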

We’ve also automated the triggering of AMI creation using AWS Lambda and Amazon S3. When a target application is updated in the associated S3 bucket, a Lambda function is triggered so that new releases are automatically built, tested and ready to release. S3 versioning is enabled to keep previous releases available, with automated clean-up via lifecycle policies.
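A minimal sketch of such a trigger, assuming the Ruby runtime for AWS Lambda and a hypothetical pipeline named ‘ami-build-pipeline’ (both are illustrative, not our production code):

    # Invoked by an S3 event whenever a new application artefact lands
    # in the bucket; kicks off the image-build pipeline.
    require 'aws-sdk-codepipeline'

    def handler(event:, context:)
      event['Records'].each do |record|
        key = record['s3']['object']['key']   # the updated artefact
        puts "New artefact #{key} uploaded; starting AMI build"
      end

      client = Aws::CodePipeline::Client.new
      client.start_pipeline_execution(name: 'ami-build-pipeline')
    end

The function does nothing more than start the pipeline; the build, test and AMI creation all happen downstream in CodePipeline.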

Finally, it is important to consider runtime configuration: Chef also provides the capability to patch the running state of an environment in an emergency.

By promoting reusability and a cloud-agnostic approach in our use of Chef and Packer, we achieve maximum value from our development efforts.

Combining all of the above, infrastructure delivery now looks remarkably similar to software delivery. This allows enterprises to draw on and apply established best practices, with all the benefits of CI/CD helping us deliver solutions for organisations such as the Rail Delivery Group and the Commonwealth War Graves Commission.

Why KCOM?

KCOM brings together years of experience in application development and cloud delivery, ensuring our DevOps team can deploy AWS infrastructure quickly and seamlessly.

Our skills and capabilities cover the full software development lifecycle, from version control to automated testing. This includes continuous integration and deployment and container management, which form part of the solutions that enable our customers to respond to constantly changing business requirements and reduce their time to market.

KCOM has a dedicated team of AWS professionals with over 60 associate and professional certifications, including a specialist AWS DevOps team certified as ‘AWS DevOps Engineer – Professional’, so our customers can be sure we have the technical expertise to help them overcome their challenges.


Cloud, DevOps, Infrastructure as Code