DevOps is something of a buzzword of the moment. But what is DevOps? If your development partner were to tell you “we have a DevOps process”, what would you take that to mean? Here are the steps of a DevOps process that may have been used in the past but needs to be re-evaluated.
A Straw Man DevOps Process
- The support team tells the developer of a bug.
- The developer fixes the bug.
- The developer builds the code on their machine.
- The developer then zips up the build and emails it to the Ops team.
- The Ops team log in to the production server and unzip the build into the application directory.
- The Ops team tells the support team the deployment is done.
- The support team inform the client.
However, as with any set of processes, the wheat must be separated from the chaff! DevOps can be done in many terrible ways as well as many great ways. At Storm ID, as a digital agency working with dozens of clients, we must continuously evaluate and refine all of our processes, including DevOps, to support an efficient working relationship. Let’s examine the fundamentals of our own current DevOps process at Storm ID and how it differs from the straw man process.
Evaluating with a definition
The definition of DevOps that we most closely associate with is that coined by Damian Brady, formerly of Octopus Deploy, now with Microsoft. He describes DevOps as:
Working together to get the right stuff in the hands of users faster
This definition can be used to revisit the above DevOps process to produce the following interpretation:
- Support, Developer and Ops are “working together”
- The developer fixed the bug; surely this is the “right stuff”
- It is deployed quickly and is in the “hands of the users”
…but is this really the case?
- Are the teams working together or are they just throwing “stuff” over a wall to the next team?
- The developer fixed the bug, but:
  - Did they test it?
  - How did they test it?
  - Has it affected other parts of the application?
- The developer bundles up a “thing” and throws it over the wall to Ops, where the Ops team are expected to deploy it to production.
- The Ops team “deploys” the “thing” following some sort of manual instructions provided at some point by the developer and throws a note back over the wall to the support team saying “deployment done”.
- The support team throws that note over another wall onto the client.
Where could this process go wrong? Everywhere! Some potential problems that could be faced include:
- The developers who have direct knowledge of the code are unavailable.
- The build only works on some developers’ machines.
- The bug alters the deployment process and the deployment documentation used by Ops isn’t updated.
- The members of the Ops team usually responsible for deployment of the code are unavailable.
- The bug fix breaks something else.
So, in the end, we have a valid “DevOps process” that is fragile, ineffective, inefficient and prone to human error. Shall we re-evaluate that process?
Let’s come up with an alternative process using Damian Brady’s definition.
Working together
Working together doesn’t just mean teams and individuals; it means both humans and computers. Your DevOps process should allow both humans and computers to work together.
At Storm we use a variety of tools to support working together on DevOps:
- Project management (Assembla)
Assembla provides a project management space for each client and project, allowing the management of support tickets, service level agreements (SLAs) and communications between us and the client.
- Code Source Control (Git)
All code written by our developers is stored, with a complete version history, in a version control system (VCS). This allows every change, update and fix to be tracked, and lets developers communicate across time through commit messages. Using Git, developers can browse the version history of a file or set of files to investigate a bug or the approach taken when implementing a feature.
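As a small illustration (the repository, file names and commit messages here are hypothetical), this is the kind of per-file history browsing Git makes possible:

```shell
# Illustrative only: build a tiny repository, then trace one file's history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # hypothetical identity for the demo
git config user.name "Dev"

echo "v1" > app.cs
git add app.cs
git commit -qm "Add app"

echo "v2" > app.cs
git commit -qam "Fix bug in app"

# Browse the version history of a single file:
git log --oneline -- app.cs
```

Each line of that log is a recorded decision a developer can revisit long after the original author has moved on.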
- Orchestrated builds
We currently use a C#-based build orchestration tool called Cake. Using Cake, we are able to codify a build pipeline that allows individual build steps to work together to produce a set of complete, immutable and deployable build artefacts representing the application at a specific versioned point in time. The beauty of a tool like Cake is that we can keep this build pipeline in version control alongside the application code. The pipeline can be reliably executed both locally on a developer’s machine and in any continuous integration build environment, consistently generating the same output.
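Our real pipeline is a Cake script, but the underlying idea can be sketched in a few lines of shell (the version number, step names and file names below are illustrative, not our actual build): discrete, ordered steps that any machine can run to produce the same versioned artefact.

```shell
# Sketch of a codified pipeline: clean -> build -> package, yielding an
# immutable, versioned artefact. Names and version are hypothetical.
set -e
version="1.0.42"                      # stamped once, carried through the artefact name
work=$(mktemp -d)

clean()   { rm -rf "$work/out"; mkdir -p "$work/out"; }
build()   { echo "compiled at version $version" > "$work/out/app.bin"; }
package() { tar -czf "$work/app-$version.tar.gz" -C "$work" out; }

# The same ordered steps run identically on a developer's machine or a CI server:
clean && build && package
ls "$work"/app-*.tar.gz
```

Because the steps live in a script under version control, “it builds on my machine” and “it builds on the CI server” become the same statement.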
- ARM Templates
The Ops team handle the provisioning of Azure infrastructure by writing declarative, text-based Azure Resource Manager (ARM) templates. These templates describe how the infrastructure is set up in Azure. They are kept under source control in Git and deployed using PowerShell scripts. The templates are tailored to our infrastructure requirements, including:
- DNS entries in our external DNS zone to allow for the addition of vanity URLs for our QA applications.
- Blob storage containers to be used by the application.
- Backup scheduling of the application itself.
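ARM templates themselves are plain JSON. A minimal sketch of their shape (an illustrative fragment, not one of our production templates; the resource type and API version are assumptions for the example):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

A template like this might be deployed from a PowerShell script with the `New-AzResourceGroupDeployment` cmdlet, so the infrastructure definition is reviewed, versioned and rolled out exactly like application code.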
Getting the right stuff
What is the right stuff? It isn’t just the code that meets a client requirement. The right stuff is the code and functionality that satisfies the user need. Features that cost time and money to maintain but are not used are a drain on resources (both human and computational) and should definitely not be considered the “right stuff”. In order to get the right stuff in the hands of users you need telemetry to measure the usage and effectiveness of features and be willing to remove or re-engineer features that fail to engage users.
Delivering to the users
The application isn’t just used by the intended audience or end-users. During the development of an application, all project members are users of the application. After launch, the support team and client services team are also users of the application within its quality assurance (QA) environment. This means that getting it in the hands of users faster requires a process which allows the Storm ID project team and client services team to see the latest iteration of the application as quickly as possible.
By automating as much of the build and deployment pipeline as we are able, we remove the human element from the equation where possible, leaving people with only the simplest of decisions to make at crucial times – such as approving a deployment to production. At other times, the automated process begins with a developer committing new code for the application and ends with that code deployed to a QA environment for all stakeholders to see, without ANY human intervention being required.
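That flow can be sketched as follows (a hypothetical outline, not our actual deployment scripts): every step is automatic except the single yes/no decision before production.

```shell
# Hypothetical sketch of the automated flow: a commit triggers build and QA
# deployment with no human step; production waits behind an approval gate.
set -e
build()  { artefact="app-1.0.42.tar.gz"; echo "built $artefact"; }
deploy() { echo "deployed $artefact to $1"; }

approve_production() {           # in reality, a person clicks "approve"
  [ "$APPROVED" = "yes" ]
}

build
deploy qa                        # automatic: every commit reaches QA
APPROVED=yes                     # simulate the one remaining human decision
approve_production && deploy production
```

The point of the sketch is the shape of the process: humans are kept for judgement calls, not for copying files between machines.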
An evolutionary process
At Storm we pride ourselves on the expertise of the technically minded individuals across all our teams: creative development, technical development and testing. We make sure we use these great minds to build and implement a development pipeline that can be applied to a wide variety of projects. Although we aim for consistency in the pipeline, as it is used on a daily basis we are continuously re-evaluating it, making small incremental improvements both to the pipeline itself and to the DevOps processes it supports.