IT Pro Panel: Defining DevOps
It’s a goal for many organisations, but what makes a successful DevOps programme?
Much effort has been put into trying to divine the secrets of what makes tech titans like Netflix, Amazon and Spotify so successful. There are many factors in their meteoric rise, but one common thread is the fact that they all use DevOps methodologies to build their software and services. They’re not alone; DevOps has become a pervasive trend within the software development world, and a huge number of organisations across a range of sectors have adopted the practice.
DevOps can mean different things to different people, however, and one of the most persistent debates within the community is how to actually define a DevOps structure; what essential elements have to be in place for it to qualify? In this month’s IT Pro Panel discussion, we asked some of our expert panellists who have undertaken this journey for themselves what makes DevOps so compelling, and how they’ve navigated some of its more challenging aspects.
DevOps has many advantages, but the most commonly cited benefit is development velocity. IT leaders want to be able to release more code at shorter intervals, speeding up time to value for the business and addressing changes in a more timely manner. The practice is built on the concept of continuous integration and continuous delivery (CI/CD), and it’s this focus on pushing code as frequently as possible that drives much of the thinking in its implementation.
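To make the idea concrete, a CI/CD pipeline is essentially an ordered series of automated stages that every change must pass through before it reaches production; a failure at any stage halts the release. The sketch below is purely illustrative (the stage names are assumptions, not any panellist's actual pipeline):

```shell
#!/bin/sh
# Minimal sketch of a CI/CD pipeline: every change runs through the same
# ordered stages, and a failure at any stage stops the release.
set -e  # abort the pipeline as soon as any stage fails

run_stage() {
    # In a real pipeline this would invoke the build tool, test runner
    # or deployment tooling for the named stage.
    echo "stage: $1"
}

for stage in checkout unit-tests build deploy-to-staging deploy-to-production; do
    run_stage "$stage"
done

RESULT="pipeline-complete"
echo "$RESULT"
```

The value of the pattern is that the same gated sequence runs on every commit, which is what makes frequent, low-risk releases feasible in the first place.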
“Three years ago, we did around one release every two weeks, and it took us about two to three days to get to production,” explains Yolt CTO Roderick Simons. “It was too painful, took too much time and we had to improve.”
“As a first step,” Simons says, “we increased the frequency of delivery from once every two weeks to once every week. If something hurts, you have to do it more often.”
This allowed the team to get faster feedback on changes, and from there, they mapped out their development process to identify major bottlenecks. Now, he says, Yolt is able to do daily code releases. To support this, Yolt uses a highly automated CI/CD pipeline, which is something that SmartDebit CIO Gavin Scruby and TempCover CTO Marc Pell have also been working on.
Scruby, Simons and Pell are all using automation tools to speed up their code releases, but some of our other panellists say they’re concentrating on alternative elements beyond speed of delivery. For instance, design house Studio Graphene has a number of internal checks and controls to ensure that code is stable before being pushed out to clients, which founder and CEO Ritam Gandhi notes must be balanced with agility and speed.
For Craig York, CTO of Milton Keynes University Hospitals Foundation Trust, making sure that all new code is stable and fit for purpose is also paramount. Even outside of the ongoing COVID crisis, healthcare IT systems operate with a razor-thin margin for error, and York’s team focuses on maintaining the availability of systems as much as possible.
The hospital’s IT estate includes systems from many external suppliers, so every time one of them releases an update, the MKUH DevOps teams are responsible for the customisation and integration work necessary to ensure it doesn’t impact the delivery of clinical care. This often involves working as “testing and development partners”, in York’s words, as well as liaising with non-IT teams across the business.
“User engagement from our organisation has been critical for us over the past few years,” he explains, “with a clinical consultant and nursing staff now embedded within IT. Each week, our change/release board includes these staff and managers from across development groups and operational IT to ensure that, as a group, we’re ready and confident of all releases.”
The DevOps journey
It’s a common mantra within the community that establishing a DevOps culture is an ongoing process, and for most of our panellists, the finish line isn’t yet in sight. Newcastle Building Society, for example, is still in the relatively early stages of establishing its DevOps capabilities. The company has set up a pilot programme, putting its mobile app product in the hands of a ‘model team’ to act as a testbed for DevOps practices and tooling, which CIO Manila McLean hopes to roll out across the rest of the development organisation.
“This team is fully DevOps,” McLean says; “multi-skilled, and accountable for product ownership, user experience, development, testing, release, et cetera. The team is accountable for the holistic product including the operational performance and future roadmap, so they need to balance between business as usual and strategic developments.”
She plans to expand this strategy over the course of this year, but notes it’s not simply a case of spinning up more DevOps teams. In order to replicate this model team’s structure across other products, buy-in will need to be obtained from the rest of the business first. While McLean is positive about the progress made on this front so far, she admits that it’s going to take time.
“It's really important to take the business on the journey, so we can't rush this.”
This journey doesn’t stop once product teams are successfully in place, however, as Pell can attest. While he’s already established CI/CD pipelines and automated elements of TempCover’s infrastructure, he says he’s closer to the middle of the process than the end, and that there’s still more to do to automate further aspects of its infrastructure to allow for more frequent releases.
“Similar to Marc's response, we are somewhere in the middle,” Gandhi adds. “It's a mindset and skill that we really started to focus on a few years ago and one of the nice things about constantly working on new blank canvas projects is that we have been able to adapt and try different approaches, before adopting them holistically across our business.”
For those panellists who have successfully productised their engineering organisations, the focus now is on maximising the efficiency of their internal tools and pipelines, tuning processes and deploying further automation where possible. As Scruby notes, improvements in one area of a DevOps structure often naturally lead into another.
“Once you start with automating testing a bit to improve quality, it links within similar tools and skills to more automated deployment, then more automated integration, and then a second cycle with more efficient tooling once people have become familiar.”
This feeds into what is arguably one of the most important aspects of DevOps: an organisation’s choice of tools. The library of tools that DevOps professionals have access to is vast, much of it open source, and the variety of options available allows teams to customise their toolchains based on the specific details of their requirements.
The most popular software tends to be built to assist with automation and testing, largely because it’s so fundamental to the DevOps process. There was a wide spread of favourites among our panellists in this regard, including JUnit, NUnit, Cucumber, Jenkins and Selenium, demonstrating the sheer breadth of choices in this category.
Almost all of our panellists, however, were united in their use of cloud infrastructure for DevOps. In addition to a pair of on-site computer rooms, York makes heavy use of Microsoft Azure and its PaaS capabilities, which his devs tell him make it easier to automate their processes. McLean, meanwhile, uses the YAML pipelines offered by Azure DevOps, as well as Terraform for automated infrastructure builds and Packer for creating machine images.
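The Packer-plus-Terraform combination McLean describes typically chains two steps: bake a machine image first, then build the infrastructure that uses it from code. A hedged sketch of that flow (the file names here are assumptions for illustration; the commands themselves are the standard HashiCorp CLIs):

```shell
# Bake a machine image from a Packer template (file name is illustrative)
packer build base-image.pkr.hcl

# Then build the infrastructure as code with Terraform
terraform init              # install providers, set up state
terraform plan -out=tfplan  # preview the changes before applying
terraform apply tfplan      # create or update the infrastructure
```

In a pipeline such as Azure DevOps, these same commands would typically run as stages rather than by hand, so the whole environment can be rebuilt from version-controlled definitions.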
Pell also uses pipelines in Azure DevOps for the actual automation aspect of TempCover’s releasing, although he notes that the company does still have a few legacy on-premise CI/CD systems that it’s yet to port over. This will be a priority for 2021, he says, as it represents a large proportion of the company’s remaining on-prem estate and will prove to be an efficiency gain and cost saving once the migration is complete.
SmartDebit is the main outlier in this case; the company is mostly based on-premise, and uses Nutanix’s private cloud technology (which Scruby describes as “expensive but less effort than OpenStack”) to deliver its applications. This naturally has an impact on which tools are most appropriate, but SmartDebit’s situation isn’t all that unusual.
“We used to run on a private cloud but transitioned to the [public] cloud a year and a half ago,” explains Simons. “We primarily started with IaaS but are moving higher in the stack, with some managed services on the data side and data storage. Similar to Manila, we use Packer for images and use Terraform for all our infrastructure-as-code. In our DevOps journey, this was actually a key enabler; giving all teams their own ‘team environment’. To keep the costs down, we scale it up in the morning and destroy it at night.”
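Because the team environments Simons describes are defined entirely in Terraform, the scale-up-by-day, destroy-at-night pattern can be as simple as two scheduled jobs. A hypothetical crontab sketch (the path and schedule are assumptions, not Yolt's actual setup):

```shell
# 07:00 weekdays: recreate the team environment from its Terraform code
0 7 * * 1-5  cd /srv/team-env && terraform apply -auto-approve

# 20:00 weekdays: destroy it so idle resources aren't billed overnight
0 20 * * 1-5 cd /srv/team-env && terraform destroy -auto-approve
```

The design choice worth noting is that because the environment is fully described in code, destroying it costs nothing but the time to recreate it, which is what makes this cost-saving pattern safe.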
It’s easy to see, then, that cloud technologies like PaaS and IaaS are a natural fit for DevOps toolchains, but there was a little less consensus around the necessity of container technology. Kubernetes, in particular, was identified as a framework with significant potential for enabling DevOps, but it’s not without its drawbacks.
For McLean, Kubernetes allows her organisation to develop, deploy and scale their applications faster, with greater predictability, but others noted that the value proposition isn’t as straightforward as something like cloud infrastructure. Gandhi points out that cloud platform tools like AWS ECS can often be used to accomplish the same goal, while Scruby adds that additional systems such as general virtualisation, load balancing and distributed storage may already be delivering many of the benefits of containerisation.
“There isn't the same step change in advantage from virtual machines to containers as there was from tin to virtual machines,” Scruby explains. “VMs are, in essence, just flabby containers, so can do most of the job. Changing a whole infrastructure to something that provides an incremental advantage is a hard sell to a board, especially when public clouds often have their own serverless technologies to promote as well.”
“Exactly,” Gandhi adds; “I think migrating a large-scale existing infrastructure is much harder compared to when you're kicking off a new blank canvas build. Hence, I completely agree that it just takes time and if the incremental benefit isn't in line with the effort, it's difficult to prioritise.”
This highlights one of the principal challenges identified by our panellists in relation to DevOps, which is the issue of prioritisation. For many of them, it’s been difficult to balance working on elements of their DevOps platform – such as CI/CD pipelines or automation tooling – with keeping the lights on and performing normal business tasks.
Part of Yolt’s strategy for addressing this problem was to set up what Simons calls a “platform team”; an internal group made up of members of various existing teams, with a remit to work on creating self-service tools for other dev teams within the organisation. In addition to the company’s CI/CD pipeline, this team is also responsible for maintaining elements such as the end-to-end testing suite, performance testing and code security tools.
“We share the same challenge of prioritising resource between normal business development and DevOps,” Pell says. “I'm a firm believer in ownership of concerns and therefore completely agree with the principle of a platform team; coincidentally, we're hiring a team with exactly that name right now!”
This approach has been somewhat challenging for Scruby, however; like Yolt, SmartDebit is beholden to a number of regulatory and compliance requirements, which he says can make breaking down the barriers between teams difficult. He also cites the impact that a DevOps model can have on processes outside of the IT organisation.
“There’s a knock-on effect outside of product. For example, the ancillary teams covering customer engagement, marketing collateral generation and support can struggle to keep customers up-to-date with change, especially as some customers are institutions that may not have strong employee training or engagement in place. There's also a little-mentioned aspect of accounting practice that’s impacted by DevOps improvements: how to account for system capitalisation and R&D costs. Accountants are still used to big product releases for the balance sheet and struggle if something does not fit neatly into ‘widget factory’ type models.”
All of our panellists agree that DevOps is as much about people as it is about technology, so the structure of teams is key for a successful implementation. The general consensus is that groups of between four and ten individuals represent the ideal format for a DevOps team. While Gandhi encourages these squads to be fully cross-functional, however, Pell takes a slightly different tack.
“We've followed a similar squad structure which features natural specialisms, but with an understanding that cross-functional requirements exist,” he says. “This covers all of our software development across the entire business, so everything from an entirely new project to a feature iteration or bug fix.”
McLean also runs this kind of structure, with a scrum team of around ten people currently acting as the model team in charge of the development and operation of Newcastle Building Society’s mobile app, supported by a platform engineering team. Scruby, meanwhile, says that SmartDebit is too small to support a separate platform engineering team, but the company is extending and empowering its existing scrum team to do more. For compliance reasons, however, it does maintain a separate Ops team responsible for reviewing and controlling its automation pipeline.
“For us, a feature team consists of all the roles needed to deliver the end-to-end journey,” says Simons. “We promote the ‘t-shaped’ or full-stack principles, and all engineers have access to the code base and can make merge requests if needed. We don't have special QA roles; the team is responsible for testing themselves.”
This is a common theme; giving DevOps teams the autonomy and responsibility to run their respective products was cited by the majority of our panellists as an essential part of building a solid DevOps foundation. McLean and Pell also advise that technical and non-technical staff must work extremely closely, and must all be equally responsible for successful project delivery.
On the other side of the coin, McLean echoes Scruby’s point that DevOps professionals need to consider what their code actually does in practice, not just how well it runs. This, Scruby says, can be a hard mindset for some developers to change.
“I completely agree,” McLean adds; “A DevOps team is like running your own business; you need to care about every facet of your product.”
“I'd say the most important factor is an eagerness to embrace a DevOps ‘mindset’ and step outside your comfort zone,” Gandhi says. “Without it, the cultural shift within the team won't really happen and one will just adopt the status quo. One of the pitfalls is to think that setting up the team means your responsibilities are complete. It's an ongoing journey that will continue. There are systems and processes that need to be embedded within the delivery ecosystem, which is a gradual process.”
To apply to join the IT Pro Panel, please click here to enter your details. Please note that we are not accepting applications from technology vendors at this time.