Why 'lift-and-shift' is an outdated approach
Why you should think about your legacy apps before planning your cloud migration
There are many reasons why a business might consider moving some or all of its applications to public cloud platforms like AWS, Azure or Google Cloud. One of the most compelling, however, is the cloud's ability to reduce the complexity of an organisation's IT. Removing the need to manage the physical infrastructure that applications run on can simplify operations significantly.
Running your applications in the cloud allows you to take advantage of a new compute economy and scale capacity up and down as needed, as well as letting you hook into a huge range of additional tools and services. While this is all well and good for building new applications, porting pre-existing legacy apps over to cloud-based services can often prove challenging.
Organisations that want to migrate pre-existing workloads to a public cloud are faced with a choice: do they re-architect their applications for the cloud, or do they simply attempt to port them to their chosen platform wholesale, with no alteration? For many companies, the latter approach – known as the 'lift-and-shift' method – initially sounds like the more attractive option. It allows them to get into the cloud faster, with less work, meaning the IT team has more time to devote to other elements of the migration or to developing entirely new capabilities.
Sadly, it's not quite as simple as that. While some applications can be moved over fairly seamlessly, not all apps are suited to this method. Compatibility is the first issue that companies are liable to run into with lift-and-shift; particularly when dealing with legacy applications, there's a good chance the original code relies on old, outdated software or defunct libraries. This could make running that app in the cloud difficult, if not impossible, without modification. Organisations also misinterpret the business continuity options available in public cloud, sometimes assuming they are the same as their on-premises counterparts.
“In a lot of cases with server-side applications, they're not delivered and packaged as well as workspace applications are on an end-user’s desktop,” says Lee Wynne, CDW’s Public Cloud Architecture Practice Lead, “so finding somebody who actually installed the application on the server in the first place can be difficult.”
This, Wynne points out, along with a lack of documentation and issues with upgrading the original OS that a virtual machine runs on, can make porting legacy applications running on an old OS to the cloud "very costly and time consuming". In terms of business continuity, Wynne says:
“It can take a fair amount of explaining that in the public cloud domain, the ability to move machines from host to host with zero downtime across availability zones isn’t really a thing, therefore if you are moving a critical business workload from your current data centre that is highly protected by various VMware HA features, you need to consider how that will remain online through availability zone outages. In other words, you have to architect for failure”.
Cost modelling is also a critical component, Wynne says, and organisations need to make sure that the cost modelling they’re doing is an accurate representation of what their actual usage will look like.
“The accuracy element of cost modelling is really critical when you're assessing at scale. You're not just assessing a couple of VMs, you're assessing a whole data centre or a few thousand; you've got to be accurate with the costs, and you've got to be able to get the instance types that are displayed during those accurate cost assessments.
“Therefore picking the tooling and the right telemetry at the beginning, and getting those costs accurate for your business case, is probably one of the first risks that you'll come across with a cloud migration. Otherwise, you just end up making it three times more expensive than it actually is, and therefore providing executives and decision makers with the wrong information.
“If you think way back when we went from physical servers to virtual servers, no one did an as-is migration of those physical machines – they monitored them over a two-to-three month period, and then they migrated them based on real utilisation. So they cut down memory, cut down CPU, so they could fit as much as possible on the target VMware cluster. And this is exactly the same with public cloud. That's why you ensure that you do your cost modelling right. It needs to be lean and optimised, as you are paying by the minute or, in some cases, by the second.”
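The right-sizing approach Wynne describes can be sketched in a few lines: size each cloud instance from observed peak utilisation (plus headroom) rather than from the VM's as-is allocation. This is an illustrative sketch only – the VM figures, instance catalogue and prices below are hypothetical, not real cloud pricing.

```python
# Right-sizing sketch: pick the cheapest instance that covers each VM's
# observed peak usage plus headroom, instead of copying its as-is specs.

# Hypothetical peak utilisation observed over a 2-3 month monitoring window
observed = [
    {"name": "app-01", "alloc_vcpu": 8,  "alloc_gb": 32, "peak_cpu": 0.20, "peak_mem": 0.40},
    {"name": "db-01",  "alloc_vcpu": 16, "alloc_gb": 64, "peak_cpu": 0.70, "peak_mem": 0.80},
]

# Hypothetical instance catalogue: (name, vCPUs, memory GB, price per hour)
catalogue = [
    ("small",   2,  8, 0.05),
    ("medium",  4, 16, 0.10),
    ("large",   8, 32, 0.20),
    ("xlarge", 16, 64, 0.40),
]

HEADROOM = 1.25  # keep 25% headroom above observed peaks

def right_size(vm):
    """Return the cheapest catalogue entry that covers peak usage plus headroom."""
    need_vcpu = vm["alloc_vcpu"] * vm["peak_cpu"] * HEADROOM
    need_gb = vm["alloc_gb"] * vm["peak_mem"] * HEADROOM
    candidates = [c for c in catalogue if c[1] >= need_vcpu and c[2] >= need_gb]
    return min(candidates, key=lambda c: c[3])  # cheapest that fits

for vm in observed:
    name, vcpu, gb, price = right_size(vm)
    print(f"{vm['name']}: {name} ({vcpu} vCPU, {gb} GB) at ${price:.2f}/hr")
```

In this toy example, app-01 – allocated 8 vCPUs and 32GB but peaking at 20% CPU and 40% memory – lands on a 4 vCPU/16GB instance at half the price of its as-is equivalent, which is exactly the lean-and-optimised outcome Wynne argues for.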
It’s important to establish how your apps interact with each other, too. Very few business applications exist in isolation, so making sure that all of the integrations and connections between software still function as required – both during and after the migration – is vital. For this reason, CDW includes a dependency mapping service as part of its Cloud Plan offering, which analyses the connections between VMs and then groups them together into functions, so that they can be migrated in smaller groups.
“That reduces risk significantly,” Wynne says. “It's naive to think that if you're looking to do a migration on a couple of hundred virtual machines, that you're going to do them all in one go. It's not the way it works, you do it batch by batch. So what you don't want to do is introduce risk by migrating a batch of virtual machines over to public cloud and then realise afterwards that actually, these machines are going to communicate back to the source data centre on an application protocol, which is latency-sensitive – so it'll break it, it won't work, it'll be too slow. So you end up having to roll back, or migrate more VMs really quickly that you didn't plan for.”
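The batching logic behind dependency mapping is essentially a connected-components problem: VMs that talk to each other must move together, so each migration batch is a group with no links reaching back to the source data centre. Below is a minimal, hypothetical sketch of that idea – the VM names and connections are invented, and real tooling would work from network flow telemetry rather than a hand-written list.

```python
# Dependency-mapping sketch: group VMs that communicate with each other
# into connected components, so each group can migrate as one batch with
# no latency-sensitive links left stretched back to the source data centre.
from collections import defaultdict

# Hypothetical observed connections between VMs (e.g. from flow telemetry)
links = [
    ("web-01", "app-01"), ("app-01", "db-01"),   # one application stack
    ("web-02", "app-02"), ("app-02", "db-01"),   # a second stack sharing the database
    ("report-01", "warehouse-01"),               # an independent reporting stack
]

def migration_batches(links):
    """Return groups of VMs such that no connection crosses between groups."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, batches = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:  # depth-first walk of one connected component
            vm = stack.pop()
            if vm in group:
                continue
            group.add(vm)
            stack.extend(graph[vm] - group)
        seen |= group
        batches.append(sorted(group))
    return batches

for batch in migration_batches(links):
    print(batch)
```

Note how the shared database pulls both application stacks into a single batch: moving either stack alone would leave chatty traffic crossing back to the data centre, which is precisely the risk Wynne describes. The reporting stack, with no shared dependencies, can be migrated separately.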
With all this in mind, it's absolutely key that when starting a cloud migration project, companies take the time to look at their applications realistically, identifying which of them need to be retooled before they can be moved. There may even be cases where it's faster to rebuild an application from the ground up, rather than tweaking the existing version for a cloud deployment.
The reduction of operational complexity is a key issue for organisations of all types, and it’s one that the cloud can play a huge role in, but be warned – the process of cloud migration isn’t always a simple matter of scooping up your VMs en masse and dumping them into your chosen cloud environment. A good cloud migration involves looking long and hard at which of your applications truly belong in the cloud, and then investing time and effort in making sure that those applications are engineered to get the maximum benefit from a cloud environment.
Organisations that are looking to start this process don't have to do it alone; CDW is a tried-and-tested partner that can help guide your business to the cloud and make sure that your applications are delivering the most value with the least overhead.
Get in touch with a CDW Account Director and ask for details on CloudCare® JumpStart and CloudCare® CloudPlan, or visit uk.cdw.com