Fifteen years of watching the same problem get solved better
I’ve had the privilege — and occasionally the frustration — of watching organisations adopt cloud from the very beginning of the AWS era. In 2009, “cloud” meant renting a virtual machine instead of buying a physical one. In 2015, it meant automating your infrastructure provisioning. In 2020, it meant consuming managed services that abstracted away the infrastructure entirely. In 2025, it means building on platforms where the operational concerns you used to engineer around don’t exist by design.
Each of these waves looked like a clean break from the outside. From inside a managed services practice, they looked like layers — each wave of adoption sitting on top of the previous one, with clients at different points in their journey simultaneously. Understanding that layering is the most important thing an MSP can internalise about cloud adoption.
This article maps the full evolution: where it came from, where it is now, and what it means for how we deliver managed services today.
Wave 1: Infrastructure as a Service — lift, shift, and discover the bill
The first wave of enterprise cloud adoption was characterised by one dominant motion: take what you have on-premises and move it to the cloud. Lift and shift. The organisational logic was understandable: reduce capital expenditure, eliminate data centre lease obligations, and buy time to figure out what “cloud-native” actually meant.
The results were mixed in ways that the industry was slow to acknowledge. Organisations that moved on-premises workloads to EC2 without re-architecting them frequently found their cloud bills exceeded their previous data centre costs. A physical server that was 20% utilised on-premises is a fixed cost. A running EC2 instance that is 20% utilised in the cloud is a variable cost — and it accrues every hour you forget to turn it off.
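The Wave 1 cost lesson is easy to make concrete. A minimal sketch, using hypothetical round-number hourly rates rather than real AWS prices, of why an always-on, under-utilised instance costs what it does, and what right-sizing and scheduling recover:

```python
# Illustrative cost of an always-on, under-utilised on-demand instance
# versus a right-sized one. Hourly rates here are hypothetical round
# numbers for illustration, not real AWS pricing.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, hours_running: float = HOURS_PER_MONTH) -> float:
    """An on-demand instance accrues cost for every hour it runs,
    regardless of how busy it is."""
    return hourly_rate * hours_running

# A large instance running 24/7 at 20% utilisation pays for 100% of it.
oversized  = monthly_cost(0.20)                         # always on, mostly idle
rightsized = monthly_cost(0.05)                         # a quarter of the capacity
scheduled  = monthly_cost(0.05, hours_running=12 * 22)  # office hours only

print(f"oversized:  ${oversized:.2f}/month")
print(f"rightsized: ${rightsized:.2f}/month")
print(f"scheduled:  ${scheduled:.2f}/month")
```

The arithmetic is trivial, but Wave 1 organisations routinely paid the first number because nothing in a lift-and-shift migration forces anyone to ask the question.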
The learning curve of Wave 1 was primarily financial and operational: right-sizing instances, implementing auto-scaling, managing reserved capacity, understanding the shared responsibility model. MSPs in this era were predominantly doing migration work — VMware-to-EC2, Oracle-to-RDS, on-premises Active Directory-to-AWS Directory Service. The value proposition was expertise in the migration mechanics.
Wave 2: Infrastructure as Code — automating what you built
The second wave was a response to the operational chaos of Wave 1. Organisations that had successfully migrated to AWS found themselves with sprawling, manually managed cloud estates. EC2 instances created ad-hoc through the console. Security groups modified without documentation. AMIs from three years ago still running in production. The cloud gave you speed. Without discipline, that speed compounded technical debt at the same rate.
Infrastructure as Code was the answer. CloudFormation arrived first, giving AWS practitioners a declarative way to express infrastructure intent as JSON or YAML templates. Terraform emerged as the multi-cloud alternative and rapidly became the industry standard for IaC across heterogeneous estates. AWS CDK arrived later and changed the game for development teams: instead of learning a DSL or a JSON schema, you could express your infrastructure in TypeScript, Python, or Java — languages your developers already knew.
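To make the contrast with console-clicking concrete, here is a minimal, illustrative CloudFormation template in YAML. The resource, names, and CIDR range are placeholders for illustration, but the point stands: the security group's rules are now declared, versioned, and reviewable rather than clicked into existence:

```yaml
# Minimal illustrative CloudFormation template: a security group whose
# ingress rules exist as reviewable code, not undocumented console edits.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example web security group managed as code
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow HTTPS from anywhere
      VpcId: !Ref VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
```

A change to that ingress rule now arrives as a pull request with a diff, a reviewer, and a deployment history, which is the whole of the Wave 2 argument in one file.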
The shift from clicking through the console to expressing infrastructure as code changed the MSP value proposition fundamentally. Suddenly, the output of managed services work wasn’t just a running system — it was a codebase. A CloudFormation stack. A CDK application. A Terraform module. The client’s infrastructure became versionable, reviewable, and repeatable. Changes went through pull requests. Infrastructure drift became detectable. Compliance became auditable.
For our practice, the IaC wave was transformational. It raised the engineering bar for what managed services delivery looked like, and it separated MSPs who had genuinely invested in engineering capability from those who were still clicking through consoles.
Wave 3: PaaS and managed services — the great abstraction
The third wave is the one we’re deepest in today, and it represents a fundamentally different relationship with infrastructure. AWS has systematically built managed services that take operational concerns off your plate — not by automating them, but by eliminating them from your responsibility surface entirely.
Consider the trajectory of database management on AWS. In 2012, running a database on AWS still typically meant running it yourself on EC2: you managed the OS, the database software, the backup scripts, the replication configuration, the failover logic. RDS abstracted most of that. Aurora Serverless v2 abstracted the rest — you define a schema, set min and max capacity, and Aurora scales, backs up, fails over, and patches itself. You’ve stopped being a database administrator and started being a data architect.
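As a sketch of what that abstraction looks like in practice, here is an illustrative CloudFormation fragment for an Aurora Serverless v2 cluster. The engine choice, capacity numbers, and names are assumptions for illustration; the capacity range is essentially all you declare:

```yaml
# Illustrative Aurora Serverless v2 cluster: declare a capacity range in
# ACUs and Aurora handles scaling, backup, failover, and patching.
Resources:
  Cluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      MasterUsername: appadmin            # placeholder name
      ManageMasterUserPassword: true      # credential lifecycle handled for you
      ServerlessV2ScalingConfiguration:
        MinCapacity: 0      # a 0 ACU minimum (on supported engine versions)
        MaxCapacity: 16     # lets an idle cluster pause entirely
  Writer:
    Type: AWS::RDS::DBInstance
    Properties:
      DBClusterIdentifier: !Ref Cluster
      DBInstanceClass: db.serverless      # serverless v2, not a fixed instance size
      Engine: aurora-postgresql
```

Notice what is absent: no instance sizing, no backup scripts, no replication topology. The remaining decisions are the data-architecture ones.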
The same pattern repeats across every infrastructure domain. ECS/EKS with Fargate: no EC2 nodes to manage, no cluster capacity to right-size. AWS Lambda: no servers, no OS, no runtime patching. Amazon OpenSearch Serverless: no cluster topology decisions. Amazon Bedrock: no ML infrastructure. The trend is unambiguous and accelerating.
The technology evolution maps cleanly onto the AWS service portfolio: Wave 1 was the era of EC2, VPC, and RDS; Wave 2 layered CloudFormation, Terraform, and CDK on top of it; Wave 3 is defined by Lambda, Fargate, Aurora Serverless, OpenSearch Serverless, and Bedrock.
What wave 3 actually changes about architecture decisions
The shift to PaaS and managed services is not simply about convenience. It fundamentally changes the architecture decision space. In Wave 1, a core architectural decision was: how many EC2 instances do I need, and how do I configure auto-scaling? In Wave 3, that question doesn’t exist — Aurora Serverless scales to zero when there’s no traffic and to hundreds of ACUs when there is. The question instead becomes: what are my data access patterns, and is Aurora’s serverless scaling model appropriate for them?
This is a more sophisticated question. It requires deeper understanding of the service’s behaviour, not shallower. Wave 3 doesn’t make architecture easier — it moves the difficulty to a different domain.
The trade-offs are real. Lambda’s 15-minute execution limit matters if you’re processing large files. Aurora Serverless v2’s cold start latency matters if you have sub-second SLOs on connection-heavy workloads. DynamoDB’s eventual consistency model matters if your application assumes strong consistency. Fargate’s startup latency matters if you need sub-second container scheduling. Understanding where these constraints apply — and designing around them rather than discovering them in production — is the core architectural competency of the Wave 3 practitioner.
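Designing around a constraint like Lambda’s 15-minute limit often comes down to batch-size arithmetic. A minimal sketch, with assumed per-item processing times, of partitioning a large file-processing job so each invocation finishes well inside the limit; in practice each batch would be fanned out via SQS or Step Functions:

```python
# Sketch of designing around Lambda's 15-minute execution limit: split a
# large job into batches sized to finish well inside the limit, then fan
# each batch out as its own invocation. Timing numbers are assumptions.
from typing import List

LIMIT_SECONDS = 15 * 60   # hard Lambda ceiling
SAFETY_MARGIN = 0.5       # aim to use at most half the limit
SECONDS_PER_ITEM = 2.0    # assumed (measured) per-item processing time

def plan_batches(items: List[str]) -> List[List[str]]:
    """Partition items so each batch fits comfortably in one invocation."""
    budget = LIMIT_SECONDS * SAFETY_MARGIN
    batch_size = max(1, int(budget // SECONDS_PER_ITEM))
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

files = [f"s3://example-bucket/part-{n:05d}" for n in range(1000)]
batches = plan_batches(files)
print(f"{len(batches)} invocations, largest batch {len(batches[0])} files")
```

Doing this sizing on a whiteboard before launch, rather than after the first timeout in production, is exactly the "designing around constraints" competency the paragraph above describes.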
The MSP perspective: value migration across waves
The role of a managed services provider has shifted with each wave, and being honest about that shift is essential for any MSP that wants to remain relevant.
In Wave 1, the MSP’s primary value was operational expertise: knowing how to configure EC2, how to design VPCs, how to set up RDS Multi-AZ. This was genuinely rare knowledge in 2010. By 2016, it was commodity knowledge available through AWS certifications and a thriving training ecosystem.
In Wave 2, the MSP’s value moved to engineering process: building and maintaining IaC codebases, implementing DevOps pipelines, governing configuration drift. Again genuinely valuable, and again commoditising as the tooling matured and the engineering community caught up.
In Wave 3, the MSP’s value is in three areas that haven’t commoditised and won’t: architectural judgment (knowing which PaaS service to use when, and what the failure modes are), FinOps maturity (Wave 3 services have complex cost models that require active optimisation), and AI/ML integration capability (the Bedrock and SageMaker ecosystems are evolving faster than most in-house teams can track).
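A flavour of that FinOps work: even a back-of-envelope model shows how request volume decides whether per-invocation pricing beats an always-on container. All rates below are hypothetical placeholders, not current AWS prices:

```python
# Sketch of a Wave 3 cost-model comparison: at what request volume does
# per-invocation (Lambda-style) pricing overtake an always-on container
# (Fargate-style)? All rates are hypothetical, not real AWS pricing.
HOURS_PER_MONTH = 730

def lambda_monthly(requests: int,
                   duration_s: float = 0.2,
                   gb: float = 0.5,
                   per_request: float = 0.20 / 1_000_000,
                   per_gb_s: float = 0.0000167) -> float:
    """Pay per request plus per GB-second of execution."""
    return requests * (per_request + duration_s * gb * per_gb_s)

def fargate_monthly(vcpu: float = 0.5, gb: float = 1.0,
                    vcpu_hr: float = 0.04, gb_hr: float = 0.004) -> float:
    """Pay for provisioned vCPU and memory every hour, busy or idle."""
    return HOURS_PER_MONTH * (vcpu * vcpu_hr + gb * gb_hr)

for volume in (1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} req/mo  lambda=${lambda_monthly(volume):8.2f}"
          f"  fargate=${fargate_monthly():8.2f}")
```

Under these assumed rates the crossover sits somewhere around the tens of millions of requests per month, and moving any single parameter (duration, memory, idle hours) moves it. That sensitivity is why Wave 3 cost models need active, ongoing optimisation rather than a one-off sizing exercise.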
The MSP that built its identity on “we manage your infrastructure” is fighting an existential battle against AWS itself. The MSP that built its identity on “we accelerate your architecture decisions and operate your platform with engineering discipline” has a durable value proposition that AWS managed services growth actually amplifies rather than threatens.
The compliance layer: what hasn’t changed across waves
One thing that has remained constant across all three waves is the compliance and governance responsibility. AWS Well-Architected, AWS Security Hub, AWS Config rules, and AWS Control Tower give you the tools to enforce compliance posture across your estate. But the responsibility for what “compliant” means — the risk framework, the control objectives, the audit evidence — that’s yours.
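As an illustration of turning a control objective into auditable, versioned configuration, here is a sketch of an AWS Config managed rule declared in CloudFormation. The specific rule, checking that EBS volumes are encrypted, is an example only:

```yaml
# Illustrative AWS Config managed rule declared as code: a control
# objective (volumes must be encrypted) becomes continuously evaluated,
# versioned, auditable configuration.
Resources:
  EncryptedVolumesRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: ebs-volumes-encrypted
      Source:
        Owner: AWS
        SourceIdentifier: ENCRYPTED_VOLUMES
      Scope:
        ComplianceResourceTypes:
          - AWS::EC2::Volume
```

The tooling enforces the check; deciding that encrypted volumes are the right control, and mapping the rule’s findings to audit evidence, remains your responsibility, which is exactly the division of labour described above.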
This is where many Wave 3 adopters discover a painful truth: migrating to serverless doesn’t simplify your SOC 2 or ISO 27001 scope. Lambda functions have execution roles. Aurora clusters have encryption at rest and in transit requirements. Bedrock model invocations have data residency implications. The compliance surface area is different in Wave 3, not smaller. MSPs that can navigate this — and translate Well-Architected findings into audit-ready evidence — deliver value that neither AWS nor a generalist consulting firm can easily replicate.