Cloud 3.0 isn't about leaving the cloud. It's about unbundling the hyperscaler.

Look at which companies are quietly moving workloads out of AWS and Azure right now, and the pattern is striking. They're not the laggards or the holdouts who never went all-in on cloud. They're the most cloud-native names in the industry — Basecamp, Dropbox, Cloudflare, the frontier AI labs — and they're not leaving wholesale. They're moving specific workloads. That's where Cloud 3.0 actually starts.
The shorthand for Cloud 3.0 in vendor decks is "intelligent, hybrid, sovereign". That's accurate enough as a checklist, but it misses the underlying mechanism. What's actually happening is that the hyperscaler bundle — one provider, one billing relationship, integrated everything — is being unbundled by orchestration tooling. Once integration becomes cheap, the bundle stops being a strategic moat, and customers start pricing each workload against the platform that suits it.
The bundle was the moat, and the moat is getting shallow
Cloud 1.0 was lift-and-shift. You moved a virtual machine from a rack you owned to a rack someone else owned, and the savings came from not having to refresh hardware. Cloud 2.0 was the rebuild — cloud-native architecture, containers, managed services, serverless functions. The premise of both was the same: integration was the value. One identity provider. One virtual network. One billing relationship. One place where everything talks to everything else. That integration was what justified the markup over running your own kit.
Kubernetes, Terraform, OpenTofu, and the broader container ecosystem broke the integration premium. The same workload that used to run only on Lambda now runs on K8s on any platform — public, private, sovereign, on-prem, or all four. Identity has decoupled through SSO and federation. Networking is mostly a configuration concern rather than a vendor commitment. Storage is portable in a way it wasn't five years ago. Moving a workload between platforms used to take a project team. In a well-orchestrated environment now, it takes a config change.
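To make "it takes a config change" concrete, here is a minimal sketch using the official Kubernetes Python client. The context names, the image, and the service are invented for illustration; the point is that the target platform is one string.

```python
# Minimal sketch: the same workload definition lands on either platform by
# changing one string. Assumes two kubeconfig contexts exist locally,
# "hyperscaler" and "onprem" (both names are hypothetical).
from kubernetes import client, config


def build_deployment() -> client.V1Deployment:
    """One workload definition, written once, platform-agnostic."""
    container = client.V1Container(
        name="reporting-api",  # hypothetical line-of-business service
        image="registry.example.com/reporting-api:1.4.2",
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "512Mi"},
        ),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "reporting-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "reporting-api"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="reporting-api"),
        spec=spec,
    )


def deploy(context: str) -> None:
    """The target cluster is a config value, not an architecture decision."""
    config.load_kube_config(context=context)
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=build_deployment()
    )


deploy("onprem")  # was deploy("hyperscaler") last quarter
```

Terraform and OpenTofu give the same property one layer down, at provisioning: the module describing the workload stays put while the provider configuration changes underneath it.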
When integration is cheap, customers price each workload separately. That is the mechanism behind every Cloud 3.0 case study you're reading.
AI is the workload that exposed the bundle's pricing
The reason AI keeps coming up in Cloud 3.0 conversations isn't that AI is the only driver. It's that AI is the workload type where the bundle's pricing breaks most visibly. Hyperscalers sell GPU capacity at their standard platform markup — somewhere between five and ten times what equivalent capacity costs at a colocation provider running depreciated hardware. Inference workloads have the worst possible economics for cloud pricing models: flat utilisation, predictable load, high bandwidth. Public cloud was priced for spikes. Flat workloads pay spike prices.
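A toy comparison makes the mechanism visible. Every rate below is a placeholder, not a quote from any provider; the shape of the result, not the numbers, is the point.

```python
# Illustrative arithmetic only: both rates below are placeholders.
HOURS_PER_MONTH = 730


def on_demand(rate_per_hour: float, hours_used: float) -> float:
    """Metered cloud pricing: you pay per hour consumed."""
    return rate_per_hour * hours_used


def colo(flat_monthly: float) -> float:
    """Owned/colocated hardware: the bill is the same at 5% or 100% use."""
    return flat_monthly


CLOUD_GPU_RATE = 4.00   # $/hr, hypothetical hyperscaler GPU instance
COLO_MONTHLY = 500.00   # $/month, hypothetical depreciated GPU in colo

# A spiky workload (4 hrs/day, 22 days) buys elasticity cheaply on cloud:
spiky_hours = 4 * 22
print(f"spiky on cloud: ${on_demand(CLOUD_GPU_RATE, spiky_hours):>9,.2f}")

# A flat inference service runs 24/7 and pays the spike premium all month:
print(f"flat on cloud:  ${on_demand(CLOUD_GPU_RATE, HOURS_PER_MONTH):>9,.2f}")
print(f"flat on colo:   ${colo(COLO_MONTHLY):>9,.2f}")
# Flat cloud vs flat colo is about 5.8x here, inside the five-to-ten-times
# range quoted above. The spiky workload, by contrast, is cheaper on cloud.
```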
Notice that the frontier AI labs — OpenAI, Anthropic, Mistral — run training and most of their inference on rented or owned hardware, not as standard hyperscaler customers. Not because they're contrarian, but because the maths doesn't work otherwise. The same logic applies one level down to any business with steady-state model inference, fraud detection, real-time analytics, or anything else with a flat utilisation curve. The bundle's premium is most expensive on the workloads that need elasticity least.
The bigger story isn't AI. It's flat workloads finally getting noticed.
For a 50-seat Victorian SMB, the AI angle isn't directly load-bearing. You aren't training Llama-class models. What you do have, almost certainly, are steady-state workloads that have been running on someone else's metered compute for the past five years and shouldn't have been. ERP. File servers. Line-of-business databases. Reporting infrastructure. These are the boring workloads that don't flex, don't spike, and don't need elasticity. They are also where most cloud bill creep accumulates.
This is what FinOps teams have been quietly demonstrating for three years. The Cloud 3.0 framing just put a name on what the spreadsheets already showed.
Sovereignty is the second driver, and the louder one
The other reframe in Cloud 3.0 is data sovereignty. The Privacy Act reforms, the EU AI Act becoming enforceable on 2 August 2026, and the long shadow of the US CLOUD Act have turned "where does our data physically live, and under whose jurisdiction?" into a question buyers ask instead of assume. For Australian SMBs the direct regulatory exposure is modest. The indirect exposure is real and growing. Your customers in financial services, healthcare, and government are pushing data-residency questions down the supply chain through their procurement questionnaires.
What an SMB IT decision-maker should actually do
The mistake is reading Cloud 3.0 as "leave the public cloud". The shift is to stop placing workloads there by default. Public cloud is still the right home for genuinely elastic workloads, for anything that benefits from a hyperscaler's managed-service depth, and for anything that needs global reach you don't want to operate yourself. The argument isn't against the cloud. It's against treating it as the only answer.
The work that pays off is unglamorous: map your workloads, classify each one against three lenses (utilisation curve, data sensitivity and jurisdiction, exit cost), then place each one with intent. The sketch below shows the shape of that logic.
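Here is a deliberately simplified version of the three-lens classification in Python. The thresholds and fields are invented for illustration; a real assessment puts actual billing data, residency requirements, and contract terms behind each lens.

```python
# Simplified sketch of the three-lens placement logic. All thresholds
# and example workloads are hypothetical.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    avg_utilisation: float     # 0.0-1.0 across the month
    peak_to_avg: float         # burstiness; ~1.0 means flat
    regulated_data: bool       # residency/jurisdiction constraints
    managed_service_deps: int  # count of provider-specific services used


def place(w: Workload) -> str:
    # Lens 3: exit cost. Deep managed-service coupling makes moving expensive.
    if w.managed_service_deps >= 3:
        return "stay put (for now): exit cost dominates"
    # Lens 2: data sensitivity and jurisdiction.
    if w.regulated_data:
        return "sovereign or on-prem platform"
    # Lens 1: utilisation curve. Flat 24/7 load suits flat pricing.
    if w.avg_utilisation > 0.6 and w.peak_to_avg < 1.5:
        return "flat 24/7: private/colo pricing fits"
    return "spiky/elastic: public cloud pricing fits"


for w in [
    Workload("erp-db", 0.8, 1.1, False, 0),
    Workload("patient-portal", 0.4, 2.0, True, 1),
    Workload("marketing-site", 0.1, 6.0, False, 4),
]:
    print(f"{w.name:16} -> {place(w)}")
```

The order of the checks matters: exit cost comes first because a workload you can't afford to move shouldn't be scored on the other two lenses yet.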
Communicat does this work for businesses across Victoria, and the savings are usually large, because the starting position is almost always "we went all-in five years ago and never revisited". The all-in-on-cloud era is ending. The cloud isn't.
Frequently asked questions
What is Cloud 3.0?
Cloud 3.0 is the phase where customers stop treating the public cloud as the default destination for every workload and start placing each one on the platform with the right economics, jurisdiction, and performance characteristics. The drivers are AI-driven cost pressure, data sovereignty regulations, and the maturity of orchestration tooling that makes workload placement portable. Cloud 1.0 was lift-and-shift; Cloud 2.0 was cloud-native rebuild; Cloud 3.0 is intentional placement.
What is cloud repatriation and how common is it?
Cloud repatriation is moving workloads from public cloud back to private, hybrid, sovereign, or owned infrastructure. Surveys suggest 70-83% of organisations are actively considering or executing some form of repatriation, though most are doing it selectively, workload by workload, rather than as a wholesale exit.
Does the US CLOUD Act apply to Australian businesses using AWS or Azure?
Indirectly, yes. The CLOUD Act lets US authorities compel disclosure of data held by US-headquartered providers, including data physically stored in Australian regions. Direct exposure for most SMBs is low, but the question increasingly appears in customer procurement questionnaires for organisations selling into financial services, healthcare, and government.
How do I decide which workloads to move off public cloud?
Three lenses: utilisation curve (flat 24/7 workloads usually belong off hyperscalers; spiky elastic workloads usually do not), data sensitivity and jurisdiction (regulated data may need sovereign options), and exit cost (workloads deeply locked into managed services are expensive to move). Then place each workload, one at a time, on the platform whose pricing model matches its profile.
