The Next Trillion Dollars?
I know what you’re saying: Wait. Wut? The next trillion dollars? How big is cloud computing anyway? I didn’t know it had achieved the first trillion dollars. And now you’re talking about the next trillion dollars? What’s that all about?
It’s true. Cloud computing has yet to reach a trillion dollars of revenue. Despite its size and heady growth rates, it’s still quite a bit short of that mark.
Just how short is a matter of some disagreement. Gartner puts total 2019 cloud spend at around $228 billion (see chart below), while IDC puts it at $210 billion. As can be seen from the chart, Gartner’s estimate includes revenue for the traditional categories of infrastructure-, platform-, and software-as-a-service (IaaS, PaaS, and SaaS) as well as cloud management and security, plus business-process-as-a-service (BPaaS, and I’m as confused as you are as to what that means). From the article it’s not clear how IDC’s categories map to Gartner’s, but the totals are close enough that we can call the two roughly comparable.
So why do I refer to the next trillion dollars?
Simply this: we are on the verge of a massive shift of applications from their current deployment location, the on-prem data center, to lower-cost, low-capital-investment environments. We’ve seen evidence of this already (witness the $200-billion-plus being spent on cloud services, per Gartner and IDC), but we’re now at a tipping point where enterprise IT is going to get very serious about application migration.
How do we know this shift is occurring?
The cloud-native publication The New Stack recently ran an article citing a SIM IT survey that showed a steep drop in spending on “hardware, servers, and facilities” over the past decade. As the chart below from the article shows, enterprise spending in this category has fallen by more than a third over that period, while spending on cloud services has more than doubled. What’s striking is that the two trends are nearly mirror images: hardware spend has dropped almost exactly in tandem with the rise in cloud spend.
This spending shift reflects the reality that enterprises want to get out of the data center business. After all, owning a data center is a capital-intensive proposition, and one that is not a core enterprise function. The reality is that investing in data centers displaces capital investment in key business activities. Any rational enterprise would prefer to avoid data center investment if at all possible; in the past, this wasn’t really an option, but today, enterprises have choices for application deployment that don’t require investing capital in infrastructure.
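The capital-displacement argument above is ultimately arithmetic, and a minimal sketch makes it concrete. All figures below are hypothetical assumptions chosen for illustration, not numbers from the Gartner, IDC, or Synergy reports cited in this post:

```python
# Illustrative sketch only: every figure here is a hypothetical
# assumption, not drawn from any analyst report cited in this post.

def on_prem_cost(capex, annual_opex, years, refresh_interval=5):
    """Total cost of owning a data center: upfront capital, a hardware
    refresh every refresh_interval years, plus operating expense."""
    refreshes = max(0, (years - 1) // refresh_interval)
    return capex * (1 + refreshes) + annual_opex * years

def cloud_cost(annual_spend, years):
    """Cloud deployment is pure operating expense: no capital outlay."""
    return annual_spend * years

# A hypothetical 10-year comparison (all amounts in $M):
print(on_prem_cost(capex=50, annual_opex=8, years=10))  # 180
print(cloud_cost(annual_spend=12, years=10))            # 120
```

Even when the cloud’s total outlay is comparable, the on-prem option ties up capital upfront and at every refresh cycle, which is exactly the investment a rational enterprise would rather direct at core business activities.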
As John Dinsdale of Synergy Research Group puts it: “Lots of companies are getting out of running their own data centers, and there is no end in sight to the trend. Things or news items that run against the grain of a big trend tend to get media coverage and can give readers the sense the trend isn’t there. This one most definitely is.” Synergy, by the way, is the research organization that publishes cloud provider market-share figures and is regarded as a very reliable source on cloud trends.
For their own part, AMG (Amazon, Microsoft, and Google) clearly believe in this trend and are investing billions of dollars in computing facilities. The article quoting Dinsdale is headlined “Enterprises spend more on cloud IaaS than on-premises data-center gear” and discusses Synergy’s finding that in 2019 enterprises spent more on cloud infrastructure than on data center infrastructure: $97 billion vs. $93 billion (and yes, Synergy’s numbers differ from both Gartner’s and IDC’s. Sigh).
One clear beneficiary of this shift is Intel. Its Q4 2019 numbers blew past expectations, with the data center group (i.e., the group that sells chips used by AMG) growing 19% year over year, lending credence to Cisco’s estimate that in 2020 a full 47% of all servers sold will be delivered to hyperscale providers.
The clear theme of all this is that enterprises are highly motivated to migrate applications out of their data centers in order to reduce non-strategic capital investment.
Data centers are a specialized form of real estate. In residential real estate “life events” are drivers of housing change; life events refer to things like births, deaths, marriages, divorces, and so on.
Commercial real estate tends to be driven by business “life events”: acquisitions, spinouts, the completion of depreciation schedules, or demands for incremental capital investment deemed unnecessary to overall strategy.
In terms of data centers themselves, drivers to change data center occupancy include all of the above commercial real estate drivers, but have some unique twists as well.
The massive growth in enterprise computing often forces companies to consider building new data centers; likewise, the rapid evolution of computing form factors often forces data center upgrades before existing equipment is fully depreciated. On the other hand, many enterprises are pursuing data center consolidation to rein in a proliferation of locations; this typically requires upgrades to the consolidation sites, so incremental capital investment is necessary even in a consolidation initiative.
Certainly, these life events will provide catalysts for many companies to decide to get out of the data center business. However, it’s likely many companies will look at the whole upgrade/invest/data center add dynamic and decide to get out of the data center business now, even if not at a convenient life event.
So the question of how soon we’ll see these data centers empty is uncertain. One driver — life events — is fairly predictable. The other — an emotional decision to stop spending time and money on a low value-add activity — is less so. In my view, if we start seeing evidence of the emotional decision variety at any scale we’ll know we’re at a conceptual tipping point of enterprise data center exit, where the conventional wisdom will become “everyone should exit their data centers” and the phenomenon will accelerate rapidly.
That leaves a key question, of course: where will enterprises migrate those applications to? I discussed this question from the perspective of the applications themselves recently in my post on Cloud Computing and Application Inertia. In the post I described four approaches to managing legacy applications in a world of cloud computing: sleeping dogs, lift and shift, wrap and extend, and caterpillar into butterfly.
One important thing to keep in mind is that enterprises are not monolithic, with a single approach to the entirety of their application portfolio. They will typically have a mix of these four approaches within the portfolio, and the makeup of that mix will probably change over time as circumstances and budget dictate. Consequently, a critical question about where to move applications is what best supports a mixed (and changeable) portfolio distribution?
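One way to picture that mixed, changeable distribution is as a simple tally of applications against the four approaches. Here’s a minimal sketch; the application names and assignments are hypothetical, purely for illustration:

```python
from collections import Counter

# The four approaches from the Cloud Computing and Application Inertia
# post; these application names and assignments are hypothetical.
portfolio = {
    "payroll": "sleeping dogs",
    "order entry": "lift and shift",
    "reporting": "lift and shift",
    "inventory": "wrap and extend",
    "customer portal": "caterpillar into butterfly",
}

def portfolio_mix(portfolio):
    """Summarize how applications are distributed across approaches."""
    return Counter(portfolio.values())

print(portfolio_mix(portfolio))

# The mix shifts over time as circumstances and budget dictate:
portfolio["reporting"] = "wrap and extend"
print(portfolio_mix(portfolio))
```

The point of the exercise: any destination an enterprise chooses has to accommodate all four categories at once, and tolerate applications moving between them.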
Given that, what are the options?
One attractive option is to move applications out of one’s own data center into a near mirror computing environment. The obvious solution is a colocation provider. Colo delivers computing infrastructure (facilities and power, networking connectivity, and customer-segregated racks) into which customers place their own servers and storage.
Colo makes it possible to transfer sleeping dogs and lift and shift applications with little change needed. It is the lowest-friction way to move from capital-intensive on-prem data centers to an operating-expense infrastructure basis.
And certainly this is a popular choice. Synergy (yes, that same analyst firm) notes that Equinix, Digital Realty, and NTT, the three largest colo providers, are growing rapidly, a clear sign of customer adoption.
However, this choice has a significant limitation: it is poorly suited to the wrap and extend and caterpillar into butterfly approaches. Just like on-prem infrastructure, a colo deployment is difficult to change and struggles to support application elasticity.
Consequently, pursuing a colo-only strategy is an evolutionary dead end. It is a low-cost, low-disruption way to get out of one’s own data centers, but it imposes limitations on more sophisticated application topologies. For many companies this approach will suffice, but those with more ambitious plans for their application portfolios may find it too restrictive.
Another approach to emptying one’s data center is to move all of the applications in it into a cloud provider’s environment. Certainly it is possible to take an inflexible monolithic application and move it into a cloud environment, where it can operate much as it did on-prem.
Pursuing this approach also allows for flexibility within the deployment choices, since AMG environments readily support sophisticated application topologies that are dynamic and frequently require differing amounts of computing resources. Moreover, AMG also supports the transition from capex to opex application spend.
I’ve worked with many companies that pursue a lift and shift strategy, and it can be highly satisfactory. However, many of them find that migration costs more than they expect, given that lift and shift implies few changes to the applications themselves.
This is because most organizations focus on changes to the application code itself and fail to appreciate the changes required in the application environment and operational practices; these changes can be fairly extensive and raise the cost of migration. Commonly overlooked areas include identity management, security tooling, and the operational practices staff must learn for the new environment.
None of these issues is insurmountable, for sure. Many organizations have addressed them as part of a lift and shift initiative. It’s just important that the cost and time delays imposed by what sounds like a simple approach not be underestimated. I believe that many enterprises will find the delays particularly troublesome when they confront data center exit deadlines. Nothing is more frustrating (and challenging) than the unexpected extra work that turns up when executing a seemingly simple task — there’s an industry catchphrase for it: yak shaving.
Both colo and lift and shift data center exit strategies are workable. Both carry costs that may not be fully appreciated until encountered. Over the past two years another approach has come forward that combines the best of each: using enterprise-ubiquitous VMware as an onramp to AMG environments.
I discussed the VMware strategy recently in my 2020 cloud predictions, when I numbered VMware among the four most important vendors in the industry. I described its strategy as providing “the connective tissue between existing on-prem environments and all three of the big cloud providers.”
What VMware has done is quite ingenious. It has struck deals with all three AMG providers to host VMware-native environments in their infrastructure, allowing users to move applications from their on-prem environments to identical VMware environments. This offers the low-friction, low-cost colo-type approach. No change is required to install or configure the applications. Implementing new identity management or security tooling is unnecessary. And users can continue to operate their applications just as they did when the applications resided in an on-prem data center, so little or no staff training is needed to support the migration. Importantly, this approach also allows enterprises to continue to manage costs under existing VMware enterprise license agreements, sparing them the hassle of negotiating yet another vendor agreement.
Unlike the colo approach, however, this migration option does not preclude evolving individual applications into more cloud-native architectures, and allows enterprises to support a varying mix of portfolio application architectures. The VMware migration option allows the most continuity with existing on-prem computing patterns while not imposing limitations on portfolio evolution or application modernization.
The more I think about it, the cannier VMware’s strategy looks. Enterprises are stuck between a rock and a hard place (or trying to navigate between Scylla and Charybdis, for those more classically inclined): they need to reduce spend on on-prem data centers while avoiding massive migration bills in moving to AMG environments.
The magnitude of the change coming to enterprise IT spend should not be underestimated. Total investment in enterprise data centers is certainly in the hundreds of billions of dollars, if not well over one trillion. Ultimately, most enterprises will come to agree with Netflix that operating data centers is not a core competence. Furthermore, most will recognize that, unlike Netflix, they don’t have high-P/E stocks to fund data center capital investment even should they desire to run them.
VMware is positioned to be the beneficiary of this mass migration. While many enterprises will choose one of the other approaches outlined above, a very large number will come to embrace the low-friction choice offered by VMware’s AMG partnerships. It’s still early days for this approach, but I predict it will grow rapidly as it’s proven out. The Bottom Line: VMware is poised to reap hundreds of millions, if not billions, of dollars as the enterprise data center exit accelerates.
In a way, recognizing the extended continuity of enterprise application portfolios is a climbdown for me; I’ve long been an advocate for direct AMG adoption. However, over the past couple of years I’ve come to appreciate the constraints enterprise IT operates under. Simply put, most enterprise IT organizations suffer severe shortages in budget and staffing; the only place they’re rich is in the large melange of their application portfolios. Any method that allows them to move toward a more cloud-oriented future while reducing immediate disruption will be a winner.