What to Expect in Cloud Computing in 2020

Every year I make a set of predictions of what will happen over the next 12 months in the world of cloud. It’s always good to stretch one’s mind a bit and think about the likely implications of the energy and innovation going on in this space.

The Decade Past

This year is a bit special, marking the end of the decade (yes, I know the decade actually ends when 2020 closes out, but I prefer to go with the crowd on this one). I’ve seen a number of decade retrospectives in other fields, and I think a short recap comparing cloud computing in 2010 versus 2020 will illustrate just how powerful the trend has been, and how much progress there has been over the past 10 years. 

In 2010, AWS was fairly well-established with an annual run rate of perhaps $2 billion. Most of its users were startups, SMBs, and small groups within enterprises who turned to AWS out of frustration with the tardy responsiveness of their in-house IT orgs (for which they were given the traitorous label of “rogue IT”). Microsoft and Google had negligible, if any, presence in the cloud computing industry, the former handicapped by an ongoing internal battle between cloud versus on-prem camps, and the latter by a failure to view cloud as an important priority, understandable in a company whose coffers were overflowing with search advertising revenue. It’s instructive to remember that at this time Satya Nadella’s appointment as Microsoft CEO was still four years in the future.

Instead, names like GoGrid and Rackspace were thought of as AWS’s primary competition. Today, GoGrid has been lost in the sands of time, and Rackspace, after attempting to supplant AWS with the OpenStack open source offering, has reinvented itself as an MSP-cum-hosting company.

The services from AWS were quite limited. It offered basic computing services, primarily oriented around automated virtualization. It had launched Elastic Load Balancer only the year before, and many important enterprise-oriented features (e.g., audit trails of account activity) were still in the future. Running a significant production application required users to work around the limitations of the offering and to take on the burden of operating many important services themselves (e.g., logging).

Moreover, even those limited services were offered in a limited number of places. AWS had a few regions, and users frequently would find that calls to launch new EC2 instances would fail due to a lack of computing resources.

In 2020, the cloud computing picture is so different as to be astonishing. Amazon, Microsoft, and Google (AMG, for short) have firmly established themselves as dominant within the industry, with everyone else trailing far behind. They offer an enormous array of services that help accelerate application delivery and reduce the burden of operating application components. Just to pick a few examples: Google offers the very impressive Spanner, a SQL database for the cloud age. Microsoft just announced AI-enabled developer assistants to help create code more quickly and refactor code bases for better structure and maintainability. And AWS offers a complete satellite ground station service, allowing cheaper creation of space-based services.

Enabled by tens of billions of dollars in capital investment, AMG have also built out a global presence, with regions and computing facilities spread across the earth, thereby spreading the cloud computing revolution to users everywhere. Over the past four or five years, they’ve launched a plethora of services like those just cited, enabled by and designed for a world of infinite computing capacity available at a moment’s notice. 

It’s remarkable to think how far cloud computing has come in ten short years. It’s particularly important to emphasize what scale has brought to the field. Because of the cascade of dollars AMG have placed into their infrastructure, they now operate in a different plane than all other vendors in the tech industry. Hearkening back to dialectical materialism, the difference in quantity has now become a difference in quality.

It’s not like the progress we’ve seen over the past decade is going to stop, either. We’ll continue to see more change in the industry as cloud computing is more widely adopted and the portfolio of cloud services becomes ever broader. 

There are five areas you should keep an eye on in 2020. Here are my predictions for what will come to pass in the world of cloud computing in the first year of the next decade.

Digital Transformation Pressure Ratchets Up

As the role of IT continues to shift from “support the business” to “be the business,” the pressure on IT organizations to accelerate delivery of applications will only increase in 2020. 

In the past, IT operated what were referred to as “back office” applications: ERP, accounting, payroll, and so on. Today, information processing is being infused into front-line, customer-facing applications and products, and these offerings evolve much faster than traditional back-office applications. As company offerings become more IT-centric, business leaders are pressing IT to move faster and deliver next-gen applications and products.

The cloud-native revolution has also changed the dynamics of many companies’ markets: a new set of companies has joined the traditional cohort viewed as market competitors. These newcomers are digital-first, using technology to reenvision products and services, and for many incumbents they are feared more than traditional rivals.

The trend of new competitors entering markets and disrupting them via cloud-native technologies is only going to grow. One has only to look at the explosion of online mattress companies to recognize how profoundly rethinking an industry via the application of information technology can change its dynamics, to the disadvantage of incumbents.

I wrote about one such example a few weeks ago — MTailor uses a smartphone app, machine learning, and a global supply chain tuned for customization rather than mass production to deliver tailored clothes at an attractive price point. 

For a company like Levi’s, this is a completely different type of competitive attack. In the past, it vied with companies like Lee and Haggar. The battle turned on factors like mass production, global logistics, and on-site retailer promotions. Levi’s could win by building more factories, outinvesting competitors in a mass logistics system, and hiring more sales reps to work with retailers.

MTailor competes on convenience and customized fit, areas where Levi’s traditional strengths are a handicap. Of course, Levi’s retains many advantages, far larger revenues and a global brand among them. It’s not an equal battle; it’s David versus Goliath. Levi’s may win just by being so much larger.

No such size advantage is present against what many companies are beginning to see as a likely competitor: Amazon. One astonishing stat illustrates the point: the number of times Amazon is mentioned during other companies’ earnings calls. Those mentions are on a steep upward curve, and would undoubtedly be even higher if the data extended through 2019.

Digital transformation is a catchphrase for redesigning products and services to incorporate compute and data and better serve the changing preferences of consumers. And the pressure to become a cloud-native company is only going to build in 2020. 

Application Inertia and the IT Budget Crunch

There’s one fly in the ointment of digital transformation, though: legacy systems. Every company has them. Designed for vastly different infrastructure characterized by static configurations, unwieldy change practices, and protracted deployment timeframes, these applications are the very opposite of cloud-native. And they soak up 80-90% of every IT organization’s budget.

I wrote about application inertia recently, and provided guidance on how to manage a transition to cloud-native nirvana. It requires examining existing applications at a portfolio level and creating customized plans for each application based on its role in a new cloud-native environment.

One thing I didn’t write about was the economics of this process: If a huge proportion of IT budgets are tied up with just keeping the legacy applications running as-is, how can IT organizations fund this portfolio analysis and transformation work?

In my view, this will be an enormous issue in 2020. Simply put, there’s no way for companies to migrate to a cloud-native future based on traditional IT financing — where IT is viewed as a cost center and the most frequently voiced guidance is “do more with less.”

The need for digital transformation is clear. And CEOs are laser-focused on avoiding getting “Amazoned” (or Ubered, or Airbnbed, or choose your favorite example of a cloud-native entrant disrupting an industry). But simply increasing pressure on IT leaders to move faster within the constraints of current budgets won’t solve the problem.

Unlike traditional companies, cloud-native companies view software development as a core competence and fund it like a line organization, not an admin group. To cloud-native companies, software development is the product factory, and it deserves the same treatment auto companies give assembly: a key activity requiring significant investment.

Next year will see this IT funding issue rise to the fore. Frankly, I think it will be challenging for most companies to break free of the IT-as-cost-center mentality. The old cliché is that what you measure shows what’s important to an organization, but what you spend on is even more telling. Successfully making the transition to viewing IT as a core competence, and funding it appropriately, will mean the difference between cloud-native success and failure.

Four Companies Set the Pace for the Technology Industry

Wait a minute! Didn’t the Decade Past section above talk about how thoroughly AMG have broken clear of the pack in technology? Yes, it did. These three companies have cleared the field in cloud computing and are now firmly placed as the locus of tech innovation. 

They provide plenty of innovation themselves, of course, as the examples cited earlier indicate. But just as important, they also serve as the foundation of innovation ecosystems; they are the platforms startups target their offerings toward. And this isn’t just companies filling in functionality gaps (until the cloud provider gets around to its own offering, killing the startup that very kindly provided guidance about what customers needed from the platform). It’s startups creating products that couldn’t exist absent the scale and agility of the underlying platform.

To take one example, Snowflake is a vastly scaled analytics engine. Traditional infrastructure environments didn’t need Snowflake: their limitations meant analytics tools scoped to single machines and centralized storage were perfectly workable. But with the huge growth in data made possible by cloud computing, a new architectural approach is needed. The legacy vendors weren’t going to make that transition; they were locked into architecture assumptions tuned for legacy infrastructure. Analytics needed a new approach suited to cloud scale, and Snowflake created the offering. AMG will continue to serve as the innovation locus for the industry, and we’ll see lots more innovation on these platforms.

The fourth company serving as a pacesetter for the tech industry is interesting. I speak, of course, of VMware. Its journey to becoming a key part of every company’s tech strategy in the cloud era was by no means a certainty. 

Five years ago, VMware took the position that customers needed to choose between it and the hyperscale cloud providers. Most notoriously, at a major event, it characterized AWS as nothing more than a bookseller. And it viewed application deployment decisions as either-or — either placed in AWS and lost to VMware forever, or placed in VMware and completely disconnected from the capabilities AWS offers. 

Forcing a loyalty contest on customers is never a good decision, and fortunately VMware has done a 180 on its strategy. Instead of treating the deployment decision as an either-or, VMware has reinvented itself as the connective tissue between existing on-prem environments and all three of the big cloud providers. It has gone further and actually placed VMware infrastructure on the cloud providers, enabling customers to move away from running VMware in their own data centers to running it on the advanced data center technologies AMG operate.

The just-completed acquisition of Pivotal will help in the connective-tissue role as well. Pivotal brings application-level capabilities to the party, allowing VMware to extend its on-ramp all the way up the stack, and it adds a large cloud-native professional services organization to the mix.

Overall, this is a big win for users. As the previous two predictions noted, digital transformation via cloud-native technologies is the key objective for IT organizations, but significant challenges stand in the way of that transformation. VMware’s strategy reduces the friction of migration and gives it a key role in IT organizations’ future technology strategies. That’s a vast change from five years ago, and one welcome to IT organizations strapped for cloud-native skills.

Multi-cloud Hype Grows, but Hybrid is Where the Action is

There’s a tremendous amount of chatter in the industry about multi-cloud: the vision that applications can be easily ported from one cloud provider to another, thereby avoiding lock-in. Typically central to this vision is the use of containers and, possibly, Kubernetes. A number of vendors have offerings designed to aid users in achieving the nirvana of multi-cloud.

Frankly, this is a solution in search of a problem. As my application inertia post noted, the biggest problem for most IT organizations is getting to the cloud, not moving applications around the cloud.

Users who pursue this multi-cloud vision will confront issues unsolved by adopting containers and Kubernetes. While your business logic can be containerized, what will you do about the other functionality your application draws on, like databases or core operational services (e.g., logging)? AMG have long since moved beyond basic automated virtualization and now offer an amazing panoply of application-supporting services. Those services are quite provider-specific, and attempting to migrate an application that incorporates them runs into significant difficulty.

Making a cloud application portable requires the organization to make decisions on these issues:

  • Avoid provider-specific services altogether. This enables portability, but at the cost of forgoing high-productivity services, which extends initial application deployment timeframes, an undesirable outcome.
  • Use open source products in place of provider-specific services (see the sketch after this list). So, instead of using CosmosDB’s Cassandra interface, the development group could run its own Cassandra cluster. This assists portability, but means taking on more operational responsibility, which raises the cost of running the application. It also forces the IT organization to bet on its ability to run the software as a highly reliable service, pitting it against providers whose operational expertise is legendary.
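
To make the second option’s trade-off concrete, here is a minimal sketch, with hypothetical hostnames and credentials, of how the same driver-level code can target either a self-managed Apache Cassandra cluster or Cosmos DB’s Cassandra-compatible endpoint:

```python
# A minimal sketch of the "open source in place of provider-specific" option.
# The same CQL code path can target a self-managed Apache Cassandra cluster
# or Cosmos DB's Cassandra-compatible API. Hostnames and credentials are
# placeholders; requires the DataStax driver (pip install cassandra-driver).
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

def connect(host, port, user, password, use_tls):
    auth = PlainTextAuthProvider(username=user, password=password)
    ssl_ctx = ssl.create_default_context() if use_tls else None
    cluster = Cluster([host], port=port, auth_provider=auth, ssl_context=ssl_ctx)
    return cluster.connect()

# Self-managed Cassandra (default port 9042): portability preserved, but your
# team now carries the operational burden of running the cluster.
session = connect("cassandra.internal.example.com", 9042, "app_user", "app_pass", False)

# Cosmos DB's Cassandra API (TLS on port 10350): far less ops work, but the
# surrounding identity, billing, and tuning details are provider-specific.
# session = connect("myaccount.cassandra.cosmos.azure.com", 10350, "myaccount", "primary-key", True)

print(session.execute("SELECT release_version FROM system.local").one())
```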

Another aspect of application portability resides outside the application itself. People tend to underestimate how embedded an application becomes in the overall design and operations of a specific cloud provider. Examples of these kinds of meta-application factors include the following (one of them illustrated in the sketch after the list):

  • Identity management
  • Security controls and configuration
  • Networking configuration
  • Backups
  • Customized commercial terms gained by negotiation
  • Resource billing practices that make post-migration costs uncertain
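
To see how subtle this embedding can be, consider identity management, the first item above. A minimal, hypothetical sketch (bucket and key names are placeholders): on AWS, application code often carries no credentials at all, because the SDK resolves them from the instance’s IAM role, a mechanism that must be re-plumbed on any other provider.

```python
# Hypothetical illustration of provider-embedded identity: no keys appear in
# the code because boto3 resolves credentials at runtime from the EC2
# instance's IAM role via the metadata service. Porting this app elsewhere
# means re-plumbing identity, not just re-containerizing business logic.
# The bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")  # credentials come from the IAM role, invisibly
obj = s3.get_object(Bucket="example-app-data", Key="reports/latest.json")
print(obj["Body"].read()[:200])
```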

These issues have been recognized for at least half a decade, and the choices available to organizations seeking portability haven’t changed over that time. Portability forces a choice between speed and money: the benefits of the portability-focused approach are uncertain and deferred, while the costs are immediate and ongoing.

Hybrid computing will be where the action is in 2020, not multi-cloud. Hybrid is, admittedly, a messy term, with lots of ambiguity about what it specifically entails. However, I interpret it to mean a mixed deployment environment composed of existing on-prem infrastructure and AMG infrastructure, with the ability to migrate applications between the two, and with applications deployed in each environment able to communicate with one another to exchange data or kick off transactions across the on-prem/AMG boundary.
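
As a minimal illustration of that boundary-crossing pattern (all names hypothetical), a cloud-deployed application kicking off a transaction in an on-prem system of record might look like this:

```python
# A minimal sketch of the hybrid pattern: an application running in the cloud
# kicks off a transaction in an on-prem system of record. The hostname is
# hypothetical; it resolves via private DNS, and traffic crosses the
# on-prem/cloud boundary over a VPN or dedicated link, not the public Internet.
import requests

def post_order_to_onprem_erp(order: dict) -> str:
    resp = requests.post(
        "https://erp.corp.internal/api/orders",  # reachable only across the boundary
        json=order,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["order_id"]
```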

As the discussion of application inertia indicates, achieving hybrid is no small challenge. There is plenty of work to integrate on-prem and cloud environments. Unlike the theoretical benefits of multi-cloud, however, hybrid is right in line with the immediate trajectory of company technology priorities. 

One can expect to see lots of hybrid activity around tooling, best practices, performance tuning, and the like during 2020. For most IT organizations, getting hybrid right will be a major priority during the year.

The Blush Goes Off Edge Computing

The mania for edge computing will deflate considerably in 2020. The number of articles recently published about edge computing and (especially) 5G is insane. Wired.com, for instance, recently published a piece breathlessly outlining where the “5G data storm” will first hit. Large parts of the industry insist that computing must move to distributed locations, driven by workloads that find the bandwidth and latency of communicating to a central cloud data center too burdensome. 

Validation for this point of view was on display at re:Invent, when Verizon and AWS announced a partnership to deliver Wavelength, a metro-located infrastructure environment stocked with AWS kit, allowing 5G-based applications to connect to nearby computing resources.

The reason for all this? IoT, the Internet of Things: all of those computing-infused products springing from companies undergoing digital transformation. I am a believer in the bright future of IoT devices. However, I’m skeptical that we’re about to enter a brave new world of edge computing. Most people overestimate the proportion of IoT systems that require low latency, high bandwidth, and mobility. Typically cited as justifying the need for edge computing are things like autonomous vehicles, with the implication that there will be so many of these use cases that, by golly, edge computing is going to take over the world.

While there are undoubtedly use cases that require this profile of computing availability, they represent perhaps no more than 10% or 15% of overall IoT use cases. Just as common are use cases that transmit a few hundred kilobytes once a second from a fixed location. And even scenarios where lots of data is sent don’t necessarily justify 5G.
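
To put a rough number on that claim (figures illustrative), a device sending a few hundred kilobytes per second needs only a few megabits per second of sustained bandwidth, a small fraction of an ordinary Wi-Fi link:

```python
# Back-of-the-envelope check on the "typical" IoT profile described above.
# All figures are illustrative.
payload_kb_per_s = 300                     # "a few hundred kilobytes once a second"
mbps = payload_kb_per_s * 8 / 1000         # kilobytes/s -> megabits/s
print(f"sustained rate: {mbps:.1f} Mbps")  # ~2.4 Mbps

wifi_mbps = 100                            # modest real-world Wi-Fi throughput
print(f"fraction of a modest Wi-Fi link: {mbps / wifi_mbps:.1%}")  # ~2.4%
```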

One of the examples Wired puts forth is a factory with smart machines forwarding operational data to a computing facility. In Wired’s telling, that requires 5G to transfer the data. More likely, the setup will use Wi-Fi for transfer purposes; no one is going to implement a metered service in place of a free alternative if one is available.

There’s no doubt that IoT is and will be a huge phenomenon; it’s just that proponents of distributed computing/5G solutions will find IoT use cases requiring such powerful technology to be the exception rather than the rule. By the end of 2020, edge computing will begin to recede from vendor presentations and press promotions; within a couple of years, edge computing will be one of those “yeah, we used to talk about that a lot” topics, à la private cloud computing.

More interesting than Wavelength, to my mind, was AWS’s announcement of Local Zones. A Local Zone is an Availability Zone placed in a metro environment to let users connect to computing resources with single-digit millisecond latency. It offers a restricted set of AWS services, focused on EC2 compute and EBS storage; access to additional AWS services travels over AWS’s high-performance global network backbone rather than the public Internet.
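
For developers, targeting a Local Zone looks almost identical to targeting any other zone. A hedged sketch (VPC, CIDR, and AMI identifiers are placeholders; the zone name is the Los Angeles Local Zone announced at re:Invent): create a subnet in the Local Zone, then launch EC2 instances into it.

```python
# A sketch, under the stated assumptions, of placing compute in a Local Zone:
# create a subnet whose zone is the Local Zone, then launch EC2 into it.
# VPC ID, CIDR block, and AMI ID below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",        # placeholder VPC
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="us-west-2-lax-1a",  # the Los Angeles Local Zone
)

ec2.run_instances(
    ImageId="ami-0abcdef1234567890",      # placeholder AMI
    InstanceType="t3.medium",             # Local Zones launched with a limited instance menu
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```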

The Local Zone announcement carried endorsement quotes from Netflix, FuseFX, and Luma Pictures, all entertainment-oriented companies. What makes this offering interesting is what it implies: AWS has a set of customers who need to transfer very large amounts of data with very low latency, and it has clearly penetrated a highly demanding industry deeply enough to warrant creating a new offering. This symbolizes just how embedded AWS (and by extension, Microsoft and Google) are in user technology strategies, which reinforces the prediction that a few platform companies are dictating the direction of the technology industry.

Conclusion

The great economist Carlota Perez theorizes that technology revolutions go through two phases.

The first occurs when the technology is developed and its potential becomes evident. A mania for investing in it springs up, with large losses when the overinvested segment fails to deliver sufficient revenues for all investors to achieve payoff. This phase is associated with speculation bubbles, and heartache and bankruptcy are inevitable by-products.

The second, more interesting phase occurs when the new technology becomes sufficiently mature that it is absorbed into industrial practice. When the majority of users can apply the technology effectively and realize its benefits easily, it takes off and experiences widespread, rapid growth; in Silicon Valley-speak, it crosses the chasm.

Comparing cloud computing of 2010 and 2020 seems to indicate that it has reached the second phase of technology revolution. Companies now understand it, build their enterprise architectures around it, and envision a new generation of applications and products/services based on it. 

Naturally, it’s a more complicated picture than that. This year’s predictions focus on some of the challenges hidden within the big picture. It’s not easy for most users to make the transition to cloud-native, but making the shift is a prerequisite for computing in the 21st century.
