
Cloud computing has hit a tipping point.

What do I mean by that?


Cloud computing is now established as the default choice for application deployment. More important, the IT disciplines that accompany application design and operation are now cloud-centric.

This means application architects assume infrastructure transience, horizontal scaling, and topology partitioning (microservices). It means operations groups recognize DevOps is table stakes, now that infrastructure availability is measured in seconds or minutes, not weeks or months. And it means that IT groups are using services rather than implementing components, e.g., Redshift rather than Teradata. They want the function, not the support.

In short, cloud has won the hearts and minds of IT users. The significance of this tipping point is profound.

It will destroy the legacy data center-focused vendors.

The cloud giants will generate hundreds of billions of dollars of annual revenue.

And we will see the penetration of IT into every product and service — so much so that information technology will be invisible to us, pervading every aspect of daily life so thoroughly that it will be unremarkable.

The Rise of the Cloud Giants

Everyone is familiar with technology S curves — innovation follows a pattern of slow adoption at introduction, very rapid growth after proof of value, and deceleration when a market is saturated. Here is an example of an S curve mapped onto Geoffrey Moore’s well-known chasm model.
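
To make the shape concrete, here is that S curve as a minimal Python sketch: a plain logistic function whose midpoint and steepness are illustrative assumptions of mine, not fitted market data.

    import math

    # Logistic adoption curve: slow at introduction, steep after proof of
    # value, flat at saturation. Midpoint and steepness are illustrative
    # assumptions, not fitted market data.
    def adoption(year, midpoint=2018, steepness=0.9):
        """Fraction of eventual market adoption reached by a given year."""
        return 1 / (1 + math.exp(-steepness * (year - midpoint)))

    for year in range(2012, 2025, 2):
        print(year, "#" * int(40 * adoption(year)))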

Cloud computing has now entered the steep growth phase of its lifecycle. I recently analyzed the latest quarterly results of the big three providers (AWS, Microsoft Azure, and Google Cloud, aka AAG). The chart below summarizes my revenue estimate:

[Chart: estimated quarterly revenue for AWS, Microsoft Azure, and Google Cloud]

As to the AAG growth rate, I wrote:

“The blended growth rate of the cloud providers, adjusted for revenue percentages of the individual providers, is on the order of something like 60 percent. This indicates the big three revenues might achieve something around $23 billion of revenues for 2017, $39 billion for 2018, and $62 billion for 2019.”
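
As a quick sanity check on that arithmetic, here is a minimal sketch that compounds the 2017 base forward. The starting rate and its deceleration are assumptions chosen only to reproduce the quoted estimates, not reported figures.

    # Compound the estimated 2017 big-three (AAG) revenue base forward.
    revenue = 23.0  # estimated 2017 AAG revenue, in $ billions
    growth = 0.69   # assumed blended growth rate

    for year in (2018, 2019):
        revenue *= 1 + growth
        print(f"{year}: ~${revenue:.0f}B")
        growth -= 0.10  # assumed mild deceleration as the base grows

    # Prints 2018: ~$39B and 2019: ~$62B, matching the estimates above.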

Some in the industry believe the big providers are nearing a rapid deceleration. One pundit predicted a dramatic drop in AWS's growth in Q3 2018, when it hits $18 billion in annual revenues. It's not clear how he came to that conclusion, since he offered nothing more than an assertion as evidence for his opinion.

Frankly, that's wishful thinking. There is a huge current TAM (Total Addressable Market) for AAG to go after (more on that in the next section). There is no reason to believe customer adoption is going to slow down.

In fact, it’s just the opposite. Google believes its cloud could run bigger numbers than its ad business. AWS believes that its business could be larger than the e-commerce side of the house. I know from personal conversations with AWS executives that they believe it can be a multi-hundred billion dollar business.

Over the next few years, we can expect to see the big providers grow to huge revenues. More important, the big provider clouds will become the default computing substrate for information technology, with the vast majority of applications deployed into these environments.

The Incumbents Erode

The second quarter of 2017 was terrible for the incumbent vendors.

While IBM and HPE both aver that they are in the midst of turnaround plans that will soon see their revenues turn up, one has to be skeptical. There is no evidence of this, and their presentations of their strategies are risible. “Enterprise strong”?

Dell is in a somewhat different position. Michael Dell bought both the eponymous Dell and EMC on the cheap and only has to manage for cash flow to pay off the loans that funded the purchase. A brilliant strategy … for Michael Dell. It’s not clear, however, how well a “grow share in a shrinking market” strategy plays out long-term. Industry observers recognize the horrible state of the incumbents in a tipping point world.

The Response of the Industry

To me, the response of the industry is puzzling, and in this I include both vendors and end users.

Instead of recognizing the obvious and aggressively planning for a cloud-first world, most seem to be beating a stubborn retreat, clinging to the rapidly disappearing world of physical kit and grudgingly acknowledging that "some" workloads will be deployed into cloud environments.

The mantra of the day is "hybrid IT," meaning a mix of on-premises and cloud-deployed applications. The attitude seems to be "there's a trillion dollars' worth of IT equipment sold every year — so there's a bright outlook for the future of on-premises applications."

But people who push hybrid IT confuse today’s reality with tomorrow’s expectation. As we’ve just seen, the big vendors are hemorrhaging revenues. So they’re not going to contribute to that trillion dollars.

Others accept that the old hardware approach is dead, but believe the on-premises concept will be saved by hyper-converged infrastructure — in other words, there will still be a trillion dollars spent each year, just on different stuff, and maybe from different vendors.

The reality is that on-premises infrastructure is in terminal decline and nearing collapse. The incumbents' revenue drops will accelerate as users continue to climb the steep part of the cloud S curve and IT spend shifts to AAG. The only question remaining is what the endgame is for these companies.

Looking at the user side of the equation, it’s a muddle. There are some companies all-in on public cloud. There are some that emphatically state that they couldn’t possibly use public cloud.

The rest maintain that, just like the vendors, they are pursuing a hybrid cloud strategy. The only thing is, they’re not. They use public cloud but their on-premises infrastructure is the same old legacy stuff. Private clouds are so rare they might as well be on the endangered species list. Cloudbursting is much talked about, but never seen.

All of this end-user hybrid talk makes for better feelings all around. It doesn't offend the legacy vendors IT still has to deal with for on-premises kit. It makes legacy application-focused employees less anxious about their job security. It probably serves as a talking point for the CIO when he or she is pointedly asked by the CEO about public cloud use and why things aren't moving faster.

But it doesn’t solve the core problem: directing IT toward the future and focusing effort on the only thing that matters: applications. All of this hybrid talk masks the real issue: how quickly the organization can shift toward delivering business value rather than managing plumbing.

The Future of IT

In a sense, the future of IT has never been brighter. Driven by the ongoing shift from atoms to bits (aka digitization), IT is moving from the role of cost-center support organization to that of core product/service functionality provider.

But to succeed in this world, IT needs to become cloud-centric and drop the hybrid cloud subterfuge.

IT leaders need to make the following changes:

Recognize the future for what it is

Cloud computing is the future. Any decision you make that does not take that as a given is a mistake. AAG will come to dominate the infrastructure world, and it’s crucial you recognize that and orient your strategy around it.

The cloud tipping point is a nexus between the past and the future, just as the PC, x86, and TCP/IP were. They became the foundation upon which IT operated. Cloud is the next one, and it will be just as dominant as they were.

If every part of every application design, deployment, and operation isn’t centered around using cloud, you’re doing it wrong.

Don’t fool yourself that you can operate infrastructure cheaper than AAG

They invest $30 billion a year in data centers. They use machine learning to run them more efficiently. They have a history of cutting prices against competitors, who invariably lose, go out of business, or spiral into desperate financial conditions: Borders, Circuit City, Macy’s, et al.

Now you’re the competitor. Think you’re smarter than Macy’s? Have deeper pockets than Walmart? Understand your customers better than Borders? You’re fooling yourself.

And when someone in the organization comes to you and “proves” they can operate an application cheaper on-premises than in the cloud? Keep in mind that your job is overall cost management and that’s very different than comparing the purchase of a single server versus running a virtual machine on AWS.

Let me share an anecdote to illustrate the difference. During the period I worked at Dell (after it acquired the company I was with), I attended the inaugural Technology Business Management conference, which focuses on how IT organizations can operate like a, well, like a business.

I attended a session featuring a speaker from Dell discussing their adoption of Apptio, which sponsors TBM. He described how they started an initiative to track costs. The initial step was performing an inventory. They found 5,000 servers — 5,000 servers — no one would claim. That CIO was paying to depreciate the machines, power them, place them in a rack and connect them to a network, use a maintenance contract for break/fix, back them up, and monitor them.

All of that went into the IT budget. At Dell. A company with $50 billion in revenues. So don't fool yourself that you can run infrastructure better than AAG. You can't. Don't even try.
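
To see why those orphaned machines matter, here is a back-of-the-envelope sketch of their annual carrying cost. Every figure is a hypothetical assumption for illustration; the session reported none of these numbers.

    # Rough annual carrying cost of 5,000 unclaimed servers; all figures
    # below are hypothetical assumptions, not numbers from the Dell talk.
    SERVERS = 5_000

    annual_cost_per_server = {
        "depreciation": 5_000 / 3,   # assumed $5K server, 3-year schedule
        "power_cooling": 800,        # assumed power draw plus cooling
        "rack_network": 500,         # assumed rack space and port share
        "maintenance": 400,          # assumed break/fix contract
        "backup_monitoring": 300,    # assumed backup/monitoring tooling
    }

    total = SERVERS * sum(annual_cost_per_server.values())
    print(f"~${total / 1e6:.1f}M per year for servers no one claims")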

Stop using hybrid as an excuse

Yes, of course you have existing applications and infrastructure. Of course you can’t move everything right away. Some applications may never be moved because of constraints like contracts, licenses, security, or compliance.

But I see way too many IT organizations trotting out these issues like a security blanket to justify a go-slow approach. Hybrid is a constraint, not an ideal. Your job is to figure out how to reduce the on-premises legacy portion of the hybrid as much as possible, as fast as possible.

And don’t look to analysts, SIs, or your current on-premises infrastructure vendors. They have a vested interest in dragging out your hybrid strategy for as long as possible.

Make an aggressive plan to go all-in on public cloud

Then make it more aggressive. Your company will be fighting for its survival in the brave new digital world. The cliche of the day is to avoid being Uber-ed. The truth in that statement is that new competitors can pop up at any time and from unexpected directions.

Are you planning to have speed, agility, scale, and flexibility in your bag of tricks? Or are you going to be mired in the need to upgrade the network switch, or discussions of why it’s not possible to create a production mirror for testing purposes, or all of the thousands of details associated with running your own infrastructure?

You only have so much time to run your organization. Where do you want to focus it?

Evaluate your staff for cloud readiness

There’s a fair amount of discussion in the cloud world about the critical need for skills (and in fact that’s why I work in partnership with Simplilearn). Without advanced skills in cloud architecture and operation, you’ll just end up building the same old inflexible systems and placing them in the whiz-bango cloud for no net benefit.

The question is how to obtain those skills. One school of thought, which can be summed up as the Gartner Bimodal IT school, advocates hiring incremental staff with the needed skills. Another school advocates building skills internally, with a strong emphasis on maintaining morale — after all, if existing staff sees new employees brought in to work on new systems, won't they feel bad?

The challenge is to build skills as quickly as possible. It makes sense to train existing staff because they have valuable system, organization, and company knowledge.

But make sure they’re willing to learn new skills. Another anecdote: I gave a presentation at a customer workshop on moving to a cloud-forward, DevOps-based development lifecycle. One attendee raised his hand and made the observation that his current job consisted of manually installing WebSphere, he was happy doing it, and he had no plans to make any changes in his tasks going forward, thank you very much.

You can’t live with that. The entire organization needs to move at the pace of the digital economy. And if existing staff can’t or won’t make the shift, they need to go.

That sounds harsh. But it’s your job as a leader. If you can’t make hard decisions about skills and staffing, you’ll deserve the pink slip that eventually lands on your desk.

Conclusion

Cloud computing has arrived at a tipping point. Technology tipping points are nothing new for IT. IT organizations have weathered technology transitions many times before: mainframes to PCs, LANs to the internet, RISC to x86.

The difference this time is what’s at stake. Previous transitions were internal to IT. If it took a couple of extra years to get off of the mainframe, well, maybe there were some budget hits, but it was not a big deal.

Cloud computing is very different, because IT is different now. Today IT is the core of a company’s products or services, along with how it markets and sells them. Not to mention how it engages with customers and partners.

In other words, IT is the most critical resource within companies. And missing the cloud tipping point isn’t an inconvenience. It means death in the marketplace for the company.

Any IT leader worth his or her salt who hasn't got a high-priority, all-in cloud plan, who is satisfied with a go-slow hybrid approach, who lets keeping a happy workforce take priority over moving faster, is a failure. Don't let that failure be you.

22 Comments

  1. […] cloud tipping point: Cloud is now the default choice for forward-looking organizations. Writes Bernard Golden, “More important, the IT disciplines that accompany application design and operation are now […]

  2. Bill McColl says:

    Great article Bernard. Powerful, accurate, and very well argued. I totally agree with the predictions. Those in the non-cloud IT world and advocates of hybrid and on-premises just don't get how rapidly our industry is changing. Amazon has done a great job of getting us to this tipping point. Now Azure and Google are competing seriously too. And one or two others have the global scale to join in. Among them is my own company, Huawei, which earlier this year formed its new Public Cloud Business Unit. We believe that our current global scale in communications and IT is just the start. Like the other handful that will win this market, cloud could end up being our biggest business.

    • Bernard Golden says:

      Thanks for your comment, Bill. For sure the cloud battle is now one for the giants; the table-stakes ante is upward of $5 billion/year. Huawei is one of a handful of companies in the world that can be a player. The main question is how quickly it can build out a portfolio of services competitive with the big three.

  3. G Hill says:

    Thoughtful article Bernard. I agree on shifting energy/passion on moving to the public cloud, not fighting it. Resistance is futile 🙂

  4. mariano ammirabile says:

    Hi, I like the article, though I do not agree with all the points. In my opinion IT evolution is not black and white, and public cloud will not be the answer to all the questions, so hybrid cloud will have an important role. I like one of the comments you received about moving from an infrastructure to an application point of view; I think this is what drives the choice. If you were right, I would be very worried to see just two or three firms monopolize the world.

    • mariano ammirabile says:

      I forgot to mention that in the early 1940s, IBM's president, Thomas J. Watson, reputedly said: "I think there is a world market for about five computers." The idea of only three cloud providers is not new; the IT market has expanded because he was wrong!

  5. Josh Hurley says:

    Good article Bernard. One note, the Azure numbers are likely understated. Azure will likely surpass AWS by the end of the year, more at:
    http://www.investopedia.com/news/microsoft-could-surpass-amazon-cloud-computing-year-amzn-msft/

    • Steve says:

      Microsoft's cloud computing revenue is made up of Windows Server, server products (such as SQL Server), and Azure. Microsoft does not break this out into segments, so it's impossible to say what Azure revenue is.

      Microsoft often sells Azure credits as part of large Enterprise Agreements, and these count toward Azure revenue regardless of whether a customer uses them. I have heard from a very large Microsoft software reseller (though I can't quantify this personally) that 1 in 4 companies have Azure credits that they do not use. Given this, it could be that JP Morgan's estimate of $1.2 billion in Azure revenue is an overestimate. They are certainly not anywhere near the scale of AWS, which is not surprising when you consider that Azure VMs only came out of preview in 2013 and AWS has been running for 10+ years. I am sure they will grow faster than AWS in percentage terms, but then it's easier to grow a $3 billion-a-year company than a $15 billion-a-year company in percentage terms.


  7. Andrew Duggan says:

    Great article thanks Bernard

    I do think there is still a "tipping point" where it makes sense to run hybrid cloud rather than all-in. This is on the basis that the on-premises portion is indeed private cloud and includes all the cloud practices of orchestration and automation that go with it.

    This is probably going to be in the order of 1000+ VMs in a well-managed, efficient environment, and these need to be non-elastic workloads. At this scale it may be possible to run lower infrastructure costs than the buy price from AAG. So it may make sense to keep part of your core fleet in a private cloud. Of course, if you have a properly implemented private cloud you are probably already using it in concert with public cloud consumption anyway.

    Fully agree that organisations running a few hundred VMs, or those that have just declared their VM farm to be private cloud, need to seriously consider an all-in transition to cloud.

    I have seen all the objections and architected a number of all-in migrations. Most of the objections are based on lack of knowledge and/or lack of motivation to innovate. Perhaps you don't understand the applications, the interconnections, or the capacity requirements, etc. I remember having similar objections about how applications could never possibly be run in virtual machines on VMware.

    Only one issue remains with your article: the hottest thing in IT 5 years from now probably hasn't been invented yet. It is highly likely it will be based in the cloud, but it's hard to rule out anything in this industry – that's what makes it fun to work in this space.

    • Bernard Golden says:

      Hi Andrew:

      Thanks for the comment. I guess we’ll have to agree to disagree on on-premises cloud. I understand that with the right circumstances and with the right workload an on-premises cloud could be less expensive. The problem is that pesky circumstances never seem to cooperate, and then the elegant model falls apart. To quote the great philosopher Yogi Berra: “In theory, there is no difference between theory and practice. In practice there is.”

      As to your last paragraph, we are in complete alignment. The pace of technology innovation is staggering and it is accelerating, a la Ray Kurzweil. I believe that the cloud will be the computing substrate for that innovation going forward.

      Thanks so much for commenting and being part of the conversation. I really appreciate it.

  8. Techguy says:

    Thought-provoking article. Just like to point out that premise and premises are two different words which mean two different things:
    • premise – something assumed or taken for granted
    • premises – (1) a tract of land with the building thereon, or (2) a building or part of a building

    So if you say, "I like VDI on premise," what you're saying is "I like the idea of VDI." If you say, "I like VDI on premises," you're saying, "I like VDI inside my building."

  9. Brian Cox says:

    (Disclosure: I work for Nutanix, which provides infrastructure to both on-premises data centers and off-premises service providers.)

    I agree with Steve Chalmers' comment that IT needs to be viewed through the lens of the application services IT delivers. The infrastructure is simply a means to an end.

    All the business leaders want is to provide superb customer service, deliver new products and services more quickly, free up cost to invest elsewhere in the business and have the applications that support these activities be constantly available whenever needed. IT must enable these activities at the pace business needs and not be a hurdle or roadblock.

    The business does not fundamentally care whether the underlying infrastructure is on-premises or off-premises–just that it meets those business objectives.

    Thus, business leaders don't love the public cloud because it is the public cloud. Rather, they like the public cloud in many cases because it does a better job of delivering the application services that meet the business objectives than legacy IT infrastructure has done. They are looking for an experience, fundamentally, rather than being religious about an architectural approach.

    If IT can deliver applications on demand when needed, free up costs, and minimize business risk, then the business is happy. It's not solely a question of an off-premises or on-premises architectural model. Otherwise, Amazon would not have purchased Whole Foods yesterday. Different retail customers have different needs in regards to their desired mix of price, selection, and availability.

    Similarly, businesses consume applications with differing objectives in regards to latency, data governance, security, cost, and uptime. There is not just one answer, as there are different mixes of needs. Some are fulfilled better by an off-premises public cloud and some by an on-premises data center/private cloud. The ideal would be to have policy engines which deploy applications across a mix of off-premises and on-premises infrastructure based on the objectives the business sets.

    • Dan Orton says:

      Bernard,

      Excellent and concise content!

      Brian,
      The business will absolutely care. Critical resources need to be available for on-premises business objectives. The make/buy decisions revolve around data center square footage (opportunity cost of space), cooling/electricity costs (generators/battery backups for blackouts/brownouts), hardware racking and stacking (3-year hardware refresh cycles), DR/backups, HA (with tested failover configurations), off-site replication scenarios, etc. But if we can outsource these commodity services and focus on proprietary development (our business), then that's an easy win. Pay a specialist, rather than specialize in it ourselves. If a business doesn't fundamentally care, it will fundamentally hemorrhage revenues.

      “The business does not fundamentally care whether the underlying infrastructure is on-premises or off-premises–just that it meets those business objectives.”

  10. Bernard, I really like the way you captured the key guidance for Enterprise IT today.

  11. Steve Chalmers says:

    Exceptional and well written discussion, and I highly recommend this piece.

    However, there is a basic framing of the problem which I think needs more discussion. (That’s code for I disagree strongly with at least one fundamental assumption underlying the analysis underlying the article.) Disclosure: I worked for HP and then HPE for almost 37 years, a career for which I am very grateful, before prematurely retiring when the HPE corporate chief technologist’s office was dissolved last fall. And even though I have nothing bad to say about HPE, I am bound by a nondisparagement clause in responding to one of the points you make.

    Here's my concern: I think the entire IT infrastructure discussion should be framed, not in terms of the infrastructure, but instead from the viewpoint of the applications the business needs to run. If you look back at the evolution of applications, after a Tower of Babel era in the 1950s and early 1960s when apps were custom-written for specific machines, the first dominant environment to write portable applications to was the IBM 360 mainframe family. After a similar Babel era of lower-priced minicomputers came the C/POSIX/Unix era, followed by the Windows APIs and libraries in all their forms, and finally by the current Linux-and-500-open-source-packages era. Notice that I'm not talking about how mainframe hardware gave way to collections of minicomputers gave way to client-server gave way to the Internet; I'm talking about the APIs to which the data center side apps themselves are written. For this discussion take APIs as broadly as possible, to include not just library and kernel calls, but even the scripting surrounding install/deploy/tune of the app.

    Those APIs (those whole application development environments) are a natural monopoly. There should only be one so app developers don’t have to duplicate work. Having three just prevents the one from extracting monopoly rents from customers the way IBM did with mainframes, or Microsoft did after that. Do not ever forget that for all its market power, IBM OS/2 failed simply because Microsoft had soaked up all the available software company investment first, and there was no capacity left to port everything from Windows to OS/2.

    Back in the late 1990s I was working on a very early server consolidation effort — we even had a manufacturing line to build regular arrays of our Unix boxes for bulk sale and installation at customer sites — and in discussions we realized that a big part of solving the customer's whole problem was that the surface the application was written to was too intricately intertwined with the OS configuration and tuning, making it really hard for apps to share an OS instance. What we were describing was what "containers" are starting to address today… and yes, VMware and VMs dominated what customers actually bought in the two decades after that strategy session. Theoretical best doesn't always win in this business.

    Over the last 5 decades, when a non-captive software developer goes to write an application, they are forced to choose whose infrastructure to deploy it on first. To use that application, the customer has to use that infrastructure or another to which the app has been ported. 40 years ago that meant tribes of people aligned with IBM mainframes, or IBM minis, or DEC, or HP, or any of 10 other companies no one remembers. When someone writes an app today, absent a compelling need for Oracle or similar infrastructure, they would be stupid not to write it to the current open source ecosystem for deployment on AWS first, and to figure out how to port it to Azure and maybe Google before they run out of ability to invest. It is this conclusion, about app development rather than about choosing infrastructure, which I think should have been reached first.

    The natural monopoly of application runtime/deploy environments means there’s room for AWS, Azure, and maybe Google. No one else has any more chance of success than Wang and Prime and Data General and Pyramid and Elxsi (sp?) did of attracting a critical mass of app developers in the 1980s. Or than the BUNCH (Burroughs, Univac, NCR, CDC, and Honeywell mainframes) did after IBM mainframes won in the 1960s. Although I cannot speak as an insider on this topic, as an investor I think HPE made a rational choice to abandon its non-top-3 market presence here.

    If you look at AWS, Azure, and the like today, they are extremely good at deploying server tasks, mediocre at supporting those with tuned network infrastructure, and really bad at supporting those with tuned storage. Remember all that intertwining of application development with OS configuration on a per-app dedicated system 20 years ago? That wasn’t just about optimizing deploying threads and processes and scheduling within a CPU or a server for transaction throughput: a lot was about tuning storage access. I think AWS and Azure will get there (Google already knows how but hardly anyone writes to their back end services), they’re just not there yet. This means there are a lot of legacy apps which won’t port cleanly to those environments — either they won’t perform or will be very expensive to run at scale. Or are just cheaper to run legacy than to port (which is why there are still IBM mainframe apps running 40 years after that technology peaked).

    So an IT team today should be (1) thinking, on a case by case basis, rather than staying with the past or leaping into the future; (2) always selecting new apps architected for the web, not newly deploying legacy where there’s a choice; (3) looking seriously at shifting the bulk of applications to the web, absent security constraints like banking or medical (HIPAA) have. Whether a business itself is growing like gangbusters or maturing into decline makes a big difference on the appropriate level of investment in porting the past to the future.

    Closing observation: what the web giants did was disintermediate the classic server, storage, and now network companies. They went back to the suppliers to those companies and bought the underlying technology, partly because they didn’t need the value which the legacy IT vendors add, and partly because they didn’t want to pay the cost of creating that value. Understand it for what it is: disintermediation, not obsolescence.

    A second closing observation: the IT business evolves glacially. The most profitable path for a legacy IT equipment company, whether that's Dell/EMC or HPE, is likely to serve its installed base well at traditional margins. It's not to arbitrarily cut prices, which will not work as a response to disintermediation. It's not to, as Bill Hewlett put it, attack fortified hills (enter businesses where there are already entrenched competitors).

    But there is a play to leapfrog the current generation of technology, to become a technology provider the web giants go to (like Intel, Samsung, etc) rather than an intermediary. That’s what memory centric computing (and HP Labs’ “The Machine” program, and the Gen-Z interconnect I spent 2 years on) are about, and why the Pathforward award we heard about in the last few days is important. Is there risk in transitioning from intermediary back to technology provider? Absolutely. Is it more rational than attacking a fortified hill? Absolutely.

    • Bernard Golden says:

      Wow. Longest comment I've ever seen! 🙂 I completely agree with your three-part IT prescription. The problem is that many, many IT organizations, enabled by vendors, are pursuing a go-slow, keep-on-premises-infrastructure approach and calling it hybrid. And with that kind of mindset, it's easy to find reasons to rationalize leaving current applications untouched — rather than taking the aggressive approach you outline.

      BTW, I also agree that the lens this all should be seen through is applications, and what enables applications best.

      • Steve Chalmers says:

        Yes, this should play out in the market as independent legacy infrastructure and cloud infrastructure companies compete with each other, complete with the expensive (20 cents on the dollar just in selling costs) legacy model competing against a self service, very low selling cost web model. Can’t blame the legacy sales reps for saying what they need to say to do their jobs.

        Will be interesting to see in 20 years whether Meg was right in separating into distinct companies the services people (who need to be making recommendations on cloud vs legacy, and helping implement) from the legacy product people (who only eat if legacy gear is sold, since the cloud providers disintermediated us).

        And yes, my wife says I'm too young to retire and should direct my long-windedness at someone, anyone, but her. 🙂

  12. A very strong statement, but spot-on. As a person who cut his teeth on what are increasingly legacy practices (with all the attendant enterprise IT drama and limitations), the cloud revolution feels like a cool shower on an oppressively hot day. I pivoted with joy to AWS and Azure.

    Even so, many of my colleagues aren’t so enthusiastic. I see a great shake-out in our industry’s future as cloud-native methods move across the landscape.

  13. John Laban says:

    A story well told.

