Snowball Edge with EC2: Computing Device or AWS On-ramp?


AWS just announced its latest version of the Snowball — a hardware appliance used to migrate data from on-prem locations to the device and then on to AWS, where it’s transferred to S3.

Except this version is different — it has computing on board. What? AWS is shipping an on-prem computing device? Predictably, a lot of pundits viewed this as AWS getting into the on-premise computing game. Many of them have long proclaimed that AWS would inevitably have to start offering hybrid computing via on-prem capabilities, because, well, enterprise.


So they greeted Snowball Edge with EC2 as proof that they were right all along, and that AWS was finally coming to its senses about how computing really needs to be done.

That’s the wrong way to view this announcement. Frankly, a Snowball Edge with EC2 is a pretty lousy computing device — heavy, delivered in a proprietary form factor, and much more expensive than alternative solutions.

Instead, I see Snowball Edge with EC2 as an AWS initiative designed to reduce the friction of migrating data into AWS with the ultimate goal of enabling applications to migrate to AWS, not a way to run AWS applications on-prem. In other words, this new Snowball variant isn’t about processing data on-prem, it’s about making Snowball a better on-ramp to AWS.

Why do I say this? Let’s look at the details of Snowball Edge with EC2.

It delivers a Snowball device with EC2 AMIs pre-installed. The AMIs themselves are defined in AWS and then placed on the device by AWS.

Once the Snowball device is installed at the customer location, users can launch EC2 instances that run in the Snowball; those instances offer network access, which means the instance software can be used as an ETL appliance, facilitating the transfer of data onto the Snowball.
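As a sketch of what that looks like in practice: the device exposes an EC2-compatible endpoint on the local network (over HTTP on port 8008), so standard tooling such as boto3 can launch the pre-loaded AMIs against it. The IP address, AMI ID, and instance type below are placeholders, not values from any real device.

```python
def ec2_endpoint(device_ip: str) -> str:
    """Snowball Edge exposes an EC2-compatible API over HTTP on port 8008."""
    return f"http://{device_ip}:8008"

def launch_etl_instance(device_ip: str,
                        ami_id: str = "s.ami-0abcd1234example",
                        instance_type: str = "sbe1.medium"):
    """Launch one of the AMIs pre-loaded onto the device.

    The AMI ID and device IP are illustrative placeholders; on-device AMI
    IDs carry an 's.' prefix, and sbe1.* are the Snowball Edge instance types.
    """
    import boto3  # AWS SDK; imported here so the helper above has no dependencies
    ec2 = boto3.client("ec2", endpoint_url=ec2_endpoint(device_ip))
    return ec2.run_instances(ImageId=ami_id, InstanceType=instance_type,
                             MinCount=1, MaxCount=1)

# Usage (against a hypothetical device on the local network):
# launch_etl_instance("192.168.1.100")
```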

The Snowball AMIs can have an IAM role associated with them during creation; this allows the ETL software to interact with Snowball storage via the AWS SDK/CLI.
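A minimal sketch of what that ETL software's load step might look like, running inside the Snowball-hosted instance and relying on the AMI's IAM role for credentials. The device's S3-compatible interface is served over HTTPS on port 8443; the bucket, key scheme, and function names here are illustrative, not part of any AWS API.

```python
def s3_endpoint(device_ip: str) -> str:
    """The device's S3-compatible storage interface (HTTPS, port 8443)."""
    return f"https://{device_ip}:8443"

def object_key(source_path: str, prefix: str = "ingest") -> str:
    """Map a local file path onto an S3 key; this naming scheme is illustrative."""
    return f"{prefix}/{source_path.lstrip('/')}"

def load_to_snowball(device_ip: str, bucket: str, source_path: str) -> None:
    """Copy one extracted/transformed file onto the Snowball's onboard storage.

    Credentials come from the IAM role associated with the AMI, so no keys
    are hard-coded. In practice you would also pass the device's certificate
    via the client's `verify=` parameter.
    """
    import boto3  # AWS SDK; deferred so the pure helpers above stand alone
    s3 = boto3.client("s3", endpoint_url=s3_endpoint(device_ip))
    s3.upload_file(source_path, bucket, object_key(source_path))
```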

So why is this new Snowball version important?

Well, one of the biggest factors constraining application migration to the cloud is the data associated with applications. So constraining, in fact, that Dave McCrory (@mccrory) coined the term “data gravity” to describe how the sheer difficulty of transferring massive amounts of data from one environment to another retards otherwise-desirable migrations. Anything that makes it easier to transfer data from an on-prem environment to AWS accelerates application migration.

In fact, Snowball (and its way-bigger big brother Snowmobile) is directly aimed toward that end. The original Snowball was a storage appliance allowing data to be loaded in a remote location and then sent to AWS, where the data was automagically transferred into S3. This reduced the friction associated with data transfer and uploading.

However, it did little to reduce the friction of getting the data into the Snowball device in the first place. Snowball devices offer an S3 interface to the onboard storage, so one could write a program to slurp local data and write it to Snowball. But that requires an on-prem server, and that means the grand plan to get apps into AWS is bottlenecked behind the weeks-to-months provisioning timeframes typical in on-prem environments.

AWS added Lambda support in 2016 via Snowball-hosted Greengrass, but that only provides post-onboarding processing, triggered by object upload.
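That post-onboarding model looks roughly like the following handler sketch (the event shape follows standard S3 notification records; bucket and key names are illustrative). Note that by the time such a function fires, the data is already on the device, so it does nothing to ease getting the data there in the first place.

```python
def handler(event, context):
    """Greengrass-style Lambda handler fired after an object lands on the device."""
    # S3 notification records carry the bucket and object that triggered the event.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]
    # Post-onboarding processing only: the object already exists on the Snowball here.
    return {"processed": f"s3://{bucket}/{key}"}
```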

So what more could AWS do to reduce the friction of getting data from its on-prem location onto Snowball?

Well, it could shift the data-processing executable from a customer-hosted server to the Snowball itself, thereby avoiding the need for any on-prem processing. This removes the provisioning bottleneck and reduces the time it takes a customer to deploy applications into AWS — which is AWS’s end goal.

So the resulting value chain of the Snowball Edge with EC2 is:

  1. Hardware device with large amounts of storage
  2. Configuration of device with AWS-originated AMI creation and transfer
  3. Non-user accessible device software to facilitate data storage, data encryption, and device transit
  4. User-accessible device instance(s) to enable data transfer via user ETL software
  5. Use of transport services to ship device to and from AWS
  6. AWS-located automated software to retrieve data from device and place into customer S3 storage

It’s clear that a number of different groups within AWS have to coordinate these six steps in the value chain, all to deliver the device and its associated functionality.

The length and complexity of this value chain illustrate how much work AWS is willing to put in to reduce customer adoption friction.

The important insight to take away from this example is not to be satisfied with the glib explanations proffered by tech punditry, but to look deeper to understand the true motive behind the offering. If you understand the importance AWS places on enabling customer adoption — simplifying the tasks the end user must do to deploy applications and reducing the friction of transacting with the company — then you’ll understand the true purpose of Snowball Edge with EC2.


  1. G. Hill says:

    Bernard, what are your thoughts on the recent AWS Outposts announcement? In my industry we still have some workloads that need to remain on-premise for the foreseeable future, even though we’re moving the majority of our workloads to the public cloud. Outposts is a welcome announcement as an alternative for running those on-prem workloads, though many details are still unknown at this point. Happy New Year!

    • Bernard Golden says:

      Outposts is interesting, although it will be basic compute and storage for the near-term. While those categories make up the majority of most organizations’ spend, it’s not clear to what extent application deployment requirements include other services; organizations that depend on them may find Outposts insufficiently capable of supporting their total deployment requirements.


  3. John Laban says:

    Another story SIMply told.

    Thanks again, Bernard, for your continued efforts to educate the industry, which is the first step toward changing intuitive mindsets and enabling behavioural change.

    • Bernard Golden says:

      Hi John: Many thanks for your kind words. Coming more and more to your POV about mindsets. Amazes me how many people in tech can’t extrapolate from data. Hope all is well.

  4. Asif Khan says:

    Maybe I’m misunderstanding the point here. How is this different than a simplified AzureStack deployment? I believe AzureStack can perform all these functions and more. You still need AzureDataBox (their version of Snowball) to transport data for a like-for-like comparison. Maybe the simplicity is Snowball’s selling point? Because AzureStack is definitely not simple.

    • Bernard Golden says:

      Not sure I understand your comment, as I did not address Azure in the post. You’re right, however, that Azure Stack is much fuller-featured than Snowball, which to me reinforces the point that Snowball Edge with EC2 is not directed toward a hybrid use case as much as being an easier on-ramp for customer adoption of the cloud-based AWS.

      Thanks for your comment.
