Coming Soon – Snowball Edge with More Compute Power and a GPU


I never get tired of seeing customer-driven innovation in action! When AWS customers told us that they needed an easy way to move petabytes of data in and out of AWS, we responded with the AWS Snowball. Later, when they told us that they wanted to do some local data processing and filtering (often at disconnected sites) before sending the devices and the data back to AWS, we launched the AWS Snowball Edge, which allowed them to use AWS Lambda functions for local processing. Earlier this year we added support for EC2 Compute Instances, with six instance sizes and the ability to preload up to 10 AMIs onto each device.

Great progress, but we are not done yet!

More Compute Power and a GPU
I’m happy to tell you that we are getting ready to give you two new Snowball Edge options: Snowball Edge Compute Optimized and Snowball Edge Compute Optimized with GPU (the original Snowball Edge is now called Snowball Edge Storage Optimized). Both options include 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and allow you to run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory. The additional processing power gives you the ability to do even more types of processing at the edge.

Here are the specs for the instances:

| Instance Name | vCPUs | Memory |
|---|---|---|
| sbe-c.small / sbe-g.small | 1 | 2 GiB |
| sbe-c.medium / sbe-g.medium | 1 | 4 GiB |
| sbe-c.large / sbe-g.large | 2 | 8 GiB |
| sbe-c.xlarge / sbe-g.xlarge | 4 | 16 GiB |
| sbe-c.2xlarge / sbe-g.2xlarge | 8 | 32 GiB |
| sbe-c.4xlarge / sbe-g.4xlarge | 16 | 64 GiB |
| sbe-c.8xlarge / sbe-g.8xlarge | 32 | 128 GiB |
| sbe-c.12xlarge / sbe-g.12xlarge | 48 | 192 GiB |

The Snowball Edge Compute Optimized with GPU includes an on-board GPU that you can use to do real-time full-motion video analysis & processing, machine learning inferencing, and other highly parallel compute-intensive work. You can launch an sbe-g instance to gain access to the GPU.
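On the device itself, instances launch through the EC2-compatible endpoint that Snowball Edge exposes, from one of the preloaded AMIs. As a rough sketch (the helper function, endpoint address, and AMI ID below are illustrative placeholders, not real values), assembling such a launch request might look like this:

```python
# Sketch: building a RunInstances-style request for a GPU-capable sbe-g
# instance. The AMI ID is a placeholder for one of the AMIs preloaded
# onto the device; run_instances_request is an illustrative helper.

def run_instances_request(ami_id, instance_type="sbe-g.xlarge", count=1):
    """Assemble the parameters for an on-device instance launch."""
    return {
        "ImageId": ami_id,            # a preloaded AMI on the device
        "InstanceType": instance_type,  # sbe-g.* sizes expose the GPU
        "MinCount": count,
        "MaxCount": count,
    }

# An EC2 client or the AWS CLI pointed at the device's local endpoint
# (illustrative address) would carry this request, e.g.:
#   aws ec2 run-instances --endpoint https://192.0.2.10:8243 ...
request = run_instances_request("s.ami-0123456789abcdef0")  # placeholder AMI
print(request["InstanceType"])
```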

You will be able to select the option you need using the console, as always.

The Compute Optimized device is just a tad bigger than the Storage Optimized device; in the original post, a photo shows the two sitting side-by-side on an Amazon door desk.

Stay Tuned
I’ll have more information to share soon, so stay tuned!

— Jeff;
