Coming Soon – Snowball Edge with More Compute Power and a GPU

I never get tired of seeing customer-driven innovation in action! When AWS customers told us that they needed an easy way to move petabytes of data in and out of AWS, we responded with the AWS Snowball. Later, when they told us that they wanted to do some local data processing and filtering (often at disconnected sites) before sending the devices and the data back to AWS, we launched the AWS Snowball Edge, which allowed them to use AWS Lambda functions for local processing. Earlier this year we added support for EC2 Compute Instances, with six instance sizes and the ability to preload up to 10 AMIs onto each device.
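To make the local-processing idea concrete, here is a minimal sketch of the kind of Lambda function you might run on the device, assuming it is invoked with a standard S3-style PUT event; the endpoint address, bucket names, and filtering rule are hypothetical placeholders rather than anything specific to Snowball Edge:

```python
# Minimal sketch of a local-processing Lambda handler for a Snowball Edge,
# assuming the device invokes it with a standard S3-style PUT event.
# The endpoint address, bucket names, and "anomaly" filter are placeholders.
import json
import boto3

# S3-compatible endpoint on the device (address and port are placeholders).
s3 = boto3.client("s3", endpoint_url="https://192.168.1.100:8443")

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Example filter: keep only JSON telemetry records flagged as anomalies.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        data = json.loads(body)
        if data.get("status") == "anomaly":
            s3.put_object(
                Bucket="filtered-results",   # hypothetical output bucket on the device
                Key=f"anomalies/{key}",
                Body=body,
            )
    return {"processed": len(records)}
```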

Great progress, but we are not done yet!

More Compute Power and a GPU
I’m happy to tell you that we are getting ready to give you two new Snowball Edge options: Snowball Edge Compute Optimized and Snowball Edge Compute Optimized with GPU (the original Snowball Edge is now called Snowball Edge Storage Optimized). Both options include 42 TB of S3-compatible storage and 7.68 TB of NVMe SSD storage, and allow you to run any combination of instances that consume up to 52 vCPUs and 208 GiB of memory. The additional processing power gives you the ability to do even more types of processing at the edge.

Here are the specs for the instances:

Instance Name                      vCPUs   Memory
sbe-c.small / sbe-g.small            1       2 GiB
sbe-c.medium / sbe-g.medium          1       4 GiB
sbe-c.large / sbe-g.large            2       8 GiB
sbe-c.xlarge / sbe-g.xlarge          4      16 GiB
sbe-c.2xlarge / sbe-g.2xlarge        8      32 GiB
sbe-c.4xlarge / sbe-g.4xlarge       16      64 GiB
sbe-c.8xlarge / sbe-g.8xlarge       32     128 GiB
sbe-c.12xlarge / sbe-g.12xlarge     48     192 GiB
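To put the table in context, launching one of these instances uses the same EC2-style API that the earlier Snowball Edge compute support introduced, pointed at the device's local endpoint. Here is a hedged boto3 sketch; the endpoint address and port, region name, credentials, and AMI ID are all placeholders that you would replace with the values for your own device and preloaded AMIs:

```python
# Sketch of launching a compute-optimized instance on a Snowball Edge via its
# local EC2-compatible endpoint. The endpoint address/port, region name,
# credentials, and AMI ID below are placeholders for your own device's values.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="http://192.168.1.100:8008",   # device's EC2-compatible endpoint (placeholder)
    region_name="snow",                         # placeholder region name for the local endpoint
    aws_access_key_id="YOUR_LOCAL_ACCESS_KEY",
    aws_secret_access_key="YOUR_LOCAL_SECRET_KEY",
)

# Launch an 8 vCPU / 32 GiB instance from one of the AMIs preloaded onto the device.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder ID for a preloaded AMI
    InstanceType="sbe-c.2xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

Because the device exposes an EC2-compatible endpoint, existing tooling that already speaks the EC2 API only needs an endpoint and credential change to target it.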

The Snowball Edge Compute Optimized with GPU includes an on-board GPU that you can use to do real-time full-motion video analysis & processing, machine learning inferencing, and other highly parallel compute-intensive work. You can launch an sbe-g instance to gain access to the GPU.

You will be able to select the option you need using the console, as always.

The Compute Optimized device is just a tad bigger than the Storage Optimized device. Here they are, sitting side-by-side on an Amazon door desk.

Stay Tuned
I’ll have more information to share soon, so stay tuned!

— Jeff;
