AWS Announces Nine New Compute and Networking Innovations for Amazon EC2
New Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed Graviton2 processors, delivering up to 40% improved price/performance over comparable x86-based instances
New Amazon EC2 machine learning instances, featuring AWS-designed Inferentia chips, offer high performance and the lowest cost machine learning inference in the cloud
AWS Compute Optimizer machine learning-powered instance recommendation engine makes it easy to choose the right compute resources
AWS Transit Gateway now supports native IP multicast protocol, integrates with SD-WAN partners to enable easier and faster connectivity between customers’ branch offices and AWS, and provides a new capability that makes it much easier for customers to manage and monitor their global networks from a single pane of glass
VPC Ingress Routing allows customers to easily deploy third-party appliances in their VPC for specialized networking and security functions
SEATTLE–(BUSINESS WIRE)–Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced nine new Amazon Elastic Compute Cloud (EC2) innovations. AWS already has more compute and networking capabilities than any other cloud provider, including the most powerful GPU instances, the fastest processors, and the only cloud with 100 Gbps connectivity for standard instances. Today, AWS added to its industry-leading compute and networking innovations with new Arm-based instances (M6g, C6g, R6g) powered by AWS-designed Graviton2 processors, machine learning inference instances (Inf1) powered by AWS-designed Inferentia chips, a new Amazon EC2 feature that uses machine learning to optimize the cost and performance of Amazon EC2 usage, and networking enhancements that make it easier for customers to scale, secure, and manage their workloads in AWS.
New Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed Graviton2 processors, deliver up to 40% improved price/performance over comparable x86-based instances
Since their introduction a year ago, Arm-based Amazon EC2 A1 instances (powered by AWS’s first-generation Graviton chips) have provided customers with significant cost savings by running scale-out workloads like containerized microservices and web tier applications. Based on these cost savings, combined with growing support for Arm from a broad ecosystem of operating system vendors (OSVs) and independent software vendors (ISVs), customers now want to run more demanding workloads with varying characteristics on AWS Graviton-based instances, including compute-heavy data analytics and memory-intensive data stores. These diverse workloads require capabilities beyond those of A1 instances, such as faster processing, higher memory capacity, increased networking bandwidth, and larger instance sizes.
The new Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed Graviton2 processors, deliver up to 40% better price/performance than current x86 processor-based M5, R5, and C5 instances for a broad spectrum of workloads, including high performance computing, machine learning, application servers, video encoding, microservices, open source databases, and in-memory caches. These new Arm-based instances are powered by the AWS Nitro System, a collection of custom AWS hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage, to reduce customer spend and effort when using AWS. AWS Graviton2 processors introduce several new performance optimizations versus the first generation. AWS Graviton2 processors use 64-bit Arm Neoverse cores and custom silicon designed by AWS, built using advanced 7 nanometer manufacturing technology. Optimized for cloud native applications, AWS Graviton2 processors provide 2x faster floating point performance per core for scientific and high performance computing workloads, optimized instructions for faster machine learning inference, and custom hardware acceleration for compression workloads. AWS Graviton2 processors also offer always-on fully encrypted DDR4 memory and provide 50% faster per core encryption performance to further enhance security. AWS Graviton2 powered instances provide up to 64 vCPUs, 25 Gbps of enhanced networking, and 18 Gbps of EBS bandwidth. Customers can also choose the NVMe SSD local instance storage variants (C6gd, M6gd, and R6gd), or bare metal options for all of the new instance types.
The new instance types are supported by several open source software distributions (Amazon Linux 2, Ubuntu, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, FreeBSD, as well as the Amazon Corretto distribution of OpenJDK), container services (Docker Desktop, Amazon ECS, Amazon EKS), agents (Amazon CloudWatch, AWS Systems Manager, Amazon Inspector), and developer tools (AWS Code Suite, Jenkins). Already, AWS services like Amazon Elastic Load Balancing, Amazon ElastiCache, and Amazon Elastic MapReduce have tested the AWS Graviton2 instances, found they deliver superior price/performance, and plan to move them into production in 2020. M6g instances are available today in preview. C6g, C6gd, M6gd, R6g and R6gd instances will be available in the coming months. To learn more about AWS Graviton2 powered instances visit: https://aws.amazon.com/ec2/graviton.
Amazon EC2 Inf1 instances powered by AWS Inferentia chips deliver high performance and the lowest cost machine learning inference in the cloud
Customers across a diverse set of industries are turning to machine learning to address common use cases (e.g. personalized shopping recommendations, fraud detection in financial transactions, increasing customer engagement with chatbots, etc.). Many of these customers are evolving their use of machine learning from running experiments to scaling up production machine learning workloads where performance and efficiency really matter. Customers want high performance for their machine learning applications in order to deliver the best possible end user experience. While training rightfully receives a lot of attention, inference actually accounts for the majority of complexity and the cost (for every dollar spent on training, up to nine are spent on inference) of running machine learning in production, which can limit much broader usage and stall customer innovation. Additionally, several real-time machine learning applications are sensitive to how quickly an inference is executed (latency), while other batch workloads need to be optimized for how many inferences can be processed per second (throughput), requiring customers to choose between processors optimized for latency or throughput.
With Amazon EC2 Inf1 instances, customers receive high performance and the lowest cost for machine learning inference in the cloud, and no longer need to make the sub-optimal tradeoff between optimizing for latency or throughput when running large machine learning models in production. Amazon EC2 Inf1 instances feature AWS Inferentia, a high performance machine learning inference chip designed by AWS. AWS Inferentia delivers very high throughput, low latency, and sustained performance for extremely cost-effective real-time and batch inference applications. AWS Inferentia provides 128 tera operations per second (TOPS, or trillions of operations per second) per chip and up to two thousand TOPS per Amazon EC2 Inf1 instance for multiple frameworks (including TensorFlow, PyTorch, and Apache MXNet), and multiple data types (including INT-8 and mixed precision FP-16 and bfloat16). Amazon EC2 Inf1 instances are built on the AWS Nitro System, a collection of custom AWS hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage, to reduce customer spend and effort when using AWS. Amazon EC2 Inf1 instances deliver low inference latency, up to 3x higher inference throughput, and up to 40% lower cost-per-inference than the Amazon EC2 G4 instance family, which was already the lowest cost instance for machine learning inference available in the cloud. Using Amazon EC2 Inf1 instances, customers can run large scale machine learning inference to perform tasks like image recognition, speech recognition, natural language processing, personalization, and fraud detection at the lowest cost in the cloud. Amazon EC2 Inf1 instances can be deployed using AWS Deep Learning AMIs and will be available via managed services such as Amazon SageMaker, Amazon Elastic Container Service (ECS), and Amazon Elastic Kubernetes Service (EKS).
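The figures above imply some useful back-of-envelope numbers. The sketch below recomputes them in Python; the normalized G4 baseline cost of 1.00 is an illustrative assumption, not actual instance pricing.

```python
# Back-of-envelope arithmetic using only figures quoted above. The
# normalized G4 baseline cost of 1.00 is an illustrative assumption,
# not actual instance pricing.

TOPS_PER_INFERENTIA_CHIP = 128   # per-chip figure from the announcement
MAX_TOPS_PER_INF1 = 2000         # rounded per-instance figure

# Implied Inferentia chip count in the largest Inf1 size (~16 chips).
implied_chips = MAX_TOPS_PER_INF1 / TOPS_PER_INFERENTIA_CHIP

# "For every dollar spent on training, up to nine are spent on inference."
inference_share = 9 / (1 + 9)    # inference's share of production ML spend

# Cost-per-inference relative to the G4 baseline (up to 40% lower).
g4_cost_per_inference = 1.00
inf1_cost_per_inference = g4_cost_per_inference * (1 - 0.40)

print(f"~{implied_chips:.1f} chips per instance, "
      f"{inference_share:.0%} of ML spend is inference, "
      f"Inf1 cost vs G4: {inf1_cost_per_inference:.2f}")
```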
To get started with Amazon EC2 Inf1 instances visit: https://aws.amazon.com/ec2/instance-types/inf1.
AWS Compute Optimizer uses a machine learning-powered instance recommendation engine to make it easy to choose the right compute resources
Choosing the right compute resources for a workload is an important task. Over-provisioning resources can lead to unnecessary cost, and under-provisioning can lead to poor performance. Until now, to optimize use of Amazon EC2 resources, customers have allocated systems engineers to analyze resource utilization and performance data, or invested in resources to run application simulations across a variety of workloads. This effort adds up over time because the resource selection process must be repeated as applications and usage patterns change, new applications move into production, and new hardware platforms become available. As a result, customers sometimes leave their resources inefficiently sized, pay for expensive third-party solutions, or build their own optimization solutions to manage Amazon EC2 usage.
AWS Compute Optimizer delivers intuitive and easily actionable AWS resource recommendations so customers can identify optimal Amazon EC2 instance types, including those that are a part of Auto Scaling groups, for their workloads, without requiring specialized knowledge or investing substantial time and money. AWS Compute Optimizer analyzes the configuration and resource utilization of a workload to identify dozens of defining characteristics (e.g. whether a workload is CPU-intensive, or if it exhibits a daily pattern). AWS Compute Optimizer uses machine learning algorithms that AWS has built to analyze these characteristics and identify the hardware resource headroom required by the workload. Then, AWS Compute Optimizer infers how the workload would perform on various Amazon EC2 instances, and makes recommendations for the optimal AWS compute resources for that specific workload. Customers can activate AWS Compute Optimizer with a few clicks in the AWS Management Console. Once activated, AWS Compute Optimizer immediately starts analyzing running AWS resources, observing their configurations and Amazon CloudWatch metrics history, and generating recommendations based upon their characteristics. To get started with AWS Compute Optimizer, visit http://aws.amazon.com/compute-optimizer.
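As a rough illustration of the right-sizing workflow described above (utilization history in, headroom-adjusted recommendation out), here is a toy heuristic. It is not AWS Compute Optimizer's actual algorithm; the function name, the 20% headroom factor, and the vCPU size steps are assumptions for the sketch.

```python
# Toy right-sizing heuristic, in the spirit of the workflow described
# above: analyze utilization history, estimate required headroom, and
# suggest a size. NOT AWS Compute Optimizer's actual algorithm; the
# 20% headroom factor and the vCPU size steps are illustrative.

def recommend_vcpus(cpu_util_pct, current_vcpus, headroom=0.20):
    """Suggest a vCPU count from a CloudWatch-style CPU utilization
    history, measured as a percentage of the current instance."""
    peak = max(cpu_util_pct) / 100.0          # busiest observed point
    needed = peak * current_vcpus             # vCPUs in use at that peak
    target = needed * (1.0 + headroom)        # keep headroom above peak
    for size in (1, 2, 4, 8, 16, 32, 48, 64, 96):  # common EC2 steps
        if size >= target:
            return size
    return 96

# A workload peaking at 30% CPU on 16 vCPUs needs ~5.8 vCPUs with
# headroom, so an 8-vCPU instance would be the suggested fit.
print(recommend_vcpus([12, 25, 30, 18], current_vcpus=16))  # -> 8
```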
New AWS Transit Gateway capabilities add native support for the IP multicast protocol, and make it easier to create and monitor a scalable, global network
AWS Transit Gateway is a network hub that enables customers to easily scale and manage connectivity between Amazon Virtual Private Clouds (VPCs) and customers’ on-premises datacenters, as well as between thousands of VPCs within an AWS region. Today, AWS added five new networking capabilities for AWS Transit Gateway that simplify the management of global private networks and enable multicast workloads in the cloud:
- Transit Gateway Multicast is the first native multicast support in the public cloud. Customers use multicast routing when they need to quickly distribute copies of the same data from a single source to multiple subscribers (e.g. digital media or market data from a stock exchange). Multicast reduces bandwidth across the network and ensures that end subscribers get the information quickly, at roughly the same time. Up until now, there has been no easy way for customers to host multicast applications in the cloud. Most multicast solutions are hardware-based, so customers were forced to buy and maintain inflexible and difficult-to-scale hardware on premises. In addition, they had to maintain this hardware in a separate on-premises network, which added complexity and cost. Some have tried to implement a multicast solution in the cloud without native support, but those workarounds were often hard to support and scale. With Transit Gateway Multicast, customers can now build multicast applications in the cloud that can scale up and down based on demand, without having to buy and maintain custom hardware to support their peak loads.
- Transit Gateway Inter-Region Peering makes it easy to build global networks by connecting Transit Gateways across multiple AWS regions. Before Transit Gateway Inter-Region Peering, the only way resources in a VPC in one region could communicate with resources in a VPC in a different region was through peering connections between those VPCs, which are difficult to scale and manage when many are needed. Inter-Region Peering addresses this and makes it easy to create secure and private global networks by peering AWS Transit Gateways between different AWS regions. All traffic between regions that uses Inter-Region Peering is carried by the AWS backbone, so it never traverses the public Internet, and it is also anonymized and encrypted. This provides the highest level of security, while ensuring that a customer’s traffic always takes the optimal path between regions, delivering higher and more consistent performance than the public Internet.
- Accelerated Site-to-Site VPN is a new capability that provides a more secure, predictable, and performant VPN experience for customers who need secure access from a branch location to services in AWS. Many customers have branch offices in smaller cities or international locations that are physically far away from the applications running in AWS. They use a VPN connection to link these branch offices to services in AWS, but that connection typically traverses the public Internet, hopping through multiple public networks, which can lead to lower reliability, unpredictable performance, and more exposure to attempted security attacks. With Accelerated Site-to-Site VPN, customers can connect their branch locations to an AWS Transit Gateway through the closest AWS edge location, reducing the number of network hops involved and optimizing the network path for lower latency and higher consistency.
- Integration with popular Software Defined Wide Area Network (SD-WAN) vendors is a new capability that lets customers easily integrate third-party SD-WAN solutions from Cisco, Aruba, Silver Peak, and Aviatrix with AWS. Today, many customers use SD-WAN solutions to manage the local network in their branch locations, and also to connect those locations to their datacenters or cloud networks in AWS. To set up these branch devices and network connections, customers had to manually provision and configure their branch and AWS resources, which can take significant time and effort. While Accelerated Site-to-Site VPN provides a secure, predictable, and performant connection from branch locations, this SD-WAN integration goes further by allowing customers to automatically provision, configure, and connect to AWS using popular third-party SD-WAN solutions. All this makes it easier for customers to set up and manage connectivity to their branch offices.
- Transit Gateway Network Manager is a new capability that simplifies the monitoring of global networking resources that are both in the cloud and on-premises, by pulling everything together in a single pane of glass. While customers are shifting more and more applications to run on their cloud network, they still maintain an on-premises network, and because of this they need to monitor both environments so they can quickly respond to any issues across either environment. Today, to manage and operate these networks, customers need to use multiple tools and consoles, between AWS and other on-premises providers. This complexity can lead to management errors, or delay the time it takes to identify and solve a problem. Network Manager simplifies the creation and operation of global private networks by providing a unified dashboard to visualize and monitor the health of a customer’s VPCs, Transit Gateways, Direct Connect, and VPN connections to branch locations and on-premises networks.
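A quick way to see why multicast "reduces bandwidth across the network," as described above: with unicast, the source transmits one copy of the stream per subscriber; with multicast, it transmits a single copy that the network fans out. A minimal sketch, in which the 10 Mbps stream rate and the 500-subscriber count are hypothetical numbers:

```python
# Why multicast saves bandwidth at the source: unicast sends one copy
# per subscriber, multicast sends a single copy that the network
# replicates. The stream rate and subscriber count are hypothetical.

def source_bandwidth_mbps(stream_mbps, subscribers, multicast):
    """Bandwidth the source itself must transmit."""
    return stream_mbps * (1 if multicast else subscribers)

stream, subs = 10, 500   # e.g. a 10 Mbps market-data feed, 500 receivers
print(source_bandwidth_mbps(stream, subs, multicast=False))  # -> 5000
print(source_bandwidth_mbps(stream, subs, multicast=True))   # -> 10
```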
Amazon VPC Ingress Routing helps customers easily integrate third-party network and security appliances into their network
When enterprises migrate to AWS, they often want to bring the network or security appliance that they have used for years in their on-premises data centers to the cloud as a virtual appliance. While the AWS Marketplace today offers the broadest selection of networking and security virtual appliances in a cloud marketplace, customers lacked the flexibility to easily route traffic entering an Amazon VPC through these appliances. With Amazon VPC Ingress Routing, customers can now associate route tables with the Internet Gateway (IGW) and Virtual Private Gateway (VGW) and define routes to redirect incoming and outgoing VPC traffic to third-party appliances. This makes it easier for customers to integrate virtual appliances that meet their networking and security needs in the routing path of inbound and outbound traffic. This feature can be provisioned with a few clicks or API calls, making it easy for customers to deploy networking and security appliances in their network without creating complex workarounds that don’t scale. Partners with virtual appliances that currently support Amazon VPC Ingress Routing include 128 Technology, Aviatrix, Barracuda, Check Point, Cisco, Citrix Systems, FireEye, Fortinet, Forcepoint, HashiCorp, IBM Security, Lastline, NETSCOUT Systems, Palo Alto Networks, ShieldX Networks, Sophos, Trend Micro, Valtix, Vectra, and Versa Networks.
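Conceptually, Amazon VPC Ingress Routing adds routes to a table associated with the Internet Gateway or Virtual Private Gateway, and VPC route tables are evaluated by longest-prefix match. The sketch below models that lookup in plain Python; the CIDRs and the "eni-appliance" next hop are hypothetical, and this is a conceptual model of the routing behavior, not the AWS API.

```python
# Conceptual model of an ingress route table associated with an
# Internet Gateway: traffic for the workload subnet is steered to a
# security appliance's ENI before delivery. The CIDRs and the
# "eni-appliance" hop are hypothetical; this models the longest-prefix
# lookup, not the AWS API.
import ipaddress

ingress_routes = {
    "10.0.1.0/24": "eni-appliance",  # workload subnet: inspect first
    "10.0.0.0/16": "local",          # rest of the VPC: deliver normally
}

def next_hop(dst_ip):
    """Longest-prefix match, as VPC route tables perform it."""
    matches = []
    for cidr, hop in ingress_routes.items():
        net = ipaddress.ip_network(cidr)
        if dst_ip in net:
            matches.append((net.prefixlen, hop))
    return max(matches)[1]   # most-specific matching route wins

print(next_hop(ipaddress.ip_address("10.0.1.25")))  # -> eni-appliance
print(next_hop(ipaddress.ip_address("10.0.9.7")))   # -> local
```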
“Customers know that AWS already offers more breadth and depth of capabilities in Amazon EC2 than any other cloud provider. They also tell us that this breadth and depth provides great benefit because they are running more and more diverse workloads in the cloud, each with its own characteristics and needs,” said Matt Garman, Vice President, Compute Services, AWS. “As they look to bring more and more diverse workloads to the cloud, AWS continues to expand our offerings in ways that offer them better performance and lower prices. Today’s new compute and networking capabilities show that AWS is committed to innovating across Amazon EC2 on behalf of our customers’ diverse needs.”
Netflix is the world’s leading internet entertainment service with 158 million memberships in 190 countries enjoying TV series, documentaries, and feature films across a wide variety of genres and languages. “We use Amazon EC2 M instance types for a number of workloads inclusive of our streaming, encoding, data processing, and monitoring applications,” said Ed Hunter, Director of Performance and Operating Systems at Netflix. “We tested the new M6g instances using industry standard LMbench and certain Java benchmarks and saw up to 50% improvement over M5 instances. We’re excited about the introduction of AWS Graviton2-based Amazon EC2 instances.”
Nielsen is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. “Our OpenJDK based Java application is used to collect digital data, process incoming web requests, and redirect requests based on business needs. The application is I/O intensive and scaling out in a cost-effective manner is a key requirement,” said Chris Nicotra, SVP Digital, at Nielsen. “We seamlessly transferred this Java application to Amazon EC2 A1 instances powered by the AWS Graviton processor. We’ve since tested the new Graviton2-based M6g instances, and they were able to handle twice the load of the A1 instances. We look forward to running more workloads on the new Graviton2-based instances.”
Datadog is the monitoring and analytics platform for developers, operations, and business users in the cloud age. “We’re happy to see AWS continuing to invest in the Graviton processor and the significant momentum behind the Arm ecosystem,” said Ilan Rabinovitch, VP of Product and Community, at Datadog. “We’re excited to announce that the Datadog Agent for Graviton / Arm is now generally available. Customers can now easily monitor the performance and availability of these Graviton-based Amazon EC2 instances in Datadog alongside the rest of their infrastructure.”
Western Digital, a leader in data infrastructure, drives the innovation needed to help customers capture, preserve, access, and transform an ever-increasing diversity of data. “At Western Digital, we use machine learning with digital imaging for quality inspection in our manufacturing processes. In manufacturing and elsewhere, machine learning applications are growing in complexity as we move from reactive to proactive detection,” said Steve Philpott, Chief Information Officer, Western Digital. “Currently, we are limited in how many quality-inspection images can be processed on current CPU-based solutions. We look forward to using the AWS EC2 Inf1 Inferentia instances and expect to increase Western Digital’s image-processing throughput for this purpose, while simultaneously reducing the processing times to an order of milliseconds. Based on initial analysis, we expect to be able to run our ML-based detection models multiple times every hour, significantly reducing occurrences of rare events and upping the bar on our best-in-class product quality and reliability.”
Over 100 million Alexa devices have been sold globally, and customers have also left over 400,000 5-star reviews for Echo devices on Amazon. “Amazon Alexa’s AI and ML-based intelligence, powered by Amazon Web Services, is available on more than 100 million devices today – and our promise to customers is that Alexa is always becoming smarter, more conversational, more proactive, and even more delightful,” said Tom Taylor, Senior Vice President, Amazon Alexa. “Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs, which is why we are excited to use Amazon EC2 Inf1 to lower inference latency and cost-per-inference on Alexa text-to-speech. With Amazon EC2 Inf1, we’ll be able to make the service even better for the tens of millions of customers who use Alexa each month.”
Nubank is one of the largest fintech companies in Latin America, providing a wide range of consumer financial products to over 15 million customers. “We develop simple, secure, and 100% digital solutions to help customers manage their money easily. In order to support millions of transactions on our platform, we run hundreds of applications with a diverse set of requirements on Amazon EC2. In order to keep our infrastructure optimized, we dedicate an engineering team to continuously monitor and analyze the utilization of our EC2 instances,” said Renan Capaverde, Director of Engineering at Nubank.