This week's roundup of notable cloud news.
Hello cloud land, we've read all the cloud news again this week, so you don't have to.
The theme of the week seems to be scaling and performance, with plenty of new performance-focused hardware becoming available.
On the Hava front, lots of roadmap features are teetering on the brink of general release, so keep an eye on our social channels and the blog for the major new features rolling out over the next few weeks.
Enhanced Amazon Macie now available with substantially reduced pricing
Amazon Macie is a fully managed service that helps you discover and protect your sensitive data, using machine learning to automatically spot and classify data for you.
Over time, Macie customers told AWS what they liked and what they didn't. The service team has worked hard to address this feedback, and this week AWS announced a new, enhanced version of Amazon Macie!
This new version has simplified the pricing plan: you are now charged based on the number of Amazon Simple Storage Service (S3) buckets that are evaluated, and the amount of data processed for sensitive data discovery jobs. The new tiered pricing plan has reduced the price by 80%. With higher volumes, you can reduce your costs by more than 90%.
At the same time, AWS have introduced many new features:
- An expanded sensitive data discovery, including updated machine learning models for personally identifiable information (PII) detection, and customer-defined sensitive data types using regular expressions.
- Multi-account support with AWS Organizations.
- Full API coverage for programmatic use of the service with AWS SDKs and AWS Command Line Interface (CLI).
- Expanded regional availability to 17 Regions.
- A new, simplified free tier and free trial to help you get started and understand your costs.
- A completely redesigned console and user experience.
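Two of the items above pair nicely: customer-defined sensitive data types via regular expressions, and full API coverage. The sketch below shows how the two might be combined with boto3; the identifier name and the employee-ID regex are invented for illustration, and the live call (which needs AWS credentials and Macie enabled) is left in an uninvoked function.

```python
# Hedged sketch: registering a custom sensitive-data type with the
# Macie (macie2) API. The "internal employee ID" pattern is made up.
EMPLOYEE_ID_REGEX = r"EMP-\d{6}"

def macie_identifier_params():
    """Request parameters for macie2 create_custom_data_identifier."""
    return {
        "name": "internal-employee-id",
        "description": "Matches internal employee IDs of the form EMP-NNNNNN",
        "regex": EMPLOYEE_ID_REGEX,
    }

def create_identifier():
    # Live call -- requires AWS credentials and Macie enabled.
    import boto3
    macie = boto3.client("macie2")
    resp = macie.create_custom_data_identifier(**macie_identifier_params())
    return resp["customDataIdentifierId"]
```

Once registered, the identifier can be attached to sensitive data discovery jobs alongside Macie's built-in PII detectors.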
Macie is now tightly integrated with S3 on the backend, which brings further advantages:
- Enabling S3 data events in AWS CloudTrail is no longer a requirement, further reducing overall costs.
- There is now a continual evaluation of all buckets, issuing security findings for any public or unencrypted buckets, and for buckets shared with (or replicated to) an AWS account outside of your Organization.
The anomaly detection features monitoring S3 data access activity previously available in Macie are now in private beta as part of Amazon GuardDuty, and have been enhanced to include deeper capabilities to protect your data in S3.
New EC2 M6g Instances (powered by AWS Graviton2)
Starting this week, you can use AWS's first 6th-generation Amazon Elastic Compute Cloud (EC2) General Purpose instance: the M6g. The “g” stands for “Graviton2“, the next-generation Arm-based chip designed by AWS (and Annapurna Labs, an Amazon company), utilizing 64-bit Arm Neoverse N1 cores.
These processors support 256-bit, always-on DRAM encryption. They also include dual SIMD units that double the floating-point performance versus the first-generation Graviton, and they support fp16 instructions to accelerate machine learning inference workloads. You can read the full review published by AnandTech for in-depth details.
The M6g instances are available in 8 sizes with 1, 2, 4, 8, 16, 32, 48, and 64 vCPUs, or as bare metal instances. They support configurations with up to 256 GiB of memory, 25 Gbps of network performance, and 19 Gbps of EBS bandwidth. These instances are powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.
For those of you running typical open-source application stacks, generally deployed on x86-64 architectures, migrating to Graviton2-based instances will give you up to 40% improvement on cost-performance ratio, compared to similar-sized M5 instances. M6g instances are well-suited for workloads such as application servers, gaming servers, mid-size databases, caching fleets, web tiers, and the like.
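Trying an M6g out is mostly a matter of choosing an `m6g.*` instance type and an arm64 AMI, since x86-64 images will not boot on Graviton2. A minimal boto3 sketch, where the AMI ID is a placeholder you would replace with a real arm64 image (e.g. Amazon Linux 2 for arm64):

```python
def m6g_run_instances_params(ami_id, size="m6g.large"):
    # Parameters for EC2 run_instances. ami_id is a placeholder and
    # must reference an arm64-built AMI for Graviton2 instances.
    return {
        "ImageId": ami_id,
        "InstanceType": size,
        "MinCount": 1,
        "MaxCount": 1,
    }

def launch(ami_id, size="m6g.large"):
    # Live launch -- requires AWS credentials (not executed here).
    import boto3
    ec2 = boto3.client("ec2")
    return ec2.run_instances(**m6g_run_instances_params(ami_id, size))
```

For many workloads the migration is just a rebuild of your stack against arm64 packages, followed by swapping the instance type.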
Amazon Kendra is Now Generally Available
With just a few clicks, Amazon Kendra enables organizations to index structured and unstructured data stored in different backends, such as file systems, applications, Intranet, and relational databases. As you would expect, all data is encrypted in flight using HTTPS, and can be encrypted at rest with AWS Key Management Service (KMS).
Amazon Kendra is optimized to understand complex language from domains like IT (e.g. “How do I set up my VPN?”), healthcare and life sciences (e.g. “What is the genetic marker for ALS?”), and many other domains. This multi-domain expertise allows Kendra to find more accurate answers. In addition, developers can explicitly tune the relevance of results, using criteria such as authoritative data sources or document freshness.
Kendra search can be quickly deployed to any application (search page, chat apps, chatbots, etc.) via the code samples available in the AWS console, or via APIs. Customers can be up and running with state-of-the-art semantic search from Kendra in minutes.
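Programmatically, a natural-language question goes to Kendra's Query API against an existing index. A hedged boto3 sketch; the index ID is a placeholder, and the live call (which needs credentials and a populated Kendra index) sits in an uninvoked function:

```python
def kendra_query_params(index_id, question):
    # Shape of a natural-language request to the Kendra Query API.
    return {"IndexId": index_id, "QueryText": question}

def ask(index_id, question):
    # Live call -- requires AWS credentials and an existing index.
    import boto3
    kendra = boto3.client("kendra")
    resp = kendra.query(**kendra_query_params(index_id, question))
    for item in resp["ResultItems"]:
        # Each result carries a type (e.g. answer vs. document match)
        # and an excerpt of the matched text.
        print(item["Type"], item["DocumentExcerpt"]["Text"][:120])
```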
AWS Inter-Region Data Transfer Price Reduction
If you build AWS applications that span two or more AWS regions, this one is for you. AWS is reducing the cost of transferring data from the South America (São Paulo), Middle East (Bahrain), Africa (Cape Town), and Asia Pacific (Sydney) Regions to other AWS regions as follows, effective May 1, 2020:
| Region | Old Rate ($/GB) | New Rate ($/GB) |
|--------|-----------------|-----------------|
| South America (São Paulo) | 0.1600 | 0.1380 |
| Middle East (Bahrain) | 0.1600 | 0.1105 |
| Africa (Cape Town) | 0.1800 | 0.1470 |
| Asia Pacific (Sydney) | 0.1400 | 0.0980 |
Consult the price list to see inter-region data transfer prices for all AWS regions.
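As a back-of-the-envelope check on what the new rates save, here is a quick calculation using the figures from the table (volumes are examples, not part of the announcement):

```python
# Inter-region transfer rates ($/GB): (old, new), per the table above.
RATES = {
    "South America (São Paulo)": (0.1600, 0.1380),
    "Middle East (Bahrain)":     (0.1600, 0.1105),
    "Africa (Cape Town)":        (0.1800, 0.1470),
    "Asia Pacific (Sydney)":     (0.1400, 0.0980),
}

def monthly_saving(region, gb_per_month):
    """Dollar saving per month for a given outbound transfer volume."""
    old, new = RATES[region]
    return round((old - new) * gb_per_month, 2)

# e.g. 10 TiB/month out of Sydney:
print(monthly_saving("Asia Pacific (Sydney)", 10_240))  # → 430.08
```

Sydney sees the biggest cut at 30%, so high-volume cross-region replication out of ap-southeast-2 benefits most.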
GCP and Nvidia accelerating computing workloads
Companies from startups to multinationals are striving to radically transform the way they solve their data challenges. As they continue to manage increasing volumes of data, these companies are searching for the best tools to help them achieve their goals—without heavy capital expenditures or complex infrastructure management.
Google Cloud and NVIDIA have been collaborating for years to deliver a powerful platform for machine learning (ML), artificial intelligence (AI), and data analytics to help you solve your complex data challenges. Organizations use NVIDIA GPUs on Google Cloud to accelerate machine learning training and inference, analytics, and other high performance computing (HPC) workloads. From virtual machines to open-source frameworks like TensorFlow, Google Cloud has the tools to help you tackle your most ambitious projects. For instance, Google Cloud's Dataproc now lets you use NVIDIA GPUs to speed up ML training and development by up to 44 times and reduce costs by 14 times.
To continue to help customers meet their goals, Google Cloud announced forthcoming support for the new NVIDIA Ampere architecture and the NVIDIA A100 Tensor Core GPU. Google Cloud and the new A100 GPUs will come with enhanced hardware and software capabilities to enable researchers and innovators to further advance today's most important AI and HPC applications, from conversational AI and recommender systems to weather simulation research on climate change. Google Cloud will be making the A100 GPUs available via Google Compute Engine, Google Kubernetes Engine, and Cloud AI Platform, allowing customers to scale up and out with control, portability, and ease of use.
Google Cloud VMware Engine
This new service delivers a fully managed VMware Cloud Foundation stack—VMware vSphere, vCenter, vSAN, NSX-T, and HCX for cloud migration—in a dedicated environment on Google Cloud’s highly performant and reliable infrastructure to support enterprise production workloads.
With this service, you can migrate or extend your on-premises workloads to Google Cloud in minutes by connecting to a dedicated VMware environment directly through the Google Cloud Console. This allows you to seamlessly migrate to the cloud without the cost or complexity of refactoring applications, and run and manage workloads consistently with your on-premises environment.
By running your VMware workloads on Google Cloud, you reduce your operational burden while benefiting from scale and agility, and maintain continuity with your existing tools, policies, and processes.
Microsoft announces Azure Spot Virtual Machines
This week Azure announced the general availability of Azure Spot Virtual Machines (VMs). Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot pricing is available on single VMs in addition to VM scale sets (VMSS). This enables you to deploy a broader variety of workloads on Azure while enjoying access to discounted pricing compared to pay-as-you-go rates. Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Spot VMs can be evicted at any time if Azure needs capacity.
The workloads that are ideally suited to run on Spot VMs include, but are not necessarily limited to, the following:
- Batch jobs.
- Workloads that can sustain or recover from interruptions.
- Development and test.
- Stateless applications that can use Spot VMs to scale out, opportunistically saving cost.
- Short lived jobs which can easily be run again if the VM is evicted.
Spot VMs have replaced the preview of Azure low-priority VMs on scale sets. Eligible low-priority VMs have been automatically transitioned over to Spot VMs.
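In deployment terms, a Spot VM is a regular VM with a handful of extra properties set. The sketch below shows, as plain data, the ARM-template properties that distinguish a Spot VM; it is an illustration of the resource shape, not a complete template.

```python
def spot_vm_properties(max_price=-1.0):
    """ARM-template properties that make a VM a Spot VM.

    max_price is the most you are willing to pay per hour;
    -1 means "pay up to the pay-as-you-go rate, and only evict
    me for capacity, never for price".
    """
    return {
        "priority": "Spot",
        "evictionPolicy": "Deallocate",  # or "Delete"
        "billingProfile": {"maxPrice": max_price},
    }

# These keys are merged into the "properties" section of a
# Microsoft.Compute/virtualMachines resource in an ARM template.
```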
New Azure Cache for Redis capabilities
Microsoft and Redis Labs are partnering to bring new features to Azure Cache for Redis
This week they announced a new partnership between Microsoft and Redis Labs to bring their industry-leading technology and expertise to Azure Cache for Redis. This partnership represents the first native integration between Redis Labs technology and a major cloud platform, underscoring Microsoft's commitment to customer choice and flexibility.
For years, developers have utilized the speed and throughput of Redis to produce unbeatable responsiveness and scale in their applications. Azure has seen tremendous adoption of Azure Cache for Redis, their managed solution built on open source Redis, as Azure customers have leveraged Redis performance as a distributed cache, session store, and message broker. The incorporation of the Redis Labs Redis Enterprise technology extends the range of use cases in which developers can utilize Redis, while providing enhanced operational resiliency and security.
AWS open sources cloud development kit to make Kubernetes easier to use
Amazon Web Services Inc. today launched Cloud Development Kit for Kubernetes, or cdk8s, an open-source development toolkit designed to make Kubernetes clusters easier to build and maintain.
Kubernetes has emerged as the go-to framework for managing software containers in the enterprise. It allows engineers to compose container clusters by defining configuration details in the relatively simple YAML data serialization language.
But YAML, while a popular choice for configuration tasks, lacks most of the advanced features of programming languages such as Python, which makes large-scale Kubernetes clusters difficult to manage. That’s the challenge cdk8s is aimed at addressing.
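That pitch is easier to see with a concrete contrast. The snippet below is not the cdk8s API itself, just plain Python building the same kind of Deployment manifest a cdk8s chart would synthesize; the service names and images are invented.

```python
import json

def deployment(name, image, replicas):
    # Plain-Python stand-in for what cdk8s constructs generate: a
    # Kubernetes Deployment manifest built with functions and loops
    # instead of hand-written YAML.
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# One loop stamps out a manifest per service -- the kind of reuse
# plain YAML cannot express.
services = [("web", "nginx:1.17"), ("api", "example/api:latest")]
manifests = [deployment(n, img, replicas=3) for n, img in services]
print(json.dumps(manifests[0]["metadata"]))
```

With cdk8s itself, the same idea is expressed as constructs in a chart, and `cdk8s synth` emits the YAML that kubectl or GitOps tooling then applies.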
AWS Summit Online - Europe, UK, Middle East & Africa
Join the AWS Summit Online on June 17 and deepen your cloud knowledge with this free, virtual event.
Hear from your local AWS country leaders about the latest trends, customers and partners in your market, followed by the opening keynote with Werner Vogels, CTO, Amazon.com. After the keynote, dive deep into 55 breakout sessions across 11 tracks, including getting started, building advanced architectures, app development, DevOps and more. Tune in live to network with fellow technologists, have your questions answered in real time by AWS experts and claim your certificate of attendance. All sessions will be available in English with subtitles in French, Italian, German and Spanish.
So, whether you are just getting started on the cloud or are an advanced user, come and learn something new at the AWS Summit Online.
When: June 17, online, starting 09:00 (UTC+1)
Virtual Masterclass: Cloud Practitioner Bootcamp with AWS
About this Event
This introductory-level course is intended for APN Partners who seek an overall understanding of the AWS Cloud. It provides a detailed overview of cloud concepts, AWS services, security, architecture, pricing, and support.
Delivered through an interactive online format, at the end of the course there will be an online assessment which will provide a certification upon successful completion.
Run by AWS and Ingram Micro expert trainers, this course will teach you how to succeed both technically and commercially.
The tailored training will teach you how to:
- Define the AWS Cloud
- Describe the key services on the AWS platform using common use cases
- Describe basic AWS Cloud architectural principles
- Describe the AWS Shared Responsibility Model with reference to basic security and compliance
- Define pricing models
- Identify sources of documentation, including where to go for further information, how to describe the AWS Cloud value proposition, and the different ways to define characteristics of deployment/operation in the AWS Cloud
This course covers the following concepts:
Module 1: AWS Cloud Concepts
Module 2: AWS Core Services
Module 3: AWS Security
Module 4: AWS Architecting
Module 5: AWS Pricing and Support
Please note you will be required to follow the registration link in the confirmation email to secure your place.
When: May 28
If you need a fix of AWS goodness, there is an extensive program of online tech talks scheduled:
Join AWS for live, online presentations led by AWS solutions architects and engineers. AWS Online Tech Talks cover a range of topics and expertise levels, and feature technical deep dives, demonstrations, customer examples, and live Q&A with AWS experts.
Note – All sessions are free and in Pacific Time. Can’t join them live? Access webinar recordings and slides on the On-Demand Portal.
Microsoft also has a full training and events calendar underway:
Some are going ahead, but we'd suggest contacting the organisers before putting any concrete plans in place.
Thanks for reading, we hope you found something useful. Talking of useful:
hava.io allows users to visualise their AWS, GCP and Azure cloud environments in interactive diagram form, including unique infrastructure, security and container views. hava.io continuously polls your cloud configuration and logs changes in a version history for later inspection, which helps with issue resolution and provides a history of all configs for audit and compliance purposes.
If you haven't taken a hava.io free trial to see what auto generated cloud diagrams can do for your workflow, security and compliance needs - please get in touch.
You can reach us on chat, email firstname.lastname@example.org or book a callback or demo below.