Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday December 16th 2022.
Last week at Hava saw the release of self-hosted v2.1.522 which delivers new features and updates in response to package security recommendations. Hava customers with self-hosted deployments should upgrade to take advantage of the additional features.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places, so come and say hello on our social channels.
AWS Updates and Releases
Amazon Location Service expands support to four more regions: Asia Pacific (Mumbai), Canada (Central), Europe (London), and South America (Sao Paulo). Developers in these regions can easily add maps, search for addresses and points of interest, calculate routes, track their assets, and geofence regions of interest, all with low-latency responses.
By using local regions to store sensitive tracking and geofencing data, developers can ensure that their data remains in the country where it is collected and that it adheres to data residency restrictions.
Amazon Location Service is a location-based service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Amazon Location Service provides a consistent API across a variety of location based service data providers (Esri, HERE, and Open Data), all managed through one AWS console.
Customers can now create file systems using Amazon Elastic File System (Amazon EFS) in the AWS Europe (Zurich) Region.
Amazon EFS is a serverless, fully elastic file system that is designed to make it easy to set up, scale, and cost-optimize file storage in the AWS Cloud. It is built to scale on demand to store petabytes of data, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
The AWS Pricing Calculator now offers a redesigned user interface that allows you to generate price estimates for Amazon Elastic Compute Cloud (Amazon EC2) instances in the shared tenancy model, the dedicated tenancy model, and Amazon EC2 dedicated hosts.
The calculator makes it easier for you to switch among the three tenancy options with a single click, by providing a common user interface to enter input parameters for cost estimation. The calculator will also pre-populate costs across the available pricing models (e.g. Savings Plans, Reserved Instances), removing the need to view costs for each price model separately and allowing you to do quick cost comparison.
In addition, the AWS Pricing Calculator for Amazon EC2 includes the option to estimate costs for EBS volumes and data transfer, and lets you add the cost of detailed CloudWatch monitoring for your EC2 instances.
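As a rough illustration of the comparison the calculator automates, here is a sketch in Python using hypothetical hourly rates; real prices vary by region, instance type, and pricing model, and the calculator uses current published pricing:

```python
# Hypothetical hourly rates for illustration only; the AWS Pricing
# Calculator pulls current published pricing for each tenancy option
# and pricing model.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate, instance_count=1):
    """Monthly cost for a fleet billed at a flat hourly rate."""
    return round(hourly_rate * HOURS_PER_MONTH * instance_count, 2)

on_demand = monthly_cost(0.0416, instance_count=10)      # assumed on-demand rate
savings_plan = monthly_cost(0.0262, instance_count=10)   # assumed 1-year plan rate
monthly_saving = round(on_demand - savings_plan, 2)
```

The calculator performs this side-by-side comparison across on-demand, Savings Plans, and Reserved Instance pricing automatically, so you don't need to price each model separately.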
The AWS Pricing Calculator now supports the ability to bulk estimate costs for Amazon Elastic Compute Cloud (Amazon EC2) instances, Dedicated hosts and Amazon Elastic Block Store (Amazon EBS) volumes using a structured Excel template.
The new feature allows you to estimate a fleet of EC2 instances with a single upload of the template file, reducing the time needed to create estimates for large volumes of EC2 instances. You can still add other services to your estimate and generate a cost plan for your workload.
AWS will validate your inputs after the upload is complete and provide an error report if validation fails. The feature automatically organizes the estimate into groups, based on the group structure you define within the template file.
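The shape of such a bulk upload might look like the sketch below; the column names here are assumptions for illustration only, as the actual Excel template structure is defined by AWS:

```python
import csv, io

# Illustrative only: the real bulk-upload template is an Excel file whose
# exact columns are defined by AWS; these headers are assumptions.
rows = [
    {"Group": "web-tier", "Instance type": "m5.large",   "Quantity": 4},
    {"Group": "web-tier", "Instance type": "m5.xlarge",  "Quantity": 2},
    {"Group": "batch",    "Instance type": "c5.2xlarge", "Quantity": 8},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Group", "Instance type", "Quantity"])
writer.writeheader()
writer.writerows(rows)
template_csv = buf.getvalue()
```

One upload of a file like this covers the whole fleet, and the "Group" column drives how the resulting estimate is organized.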
AWS Backup adds support for VMware vSphere tags, enabling you to protect your VMware virtual machines (VMs) using tag-based policies, simplifying data protection across your on-premises vSphere and AWS environments. This allows you to import your VMware vSphere tags to AWS, where they can be automatically associated with a backup plan via tag-based policies.
AWS Backup support for VMware vSphere tags is a simple and automatic way for you to associate your VM resources with AWS Backup plans, extending your tag-based backup policies to VMs. AWS Backup provides a VMware data protection experience that is consistent across your on-premises environments and AWS Regions, helping you meet your business and regulatory compliance needs.
You can use a single policy in AWS Backup to simplify data protection and automate lifecycle management of your on-premises VMware, VMware Cloud™ on AWS, and VMware Cloud™ on AWS Outposts environments alongside other AWS services supported by AWS Backup spanning compute, storage, and databases.
AWS Backup for VMware is available in the US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), South America (São Paulo), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo, Osaka), Middle East (Bahrain), Africa (Cape Town), and AWS GovCloud (US) Regions.
Amazon Timestream announces a fully managed data protection functionality through integration with AWS Backup. You can now protect your time series data through immutable backups, automate backup lifecycle management, copy your backup across AWS Regions and accounts, and restore your data with ease.
Amazon Timestream is a fast, scalable, secure, and purpose-built time series database for application monitoring, edge, and IoT workloads that can scale to process trillions of time series events per day. With features such as multi-measure records and scheduled queries, Amazon Timestream enables you to analyze time series data and derive business insights cost-effectively.
You can now protect your Timestream resources using a fully managed, policy-driven centralized data protection solution to create immutable backups of your application data, spanning Timestream and other AWS services supported by AWS Backup.
To get started, you need to opt in to have AWS Backup manage your Timestream backups via the Timestream console, or the AWS Backup console, API, or CLI (this is a one-time event). Once opted in, you can use on-demand backup to create a one-time backup of your Timestream data, or schedule recurring backups of your data using a backup policy.
You can set retention policies that will automatically retain, expire, and transition backups to cold storage, minimizing backup storage costs, and copy backups to other AWS Regions and accounts for disaster recovery scenarios. You can also restore the entire table to the database with a few clicks, simplifying data recovery.
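As a sketch, a backup plan of this kind takes roughly the shape below when passed to AWS Backup's CreateBackupPlan API; the vault names, schedule, and retention values here are hypothetical examples:

```python
# Sketch of a backup-plan payload in the shape AWS Backup's
# CreateBackupPlan API expects; vault names, schedule, retention values,
# and the account ID in the ARN are all hypothetical.
backup_plan = {
    "BackupPlanName": "timestream-daily",
    "Rules": [
        {
            "RuleName": "daily-with-cold-storage",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,  # transition to cold storage
                "DeleteAfterDays": 365,            # then expire after a year
            },
            "CopyActions": [
                {
                    # cross-Region copy for disaster recovery (example ARN)
                    "DestinationBackupVaultArn":
                        "arn:aws:backup:us-west-2:111122223333:backup-vault:Default",
                }
            ],
        }
    ],
}
# With credentials configured, this would be passed to boto3, e.g.:
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```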
Amazon Timestream is HIPAA eligible, ISO certified, FedRAMP (Moderate) compliant, PCI DSS compliant, and in scope for AWS’s SOC reports SOC 1, SOC 2, and SOC 3.
AWS launches Amazon EKS managed node groups for Windows containers to automate the provisioning and lifecycle management of Windows nodes (running on Amazon EC2 instances) for EKS Kubernetes clusters.
With this launch, Amazon EKS automatically provisions and registers the Amazon EC2 instances that provide compute capacity for Windows-based applications. Customers can now create, automatically update, or terminate the Windows nodes for their cluster with a single operation.
A managed node group provisions Windows nodes as part of an Amazon EC2 Auto Scaling group that's managed for customers by Amazon EKS. Every resource, including the instances and Auto Scaling groups, runs within the customer's AWS account, while each node group runs across multiple Availability Zones defined by the customer.
There are no additional costs to use Amazon EKS managed node groups; you only pay for the AWS resources you provision. To learn more about EKS Windows managed node groups and how to get started, visit the EKS public documentation.
AWS Trusted Advisor now helps customers improve fault tolerance with new checks for Amazon ElastiCache for Redis, Amazon MemoryDB for Redis, and AWS CloudHSM. AWS Trusted Advisor evaluates your AWS account with automated best practice checks and provides cloud optimization recommendations to reduce costs, improve performance, increase security, and monitor service quotas.
You can find more information about the checks here.
The new fault tolerance checks for Amazon ElastiCache Multi-AZ Clusters and Amazon MemoryDB Multi-AZ Clusters alert customers when they're running in a Single-AZ configuration and provide recommendations to customers on how to enable Multi-AZ with automatic failover in their ElastiCache or MemoryDB clusters.
By enabling Multi-AZ with automatic failover, customers benefit from minimal administrative intervention, improved fault tolerance, and enhanced availability of their Redis clusters. For more information, see Minimizing downtime in ElastiCache with Multi-AZ or Minimizing downtime in MemoryDB with Multi-AZ.
A new check for AWS CloudHSM alerts customers when their clusters have been running HSM instances in a single AZ for more than an hour. We recommend customers run their production clusters in a multi-AZ configuration for high availability. For more information, see AWS CloudHSM. AWS Business Support and AWS Enterprise Support customers can access the new fault tolerance checks from the AWS Trusted Advisor Console, or via the AWS Support API.
The new fault tolerance checks for ElastiCache, MemoryDB, and CloudHSM are generally available in the following regions: US East (N. Virginia), Europe (Ireland), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Tokyo), US West (Oregon), South America (São Paulo), Asia Pacific (Sydney), Europe (Frankfurt), Asia Pacific (Seoul), Asia Pacific (Mumbai), US East (Ohio), Canada (Central), Europe (London), Europe (Paris), Asia Pacific (Osaka), Europe (Stockholm).
AWS Backup now supports schedule-based network bandwidth throttling for VMware. This allows you to optimize network bandwidth used for backups and restores between your VMware environment and AWS. You can throttle the network bandwidth used by the Backup gateway for VMware backups, based on a schedule you set on the Backup gateway.
This enables you to regulate network bandwidth use during peak hours, minimizing network congestion and preventing backups from consuming all your available bandwidth.
AWS Backup support for schedule-based network bandwidth throttling for VMware provides a simple way of optimizing network bandwidth use for your VMware backups. AWS Backup provides a consistent data protection experience from on premises to the cloud, helping you meet your business and regulatory compliance needs.
You can use a single policy in AWS Backup to simplify data protection and automate lifecycle management of your on-premises VMware, VMware Cloud on AWS, and VMware Cloud on AWS Outposts environments alongside the supported AWS storage, compute, and database services.
AWS Backup for VMware is available in the US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), South America (São Paulo), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo, Osaka), Middle East (Bahrain), Africa (Cape Town) and AWS GovCloud (US) Regions.
Amazon QuickSight is now available in Stockholm and Paris regions. New accounts are able to sign up for QuickSight with Stockholm or Paris as their primary region, making SPICE capacity available in the region and ensuring proximity to AWS and on-premises data sources. Existing users can switch regions with the region switcher and create SPICE datasets in those regions.
With this launch, QuickSight is now available in 16 regions globally: US East (Ohio and N. Virginia), US West (Oregon), Europe (Stockholm, Paris, Frankfurt, Ireland and London), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), South America (São Paulo) and GovCloud (US-West). Learn more about available regions here.
With Amazon Athena, you can run SQL queries on data stored in relational, non-relational, object, and custom data sources without the need to pre-process or move data to another storage solution. Starting this week, you can use Athena to query real-time streaming data held in Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka.
Federated queries in Athena allow you to use your SQL expertise to extract insights from multiple data sources and for use cases spanning interactive analysis, business intelligence dashboards, and more. This week’s release further expands the number and type of data sources that you can query with Athena’s standard SQL interface.
For example, you can now run analytical queries on real-time streaming data held in a Kafka topic and join it with data in additional Kafka topics or data in your Amazon S3 data lake. This reduces the friction of having to first configure MSK to write data to Amazon S3 before it can be analyzed with Athena.
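A query of this kind might look like the following sketch, assuming a hypothetical Kafka data source connector named my_kafka and illustrative schema, table, and column names:

```python
# Illustrative Athena SQL joining a Kafka topic (exposed through a
# hypothetical data source connector named "my_kafka") with an
# S3-backed Glue Data Catalog table; all names are assumptions.
query = """
SELECT o.order_id, o.amount, c.segment
FROM "my_kafka"."sales"."orders" AS o                 -- streaming Kafka topic
JOIN "awsdatacatalog"."analytics"."customers" AS c    -- S3 data lake table
  ON o.customer_id = c.customer_id
WHERE o.amount > 100
""".strip()
```

The streaming source and the data lake table are queried through the same standard SQL interface, so no pre-processing or data movement is needed before the join.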
Amazon Location Service adds a new data option to the Maps feature, Open Data, based on data from OpenStreetMap (OSM), a geospatial data source supported by a global community. Developers can now easily access reliable and up-to-date OSM data with no upfront investment or specialized geospatial knowledge.
In addition to the high-quality data provider options from Esri and HERE, Open Data Maps now provides developers with more choices to integrate maps into their applications, enabling developers to leverage OSM’s flexible licensing terms and continuously improving data quality.
With Amazon Location Open Data Maps, developers can easily integrate OSM-based maps into their web or mobile applications and overlay information on top. They can now rely on the availability, latency, security, and reliability of Amazon Location Open Data Maps.
Developers no longer need to set up and operate specialized OSM tools that may not scale or perform reliably. In addition, developers no longer need to worry about data freshness, as Amazon Location refreshes the data without developer intervention.
Amazon Location Service is a location-based service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks.
Amazon Location Service provides a consistent API across high-quality location-based service data providers (Esri and HERE), as well as popular open data source OpenStreetMap, all managed through one AWS console.
Open Data Maps is available in preview in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.
This week, AWS Marketplace launched automated emails to communicate relevant details to buyers and sellers once a private offer is successfully created and available for subscription. With this launch, buyers can now directly receive a private offer deep link from a seller, allowing them to promptly review and accept a private offer.
ISVs and channel partners can now receive details like the offer ID when an offer is created, enabling them to initiate procurement workflows, internal order creation, and deal tracking.
At launch, buyers will be notified at the default email address associated with the AWS account ID targeted by the private offer. ISVs and channel partners can optionally update or add up to 10 email addresses for email notifications using AWS Marketplace Management Portal. Email recipients can opt-out of these notifications by using the unsubscribe path defined in the emails.
Starting this week, customers of AWS Cost Anomaly Detection will be able to define percentage-based thresholds when configuring their alerting preferences. AWS Cost Anomaly Detection is a cost management service that leverages advanced machine learning to identify anomalous spend and root causes, so customers can quickly take action to avoid runaway spend and bill shocks.
The percentage-based alerting configuration enables customers to capture the anomalous spend dynamically rather than having to estimate an absolute dollar amount.
To set up AWS Cost Anomaly Detection, AWS customers configure cost monitors and define when and how to receive alerts. Until now, customers have only had the option to specify a fixed-dollar threshold (the difference between actual and expected spend) for their alerts.
With percentage-based thresholds customers will have even more flexibility and choice when to be alerted based on their needs. Customers can now set rules like “send me an alert when the anomaly’s impact is 25% higher than the expected amount” or “send me an alert when the anomaly’s impact is over $40 AND/OR the impact is 30% higher than the expected amount”.
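A combined rule like the second example can be expressed as a ThresholdExpression when creating an anomaly subscription through the Cost Explorer API; the sketch below uses example values:

```python
# Sketch of the ThresholdExpression shape used by Cost Explorer's
# CreateAnomalySubscription API: alert when the anomaly's impact is over
# $40 OR more than 30% above the expected amount (values are examples).
threshold_expression = {
    "Or": [
        {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": ["40"],   # dollars
            }
        },
        {
            "Dimensions": {
                "Key": "ANOMALY_TOTAL_IMPACT_PERCENTAGE",
                "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                "Values": ["30"],   # percent above expected spend
            }
        },
    ]
}
# Passed as ThresholdExpression when creating or updating a subscription,
# e.g. boto3.client("ce").create_anomaly_subscription(...).
```

Swapping "Or" for "And" expresses the stricter rule that both conditions must hold before an alert fires.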
Percentage-based thresholds for AWS Cost Anomaly Detection are available in all AWS commercial regions, including China. You can get started with this new feature using the AWS Cost Anomaly Detection console or programmatically via the public APIs at no additional cost. To get started with percentage-based thresholds in AWS Cost Anomaly Detection, please visit our documentation.
Amazon AppFlow, a fully managed integration service that helps customers securely transfer data between AWS services and software-as-a-service (SaaS) applications, now supports Microsoft SharePoint Online as a source.
With this launch, you can now transfer your unstructured data from SharePoint Online to Amazon Simple Storage Service (S3) in just a few clicks. This SharePoint Online integration unlocks many use cases such as uploading documents (e.g. PDFs, Word documents, Excel files, etc.), images, and other information to a data lake or centralized storage built on Amazon Simple Storage Service (S3) for applications such as file backup and archival, and help desk management.
AWS are excited to announce support for concurrent account factory actions in AWS Control Tower, allowing you to create, update, or enroll up to five accounts at a time. AWS Control Tower account factory makes it easy for you to provision accounts and automate deployment of AWS resources, roles and policies within those accounts to support a range of business functions across your organization.
With this new feature enhancement, you now have the autonomy to create and manage your accounts on-demand, supporting greater automation and decreased operational workload.
When you need to submit an account action, there’s no longer a need to wait for each process to complete before submitting the next or re-registering your entire Organizational Unit (OU). You can now use the same process to submit up to five actions in succession and view completion status of each. As a result, this frees you up to focus on more important tasks while your accounts finish building in the background.
AWS Control Tower offers a streamlined way to set up and govern a new, secure, multi-account AWS environment based on AWS best practices. Customers can create new accounts using AWS Control Tower's account factory and enable governance features such as guardrails, centralized logging, and monitoring in supported AWS Regions.
Amazon AppFlow announces the release of 4 new data connectors for Software-as-a-Service (SaaS) applications. The new connectors for HubSpot, LinkedIn Pages, Productboard, and Recharge provide customers with even more data access capability for use cases such as data lake hydration, analytics and machine learning, and data retention.
See the Amazon AppFlow User Guide to learn more about how to use Amazon AppFlow.
The Amazon Chime SDK now offers a new resource for builders to get started with CodeSandbox. The Amazon Chime SDK lets developers embed intelligent real-time communication capabilities into their applications.
CodeSandbox is an online code editor and prototyping tool that makes creating and sharing web apps faster. With one click, builders using the Amazon Chime SDK for virtual meetings can review the ready-made code required to set up a meeting while seeing a prototype of the meeting experience in real time.
With the Amazon Chime SDK on CodeSandbox, developers start with a pre-built meeting; they can then add code to build additional features on top of the pre-built meeting experience and see the new meeting experience they have created.
CodeSandbox gives developers the opportunity to test out what is possible and get familiar with the Amazon Chime SDK code base. Then, when they are ready to build in their own environment, they can easily extract the code to continue development for their application.
This week, AWS announced enhancements to AWS Mainframe Modernization service including an expansion to 5 new regions and support for AWS CloudFormation, AWS PrivateLink, and AWS Key Management Service (KMS) resulting in repeatable deployment, and greater security and compliance.
The service is now available in the following regions: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (London), Europe (Paris).
The support for CloudFormation helps customers, especially those treating infrastructure as code, provision AWS Mainframe Modernization service resources such as environments and applications in a secure, efficient and repeatable way.
With the support for AWS PrivateLink, customers can privately access and manage those resources from their virtual private cloud (Amazon VPC) with security and compliance. The support for AWS KMS enables creation and management of customer owned/managed server-side keys to encrypt the data stored by AWS Mainframe Modernization service.
Starting this week, AWS Cloud WAN supports an Appliance Mode feature, giving you the ability to deploy stateful network appliances in an Amazon Virtual Private Cloud (VPC) and forward network traffic to the correct appliance for security inspection. Appliance Mode simplifies centralized deployment of security appliances in a VPC and allows using multiple Availability Zones (AZs) for high availability.
AWS Cloud WAN is a managed service that lets you build, monitor and manage a unified global network interconnecting Amazon VPCs, data centers, branches and remote users. Customers deploy security appliances in a VPC to inspect VPC-to-VPC and VPC to on-premises network traffic. Security appliances are typically stateful and need to process both forward and return traffic for a network flow.
Until now, customers needed to analyze their traffic patterns and carefully configure subnet routes to the appropriate security appliance for stateful inspection. With Appliance mode, Cloud WAN selects a single network interface in the appliance VPC to send both forward and return traffic for the life of the flow thus eliminating the need for special routing configuration.
For multi-AZ deployments, Cloud WAN symmetrically routes flow traffic through the same AZ and as a result via the same appliance for stateful inspection. Appliance mode also supports deployment of AWS Network Firewall (ANFW), an AWS managed network firewall service, and AWS Gateway Load Balancer (GWLB), a service that allows customers to deploy and manage third-party network appliances in a horizontally scalable manner.
To get started, simply enable Appliance Mode on the VPC attachment that contains your security appliances. You can enable this feature via the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs.
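As a sketch, a Cloud WAN VPC attachment request with appliance mode enabled takes roughly the following shape; all IDs and ARNs below are placeholders:

```python
# Sketch of the options used when creating (or updating) a Cloud WAN VPC
# attachment with appliance mode enabled; all IDs and ARNs are placeholders.
vpc_attachment_request = {
    "CoreNetworkId": "core-network-0123456789abcdef0",
    "VpcArn": "arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0abc",
    "SubnetArns": [
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0aaa",  # AZ a
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0bbb",  # AZ b
    ],
    # Keep forward and return traffic for a flow on the same appliance.
    "Options": {"ApplianceModeSupport": True},
}
# e.g. boto3.client("networkmanager").create_vpc_attachment(**vpc_attachment_request)
```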
Appliance mode support is available in all AWS regions where Cloud WAN is available. There are no additional charges to use this feature.
Amazon CloudWatch Metrics Insights alarms enables customers to alarm on entire fleets of dynamically changing resources with a single alarm using standard SQL queries. CloudWatch Metrics Insights offers fast, flexible, SQL-based queries.
By combining CloudWatch alarms with Metrics Insights queries, customers can now set up dynamic alarms that consistently monitor fast moving environments and alert when anomalies are detected.
With Metrics Insights alarms, you can set alarms using Metrics Insights queries that monitor multiple resources without having to worry whether the resources are short-lived. For example, you can set a single alarm that alerts when any of your EC2 instances reaches a high threshold for CPU utilization, and the alarm will also evaluate new instances launched afterwards.
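A fleet-wide CPU alarm of this kind can be driven by a Metrics Insights query along these lines:

```python
# A CloudWatch Metrics Insights query of the kind an alarm can evaluate:
# the maximum CPUUtilization across every EC2 instance, including
# instances launched after the alarm is created.
metrics_insights_query = (
    'SELECT MAX(CPUUtilization) '
    'FROM SCHEMA("AWS/EC2", InstanceId)'
)
```

An alarm on this query fires when any instance crosses the threshold; adding GROUP BY InstanceId instead charts a separate series per instance.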
To get started, click the All metrics link under Metrics in the left navigation pane of the CloudWatch console and choose the Query tab, where you can create your alarms using standard SQL queries. You can also use the CloudWatch API, AWS CloudFormation, or the AWS Cloud Development Kit.
Amazon EKS now supports advanced configuration of cluster add-ons, enabling you to customize add-on properties to help you meet performance, compliance, or additional requirements not handled by default settings.
Configuration can be applied to add-ons either during cluster creation or at any time after the cluster is created. Amazon EKS now supports configuration for the following add-ons: Amazon VPC CNI, CoreDNS, kube-proxy, EBS CSI driver, and AWS Distro for OpenTelemetry. As an example, you can now install and configure Amazon VPC CNI to leverage EC2 prefixes for increased pod density on cluster worker nodes.
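The prefix delegation example could be configured with add-on configuration values along these lines; treat the exact keys as assumptions, since each add-on publishes its own configuration schema:

```python
import json

# Sketch of configuration values for the Amazon VPC CNI add-on that
# enable EC2 prefix delegation for higher pod density; the exact schema
# is defined by the add-on, so treat these keys as assumptions.
configuration_values = {
    "env": {
        "ENABLE_PREFIX_DELEGATION": "true",  # assign /28 prefixes to ENIs
        "WARM_PREFIX_TARGET": "1",           # keep one spare prefix warm
    }
}
payload = json.dumps(configuration_values)
# e.g. aws eks update-addon --addon-name vpc-cni \
#        --configuration-values "$payload"
```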
Amazon EKS add-ons configuration is available in all commercial and AWS GovCloud (US) Regions.
An add-on is software that provides operational capabilities to Kubernetes applications, but is not specific to the application. This includes software such as observability agents or Kubernetes drivers that allow the cluster to interact with underlying AWS resources for networking, compute, and storage.
Amazon EKS add-ons provide lifecycle management of a curated set of software which include the latest security patches, bug fixes, and are validated by AWS to work with your Kubernetes clusters. Amazon EKS add-ons helps you keep your clusters secure and stable and reduce the amount of work that you need to do in order to install, configure, and update operational software.
This week, AWS announced the release of automatically generated feature-level visualizations in Amazon SageMaker Data Wrangler. Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes.
With Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface.
Data Wrangler offers a variety of configurable visualization options, from general data visualizations such as histograms, scatter plots, and table summaries to advanced visualizations such as anomaly detection and seasonal-trend decomposition for time series data, and data leakage and feature bias detection for machine learning needs.
Starting this week, SageMaker Data Wrangler automatically generates visualizations for each feature in the dataset. You’ll see these visualizations at the top of each column in the dataset after your dataset is imported. This automation further cuts the undifferentiated heavy lifting for data scientists by automatically generating insights related to data distributions and data quality at feature level.
With the automatically generated visualizations, you can immediately get insights related to data distributions and data types without writing a single line of code. The insights help you easily detect data quality issues such as outliers and missing or invalid values for each column in the dataset. Further, you can hover over the visualizations to see detailed statistics such as count and percentage.
This feature is generally available and automatically activated in all AWS Regions that Data Wrangler currently supports at no additional charge.
Amazon SageMaker Ground Truth synthetic data, a premium tailored service to generate labeled synthetic data, now supports dynamic 3D environments. The service now supports full 3D scenes, 3D depth maps, multiple cameras in a scene, moving objects on a conveyor belt, and generation of auto-labeled video data.
These features enable synthetic data generation for dynamic 3D environments for customer use cases in manufacturing, warehouse robotics, food packaging, retail, autonomous mobility and smart home.
Amazon SageMaker Ground Truth synthetic data is generally available in the US East (N. Virginia) Region.
Amazon Neptune now supports the W3C RDF Concise Bounded Description (CBD) for the SPARQL query language. The Concise Bounded Description of an RDF resource (that is, a node in an RDF graph) is the smallest subgraph around that node in which no leaf node is blank (a blank node is an anonymous resource with no URI). A CBD query starts from a node, then recursively traverses the graph beyond blank nodes until it finds a node with an identifier.
For example, a CBD in a patient 360 healthcare application can be used to describe all the information related to a patient (a node). Here, the CBD query is a way to break the graph into small subgraphs that could be handled as 'objects', to facilitate the retrieval or to transactionally change its content.
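A CBD retrieval of this kind is issued with a SPARQL DESCRIBE query; the IRI below is a hypothetical patient resource:

```python
# Illustrative SPARQL DESCRIBE query: with CBD semantics, DESCRIBE
# returns the Concise Bounded Description of the named node. The IRI
# is a hypothetical patient resource, not a real endpoint.
sparql = """
DESCRIBE <http://example.org/patient/12345>
""".strip()
```

The result is the patient-centered subgraph, which an application can treat as a single 'object' for retrieval or transactional update.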
To get started with CBD queries, visit the Amazon Neptune SPARQL DESCRIBE documentation.
Starting this week, AWS customers can use Amazon Kendra to build intelligent search applications in the Asia Pacific (Mumbai) AWS Region.
Amazon Kendra is a highly accurate intelligent search service powered by machine learning. Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they are looking for, even when it’s scattered across multiple locations and content repositories within your organization.
You can now bring machine learning (ML) models built anywhere into Amazon SageMaker Canvas and generate predictions, to address a wide range of business problems. SageMaker Canvas is a visual interface that enables business analysts to generate accurate ML predictions on their own — without requiring any ML experience or having to write a single line of code.
Today, hundreds of ML models are built and trained using different tools and in heterogeneous environments. Quite often, business teams could benefit from ML models already built by data scientists to solve business problems, rather than starting from scratch.
However, it is not easy to use these models outside the environments they are built in due to stringent technical requirements, rigidity of tools, and manual processes to import models. This forces users to often rebuild ML models resulting in duplication of efforts, spending additional time and resources, and limiting democratization of ML.
Amazon SageMaker Canvas removes these limitations and the heavy lifting needed to import models between environments. Starting this week, data scientists can share ML models built anywhere with business analysts in SageMaker Canvas, so predictions can be generated on those models directly in SageMaker Canvas.
ML models using tabular data and built anywhere can be imported into SageMaker Canvas once they are registered in the Amazon SageMaker Model Registry. Additionally, data scientists can share models trained in Amazon SageMaker Autopilot and Amazon SageMaker JumpStart so business analysts can generate predictions on those models in SageMaker Canvas.
Finally, you can now share models built in SageMaker Canvas with data scientists using SageMaker Studio for review, update, and feedback. Data scientists can then share their feedback or updates with you, so you can analyze and generate predictions on updated model versions within SageMaker Canvas.
The ability to generate predictions in Amazon SageMaker Canvas on imported models built anywhere is now available in all AWS regions where SageMaker Canvas is supported.
Amazon Kinesis Video Streams now offers a simple, efficient and cost-effective way to connect to IP cameras on customer premises, locally record and store video from those cameras, and stream videos to the cloud on a customer defined schedule for long term storage, playback and analytical processing.
You can download the KVS edge runtime agent and deploy it on your on-premises edge compute devices. Alternatively, you can easily deploy it in Docker containers running on Amazon EC2 instances. Once deployed, you can use the new Amazon Kinesis Video Streams APIs to update video recording and cloud uploading configurations. The feature works with any IP camera that can stream over the RTSP protocol, and requires no additional firmware deployment on the cameras.
AWS offer the following Amazon Kinesis Video Streams installations for the edge runtime agent:
- On Snowball Edge: You can run the Amazon Kinesis Video Streams edge runtime agent on AWS Snowball Edge devices
- As a Greengrass component: You can install the Amazon Kinesis Video Streams edge runtime agent as a Greengrass component on any Greengrass certified device
- On a native IoT deployment: You can install the Amazon Kinesis Video Streams edge runtime agent natively on any compute instance; the Edge SDK leverages AWS IoT Core for managing edge configuration through the Edge cloud APIs
KVS edge runtime and APIs are available in the US West (Oregon) Region. AWS will not charge for the use of this feature during the preview period, though you may still incur KVS or other AWS costs associated with usage.
Contact Lens for Amazon Connect now provides conversational analytics in Africa (Cape Town) region. This launch adds to the list of regions that Contact Lens’ conversational analytics already supports: US East (Northern Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).
Contact Lens for Amazon Connect provides conversational analytics capabilities for both voice and chat channels, enabling businesses to easily access contact transcripts, better understand customer sentiment, redact sensitive customer information, and monitor agent compliance with company guidelines to improve agent performance and customer experience.
Amazon EMR on EKS now supports accelerated computing over graphics processing unit (GPU) instance types using Nvidia RAPIDS Accelerator for Apache Spark. The growing adoption of artificial intelligence (AI) and machine learning (ML) in analytics has increased the need for processing data quickly and cost efficiently with GPUs.
Nvidia RAPIDS Accelerator for Apache Spark helps customers leverage the benefit of GPU performance while saving infrastructure costs. With this release, EMR on EKS customers can use the RAPIDS Accelerator simply by specifying the Spark-RAPIDS release label when calling the EMR on EKS API.
Until now, EMR on EKS customers had to create a custom image to use the Nvidia RAPIDS Accelerator, which required engineering and testing effort. In addition, with every new Nvidia RAPIDS release, bug fix, or security update, customers had to rebuild the custom image and go through testing again.
Starting with EMR 6.9, EMR on EKS is introducing a new Nvidia RAPIDS Accelerator for Spark image. Customers can use the same StartJobRun API to run their Spark jobs, and simply specify a new Spark-RAPIDS release label to leverage RAPIDS Accelerator on an EKS cluster with GPU supported instance type.
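As a sketch of how the new release label slots into a job submission, the request body for StartJobRun might look like the following. The release label string, identifiers, and Spark configuration here are illustrative assumptions, not confirmed values:

```python
# Hypothetical sketch of an EMR on EKS StartJobRun request body that
# selects the Spark-RAPIDS image via the release label. The label name,
# ARNs, and S3 paths below are assumptions for illustration only.
def rapids_job_request(virtual_cluster_id: str, role_arn: str, entry_point: str) -> dict:
    return {
        "name": "rapids-spark-job",
        "virtualClusterId": virtual_cluster_id,
        "executionRoleArn": role_arn,
        # Assumed Spark-RAPIDS release label for EMR 6.9
        "releaseLabel": "emr-6.9.0-spark-rapids-latest",
        "jobDriver": {
            "sparkSubmitJobDriver": {
                "entryPoint": entry_point,
                "sparkSubmitParameters": "--conf spark.rapids.sql.enabled=true",
            }
        },
    }

req = rapids_job_request(
    "abc123",
    "arn:aws:iam::111122223333:role/emr-eks-job-role",
    "s3://my-bucket/etl.py",
)
print(req["releaseLabel"])
```

The point is that the image selection collapses to one field in the request, rather than a custom image build and test cycle.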
Amazon EBS direct APIs now support the IPv6 protocol, allowing applications to connect to EBS direct APIs over IPv6. EBS direct APIs enable customers to simplify their backup and recovery workflows by directly creating and reading EBS snapshots via APIs.
With this change, customers can meet their IPv6 compliance needs, integrate with existing IPv6-based on-premises applications, and remove the need for expensive networking equipment to handle the address translation between IPv4 and IPv6.
To use this new capability, you can configure your applications to use the new Amazon EBS direct API dual-stack endpoints which support both IPv4 and IPv6. These endpoints have the format ebs.region.api.aws or ebs-fips.region.api.aws.
For example, the dual-stack endpoint in the US-East-1 (N. Virginia) Region is ebs.us-east-1.api.aws. When you make a request to a dual-stack Amazon EBS direct APIs endpoint, the endpoint resolves to an IPv6 or an IPv4 address, depending on the protocol used by your network and client.
You can use Amazon EBS direct APIs with the IPv6 protocol in all AWS regions where EBS direct APIs are available.
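The dual-stack endpoint naming rule above can be captured in a small helper. This is only a sketch of the naming convention described in the announcement; the `ebs`/`ebs-fips` host prefixes are the ones stated above:

```python
def ebs_dualstack_endpoint(region: str, fips: bool = False) -> str:
    """Build the dual-stack EBS direct APIs endpoint URL for a region."""
    host = "ebs-fips" if fips else "ebs"
    return f"https://{host}.{region}.api.aws"

# The US-East-1 example from the announcement:
print(ebs_dualstack_endpoint("us-east-1"))  # https://ebs.us-east-1.api.aws
```

Passing the resulting URL as the client's endpoint (for example, via an SDK's endpoint override) lets the endpoint resolve to IPv6 or IPv4 depending on your network and client.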
Starting with a recent Amazon Neptune engine version, customers can now use real-time inductive inference with Amazon Neptune ML to enable machine learning (ML) predictions on nodes, edges and properties (entities) that were added to the graph after the ML model training process.
With this launch, customers can make predictions on new data without requiring an update to their ML models. Amazon Neptune ML is powered by Amazon SageMaker and uses Graph Neural Networks (GNNs), a machine learning technique purpose-built for graphs. Based on published research from Stanford University, GNNs can improve the accuracy of most predictions for graphs by over 50% compared to predictions made using non-graph methods.
Customers often need real-time predictions for use cases like fraud detection, product recommendations, and identity resolution. For example, when a new user attempts to create an account on an e-commerce platform, the business may want to generate a fraud prediction score using machine learning and take risk mitigation actions like blocking the account creation or holding it for manual review.
With real time inductive inference for Neptune ML, customers can get near real-time predictions on new data by using existing Neptune ML models without retraining their ML models each time. Additionally, customers can now train and deploy Neptune ML models faster and save costs by training on a representative sample of their graph data and then deploying it to make predictions on any entity in the graph.
For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table. There are no additional charges for using Amazon Neptune ML real-time inductive inference. You only pay for the resources provisioned such as Amazon Neptune, Amazon SageMaker, Amazon CloudWatch, and Amazon S3.
Amazon Relational Database Service (Amazon RDS) for Oracle now supports memory optimized X2idn, X2iedn, and X2iezn instances designed to deliver fast performance for workloads that process large data sets in memory.
X2idn instances are powered by Intel Xeon Scalable processors (Ice Lake) running up to 3.5 GHz. X2idn instances are ideal for high compute (up to 128 vCPUs), large memory (up to 2 TB), and storage throughput requirements (up to 256K IOPS), making them a good choice for customers running memory-intensive workloads with a 16:1 ratio of memory to vCPU.
X2iedn instances are powered by Intel Xeon Scalable processors (Ice Lake) running up to 3.5 GHz. X2iedn instances are targeted at enterprise-class high performance databases with high compute (up to 128 vCPUs), large memory (up to 4 TB), and storage throughput requirements (up to 256K IOPS), with a 32:1 ratio of memory to vCPU. X2iezn instances are powered by the fastest Intel Xeon Scalable processor (Cascade Lake) in the cloud, running up to 4.5 GHz. X2iezn instances are ideal for both Oracle Database Standard Edition 2 (SE2) and Enterprise Edition (EE) Bring Your Own License (BYOL) customers who do not require a large memory footprint but want to optimize licensing costs, with a 32:1 ratio of memory to vCPU on available sizes.
Both X2idn and X2iedn instances support the instance store for temporary tablespace and the Database Smart Flash Cache. For more information, read the Storing temporary data in an RDS for Oracle instance store documentation.
X2idn and X2iedn instances are available today in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo) regions.
X2idn instances are available in 3 sizes with 64, 96, and 128 vCPUs. X2iedn instances are available in 7 sizes with 4, 8, 16, 32, 64, 96, and 128 vCPUs. X2iezn instances are available today in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) regions. X2iezn instances are available in 5 sizes with 8, 16, 24, 32, and 48 vCPUs.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you.
Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code.
You can now enable granular access controls for users by configuring resource tags within the Amazon Connect admin website. Tagging a user allows you to configure granular permissions and define who is able to access a specific user record.
For example, you can now tag users with Team:Innovators and then only let the Innovators team manager see or edit these users. To learn more about tagging users and tag-based access controls, see the Connect Administrator Guide.
Amazon Relational Database Service (RDS) Proxy now supports Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for PostgreSQL running major version 14. PostgreSQL 14 includes performance improvements for parallel queries, heavily-concurrent workloads, partitioned tables, logical replication, and vacuuming.
PostgreSQL 14 also improves functionality with new capabilities. For example, you can cancel long-running queries if a client disconnects, and you can close idle sessions if they time out. With this launch, you can enforce SCRAM (Salted Challenge Response Authentication Mechanism) password-based authentication for the proxy, making connections from your applications more secure.
RDS Proxy is a fully managed and highly available database proxy for Aurora and RDS databases. RDS Proxy helps improve application scalability, resiliency, and security.
Starting in April 2023, Amazon S3 will introduce two new default bucket security settings by automatically enabling S3 Block Public Access and disabling S3 access control lists (ACLs) for all new S3 buckets.
Once complete, these defaults will apply to all new buckets regardless of how they are created, including AWS CLI, APIs, SDKs, and AWS CloudFormation. These defaults have been in place for buckets created in the S3 management console since the two features became available in 2018 and 2021, respectively, and are recommended security best practices. There is no change for existing buckets.
Amazon S3 buckets are and always have been private by default. Only the bucket owner can access the bucket or choose to grant access to other users. Amazon S3 added Block Public Access in 2018 to prevent granting public access to S3 buckets, and the ability to disable ACLs in 2021 in favor of using AWS Identity and Access Management (IAM) policies as a simplified and more flexible access control alternative.
Since then, millions of customers have adopted these settings as best practices to protect their buckets and simplify their access management. As the new defaults, these settings automatically extend a simplified and secure access management posture to all new S3 buckets.
With these new defaults, the few applications that need their buckets to be publicly accessible or use ACLs must deliberately configure their buckets to be public or use ACLs. In these cases, you may need to update automation scripts, AWS CloudFormation templates, or other infrastructure configuration tools to configure these settings.
To learn more about how to prepare for the change, read Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023 in the AWS News Blog or Default access settings for new S3 buckets FAQ in the S3 User Guide.
These new default security settings will apply to all new S3 buckets in all AWS Regions, including the AWS GovCloud Regions and the AWS China Regions. AWS will publish another What’s New Post when they start to deploy the change in April 2023, and another one when the deployment has reached all AWS Regions.
To learn more, visit S3 Block Public Access and S3 Object Ownership in the S3 User Guide. You can also find more information on these two settings in the AWS CloudFormation User Guide (S3 Block Public Access - S3 Object Ownership).
You can now enable more granular access controls for security profiles by configuring resource tags within the Amazon Connect admin website. Tagging a security profile allows you to configure granular permissions and define who is able to access a specific security profile.
For example, you can now tag all security profiles assigned to users within a business process outsourcer (BPO) with BPO:A, and then only enable administrators of that BPO to access these security profiles. To learn more about tagging security profiles and tag-based access controls, see the Connect Administrator Guide.
Amazon GuardDuty is now available in the Europe (Zurich) Region. You can now continuously monitor and detect security threats in this additional region to help protect your AWS accounts, workloads, and data.
Customers across many industries and geographies use Amazon GuardDuty, including more than 90% of AWS’s 2,000 largest customers. GuardDuty continuously monitors for malicious or unauthorized behavior to help protect your AWS resources, including your AWS accounts, EC2 workloads, access keys, EKS clusters, and data stored in Amazon S3 and Amazon Aurora.
GuardDuty can identify unusual or unauthorized activity like crypto-currency mining, access to data stores in S3 from unusual locations, or unauthorized access to Amazon Elastic Kubernetes Service (EKS) clusters. GuardDuty Malware Protection adds file scanning for workloads utilizing Amazon Elastic Block Store (EBS) volumes to detect the presence of malware.
GuardDuty continually evolves its techniques to identify indicators of compromise, such as updating machine learning (ML) models, adding new anomaly detections, and growing integrated threat intelligence to identify and prioritize potential threats.
AWS are excited to announce the general availability of v2.0 of the Amplify Library for Android! The Amplify Library for Android allows developers building apps for the Android platform to easily include features like authentication, storage, maps, and more.
This version of the library has been re-written to improve Android developers’ experience when using Auth and Storage features. The Amplify Library for Android is open source on GitHub, and we deeply appreciate the feedback we have received from the community.
Developers can use Amplify Library for Android to build apps for Android platforms with Auth, Storage, Geo and more features. Developers have access to Command Line Interface (CLI) tools to configure and manage their cloud resources.
Further, developers can access the breadth of AWS services via an escape hatch to the underlying AWS SDK for Kotlin that provides lower-level service-specific APIs.
Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. This week, Amazon Translate launches support for translation of files stored in nested S3 folders.
All files under the input S3 folder and its sub-folders will be validated and translated via Amazon Translate’s asynchronous batch translation service. Customers can now preserve their folder hierarchy when they wish to use Amazon Translate and no longer need to use a flat folder structure for their translation jobs.
This update is compatible with all the other asynchronous batch translation features. This feature will not support files stored in the root S3 folder.
Support for asynchronous batch translation of files stored in nested S3 folder structure is available in all commercial AWS regions wherever asynchronous batch translation service is available.
Amazon DevOps Guru for RDS now detects if your Amazon Aurora database is receiving a significantly larger number of SQL queries and if those queries are reading more data than usual. This new functionality will help you to discover, in the event of degraded database performance, if an application traffic change is the likely cause of the performance degradation.
If you use DevOps Guru for RDS, you don’t have to take any specific step in order to use this functionality. As long as you have DevOps Guru for RDS monitoring turned on for your Amazon Aurora databases, this new feature will be available wherever applicable.
Amazon DevOps Guru for RDS is a Machine Learning (ML) powered capability for Amazon Relational Database Service (Amazon RDS) that automatically detects and diagnoses database performance and operational issues, enabling you to resolve bottlenecks in minutes, rather than days.
Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which detects operational and performance related issues for Amazon RDS engines and dozens of other resource types. Amazon DevOps Guru for RDS expands upon the existing capabilities of Amazon DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries.
When an issue occurs, Amazon DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue. Amazon DevOps Guru for RDS is available in these regions.
Amazon DevOps Guru is an ML powered service that makes it simple to improve an application’s operational performance and availability. By analyzing application metrics, logs, events, and traces, Amazon DevOps Guru identifies behaviors that deviate from normal operating patterns and creates an insight that alerts developers with issue details.
When possible, Amazon DevOps Guru also provides proposed remedial steps via Amazon Simple Notification Service (SNS) and partner integrations, like Atlassian Opsgenie and PagerDuty. To learn more, visit the DevOps Guru product and documentation pages or post a question to the Amazon DevOps Guru forum.
Lastly, for the Amplify Library for Android, apps that use GraphQL or PubSub subscriptions automatically re-connect by default when the user's internet connection drops, providing a seamless experience for end users in poor connectivity scenarios.
Amazon EMR Serverless is a serverless option that makes it simple for data analysts and engineers to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers.
This week, AWS were excited to launch near real-time job-level metrics using Amazon CloudWatch. You can now monitor EMR Serverless application jobs by job state every minute. This makes it simple to track when jobs are running, successful, or failed.
You can also get a single view of application capacity usage and job-level metrics in a CloudWatch dashboard. To get started, deploy the dashboard provided in the emr-serverless-samples git repo to your account.
Amazon RDS for Oracle now supports copying option groups during in-region cross-account snapshot copy requests using the API, CLI, or AWS Backup. Amazon RDS uses option groups to enable and configure features that make it simple to manage your databases.
Copying the option groups with the snapshot on cross account copy allows you to restore that snapshot in the target account with the same options as the source snapshot without needing to reconfigure the option groups.
A new optional Boolean parameter, CopyOptionGroup, has been added to the CopyDBSnapshot API and CLI commands. Setting this parameter to true copies option groups from the source snapshot that are not present on the destination snapshot.
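As an illustrative sketch, a cross-account snapshot copy request that carries the option group along might look like this. The ARN and identifier are hypothetical placeholders:

```python
# Sketch of a cross-account CopyDBSnapshot request that carries the
# option group along with the snapshot. Identifiers are hypothetical.
def copy_snapshot_request(source_snapshot_arn: str, target_identifier: str) -> dict:
    return {
        "SourceDBSnapshotIdentifier": source_snapshot_arn,
        "TargetDBSnapshotIdentifier": target_identifier,
        # New optional Boolean parameter: copy the source option group too
        "CopyOptionGroup": True,
    }

req = copy_snapshot_request(
    "arn:aws:rds:us-east-1:111122223333:snapshot:source-snap",
    "restored-snap",
)
print(req["CopyOptionGroup"])
```

With the option group carried across, the snapshot can be restored in the target account without reconfiguring options first.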
Storage optimized Amazon EC2 I4i instances are now also available in the Asia Pacific (Seoul) and South America (Sao Paulo) regions. Amazon EC2 I4i instances are powered by 3rd generation Intel Xeon Scalable processors (Ice Lake) and deliver the highest local storage performance within Amazon EC2 using AWS Nitro NVMe SSDs.
Amazon EC2 I4i instances are designed for databases such as MySQL, Oracle DB, and Microsoft SQL Server, and NoSQL databases such as MongoDB, Couchbase, Aerospike, and Redis where low latency local NVMe storage is needed in order to meet application service level agreements (SLAs). I4i instances are available in 8 sizes - large, xlarge, 2xlarge, 4xlarge, 8xlarge, 16xlarge, 32xlarge and metal.
Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. Amazon Translate now supports language detection of input files for batch translations.
The batch translation service allows customers to translate files in multiple languages in one translation job by detecting the dominant source language in each file. Customers can now mix files in different languages in one S3 bucket and create one job without specifying the source language.
This feature samples the first 1,000 characters in each file, detects the dominant language as the source language by leveraging Amazon Comprehend's language detection API, and translates each file to the specified target language.
This feature is available in all commercial AWS regions wherever asynchronous batch translation service is available.
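A sketch of what a batch translation job request relying on automatic source language detection might look like. The bucket paths and role ARN are hypothetical placeholders, and `auto` is assumed to be the value that triggers per-file language detection:

```python
def batch_translation_job(input_uri: str, output_uri: str, role_arn: str) -> dict:
    """Sketch of a StartTextTranslationJob request with auto-detected source."""
    return {
        "JobName": "multi-language-batch",
        "InputDataConfig": {"S3Uri": input_uri, "ContentType": "text/plain"},
        "OutputDataConfig": {"S3Uri": output_uri},
        "DataAccessRoleArn": role_arn,
        # Assumed: "auto" requests per-file dominant-language detection
        "SourceLanguageCode": "auto",
        "TargetLanguageCodes": ["es"],
    }

job = batch_translation_job(
    "s3://my-bucket/input/",
    "s3://my-bucket/output/",
    "arn:aws:iam::111122223333:role/translate-batch-role",
)
print(job["SourceLanguageCode"])
```

Because the source language is detected per file, files in different languages can sit side by side under the same input prefix.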
Amazon Connect now allows you to configure hierarchies and resource tags for users in bulk. You can now optionally assign agent hierarchies and resource tags to each agent using the CSV bulk upload template, available on the user management page.
For example, you can import a group of users and assign both a hierarchy to denote which country the users operate in, and a tag like Department:A to indicate which department they work for. To learn more about adding users individually or in bulk, see the Amazon Connect Administrator Guide.
You can now enable granular access controls for routing profiles by configuring resource tags within the Amazon Connect admin website. Tagging a routing profile allows you to configure granular permissions and define who is able to access a specific routing profile.
For example, you can now tag routing profiles with Type:Holiday and then only let managers handling holiday contact center activity see or edit these routing profiles. To learn more about tagging routing profiles and tag-based access controls, see the Connect Administrator Guide.
You can now enable granular access controls for queues by configuring resource tags within the Amazon Connect admin website. Tagging a queue allows you to configure granular permissions and define who is able to access a specific queue.
For example, you can now tag queues with SupportLevel:Premium and then only let premium support managers see or edit these queues. To learn more about tagging queues and tag-based access controls, see the Connect Administrator Guide.
AlloyDB for PostgreSQL
Anthos Clusters on bare metal
Improved cluster lifecycle functionalities:
Upgraded from Kubernetes version 1.24 to 1.25.
Enabled customers to run the latest health and preflight checks by running the command `bmctl check cluster --check-image-version=latest`. Setting the `check-image-version` flag to `latest` ensures that clusters are examined for more recent issues, including issues discovered after a release.
Preview: Added support for control group v2 (cgroup v2).
GA: Added automatic reservation of CPU and memory resources on cluster nodes so that system daemons have the resources they require.
Optimized the consumption of resources by components such as the `preflight-check` operator.
GA: Enabled automatic and periodic health checks on all clusters.
Preview: Added support for turning on kube-proxy-free mode for cluster objects. WARNING: This operation is not reversible. Once enabled, it cannot be disabled.
Changed behavior of Dataplane V2 so that it drops a packet if no service backends are available. Previously, the packet was passed to the kernel stack.
Enabled automatic API rate limit adjustments in Dataplane V2.
Added severity level to container logs.
Enabled collection of uptime and other Kubernetes resource metrics from the kubelet summary API.
Enabled Stackdriver log forwarder in the bootstrap cluster. This log forwarder publishes bootstrap container logs to Cloud Logging.
Security and Identity:
GA: Added feature enabling cluster administrators to configure RBAC policies based on Azure Active Directory (AD) groups. Groups information for users belonging to more than 200 groups can now be retrieved.
GA: Added secure computing mode (seccomp) support. Running containers with a seccomp profile improves the security of a cluster because it restricts the system calls that containers are allowed to make to the kernel.
Added annotation in the cluster configuration file which allows customers to disable the kubelet read-only port. After disabling the read-only port, customers have to change their cluster configurations so that workloads use the kubelet secure port.
GA: Added support for UEFI booting of the guest OS. Previously, only BIOS was supported.
Preview: Enabled Terraform scripting to create VMs on an Anthos cluster. For more information, including usage instructions, an input reference, and examples, see the terraform-google-anthos-vm GitHub repository.
Preview: Added support for non-uniform memory access (NUMA) awareness. When enabled, all communication within the VM is local to the NUMA node, thus avoiding the performance cost of data transactions with remote memory locations.
Preview: Enabled multicast traffic for VMs.
Added Anthos VM Runtime preflight checks to validate hardware accelerator configuration.
Enabled configuration of storage's volume mode (block or filesystem) and access modes, such as RWO and RWX.
Enabled means to configure the storage class of a scratch space. A scratch space is sometimes required when importing or uploading a VM disk image.
Added support for configuring
Enabled ability to disable auto-installation of the guest agent binary. After the initial guest agent installation, you can set the corresponding configuration field to `false` so that the binary doesn't mount in subsequent restarts.
Enabled the support of multiple network interfaces, by default, for all clusters.
Improved security for creating a VM with `kubectl virt create`. If an initial password is specified, it is now stored in a secret and not as a VM annotation.
Reduced the permissions of the network controller.
Changed default to always use Asynchronous IO mode (AIO) in order to reduce QEMU memory pressure.
Added VM creation and disk provisioning times to Prometheus metrics.
Added support for the Tesla T4 GPU.
Enabled reset of GPU card to its original status when GPU functionality is disabled.
Enabled ability to disable Anthos VM Runtime when it's in the enabling state and custom resource definitions haven't yet been installed.
Added the following command, which allows you to display the VM screen: `kubectl virt vnc --screenshot VM_NAME`.
Fixed the IP address update for Windows guest VMs.
Fixed attaching VM disk using SATA driver.
Fixed issue so that setting `disableCDIUploadProxyVIP` to true correctly disables the upload proxy VIP.
Fixed issue so that specifying a `PersistentVolumeClaim` (PVC) with an empty underlying `PersistentVolume` (PV) correctly creates the underlying empty disk format (raw or qcow2).
Enforced VM names to follow the standard RFC1123 format.
Fixed issue so that ISO image is correctly imported from a Cloud Storage bucket.
Fixed benign crash looping of the NVIDIA device plugin and the Multi-Instance GPU (MIG) manager when all GPU cards are allocated to a VM.
Fixed issue so that the `virt-launcher` Pod can be created when advanced compute is enabled.
Anthos Clusters on VMware
Anthos clusters on VMware 1.13.3-gke.26 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.3-gke.26 runs on Kubernetes 1.24.7-gke.1700.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
- Added the `yq` tool to the admin workstation to simplify troubleshooting.
- Upgraded VMware vSphere Container Storage Plug-in from 2.5 to 2.6.2. This version bump includes support for Kubernetes version 1.24. For more information, see VMware vSphere Container Storage Plug-in 2.6 Release Notes.
- Added storage validation that checks Kubernetes PersistentVolumes and vSphere virtual disks as part of admin and user cluster upgrade preflight checks.
- Fixed an issue where `anet-operator` could be scheduled to a Windows node.
- Fixed OOM events associated with `monitoring-operator` Pods by increasing the memory limit to 1 GB.
- Fixed the issue where deleting a user cluster also deleted
App Engine standard environment Go / .Net / Java / Node.js / PHP / Python / Ruby
Bare Metal Solution
Enhancements to Bare Metal Solution resource management for SAP HANA: for Bare Metal Solution environments running SAP HANA, you can now use the Google Cloud console, gcloud CLI, and API to view and manage your Bare Metal Solution servers, storage, and networks.
For more information, see Maintaining your Bare Metal Solution environment in the SAP HANA on Bare Metal Solution deployment guide.
Any job can use a custom machine type. (Before, you could only use custom machine types by creating a job from a Compute Engine instance template.)
You can now view a list of certificates managed by Certificate Manager in your project in the Cloud Console. You can also view detailed information about each certificate. For instructions, see Manage Certificates.
Load Balancing SSL certificates, previously available in the "Certificates" tab on the "Load Balancing" page, are now also available in the Certificate Manager page in the "Classic Certificates" tab.
Chronicle has added a supported region for Chronicle customers in the UK, europe-west2.
Cloud Asset Inventory
Cloud Composer 1.20.2 and 2.1.2 release started on December 13, 2022. Get ready for upcoming changes and features as we roll out the new release to all regions. This release is in progress at the moment. Listed changes and features might not be available in some regions yet.
Data lineage is available in Preview in Cloud Composer 2.
Data lineage is a Dataplex feature that lets you track how data moves through your systems: where it comes from, where it is passed to, and what transformations are applied to it.
Fixed an issue where a failed upgrade to the latest Cloud Composer version caused further upgrade attempts to fail.
Cloud Composer 1.20.2 and 2.1.2 images are available:
- composer-1.20.2-airflow-1.10.15 (default)
- composer-2.1.2-airflow-2.3.4 (default)
Cloud Composer versions 1.17.6, 1.17.7, 2.0.0-preview.6, and 2.0.0-preview.7 have reached their end of full support period.
Cloud Data Fusion
Cloud Data Fusion is available in the following regions:
Cloud Data Loss Prevention
A new detection model is available for the STREET_ADDRESS infoType detector. The new model offers improved detection quality. You can try it out by setting the version to latest when including the STREET_ADDRESS infoType in your inspection configuration. You can still use the old model by setting the version to stable or leaving it unset. In 30 days, the new model will be promoted to stable.
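Sketching what opting in looks like in a request body — the infoTypes structure follows the DLP v2 REST API, while the surrounding config values are illustrative:

```python
import json

# Hypothetical inspect config opting the STREET_ADDRESS detector into the
# new model by pinning its version to "latest" (omitting version, or
# setting "stable", keeps the old model for now).
inspect_config = {
    "infoTypes": [
        {"name": "STREET_ADDRESS", "version": "latest"},
    ],
    "minLikelihood": "POSSIBLE",
}
print(json.dumps(inspect_config))
```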
Cloud Database Migration Service
Database Migration Service now supports high availability (HA) instances for MySQL and PostgreSQL database migrations. See the Database Migration Service documentation to learn how to configure connectivity for a high availability instance and how to configure one when creating a migration job.
You can create private DNS zones that are scoped only to a Google Cloud zone.
IDS Threat Exceptions is now Generally Available. You can disable noisy or otherwise unnecessary threat IDs by using the --threat-exceptions flag when you create or update your Cloud IDS endpoint. For more information, see the Cloud IDS overview.
The cloudfunctions.googleapis.com/v2 API now supports reading 1st gen functions, using the get and list methods. Function responses contain an Environment field that differentiates between 1st and 2nd gen functions, and you can use the filter field to restrict the response to only 2nd gen functions.
Note that 1st gen functions in europe-west5 can't be read from the v2 API, as the region is not available yet in 2nd gen.
If you are using an older version of gcloud, the gcloud functions list command may show 1st gen functions twice. Updating to a newer version of gcloud should fix this.
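As a minimal sketch of the filter field in use, the list request can carry a filter on the Environment enum as a query parameter. The project and location below are made up, and the exact filter syntax should be checked against the v2 API reference:

```python
from urllib.parse import urlencode

# Hypothetical project/location; the filter keeps only 2nd gen functions
# by matching on the Environment field of the v2 list response.
base = "https://cloudfunctions.googleapis.com/v2/projects/my-project/locations/-/functions"
query = urlencode({"filter": 'environment="GEN_2"'})
url = f"{base}?{query}"
print(url)
```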
You can now use the Observability tab on the Kubernetes Engine Workloads page to see the five workloads consuming the most of a resource. For more information, see View cluster and workload observability metrics.
You can use the new Map view on the VM Instances dashboard to visualize the health of the resources in your fleet. Using the map, you can group VMs by resource labels, like "instance group" or "zone", and color the VMs by the value of a metric, like CPU utilization, to highlight hotspots and anomalies in your fleet.
Cloud Router supports Multiprotocol BGP (MP-BGP) and can exchange IPv6 prefixes over IPv4 BGP sessions. Cloud Router supports IPv6 prefix advertisement for VPC networks with dual-stack subnets. You can enable IPv6 prefix exchange over IPv4 BGP sessions that are created for HA VPN tunnels. This feature is generally available.
You can now create a custom instance configuration and add optional read-only replicas to your custom instance configurations to scale reads and support low latency stale reads. For more information, see Regional and multi-region configurations.
Support for moving a Cloud Spanner instance is now generally available. You can request to move your Spanner instance from any instance configuration to any other instance configuration, including between regional and multi-region configurations. For more information, see Move an instance.
An update to Spanner change streams provides two new data capture types for change records:
NEW_VALUES mode captures only new values in non-key columns, and no old values. Keys are always captured.
NEW_ROW mode captures the full new row, including columns that are not included in updates. No old values are captured.
Note that existing change streams remain set to the default OLD_AND_NEW_VALUES capture type.
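For illustration, the capture type is chosen in the change stream's DDL via the value_capture_type option; the stream and table names below are made up:

```python
# Hypothetical Spanner DDL: a change stream over an "Orders" table that
# captures the full new row on every change (NEW_ROW mode).
ddl = """
CREATE CHANGE STREAM OrderStream
  FOR Orders
  OPTIONS ( value_capture_type = 'NEW_ROW' )
"""
print(ddl.strip())
```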
Cloud SQL for MySQL
You can now allow other Google Cloud services such as BigQuery to access data in Cloud SQL for MySQL and make queries against this data over a private connection. For more information, see Create instances.
Cloud SQL for PostgreSQL
You can now allow other Google Cloud services such as BigQuery to access data in Cloud SQL for PostgreSQL and make queries against this data over a private connection. For more information, see Create instances.
The Cloud SQL System insights dashboard now shows additional metrics and an events timeline. You can also use the Auto refresh function to keep the dashboard up to date.
Generally available: NVIDIA® T4 GPUs are now available in the following region and zones:
- Hong Kong, APAC (asia-east2)
For more information about using GPUs on Compute Engine, see GPU platforms.
Config Controller now uses the following versions of its included products:
- Config Connector v1.97.0
Dataplex auto data quality (AutoDQ) is now available in Preview. Dataplex auto data quality helps data users build trust in their data with a turnkey and automated product that encapsulates the entire process of data quality.
General Availability (GA) release of Dataproc 2.1 images.
New sub-minor versions of Dataproc images:
- 1.5.78-debian10, 1.5.78-rocky8, 1.5.78-ubuntu18
- 2.0.52-debian10, 2.0.52-rocky8, 2.0.52-ubuntu18
- 2.1.0-debian11, 2.1.0-rocky8, 2.1.0-ubuntu20
Upgrade Cloud Storage connector version to 2.1.9 for 1.5 images.
Upgrade Cloud Storage connector version to 2.2.9 for 2.1 images.
New Serverless Spark runtime versions: 1.0.24 and 2.0.4
Serverless Spark runtime 1.0:
- Upgrade Spark to 3.2.3
- Upgrade Cloud Storage connector version to 2.2.9
- Upgrade dependencies:
- Jetty to 9.4.49.v20220914
- ORC to 1.7.7
- Protobuf to 3.19.6
- RoaringBitmap to 0.9.35
- Scala to 2.12.17
Serverless Spark runtime 2.0:
- Upgrade Cloud Storage connector version to 2.2.9
- Upgrade Spark dependencies:
- Protobuf to 3.21.9
- RoaringBitmap to 0.9.35
Use jemalloc as the default OS memory allocator in the Dataproc Serverless for Spark runtime.
Upgrade Cloud Storage connector version to 2.2.9 in Serverless Spark runtime 1.0 and 2.0.
Backport Spark patches in Serverless Spark runtime 1.0 and 2.0.
Dialogflow CX now supports interaction logging export to BigQuery.
- Version 1.24.7-gke.900 is now the default GKE version
Cloud DNS for GKE (cluster scope) is now Generally Available. You can now configure GKE clusters with control plane version 1.24.7-gke.800, 1.25.3-gke.700 or later to use Cloud DNS as the DNS provider for in-cluster name resolution, and replace the existing DNS service based on kube-dns.
GKE Autopilot clusters may now migrate the cluster's datapath provider to Dataplane V2. Migration is triggered during a control plane upgrade (see version requirements below). The migration is complete once all nodes running the legacy datapath have been recreated. Node pools created after the control plane upgrade will be created using Dataplane V2.
For clusters running 1.24 without Dataplane V2, upgrading to 1.24.7-gke.300 or a higher 1.24 version will begin the migration to Dataplane V2.
For clusters running 1.25 without Dataplane V2, upgrading to 1.25.3-gke.200 or a higher 1.25 version will begin the migration to Dataplane V2.
Compact placement policy is now generally available. Set up a compact placement policy to specify that nodes within the node pool should be placed in closer physical proximity to each other within a zone. Having nodes closer to each other can reduce network latency between nodes, which can be useful for tightly-coupled batch workloads.
Public clusters upgraded to GKE versions 1.25 and later will eventually be migrated to use Private Service Connect (PSC) for private control plane communication. There is no price increase for using GKE public clusters running on PSC.
For information about issues with workforce identity federation, see Troubleshoot workforce identity federation.
You can now use the Google Cloud console to write IAM policy analysis results to BigQuery. This feature is generally available.
Storage Transfer Service
Storage Transfer Service now offers GA support for transferring data between file systems, including on-premises file systems and Filestore instances. This allows you to use the Transfer Service API, the gcloud command line tool, or the Cloud console to migrate data from a self-managed file system to Filestore; accelerate data transfer from an on-premises file system to a cloud file system; or move data between on-premises systems.
You can also transfer specific files or objects using a manifest for file system to file system transfers.
General Availability: VPC Peering supports the exchange of IPv6 routes between peered VPC networks.
A workflow's source and details can now be updated independently through the Cloud Console using the Source and Details tabs for quicker editing.
Microsoft Azure Releases And Updates
Replicate Azure NetApp Files volumes from one availability zone to another in a fast and cost-effective way, protecting your data from unforeseeable zone failures.
Secure, govern, and manage your hybrid servers from Azure
Introducing Azure dedicated host - Restart functionality
Azure’s WAF now supports multiple new features - SQLi and XSS detection queries, new built-in Azure policies, and increased exclusions limit with support for exclusions on bot manager rule set.
Build a business case in Azure Migrate today to understand the return on investment for migrating your servers, SQL Server deployments and ASP.NET web apps running in your VMware environment to Azure.
Encrypt your data with customer-managed keys to enable an extra layer of security for data at rest.
Simplify permission management by using Microsoft Azure Active Directory identities to authenticate with instances of Azure Database for MySQL – Flexible Server.
Authenticate to Azure Functions with Azure Active Directory using the Azure Cosmos DB extension version 4.
Several Azure Cosmos DB API names have been updated on product pages, in technical documentation, and in billing; the renaming does not affect your experience, product functionality, or performance.
Create Azure Cosmos DB for PostgreSQL clusters now in the Sweden Central and Switzerland West regions.
Experience even greater compatibility for native Apache Cassandra with materialized view.
Gain powerful tools for working with JSON formatted data in Azure Cache for Redis through the RedisJSON module.
Azure Site Recovery High Churn Support
You can now run Azure Container Apps on your own Azure Arc-enabled Kubernetes clusters (AKS and AKS-HCI).
You can now take advantage of Python 3.10 features with Azure Functions.
The latest Azure Functions Extension Bundle is now generally available.
Durable Functions make it easy to orchestrate stateful workflows as-code in a serverless environment.
You can now use the latest LTS version of Java with Azure Functions.
Easily containerize non-container applications using Draft.
You can now access guided troubleshooting information related to specific issues detected within your Azure Static Web Apps service configuration.
You can now benefit from Kubernetes 1.25 enhancements in AKS.
General availability enhancements and updates released for Azure SQL in mid-December 2022.
Scale your Azure Service Bus premium namespaces far beyond the current limitations with two new features.
Retain the latest events of an event log by using compacted event hubs or Kafka topics in Azure Event Hubs.
New features are now generally available in the Stream Analytics no-code editor, including support for built-in functions with multiple parameters and Delta Lake format support in the ADLS Gen2 output sink.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag and drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):