Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday October 28th 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places.
Source: aws.amazon.com
AWS Cloud Control API is now available in the AWS Middle East (UAE) Region
AWS Cloud Control API has expanded its availability to the AWS Middle East (UAE) Region. Cloud Control API is a set of common application programming interfaces (APIs) that is designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and leverage the latest AWS capabilities faster.
Using Cloud Control API, developers can manage the lifecycle of hundreds of AWS resources and over a dozen third-party resources with five consistent APIs instead of using distinct service-specific APIs. With Cloud Control API, AWS Partner Network (APN) Partners can automate how their solutions integrate with existing and future AWS features and services through a one-time integration, instead of spending weeks of custom development work as new resources become available. Terraform by HashiCorp, Pulumi, and Red Hat Ansible have integrated their solutions with AWS Cloud Control API.
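To give a feel for the consistent call shape, here is a minimal sketch using boto3's cloudcontrol client against the new UAE region (me-central-1); the bucket identifier is a placeholder.

```python
import boto3

# A minimal sketch of the uniform resource API, assuming boto3 credentials
# with access to the new UAE region; the bucket name is a placeholder.
cc = boto3.client("cloudcontrol", region_name="me-central-1")

# List resources of any supported type with one consistent call shape.
for page in cc.get_paginator("list_resources").paginate(TypeName="AWS::S3::Bucket"):
    for desc in page["ResourceDescriptions"]:
        print(desc["Identifier"])

# Read a single resource's current state the same way for every service.
bucket = cc.get_resource(TypeName="AWS::S3::Bucket", Identifier="my-example-bucket")
print(bucket["ResourceDescription"]["Properties"])
```

The same list/get/create/update/delete verbs work across hundreds of resource types, which is what lets tools like Terraform and Pulumi integrate once rather than per service.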
Introducing the Amazon EKS Delivery Program
Amazon Web Services (AWS) is thrilled to announce the new Amazon EKS Service Delivery specialization to highlight AWS Partners with consulting offerings that have demonstrated proven capabilities to architect, run, and operate containerized workloads on Amazon EKS.
Amazon EKS Delivery partners play a crucial role in the customer journey as customers navigate modernization of legacy applications, operations, and infrastructure. Recognizing the complexity of Kubernetes, customers seek proven methodologies, tools, and best practices for accelerating EKS modernization on AWS.
Amazon EKS Delivery Partners have experience and a deep understanding to deploy, secure, and operationalize modern applications on Amazon EKS. They leverage AWS Well-Architected principles, EKS and Kubernetes best practices to help ensure the customer’s success in their container-based modernization efforts.
Amazon EKS Delivery Partners are vetted by AWS Partner Solutions Architects through a rigorous review of their delivery model, technical skills, and by demonstrating customer success through case studies. Customers can now find validated AWS Partners that can help them implement their workloads using Kubernetes on top of EKS.
Amazon EMR PrestoDB and Trino add strict mode to help prevent cost overruns
With PrestoDB and Trino on EMR 6.8, users benefit from a configuration setting called strict mode that prevents cost overruns due to long-running queries. Customers have told us that poorly written SQL queries can sometimes run for a long time and consume resources from other business-critical workloads.
To help administrators take action on such queries, we are introducing a strict mode setting that allows warning on or rejecting certain types of queries. Examples include queries without predicates on partitioned columns that result in large table scans, queries that involve cross joins between large tables, and queries that sort a large number of rows without a limit.
You can set up the strict mode configuration during cluster creation and also override the setting using session properties. You can apply strict mode checks to SELECT, INSERT, CREATE TABLE AS SELECT, and EXPLAIN ANALYZE query types.
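As a rough illustration of the cluster-creation route, the sketch below passes a configuration classification when launching an EMR 6.8 cluster. The "trino-config" classification exists in EMR, but the strict mode property names shown are illustrative assumptions; check the EMR release guide for the exact keys.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Hypothetical strict mode settings - the property names below are assumed
# for illustration, not confirmed EMR configuration keys.
strict_mode_configuration = [
    {
        "Classification": "trino-config",
        "Properties": {
            "strict-mode-enabled": "true",   # assumed property name
            "strict-mode-action": "WARN",    # assumed: warn on or reject queries
        },
    }
]

# Passed as Configurations=strict_mode_configuration to emr.run_job_flow(...)
# when creating an EMR 6.8 cluster with Trino installed.
```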
AWS are also excited to announce that Amazon EMR PrestoDB and Trino have added new features to handle spot interruptions, helping you run your queries cost effectively and reliably. Spot Instances in Amazon EMR allow you to run big data workloads on spare Amazon EC2 capacity at a reduced cost compared to On-Demand instances.
However, Amazon EC2 can interrupt spot instances with a two-minute notification. PrestoDB/Trino queries fail when spot nodes are terminated. This has meant that customers were unable to run such workloads on spot instances and take advantage of lower costs. In EMR 6.7, AWS added a new capability to PrestoDB/Trino engine to detect spot interruptions and determine if the existing queries can complete within two minutes on those nodes.
If the queries cannot finish, we fail quickly and retry the queries on different nodes. The Amazon EMR PrestoDB/Trino engine also does not schedule new queries on spot nodes that are about to be reclaimed. With these two new features, you get the best of both worlds - improved resiliency with the PrestoDB/Trino engine on Amazon EMR, and queries that run economically on spot nodes.
Amazon EMR Hive speeds up MSCK repair and adds Parquet modular encryption
Hive users run the metastore check command with the repair table option (MSCK REPAIR TABLE) to update the partition metadata in the Hive metastore for partitions that were added to or removed from the file system (S3 or HDFS) directly. When run, the MSCK repair command must make a file system call for each partition to check whether it exists.
This step can take a long time if the table has thousands of partitions. In EMR 6.5, we introduced an optimization to the MSCK repair command in Hive to reduce the number of S3 file system calls when fetching partitions. This feature improves performance of the MSCK command (~15-20x on 10k+ partitions) thanks to the reduced number of file system calls, especially when working on tables with a large number of partitions.
Previously, you had to enable this feature by explicitly setting a flag. Starting with Amazon EMR 6.8, we further reduced the number of S3 file system calls to make MSCK repair run faster and enabled this feature by default.
In addition to the MSCK repair table optimization, AWS would also like to share that Amazon EMR Hive users can now use Parquet modular encryption to encrypt and authenticate sensitive information in Parquet files. Protecting the privacy and integrity of sensitive data at scale while keeping the Parquet functionality intact is a challenging task.
Data protection solutions such as encrypting files or the storage layer are currently used to encrypt Parquet files; however, they can lead to performance degradation. With Parquet modular encryption, you can not only enable granular access control but also preserve Parquet optimizations such as columnar projection, predicate pushdown, encoding, and compression.
Using Parquet modular encryption, Amazon EMR Hive users can protect both Parquet data and metadata, use different encryption keys for different columns, and perform partial encryption of only sensitive columns. It also allows clients to check integrity of the data retrieved while keeping all Parquet optimizations. This feature is available from Amazon EMR 6.6 release and above.
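For a feel of per-column encryption, here is a sketch using the upstream Apache Parquet modular encryption property names, shown via a PySpark write for illustration; key IDs and paths are placeholders, and exactly how these properties are wired into Hive on EMR (the crypto factory and KMS client setup) will differ per environment.

```python
from pyspark.sql import SparkSession

# Illustrative sketch only - a crypto factory and KMS client class must be
# configured with real key material for this to encrypt anything.
spark = (
    SparkSession.builder.appName("parquet-encryption-demo")
    .config(
        "spark.hadoop.parquet.crypto.factory.class",
        "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory",
    )
    .getOrCreate()
)

df = spark.createDataFrame([(1, "4111-1111")], ["id", "card_number"])

(
    df.write
    # Encrypt only the sensitive column with one key, the footer with another.
    .option("parquet.encryption.column.keys", "key1:card_number")
    .option("parquet.encryption.footer.key", "key2")
    .parquet("s3://my-example-bucket/encrypted/")
)
```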
Amazon EC2 Is4gen and Im4gn Instances are now available in Asia Pacific (Singapore) Region
Starting this week, Amazon EC2 Is4gen and Im4gn instances, the latest generation storage-optimized instances, are available in the Asia Pacific (Singapore) Region. Based on the AWS Nitro System, Im4gn and Is4gen instances are powered by Arm-based AWS Graviton2 processors and are built using AWS Nitro SSDs, which enable up to 60% lower latency and up to 75% lower latency variability compared to the third generation of storage-optimized instances.
Im4gn instances deliver up to 40% better price-performance and up to 44% lower cost per TB versus comparable current generation x86-based storage optimized instances for applications requiring dense local SSD storage and higher compute performance such as MySQL, NoSQL, and file systems.
The Is4gen instances provide the lowest cost per TB and highest density per vCPU of SSD storage in Amazon EC2 for applications such as stream processing and monitoring, real-time databases, and log analytics, that require high random I/O access to large amounts of local SSD data. These instances enable 15% lower cost per TB of storage and up to 48% better compute performance compared to I3en instances.
These instances can utilize the Elastic Fabric Adapter (EFA) for workloads that require high levels of inter-node communication. Workloads on these instances will continue to take advantage of the security, scalability and reliability of Amazon’s Virtual Private Cloud (VPC).
Announcing General Availability of Amazon EC2 i4i.metal instance for VMware Cloud on AWS
AWS are excited to announce the general availability of the i4i.metal instance for VMware Cloud on AWS. Designed for storage I/O intensive workloads, the i4i.metal instance is powered by 3rd generation Intel® Xeon® Ice Lake processors with an all-core turbo frequency of 3.5 GHz, offering up to 30% better compute price performance over i3 instances. This new instance is intended for VMware Cloud on AWS customers looking for an optimal balance of compute, memory, and storage configuration.
Compared to the i3 instances, i4i.metal offers twice the storage at 30TiB of raw local NVMe flash storage, around 1.8 times more processing power at 128 vCPUs, processor running 1.5 times faster at 3.5GHz Turbo, twice the memory at 1,024 GiB and networking speed up to 75Gbps, 3x faster than i3.metal. The i4i.metal instance is a highly secure instance type with support for host-to-host encryption enabled by default. The high memory, CPU, and storage configuration of the i4i.metal instances is ideal for workloads that need fast access to datasets on local storage.
Amazon EC2 now supports replacing the root volume of a running instance using an updated AMI
Starting this week, Amazon EC2 supports replacing an instance's root volume using an updated AMI without requiring customers to stop the instance. This allows customers to easily update their applications and guest operating system while retaining the instance store data, networking, and IAM configuration.
AWS Customers can use the feature to patch their software quickly without having to incur the operational costs of instance store data backups or replication. Customers with stateful workloads can use the feature to ensure that their software is up-to-date and improve their security posture by patching more frequently.
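A minimal sketch of the API call behind this, using boto3; the instance and AMI IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Replace a running instance's root volume from an updated AMI. The instance
# reboots but is not stopped, so instance store data and networking persist.
task = ec2.create_replace_root_volume_task(
    InstanceId="i-0123456789abcdef0",
    ImageId="ami-0123456789abcdef0",   # the updated AMI to restore from
    DeleteReplacedRootVolume=True,     # clean up the old root volume
)
print(task["ReplaceRootVolumeTask"]["TaskState"])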
Amazon RDS for Oracle now supports memory optimized R5b instance types
Amazon RDS for Oracle now supports memory optimized R5b instance types for Bring Your Own License (BYOL) model, featuring up to 4x the RAM per vCPU of existing R5b instance classes to better fit your workloads.
Many Oracle database workloads require high memory, storage, and I/O bandwidth but can safely reduce the number of vCPUs without impacting application performance. R5b memory optimized instances come in various configurations from 2 vCPUs to 48 vCPUs, with memory from 32 GiB to 768 GiB and up to a 32:1 memory-to-vCPU ratio. These configurations allow you to right-size your Oracle workloads.
Starting this week, you can launch additional memory configurations of the R5b instance class in the Amazon RDS Management Console or using the AWS CLI in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland) and Europe (London).
Introducing the AWS Control Tower delivery and AWS Control Tower ready program
AWS Control Tower provides the easiest way to set up and govern a secure, multi-account AWS environment, and reduces the complexity and time required to establish governance supporting multiple AWS accounts.
AWS are excited to introduce AWS Control Tower Delivery Partners offering consulting services on AWS Control Tower, and AWS Control Tower Ready Partners offering software products that support AWS Control Tower. AWS Control Tower Partners receive prescriptive guidance to build solutions on Control Tower, and their offerings are vetted by AWS Solutions Architects.
AWS customers can now choose AWS Control Tower Partner consulting services and software products that complement AWS Control Tower directly from the AWS Control Tower console or AWS Marketplace. Consulting services include custom controls, account factory, regulatory compliance solutions, and enterprise-specific solutions such as Internet of Things (IoT), containerization, and data lakes.
Software products include identity management, security information and event management (SIEM), centralized networking, operational intelligence, and cost management.
Announcing general availability of SQL Notebooks support in Amazon Redshift Query Editor
Amazon Redshift introduces a new way to work on multiple SQL queries by organizing them into a single Notebook with documentation, visualization, and collaboration capabilities. The new SQL Notebook interface available in Amazon Redshift Query Editor v2 allows users such as data analysts and data scientists to run data analytics more efficiently by keeping relevant queries and the information together for ease of use.
Data users engaging in advanced analytics work on multiple queries at a time to perform various tasks for their data analysis. Query Editor V2 helps you organize related queries by saving them together in a folder, or combining them into a single saved query with multiple statements. SQL Notebooks support provides an alternative way to embed the queries required for a complete data analysis in a single document using SQL cells.
You can effortlessly visualize the query results. You can share your SQL Notebooks with team members and use Markdown cells to document your queries, easing the learning curve for others working on even the most complicated data analysis tasks. You can use the built-in version history feature to track changes in your SQL and Markdown cells. Using the export and import capabilities, you can effortlessly move your SQL Notebooks across AWS accounts and regions when required.
Amazon MSK Connect now supports private DNS hostnames for enhanced security
Amazon MSK Connect now supports Private DNS hostnames for enhanced security. With Private DNS hostname support in MSK Connect, you can configure connectors to reference public or private domain names. Connectors will use the DNS servers configured in your VPC’s DHCP option set to resolve domain names. You can now use MSK Connect to privately connect with databases, data warehouses and other resources in your VPC to comply with your security needs.
The private DNS feature is available in the commercial AWS regions where Amazon MSK Connect is available. MSK Connect is available in the following AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon). You can get started with this feature by accessing MSK Connect from the Amazon MSK console or the AWS CLI. To learn more, visit the Amazon MSK Connect Developer Guide.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a streaming data service that makes it easy for users to run Apache Kafka applications on AWS. Amazon MSK Connect (MSKC) allows users to deploy and operate connectors in a fully managed environment. A connector can either be a source or a sink connector to stream data into or out of Amazon MSK and enables a simple no-code data integration between databases, data warehouses, etc. along with advanced transformation capabilities.
Introducing the Amazon OpenSearch Service delivery program
Amazon Web Services (AWS) is pleased to announce the new Amazon OpenSearch Service Delivery specialization for AWS Partners that help customers perform interactive log analytics, real-time application monitoring, website search, and more. Amazon OpenSearch Service manages software installation, upgrades, patching, scaling (up to 3 PB), and cross-region replication with no downtime. Amazon OpenSearch Service is also bundled with a dashboard visualization tool, OpenSearch Dashboards. This tool helps visualize not only log and trace data, but also machine-learning powered results for anomaly detection and search relevance ranking.
The Amazon OpenSearch Service Delivery specialization provides customers with a vetted list of AWS Partners with proven success in delivering Amazon OpenSearch Service solutions. Customers can browse validated partners with confidence, knowing these partners have deep technical knowledge and valuable experience delivering solutions with Amazon OpenSearch Service.
AWS WAF launches Challenge rule action and Bot Control for Targeted Bots
AWS WAF announces AWS Bot Control for Targeted Bots, a new feature of AWS Bot Control that provides protection against bots that attempt to evade detection and target applications such as e-commerce, retail, and financial services websites. Traffic from targeted bots can result in a poor user experience by competing against legitimate user traffic for website access to high-demand inventory, increasing business risk through chargebacks from fraudulent transactions, and increasing infrastructure costs.
AWS WAF previously released AWS Bot Control, which protects against common bots. With AWS Bot Control for Targeted Bots, customers can easily enable advanced bot detection techniques, such as browser interrogation, fingerprinting, and behavioral analysis to protect against targeted bot attacks. AWS Bot Control for Targeted Bots creates intelligent baselines and automatically applies mitigations, such as dynamic rate-based limiting when anomalous access patterns are detected, without the need for users to configure these thresholds. Learn more about AWS WAF Bot Control by visiting the Bot Control feature page.
To ensure that you use Targeted Bots only on the requests that need it, you can use WAF scope-down statements. When a request is evaluated by Targeted Bots, AWS WAF creates a baseline for each device and uses machine learning models to identify anomalous access patterns, rate-limiting them when a dynamic threshold is exceeded.
With the recommended JavaScript integration, you can receive additional telemetry on devices to better protect your applications against targeted bots. Targeted Bots also includes a new WAF rule action ‘Challenge’ that enforces ‘aws-waf-token’ token generation and is available with all AWS WAF rules. Lastly, you can override the rule action for any WAF rule, and use captcha and challenge as new rule actions, in addition to blocking or allowing the requests.
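As a sketch of what this looks like in a web ACL, the rule below runs the Bot Control managed rule group at the new TARGETED inspection level (passed to wafv2's update_web_acl); names, priorities, and metric names are placeholders.

```python
# A WAF rule enabling Bot Control's targeted inspection level.
bot_control_rule = {
    "Name": "targeted-bot-control",
    "Priority": 1,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            "ManagedRuleGroupConfigs": [
                {"AWSManagedRulesBotControlRuleSet": {"InspectionLevel": "TARGETED"}}
            ],
        }
    },
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "targeted-bot-control",
    },
}

# The new Challenge action can also be set on your own rules, enforcing
# aws-waf-token acquisition before a request is allowed through.
challenge_rule_action = {"Challenge": {}}
```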
Amazon QuickSight launches Customer Managed Keys (CMK) for SPICE data encryption
Amazon QuickSight launches a new capability for account administrators to use Customer Managed Keys (CMK) to encrypt and manage SPICE datasets. Previously, QuickSight fully managed the protection of customer data stored inside the QuickSight service.
The new Customer Managed Keys (CMK) capability benefits QuickSight users in two ways: they can revoke access to SPICE datasets with one click, and they can maintain an auditable log that tracks how SPICE datasets are accessed.
This feature increases the level of security and transparency, gives customers more control over their SPICE datasets, and satisfies security requirements set by company and government agency policies. For further details, visit here.
Amazon Aurora supports cluster export to S3
Amazon Aurora now supports exporting database clusters directly to S3 in Apache Parquet format without creating a snapshot first. Customers can also initiate an export to S3 directly from the Aurora database cluster, saving them time, cost and the extra overhead of creating/retaining snapshots to export data to S3.
Exporting data to S3 from an Aurora cluster does not impact the performance of the Aurora database. The exported data in Apache Parquet format is portable, so customers can analyze their data with query services such as Amazon Athena or big data processing frameworks such as Apache Spark. To get started, visit the Amazon RDS Management Console or use the AWS CLI to export your Aurora database clusters directly to S3 without taking a manual snapshot.
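A minimal sketch of a direct cluster export with boto3; ARNs, bucket, and role names are placeholders, and a KMS key is required to encrypt the exported Parquet files.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Export directly from the live Aurora cluster - no snapshot step needed.
export = rds.start_export_task(
    ExportTaskIdentifier="my-cluster-export-2022-10-28",
    SourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    S3BucketName="my-example-export-bucket",
    IamRoleArn="arn:aws:iam::123456789012:role/MyExportRole",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/abcd-ef01",
)
print(export["Status"])
```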
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
AWS Console Mobile Application adds support for AWS CloudShell
AWS Console Mobile Application users can now access AWS CloudShell in the iOS and Android applications. The Console Mobile App provides AWS CloudShell in a mobile-friendly interface that enables users to run scripts with the AWS command-line interface (AWS CLI) to interact with 250+ AWS services while on-the-go. Users also have access to an extended mobile keyboard when using AWS CloudShell in the Console Mobile App.
The extended mobile keyboard provides users with key inputs (e.g. tab, ctrl, alt, esc) that are available in the AWS CloudShell console on desktop. The Console Mobile App currently offers AWS CloudShell in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), and South America (São Paulo).
The Console Mobile App lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on-the-go. The login process supports biometrics authentication, making access to AWS resources simple, secure, and quick.
Amazon SageMaker Canvas supports tags to track and allocate costs incurred by users
AWS are excited to announce support for assigning tags to user profiles created within Amazon SageMaker. This enables you to track Amazon SageMaker Canvas usage costs categorized by users, departments, lines of business, or cost centers.
SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code. SageMaker Canvas makes it easy to access and combine data from a variety of sources, automatically clean data, and build ML models to generate accurate predictions with a few clicks.
User profiles are created within Amazon SageMaker domains to access applications such as SageMaker Canvas. A tag is a label defined as a key-value pair that helps organize AWS resources. Tags assigned to user profiles are now automatically propagated and applied to users who access SageMaker Canvas. You can categorize SageMaker Canvas model building and session usage costs incurred by these users in AWS Cost Explorer and AWS Cost and Usage Reports (AWS CUR). This enables you to track costs and charge them back to the appropriate department of the user.
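A minimal sketch of tagging an existing user profile so Canvas usage can be grouped in Cost Explorer; the ARN and tag values are placeholders.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Tags on the user profile propagate to that user's Canvas usage costs.
sm.add_tags(
    ResourceArn="arn:aws:sagemaker:us-east-1:123456789012:user-profile/d-xxxxxxxx/analyst-jane",
    Tags=[
        {"Key": "CostCenter", "Value": "marketing-analytics"},
        {"Key": "Department", "Value": "marketing"},
    ],
)
```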
Amazon Neptune Serverless is now generally available
Amazon Neptune Serverless is a new deployment option that automatically scales capacity based on the needs of the application, making it easy and cost effective for developers to run graph databases without managing database capacity.
Neptune is a fast, reliable, and fully managed graph database service for building and running applications with highly connected datasets, such as knowledge graphs, fraud graphs, identity graphs, and security graphs. With Neptune Serverless, you can run applications built on a graph database with just a few steps, and scale automatically to meet your application's needs.
If you have unpredictable and variable workloads, Neptune Serverless automatically determines and provisions the compute and memory resources to run the graph database. Database capacity scales up and down based on the application’s changing requirements to maintain consistent performance, saving up to 90% in database costs compared to provisioning at peak capacity.
With Neptune Serverless, you create a graph database for your application using any of the three most popular graph query languages: Apache TinkerPop Gremlin, openCypher, and SPARQL.
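A hedged sketch of creating a serverless cluster with boto3; identifiers and the capacity bounds (in Neptune capacity units) are placeholders.

```python
import boto3

neptune = boto3.client("neptune", region_name="us-east-1")

# The cluster scales between the configured capacity bounds automatically.
neptune.create_db_cluster(
    DBClusterIdentifier="my-serverless-graph",
    Engine="neptune",
    ServerlessV2ScalingConfiguration={"MinCapacity": 1.0, "MaxCapacity": 16.0},
)

# Instances in a serverless cluster use the special "db.serverless" class.
neptune.create_db_instance(
    DBInstanceIdentifier="my-serverless-graph-instance-1",
    DBClusterIdentifier="my-serverless-graph",
    Engine="neptune",
    DBInstanceClass="db.serverless",
)
```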
Amazon WorkSpaces now supports new features for WorkSpaces Web Access using the WorkSpaces Streaming Protocol (WSP).
These features are available at no additional charge on WorkSpaces Web Access using WSP. WSP is a high-performance cloud-native streaming protocol. WorkSpaces using WSP is available in the commercial AWS Regions in which Amazon WorkSpaces is available, as well as in AWS GovCloud (US-West).
AWS Private Certificate Authority introduces a mode for short-lived certificates
AWS Private Certificate Authority (AWS Private CA) now offers short-lived certificate mode, a lower cost mode of AWS Private CA designed for issuing short-lived certificates. With this new mode, public key infrastructure (PKI) administrators, builders, and developers can save money when issuing certificates with validity periods of 7 days or fewer.
If you use certificates to convey privileged access, such as with IAM Roles Anywhere, short-lived certificates may offer better security because they expire quickly rather than relying on the need to revoke certificates with a longer validity period. With today’s launch of short-lived certificate mode, you can now use a private CA with a dedicated mode for issuing those short-lived certificates.
Additionally, you can now align the lifetime of the certificate with the lifetime of the resource it identifies. For example, you can use short-lived certificates to identify containers for Elastic Kubernetes Service. You set the private CA mode during private CA creation; existing private CAs cannot switch modes.
The existing mode of AWS Private CA is now known as general-purpose mode and supports certificates of any validity period. Both modes have distinct pricing for the different use cases they support.
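A minimal sketch of creating a private CA in the new mode via boto3; the subject fields are placeholders.

```python
import boto3

pca = boto3.client("acm-pca", region_name="us-east-1")

ca = pca.create_certificate_authority(
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "example-short-lived-ca"},
    },
    CertificateAuthorityType="ROOT",
    # The mode is fixed at creation time; existing CAs cannot switch.
    UsageMode="SHORT_LIVED_CERTIFICATE",
)
print(ca["CertificateAuthorityArn"])
```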
Introducing AWS Toolkit for .NET Refactoring, a new Visual Studio extension
The AWS Toolkit for .NET Refactoring is a new extension for Microsoft Visual Studio 2019 and Microsoft Visual Studio 2022. The extension helps transform your legacy .NET Framework applications to a modern, cloud-optimized architecture, letting you fully leverage the benefits of reduced cost, increased uptime, and improved scalability. It extends the functionality of Porting Assistant for .NET with new features, such as testing on AWS environments directly from the Visual Studio IDE.
The AWS Toolkit for .NET Refactoring accelerates legacy .NET application transformation through the following capabilities:
Compatibility Assessment
The AWS Toolkit for .NET Refactoring extension scans your legacy .NET Framework application to identify Windows dependencies and API and package incompatibilities with newer, cross-platform .NET versions (.NET Core 3.1, .NET 5, .NET 6).
Porting Assistance
Where possible, the AWS Toolkit for .NET Refactoring extension kickstarts code modifications by making changes to project reference files and web.config files (Internet Information Services (IIS) and Active Directory (AD)) for cross-platform .NET and Linux compatibility.
Application validation on AWS
The AWS Toolkit for .NET Refactoring extension enables you to validate refactoring code changes by deploying to AWS directly from Visual Studio. The toolkit generates the required containerization artifacts to help you test your ported code on Amazon Elastic Container Service (ECS) via AWS Fargate through an endpoint URL, without leaving the Visual Studio IDE.
The AWS Toolkit for .NET Refactoring is available as an extension for Visual Studio 2019 and Visual Studio 2022. To learn more, please visit our documentation and blog. The extension is also available as part of license included Visual Studio Amazon Machine Images (AMIs) on Amazon EC2.
Amazon Connect adds real-time schedule adherence
Amazon Connect now includes the ability to view agent schedule adherence in real-time as part of the forecasting, capacity planning, and scheduling capabilities (preview). Within the real-time metrics page, contact center supervisors can identify when agents deviate from their planned schedule, enabling supervisors to quickly take action to help improve agent productivity.
For example, if agents are working but are supposed to be in training, you can use Amazon Connect real-time schedule adherence to identify those agents and remind them to join the training, improving their long-term performance and helping avoid overstaffing.
Amazon MSK adds support for Apache Kafka version 3.3.1
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 3.3.1 for new and existing clusters. Apache Kafka 3.3.1 includes several bug fixes and new features that improve performance. Some of the key features include enhancements to metrics and the partitioner. Amazon MSK will continue to use and manage Apache ZooKeeper for quorum management in this release for stability. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 3.3.1.
Amazon MSK is a fully managed service for Apache Kafka that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is 100% compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on applications and less time managing clusters.
Amazon EC2 C6i, M6i instances are now available in additional regions
Starting this week, Amazon EC2 C6i and M6i instances are available in the Asia Pacific (Osaka) and Africa (Cape Town) regions. C6i instances are an ideal fit for compute-intensive workloads such as batch processing, distributed analytics, and high performance computing (HPC). M6i instances are SAP Certified and are ideal for workloads such as web and application servers, back-end servers supporting enterprise applications, gaming servers, and caching fleets, as well as for application development environments.
Based on the AWS Nitro System, C6i and M6i instances are powered by 3rd Gen Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offering up to 15% better price performance, 33% larger instance sizes (up to 128 vCPUs), two new sizes (32xlarge and metal), 2x networking speed (up to 50 Gbps) and 2x EBS bandwidth (up to 40 Gbps) versus comparable Gen5 instances.
Amazon SageMaker Automatic Model Tuning now supports Grid Search
Amazon SageMaker Automatic Model Tuning now supports Grid Search to enable use cases that require reproducibility of hyperparameter tuning. Grid search will cover every combination of the specified hyperparameter values and yield reproducible tuning results.
Amazon SageMaker Automatic Model Tuning allows you to tune and find the most accurate version of a machine learning model by searching for the optimal set of hyperparameter configurations for your dataset using various search strategies. Before this launch, you had the option to tune your models through "Random", "Bayesian" or "Hyperband" search strategies.
Starting this week, you can choose Grid search for hyperparameter optimization. Unlike the "Random", "Bayesian", and "Hyperband" strategies, Grid search exhaustively explores each and every combination of the specified hyperparameters, so repeated runs over the same grid produce the same results. This makes Grid search the preferred choice for use cases where reproducibility of hyperparameter tuning is important.
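A hedged sketch of the tuning configuration that selects the new strategy; Grid search only accepts categorical hyperparameter ranges, and all names below are placeholders. This dict is passed, together with a training job definition, to SageMaker's create_hyper_parameter_tuning_job.

```python
tuning_job_config = {
    "Strategy": "Grid",
    "HyperParameterTuningJobObjective": {
        "Type": "Maximize",
        "MetricName": "validation:accuracy",
    },
    # With Grid, the total number of training jobs is implied by the number
    # of hyperparameter combinations (2 x 3 = 6 here).
    "ResourceLimits": {"MaxParallelTrainingJobs": 3},
    "ParameterRanges": {
        "CategoricalParameterRanges": [
            {"Name": "optimizer", "Values": ["sgd", "adam"]},
            {"Name": "batch_size", "Values": ["64", "128", "256"]},
        ]
    },
}
```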
AWS Fault Injection Simulator now supports network connectivity disruption
AWS Fault Injection Simulator (FIS) now supports network connectivity disruption as a new FIS action type. Using the new disrupt connectivity action in AWS FIS, you can inject a variety of connectivity issues as part of an AWS FIS experiment. Supported connectivity issues include disrupting all traffic, or, limiting the disruption to traffic to/from a specific Availability Zone, VPC, custom prefix list, or service (including Amazon S3 and DynamoDB). This helps you validate that your applications are resilient to a total or partial loss of connectivity.
AWS FIS is a managed service for running controlled fault injection experiments on AWS. To get started using this new FIS action type, log in to AWS FIS in the AWS Management Console and create a new experiment template. Next, select the aws:network:disrupt-connectivity action type and provide the type of connectivity issue and duration of time for connectivity to be disrupted. Then, specify the subnet(s) you want to affect. To ensure the experiment remains controlled, you can also specify Amazon CloudWatch alarms to automatically stop the experiment if a predefined threshold is met.
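A hedged sketch of such an experiment template via boto3; the ARNs and tags are placeholders, and the action parameter names and target key ("duration", "scope", "Subnets") follow the console's pattern but should be treated as assumptions to verify against the FIS documentation.

```python
import boto3

fis = boto3.client("fis", region_name="us-east-1")

template = fis.create_experiment_template(
    clientToken="disrupt-connectivity-demo",
    description="Disrupt all network traffic for tagged subnets for 5 minutes",
    roleArn="arn:aws:iam::123456789012:role/MyFisRole",
    targets={
        "target-subnets": {
            "resourceType": "aws:ec2:subnet",
            "resourceTags": {"chaos-ready": "true"},
            "selectionMode": "ALL",
        }
    },
    actions={
        "disrupt": {
            "actionId": "aws:network:disrupt-connectivity",
            "parameters": {"duration": "PT5M", "scope": "all"},  # assumed names
            "targets": {"Subnets": "target-subnets"},
        }
    },
    # A CloudWatch alarm keeps the experiment controlled.
    stopConditions=[
        {
            "source": "aws:cloudwatch:alarm",
            "value": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:MyAlarm",
        }
    ],
)
print(template["experimentTemplate"]["id"])
```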
IAM Access Analyzer supports six new resource types
AWS Identity and Access Management (IAM) Access Analyzer now supports six additional resource types to help you identify public and cross-account access from outside your AWS account and organization. These six resource types are Amazon SNS topics, Amazon EBS volume snapshots, Amazon RDS DB snapshots, Amazon RDS DB cluster snapshots, Amazon ECR repositories, and Amazon EFS file systems.
IAM Access Analyzer now analyzes resource policies, access control lists, and other access controls for these resources to make it easier for you to identify public, cross-account, and cross-organization access. These findings can help you adhere to the security best practice of least privilege and reduce unintended external access to your resources.
You can also use IAM Access Analyzer to preview and validate public and cross-account access before deploying permissions changes to production. Now, you can use IAM Access Analyzer APIs to preview access to these six additional resource types.
IAM Access Analyzer resource types are available to you at no additional cost. IAM Access Analyzer is available in the IAM console and through APIs in all AWS Regions, including the AWS GovCloud (US) Regions.
AWS DataSync adds support for self-signed certificates
AWS DataSync now supports the use of self-signed certificates when connecting to object storage locations via HTTPS. When configuring an object storage location, you can specify a self-signed X.509 (.pem) certificate that the DataSync agent will use to secure the TLS connection to your self-managed object storage server.
With this launch, you can now configure DataSync to use secure HTTPS connections with self-managed object storage systems that do not provide certificates from a trusted Certificate Authority (CA).
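A minimal sketch of creating an object storage location secured with a self-signed certificate; the endpoint, ARN, and file paths are placeholders.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Load the self-signed X.509 (.pem) certificate the agent should trust.
with open("my-selfsigned-cert.pem", "rb") as f:
    cert = f.read()

location = datasync.create_location_object_storage(
    ServerHostname="storage.internal.example.com",
    ServerProtocol="HTTPS",
    ServerPort=443,
    BucketName="backups",
    AgentArns=["arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"],
    ServerCertificate=cert,  # used to secure the TLS connection
)
print(location["LocationArn"])
```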
As a fully-managed service, DataSync removes the operational burden of online data movement, including setting up and maintaining infrastructure, building, buying and operating data transfer software, and manually executing and verifying one-time or periodic data transfers.
DataSync also has built-in security capabilities such as encryption of data in-transit and at-rest and end-to-end data integrity verification. DataSync uses a purpose-built network protocol and scale-out architecture to accelerate data movement and optimize the use of your network through bandwidth throttling controls and compression of data in-transit.
It also automatically recovers from temporary network issues and provides control and monitoring capabilities such as data transfer scheduling, include and exclude filters, and granular visibility into the transfer process through Amazon CloudWatch metrics, logs, and events.
AWS Organizations console adds central management of primary contact information
This week, AWS enhanced the AWS Organizations console to enable you to centrally view and update the primary contact information for your AWS accounts. This follows the release of the primary contact information API, which enabled you to programmatically manage primary contact information for accounts in your organization.
With this launch, you can now also use the console to easily perform this operation without logging into each account separately. AWS already launched API and Organizations console support for alternate contacts, and support for additional account settings will be available in future releases.
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Your organization’s administrators can now use the console UI in the management account to centrally manage primary contact information for member accounts without requiring credentials for each AWS account.
To update the management account primary contact information via the console, customers will still need to use the Billing Console at this time.
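For the programmatic route, here is a minimal sketch using the Account API from the management account to read and update a member account's primary contact; account IDs and contact values are placeholders.

```python
import boto3

account = boto3.client("account")

# Read a member account's current primary contact.
current = account.get_contact_information(AccountId="123456789012")
print(current["ContactInformation"]["FullName"])

# Update it centrally, without logging in to the member account.
account.put_contact_information(
    AccountId="123456789012",
    ContactInformation={
        "FullName": "Cloud Platform Team",
        "AddressLine1": "100 Example Street",
        "City": "Auckland",
        "PostalCode": "1010",
        "CountryCode": "NZ",
        "PhoneNumber": "+64-9-555-0100",
    },
)
```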
Amazon EC2 High Memory instances are now available in additional regions
Starting this week, Amazon EC2 High Memory instances with 24 TiB of memory (u-24tb1.112xlarge) are available in the Asia Pacific (Seoul) region, and High Memory instances with 18 TiB of memory (u-18tb1.112xlarge) are available in the US East (N. Virginia) and US West (Oregon) regions.
These instances give AWS customers greater flexibility for instance usage and procurement - customers can start using them with On-Demand, Reserved Instance, and Savings Plan purchase options. With u-24tb1 and u-18tb1, customers have a choice of 24 TiB and 18 TiB of memory, respectively - both offering 448 vCPUs, 100 Gbps networking, and 38 Gbps of EBS bandwidth.
These instances are built on the AWS Nitro System, a collection of AWS-designed hardware and lightweight Nitro hypervisor which delivers practically all of the compute and memory resources of the host hardware to your instances. This frees up additional memory for your workloads which boosts performance and lowers the $/GiB costs.
Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments.
Amazon SageMaker adds eight new Graviton-based instances for model deployment
Amazon SageMaker expands access to eight new Graviton2 and Graviton3-based machine learning (ML) instance families so that AWS customers have more options for optimizing their cost and performance when deploying their ML models on SageMaker. Now, customers can use ml.c7g, ml.m6g, ml.m6gd, ml.c6g, ml.c6gd, ml.c6gn, ml.r6g, and ml.r6gd for Real-time and Asynchronous Inference model deployment options.
Amazon SageMaker now supports seven Graviton2-based instance families - ml.m6g, ml.m6gd, ml.c6g, ml.c6gd, ml.c6gn, ml.r6g, and ml.r6gd - powered by AWS Graviton2 processors, which provide customers with up to 40 percent better performance at the same price over comparable fifth-generation x86-based instances for a wide range of workloads. Graviton2-based ML instances are available in all commercial regions. To find out which instance families are available in the region of your choice, please visit the pricing page.
Building on the improvements of Graviton2, Graviton3-based instances deliver up to 25 percent higher performance, up to 2x higher floating-point performance, and 50 percent faster memory access based on leading-edge DDR5 memory technology compared with Graviton2 processors.
Specifically for ML workloads, AWS Graviton3 processors deliver up to 3x better performance compared to AWS Graviton2 processors, including support for bfloat16. Amazon SageMaker now supports the ml.c7g instance family. ml.c7g instances are available in US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland).
Announcing two new HERE map styles for Amazon Location Service
Amazon Location Service adds two new map styles for the base map service from data provider HERE Technologies. HERE Imagery provides high-quality satellite imagery with global coverage, and HERE Hybrid displays the road network, street names, and city labels over the satellite imagery. Amazon Location Service now has a total of 11 map styles to support a wide range of use cases for interactive maps in applications.
The HERE Imagery and Hybrid map styles give developers more options for their asset tracking and routing use cases. In addition to HERE vector styles such as HERE Explore and HERE Explore Truck, developers can now overlay user-defined information on satellite imagery and the road network to support more use cases in delivery and fleet management businesses.
Amazon Location Service is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo).
Amazon Aurora MySQL 2.11 with R6i instance support is generally available
Aurora MySQL 2.11, compatible with MySQL 5.7, is now generally available. Aurora MySQL 2.11 includes security updates and also supports R6i instances powered by 3rd generation Intel Xeon Scalable processors.
R6i instances are the 6th generation of Amazon EC2 memory optimized instances, designed for memory-intensive workloads. R6i instances provide up to 50 Gbps of networking speed, 2x that of R5 instances, and up to 20% higher memory bandwidth per vCPU compared to R5 instances. To meet customer demands for increased scalability, R6i instances provide a new instance size of 32xlarge with 128 vCPUs and 1,024 GiB of memory - 33% more vCPUs and memory than the largest R5 instance.
To upgrade to Aurora MySQL 2.11, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. For more details, see Automatic Minor Version Upgrades for MySQL. You can launch new R6i instances in the Amazon RDS Management Console or using the AWS CLI. For more details, refer to the documentation.
Amazon SageMaker Multi-Model Endpoints now support GPU instances
Amazon SageMaker Multi-Model Endpoint (MME) is a fully managed capability of SageMaker Inference that allows customers to deploy thousands of models on a single endpoint and save costs by sharing the instances on which the endpoint runs across all the models. Until now, MME was only supported for machine learning (ML) models that run on CPU instances. Now, customers can use MME to deploy thousands of ML models on GPU-based instances as well, and potentially save up to 90% in costs.
MME dynamically loads and unloads models from GPU memory based on incoming traffic to the endpoint. Customers save cost with MME as the GPU instances are shared by thousands of models. Customers can run ML models from multiple ML frameworks including PyTorch, TensorFlow, XGBoost, and ONNX. Customers can get started by using the NVIDIA Triton™ Inference Server and deploy models on SageMaker’s GPU instances in “multi-model“ mode. Once the MME is created, customers specify the ML model from which they want to obtain inference while invoking the endpoint.
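A hedged sketch of the multi-model flow on a GPU endpoint; the Triton container image URI, S3 prefix, endpoint names, and model archive name are placeholders.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# One model resource serves many model artifacts under a common S3 prefix.
sm.create_model(
    ModelName="mme-gpu-demo",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
    PrimaryContainer={
        "Image": "<triton-inference-server-image-uri>",  # placeholder
        "Mode": "MultiModel",
        "ModelDataUrl": "s3://my-example-bucket/models/",
    },
)

# After create_endpoint_config with a GPU instance type (e.g. ml.g4dn.xlarge)
# and create_endpoint, pick the model per request at invocation time:
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")
response = runtime.invoke_endpoint(
    EndpointName="mme-gpu-demo-endpoint",
    TargetModel="resnet50.tar.gz",  # which archive under the S3 prefix to use
    ContentType="application/octet-stream",
    Body=b"...",
)
```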
AWS Batch now supports Amazon Elastic Kubernetes Service (Amazon EKS)
This week, AWS Batch introduced support for Amazon Elastic Kubernetes Service (Amazon EKS), enabling customers to run their jobs on Amazon EKS clusters as Kubernetes pods. AWS Batch manages the scaling of Kubernetes nodes and the placement of pods, and supports job execution using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon EC2 Spot. Furthermore, Batch respects other workloads on these EKS clusters and will not place jobs on non-Batch nodes.
AWS Batch has optimized the experience of running batch workloads at scale reliably and efficiently for years and now extends those capabilities to Amazon EKS customers. AWS Batch simplifies execution of batch workloads on EKS clusters by providing fully managed batch capabilities such as queueing, dependency tracking, managing job retries and priorities, pod management, and node scaling.
AWS Batch is designed to handle multiple availability zones, multiple Amazon EC2 instance types and sizes, and integrates with Amazon EC2 Spot best practices to run your workloads in a fault-tolerant manner with low interruption rates. You can use AWS Batch to run a handful of overnight jobs, or millions of mission-critical jobs, with confidence that Batch will help you manage them with speed and efficiency.
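A hedged sketch of pointing a Batch compute environment at an existing EKS cluster; ARNs, the namespace, and subnet IDs are placeholders, and additional networking settings may be required in practice.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

batch.create_compute_environment(
    computeEnvironmentName="eks-batch-ce",
    type="MANAGED",
    eksConfiguration={
        "eksClusterArn": "arn:aws:eks:us-east-1:123456789012:cluster/my-cluster",
        "kubernetesNamespace": "aws-batch",  # Batch places its pods here
    },
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 256,
        "instanceTypes": ["m5"],
        "subnets": ["subnet-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)
```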
AWS Resource Access Manager is now available in the Middle East (UAE) Region
You can now use AWS Resource Access Manager (AWS RAM) in the AWS Middle East (UAE) Region.
AWS RAM helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), or with AWS Identity and Access Management (IAM) roles and users for supported resource types.
CDK For Kubernetes (CDK8s) announces general availability of CDK8s+ and manifest validation support
CDK For Kubernetes Plus (CDK8s+) is a multi-language class library for defining Kubernetes applications using high-level, intent-based constructs. Customers defining Kubernetes applications indicate that the maintainability of Kubernetes manifests is challenging; CDK8s+ aims to lower the entry barrier and improve maintainability of Kubernetes manifests by offering a hand-crafted construct for each core Kubernetes object, exposing a richer API with reduced complexity.
With this launch, CDK8s+ is now generally available and stable for use. This means that the API will remain unchanged and fully supported (no breaking changes), at least until the next major version. CDK8s+ is vended as a separate library for each Kubernetes spec version, and all of those libraries are now generally available and stable to use.
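To show the intent-based style, here is a hedged sketch of a Deployment exposed via a Service using the Python flavor of the Kubernetes 1.22 library; method and argument names may vary slightly between cdk8s-plus spec versions.

```python
from cdk8s import App, Chart
import cdk8s_plus_22 as kplus  # the spec-versioned library for Kubernetes 1.22

app = App()
chart = Chart(app, "web-app")

# One high-level construct per core Kubernetes object.
deployment = kplus.Deployment(chart, "deployment", replicas=2)
deployment.add_container(image="nginx:1.23", port=80)
deployment.expose_via_service()  # synthesizes a Service targeting the pods

app.synth()  # writes the Kubernetes manifests to dist/
```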
Users also want to validate their manifests against community or organizational policies. CDK8s now supports integration with third-party tools that facilitate this and can perform validation as part of the synthesis process, helping ensure that manifests produced by CDK8s adhere to the necessary policies.
Amazon CloudWatch RUM adds custom metadata attributes for RUM events
Amazon CloudWatch RUM (Real User Monitoring) adds the ability for customers to include additional customer-defined metadata attributes as key-value pairs on RUM events when instrumenting their web applications. Additionally, customers can now use these custom attributes as an additional filter when slicing and dicing the data in the AWS Management Console. Combined with the pre-defined metadata attributes (e.g. browser, device, country) that RUM supports today, customers get a better classification of different end-user activities.
CloudWatch RUM gives customers visibility into their web application’s client side performance by collecting performance and error data in real time. RUM reduces MTTR by providing visualizations that allow customers to troubleshoot and debug issues such as high page load times, error messages and stack traces. Customers can add metadata attributes in the events payload using a new API in the CloudWatch RUM Web Client (version 1.10.0).
AWS Batch increases compute and memory resource configurations for Fargate type jobs by 4X
AWS Batch customers can now submit Fargate type jobs that use up to 16 vCPUs, an approximately 4x increase from before. vCPUs are the primary compute resource in Fargate type Batch jobs, and larger vCPU counts enable compute-heavy applications like machine learning inference, scientific modeling, and distributed analytics to run more easily on Fargate.
In addition, customers can now provision up to 120 GiB of memory for Fargate type jobs, also a 4x increase from before. This helps their batch jobs better perform memory-intensive operations on Fargate. Larger vCPU and memory options may also make migration to serverless container compute simpler for jobs that need more compute resources and cannot be easily re-architected into smaller sized containers.
To run a Fargate type Batch job with the increased vCPU and memory configurations, simply create a new job definition with the new requirements, or submit a job with requirement overrides against an existing job definition. Customers can run these jobs in Batch Fargate On-Demand or Fargate Spot compute environments.
The new compute and memory resource configurations for Fargate type jobs are available in all AWS Regions where AWS Batch is currently available. Running jobs with increased vCPU and memory configurations requires the use of Fargate's vCPU-based Service Quotas; learn more about vCPU-based Service Quotas on the FAQ page.
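A sketch of a Fargate job definition requesting the new upper bounds of 16 vCPUs and 120 GiB (122880 MiB); the image and role ARN are placeholders.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

batch.register_job_definition(
    jobDefinitionName="big-fargate-job",
    type="container",
    platformCapabilities=["FARGATE"],
    containerProperties={
        "image": "public.ecr.aws/docker/library/python:3.10",
        "command": ["python", "-c", "print('hello')"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "16"},       # new 4x maximum
            {"type": "MEMORY", "value": "122880"}, # 120 GiB in MiB
        ],
        "executionRoleArn": "arn:aws:iam::123456789012:role/MyEcsTaskExecutionRole",
        "networkConfiguration": {"assignPublicIp": "ENABLED"},
    },
)
```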
Amazon Aurora now supports T4g instances in AWS GovCloud (US) Regions
Amazon Aurora now supports AWS Graviton2-based T4g database instances in the AWS GovCloud (US) Regions. T4g database instances deliver a performance improvement of up to 49% over comparable current generation x86-based database instances. You can launch these database instances when using Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition.
T4g instances provide a baseline level of CPU performance, with the ability to burst CPU usage at any time, for as long as required. They offer a balance of compute, memory and network resources, and are ideal for database workloads with moderate CPU usage that experience temporary spikes in use. Amazon Aurora T4g instances are configured for Unlimited Mode, which means they can burst beyond the baseline over a 24-hour window for an additional charge.
You can launch new instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to Graviton2 requires a simple instance type modification, using the same steps as any other instance modification. The T4g database instances are supported on Aurora MySQL 2.09.3 and higher, 2.10.0 and higher, 3.01.0 and higher, and on Aurora PostgreSQL 14.3 and higher, 13.3 and higher, 12.4 and higher, and 11.9 and higher versions.
Amazon Data Lifecycle Manager now supports Amazon EBS Snapshots Archive
Amazon Elastic Block Store (EBS) Snapshots Archive helps customers save up to 75% on storage costs for Amazon EBS Snapshots that they rarely access and intend to retain for more than 90 days.
Amazon EBS Snapshots are incremental in nature, storing only the changes since the last snapshot. This makes them cost-effective for daily and weekly backups that need to be accessed frequently. If you have snapshots that you access every few months or years, and would like to retain them long-term for legal or compliance reasons, you can use Amazon EBS Snapshots Archive to store full, point-in-time snapshots at a lower cost than what you would incur if stored in the standard tier.
You can also use Amazon Data Lifecycle Manager to create snapshots and automatically move them to EBS Snapshots Archive based on your specific policies, further reducing the need to manage complex custom scripts and the risk of having unattended storage costs.
Snapshots in the Amazon EBS Snapshots Archive tier have a minimum retention period of 90 days. When you archive a snapshot, a full snapshot archive is created that contains all the data needed to create your Amazon EBS Volume. To create a volume from the snapshot archive, you can restore the snapshot archive to the standard tier, and then create an Amazon EBS volume from the snapshot in the same way you do today.
You can now automate creating snapshots and moving them to the archive tier with Amazon Data Lifecycle Manager policies at no additional cost. After a period of time that you specify, any policy with Snapshots Archive enabled will automatically move snapshots to the archive tier. Finally, Data Lifecycle Manager will automatically delete the snapshots at the end of their retention period.
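For one-off moves outside a lifecycle policy, here is a minimal sketch of archiving a snapshot and restoring it later; the snapshot ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Move a rarely accessed snapshot to the archive tier (90-day minimum).
ec2.modify_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    StorageTier="archive",
)

# Later, temporarily restore it to the standard tier for 5 days so a
# volume can be created from it.
ec2.restore_snapshot_tier(
    SnapshotId="snap-0123456789abcdef0",
    TemporaryRestoreDays=5,
)
```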
Amazon Cognito now provides user pool deletion protection
You can now activate deletion protection for your Amazon Cognito user pools. When you configure a user pool with deletion protection, the pool cannot be deleted by any user. Deletion protection is now active by default for new user pools created through the AWS Console.
You can activate or deactivate deletion protection for an existing user pool in the AWS Console, the AWS Command Line Interface, and API. Deletion protection prevents you from requesting the deletion of a user pool unless you first modify the pool and deactivate deletion protection.
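A minimal sketch of activating the protection on an existing pool; the pool ID is a placeholder, and since update_user_pool applies the full configuration, in practice you would carry over your existing settings.

```python
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

cognito.update_user_pool(
    UserPoolId="us-east-1_EXAMPLE",
    DeletionProtection="ACTIVE",
)

# With protection active, delete_user_pool fails until DeletionProtection
# is first set back to "INACTIVE".
```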
Amazon S3 Replication now supports SSE-C encrypted objects
Amazon S3 Replication now supports objects encrypted with server-side encryption with customer-provided keys (SSE-C). SSE-C is an encryption option that allows you to store your own encryption keys to satisfy compliance or security requirements, rather than having AWS store the keys on your behalf using SSE-S3 or SSE-KMS.
Now you can automatically replicate your SSE-C encrypted objects to a secondary bucket for your data protection or multi-region resiliency needs. S3 Replication will automatically replicate newly uploaded SSE-C encrypted objects if they are eligible, as per your S3 Replication configurations. To replicate existing SSE-C objects, you can use S3 Batch Replication. To retrieve a replicated SSE-C encrypted object from S3, you supply the same key used to encrypt that object when it was initially uploaded to S3.
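A minimal sketch of the SSE-C round trip: the same customer-provided key must accompany every request, including reads of replicated copies in the destination bucket. Bucket and key names are placeholders.

```python
import os
import boto3

s3 = boto3.client("s3")
key = os.urandom(32)  # 256-bit customer-provided key; store it safely

s3.put_object(
    Bucket="my-source-bucket",
    Key="records/data.bin",
    Body=b"sensitive-bytes",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,  # boto3 handles the base64 encoding and MD5 digest
)

obj = s3.get_object(
    Bucket="my-source-bucket",
    Key="records/data.bin",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,  # required again - AWS never stores this key
)
print(obj["Body"].read())
```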
Amazon S3 Replication is an elastic, fully managed, low-cost way to replicate objects between buckets, giving you the control you need to meet your data protection or multi-region resiliency needs. You can configure S3 Replication to automatically replicate S3 objects in the same AWS Region or across different AWS Regions.
You have the flexibility to replicate to multiple destination buckets, and to replicate bi-directionally between buckets. If you need a predictable replication time, you can use Replication Time Control (RTC). S3 RTC is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds.
S3 RTC is backed by a Service Level Agreement (SLA) with a commitment to replicate 99.9% of objects within 15 minutes during any billing month.
Amazon S3 Replication support for SSE-C encrypted objects is available in all AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions.
Anthos Clusters on VMware
A new vulnerability, CVE-2022-3176, has been discovered in the Linux kernel that can lead to local privilege escalation. This vulnerability allows an unprivileged user to achieve full container breakout to root on the node.
For instructions and more details, see the Anthos clusters on VMware security bulletin.
Anthos clusters on VMware 1.12.3-gke.23 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.3-gke.23 runs on Kubernetes 1.23.8-gke.1900.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
Known issues: the NetworkGatewayGroup object may erroneously report nodes as having NotHealthy status, and NetworkGatewayGroup objects can fail because of a webhook IP conflict error.
Fixed the following vulnerabilities:
High-severity container vulnerabilities:
Critical container vulnerabilities:
Anthos Config Management
Added the spec.helm.values field in RootSync and RepoSync to allow overriding the default values that accompany the Helm chart.
The constraint template library includes a new template, K8sBlockLoadBalancer. For reference, see Constraint template library.
The constraint template library's K8sHttpsOnly template now supports Ingress blocks which do not include tls: using the new tlsOptional: true parameter. For reference, see Constraint template library.
Policy Controller has been updated to include a more recent build of OPA Gatekeeper (hash: 600a68d).
Config Sync now handles exporting metrics correctly with the right permissions and resource names after the update to Open Telemetry v0.54.0 which was introduced in ACM 1.12.2.
Fixed a Prometheus exporter error in the otel-collector by resolving a discrepancy between components regarding the description of the pipeline_error_observed metric.
Config Sync is not compatible with Autopilot starting from GKE version 1.23. To use Config Sync on an Autopilot cluster, ensure the GKE version is 1.22 or earlier.
Anthos Service Mesh
1.15.2-asm.6 is now available.
Anthos Service Mesh 1.15.2-asm.6 includes the features of Istio 1.15.2 subject to the list of Anthos Service Mesh supported features.
Docker images for Anthos Service Mesh v1.15 and later now also support the Arm architecture.
Anthos Service Mesh now supports configuring Mesh CA and Google CA Service connectivity through an HTTPS proxy when direct connectivity from the sidecar-injected workloads is not available (for example, due to firewalls or other restrictive features). See Configure Certificate Authority connectivity through a proxy for more information.
Anthos Service Mesh 1.12 is no longer supported. For more information, see Supported versions.
Managed Anthos Service Mesh 1.15 isn't rolling out to the rapid release channel at this time. You can periodically check this page for the announcement of the rollout of Managed Anthos Service Mesh to the rapid channel. See Select a managed Anthos Service Mesh release channel for more information.
1.14.5-asm.3 is now available.
Anthos Service Mesh 1.14.5-asm.3 includes the features of Istio 1.14.5 subject to the list of Anthos Service Mesh supported features.
1.13.9-asm.1 is now available.
Anthos Service Mesh 1.13.9-asm.1 includes the features of Istio 1.13.9 subject to the list of Anthos Service Mesh supported features.
Apigee API Hub
A link to the Settings page has been added to the APIs list page.
Provisioning API hub using the UI failed if you selected a region other than the following: asia-east1, asia-southeast1, europe-west1, europe-west4, us-central1, us-east1, us-west1, us-west4.
Apigee X
This release contains the General Availability (GA) release of Advanced API Security.
Advanced API Security is a paid add-on to Apigee. You can try out Advanced API Security for free in any trial org—follow the procedure described in Enable Advanced API Security. Contact Apigee to learn more.
On October 24, 2022, GCP released an updated version of Apigee X (1-9-0-apigee-5).
BigQuery
Search indexes and the SEARCH() function are now generally available (GA). These enable you to use Google Standard SQL to efficiently pinpoint specific data elements in unstructured text and semi-structured data.
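As a quick illustration, here is a hedged sketch with the google-cloud-bigquery client that builds a search index and then filters rows with SEARCH(); the dataset, table, index name, and search term are hypothetical.

```python
# Sketch: creating a search index and querying it with SEARCH().
from google.cloud import bigquery

bq = bigquery.Client()

# A search index accelerates SEARCH() lookups over string columns.
bq.query(
    "CREATE SEARCH INDEX IF NOT EXISTS logs_idx "
    "ON my_dataset.app_logs (ALL COLUMNS)"
).result()

# SEARCH() returns TRUE for rows whose indexed text matches the term.
rows = bq.query(
    "SELECT * FROM my_dataset.app_logs AS app_logs "
    "WHERE SEARCH(app_logs, 'error-code-5xx')"
).result()
for row in rows:
    print(dict(row))
```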
The following geography functions are now generally available (GA):
ST_ISCLOSED: Returns TRUE for a non-empty geography, where each element in the geography has an empty boundary.
ST_ISRING: Checks if a geography is a linestring and if the linestring is both closed and simple (see the example sketch below).
You can now view BI Engine Top Tables Cached Bytes, BI Engine Query Fallback Count, and Query Execution Count as dashboard metrics for BigQuery. This feature is now in preview.
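Here is a small sketch exercising both of the newly GA geography functions above with the google-cloud-bigquery client; the WKT linestring is illustrative.

```python
# Sketch: trying ST_ISCLOSED and ST_ISRING on a closed, simple linestring.
from google.cloud import bigquery

bq = bigquery.Client()
sql = """
SELECT
  ST_ISCLOSED(line) AS is_closed,  -- every element has an empty boundary
  ST_ISRING(line)   AS is_ring     -- linestring that is closed AND simple
FROM (
  SELECT ST_GEOGFROMTEXT('LINESTRING(0 0, 1 0, 1 1, 0 0)') AS line
)
"""
for row in bq.query(sql).result():
    print(row.is_closed, row.is_ring)  # expected: True True
```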
Cloud Data Fusion
Cloud Data Fusion version 6.7.2 is generally available (GA). This release is in parallel with the CDAP 6.7.2 release.
In Cloud Data Fusion version 6.7.2, the default machine type changed from N2 to E2.
Fixed in 6.7.2: a NullPointerException error that occurred when table metrics were updated or when the output schema was not defined.
Cloud Functions
Cloud Functions now supports the .NET Core 6.0 runtime at the General Availability release level.
Cloud Logging
You can now instrument gRPC applications to use Microservices observability.
Pricing for Microservices observability is the same as Cloud Operations Pricing. There are no separate charges for using Cloud Trace, Cloud Monitoring, or Cloud Logging Microservices observability plugins.
Cloud Monitoring
You can now instrument gRPC applications to use Microservices observability.
Pricing for Microservices observability is the same as Cloud Operations Pricing. There are no separate charges for using Cloud Trace, Cloud Monitoring, or Cloud Logging Microservices observability plugins.
A new version of Managed Service for Prometheus is now available. Version 0.5.0 of managed collection for Kubernetes has been released. Users who deploy managed collection using kubectl should reapply the manifests; users who deploy the service using gcloud or the GKE UI will be upgraded on a rolling basis over the coming weeks. This release has no impact on users of self-deployed collection.
For details about the changes included, see the release page on GitHub.
Cloud Storage
Bucket tags are now generally available (GA).
Cloud Trace
You can now instrument gRPC applications to use Microservices observability.
Pricing for Microservices observability is the same as Cloud Operations Pricing. There are no separate charges for using Cloud Trace, Cloud Monitoring, or Cloud Logging Microservices observability plugins.
Compute Engine
Generally available: Compute Engine flexible committed use discounts (flexible CUDs) are spend-based discounts that remove the need to restrict your commitments to a single project, region, or machine series. You purchase flexible commitments by committing to a minimum hourly spend on vCPUs and/or memory, usable in any project within your Cloud Billing account, across any region, and on any eligible general-purpose or compute-optimized machine type.
Learn more about flexible CUDs and how to purchase flexible commitments.
Dataproc
All Dataproc Serverless for Spark runtime versions prior to 1.0.21 and 2.0.1 will be deprecated on November 2, 2022.
Dataproc Serverless for Spark runtime version 2.0 will become the default Dataproc Serverless for Spark runtime version on December 13, 2022.
Dataproc Serverless for Spark now supports the spark.dataproc.diagnostics.enabled property, which enables auto diagnostics on batch failure. Note that enabling auto diagnostics holds compute and storage quota after the batch completes, until diagnostics finish.
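A minimal sketch of submitting a batch with the property set, using the google-cloud-dataproc client; the project, region, batch ID, and job URI are placeholders.

```python
# Sketch: a Dataproc Serverless batch with auto diagnostics enabled.
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.BatchControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

batch = dataproc_v1.Batch(
    pyspark_batch=dataproc_v1.PySparkBatch(
        main_python_file_uri="gs://my-bucket/job.py"  # hypothetical job file
    ),
    runtime_config=dataproc_v1.RuntimeConfig(
        # Enables auto diagnostics on batch failure; this holds compute
        # and storage quota until diagnostics complete.
        properties={"spark.dataproc.diagnostics.enabled": "true"}
    ),
)

operation = client.create_batch(
    parent=f"projects/my-project/locations/{region}",
    batch=batch,
    batch_id="example-batch",
)
print(operation.result().state)
```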
Google Cloud Armor
Default security policies are now Generally Available. You can configure a default rate-limiting security policy when you use the Google Cloud Console to set up your load balancer. For more information, see the Rate limiting overview.
IAM
Deny policies are generally available (GA). Use deny policies to prevent principals from using certain permissions, regardless of the roles they're granted.
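For illustration, here is a sketch using the IAM v2 client library (google-cloud-iam); the project, principal, and permission are placeholders, and the attachment point must be URL-encoded as shown.

```python
# Sketch: a deny policy blocking one principal from deleting a project.
from google.cloud import iam_v2
from google.cloud.iam_v2 import types

client = iam_v2.PoliciesClient()

deny_rule = types.DenyRule(
    denied_principals=["principal://goog/subject/user@example.com"],
    # Deny-policy permissions use the service-qualified form.
    denied_permissions=["cloudresourcemanager.googleapis.com/projects.delete"],
)

policy = types.Policy(rules=[types.PolicyRule(deny_rule=deny_rule)])

# The attachment point (here, a project) is URL-encoded in the parent.
parent = (
    "policies/cloudresourcemanager.googleapis.com%2Fprojects%2Fmy-project"
    "/denypolicies"
)
operation = client.create_policy(
    request=types.CreatePolicyRequest(
        parent=parent, policy=policy, policy_id="example-deny"
    )
)
print(operation.result().name)
```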
Retail API
Recording Google Analytics 4 user events to the Retail API is available in GA. If you have integrated Google Analytics 4 for your user events, you can record the user event data in Google Analytics 4 format directly to the Retail API.
To use this feature, see the Record user events with Google Analytics 4 documentation.
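As a rough sketch, the snippet below writes a standard user event with the google-cloud-retail client; the GA4-format payload specifics are covered in the linked documentation, and the IDs below are illustrative.

```python
# Sketch: writing a user event to the Retail API.
from google.cloud import retail_v2

client = retail_v2.UserEventServiceClient()

event = retail_v2.UserEvent(
    event_type="detail-page-view",
    visitor_id="visitor-123",  # hypothetical visitor ID
    product_details=[
        retail_v2.ProductDetail(
            product=retail_v2.Product(id="sku-42")  # hypothetical product
        )
    ],
)

parent = "projects/my-project/locations/global/catalogs/default_catalog"
written = client.write_user_event(
    request=retail_v2.WriteUserEventRequest(parent=parent, user_event=event)
)
print(written.event_type)
```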
A/B experiment traffic monitoring for Retail Search is available in private preview. See the documentation for A/B experiment monitoring.
A/B experiments compare key metrics between the Retail API and your existing search implementation. After setting up an experiment and its traffic splitting, you can monitor experiment traffic using the Retail console. In the console, you create variant arms that map to each experiment group that you created for the A/B experiment. This allows you to check whether the actual traffic matches the intended traffic split of your experiment. Traffic monitoring can help you determine if differences in traffic are due to a quality gap between services or an incorrect experiment setup.
To use A/B experiment traffic monitoring in private preview, contact Retail Support.
Traffic Director
Traffic Director deployment with automatic Envoy injection for Google Kubernetes Engine Pods currently installs Envoy version 1.20.0.
Vertex AI
Vertex AI Prediction
You can now use E2 machine types to serve predictions.
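A minimal deployment sketch with the google-cloud-aiplatform SDK; the model ID and the specific E2 shape (e2-standard-4) are assumptions to verify against the list of supported machine types.

```python
# Sketch: deploying a model for online prediction on an E2 machine type.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("1234567890")  # hypothetical model ID
endpoint = model.deploy(
    machine_type="e2-standard-4",  # assumed E2 shape; confirm availability
    min_replica_count=1,
)
print(endpoint.resource_name)
```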
Vertex AI Workbench
The v1beta1 version of the Notebooks API is scheduled for removal no earlier than January 16, 2023. After this date, you must use Notebooks API v1 to manage Vertex AI Workbench resources.
VPC Service Controls
General availability for the following integration:
Workflows
Eventarc event-triggered requests are limited by the Workflows executions API write requests quota. Events that exceed the limit follow the Eventarc retry policy.
Support for limiting the maximum number of concurrent branches or iterations within a parallel step is available in Preview.
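To sketch what that looks like, the snippet below deploys a workflow whose parallel step caps concurrent loop iterations via concurrency_limit (field name per the Workflows docs); the project and workflow IDs are placeholders.

```python
# Sketch: a workflow with a parallel for-loop limited to two
# concurrent iterations, deployed with google-cloud-workflows.
from google.cloud import workflows_v1

source = """
main:
  steps:
    - fanOut:
        parallel:
          concurrency_limit: 2   # at most two iterations run at once
          for:
            value: item
            in: [1, 2, 3, 4, 5]
            steps:
              - logIt:
                  call: sys.log
                  args:
                    text: ${string(item)}
"""

client = workflows_v1.WorkflowsClient()
operation = client.create_workflow(
    request=workflows_v1.CreateWorkflowRequest(
        parent="projects/my-project/locations/us-central1",
        workflow=workflows_v1.Workflow(source_contents=source),
        workflow_id="parallel-demo",
    )
)
print(operation.result().name)
```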
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
Public preview: SAP S/4HANA events are now available on Azure Event Grid
Integrate your apps using Event Grid and SAP S/4HANA events.
General availability: Azure Sphere OS version 22.10
This quality release includes bug fixes in the Azure Sphere OS.
Execute high-volume SMS campaigns in seconds with easy, automated short code functionality.
Azure SQL General availability updates for late October 2022
General availability enhancements and updates released for Azure SQL in late October 2022.
Public preview: Azure Load Testing supports authenticating with client certificates
Azure Load Testing now enables you to authenticate to application endpoints which require a client certificate for authentication.
Azure SQL Public preview updates for late October 2022
Public preview enhancements and updates released for Azure SQL in late October 2022.
Public preview: Read replicas for Azure Database for PostgreSQL Flexible Server
Read replicas are used to offload read workloads and for disaster recovery scenarios in Azure Database for PostgreSQL – Flexible Server, a managed service running the open source Postgres database.
Public preview: Improved passive geo-replication for Azure Cache for Redis
Utilize new functionality that makes passive geo-replication more seamless and transparent.
General availability: Azure Cosmos DB for MongoDB data plane RBAC
Create users and roles and configure fine-grained access to your database account data for Azure Cosmos DB for MongoDB data plane.
Public preview: Azure CNI Powered by Cilium
Leverage next generation eBPF dataplane for pod networking, Kubernetes network policies and service load balancing.
Public preview: Azure Synapse Link for Azure Cosmos DB Gremlin API
Unlock near real time analytics on your graph data with Azure Synapse Link for Azure Cosmos DB Gremlin API.
Public preview: Mariner container optimized OS
Get reliability and consistency from cloud to edge across AKS with the Mariner container-optimized OS.
Public preview: Azure Load Testing is now HITRUST certified
Azure Load Testing is now HITRUST certified.
Public preview: K8s 1.25 support
You can now benefit from the latest features in Kubernetes version 1.25.
Azure Container Apps now supports exposing container apps that use a TCP-based protocol other than HTTP or HTTPS.
Public preview: Azure Load Testing in West Europe
Azure Load Testing is in public preview in West Europe.
Public preview: New metrics capabilities in OpenTelemetry-based Application Insights
Use OpenTelemetry vendor-neutral APIs to power distributed tracing and metrics experiences in Azure Monitor Application Insights.
Public preview: V2 programming model for Azure Functions using Python
Azure Functions V2 programming model for development in Python offers several enhancements.
Generally available: CSI Extensible API for AKS
The CSI Extensible API is now generally available in AKS.
Dapr extension for AKS and Arc-enabled Kubernetes now support Dapr v1.9.0
Take advantage of the latest features in Dapr v1.9.0 with Dapr extension for AKS and Arc-enabled Kubernetes.
Public preview: ASO makes it easy to manage database and connection
Azure Service Operator (ASO) now has an integration with Workload Identity.
Public preview: IPVS load balancer support in AKS
You can now use the IP Virtual Server (IPVS) load balancer with AKS, with configurable connection scheduling and TCP/UDP timeouts.
Public preview: AKS image cleaner
You can now more easily remove unused and vulnerable images stored on AKS nodes.
Generally available: Premium SSD v2 disks available on Azure Disk CSI driver
Premium SSD v2 support is now generally available on AKS.
Public preview: Vertical Pod Autoscaler
Vertical Pod Autoscaler can help you reduce operational overhead, increase savings and improve stability of your cluster.
Generally available: AKS support for Ubuntu 22.04
AKS will start using Ubuntu 22.04 instead of Ubuntu 18.04 as the Ubuntu version for node pools.
Generally available: Azure Storage — Attribute-based access control for standard storage accounts
ABAC for Azure Storage Blobs, ADLS Gen2, and queues is now generally available and can be used for access control by defining conditions on role-assignments based on resource and request attributes.
Generally available: Custom IP Prefixes (BYOIP) now available in US Government regions
The ability to bring your own public IP ranges is now available in all US Government regions. Additionally, you can now bring your own IPv6 ranges to use as Public IP Prefixes.
Public preview: Availability zone volume placement for Azure NetApp Files
Deploy new Azure NetApp Files volumes in Azure availability zones (AZs) of your choice to support workloads across multiple availability zones.
Have you tried Hava's automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):