This week's roundup of all the cloud news.
Here's a cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday October 14th 2022.
You can find out more about the plan details and new flexible pricing here: https://www.hava.io/blog/pricing-and-plan-updates
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
AWS Updates and Releases
Source: aws.amazon.com
Monitor Amazon EMR Serverless jobs in real-time with native Spark and Hive Tez UI
AWS are excited to announce that you can now monitor and debug jobs in EMR Serverless using native Apache Spark & Hive Tez UIs. The Apache Spark & Hive Tez UIs present visual interfaces with detailed information about your running and completed jobs. You can dive into job-specific metrics and information about event timelines, stages, tasks, and executors for each job.
After you submit a job to an EMR Serverless application, you can view the real-time Spark UI or the Hive Tez UI for the running job from the EMR Studio console or request a secure URL using the GetDashboardForJobRun API. For completed jobs, you can view the Spark History Server or the Persistent Hive Tez UI from the EMR Studio Console.
To learn more and get started, please see the Monitoring EMR Serverless applications and jobs documentation in the Amazon EMR Serverless User Guide. Real-time job monitoring is available at no additional charge in all the regions where Amazon EMR Serverless is available.
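For teams that prefer the API route, here is a rough sketch (using boto3, with placeholder application and job run IDs) of requesting a secure dashboard URL via the GetDashboardForJobRun API mentioned above; treat the response shape as an assumption to verify against the SDK documentation:

```python
import boto3

# Hypothetical identifiers for illustration only
APPLICATION_ID = "00example1234"
JOB_RUN_ID = "00examplejobrun"

emr_serverless = boto3.client("emr-serverless")

# Request a pre-signed, time-limited URL for the Spark UI / Hive Tez UI
# of a running (or completed) job run.
response = emr_serverless.get_dashboard_for_job_run(
    applicationId=APPLICATION_ID,
    jobRunId=JOB_RUN_ID,
)

# The response is expected to contain a "url" field (assumption to verify).
print("Dashboard URL:", response.get("url"))
```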
Amazon Athena announces upgraded query engine
Amazon Athena has upgraded its SQL query engine to include the latest features from the Trino open source project. Athena engine version 3 includes over 50 new SQL functions, 30 new features, and more than 90 query performance improvements. With today’s launch, Athena is also introducing a continuous integration approach to open source software management that will improve currency with the Trino and Presto projects so that you get faster access to community improvements, integrated and tuned within the Athena engine.
Athena engine version 3 includes all the features of version 2 while bringing numerous enhancements, such as T-Digest functions that can be used to approximate rank-based statistics with high accuracy, new Geospatial functions to run optimized Geospatial queries, and new query syntaxes such as MATCH_RECOGNIZE for identifying data patterns in applications such as fraud detection and sensor data analysis. With our simplified engine upgrade process, you can configure existing workgroups to be automatically upgraded to engine version 3 without requiring manual review or intervention.
You can start using the new Athena engine today by creating or configuring a workgroup and selecting the recommended “Athena Engine Version 3”. For more details about selecting engine versions, see Changing Athena engine versions. To learn more about the features of Athena engine version 3, see Athena engine version 3 reference and Upgrade to Athena engine version 3 to increase query performance and access more analytics features.
Athena engine version 3 is available today in all regions supporting Athena except for AWS China (Beijing) region, operated by Sinnet and AWS China (Ningxia) region, operated by NWCD.
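If you want to opt a workgroup into the new engine programmatically rather than through the console, a minimal boto3 sketch might look like the following; the workgroup name is hypothetical, and the exact engine version string should be confirmed against the Athena documentation:

```python
import boto3

athena = boto3.client("athena")

# Create a workgroup pinned to the new engine version. "Athena engine version 3"
# is the selected-engine string shown in the console; treat the exact value as
# an assumption and confirm it against the docs.
athena.create_work_group(
    Name="engine-v3-workgroup",  # hypothetical workgroup name
    Configuration={
        "EngineVersion": {
            "SelectedEngineVersion": "Athena engine version 3",
        },
    },
    Description="Workgroup using Athena engine version 3",
)
```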
AWS Gateway Load Balancer launches new option to rebalance flows when target fails or deregisters
This week AWS launched a new feature that provides an option to define flow handling behavior for AWS Gateway Load Balancer. Using this option, customers can now rebalance existing flows to a healthy target, when the target fails or deregisters. This helps reduce failover time when a target becomes unhealthy, and also allows customers to gracefully patch or upgrade the appliances during maintenance windows.
This feature uses the existing ELB API/Console and provides new attributes to specify the flow handling behavior. See the documentation on how to use this capability. Since this feature changes flow behavior, customers should evaluate the effect of enabling this feature on availability and check with their third-party appliance provider documentation.
AWS appliance partners should consider taking the following actions:
- Validate whether rebalancing existing flows to a healthy target has implications for their appliance, since the target will start receiving the flow midway, i.e. without the TCP SYN.
- Update public documentation to describe how this feature affects their appliance.
- Optionally use this capability to improve stateful flow handling on their appliances.
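As a rough illustration of setting the new flow handling behavior programmatically, the sketch below uses boto3 to set target failover attributes on a Gateway Load Balancer target group; the attribute key names and values are assumptions based on the announcement, so confirm them in the ELB documentation before relying on them:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group ARN of the Gateway Load Balancer appliance fleet
# (placeholder value for illustration).
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/example"

# Attribute keys below are assumptions based on the announcement;
# verify the exact names and allowed values in the ELB documentation.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "target_failover.on_deregistration", "Value": "rebalance"},
        {"Key": "target_failover.on_unhealthy", "Value": "rebalance"},
    ],
)
```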
AWS Cloud Map is available in Middle East (UAE) AWS Region
AWS Cloud Map is now available in the Middle East (UAE) AWS Region. AWS Cloud Map is a cloud resource discovery service. With AWS Cloud Map, you can define custom names for your application resources, such as Amazon Elastic Container Service (Amazon ECS) tasks, Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB tables, or other cloud resources. You can then use these custom names to discover the location and metadata of cloud resources from your applications using the AWS SDK and authenticated API queries.
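As a quick illustration of the discovery flow described above, here is a minimal boto3 sketch that queries Cloud Map in the new Region; the namespace and service names are placeholders, and me-central-1 is assumed to be the UAE Region code:

```python
import boto3

# Cloud Map discovery runs against the regional endpoint; the Region code and
# the namespace/service names below are assumptions for illustration.
servicediscovery = boto3.client("servicediscovery", region_name="me-central-1")

response = servicediscovery.discover_instances(
    NamespaceName="example.local",
    ServiceName="payments",
)

# Print the registered instances and their metadata attributes.
for instance in response.get("Instances", []):
    print(instance["InstanceId"], instance.get("Attributes"))
```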
Amazon Aurora supports PostgreSQL 14.4 version
Following the announcement of updates to the PostgreSQL database by the open source community, AWS has updated Amazon Aurora PostgreSQL-Compatible Edition to support PostgreSQL 14.4. PostgreSQL 14.4 fixes an issue that could cause silent data corruption when using the CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY commands. Refer to the Aurora version policy to help you to decide how often to upgrade and how to plan your upgrade process. Aurora has also included the fix for this issue in Aurora PostgreSQL 14.3.
You can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the "Auto minor version upgrade" option when creating or modifying a DB cluster. By doing so, your DB cluster is automatically upgraded after AWS tests and approves the new version. For more details, see Automatic Minor Version Upgrades for PostgreSQL. Please review the Aurora documentation to learn more. For the full feature parity list, head to our feature parity page, and to see all the regions that support Amazon Aurora, head to our region page.
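For reference, a manual minor version upgrade of a cluster can also be kicked off from the SDK. The sketch below (boto3, with a placeholder cluster identifier and version string) shows one way it might look:

```python
import boto3

rds = boto3.client("rds")

# Manually initiate the minor version upgrade on an Aurora PostgreSQL cluster.
# The cluster identifier and target version string are placeholders; confirm
# the exact engine version naming for your cluster before applying.
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-postgres-cluster",
    EngineVersion="14.4",
    ApplyImmediately=True,  # or defer the change to the next maintenance window
)
```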
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
Improve availability of SAP HANA systems on AWS with Host Auto-Failover
You can now deploy SAP HANA systems with Host Auto-Failover on AWS. Host Auto-Failover is a fully automated host fault recovery solution for SAP HANA that allows you to add one or more hosts in standby mode to an SAP HANA system.
Using Host Auto-Failover, SAP HANA can automatically detect host failures (EC2 instance, OS-level, or SAP HANA) and trigger an automated failover to a standby host, enabling you to automatically recover in minutes.
Host Auto-Failover depends on shared storage so that a standby host can automatically take over when an active host fails. Starting this week, you can deploy SAP HANA with Host Auto-Failover on Amazon Elastic Compute Cloud (EC2) by using Amazon FSx for NetApp ONTAP as the shared storage solution.
Amazon FSx for NetApp ONTAP is the first AWS shared storage service that is SAP-certified for workloads such as S/4HANA, Business Suite on HANA, and BW/4HANA, offering fully managed storage built on NetApp’s popular ONTAP file system. In addition to supporting Host Auto-Failover, Amazon FSx for NetApp ONTAP also offers a number of data management features that make it even faster and easier to deploy and run SAP HANA, such as snapshots, clones, and SnapMirror replication.
RStudio on Amazon SageMaker now comes with new developer productivity and security capabilities
RStudio on Amazon SageMaker now comes with the new RStudio Workbench version 2022.02.2-485.pro2 with enhanced developer productivity and security capabilities. The new capabilities include an enhanced R help system, improved editor support for the R pipe-bind placeholder, and full end-to-end encryption that secures communication between the RStudioServerPro and RSession applications.
RStudio on Amazon SageMaker is the industry’s first fully managed, cloud-based RStudio Workbench. Data scientists and developers can launch the familiar RStudio integrated development environment (IDE) in a single click to build, train, and deploy models on Amazon SageMaker. You can elastically dial up and down the underlying compute resources without interrupting your work, and even switch to programming using Python on Amazon SageMaker Studio notebooks.
You will automatically get the updated RStudio Workbench version 2022.02.2-485.pro2 when you create a new SageMaker domain with RStudio enabled. If you have an existing domain, you will need to restart RStudio to get the new version.
AWS Glue introduces Git integration
AWS Glue now offers integration with Git, the widely-used open source version control system. AWS Glue is a serverless data integration service that uses reusable jobs to perform extract, transform, and load (ETL) tasks on data sets of nearly any scale. With this feature, customers can use GitHub and AWS CodeCommit to maintain a history of changes to their AWS Glue jobs and apply their existing DevOps practices to deploy them. Before now, customers needed to set up their own integrations with their code versioning systems and build tooling to move jobs from development environments to production environments.
Git integration in AWS Glue works for all AWS Glue job types, whether visual or code-based. It includes built-in integration with both GitHub and AWS CodeCommit and also makes it simpler to use automation tools like Jenkins and AWS CodeDeploy to deploy AWS Glue jobs. This feature also adds the option to download and upload jobs manually. Finally, AWS Glue Studio’s visual editor now supports parameterizing data sources and targets so you can update them when deploying the job to a new account.
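If you script your deployments, the Git sync can also be driven from the SDK. The sketch below is a rough, unverified example using boto3; the operation name (update_source_control_from_job) and its parameters are assumptions based on the announcement, so check the current AWS Glue API reference before using it:

```python
import boto3

glue = boto3.client("glue")

# Push the definition of an existing Glue job to a Git repository.
# Method and parameter names are assumptions to verify against the Glue API
# docs; job, repository, and token values are placeholders.
glue.update_source_control_from_job(
    JobName="nightly-etl-job",
    Provider="GITHUB",
    RepositoryName="glue-jobs",
    RepositoryOwner="example-org",
    BranchName="main",
    AuthStrategy="PERSONAL_ACCESS_TOKEN",
    AuthToken="<token>",
)
```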
Amazon EC2 Auto Scaling now supports Predictive Scaling in the Asia Pacific (Jakarta) Region
Amazon EC2 Auto Scaling now supports Predictive Scaling in the Asia Pacific (Jakarta) Region. Predictive Scaling can proactively scale out your Auto Scaling group to be ready for upcoming demand. This allows you to avoid the need to over-provision capacity, resulting in lower EC2 cost, while ensuring your application’s responsiveness. To see the list of all supported AWS public regions and AWS GovCloud regions, click here.
Predictive Scaling is appropriate for applications that experience recurring patterns of steep demand changes, such as early morning spikes when business resumes. It learns from the past patterns and launches instances in advance of predicted demand, giving instances time to warm up. Predictive scaling enhances existing Auto Scaling policies, such as Target Tracking or Simple Scaling, so that your applications scale based on both real-time metrics and historic patterns. You can preview how Predictive Scaling works with your Auto Scaling group by using “Forecast Only” mode.
Predictive Scaling is available as a scaling policy type through AWS Command Line Interface (CLI), EC2 Auto Scaling Management Console, AWS CloudFormation and AWS SDKs. To learn more, visit the Predictive Scaling page in the EC2 Auto Scaling documentation.
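To try it out in forecast-only mode from code, a boto3 sketch along these lines could work; the Auto Scaling group name and target value are placeholders, and the configuration shape should be checked against the EC2 Auto Scaling API reference:

```python
import boto3

# ap-southeast-3 is the Asia Pacific (Jakarta) Region.
autoscaling = boto3.client("autoscaling", region_name="ap-southeast-3")

# Attach a predictive scaling policy in "forecast only" mode so you can
# review the forecasts before letting it change capacity.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical ASG name
    PolicyName="cpu-predictive-scaling",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization",
                },
            }
        ],
        "Mode": "ForecastOnly",
    },
)
```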
AWS Lambda now supports content filtering options for Amazon MSK, Self-Managed Kafka, Amazon MQ for Apache ActiveMQ, and Amazon MQ for RabbitMQ
With event pattern content filtering, customers can write complex rules so that their Lambda function is only invoked to process meaningful events. This helps reduce traffic to customers’ Lambda functions, simplifies code, and reduces overall cost. Filtering was already available for SQS, DynamoDB, and Kinesis as event sources for Lambda.
AWS customers can specify up to 5 filter criteria when creating or updating the event source mappings for their Lambda functions triggered by an event source that supports filtering. The filters are combined using OR logic by default. In other words, an event/payload meeting any of the filtering criteria defined will be passed on to trigger a Lambda function, while an event/payload not matching any of the filtering criteria will be dropped. This feature helps reduce function invocations for microservices that only use a subset of events available, removing the need for the target Lambda function or downstream applications to perform filtering.
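As an illustration of the filtering described above, the boto3 sketch below creates an event source mapping for an MSK topic that only forwards records whose JSON value has status "active"; the ARN, topic, function name, and filter pattern are placeholders:

```python
import boto3
import json

lambda_client = boto3.client("lambda")

# Only invoke the function for messages whose JSON body has status == "active".
filter_pattern = {"value": {"status": ["active"]}}

lambda_client.create_event_source_mapping(
    FunctionName="process-orders",  # hypothetical function name
    EventSourceArn="arn:aws:kafka:us-east-1:123456789012:cluster/example/uuid",
    Topics=["orders"],
    StartingPosition="LATEST",
    FilterCriteria={
        "Filters": [
            {"Pattern": json.dumps(filter_pattern)},
        ]
    },
)
```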
Amazon EC2 C6gn instances are now available in additional region
Starting this week, Amazon EC2 C6gn instances are available in AWS Asia Pacific (Osaka) Region.
Based on the AWS Nitro System, C6gn instances are powered by Arm-based AWS Graviton2 processors and feature up to 100Gbps network bandwidth, delivering up to 40% better price-performance versus comparable current generation x86-based network optimized instances for applications requiring high network bandwidth, such as high performance computing (HPC), network virtual appliances, data lakes and data analytics.
These instances can utilize the Elastic Fabric Adapter (EFA) for workloads like HPC and video processing that can take advantage of lower network latency with Message Passing Interface (MPI) for at-scale clusters. Workloads on these instances will continue to take advantage of the security, scalability and reliability of Amazon’s Virtual Private Cloud (VPC).
AWS Activate is now open to all startups
AWS Activate provides startups, including both smaller, early stage companies and more advanced digital businesses, with free tools and resources to quickly get started on AWS. This week AWS were excited to announce the availability of the new AWS Activate to any self-identified startup. Activate is full of personalized tools and resources designed to support startups through every stage of their journey, from their initial idea, to building an MVP, to securing their first customer, to scaling their business on AWS and beyond.
Any startup can join AWS Activate and get instant access to:
- More than 40 battle-tested AWS solution templates tailored to a startup’s use case
- Personalized tools, resources, and content tailored to a startup’s needs
- Best practices for optimizing performance, managing risks, and keeping costs under control
- Access to AWS Startup Lofts offering founders a wealth of resources, exclusive webinars, and free 1:1 sessions with AWS startup experts to support a startup’s journey from idea to IPO, including co-working space in select locations.
Then, when startups are ready, AWS Activate members can apply for up to $100,000 in AWS credits, technical support credits, and access to exclusive third party offers.
AWS CloudFormation StackSets increases limits on three service quotas
This week, AWS CloudFormation StackSets increased the default for three service quotas: number of stack instances per stack set, number of stack sets per management account, and number of concurrent stack instance operations in a single AWS Region per management account.
You can now (1) deploy up to 100,000 stack instances per stack set (previously 2,000), (2) create up to 1,000 stack sets in your management account (previously 100), and (3) run up to 10,000 concurrent stack instance operations in a single Region per management account (previously 3,500). See AWS CloudFormation quotas for the latest service quotas.
StackSets extends the functionality of CloudFormation stacks and enables you to create, update, or delete stacks across multiple AWS accounts and Regions with a single operation. With this launch, you can scale your stack instance deployments from a single management account by 40x, and reduce the number of management accounts needed to support multiple stack sets by up to 9x.
You can use StackSets for bootstrapping AWS accounts, baselining configurations, deploying cross-account data collection systems, setting up disaster recovery, and solving other multi-account, multi-Region use cases.
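To put the raised quotas in context, this is roughly how a stack instance deployment across an organizational unit looks with boto3; the stack set name, OU ID, and Regions are placeholders, and the example assumes a service-managed stack set:

```python
import boto3

cfn = boto3.client("cloudformation")

# Deploy stack instances for an existing stack set into two Regions of an
# organizational unit (identifiers are placeholders for illustration).
cfn.create_stack_instances(
    StackSetName="baseline-logging",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-examp-12345678"]},
    Regions=["us-east-1", "eu-west-1"],
    OperationPreferences={
        "MaxConcurrentCount": 10,   # stays well within the raised concurrency quota
        "FailureToleranceCount": 0,
    },
)
```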
Amazon EC2 now offers an automated connection set-up solution between EC2 instance and RDS Database
Starting this week, in Amazon Elastic Compute Cloud (Amazon EC2), you have the option to automatically set up connectivity between your EC2 instances and an Amazon Relational Database Service (Amazon RDS) or Amazon Aurora database. Once you have provisioned EC2 instances, you can select an RDS DB instance or cluster, and with a single click, complete the connectivity configuration to allow traffic from the EC2 instance to the RDS database. Amazon EC2 follows connectivity best practices and automatically sets up security groups on your behalf, helping to establish a secure connection.
This feature provides a simplified and secure mechanism to complete the connection setup between an EC2 instance and RDS database. If done manually, establishing a connection between your application and database requires tasks such as setting up a VPC, security groups, and ingress/egress rules. The automated solution helps improve productivity for new users and application developers, who can now seamlessly connect EC2 instances to databases in a simplified and automated way.
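For comparison, the manual equivalent of what the automation configures is essentially a security group ingress rule that allows the database port from the instance's security group. A hedged boto3 sketch (with placeholder security group IDs and a MySQL port) follows:

```python
import boto3

ec2 = boto3.client("ec2")

DB_SECURITY_GROUP = "sg-0db0000000example"   # attached to the RDS instance (placeholder)
APP_SECURITY_GROUP = "sg-0app000000example"  # attached to the EC2 instance (placeholder)

# Allow MySQL traffic (port 3306) from the application security group only,
# rather than opening the database to a CIDR range.
ec2.authorize_security_group_ingress(
    GroupId=DB_SECURITY_GROUP,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": APP_SECURITY_GROUP}],
        }
    ],
)
```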
Announcing a new Cost Explorer console experience
Starting this week, Cost Explorer customers will see a new interface in the console to visualize their cost and usage. The new summary widget provides an at-a-glance view of the total and average monthly cost. Customers can also look up specific spend and usage information using the new search function introduced in the table view.
Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. You can analyze your data at a high level (for example, total costs and usage across all accounts) or dive deeper into your costs and usage data to identify trends, pinpoint cost drivers, and detect anomalies.
AWS Neuron adds support for AWS Trainium powered Amazon EC2 Trn1 instances
AWS Neuron adds support for AWS Trainium powered Amazon EC2 Trn1 instances to unlock high-performance, cost-effective deep learning training at scale. The Neuron SDK includes a compiler, runtime libraries, and profiling tools that integrate with popular ML frameworks such as PyTorch and TensorFlow. With this first release of Neuron 2.x, developers can now run deep learning training workloads on Trn1 instances to save training costs by up to 50% over comparable GPU-based EC2 instances, while getting the highest training performance in the AWS cloud for popular NLP models.
Neuron adds support for training deep learning models, starting with language models, to be followed by additional model families including vision models (as outlined in the Neuron roadmap). Under language models, this release of Neuron supports Transformers Encoder/Autoencoder and Transformers Decoder/Autoregressive model architectures such as GPT. To help speed up developer workflows and provide better insight into training workloads, Neuron now supports seamless Just-In-Time compilation, step-by-step execution with Eager Debug mode, and tools that provide performance and utilization insights.
To help developers capitalize on Trainium innovations and maximize their performance and cost benefits, Neuron unlocks various hardware optimizations. It supports FP32, TF32, FP16, and BF16 data types and automatic casting from FP32 to TF32, BF16 and FP16. It also adds support for hardware-accelerated stochastic rounding which enables training at BF16 speeds, with FP32 range of accuracy when auto-casting from FP32 to BF16.
To support distributed training of large-scale models across accelerators in a Trn1 UltraCluster, Neuron adds support for various collective compute operations and 800 Gbps of EFA networking, which is the highest networking bandwidth currently offered in the AWS cloud. Neuron also provides support for distributed training libraries such as Megatron-LM in a public GitHub repository.
Developers can run DL training workloads on Trn1 instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Container Service (Amazon ECS), and AWS ParallelCluster, with support for Amazon Elastic Kubernetes Service (Amazon EKS), Amazon SageMaker, and AWS Batch coming soon. To help developers get started, this release provides examples for pre-training and fine-tuning of HuggingFace BERT-large, and pre-training of Megatron-LM GPT3 (6.7B) model.
Trn1 instances are available in the following AWS Regions as On-Demand Instances, Reserved Instances, and Spot Instances, or as part of a Savings Plan: US East (N. Virginia) and US West (Oregon). To get started on Trn1 instances, please refer to the Neuron documentation. For a full list of features, enhancements, and changes in this release, please view the release notes. To get insight into the upcoming capabilities, please see the Neuron roadmap.
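As a very rough sketch of what a training step looks like through the XLA-based workflow that the Neuron PyTorch support builds on (model, data, and device details are placeholders, and the torch-neuronx environment setup itself is omitted):

```python
import torch
import torch_xla.core.xla_model as xm  # provided by the torch-xla stack that torch-neuronx builds on

# Minimal sketch of a single training step on an XLA device.
# The model, data shapes, and hyperparameters are placeholders.
device = xm.xla_device()

model = torch.nn.Linear(128, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(32, 128).to(device)
labels = torch.randint(0, 2, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
xm.mark_step()  # trigger compilation and execution of the pending XLA graph
```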
Announcing Amazon EC2 Trn1 instances for high-performance, cost-effective deep learning training
AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances. Amazon EC2 Trn1 instances are powered by AWS Trainium chips, which are purpose built for high-performance ML training applications in the cloud.
Trn1 instances deliver the highest performance on deep learning (DL) training of popular natural language processing (NLP) models on AWS while offering up to 50% cost savings over comparable GPU-based EC2 instances. You can get started with Trn1 instances by using popular ML frameworks, such as PyTorch and TensorFlow, helping you to lower training costs, reduce training times, iterate faster to build more innovative models, and increase productivity.
You can use EC2 Trn1 instances to train natural language processing (NLP), computer vision, and recommender models across a broad set of applications, such as speech recognition, recommendation, fraud detection, image and video classification, and forecasting.
Trn1 instances feature up to 16 AWS Trainium chips, a second-generation ML chip built by AWS after AWS Inferentia. Trn1 instances are the first EC2 instances with up to 800 Gbps of Elastic Fabric Adapter (EFA) network bandwidth. For efficient data and model parallelism, each Trn1 instance has 512 GB of high-bandwidth memory, delivers up to 3.4 petaflops of FP16/BF16 compute power, and features NeuronLink, an intra-instance high-bandwidth nonblocking interconnect.
To support large-scale deep learning models, Trn1 instances are deployed in EC2 UltraClusters. You will be able to use the UltraClusters to scale to up to 30,000 Trainium accelerators, which are interconnected with a nonblocking petabit scale network, and will get on-demand access to a supercomputer with 6.3 exaflops of compute.
Trn1 instances have native support for a wide range of data types, including the new configurable FP8, dynamic input shapes, control flow, C++ custom operators, and stochastic rounding.
The AWS Neuron SDK unlocks these advanced features and adds support for just-in-time (JIT) compilation and the eager debug mode. AWS Neuron is integrated with leading ML frameworks and libraries, such as PyTorch, TensorFlow, Megatron-LM, Hugging Face, and PyTorch FSDP, so you can continue using your existing frameworks and run your application with minimal code changes.
Developers can run DL training workloads on Trn1 instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Container Service (Amazon ECS), and AWS ParallelCluster, with support for Amazon Elastic Kubernetes Service (Amazon EKS), Amazon SageMaker, and AWS Batch coming soon.
Inject Amazon EC2 Spot Instance interruptions directly from the Amazon EC2 console
You can now inject Amazon EC2 Spot Instance interruptions into your Spot Instance workloads directly from the Amazon EC2 console. In 2021, AWS launched the ability for you to use AWS Fault Injection Simulator (AWS FIS) to simulate what happens when Amazon EC2 reclaims Spot Instances, enabling you to test that your application is prepared for an interruption. Now, AWS have made this capability available in the Amazon EC2 console.
Spot Instances enable you to run compute workloads on Amazon EC2 at a discount in exchange for returning the Spot Instances when Amazon EC2 needs the capacity back. Because it is always possible that your Spot Instance may be interrupted, you should ensure that your application is prepared for a Spot Instance interruption.
To trigger a Spot Instance interruption from the Amazon EC2 console, you just need to navigate to the Spot Requests section, select a Spot Instance request, and then choose Actions, Initiate interruption. Behind the scenes, AWS then use AWS FIS to inject the interruption into your selected Spot Instance so that you can test how your application will react to a real-world interruption.
Amazon QuickSight Q now supports questions on datasets secured with Row Level Security (RLS)
QuickSight Q now supports questions for access-restricted datasets that use Row Level Security (RLS) with user-based rules. Readers can now ask questions about Topics that contain restricted-access datasets and instantly receive accurate and pertinent answers based on access control rules defined by authors in RLS settings.
Authors can create Q Topics to answer questions on RLS-enabled datasets without making any additional changes to existing rules. QuickSight Q leverages existing user-based rules defined in RLS settings and enforces these rules not only on answers to questions but also on auto-complete suggestions provided at the time of question framing. Therefore, Q Topics created with RLS-enabled datasets only surface data that users are granted permission to see.
Authors with an Enterprise Edition subscription can define user-based access restrictions on datasets by following the instructions here. These rules are enforced when readers access data through either Dashboards or Topics, making it easy for Authors to manage data access for readers in a single rules dataset. Existing datasets for which rules are already defined do not need any additional changes.
Google Cloud Releases and Updates
Source: cloud.google.com
Anthos Clusters on VMware
Anthos clusters on VMware 1.11.4-gke.32 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.4-gke.32 runs on Kubernetes 1.22.8-gke.204.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
The Connect Agent version used in Anthos clusters on VMware versions 1.8 and earlier is no longer supported. If you upgrade your user cluster to these versions, the gkectl upgrade cluster command may fail. If you encounter this issue and need further assistance, contact Google Support.
If you use gcloud anthos version 1.4.2 and authenticate an Anthos cluster on VMware with gcloud anthos auth, the command fails with the following error:
Decryption failed, no keys in the current key set could decrypt the payload.
To resolve this, you must upgrade gcloud anthos to 1.4.3 or above (gcloud SDK 397.0.0 or above) to authenticate clusters with gcloud anthos auth.
Artifact Registry
When users enable the Container Scanning API and push container images to Artifact Registry, automatic container scanning now generates metadata including a software bill of materials (SBOM) dependency list. Users can analyze the metadata to gain insights into software dependencies and vulnerabilities. For more information, see Examine dependencies. This feature is in private preview.
Batch
Batch is generally available (GA). Batch jobs can be created in the supported locations and resources can be created in any location supported by Compute Engine. With this release the Terms of Service in the Cloud Services Summary fully apply.
The release includes additional capabilities such as support for user-defined service accounts, VPC Service Control, and HIPAA compliance.
The documentation has been updated to include the following new content:
- Create a job that uses environment variables, a custom service account, Message Passing Interface (MPI), GPUs, or storage.
Samples in Java, Node.js, and Python are available for Batch, and the documentation has been updated to include them. For more information, see All Batch code samples.
BigQuery
The reporting process for the tabledata.list bytes per minute quota has been updated to more accurately reflect the enforced limit. The limit has not changed.
Analytics Hub is now generally available. As an Analytics Hub publisher, you can now view all subscriptions to your listing and remove a subscription from your listing.
You can now use stored procedures for Apache Spark. This feature is in preview.
Multi-statement transactions are now generally available (GA).
The ability to use physical bytes for storage billing is now in Preview. For more information, see Dataset storage billing models.
Carbon Footprint
Google Cloud Carbon Footprint is now Generally Available.
Chronicle
Chronicle CLI provides a text-based interface to initiate all Chronicle user workflows, acting as an alternative to the graphical user interface for advanced users.
Access to fields stored as key-value pairs in Detection Engine rules
You can now create Detection Engine rules that include UDM fields stored as key-value pairs, such as the google.protobuf.Struct and Label data types. Using the map syntax, you can access fields stored as the:
- google.protobuf.Struct data type using syntax similar to $e.additional.fields["key"] = "value".
- Label data type using syntax similar to $e.target.labels["key"] = "value".
For more details about the map syntax, see the YARA-L 2.0 language syntax.
Cloud Build
Cloud Build now displays build security information for artifacts stored in Artifact Registry in the Google Cloud console. The new Security insights panel is part of Build History in the console. Users can access information such as Supply chain Levels for Software Artifacts (SLSA) level for built artifacts, vulnerabilities and provenance in the panel. To learn more, see View build security insights. This feature is in public preview.
Cloud Monitoring
SLO monitoring: You can now define a set of generic services by using the Service Monitoring API. This change streamlines integration with tools like Terraform. For more information about defining generic services, see Services.
Cloud Spanner
Spanner Vertex AI integration is now available in public preview. You can now enhance your Spanner applications with machine learning capabilities by using Google Standard SQL. For more information, see About Spanner Vertex AI integration.
Cloud SQL for MySQL
Cloud SQL supports the preview version of the following recommenders that help you optimize your instance's performance:
- High number of open tables recommender: Optimize the performance of your instance by increasing the size of the table open cache for Cloud SQL instances where the number of open tables has reached the table open cache limit and too many tables keep being opened concurrently.
- High number of tables recommender: Optimize the performance of your instance by reducing the number of tables for the Cloud SQL instances whose table count is too high and close to the SLA limit.
Cloud SQL for PostgreSQL
Cloud SQL supports the preview version of the high transaction ID utilization recommender that helps you avoid potential transaction ID wraparound for Cloud SQL for PostgreSQL instances.
Data Catalog
Data Catalog integration with Analytics Hub is now generally available (GA). For more information, see Analytics Hub documentation and Search syntax.
Dialogflow
Dialogflow CX Advanced NLU now supports automatic training.
Document AI
Known issue (Document Labeling)
- If you delete one or more documents, and these documents selected for deletion are all associated with an active labeling job, then all documents in that dataset will also be deleted, even if you did not select them for deletion. This is true regardless of the number of documents selected.
Workaround: Do not delete documents during an active labeling job. You can track active labeling jobs on the Dataset management page, under the category Labeling tasks, located on the right side of the page. If you absolutely must delete documents during an active labeling job, ensure that you also select at least one document that is NOT part of this active labeling job. Then, only the non-associated documents will be deleted, and the remaining documents in the dataset will be preserved.
Enterprise Knowledge Graph
Enterprise Knowledge Graph is available in Preview.
Enterprise Knowledge Graph API has been updated with the following features:
- Support to cancel and delete an entity reconciliation job
- Support for three entity types: Organization, LocalBusiness, and Person.
- Entity linking to Google Knowledge Graph is available for the Organization and LocalBusiness entity types.
Firestore
Time-to-live (TTL) policies are now supported at the General Availability level.
GKE
Creating public clusters on GKE versions 1.23 or later might fail with the following error due to a missing API permission in certain compliance regimes (FedRAMP High, US Regions and Support, EU Regions and Support, EU Regions and Support with Sovereign Controls):
ManagedResourceService.AddServiceBundle, PERMISSION_DENIED'/> APPLICATION_ERROR;google.cloud.servicedirectory.v1beta1/ManagedResourceService.AddServiceBundle;Request is disallowed by organization's constraints/gcp.restrictServiceUsage constraint for 'projects/<projectID> attempting to use service 'servicedirectory.googleapis.com'
To fix this issue, refer to the October 5, 2022 Assured Workloads release note.
Pub/Sub
- Update dependency cachetools to v5 (#1324) (72b6d5f)
- Update dependency certifi to v2022.9.24 (#1303) (dc05237)
- Update dependency charset-normalizer to v2.1.1 (#1308) (fedf2e1)
- Update dependency click to v8.1.3 (#1309) (0ddcb5b)
- Update dependency com.google.cloud:google-cloud-core to v2.8.15 (#1299) (11f220c)
- Update dependency com.google.cloud:google-cloud-core to v2.8.16 (#1301) (186c794)
- Update dependency com.google.cloud:google-cloud-core to v2.8.17 (#1326) (361a2f2)
- Update dependency com.google.cloud:google-cloud-core to v2.8.18 (#1328) (ae23532)
- Update dependency com.google.cloud:google-cloud-core to v2.8.20 (#1329) (c37b88e)
- Update dependency com.google.cloud:google-cloud-shared-dependencies to v3.0.4 (#1330) (0f6cc6c)
- Update dependency com.google.protobuf:protobuf-java-util to v3.21.7 (#1327) (6355eb0)
- Update dependency gcp-releasetool to v1.8.8 (#1304) (1c7c6eb)
- Update dependency google-api-core to v2.10.1 (#1310) (14725f2)
- Update dependency google-auth to v2.11.1 (#1305) (a6954d1)
- Update dependency google-auth to v2.12.0 (#1313) (ffcebe4)
- Update dependency google-cloud-core to v2.3.2 (#1306) (fbb4460)
- Update dependency importlib-metadata to v4.12.0 (#1314) (e319df0)
- Update dependency jeepney to v0.8.0 (#1315) (5ed336e)
- Update dependency jinja2 to v3.1.2 (#1316) (14ecdc6)
- Update dependency keyring to v23.9.3 (#1317) (3e783d4)
- Update dependency markupsafe to v2.1.1 (#1318) (ecd9c76)
- Update dependency org.graalvm.buildtools:native-maven-plugin to v0.9.14 (#1297) (7e7ce60)
- Update dependency protobuf to v3.20.2 (#1319) (f5123fa)
- Update dependency pyjwt to v2.5.0 (#1320) (a568462)
- Update dependency requests to v2.28.1 (#1321) (41b105a)
- Update dependency typing-extensions to v4.3.0 (#1322) (288cd7e)
- Update dependency zipp to v3.8.1 (#1323) (e78a284)
Resource Manager
The organization restrictions feature has launched into public preview. The organization restrictions feature enables you to prevent data exfiltration through phishing or insider attacks.
For managed devices in an organization, the organization restrictions feature restricts access only to resources in authorized Google Cloud organizations. For more information, see Introduction to organization restrictions.
Retail API
Auto-completion for Retail Search is now GA.
Auto-completion predicts the rest of a query a user is typing, which can improve the user search experience and accelerate the shopping process before checkout.
For more about auto-completion for Retail Search, see the Auto-completion documentation.
Recommendations AI now provides a Buy It Again model.
The Buy It Again model encourages purchasing items again based on previous recurring purchases. This personalized model predicts products that have been previously bought at least once and that are typically bought on a regular cadence.
For more information about the Buy It Again model, see the Buy It Again documentation. For how to create this model, see Create models.
Recommendations AI now provides a revenue per session optimization objective for the Others You May Like and Frequently Bought Together model types.
This objective works differently for each model type, but always optimizes for revenue by recommending items that have a higher probability of being added to carts.
For more about the revenue per session optimization objective, see the Revenue per session documentation.
Recommendations AI now provides two diversification options when you create serving configs for recommendations.
- Rule-based diversification affects whether results returned from a single prediction request are from different categories of your product catalog.
- Data-driven diversification uses machine learning to balance category diversity and relevance in your prediction results.
For more about diversification types, see the Diversification documentation.
Software Delivery Shield
Software Delivery Shield, a fully-managed, end-to-end software supply chain security solution, has launched. It provides a comprehensive and modular set of capabilities and tools across Google Cloud services that developers, DevOps, and security teams can use to improve the security posture of the software supply chain. For more information on the features of Software Delivery Shield, see Software Delivery Shield overview.
Translation Hub
Translation Hub advanced-tier portals are available in Preview.
Vertex AI
Tabular Workflow for TabNet Training is available in Preview. For documentation, refer to Tabular Workflows for TabNet Training.
Tabular Workflow for Wide & Deep Training is available in Preview. For documentation, refer to Tabular Workflow for Wide & Deep Training.
Vertex AI Feature Store streaming ingestion is available in Preview.
The Vertex AI Model Registry is generally available (GA). Vertex AI Model Registry is a searchable repository where you can manage the lifecycle of your ML models. From the Vertex AI Model Registry, you can better organize your models, train new versions, and deploy directly to endpoints.
The Vertex AI Model Registry and BigQuery ML integration is generally available (GA). With this integration, BigQuery ML models can be managed alongside other ML models in Vertex AI to easily version, evaluate, and deploy for prediction.
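As a small illustration of registering a model with the Vertex AI SDK for Python (the project, location, artifact path, and serving image are placeholders; confirm the exact arguments against the SDK documentation):

```python
from google.cloud import aiplatform

# Initialize the SDK against a placeholder project and region.
aiplatform.init(project="my-project", location="us-central1")

# Upload a model so it appears in the Vertex AI Model Registry; later uploads
# with the same display name can become new versions that you can evaluate
# and deploy to endpoints.
model = aiplatform.Model.upload(
    display_name="churn-classifier",
    artifact_uri="gs://my-bucket/models/churn/v1/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"  # placeholder image
    ),
)
print(model.resource_name)
```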
VPC Service Controls
Preview stage support for the following integration:
Workflows
The memory available for workflow variables and runtime arguments (including Eventarc events) has been doubled to 512 KB per execution.
Workload Manager
Preview: Workload Manager is now available for SAP workloads. For more information, see the Product overview.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
Generally available: Kusto Trender
Kusto Trender is a new JavaScript client library for Kusto focused on IoT scenarios. It is now available on GitHub.
Public preview: Enhanced soft delete for Azure Backup
Enhanced soft delete helps you recover your backup data after it has been deleted, giving you protection against data loss scenarios such as ransomware, malicious actors, and accidental deletes.
Public preview: Multi-user authorization for Backup vaults
Multi-user authorization provides enhanced protection for your backup data against unauthorized critical operations.
Public preview: Immutable vaults for Azure Backup
Immutable vaults help you protect your backups against threats like ransomware attacks and malicious actors by ensuring that your backup data cannot be deleted before its intended expiry time.
General availability: Azure NetApp Files application volume group for SAP HANA
With Azure NetApp Files application volume group (AVG) for SAP HANA, you are able to deploy all volumes required to install and operate an SAP HANA system according to best practices.
Generally available: Windows Admin Center for Azure Virtual Machines
Windows Admin Center for Azure Virtual Machines is now generally available for you to manage the Operating System of your Windows Server infrastructure in Azure.
General availability: Static IP configurations of private endpoints
Azure private endpoint support of statically defined IP addresses is now generally available allowing you to provide static IP addresses when configuring your private endpoints.
General availability: Azure Sphere OS version 22.10 expected on October 25
Participate in the retail evaluation now to ensure full compatibility. The OS evaluation period provides 14 days for backward compatibility testing.
General availability: Custom network interface name configurations of private endpoints
Azure private endpoint (PE) support of custom network interface (NIC) names is now generally available. With this feature you can now deploy PE NICs without the GUID suffix.
General availability: Azure Monitor predictive autoscale for Azure Virtual Machine Scale Sets
Predictive autoscale manages and scales your virtual machine scale sets by observing and learning from historical CPU usage patterns.
Generally available: Azure Site Recovery update rollup 64 - October 2022
This update provides the improvements for the latest version of Azure Site Recovery components.
Public preview: Computer Vision Image Analysis 4.0
Get results from all Image Analysis visual features, including Read (OCR), with a single API call.
Public preview: Microsoft Azure Deployment Environments
Azure Deployment Environments has entered public preview and helps dev teams create and manage all types of environments throughout the application lifecycle.
Public preview: Support for HANA System Replication in Azure Backup for HANA
With Azure Backup for HANA, if a HANA database fails over using HANA System Replication, backups are now automatically continued from the new primary.
Generally available: Service Bus Explorer for the Azure portal
Manage and work with your messages directly in the portal integrated version of Service Bus Explorer.
Generally available: Control the minimum TLS version used with Azure Service Bus
Configure your Service Bus namespace to require that clients send and receive data with a minimum version of Transport Layer Security (TLS).
Public preview: Job diagram simulator in VS Code
The job diagram simulator provides a capability to visualize your Stream Analytics job’s topology and help you improve the query’s parallelism as you develop your streaming query.
General availability: Stream Analytics no-code editor in Event Hub
Stream Analytics no-code editor enables you to develop a Stream Analytics job in minutes with a drag-and-drop experience. Now, it is generally available with several new capabilities added.
Public preview: Oracle to Azure migration with Database Migration Assessment for Oracle
Migrate from Oracle to Azure Database for PostgreSQL with the Database Migration Assessment for Oracle extension available in Azure Data Studio.
Public preview: Azure Active Directory for Azure Database for PostgreSQL – Flexible Server
Use Azure Active Directory-based authentication methods using Azure Active Directory principals, managed identities, and groups to connect and access Azure Database for PostgreSQL - Flexible Server.
Azure Machine Learning—Public preview updates for October 2022/Ignite
Features include functionality to promote pipelines and models across workspaces, perform data wrangling at scale, shorten training times, and lower set-up costs with Azure Container for PyTorch.
General availability: Azure Database for MySQL - Flexible Server with AMD compute
Take advantage of AMD compute for Azure Database for MySQL – Flexible Server general purpose and business critical tiers, with tuned performance out of the box.
Public preview: Azure Database for MySQL - Flexible Server auto scale IO
Use auto scale IO to automatically scale the IO on an instance of Azure Database for MySQL - Flexible Server to efficiently adapt for workload needs without manual intervention.
Infrastructure encryption with customer managed keys adds a second layer of encryption for your data at rest using customer managed keys.
Generally available: Azure Advisor score across all Azure regions
Azure Advisor score enables you to quickly prioritize the Azure Advisor recommendations that will have the greatest impact on your workload health.
Generally available: Azure Cosmos DB for PostgreSQL
Build distributed PostgreSQL applications that scale globally with cross-region read replicas with Azure Cosmos DB for PostgreSQL.
Public preview: Azure Resource Topology
Enhancing network and resource health visualization with unified and dynamic topology across subscriptions, regions, and resource groups.
Public preview: Integration of Azure Monitor Agent with Connection Monitor
Integration of Azure Monitor Agent support in Connection Monitor for monitoring connectivity to Arc-enabled endpoints.
Public preview: Azure Migrate - SQL Server migration assessment for migration or modernization
Save cost in Azure for your SQL estate through the new unified discovery and assessment feature in Azure Migrate.
General availability: Azure Stack HCI new features release
Get the latest features release, version 22H2, for Azure Stack HCI.
Public preview: New Azure Virtual Machine Scale Set and Spot Virtual Machines capabilities
You can now include standard VMs and Spot VMs in the same Azure Virtual Machine Scale Set. New VM-series are now supported with Virtual Machine Scale Sets flexible orchestration mode.
General availability: Azure Monitor Logs - capabilities to add value and lower costs
Azure Monitor Logs announces the general availability of capabilities that help you increase cost effectiveness.
Public preview: Azure Monitor Logs - create granular level RBAC for custom tables
Setting the RBAC query read action on individual Azure Monitor Logs tables is now available in public preview.
Public preview: Azure Monitor managed service for Prometheus
Azure Monitor's new fully managed, Prometheus-compatible service delivers the best of the open-source ecosystem while automating complex tasks such as scaling, high availability, and long-term data retention.
Public preview: Rules for Azure Kubernetes Service and Log Analytics workspace resources
Enable a set of recommended alert rules for your Azure Kubernetes Service and Log Analytics workspace resources.
Generally available: Azure Monitor agent migration tools
Migration to the cost-effective, secure, performant, and manageable new Azure Monitor agent is now made easy using migration tools and helpers.
Generally available: Azure Monitor agent support for Windows clients
Using the new Windows client installer you can now get the benefits of the new agent and data collection rules on your Windows 10 and 11 client devices.
General availability: AMD confidential VM guest attestation
Guest attestation enables verifying the trustworthiness (good state) of the trusted execution environment on which the guest VM is executing.
General availability: Confidential VM option for SQL Server on Azure Virtual Machines
Run your SQL Server workloads on the latest AMD-backed confidential virtual machines, ensuring protection of data in use.
Public preview: Azure Managed Confidential Consortium Framework
Confidential Consortium Framework is a new Azure service that lets you use open-source for building and managing multi-party applications that require decentralized trust and governance in a trusted network.
Public preview: Confidential VM option for Azure Virtual Desktop
You can now choose a confidential VM for your Windows 11 desktop virtualization experience, ensuring that workloads running on your virtual desktop are encrypted in memory and protected in use.
General availability: AMD-based confidential VMs for Azure Kubernetes Service
Announcing the general availability of confidential node pools on Azure Kubernetes Service (AKS) with AMD SEV-SNP confidential VMs.
Public preview: Azure CNI Overlay mode in Azure Kubernetes Service
Azure Kubernetes Service (AKS) now offers a new network overlay mode that is more efficient in terms of IP utilization.
Public preview: Kubernetes apps on Azure Marketplace
Public preview: Azure Active Directory workload identity support in AKS
Workload identity provides a simpler approach for managing identities and overcomes limitations of Azure Active Directory pod-managed identity.
Generally available: Windows server 2022 host support in AKS
You can now benefit from the latest Windows Server 2022 features in AKS.
Generally available: 5,000 node scale in AKS
Generally available: Event Grid integration with AKS
You can now get notification via Event Grid when a new Kubernetes version is available on AKS.
General availability: Azure DNS Private Resolver – hybrid name resolution and conditional forwarding
Seamlessly perform conditional forwarding from on-prem to Azure Private DNS Zones and from Azure Virtual Networks to any target DNS server.
Public preview: IP Protection SKU for Azure DDoS Protection
IP Protection is a new SKU that provides cost-effective, enterprise-grade DDoS protection for SMBs and offers the flexibility to enable protection on an individual public IP.
Public preview: Azure Kubernetes Service hybrid deployment options
Azure Kubernetes Service has new hybrid deployment options.
Public preview: Planned maintenance feature for App Service Environment v3
Get a notification 15 days ahead of planned automatic maintenance and start the maintenance when it is convenient for you.
In development: Larger SKUs for App Service Environment v3
Using the new Isolated v2 SKUs, you can now run larger and more complex workloads.
In development: Day 0 support for .NET 7
Day 0 support for .NET 7.0 on App Service.
In development: Go language support
Go language (v1.18 and v1.19) natively supported on Azure App Service.
Public preview: Stretched clusters for Azure VMware Solution
Microsoft is announcing the public preview of stretched clusters for Azure VMware Solution, providing 99.99% uptime for customers' mission-critical applications.
Generally available: Azure Hybrid Benefit for AKS and Azure Stack HCI
Microsoft is expanding Azure Hybrid Benefit for Windows Server Software Assurance (SA) customers to easily modernize with Azure, wherever they are.
Public preview: Customer-managed keys for Azure VMware Solution
Microsoft is announcing customer-managed keys for Azure VMware Solution. This new feature will support higher security for customers’ mission critical workloads.
Generally available: Azure Automanage for Azure Virtual Machines and Arc-enabled servers
Announcing the general availability for Azure Automanage with expanded configuration options.
General availability: User-assigned managed identities for Azure Stream Analytics
Azure Stream Analytics now allows you to use user-assigned managed identities to authenticate your job's inputs and outputs.
General availability: Use managed identities to access Cosmos DB from Stream Analytics
You can use managed identity to authenticate to your Cosmos DB output from Azure Stream Analytics.
General availability: Azure Data Explorer output from Azure Stream Analytics
Azure Data Explorer output from Azure Stream Analytics is now generally available.
General availability: Azure Premium SSD v2 Disk Storage
Premium SSD v2 is the next generation Azure Premium SSD Disk Storage. It offers the most advanced general-purpose block storage solution with the best price-performance.
Generally available: Dapr support for managed identity in Azure Container Apps
Container Apps now provides support for using Dapr with Managed Identity.
Public preview: Azure Container Apps Azure Monitor integration
Azure Container Apps now support using Azure Monitor to send your logs to additional destinations.
Public preview: Support for hybrid rendering with Next.js apps
Azure Static Web Apps now supports zero configuration deployment and hosting of hybrid-rendered Next.js applications.
Generally available: Dapr secrets API support
You can now use Dapr secrets APIs from your Azure Container Apps and leverage Dapr secret store component references when creating other components in the environment.
Public preview: Physical job diagram for Stream Analytics job troubleshooting
The physical job diagram provides rich, instant insights to your Stream Analytics job to help you quickly identify the causes of problems when you troubleshoot issues.
Public preview: Support for HANA Instance snapshot integrated with backint logs
Protect your large HANA databases in Azure Virtual Machines and recover them instantly with the Azure Backup for HANA backint-certified solution, which now supports HANA snapshots.
Azure Functions support for Node 12 is ending on 13 December 2022
TARGET RETIREMENT DATE: DECEMBER 13, 2022
Update your functions apps to use Azure Functions host version 4 and Node 16.
Integration service environment in Azure Logic Apps will be retired on 31 August 2024
TARGET RETIREMENT DATE: AUGUST 31, 2024
General availability: New Azure proximity placement groups feature
You can now use the new optional parameter "intent" to specify the VM sizes and zone intended for the proximity placement group.
General availability: Simplified disaster recovery for VMware machines using Azure Site Recovery
This week Azure officially announced the general availability of a simpler, more reliable, and modernized way to protect your VMware virtual machines using Azure Site Recovery, for recovering quickly from disasters.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):