Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday, September 9th 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
Source: aws.amazon.com
AWS Backup adds Amazon CloudWatch metrics to its console dashboard
AWS Backup now provides a way to centrally view your Amazon CloudWatch metrics for your data protection jobs directly in the AWS Backup console. With this launch, you can monitor your data protection metrics (of backup, copy, and restore jobs) for all the AWS Backup supported services, spanning compute, storage, databases, and third-party applications. To drill down to a custom view, you can add your tracked metrics to a custom CloudWatch dashboard using the "Add to dashboard" capability.
The new integrated Amazon CloudWatch dashboard for AWS Backup is available to all customers in all AWS Regions where AWS Backup is available. To learn more about AWS Backup, visit the AWS Backup product page, technical documentation, and pricing page. For more information on the AWS Backup features available across AWS Regions, see AWS Backup documentation.
AWS CloudFormation announces new language extensions transform
AWS CloudFormation announces the general availability of a new transform supporting extensions to the CloudFormation template language. AWS CloudFormation is an Infrastructure as Code (IaC) service that lets you model, provision, and manage AWS and third-party resources by authoring templates which are formatted text files in JSON or YAML. This release introduces a language transform called 'AWS::LanguageExtensions.' When declared in a template, the transform enables extensions to the template language. At launch, these include: new intrinsic functions for length (Fn::Length) and JSON string conversion (Fn::ToJsonString), and support for intrinsic functions and pseudo-parameter references in update and deletion policies.
These new language extensions are the result of open discussions with the larger CloudFormation community via our Language Discussion Github repository. This repository allows customers to request language features and leave feedback on Request for Comments (RFC) proposals for new language features. The Fn::Length intrinsic function returns the number of elements within an array or an intrinsic function that returns an array. The Fn::ToJsonString intrinsic function converts an object or array to its corresponding JSON string. Finally, you can use intrinsic functions to define the DeletionPolicy and UpdateReplacePolicy resource attributes. Please visit the Language Discussion repo to recommend or provide input on new language extensions.
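As a rough mental model, the two new intrinsic functions behave like Python's `len` and `json.dumps`. The sketch below is illustrative only: the function names are invented, and in a real template you would write `Fn::Length` and `Fn::ToJsonString` under the `AWS::LanguageExtensions` transform rather than calling Python.

```python
import json

def fn_length(value):
    """Mimics Fn::Length: returns the number of elements in an array."""
    return len(value)

def fn_to_json_string(value):
    """Mimics Fn::ToJsonString: serializes an object or array to a JSON string.

    The exact whitespace of CloudFormation's output is not guaranteed here;
    only the JSON content is meant to match."""
    return json.dumps(value, separators=(",", ":"))

subnets = ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"]
print(fn_length(subnets))  # 3
print(fn_to_json_string({"Key": "Env", "Value": "prod"}))
```

This makes it easy to see why the pair is useful together: `Fn::Length` can size resources off a parameter list, while `Fn::ToJsonString` lets you embed structured data where a resource property expects a JSON-encoded string.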
Introducing Seekable OCI for lazy loading container images
Seekable OCI (SOCI) is a technology open sourced by AWS that enables containers to launch faster by lazily loading the container image. SOCI works by creating an index (SOCI Index) of the files within an existing container image. This index is a key enabler to launching containers faster, providing the capability to extract an individual file from a container image before downloading the entire archive.
Most methods for launching containers download the entire container image from a remote container registry before starting the container. Waiting for all of the data is wasteful in cases when only a small amount of data is needed for startup. Prior research has shown that the container image downloads account for 76% of container startup time, but on average only 6.4% of the data is needed for the container to start doing useful work.
There are various solutions to this problem, including reducing the size of a container image and pre-fetching container images to local storage. Lazy loading is an approach where data is downloaded from the registry in parallel with the application startup. Container images are stored as an ordered list of layers, and layers are most often stored as gzipped tar files. It’s usually not possible to fetch individual files from gzipped tar files. Some projects enable lazy loading through format conversion. One such project is stargz-snapshotter, which takes an existing OCI image and builds a new OCI image with an embedded table of contents. With SOCI, we borrowed some of the design principles from stargz-snapshotter, but took a different approach. A SOCI index is generated separately from the container image, and is stored in the registry as an OCI Artifact and linked back to the container image by OCI Reference Types. This means that the container images do not need to be converted, image digests do not change, and image signatures remain valid.
An open-source build tool is used to create SOCI indices for existing OCI container images and a remote snapshotter, called soci-snapshotter, provides containerd the ability to lazy load images that have been indexed by SOCI. SOCI and the soci-snapshotter are open sourced under Apache 2.0, and you can learn more about the project on GitHub. We look forward to working and engaging with the community on improving SOCI and making container launches faster.
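To make the indexing idea concrete, here is a toy Python sketch in the same spirit: it records where each file's bytes live inside a tar archive so that a single file can be read without scanning the rest. It deliberately uses an uncompressed in-memory tar to keep the example runnable; real SOCI indexes spans of gzipped layers and stores the index in the registry as an OCI Artifact.

```python
import io
import tarfile

def build_index(tar_bytes):
    """Toy SOCI-like index: map each member name to (data offset, size)."""
    index = {}
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        for member in tar.getmembers():
            index[member.name] = (member.offset_data, member.size)
    return index

def read_one_file(tar_bytes, index, name):
    """Extract a single member by seeking straight to its bytes,
    without walking the whole archive."""
    offset, size = index[name]
    return tar_bytes[offset:offset + size]

# Build a small two-file archive in memory to index.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in (("app/config", b"port=8080"), ("app/binary", b"\x7fELF")):
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

tar_bytes = buf.getvalue()
idx = build_index(tar_bytes)
print(read_one_file(tar_bytes, idx, "app/config"))  # b'port=8080'
```

The key property mirrored here is that the index is derived from, and stored alongside, an unmodified archive, which is why SOCI can leave image digests and signatures intact.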
Amazon SageMaker Canvas announces new visualizations for exploratory data analysis (EDA)
Amazon SageMaker Canvas announces additional capabilities for exploratory data analysis (EDA) with advanced visualizations, enabling you to explore and analyze your data better before building machine learning (ML) models. SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code.
Starting this week, Amazon SageMaker Canvas provides new visualizations for EDA that enable you to understand your data better before model building. These visualizations add to the range of data preparation and exploration capabilities already offered by Canvas, such as flexible data sampling sizes, imputing missing values, replacing outliers, filtering, joining, and modifying datasets, and expanded timestamp formats. The visualizations help you analyze the relationships between features in your data sets and comprehend your data better. This is done in an easy-to-read visual format, with the ability to interact with the data and discover insights that may go unnoticed with ad-hoc querying. They can be created quickly through the Data Visualizer within SageMaker Canvas prior to building and training ML models. The new visualizations include:
Scatter Plots: These plots can be used to observe relationships between different numeric variables in your data. Dots are used to present values for two different numeric variables, with the position of each dot indicating the value for a particular data point on the horizontal and vertical axes.
Bar Charts: These charts can be used to summarize a set of categorical data represented by bars for instant data comparison. The height of each bar represents the proportion of a specific aggregation of the data.
Box Plots: These plots represent groups of numerical data through their quartiles. Box plots help you determine how the values from your data are spread out. The graphical view represents the distribution of one or more groups of numeric data.
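The quartile arithmetic behind a box plot can be reproduced with Python's standard library; the latency numbers below are invented purely for illustration:

```python
import statistics

latencies = [12, 14, 15, 15, 16, 18, 21, 22, 25, 40]

# Quartiles split the sorted data into four equal parts; a box plot draws
# the box from Q1 to Q3 with a line at the median (Q2).
q1, q2, q3 = statistics.quantiles(latencies, n=4)
iqr = q3 - q1  # interquartile range: the height of the box

# Points beyond 1.5 * IQR from the box edges are conventionally drawn
# as individual outlier dots rather than inside the whiskers.
outliers = [x for x in latencies if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print(q1, q2, q3, outliers)
```

Canvas computes and renders all of this for you; the sketch only shows what the picture encodes, i.e. how spread out the values are and which points sit unusually far from the bulk of the data.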
All the EDA capabilities including the new visualizations are supported in all AWS regions where SageMaker Canvas is available.
Amazon QuickSight Q is now available in four new regions
QuickSight Q is now available in four new regions (in addition to six existing regions): Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central). AWS customers can sign up for QuickSight in these four new regions in addition to existing regions; details can also be found at AWS QuickSight Q regions.
AWS Transit Gateway now supports IGMP multicast in two additional AWS Regions
Starting today, AWS Transit Gateway supports Internet Group Management Protocol (IGMP) multicast in the Asia Pacific (Osaka) and Asia Pacific (Jakarta) AWS Regions.
IP multicast on AWS Transit Gateway helps customers build multicast applications in the cloud and distribute data across thousands of Amazon Virtual Private Cloud networks. IGMP support on Transit Gateway makes it easier for customers to scale up multicast workloads, while also simplifying the management of multicast group membership and network deployment.
Customers no longer need to configure static multicast groups, sources, and receivers when building a multicast network in AWS. AWS Transit Gateway dynamically adds and deletes multicast members based on IGMP protocol interactions. Many on-premises multicast applications require IGMP to dynamically add and remove multicast group members. With native IGMP support on AWS Transit Gateway, customers can easily lift-and-shift such workloads to AWS Cloud without requiring changes to applications or network configuration.
In addition, this feature also provides real-time visibility into the multicast network and enables customers to accurately keep track of group membership changes over time.
Now track user identity for API calls from Amazon SageMaker Studio in AWS CloudTrail
Amazon SageMaker Studio is a fully integrated development environment (IDE) for machine learning that enables data scientists and developers to perform every step of the machine learning workflow, from preparing data to building, training, tuning, and deploying models.
SageMaker Studio is integrated with AWS CloudTrail to enable administrators to monitor and audit user activity and API calls from Studio notebooks, SageMaker Data Wrangler and SageMaker Canvas. Starting today, you can configure SageMaker Studio to also record the user identity (specifically, the user profile name) in CloudTrail events, enabling administrators to attribute those events to specific users and improving their organization's security and governance posture.
Administrators can audit user activity and API calls from Studio notebooks, SageMaker Data Wrangler and SageMaker Canvas through events logged in AWS CloudTrail. However, until today, those log records only identified events by the IAM role used by the user.
This level of logging is sufficient to associate a CloudTrail event with a user when each user is assigned a unique IAM role. For data science teams where several users require similar data and resource access permissions, administrators frequently configure a single IAM role to be shared among those users.
In such cases, administrators didn't have the ability to attribute CloudTrail events to a specific user, creating a gap in their auditing of user activity. Starting today, you can configure SageMaker Studio to automatically record the Studio user profile name as the Source Identity in CloudTrail events generated as a result of user activity and API calls from Studio notebooks, Data Wrangler and SageMaker Canvas. With this feature, administrators can now attribute Studio user actions to specific users even when users share the same IAM role.
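For example, an audit script could prefer the source identity over the shared role ARN when attributing an event. The record below is a hand-written sample, not real CloudTrail output, and the exact field layout should be checked against the CloudTrail documentation before relying on it:

```python
def attribute_event(event):
    """Return (user, action) for a CloudTrail event, preferring the
    sourceIdentity (the Studio user profile name) over the assumed-role
    ARN. Falls back to the ARN when no source identity was recorded."""
    identity = event.get("userIdentity", {})
    session = identity.get("sessionContext", {})
    user = session.get("sourceIdentity") or identity.get("arn", "unknown")
    return user, event.get("eventName", "unknown")

# Hand-written sample record for illustration only.
sample = {
    "eventName": "CreateProcessingJob",
    "userIdentity": {
        "type": "AssumedRole",
        "arn": "arn:aws:sts::111122223333:assumed-role/SageMakerSharedRole/SageMaker",
        "sessionContext": {"sourceIdentity": "data-scientist-alice"},
    },
}
print(attribute_event(sample))  # ('data-scientist-alice', 'CreateProcessingJob')
```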
AWS Fargate announces migration of service quotas to vCPU-based quotas
AWS Fargate (Fargate), the serverless compute engine for Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS), recently announced the migration of service quotas from the current Amazon ECS task and Amazon EKS pod count-based quotas to vCPU-based quotas. The migration to vCPU quotas will not have any impact on your running tasks and pods.
Starting October 3, 2022, all accounts will be automatically migrated to the new vCPU-based quotas in a phased manner. To facilitate this transition, starting today, you can opt in to the new vCPU-based quotas; by doing so, your account will be governed by vCPU-based quotas rather than the current task and pod count-based quotas. By opting in to vCPU quotas early, you give yourself valuable time to get familiar with the new quotas and make modifications to your quota management tools.
During this transition period, if you run into issues with vCPU-based quotas, you can temporarily opt out of vCPU quotas until October 31, 2022 and remediate your systems. Starting November 1, 2022, Fargate will automatically transition any remaining accounts to vCPU quotas regardless of account settings, and the current task and pod count quotas will no longer be supported starting November 16, 2022.
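Translating an existing task-count footprint into vCPU demand is simple multiplication; a sketch with invented fleet sizes and an assumed quota value:

```python
def vcpus_needed(workloads):
    """Sum vCPU demand across workloads given (task_count, vcpu_per_task) pairs."""
    return sum(count * vcpu for count, vcpu in workloads)

# Illustrative fleet: 300 small tasks at 0.5 vCPU each, 50 larger tasks at 4 vCPU.
fleet = [(300, 0.5), (50, 4)]
demand = vcpus_needed(fleet)
print(demand)  # 350.0

quota = 500  # hypothetical vCPU quota for the account, not a real default
print(demand <= quota)  # True: the fleet fits under this quota
```

Running this kind of calculation against your own task sizes is a quick way to check whether your current fleet fits under the vCPU quota before the automatic migration.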
SageMaker built-in algorithms now provide a TensorFlow Image Classification algorithm
Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. These algorithms and models can be used for both supervised and unsupervised learning. They can process various types of input data including tabular, image, and text.
Starting this week, Amazon SageMaker provides a new built-in algorithm for image classification: Image Classification - TensorFlow. It is a supervised learning algorithm that supports transfer learning for many pre-trained models available in TensorFlow Hub. It takes an image as input and outputs a probability for each of the class labels. These pre-trained models can be fine-tuned using transfer learning even when a large number of training images is not available. The algorithm is available through the SageMaker built-in algorithms as well as through the SageMaker JumpStart UI inside SageMaker Studio.
Image Classification - TensorFlow in Amazon SageMaker provides transfer learning on many pre-trained models available in TensorFlow Hub. In machine learning, the ability to utilize the training results from one model to produce another model is called transfer learning. A classification layer is attached to the pre-trained TensorFlow Hub model, sized according to the number of class labels in your training data.
The classification layer consists of a dropout layer and a dense (fully connected) layer with an L2 (2-norm) regularizer, initialized with random weights. The model training exposes hyperparameters for the dropout rate of the dropout layer and the L2 regularization factor for the dense layer. Either the whole network, including the pre-trained model, or only the top classification layer can then be fine-tuned on the new training data. The algorithm provides a wide range of training hyperparameters for fine-tuning on your custom dataset.
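The effect of the L2 regularizer on the dense layer can be seen in a tiny numeric sketch: the penalty added to the task loss is the regularization factor times the sum of squared weights. All numbers here are invented, and real training happens inside TensorFlow rather than plain Python:

```python
def l2_regularized_loss(data_loss, weights, l2_factor):
    """Total loss = task loss + l2_factor * sum of squared dense-layer weights.

    Larger l2_factor pushes the optimizer toward smaller weights, which
    reduces overfitting when fine-tuning on a small dataset."""
    penalty = l2_factor * sum(w * w for w in weights)
    return data_loss + penalty

weights = [0.4, -0.2, 0.1]          # invented dense-layer weights
# sum of squares = 0.16 + 0.04 + 0.01 = 0.21
total = l2_regularized_loss(data_loss=1.5, weights=weights, l2_factor=0.01)
print(round(total, 4))  # 1.5021
```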
AWS Security Hub launches 2 new security best practice controls
AWS Security Hub has released 2 new controls for its Foundational Security Best Practices standard (FSBP) to enhance your Cloud Security Posture Management (CSPM). These controls conduct fully automatic checks against security best practices for AWS Auto Scaling and Elastic Load Balancing (ELB). If you have Security Hub set to automatically enable new controls and are already using AWS Foundational Security Best Practices, these controls are enabled for you automatically. Security Hub now supports 225 security controls to automatically check your security posture in AWS.
The 2 FSBP controls launched are:
Available globally, AWS Security Hub gives you a centralized and comprehensive view of your security posture across all of your AWS accounts and across all Regions. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, and AWS IAM Access Analyzer, as well as from over 65 AWS Partner Network (APN) solutions.
You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard. You can also take action on these findings by investigating findings in Amazon Detective and by using Amazon CloudWatch Event rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or custom remediation playbooks.
Amazon Virtual Private Cloud (VPC) Flow Logs can now be delivered to Amazon Kinesis Data Firehose
You can now deliver Amazon Virtual Private Cloud (VPC) Flow Logs directly to Amazon Kinesis Data Firehose, allowing you to stream your flow logs in real time to destinations supported by Kinesis Data Firehose or to downstream logging solutions via custom HTTP endpoints.
VPC Flow Logs enable you to capture and log information about your VPC network traffic. Until today, you could deliver VPC Flow Logs to Amazon CloudWatch Logs and Amazon Simple Storage Service (S3). With this release, you can now stream your flow logs in real time to supported Amazon Kinesis Data Firehose destinations. In addition, you can use AWS Lambda functions to enrich or transform the VPC Flow Logs while Kinesis Data Firehose delivers them to downstream logging solutions.
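A Firehose transformation Lambda receives a batch of base64-encoded records and must return each one with its recordId, a result status, and re-encoded data. The sketch below follows that contract to add one enrichment field to each flow log record; the enrichment field and the sample event are invented for illustration:

```python
import base64
import json

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda sketch: tag each VPC
    Flow Log record with an extra field before delivery."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["environment"] = "production"  # example enrichment, invented
        output.append({
            "recordId": record["recordId"],       # must echo the input id
            "result": "Ok",                       # or 'Dropped' / 'ProcessingFailed'
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}

# Synthetic invocation (not a real Firehose event).
flow_log = {"srcaddr": "10.0.0.5", "dstaddr": "10.0.0.9", "action": "ACCEPT"}
event = {"records": [{"recordId": "1",
                      "data": base64.b64encode(json.dumps(flow_log).encode()).decode()}]}
result = handler(event, None)
print(json.loads(base64.b64decode(result["records"][0]["data"]))["environment"])
```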
VPC Flow Log delivery to Amazon Kinesis Data Firehose is available through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the AWS Software Development Kit (AWS SDK). To get started, simply create a new flow log subscription and select Amazon Kinesis Data Firehose as a destination. To learn more about Amazon VPC Flow Logs delivery to Kinesis Data Firehose, please refer to the Amazon Kinesis Data Firehose documentation and the VPC Flow Logs documentation. See the blog to learn about AWS Partner Network solutions that support ingestion of VPC Flow Logs to Kinesis Data Firehose. Refer to CloudWatch pricing for the cost of delivering VPC Flow Logs to Kinesis Data Firehose.
VPC Flow Logs delivery to Amazon Kinesis Firehose is generally available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Northern California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), Middle East (Bahrain), AWS GovCloud (US-East) and AWS GovCloud (US-West).
Amazon SNS launches message data protection in public preview
Amazon Simple Notification Service (Amazon SNS) is launching a public preview of message data protection, a new set of capabilities for Amazon SNS Standard Topics that leverage pattern matching, machine learning models, and data protection policies to help security and engineering teams facilitate real-time data protection in their applications that use Amazon SNS to exchange high volumes of data.
Amazon SNS is a fully managed, reliable, and highly available messaging service that enables you to connect decoupled microservices or send messages directly to users via SMS, mobile push, and email. With message data protection for Amazon SNS, you can now discover and protect certain types of personally identifiable information (PII) and protected health information (PHI) data that is in motion between your applications.
This can help support your compliance objectives, for example, with regulations such as the Health Insurance Portability and Accountability Act (HIPAA), General Data Privacy Regulation (GDPR), Payment Card Industry Data Security Standard (PCI-DSS), and Federal Risk and Authorization Management Program (FedRAMP). Message data protection enables topic owners to define and apply data protection policies that scan messages in real-time for sensitive data to either provide detailed audit reports of findings or block message delivery altogether.
Amazon EC2 C6id, M6id and R6id instances are now available in an additional region
Starting this week, Amazon EC2 C6id, M6id and R6id instances are available in the AWS Asia Pacific (Tokyo) Region. C6id, M6id and R6id instances are powered by 3rd generation Intel Xeon Scalable (Ice Lake) processors with an all-core turbo frequency of 3.5 GHz, up to 7.6 TB of local NVMe-based SSD block-level storage, and up to 15% better price performance than their 5th generation counterpart instances.
AWS Transit Gateway Connect is now available in two additional AWS Regions
Starting this week, AWS Transit Gateway Connect is available in Asia Pacific (Osaka) and Asia Pacific (Jakarta) Regions.
AWS Transit Gateway Connect is a feature of AWS Transit Gateway that simplifies branch connectivity through native integration of Software-Defined Wide Area Network (SD-WAN) appliances into AWS. Customers can now seamlessly extend their SD-WAN edge into AWS using standard protocols such as Generic Routing Encapsulation (GRE) and Border Gateway Protocol (BGP) through partner orchestrators with just a few clicks.
Transit Gateway Connect provides customers with added benefits such as improved bandwidth and supports dynamic routing with increased route limits, thus removing the need to set up multiple IPsec VPNs between the SD-WAN appliances and Transit Gateway. This simplifies the overall network design and reduces the associated operational cost. Furthermore, Transit Gateway Connect is fully integrated with AWS Transit Gateway Network Manager to provide customers with advanced visibility through global network topology, attachment-level performance metrics, and telemetry data.
Amazon Lookout for Metrics increases limits on number of measures and dimensions
AWS are excited to announce that you can now add up to 10 measures and 10 dimensions when setting up your detector for Amazon Lookout for Metrics. With this launch you can now include more measures and dimensions in a single detector, which allows you to get insights on root causes and causality across all the measures and dimensions that you have selected.
Amazon Lookout for Metrics uses machine learning (ML) to automatically monitor the metrics that are most important to businesses with greater speed and accuracy than traditional methods used for anomaly detection. The service also makes it easier to diagnose the root cause of anomalies like unexpected dips in revenue, high rates of abandoned shopping carts, spikes in payment transaction failures, increases in new user sign-ups, and many more.
Measures are the numerical values that you want to detect anomalies on, and dimensions are the categorical information that is associated with the measures. For example, a measure can be revenue numbers, churn rate, or error count rates in your application; and a dimension can be different store locations, types of products, or different resources you are monitoring your error count rates on.
AWS Snowball is now available in the AWS Asia Pacific (Jakarta) Region
AWS Snowball devices are now available in the AWS Asia Pacific (Jakarta) Region.
AWS Snowball, a member of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two configurations - Snowball Edge Storage Optimized and Snowball Edge Compute Optimized.
The Snowball Edge Storage Optimized device provides 80 TB of Amazon S3 compatible object storage and is designed for local storage and large-scale data transfer. The Snowball Edge Compute Optimized device provides 52 vCPUs, 7.68 TB of NVMe SSD storage, 42 TB of HDD storage, and 208 GB of RAM for use cases like advanced machine learning and full motion video analysis in disconnected environments.
You can use Snowball Edge Compute Optimized devices for data collection, machine learning and processing, and storage in environments with intermittent connectivity, like manufacturing, industrial, and transportation environments, or in extremely remote locations, like during military or maritime operations, before shipping the devices back to AWS. All AWS Snowball devices can be rack mounted and clustered together to build larger temporary installations.
Amazon Managed Blockchain (AMB) is now available in AWS GovCloud (US-West) Region
Amazon Managed Blockchain (AMB) Hyperledger Fabric is now generally available in the AWS GovCloud (US-West) Region, allowing customers in both the public and commercial sectors to create and manage production-grade blockchain infrastructure with just a few clicks.
With the launch of AMB in AWS GovCloud (US-West), public sector customers can now build solutions using AMB capturing the cost savings and operational efficiency benefits of a fully-managed blockchain network. Both federal government agencies and contractors can create or join Hyperledger Fabric networks to build blockchain applications that can be made compatible with their requirements for production deployment. For example, customers can implement a Hyperledger Fabric blockchain network to track the provenance of components from their suppliers for compliance purposes.
AMB is a fully managed service that reduces the overhead required to create both private Hyperledger Fabric networks and Ethereum full nodes on the public mainnet and testnets (Goerli, Ropsten and Rinkeby). With the Hyperledger Fabric private blockchain offering, once your network is up and running, AMB makes it simple to manage and maintain your blockchain network, including critical tasks like managing certificates and facilitating network governance to add or remove member organizations from your blockchain network.
With the launch of AMB in AWS GovCloud (US), public sector customers including government agencies and private enterprises that work for government agencies, will be able to leverage the benefits of blockchain technology and deploy production workloads to achieve various use-cases such as track-and-trace of supply chains, trade finance, as well as clearing and settlement of financial assets.
Amazon MemoryDB for Redis is now available in the AWS Europe (Paris and Milan) Regions
Starting this week, Amazon MemoryDB for Redis is generally available in two additional AWS Regions: Europe (Paris) and Europe (Milan).
Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service that delivers ultra-fast performance. It is purpose built for modern applications with microservices architectures. Amazon MemoryDB is compatible with Redis, a popular open source data store, where customers can quickly build applications using the same flexible and friendly Redis data structures, APIs, and commands that they already use today.
With Amazon MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. Amazon MemoryDB also stores data durably across multiple Availability Zones (AZs) using a Multi-AZ transactional log to enable fast failover, database recovery, and node restarts.
Delivering both in-memory performance and Multi-AZ durability, Amazon MemoryDB can be used as a high-performance primary database for your microservices applications eliminating the need to separately manage both a cache and durable database. Amazon MemoryDB additionally provides native support for JavaScript Object Notation (JSON) documents in addition to the data structures included in open source Redis, at no additional cost. To learn more about MemoryDB, visit the MemoryDB features page and documentation.
Amazon Connect Voice ID now detects fraud risk from voice spoofing during customer calls
Amazon Connect Voice ID now detects fraud risk from voice-based deception techniques such as voice manipulation during customer calls, helping you make your voice interactions more secure. For example, Voice ID can detect, in real time, if an imposter is using a speech synthesizer to spoof a caller's voice and deceive the agent or Interactive Voice Response (IVR) system.
When such fraud is detected, Voice ID flags these calls as high risk in the Amazon Connect agent application, enabling you to take additional security measures or precautions. This feature works out-of-the-box once Voice ID fraud detection is enabled for a contact, and no additional configuration is required.
Announcing new AWS Console Home widgets for recent AWS blog posts and launch announcements
AWS are excited to announce two new widgets (Latest announcements and Recent AWS blog posts) are available on AWS Console Home. Using these widgets, you can more easily learn about new AWS capabilities and get the latest news about AWS launches, events, and more.
The AWS blog posts and launch announcements shown are related to the services used in your applications.
You can access the Latest announcements and Recent AWS blog posts widgets on Console Home by signing in to the AWS Management Console. The new widgets are available in all public AWS Regions.
5 more AWS Controllers for Kubernetes (ACK) service controllers are now generally available
An additional 5 AWS Controllers for Kubernetes (ACK) service controllers have graduated to generally available status. Customers can now provision and manage AWS resources using ACK controllers for Amazon Relational Database Service (RDS), AWS Lambda, AWS Step Functions, Amazon Managed Service for Prometheus (AMP), and AWS Key Management Service (KMS).
ACK lets you define and use AWS service resources directly from Kubernetes clusters. With ACK, you can take advantage of AWS managed services for your Kubernetes applications without needing to define resources outside of the cluster or run services that provide supporting capabilities like databases or message queues within the cluster. ACK now supports 12 AWS service controllers as generally available with an additional 13 in preview.
Easily process your data while using Amazon Lookout for Metrics
AWS are excited to announce that you can now filter your data by its dimensions while using Amazon Lookout for Metrics. Amazon Lookout for Metrics uses machine learning (ML) to automatically monitor the metrics that are most important to businesses with greater speed and accuracy than traditional methods used for anomaly detection.
The service also makes it easier to diagnose the root cause of anomalies like unexpected dips in revenue, high rates of abandoned shopping carts, spikes in payment transaction failures, increases in new user sign-ups, and many more.
With this launch you can now filter your data based on dimension values, giving you the ability to do data processing from within the AWS console or using the API. Additionally, filtering dimension values can reduce training time, and also decrease costs compared to processing all dimension values within your data.
Dimensions are categorical fields that create subgroups of measures based on their value. Previously, setting up your dataset with dimensions meant that we would use all of your data and segment it with each dimension value. Now, by enabling filters on dimensions, you can choose the specific labels you would want us to perform anomaly detection on. For example, if your dataset contained sales data with a column indicating which country each data point came from, you can now select a subset of countries that are more important to you without having to pre-process your data.
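Conceptually, the new dimension filter behaves like selecting a subset of rows by a categorical column before running anomaly detection; a minimal sketch with invented sales data:

```python
def filter_by_dimension(rows, dimension, allowed):
    """Keep only the rows whose dimension value is in the allowed subset,
    mimicking a Lookout for Metrics dimension filter applied before
    anomaly detection."""
    return [row for row in rows if row[dimension] in allowed]

sales = [
    {"country": "US", "revenue": 120},
    {"country": "FR", "revenue": 80},
    {"country": "JP", "revenue": 95},
]

# Detect anomalies only for the countries you care about.
subset = filter_by_dimension(sales, "country", {"US", "JP"})
print([row["country"] for row in subset])  # ['US', 'JP']
```

Because the excluded dimension values never reach the detector, less data is processed overall, which is where the reduced training time and cost come from.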
Anthos clusters on VMware 1.10.7-gke.15 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.10.7-gke.15 runs on Kubernetes 1.21.14-gke.2100.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.
Managed Anthos Service Mesh support for GKE Autopilot is now generally available in the Regular and Rapid channels. For more information, see Configure managed Anthos Service Mesh with fleet API or Configure managed Anthos Service Mesh with asmcli.
Automatically configuring managed Anthos Service Mesh using the Fleet Feature API is now generally available in the rapid, regular, and stable release channels. With this feature, Google will automatically configure your control plane, data plane, and multi-cluster endpoint visibility. This is the preferred method to provision managed Anthos Service Mesh on GKE. For more information, see Configure managed Anthos Service Mesh with fleet API.
The Google-managed data plane is now generally available (GA) as a part of managed Anthos Service Mesh. The managed data plane helps you upgrade data plane proxies automatically. For more information, see Configure managed Anthos Service Mesh.
Batch
Batch is now available in the following regions: asia-southeast1 and europe-west6. For more information, see Locations.
BigQuery
Cloud console updates: improvements related to query execution include the following:
For long-running queries, the Execution details tab is automatically displayed with the timing details of each stage of the query.
In the query editor, you can now see the query validation message when your query is completed or canceled.
Cloud Armor
Adaptive Protection suggested rules can now be deployed automatically in public preview. For more information, see Automatically deploy Adaptive Protection suggested rules.
Cloud Logging
Cloud Audit Logging no longer redacts the principal email associated with service accounts in audit logs. For more information, see Caller identities in audit logs.
Cloud Run
Cloud Run now allows up to 4,000 serving revisions and 2,000 tagged revisions per region and project.
Cloud SQL for MySQL
Cloud SQL for MySQL now supports minor version 8.0.30. To upgrade your existing instance to the new version, see Upgrade the database minor version.
Compute Engine
The incorrect quota limits displayed in the Cloud console in the us-east5 region have been resolved.
Generally available: To reduce image licensing cost, you can now bring your Red Hat Enterprise Linux subscriptions to Google Cloud. For more information, see Create a VM using a RHEL BYOS image.
Preview: Accelerator-optimized (A2 ultraGPU) machine types with their attached A100 80GB GPUs are now available in the following zone: us-central1-c.
Generally available: Archive snapshots are now available. They offer more cost-efficient data retention than regular snapshots and are best suited for long-term backup and disaster recovery. For more information, see Archive snapshots.
Dataproc
Avoid using the following image versions when creating new clusters:
2.0.31-debian10, 2.0.31-ubuntu18, 2.0.31-rocky8
2.0.32-debian10, 2.0.32-ubuntu18, 2.0.32-rocky8
2.0.33-debian10, 2.0.33-ubuntu18, 2.0.33-rocky8
1.5.57-debian10, 1.5.57-ubuntu18, 1.5.57-rocky8
1.5.58-debian10, 1.5.58-ubuntu18, 1.5.58-rocky8
1.5.59-debian10, 1.5.59-ubuntu18, 1.5.59-rocky8
If your cluster uses one of these image versions, there is a small chance that the cluster might enter an ERROR_DUE_TO_UPDATE state while being updated, either manually or as a result of autoscaling. If that happens, contact support. You can avoid future occurrences by creating new clusters with a newer image version.
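If you manage many clusters, a quick script can flag configured image versions that fall on the avoid list above. The helper below is purely illustrative (it is not part of any Google Cloud SDK); the version set is transcribed from this release note.

```python
# Illustrative helper (not part of any Google Cloud SDK): check a Dataproc
# image version string against the avoid list from this release note.

AVOID_VERSIONS = {
    f"{minor}-{os_build}"
    for minor in ("2.0.31", "2.0.32", "2.0.33", "1.5.57", "1.5.58", "1.5.59")
    for os_build in ("debian10", "ubuntu18", "rocky8")
}

def is_affected(image_version: str) -> bool:
    """Return True if the image version may hit the ERROR_DUE_TO_UPDATE issue."""
    return image_version in AVOID_VERSIONS

print(is_affected("2.0.32-rocky8"))    # True: on the avoid list
print(is_affected("2.0.35-debian10"))  # False: a newer image
```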
Google Kubernetes Engine
The Calico issue link included in the August 19, 2022 release notes was updated to Calico issue #4857.
The ip-masq-agent is not able to boot up on Arm nodes in GKE clusters with control planes running the following versions:
2022-R18: 1.23.8-gke.1900, 1.24.2-gke.1900
2022-R19: 1.24.3-gke.200
2022-R20: 1.23.9-gke.900, 1.24.3-gke.900
This regression has been fixed. Please upgrade your control plane to versions included in the 2022-R21 release.
CVE-2021-4160, CVE-2022-1664, CVE-2022-1292, and CVE-2022-29155 have been patched in the Filestore CSI driver for newly created clusters.
Secret Manager
Secret Manager now supports using annotations to define custom metadata about the secret. The metadata in an annotation can be small or large, structured or unstructured, and can include characters that are not permitted in labels. You can add annotations to secrets when you create a new secret or when you edit an existing secret. For information, see Creating and managing annotations.
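As a sketch of what a secret carrying annotations might look like on the wire, the snippet below serializes a secret resource body with an annotations map. The JSON field names follow the pattern described in the release note but should be treated as assumptions; confirm them against the Secret Manager REST reference before sending a real request.

```python
# Hedged sketch: a Secret Manager secret resource carrying custom annotations.
# Field names are assumptions based on the release note; confirm against the
# Secret Manager REST reference before using them in a real request.
import json

def secret_body(annotations):
    """Serialize a secret with automatic replication and annotation metadata."""
    return json.dumps({
        "replication": {"automatic": {}},
        "annotations": annotations,   # free-form key/value metadata
    })

# Hypothetical annotation keys, for illustration only.
print(secret_body({"team": "payments", "rotation-owner": "alice@example.com"}))
```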
Storage Transfer Service
Storage Transfer Service now offers Preview support for moving data from S3-compatible storage to Cloud Storage. This feature builds on recent Cloud Storage launches, namely support for multipart upload and List Objects V2, which make Cloud Storage suitable for running applications written for the S3 API.
With this new feature, customers can seamlessly copy data from self-managed object storage to Google Cloud Storage. For customers moving data from AWS S3 to Cloud Storage, this feature provides an option to control network routes to Google Cloud, resulting in considerably lower egress charges.
See Transfer from S3-compatible sources for details.
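Based on the announcement, a transfer job for an S3-compatible source might be described with a spec along the following lines. The field names (notably awsS3CompatibleDataSource) and all endpoint and bucket values here are assumptions for illustration, so confirm them against the Storage Transfer Service transferJobs reference before use:

```json
{
  "description": "S3-compatible storage to Cloud Storage (sketch)",
  "status": "ENABLED",
  "projectId": "my-project",
  "transferSpec": {
    "awsS3CompatibleDataSource": {
      "bucketName": "source-bucket",
      "endpoint": "s3.storage.example.internal:9000",
      "region": "us-east-1"
    },
    "gcsDataSink": {
      "bucketName": "destination-bucket"
    }
  }
}
```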
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
In Development: New options to bring your licenses to a partner’s cloud
On October 1, 2022, Microsoft will implement significant upgrades to our outsourcing and hosting terms that will benefit customers worldwide.
Generally available: Resource instance rules for access to Azure Storage
Resource instance rules enable secure connectivity to Azure Storage by only allowing access from specific resources of select Azure services.
General availability: Up to 45% performance gains in stream processing
Your Stream Analytics jobs get up to a 45% performance boost in CPU utilization by default.
General availability: Managed private endpoint support to Synapse SQL output
You can now connect your Stream Analytics jobs running on a dedicated cluster to your Synapse dedicated SQL pool using managed private endpoints.
General availability: Azure Database for PostgreSQL output in Stream Analytics
The native output connector for Azure Database for PostgreSQL allows you to easily build real-time applications with the database of your choice.
General availability: Authenticate to Service Bus using managed identity
Authenticate your Stream Analytics jobs to connect to Service Bus using system-assigned managed identities.
Public preview: Stream Analytics no-code editor updates in August 2022
New features are now available in the Stream Analytics no-code editor public preview, including Azure SQL Database as a reference data input and as an output sink, diagnostic logs for troubleshooting, and a newly designed ribbon.
Generally available: Azure Data Explorer Kusto Emulator
New Azure Data Explorer offering: the Kusto Emulator is a Docker Container encapsulating the Kusto Query Engine.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here: