Hava Blog and Latest News

In Cloud Computing This Week [July 15th 2022]

Written by Team Hava | July 15, 2022

This week's roundup of all the cloud news.


Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday 15th July 2022.

This week at Hava we've added more GCP resources and are prepping the GA release of self-hosted deployments. Now you can host Hava on your own infrastructure should security or governance policies prevent you from connecting to the SaaS version.

To stay in the loop, make sure you subscribe using the box on the right of this page.

Of course we'd love to keep in touch at the usual places. Come and say hello on:

Facebook | LinkedIn | Twitter

AWS Updates and Releases

Source: aws.amazon.com

Amazon VPC Flow Logs adds Transit Gateway support for improved visibility and monitoring

Starting this week, Amazon VPC Flow Logs adds support for Transit Gateway. With this feature, Transit Gateway can export detailed telemetry information such as source/destination IP addresses, ports, protocol, traffic counters, timestamps and various metadata for all of its network flows. This feature provides you with an AWS native tool to centrally export and inspect flow-level telemetry for all network traffic that is traversing between Amazon VPCs and your on-premises networks via your Transit Gateway.

Transit Gateway enables you to connect thousands of Amazon Virtual Private Clouds (VPCs) and your on-premises networks using a single gateway. Until now, VPC Flow Logs provided network telemetry from individual VPCs attached to the Transit Gateway, and you had to run complex procedures to correlate that data to gain end-to-end network insights. With Transit Gateway Flow Logs, you can gain flow-level insights from one central point in your network using a single AWS account. This capability provides flow-level visibility for traffic across AWS Regions over Transit Gateway peering connections, as well as for traffic over Direct Connect and Site-to-Site VPN connections, without having to rely on third-party routers or telemetry export tools. Transit Gateway Flow Logs can help with a wide range of use cases, including proactive network troubleshooting, network capacity planning, and compliance and security.

To get started, create a new Flow Logs subscription using a Transit Gateway or a Transit Gateway Attachment as the resource. You can select a custom log format to choose specific log fields and the desired log destination type, such as Amazon S3 or CloudWatch Logs. This feature is available through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the AWS SDKs.
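As a rough sketch, the subscription described above could be created with the AWS SDK for Python (boto3). The gateway ID, bucket ARN, and selected log fields below are placeholder assumptions, not values from the announcement:

```python
# Hypothetical identifiers for illustration only.
TGW_ID = "tgw-0123456789abcdef0"
LOG_BUCKET_ARN = "arn:aws:s3:::example-flow-log-bucket"

# Request parameters for EC2 CreateFlowLogs targeting a Transit Gateway.
# A custom log format picks out just the flow fields we care about.
flow_log_params = {
    "ResourceType": "TransitGateway",
    "ResourceIds": [TGW_ID],
    "LogDestinationType": "s3",
    "LogDestination": LOG_BUCKET_ARN,
    "LogFormat": "${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${bytes} ${start} ${end}",
}

def create_tgw_flow_log(params):
    """Call CreateFlowLogs; requires valid AWS credentials to actually run."""
    import boto3  # deferred so the sketch can be read without boto3 installed
    return boto3.client("ec2").create_flow_logs(**params)
```

The same parameters map directly onto the console workflow: pick the resource type, the destination, and an optional custom format.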

AWS Firewall Manager now supports AWS Network Firewall strict rule order with alert and drop configurations

AWS Firewall Manager now enables you to centrally deploy AWS Network Firewalls with additional strict rule order, default deny, and default drop configurations.

Starting this week, you can use AWS Firewall Manager to specify the precise order by which AWS Network Firewalls should evaluate rules, making it easier to write and process Network Firewall rules. For example, you can choose to evaluate a drop rule before a pass rule, or you can choose to evaluate an alert rule followed by a drop rule, followed by another alert rule. AWS Firewall Manager enables you to centrally configure strict rule ordering for both stateful firewall rule groups and firewall policies. When you configure a firewall to use strict ordering, rule groups are evaluated by order of priority, starting from the lowest number, and the rules in each rule group are processed in the order in which they're defined. Once strict rule order has been enabled, you can specify a default action of Drop and/or Alert without having to write additional firewall rules.

AWS Firewall Manager is a security management service that acts as a central place for you to configure and deploy firewall rules across accounts and resources in your organization. With Firewall Manager, you can deploy and monitor rules for AWS WAF, AWS Shield Advanced, VPC security groups, AWS Network Firewall, Amazon Route 53 Resolver DNS Firewall and third-party firewalls across your entire organization. Firewall Manager ensures that all firewall rules are consistently enforced, even as new accounts and resources are created.

Amazon Redshift improves cluster resize performance and flexibility of cluster restore

Amazon Redshift has improved the performance of the classic resize feature and increased the flexibility of the cluster snapshot restore operation. Classic resize is used to resize a cluster when you need to change the instance type or transition to a configuration that elastic resize cannot support. Previously, classic resize could take the cluster offline for many hours; now the cluster is typically available to process queries within minutes. Clusters can also be resized when restoring from a snapshot, although some restrictions may apply in those cases.

Now you can also restore an encrypted cluster from an unencrypted snapshot or change the encryption key. Amazon Redshift uses AWS Key Management Service (AWS KMS) as an encryption option to provide an additional layer of data protection, securing customers' data from unauthorized access to the underlying storage. You can now encrypt an unencrypted cluster with an AWS KMS key faster by specifying an AWS KMS key ID when modifying a cluster, and you can restore an AWS KMS-encrypted cluster from an unencrypted snapshot. Access these features via the API or CLI. Note that these features apply only to target clusters with RA3 nodes.
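As a hedged sketch of the restore path above, a RestoreFromClusterSnapshot call can name a KMS key so the target cluster comes up encrypted even though the snapshot is not. All identifiers below are hypothetical:

```python
# Hypothetical cluster, snapshot, and key identifiers for illustration.
restore_params = {
    "ClusterIdentifier": "analytics-restored",
    "SnapshotIdentifier": "nightly-unencrypted-snap",
    "NodeType": "ra3.4xlarge",   # encryption changes apply to RA3 target clusters
    "NumberOfNodes": 2,
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}

def restore_encrypted_cluster(params):
    """RestoreFromClusterSnapshot; requires credentials and real identifiers."""
    import boto3  # deferred so the sketch can be read without boto3 installed
    return boto3.client("redshift").restore_from_cluster_snapshot(**params)
```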

AWS announces AWS AppConfig Extensions

AWS announces AWS AppConfig Extensions, a new capability that allows customers to enhance and extend the capabilities of feature flags and dynamic runtime configuration data. AWS AppConfig, a capability of AWS Systems Manager, allows customers to configure, validate, and deploy configuration data to more safely and quickly update application behavior. The AppConfig Extensions framework exposes action points along the lifecycle of feature flags and configuration data; customers can hook new functionality onto each action point. Action points are exposed during the creation, validation, deployment, and rollback of feature flag and configuration data.

Extensions available at launch include AppConfig Notification extensions that push messages about configuration updates to EventBridge, SNS, or SQS. There is also a Jira extension, which allows customers to track feature flag changes in AppConfig as individual issues in Atlassian's Jira.

Furthermore, customers can create their own extensions. For example, a customer may want to extend AppConfig's built-in automatic rollback functionality by calling a Lambda function if a rollback occurs, or pull configuration data from another source, such as a database or a git repository, to be merged into other AppConfig data prior to deployment. Both are possible with customer-authored extensions. In addition to the current customer-authored and AWS-authored extensions, more extensions are planned for the future.
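The rollback example above might look roughly like this with the CreateExtension API; the action point name follows the extensions model described in the announcement, and the Lambda and role ARNs are placeholders:

```python
# Hypothetical ARNs; the Lambda function and IAM role must exist beforehand.
extension_params = {
    "Name": "notify-on-rollback",
    "Description": "Invoke a Lambda function whenever a deployment rolls back.",
    "Actions": {
        # Hook custom functionality onto the rollback action point.
        "ON_DEPLOYMENT_ROLLED_BACK": [
            {
                "Name": "invoke-rollback-handler",
                "Uri": "arn:aws:lambda:us-east-1:111122223333:function:rollback-handler",
                "RoleArn": "arn:aws:iam::111122223333:role/appconfig-extension-role",
            }
        ]
    },
}

def create_rollback_extension(params):
    """CreateExtension; requires credentials and the referenced resources."""
    import boto3  # deferred so the sketch can be read without boto3 installed
    return boto3.client("appconfig").create_extension(**params)
```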

AWS Fault Injection Simulator is now available in AWS GovCloud (US) Regions

AWS Fault Injection Simulator (FIS) is now available in the AWS GovCloud (US-East and US-West) Regions. The expansion of AWS FIS into the AWS GovCloud (US) Regions allows US government agencies and contractors to create and run fault injection experiments that reveal how their applications respond to stress under real world conditions.

You can use AWS FIS experiments to reproduce disruptions such as EC2 instance terminations and Spot Instance interruptions, API errors, network latency, and RDS database failovers. Because it can be difficult to predict how distributed systems will behave under such conditions, AWS FIS can help you uncover hidden bugs, monitoring blind spots, and performance bottlenecks whether you run fault injection experiments in gameday scenarios, test environments, or production deployments.
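As an illustrative sketch of the disruptions listed above, a minimal FIS experiment template could stop tagged test instances. The ARNs and tag values are assumptions, and the action/parameter names follow the FIS API's camelCase convention:

```python
# Hypothetical role ARN and tags; a minimal template that stops tagged
# EC2 instances to verify the application's recovery behavior.
template_params = {
    "clientToken": "demo-token-1",
    "description": "Stop test instances to verify auto-recovery",
    "roleArn": "arn:aws:iam::111122223333:role/fis-experiment-role",
    "stopConditions": [{"source": "none"}],  # or point at a CloudWatch alarm
    "targets": {
        "test-instances": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {"env": "test"},
            "selectionMode": "ALL",
        }
    },
    "actions": {
        "stop": {
            "actionId": "aws:ec2:stop-instances",
            "targets": {"Instances": "test-instances"},
        }
    },
}

def create_template(params):
    """CreateExperimentTemplate; requires credentials and a real role ARN."""
    import boto3  # deferred so the sketch can be read without boto3 installed
    return boto3.client("fis").create_experiment_template(**params)
```

In a real gameday you would attach a CloudWatch alarm as a stop condition rather than "none", so a runaway experiment halts automatically.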

Amazon QuickSight support for IE11 is ending by July 31, 2022

Effective July 31, 2022, Amazon QuickSight is ending support for IE11. After that date, we can no longer guarantee that the features and webpages of Amazon QuickSight will function properly on IE11. We recommend customers use one of our supported browsers: Microsoft Edge (Chromium), Google Chrome, or Mozilla Firefox.

Introducing Log Anomaly Detection and Recommendations for Amazon DevOps Guru

This week, AWS are announcing the general availability of a new feature, Log Anomaly Detection and Recommendations for Amazon DevOps Guru. With this feature, DevOps Guru ingests Amazon CloudWatch Logs for the AWS resources that make up your application, starting with AWS Lambda. Logs provide new enrichment data within an insight, enabling a more accurate understanding of the root cause behind an application issue and more precise remediation steps.

With this new feature, when an operational metric becomes anomalous, DevOps Guru analyzes relevant Amazon CloudWatch Logs and surfaces log anomalies such as exception keywords, numerical anomalies, HTTP status codes, and log format anomalies. This information helps target root causes with greater accuracy and increases the precision of the remediation steps provided. Additionally, a log sample for each anomaly, along with a deep link to Amazon CloudWatch, helps streamline the user's troubleshooting and remediation experience within the DevOps Guru console.

You can enable the Log Anomaly Detection and Recommendations feature today at no additional charge in all Regions where DevOps Guru is available.

Amazon DevOps Guru is a Machine Learning (ML) powered service that makes it easy to improve an application's operational performance and availability. When DevOps Guru detects anomalous behavior in your application's operational metrics, it creates an insight containing recommendations and lists of related metrics and events to help you diagnose and address the issue.

Amazon Redshift announces support for Row-Level Security (RLS)

Amazon Redshift now supports Row-Level Security (RLS), a new enhancement that simplifies design and implementation of fine-grained access to the rows in tables. With RLS, you can restrict access to a subset of rows within a table based on the users’ job role or permissions and level of data sensitivity with SQL commands. By combining column-level access control and RLS, Amazon Redshift customers can provide comprehensive protection by enforcing granular access to their data.

Amazon Redshift already supported enforcing row-level policies on Amazon Redshift Spectrum tables using AWS Lake Formation. With this release, as a security administrator, you can create an RLS policy for a table that governs access to database operations, such as SELECT, DELETE, and UPDATE, on the subset of rows defined by the policy. Amazon Redshift allows you to apply the same RLS policy to multiple tables with common column names, simplifying development and testing of RLS policies. After the policies are created, attach them to users or roles and turn on RLS on the table to enforce them. Row-level access control enforces RLS policies to restrict data access when queries are run. Once RLS is turned on for a table, a user who doesn't have an RLS policy applied cannot access any records of the table. Multiple RLS policies can be attached to the same role, and users can have multiple roles that have RLS policies associated with them.
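The create/attach/enable flow above might look roughly like the following; the table, column, and role names are hypothetical, and the exact SQL syntax should be checked against the Redshift documentation:

```python
# Hedged sketch: hypothetical table/role names. Run these as a security
# administrator through your usual Redshift SQL client.
rls_statements = [
    # Rows are visible only when the row's sales_rep matches the current user.
    "CREATE RLS POLICY rep_rows WITH (sales_rep VARCHAR(64)) USING (sales_rep = current_user)",
    # The same policy can be attached to any table with a sales_rep column.
    "ATTACH RLS POLICY rep_rows ON orders TO ROLE sales_role",
    # Enforcement only starts once RLS is turned on for the table.
    "ALTER TABLE orders ROW LEVEL SECURITY ON",
]

def apply_rls(cursor):
    """Execute the statements with any DB-API cursor (e.g. redshift_connector)."""
    for stmt in rls_statements:
        cursor.execute(stmt)
```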

Announcing general availability (GA) of Automated Materialized View for Amazon Redshift

Amazon Redshift announces the GA of Automated Materialized Views (AutoMV), which help lower query latency for repeatable workloads. AutoMV minimizes the effort of manually creating and managing materialized views while providing the same performance benefits as user-created materialized views. Dashboard queries that provide quick views of key performance indicators (KPIs), events, trends, and other metrics are one example of a workload that can benefit from AutoMV; reporting queries scheduled at various frequencies may also benefit.

Materialized views are a powerful tool for improving query performance, and you can set them up yourself if you have well-understood workloads. However, you might have growing and changing workloads where query patterns are not predictable.

AutoMV in Amazon Redshift continually monitors the workload using machine learning to decide whether a new materialized view will be beneficial for your workload. AutoMV balances the cost of creating and keeping materialized views up-to-date vs. expected improvements to query latency. The system also monitors previously created AutoMVs and drops them when they are no longer beneficial to your workload. This avoids expending resources to keep unused AutoMVs fresh.

Amazon Redshift Serverless is now generally available

Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available. With Amazon Redshift Serverless, all users—including data analysts, developers, and data scientists—can now use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.

With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can take advantage of preloaded sample datasets along with sample queries to kick-start analytics immediately. You can create databases, schemas, and tables, and load your own data from Amazon Simple Storage Service (Amazon S3), access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes, as well as data in your operational databases, such as Amazon Aurora and Amazon Relational Database Service (Amazon RDS). Amazon Redshift Serverless provides unified billing for queries on these data sources, helping you efficiently monitor and manage costs.

Announcing the general availability of AWS Cloud WAN

This week, AWS announced the general availability of AWS Cloud WAN, a wide area networking (WAN) service that helps you build, manage, and monitor a unified global network. The service manages traffic running between your AWS resources and your on-premises environments.

With Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks—removing the need to configure and manage different networks individually by using different technologies. You can use your network policies to specify which of your Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to by using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The Cloud WAN central dashboard generates a complete view of the network to help you monitor network health, security, and performance. Cloud WAN automatically creates a global network across AWS Regions by using Border Gateway Protocol (BGP) so that you can easily exchange routes around the world.

AWS Security Hub adds four new integration partners

AWS Security Hub has added four new integration partners to help customers with their cloud security posture monitoring. Integrations from Lacework, Juniper Networks, SentinelOne, and K9 Security bring Security Hub to 79 integrations. Lacework sends findings from their Polygraph Data Platform (PDP) to Security Hub to help manage AWS posture and compliance events. Juniper Networks' vSRX Virtual Next Generation Firewall sends security events observed by the firewall to Security Hub. SentinelOne sends security findings, identified by SentinelOne endpoints running in your AWS environment, to Security Hub. K9 Security sends findings to Security Hub related to important access changes within your AWS Identity and Access Management (IAM) configuration.

Security Hub is available globally and is designed to give you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Systems Manager Patch Manager, AWS Config, AWS Health, AWS IAM Access Analyzer, as well as from over 65 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard. In addition, you can take action on these findings by investigating findings in Amazon Detective or AWS Systems Manager OpsCenter or by sending them to AWS Audit Manager or AWS Chatbot. You can also use Amazon EventBridge rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), response and remediation workflows, and incident management tools.

AWS re:Post introduces profile pictures and inline images

re:Post has launched a new functionality for community members to add a profile picture or avatar to their account. re:Post members will now be able to better personalize their accounts by uploading a photo or image of their choice. The ability to add a profile image creates a visual identifier for the account and helps members form connections, build relationships, and foster learning in the community.

re:Post also introduces a functionality for community members to add inline images within questions and answers. Using this feature, members can now enhance the content they post with architectural diagrams and screenshots. The ability to add inline images as supporting content enables members to represent their ideas better, which helps other community members provide input and guide the author toward resolution. Members can add images and profile pictures up to 2 MiB in size, in .jpg, .jpeg, or .png format, that comply with the re:Post Community Guidelines.

AWS Firewall Manager now supports VPC security group tag distribution with common security group policies

AWS Firewall Manager now supports centrally distributing VPC security group tags when creating a common security group policy.

A primary security group is a set of security group rules defined by the AWS Firewall Manager administrator that is replicated to all in-scope accounts when a policy is deployed to those accounts. With this release, you can configure AWS Firewall Manager to distribute the tags associated with the primary security group when creating a common security group policy. Every security group created by Firewall Manager in the member accounts will include the same tags as the primary security group, enabling easier distribution of security group tags. You can optionally add further tags in member accounts on top of the base tags from the primary security group. Firewall Manager will track compliance of the distributed security group tags in the member accounts and alert you if any of the primary security group tags are deleted or modified. You can optionally enable auto-remediation, in which case Firewall Manager will re-add deleted tags to non-compliant security groups.

AWS Firewall Manager is a security management service that acts as a central place for you to configure and deploy firewall rules across accounts and resources in your organization. With Firewall Manager, you can deploy and monitor rules for AWS WAF, AWS Shield Advanced, VPC security groups, AWS Network Firewall, Amazon Route 53 Resolver DNS Firewall, and third-party firewalls across your entire organization. Firewall Manager ensures that all firewall rules are consistently enforced, even as new accounts and resources are created.

Introducing Nimble Studio seamless IAM access for studio components

Amazon Nimble Studio now supports seamless AWS Identity and Access Management (IAM) profile access for studio components, including custom studio components, directly within workstation sessions, available immediately. This allows Nimble Studio admins to set up and control additional properties of their streaming workstations via seamless IAM role permissions, ensuring artists have the right level of access for the tasks they're working on without the need to switch profiles. Custom components use PowerShell scripts on Windows instances and shell scripts on Linux instances. These configurations can then be added to Nimble Studio launch profiles for easy retrieval. With custom configurations, you can add resources to your workstations and run custom scripts on instance, system, and user initialization with greater flexibility than before.

Use Result Fragment Caching with EMR runtime for Apache Spark to boost query performance by up to 15x

The Amazon EMR runtime for Apache Spark is a performance-optimized runtime environment for Apache Spark, available and turned on by default on Amazon EMR clusters from release 5.28 onward. The Amazon EMR runtime for Spark is up to 3x faster than, and 100% API compatible with, open-source Spark.

This week, AWS are excited to announce Result Fragment Caching, a new feature available in Amazon EMR runtime for Apache Spark to help speed up queries that target a static subset of data. In our internal tests derived from the TPC-H benchmark, when using Result Fragment Caching, queries that employ rolling and incremental window functions consistently delivered query performance speedups of up to 15.6x.

Customers often have data in Amazon S3 that does not change over time, yet repeated queries over such datasets must re-compute the same results on each run. For example, consider e-commerce orders data containing information about orders that are delivered or returned within 30 days. Suppose a user wants to compare the daily trend of orders for a certain product over the entire year. Most of the orders data is unchanged, except the last 30 days, but a query aggregating order returns for every day of the last year would need to recompute the results for each day of the year every time it runs.

Result Fragment Caching addresses this by automatically caching result fragments, i.e. parts of the results from subtrees of Spark query plans, and reusing them on subsequent query executions. Because individual fragments can be reused in later queries, cached data gets maximum reuse, in contrast to result set caching, where a query must match exactly to be served from cache. Result fragments are stored in a customer-designated S3 bucket and can be reused across multiple clusters. This can speed up queries significantly, especially when the results are much smaller than the input dataset.

With Result Fragment Caching, repeated queries aggregating orders over the last year need only compute the last 30 days of changed data; for all earlier days of the year, aggregated results are served straight from the cache, significantly improving query performance. The speedup can be even more pronounced for queries on data that is mostly immutable and only appended for the current day, e.g. IoT events or clickstream data. In the rare cases where data is modified, Result Fragment Caching automatically detects the change and invalidates the cached data, ensuring accuracy.
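The caching idea in the orders example can be illustrated with a toy Python sketch (this is not the EMR implementation): per-day fragments over immutable data are computed once and cached, and only the mutable last 30 days are recomputed on each run.

```python
from functools import lru_cache

# Toy dataset: one aggregated order total per day of the year (days 1..365).
orders = {day: day * 10 for day in range(1, 366)}
MUTABLE_DAYS = set(range(336, 366))  # the last 30 days may still change

@lru_cache(maxsize=None)
def immutable_fragment(day):
    # A cached "result fragment": computed once, reused on every later run.
    return orders[day]

def yearly_total():
    # Immutable days are served from the fragment cache; mutable days are
    # recomputed each time, mirroring how only changed data is reprocessed.
    frozen = sum(immutable_fragment(d) for d in range(1, 366) if d not in MUTABLE_DAYS)
    recent = sum(orders[d] for d in MUTABLE_DAYS)
    return frozen + recent
```

On the first run every fragment is a cache miss; on every subsequent run the 335 immutable days are cache hits and only the 30 mutable days are recomputed.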

Result Fragment Caching is available in all regions where the Amazon EMR runtime for Apache Spark is available.

Amazon Athena enhances console and API support for parameterized queries

This week, Amazon Athena announced enhancements to its console and API that provide more flexibility when using parameterized queries. You can now run parameterized queries directly from the Athena console, and an enhanced API no longer requires you to prepare SQL statements in advance. With this launch, it is easier than before to take advantage of the reusability, simplification, and security benefits of parameterized queries.

Parameterized queries are often used when a query has criteria that change from one execution to the next. Users can now parameterize and run such queries directly in the Athena console. When running queries with parameters, you will see a user interface that lets you enter the parameter values directly, eliminating the complexity of modifying the SQL itself. Developers using parameterized queries in their applications can take advantage of an enhanced query execution API that allows you to provide the execution parameters and SQL in a single call.
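The single-call API usage described above might look like the following sketch; the database, workgroup, table, and parameter values are hypothetical:

```python
# Hypothetical database/table/workgroup names for illustration.
query_params = {
    "QueryString": "SELECT * FROM sales WHERE region = ? AND amount > ?",
    "QueryExecutionContext": {"Database": "analytics"},
    "WorkGroup": "primary",
    # Positional values matched to the ?s; string literals keep their quotes.
    "ExecutionParameters": ["'EMEA'", "100"],
}

def run_parameterized_query(params):
    """StartQueryExecution with inline parameters; requires credentials."""
    import boto3  # deferred so the sketch can be read without boto3 installed
    return boto3.client("athena").start_query_execution(**params)
```

Compared with prepared statements, no separate PREPARE step is needed: the SQL and its parameters travel together in one request.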

Amazon CloudFront supports header names of up to 1024 characters in CloudFront policies

Amazon CloudFront now supports a maximum of 1024 characters across all header names in cache, origin request, and origin response policies. With 1024 characters, customers now have 512 extra characters to add header metadata to their policies.

A CloudFront policy allows customers to apply the same specific combination of settings across many distribution behaviors. Previously, customers could add a maximum of 512 characters as the CloudFront or custom header names in a policy. With the increased character limit, customers can now, for example, add additional headers to a cache policy to configure a more granular cache key, or customers can leverage additional headers as inputs to implement user authentication. All the headers are available to use in Lambda@Edge, CloudFront Functions, or application logic at the Origin.


Google Cloud Releases and Updates
Source: cloud.google.com

Agent Assist

Agent Assist now offers UI Modules as a public Preview feature. UI Modules are an out-of-the-box option for integrating Agent Assist features into your agent UI system. For more information, see the UI Modules documentation.


Anthos component releases for June 2022

The following Anthos components published releases in June 2022:

  • Anthos clusters on VMware
  • Anthos clusters on bare metal
  • Anthos clusters on AWS
  • Anthos clusters on Azure
  • Anthos Config Management
  • Anthos Service Mesh
  • Migrate to Containers
  • Cloud Logging
  • Cloud Monitoring

Anthos Clusters on AWS

You can now launch Kubernetes 1.23 clusters.

Kubernetes 1.23.7-gke.1300 includes the following changes:

  • Disable profiling endpoint (/debug/pprof) by default in kube-scheduler and kube-controller-manager.
  • Update kube-apiserver and kubelet to only use Strong Cryptographic Ciphers.
  • Add an instance metadata server (IMDS) emulator.

In a future release of 1.23, the VolumeSnapshot v1beta1 APIs will no longer be served. Please update to the VolumeSnapshot v1 API as soon as possible.

In Kubernetes 1.23 and higher, cluster Cloud Audit Logs is now available and is enabled by default.

CIS benchmarks are now available for Kubernetes 1.23 clusters.

This release fixes the following vulnerabilities:

Restrictions on IP ranges that can be used for a cluster's Pods and Services are now relaxed. Pod and Service IP ranges can now overlap with VPC's IP ranges, provided they do not intersect the control plane or node pool subnets.

Anthos Clusters on Azure

You can now launch clusters with the following Kubernetes versions:

  • 1.23.7-gke.1300
  • 1.22.10-gke.1500
  • 1.21.11-gke.1900

You can now launch Kubernetes 1.23 clusters.

Kubernetes 1.23.7-gke.1300 includes the following changes:

  • Disable profiling endpoint (/debug/pprof) by default in kube-scheduler and kube-controller-manager.
  • Update kube-apiserver and kubelet to only use Strong Cryptographic Ciphers.

In a future release of 1.23, the VolumeSnapshot v1beta1 APIs will no longer be served. Please update to the VolumeSnapshot v1 API as soon as possible.

In Kubernetes 1.23 and higher, cluster Cloud Audit Logs is now available and is enabled by default.

CIS benchmarks are now available for Kubernetes 1.23 clusters.

Apigee Integration

On July 9, 2022, we released an updated version of the Apigee Integration software.

Data Mapping task enhancements

The Data Mapping task in Apigee Integrations now provides the following enhancements:

  • Nested function support. You can pass one or more transformation functions as input parameters to another function.
  • New transformation functions. You can use the following new transform functions for array-type variables:

    • FILTER - Filters the array elements that satisfy a given condition.
    • FOR_EACH - Applies one or more transformation functions for each element in an array.
  • Subfield mapping support for JSON variables. You can view and search all the subfields of a JSON variable in the data mapping editor variable list.

For more information, see the Data Mapping task.

App Engine standard environment

  • Updated the Java SDK to version 1.9.98.

  • Updated Jetty web server to version jetty-9.4.46.v20220331.


Batch

Batch is now available in Preview! For more information about using Batch, see the documentation.


BigQuery

Previously, the Storage Write API had a maximum concurrent connection limit of 100 connections for non-multi-regions such as Montreal (northamerica-northeast1). This limit has now been increased to 1,000 connections across all non-multi-regions. For more information, see Storage Write API quotas and limits.

You can now select a job type when assigning a folder, organization, or project to a reservation in the Google Cloud console. This feature is now generally available (GA).

The google.cloud.bigquery.reservation.v1beta1.api package is deprecated and will be removed on September 27, 2022. After that date, requests to that package will fail. Data created by using google.cloud.bigquery.reservation.v1beta1.api are accessible by using the google.cloud.bigquery.reservation.v1.api package.


Cloud Bigtable

Cloud Bigtable is available in the us-south1 (Dallas) and europe-southwest1 (Madrid) regions. For more information, see Bigtable locations.

Cloud Build

The gcr.io/cloud-builders/docker builder has been upgraded to Docker client version 20.10.14. For instructions on using this builder with the Docker client versions, see Interacting with Docker Hub images.

Cloud Composer

The Cloud Composer 1.19.3 and 2.0.20 release started on July 11, 2022. Get ready for upcoming changes and features as the new release rolls out to all regions. The rollout is currently in progress, so the listed changes and features might not be available in some regions yet.

DAG UI is now generally available (GA).

(Cloud Composer 2) Improved the reliability of web server proxy connectivity. This change reduces the chance of 504 timeout errors when connecting to an environment's web server.

Set memory and CPU limits for the Composer Agent pod. This change increases this pod's priority and improves the reliability of operations that could fail because of resource starvation.

Environments no longer produce error log messages about the connection timeout when initializing the Airflow database during the environment creation. These messages were not associated with any error.

Source code for the apache-airflow-providers-google package versions 2022.6.22+composer and 2022.5.18+composer is available on GitHub:

Cloud Composer 1.19.3 and 2.0.20 images are available:

  • composer-1.19.3-airflow-1.10.15 (default)
  • composer-1.19.3-airflow-2.1.4
  • composer-1.19.3-airflow-2.2.5
  • composer-2.0.20-airflow-2.1.4
  • composer-2.0.20-airflow-2.2.5

Cloud Composer versions 1.16.8, 1.16.9, 1.17.0.preview.4, and 1.17.0.preview.5 have reached their end of full support period.

Cloud Composer 1.19.2 and 2.0.19 are versions with an extended upgrade timeline.

Cloud Logging

Log-based alerting is now generally available (GA). Log-based alerts match on the content of your logs. When triggered, a log-based alert notifies you that a match has appeared in your logs and opens an incident in Cloud Monitoring. The minimum autoclose duration for incidents is now 30 minutes. For more information, see Monitor your logs and Use log-based alerts.
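As a rough sketch of how a log-based alert can be created from the command line, the following uses `gcloud alpha monitoring policies create` with a policy file. The display names and log filter are illustrative, and the field names are taken from the Cloud Monitoring AlertPolicy schema; note the `autoClose` value of 1800s, matching the new 30-minute minimum:

```shell
# Hypothetical log-based alert policy (names and filter are illustrative).
cat > policy.json <<'EOF'
{
  "displayName": "Example log-based alert",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "VM error logs",
      "conditionMatchedLog": {
        "filter": "resource.type=\"gce_instance\" AND severity>=ERROR"
      }
    }
  ],
  "alertStrategy": {
    "notificationRateLimit": { "period": "300s" },
    "autoClose": "1800s"
  }
}
EOF

# Create the policy from the file above.
gcloud alpha monitoring policies create --policy-from-file=policy.json
```

When the filter matches an incoming log entry, an incident is opened in Cloud Monitoring and auto-closes after the configured duration if no further matches arrive.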

Cloud Monitoring

Log-based alerting is now generally available (GA). Log-based alerts match on the content of your logs. When triggered, a log-based alert notifies you that a match has appeared in your logs and opens an incident in Cloud Monitoring. The minimum autoclose duration for incidents is now 30 minutes. For more information, see Monitor your logs and Use log-based alerts.

Cloud Spanner

You can now view aggregated Cloud Spanner statistics related to transactions, reads, queries, and lock contentions in Cloud Monitoring. This feature is now generally available (GA).

Cloud SQL for MySQL

For enhanced security with built-in authentication, Cloud SQL now lets you set password policies at the instance and user levels.
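An instance-level password policy might be enabled along the following lines. This is a sketch only; the instance name is illustrative, and you should verify the exact flags against your gcloud version:

```shell
# Hypothetical example: enable an instance-level password policy
# on a Cloud SQL for MySQL instance (instance name is illustrative).
gcloud sql instances patch my-mysql-instance \
    --enable-password-policy \
    --password-policy-min-length=12 \
    --password-policy-complexity=COMPLEXITY_DEFAULT \
    --password-policy-reuse-interval=5 \
    --password-policy-disallow-username-substring
```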

You can enable high availability for read replicas. See Disaster recovery for additional information about the use of high-availability replicas in a disaster recovery configuration.

You can create external server replicas with HA enabled.
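Creating a read replica with high availability enabled might look like the following sketch, where the instance names and region are illustrative:

```shell
# Hypothetical example: create a regional (HA) read replica
# of an existing primary instance.
gcloud sql instances create my-ha-replica \
    --master-instance-name=my-primary \
    --region=us-central1 \
    --availability-type=REGIONAL
```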

Cloud SQL for PostgreSQL

You can enable high availability for read replicas. See Disaster recovery for additional information about the use of high-availability replicas in a disaster recovery configuration.

You can create external server replicas with HA enabled.

Cloud SQL for SQL Server

The database major version upgrade feature of Cloud SQL for SQL Server is generally available. For more information, see Upgrade the database major version in-place.
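An in-place major version upgrade is performed by patching the instance's database version; as a sketch (instance name and target version are illustrative):

```shell
# Hypothetical example: upgrade a Cloud SQL for SQL Server instance
# in place to a newer major version.
gcloud sql instances patch my-sqlserver-instance \
    --database-version=SQLSERVER_2019_STANDARD
```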

Compute Engine

Generally available: NVIDIA® T4 GPUs are now available in the following additional regions and zones:

  • Ashburn, Virginia, North America: us-east4-c

For more information about using GPUs on Compute Engine, see GPU platforms.
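Attaching a T4 in the newly added zone might look like the following sketch, where the VM name, machine type, and image are illustrative:

```shell
# Hypothetical example: create a VM with one NVIDIA T4 GPU in us-east4-c.
# GPU VMs must use a TERMINATE maintenance policy (no live migration).
gcloud compute instances create t4-vm \
    --zone=us-east4-c \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=debian-11 \
    --image-project=debian-cloud
```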

Generally Available: A version of Rocky Linux is now available that is optimized for running on Compute Engine.

This version of Rocky Linux is configured to use the latest version of the Google virtual network interface (gVNIC) which is specifically designed to support workloads that require higher network bandwidths. For more information, see the Rocky Linux section of the Operating systems details documentation.

Preview: Tau T2A, Google Cloud's first general purpose VM family to run on Arm architecture, is now available. Tau T2A VMs are available in three regions.

For more information, see Arm VMs on Compute Engine.
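Creating a Tau T2A VM might look like this sketch; the VM name and zone are illustrative (check T2A regional availability first), and the image family must be an Arm build:

```shell
# Hypothetical example: create an Arm-based Tau T2A VM
# using an arm64 image family.
gcloud compute instances create arm-vm \
    --zone=us-central1-a \
    --machine-type=t2a-standard-4 \
    --image-family=debian-11-arm64 \
    --image-project=debian-cloud
```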


Dataflow

You can use the Apache Beam SDK for Go to create batch and streaming Dataflow pipelines. This feature is now generally available (GA).


Eventarc

Eventarc support for Customer-Managed Encryption Keys (CMEK) using the Cloud Console is available in Preview.


Google Kubernetes Engine

  • You can now run Arm-based workloads in Preview in Standard clusters with GKE version 1.24 and later, and in Autopilot clusters with GKE version 1.24.1-gke.1400 and later.

    You can now select compute classes to run GKE Autopilot workloads that have specialized hardware requirements, such as Arm architecture. The Scale-Out compute class is available in Preview in Autopilot clusters running GKE version 1.24.1-gke.1400 and later.
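A workload targets a compute class (and Arm nodes) through node selectors in its Pod spec. The following is a sketch; the Pod name and container image are illustrative:

```yaml
# Hypothetical Pod spec selecting the Scale-Out compute class and
# Arm architecture in a GKE Autopilot cluster.
apiVersion: v1
kind: Pod
metadata:
  name: arm-example
spec:
  nodeSelector:
    cloud.google.com/compute-class: Scale-Out
    kubernetes.io/arch: arm64
  containers:
  - name: app
    image: gcr.io/my-project/my-arm-image
```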

Google Cloud VMware Engine

VMware Engine nodes are now available in the following additional region:

  • Zurich, Switzerland, Europe (europe-west6)

Google Cloud Deploy

You can now permanently abandon a release using Google Cloud Deploy.

You can now suspend a delivery pipeline using Google Cloud Deploy.

Google Distributed Cloud Edge

This is a minor release of Google Distributed Cloud Edge (version 1.1.0).

The following changes have been introduced in this release of Distributed Cloud Edge:

  • The Kubernetes control plane has been updated to version 1.22.

The following issues have been resolved in this release of Distributed Cloud Edge:

  • The Kubernetes control plane no longer becomes intermittently unavailable during Distributed Cloud Edge software updates.
  • VPN connectivity between non-Anthos gateway nodes and Google Cloud Platform now works reliably.

This release of Distributed Cloud Edge contains the following known issues:

  • Garbage collection intermittently fails to clean up terminated Pods.

Transfer Appliance

Transfer Appliance is now available in an additional size. The TA7 appliance offers up to 7 TB of storage in a smaller form factor than our other appliances. It supports both online and offline transfer modes.

Learn more about the TA7 on the Specifications page, or order an appliance from the Cloud console.

Vertex AI

The Pipeline Templates feature is available in Preview. For documentation, refer to Create, upload, and use a pipeline template.

The features supported by pipeline templates include the following:

  • Create a template registry using Artifact Registry (AR).
  • Compile and publish a pipeline template.
  • Create a pipeline run using the template and filter the runs.
  • Manage (create, update, or delete) the pipeline template resources.

You can now use a pre-built container to perform custom training with TensorFlow 2.9.

Virtual Private Cloud

Private Service Connect supports publishing a service that is hosted on the following load balancers:


Workflows

Added support for deploying a workflow using a cross-project service account through the Google Cloud CLI.
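A deployment that attaches a service account from another project might look like the following sketch, where the workflow, source file, and service account names are illustrative:

```shell
# Hypothetical example: deploy a workflow that runs as a
# service account belonging to a different project.
gcloud workflows deploy my-workflow \
    --source=workflow.yaml \
    --service-account=runner@other-project.iam.gserviceaccount.com
```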


Microsoft Azure Releases And Updates
Source: azure.microsoft.com


Generally available: Azure Site Recovery update rollup 62 - July 2022

Azure update for Azure Site Recovery update rollup 62.

Generally available: Exporting device customizations and cloud properties in Azure IoT Central

Device capability customizations and cloud properties are now included in exported device templates in Azure IoT Central.

Generally available: Azure Gateway Load Balancer

Gateway Load Balancer enables you to deploy, scale, and enhance the availability of third party network virtual appliances (NVAs) in Azure with ease.

Generally available: Azure IoT Edge 1.3.0 release

The Azure IoT Edge release 1.3 is available with support for Red Hat Enterprise Linux 8 and requires that inbound/outbound communication use TLS 1.2 by default.

Public preview: Azure Active Directory authentication for exporting and importing Managed Disks

Improve security with the new integration to Azure Active Directory (AD) used to export and import data to Azure Managed Disks.


Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14-day free trial here: