Hava Blog and Latest News

In Cloud Computing This Week [Aug 12th 2022]

Written by Team Hava | August 12, 2022

This week's roundup of all the cloud news.

 

Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday August 12th 2022.

This week at Hava we've rolled out a few more under the hood performance tweaks which will make the generation and diagram loading process a lot quicker.

To stay in the loop, make sure you subscribe using the box on the right of this page.

Of course we'd love to keep in touch at the usual places. Come and say hello on:

Facebook | LinkedIn | Twitter

AWS Updates and Releases

Source: aws.amazon.com

Amazon EventBridge now supports receiving events from GitHub, Stripe and Twilio using Webhooks

Amazon EventBridge now supports integrations with GitHub, Stripe, and Twilio via webhooks using Quick Starts. You can subscribe to events from these SaaS applications and receive them on an Amazon EventBridge event bus for further processing.

With Quick Starts, you can use AWS CloudFormation templates to create HTTP endpoints for your event bus that are configured with security best practices for GitHub, Stripe, and Twilio. You can configure your GitHub, Stripe, and Twilio webhooks from the respective accounts; simply select the types of events you want to send to the newly generated endpoint and begin securely receiving events on your event bus.

EventBridge is a serverless event bus that enables you to build scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services.

You can set up routing rules on an event bus to determine where to send your data, allowing downstream applications to react to relevant changes in your applications as they occur. Amazon EventBridge can make it easier to build event-driven applications by facilitating event ingestion, delivery, security, authorization, and error handling.
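
As a rough illustration, a routing rule on a custom event bus can be created with boto3 as sketched below; the bus name, the event source value, and the SQS queue ARN are placeholders for whatever your Quick Start deployment and webhook relay actually use.

```python
import json
import boto3

events = boto3.client("events")

BUS_NAME = "github-webhook-bus"  # placeholder: the bus created by the Quick Start

# Route events from an assumed webhook source to an SQS queue for processing.
rule_arn = events.put_rule(
    Name="github-push-events",
    EventBusName=BUS_NAME,
    EventPattern=json.dumps({"source": ["github.webhook"]}),  # assumed source value
    State="ENABLED",
)["RuleArn"]

events.put_targets(
    Rule="github-push-events",
    EventBusName=BUS_NAME,
    # The SQS queue also needs a resource policy allowing events.amazonaws.com.
    Targets=[{"Id": "sqs-target", "Arn": "arn:aws:sqs:us-east-1:123456789012:webhook-queue"}],
)
print("Created rule:", rule_arn)
```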

You can get started by navigating to the ‘Developer Resources’ section of the AWS Management Console under a new ‘Quick starts’ menu item. Quick Starts are automated deployments intended for developers who want to quickly set up a trial or production environment according to AWS best practices and start using EventBridge functionality within minutes.

The Quick Starts for GitHub, Stripe and Twilio integration using webhooks have their own section titled ‘Inbound webhooks using Lambda Function URLs’.

Amazon CloudWatch Synthetics adds support for custom canary groups with group-level availability metrics

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customer experience even when you don’t have any customer traffic on your applications, has introduced a new capability to create custom groups of canaries.

By creating a group of canaries, you can track success/failure status at the group or application level while still drilling down easily to the failing canary, making it easier to pinpoint failures in the context of the group or application. When groups consist of canaries across multiple AWS Regions, this new capability makes it easier to isolate region-specific issues.

Some commonly used group combinations include creating groups of canaries for business-critical and non-business-critical workflows. For example, if you have a payment gateway service with a subset of services that need to be monitored to ensure the third-party payment gateway is available, you can set up a paymentgateway-canarygroup and add all the canaries pertaining to this transaction to the group.

This will allow you to identify and isolate individual canary failures in the payment gateway amongst the tens or hundreds of other canaries in your AWS Account. There is no additional cost to create canary groups.
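
A minimal boto3 sketch of what that grouping could look like; the group name and canary ARNs below are hypothetical.

```python
import boto3

synthetics = boto3.client("synthetics")

# Create the group once, then associate each canary that covers the workflow.
synthetics.create_group(Name="paymentgateway-canarygroup")

canary_arns = [
    "arn:aws:synthetics:us-east-1:123456789012:canary:checkout-flow",
    "arn:aws:synthetics:us-east-1:123456789012:canary:refund-flow",
]
for arn in canary_arns:
    synthetics.associate_resource(
        GroupIdentifier="paymentgateway-canarygroup",
        ResourceArn=arn,
    )
```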

Amazon Cognito enables native support for AWS WAF

You can now enable AWS WAF protection for Amazon Cognito, making it even easier to protect Amazon Cognito user pools and hosted UI from common web exploits.
Amazon Cognito is a service that makes it easy to add authentication, authorization, and user management to your web and mobile apps. Amazon Cognito provides authentication for applications with millions of users and supports sign-in with social identity providers such as Apple, Facebook, Google, and Amazon, and enterprise identity providers via standards such as SAML 2.0 and OpenID Connect.

AWS WAF is a web application firewall that helps to protect your web applications from common web exploits and malicious bots that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules.

Amazon Cognito provides built-in protection for securing your public-facing applications such as a compromised credentials check and adaptive authentication. For additional protection, you can now use AWS WAF to protect Amazon Cognito user pools from web-based attacks and unwanted bots.

Amazon Cognito’s integration with AWS WAF enables you to define rules that enforce rate limits, gain visibility into the web traffic to your applications, and allow or block traffic to Cognito user pools based on business or security requirements, and optimize costs by controlling bot traffic.
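
Attaching an existing web ACL to a user pool is a single WAFv2 call; here is a minimal boto3 sketch in which both ARNs are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# The web ACL must be created with Scope="REGIONAL" and contain your
# rate-limit / bot-control rules; the user pool ARN identifies the Cognito
# user pool to protect. Both ARNs below are placeholders.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/cognito-acl/abcd1234",
    ResourceArn="arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE",
)
```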

Application Insights adds AppRegistry support and faster problem reporting

AWS has further enhanced the monitoring set up experience through Amazon CloudWatch Application Insights' integration with AWS Service Catalog AppRegistry. With this feature, you can now easily select a registered AWS application or register a new one with AppRegistry directly from Application Insights and automatically set up monitoring for the newly registered applications.

Registered applications are also visible to other AWS services that work with AppRegistry, so you can seamlessly interact with your applications in those services as well.

Included in this release is a new summary application health dashboard that helps you visualize the health of your applications and any discovered problems. Another enhancement to the setup process is the ability to perform account-level resource discovery and setup via CloudFormation (in addition to the existing console capability).

Also part of the service update are performance improvements that significantly improve the time to process data, providing near real-time notification of problems.

Amazon AppFlow now supports data transfers from SAP applications to AWS Services using SAP Operational Data Provisioning (ODP)

Amazon AppFlow, a fully managed integration service that helps customers to securely transfer data between AWS services and software-as-a-service (SaaS) applications in just a few clicks, now supports data transfers from SAP applications to AWS Services using SAP Operational Data Provisioning (ODP) framework. With this launch AppFlow customers can use the AppFlow SAP OData connector to perform full and incremental data transfers, including Change Data Capture using SAP Operational Delta Queue, from SAP ERP/BW applications (including ECC, BW, BW/4HANA and S/4HANA) to AWS services such as Amazon S3.

This launch enhances the AppFlow SAP OData connector by adding support for SAP ODP framework and enables a variety of use cases such as initial transfer of Master Data (like Material or Plant Information) or Transactional Data (like Purchase or Consignment Orders) from SAP ERP (S/4HANA, ECC) to a data lake on Amazon S3, and subsequent data transfers for any changes in the source SAP system, allowing customers to have the most current view of their data in their data lake on Amazon S3.

AWS Private 5G is now generally available

This week, AWS are announcing the general availability of AWS Private 5G, a managed service that helps enterprises set up and scale private mobile networks in their facilities in days instead of months. With only a few clicks in the AWS Management Console, you can specify where to build a mobile network and the number of devices you want to connect.

AWS then delivers and maintains the small cell radio unit, the mobile network core and radio access network (RAN) software, and subscriber identity modules (SIM cards) required to set up a private mobile network and connect devices. AWS Private 5G automates the setup and deployment of the network. No upfront fees or per-device costs are incurred with AWS Private 5G, and you pay only for the network capacity that you request.

Increased video content, new applications that require ultra-low latency connectivity to end-user devices, and thousands of smart IoT devices demand extended coverage, more capacity, better reliability, and robust security and access control. Enterprises want to build their own private mobile networks to address these limitations.

However, private mobile network deployments require enterprises to invest considerable time, money, and effort to design their network for anticipated peak capacity and to procure and integrate software and hardware components from multiple vendors. Even if enterprises are able to get the network running, current private mobile network pricing models charge for each connected device, which makes them cost-prohibitive for use cases that involve thousands of connected devices.

AWS Private 5G simplifies the procurement and deployment so that you can deploy your own private mobile network within days instead of months, scale up and down the number of connected devices rapidly, and benefit from a familiar on-demand cloud pricing model.

AWS Private 5G is currently available in the following AWS Regions: US East (Ohio), US East (N. Virginia), and US West (Oregon).

AWS Console Mobile Application adds support for Cost Explorer service

AWS Console Mobile Application users can now use AWS Cost Explorer on both the iOS and Android applications. The Console Mobile Application provides a secure on-the-go solution to visualize, understand, and manage AWS costs and usage over time. Customers can analyze total costs and usage across all regions and services for the preceding eight weeks, identify trends, pinpoint cost drivers, and detect anomalies.

The Console Mobile Application lets customers view and manage a select set of resources to support incident response while on the go. The login process leverages biometric authentication (on supported devices), making access to AWS resources simple and quick.

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.23

You can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.23. Highlights of Kubernetes version 1.23 release include graduation of PodSecurity and Ephemeral containers to beta, and graduation of HorizontalPodAutoscaler to GA.

Additionally, Kubernetes version 1.23 turns on the CSI migration feature for Amazon EBS by default. You can find more details about the Kubernetes 1.23 release in the EKS blog post, EKS release notes, and the Kubernetes project release notes. Support for version 1.23 will be available in Amazon EKS Anywhere in the next couple of weeks.
You can learn more about the Kubernetes versions available on Amazon EKS and instructions to update your cluster to version 1.23 by visiting EKS documentation. Amazon EKS Distro builds of Kubernetes 1.23 are available through ECR Public Gallery and GitHub.
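
A minimal boto3 sketch of a control-plane upgrade to 1.23; the cluster name is a placeholder, and managed node groups and add-ons are updated separately after the control plane finishes.

```python
import boto3

eks = boto3.client("eks")

# Kick off the control-plane upgrade for a hypothetical cluster.
update = eks.update_cluster_version(name="demo-cluster", version="1.23")["update"]
print(update["id"], update["status"])

# Check on the update later; node groups and add-ons are upgraded separately.
status = eks.describe_update(name="demo-cluster", updateId=update["id"])["update"]["status"]
print("Update status:", status)
```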

Introducing the AWS Transfer Family Delivery Program

AWS are excited to announce the new AWS Transfer Family Delivery Program for AWS Partners that help customers build sophisticated Managed File Transfer (MFT) and Business-to-Business (B2B) file exchange solutions with AWS Transfer Family.

AWS Transfer Family enables you to migrate, automate, and monitor your file transfer workflows into and out of Amazon S3 and Amazon EFS using the SFTP, AS2, FTPS, and FTP protocols. With your data in AWS, you can leverage a rich set of data analytics and processing services.

AWS Transfer Family is the only fully managed cloud-native file transfer service currently available, enabling AWS Partners to build customized, validated solutions such as integrating the customer’s identity provider of choice, enhancing file transfer monitoring, and securing endpoints.

AWS Transfer Family Delivery Program partners are vetted by AWS Solutions Architects to ensure partners in the program have deep technical knowledge, experience, and proven success delivering solutions with AWS Transfer Family to customers.

The AWS Transfer Family Delivery Program simplifies locating experts who can build and manage customized B2B file exchange and MFT workflows for customers.

Amazon DocumentDB (with MongoDB compatibility) now supports the Decimal128 data type

Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.

This week, Amazon DocumentDB added support for the Decimal128 data type. Decimal128 is a BSON data type and provides 128 bits of decimal representation supporting 34 decimal digits of precision and an exponent range of -6143 to +6144.

For more information on supported MongoDB APIs, operations, and data types, see the documentation. The Decimal128 data type is supported in all regions where DocumentDB is available.
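
For example, a short PyMongo sketch that stores and reads back a Decimal128 value; the connection string, database, and collection names are placeholders.

```python
from decimal import Decimal
from bson.decimal128 import Decimal128
from pymongo import MongoClient

# Placeholder endpoint and credentials for an Amazon DocumentDB cluster.
client = MongoClient(
    "mongodb://user:pass@docdb-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/?tls=true"
)
orders = client["shop"]["orders"]

# Store a monetary amount without binary floating-point rounding error.
orders.insert_one({"order_id": 1, "total": Decimal128(Decimal("19999.99"))})
doc = orders.find_one({"order_id": 1})
print(doc["total"].to_decimal())  # Decimal('19999.99')
```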

Amazon SageMaker Canvas expands capabilities to better prepare and analyze data for machine learning

AWS are excited to announce expanded capabilities for data preparation and analysis in Amazon SageMaker Canvas including replacing missing values, replacing outliers, and the flexibility to choose different sample sizes for your datasets.

Amazon SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning (ML) experience or having to write a single line of code. SageMaker Canvas makes it easy to access and combine data from a variety of sources, automatically clean data, and build ML models to generate accurate predictions with a few clicks.

Starting this week, SageMaker Canvas enables you to replace missing values to prepare your data faster, replace outliers in your data to build more accurate ML models, and choose the size of your dataset sample for quicker data analysis.

Replace missing values: Missing values are a common occurrence in datasets and can impact the accuracy of ML models. This new capability in SageMaker Canvas helps you replace (also referred to as impute) missing values in your data with custom values and prepare your data faster, while keeping your dataset intact. As an example, you can replace missing values in numeric columns with the mean or median of your data, or a custom value. This ensures your data is ready prior to building ML models.

Replace outliers: Outliers, or rare values outside the typical range of your data, can introduce large variance or bias when building ML models. SageMaker Canvas now enables you to detect outliers in numeric columns and helps replace them with values within a specific range. You can choose either the standard deviation or a custom range and replace outliers with the minimum and maximum values in this specified range.

Choice of sizes for dataset samples: SageMaker Canvas now allows you to choose the size of your dataset sample to better analyze your data. Sampling is a statistical technique to identify patterns and trends in a large dataset by working with a small and manageable amount of data, while enabling accurate data analysis to build ML models.

SageMaker Canvas uses the random sampling method, which enables quicker insights into your data. By default, Canvas uses a sample size of 20,000 rows from your dataset. You can now choose between 500 and 40,000 rows for the sample data depending on the size of your dataset, giving you flexibility and control.

AWS Glue now supports Flex execution option

AWS Glue now supports a new execution option that allows customers to reduce the costs of their pre-production, test, and non-urgent data integration workloads by up to 34%. With Flex, Glue jobs run on spare capacity in AWS.

Flex is ideal for customer workloads that don’t require fast job start times. The start and execution times of jobs using Flex vary because spare compute resources are not always available immediately and may be reclaimed during the execution of a job. Regardless of the execution option used, Glue jobs have the same capabilities, including access to custom connectors, a visual job authoring experience, and a job scheduling system.

Using the Flex execution option, customers can optimize the costs of their data integration workloads by choosing the execution option that matches each workload’s requirements: the standard execution option for time-sensitive workloads, and Flex for non-urgent workloads.
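
As a sketch, the execution class can be chosen per job run with boto3; the job name is a placeholder, and Flex is intended for Glue 3.0 Spark jobs.

```python
import boto3

glue = boto3.client("glue")

# Run an existing, non-urgent job on spare capacity.
glue.start_job_run(
    JobName="nightly-backfill",   # placeholder job name
    ExecutionClass="FLEX",        # use "STANDARD" for time-sensitive workloads
)
```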

Amazon Aurora Serverless v1 now supports PostgreSQL 11 and In-Place upgrade from PostgreSQL 10

Amazon Aurora Serverless v1 now supports PostgreSQL major version 11. PostgreSQL 11 includes improvements to partitioning, parallelism, and performance enhancements such as faster column additions with a non-null default.

Aurora Serverless v1 also supports in-place upgrade from PostgreSQL 10 to 11. Instead of backing up and restoring the database to the new version, you can upgrade with just a few clicks in the AWS Management Console or using the latest AWS SDK or CLI. No new cluster is created in the process which means you keep the same endpoints and other characteristics of the cluster. The upgrade completes in minutes and can be applied immediately or during the maintenance window. Your database cluster will be unavailable during the upgrade.
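
A hedged boto3 sketch of triggering the in-place upgrade; the cluster identifier and the exact engine version string are assumptions, so check describe_db_engine_versions for the Aurora PostgreSQL 11 version available to Serverless v1 in your region.

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",  # placeholder
    EngineVersion="11.16",                        # assumed version string
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,   # or omit to apply during the maintenance window
)
```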

Amazon SageMaker Pipelines now supports sharing of pipeline entities across accounts

You can now use Amazon SageMaker Model Building Pipelines with AWS Resource Access Manager (AWS RAM) to securely share pipeline entities across AWS accounts and access shared pipelines through direct API calls. A multi-account strategy helps achieve data, project, and team isolation while supporting software development lifecycle steps.

Cross-account pipeline sharing can support a multi-account strategy without the added hassle of logging in and out of multiple accounts. For example, cross-account pipeline sharing can improve machine learning testing and deployment workflows by sharing resources across staging and production accounts.

Starting this week, cross-account pipeline sharing is supported in all regions where SageMaker Pipelines is available. For more information about availability, see Supported Regions and Quotas.

Set up cross-account pipeline sharing to share pipeline resources with AWS accounts or business groups within your organization. Start using pipeline resources that were shared with your account using the SageMaker Python SDK. Pipeline resources can be shared with either read-only or extended pipeline execution permissions.

Users with read-only permissions can see the details of any shared pipelines, including pipeline definitions and execution steps. Users with extended pipeline execution permissions can also start, stop, and retry pipeline executions.
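
A minimal boto3 sketch of sharing a pipeline through AWS RAM; the pipeline ARN and account IDs are placeholders, and the permission level (read-only vs. extended execution) is governed by the RAM permission attached to the share.

```python
import boto3

ram = boto3.client("ram")

ram.create_resource_share(
    name="shared-training-pipeline",
    resourceArns=[
        "arn:aws:sagemaker:us-east-1:111122223333:pipeline/training-pipeline"  # placeholder
    ],
    principals=["444455556666"],    # the consuming (e.g. production) account
    allowExternalPrincipals=False,  # keep sharing within your organization
)
```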

Amazon SageMaker Automatic Model Tuning now supports alternate SageMaker training instance types for more robust tuning

Amazon SageMaker Automatic Model Tuning now supports specifying multiple alternate SageMaker training instance types to make tuning jobs more robust when the preferred instance type is not available due to insufficient capacity.
SageMaker Automatic Model Tuning finds the best version of a model by running many training jobs on the dataset using the specific ranges of hyperparameters that you provide for your algorithm. It then chooses the optimal hyperparameter values that result in a model that performs the best, as measured by a metric that you choose.

Previously, when creating SageMaker Automatic Model Tuning jobs, you were able to define only one SageMaker training instance type. If the capacity for this instance type was low, you would face increased job runtime and high chances of tuning job failures. This was particularly undesirable as hyperparameter tuning involves running multiple and potentially long-running training jobs, which would have to be re-started from scratch in the event of such failures.

With this launch, you can now specify up to 5 additional alternate instance types in the order of your preference so that the hyperparameter tuning job can automatically fall back to the next alternate instance type in the event of insufficient capacity. This makes tuning jobs resilient to insufficient capacity scenarios and allows you to tune your models without any runtime increase or failure due to the low availability of some specific SageMaker training instances.
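
A rough sketch of the resource-configuration fragment this feature introduces; the field names follow the CreateHyperParameterTuningJob API, but treat the exact structure as an assumption and confirm it against the SageMaker documentation.

```python
# Instance types listed in order of preference; the tuning job falls back to
# the next entry when the preferred type has insufficient capacity.
hyperparameter_tuning_resource_config = {
    "InstanceConfigs": [
        {"InstanceType": "ml.c5.2xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},  # preferred
        {"InstanceType": "ml.m5.2xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},  # fallback 1
        {"InstanceType": "ml.c5.4xlarge", "InstanceCount": 1, "VolumeSizeInGB": 30},  # fallback 2
    ]
}
# Passed as TrainingJobDefinition["HyperParameterTuningResourceConfig"] in a
# sagemaker.create_hyper_parameter_tuning_job(...) call.
```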

AWS IoT Greengrass v2 updates Stream Manager to report new telemetry metrics and more

AWS IoT Greengrass is an Internet of Things (IoT) edge runtime and cloud service that helps customers build, deploy, and manage device software. AWS are excited to announce version 2.7 release with the following features:

  • System Telemetry Enhancements - The Stream Manager agent component now has the ability (enabled by default) to send system telemetry metrics to Amazon EventBridge. System telemetry data is diagnostic data that can help you monitor the performance of critical operations on your AWS IoT Greengrass core devices. You can create projects and applications to retrieve, analyze, transform, and report telemetry data from your edge devices. Domain experts, such as process engineers, can use these applications to gain insights into their fleet health based on device data uploaded through Stream Manager to AWS Services such as Amazon Kinesis, Amazon Simple Storage Service (Amazon S3), AWS IoT Analytics, AWS IoT SiteWise, and more. For more information, see Gathering System Telemetry section in the developer guide.
  • Local Deployment Improvements - The AWS IoT Greengrass nucleus can now send near real-time deployment status updates to the AWS IoT Greengrass cloud service. For instance, using the ListInstalledComponents API, customers can now observe the status of locally deployed components for a connected device.
  • Additional Support for Client Certificates - Certificates signed by a custom certificate authority (CA) that isn't registered with AWS IoT are now supported, giving customers the flexibility to use their own CA. To use this feature, set the new greengrassDataPlaneEndpoint configuration option to iotdata.

AWS Graviton2-based Amazon EC2 C6g, C6gd, and M6gd are now available in additional regions

Starting this week, Amazon EC2 C6g and C6gd instances are available in the Asia Pacific (Osaka) Region. Additionally, M6gd instances are now available in the Europe (Stockholm) Region.

C6g and C6gd instances are ideal for compute-intensive workloads such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference.

M6gd instances are ideal for general purpose applications such as application servers, microservices, mid-size data stores, and caching fleets. C6gd and M6gd instances offer up to 50% more NVMe storage GB/vCPU over comparable x86-based instances and are ideal for applications that need high-speed, low latency local storage.

These instances are powered by AWS Graviton2 processors. The AWS Graviton processors are custom-designed by AWS to enable the best price performance in Amazon EC2. AWS Graviton2 processors are the second-generation Graviton processors and deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory.

AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to the first-generation AWS Graviton processors. These instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

These instances offer up to 25 Gbps of network bandwidth, and up to 19 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). The C6gd and M6gd instances also offer up to 3.8 TB of NVMe-based SSD storage.

AWS Graviton processors are supported by many Linux operating systems including Amazon Linux 2, Red Hat Enterprise Linux, SUSE, and Ubuntu. Many popular applications and services for security, monitoring and management, containers, and continuous integration and delivery (CI/CD) from AWS and software partners also support AWS Graviton-based instances.

The AWS Graviton Ready program provides customers with certified solutions from partner software vendors that can be used on AWS Graviton-based instances. Many AWS services also support Graviton-based instances, making it quick and easy for you to realize the price performance gains. AWS services that support Graviton-based instances include Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), AWS Lambda, AWS Fargate, Amazon Aurora, Amazon Relational Database Service (Amazon RDS), Amazon EMR, and Amazon ElastiCache.

Amazon S3 adds a new policy condition key to require or restrict server-side encryption with customer-provided keys (SSE-C)

The new Amazon S3 condition key enables you to write policies that help you control the use of server-side encryption with customer-provided keys (SSE-C). Using Amazon S3 condition keys, you can specify conditions when granting permissions in the optional ‘Condition’ element of a bucket or an IAM policy. One such condition is to require server-side encryption (SSE) using your preferred encryption method.

When you use SSE-C, you supply and manage the encryption keys, while S3 implements the encryption and decryption of your object data. Most customers take advantage of S3’s built-in support for encryption keys with either S3-managed (SSE-S3) or AWS Key Management Service (KMS) keys (SSE-KMS).

However, some customers choose SSE-C to get an additional layer of control for sensitive data stored in S3 or to satisfy compliance regulations. In these cases, you may want all uploads to your buckets to use SSE-C. In other cases, you may want to prevent object uploads using SSE-C so that you and your customers do not have to maintain encryption keys. With the new condition key, customers can choose to either require or restrict use of SSE-C.
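
For example, a bucket policy that requires SSE-C on uploads could be applied with boto3 as sketched below; the bucket name is a placeholder, and the policy denies any PutObject request that does not carry an SSE-C algorithm header.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-sensitive-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireSSEC",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # Deny uploads with no SSE-C algorithm header; invert the condition
        # value ("false") to restrict SSE-C instead of requiring it.
        "Condition": {
            "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "true"}
        },
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```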

The S3 condition key for SSE-C encrypted objects is available at no additional cost in all commercial AWS Regions, including the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD.

Amazon DocumentDB (with MongoDB compatibility) now supports DML query auditing with Amazon CloudWatch Logs

Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.

This week, Amazon DocumentDB added additional query auditing support for database events. Now, with auditing enabled, in addition to Data Definition Language (DDL) events, DocumentDB will record Data Manipulation Language (DML) events to Amazon CloudWatch Logs.

This includes events corresponding to insert(), insertMany(), update(), updateMany(), delete(), deleteMany(), bulkWrite(), find(), count(), distinct(), replaceOne(), aggregates and more. You can use Amazon CloudWatch Logs to analyze, monitor, and archive your DocumentDB DML query events.
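
A hedged boto3 sketch of enabling auditing and exporting the audit stream to CloudWatch Logs; the cluster and parameter group names are placeholders, and the audit_logs value shown is an assumption, so confirm the exact values (DDL vs. DML categories) in the DocumentDB auditing documentation.

```python
import boto3

docdb = boto3.client("docdb")

# Turn on auditing via the cluster parameter group.
docdb.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="my-docdb-params",
    Parameters=[{
        "ParameterName": "audit_logs",
        "ParameterValue": "all",      # assumed value covering DDL and DML events
        "ApplyMethod": "immediate",
    }],
)

# Export the audit log type to CloudWatch Logs for the cluster.
docdb.modify_db_cluster(
    DBClusterIdentifier="my-docdb-cluster",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["audit"]},
)
```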

Amazon RDS Custom for Oracle now supports promotion of managed in-region read replica

Amazon Relational Database Service (Amazon RDS) Custom for Oracle now supports the promotion of a managed replica that was created using the replica function. When you promote a managed replica, it is converted from a physical standby database and activated as a standalone read/write primary database instance.

The RDS Custom for Oracle instance supports creation of up to five managed replicas. Managed replicas are initially created in mount mode but can be manually changed to read-only mode to offload read workloads. Running replicas in read-only mode requires Oracle Active Data Guard licenses. The replica promotion feature supports managed replicas running in either mode.
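
A minimal sketch, assuming the standard RDS PromoteReadReplica call applies to the RDS Custom managed replica; the replica identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Promote a hypothetical managed replica to a standalone read/write instance.
rds.promote_read_replica(DBInstanceIdentifier="orcl-custom-replica-1")
```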

AWS Direct Connect expands AWS Transit Gateway support at more connection speeds

AWS Direct Connect now supports connections to AWS Transit Gateway at speeds of 500 megabits per second (Mbps) and lower, providing more cost-effective choices for Transit Gateway users when higher speed connections are not required. With this change, customers using Direct Connect at connection speeds of 50, 100, 200, 300, 400, and 500 Mbps can now connect to their Transit Gateway.

With Direct Connect, you can transfer data privately from your data center, office, or colocation environment into and out of AWS. Connections through Direct Connect bypass the public internet to help decrease network congestion and unpredictability. You can make these connections at over 115 Direct Connect locations (points-of-presence) worldwide. Transit Gateway connects your Amazon Virtual Private Clouds (Amazon VPCs) to a central Regional hub. Transit Gateway simplifies your network and helps put an end to complex peering relationships, making the process of adding new Amazon VPCs easier. With this change, customers using both services have more options and can find the right connection speed for their needs.
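
A rough boto3 sketch of wiring a lower-speed connection to a Transit Gateway through a transit virtual interface and a Direct Connect gateway association; every identifier and BGP value below is a placeholder.

```python
import boto3

dx = boto3.client("directconnect")

# Create a transit virtual interface on a (sub-1 Gbps) connection, attached to
# a Direct Connect gateway.
dx.create_transit_virtual_interface(
    connectionId="dxcon-xxxxxxxx",
    newTransitVirtualInterface={
        "virtualInterfaceName": "tgw-vif",
        "vlan": 101,
        "asn": 65000,
        "directConnectGatewayId": "11111111-2222-3333-4444-555555555555",
    },
)

# Associate that Direct Connect gateway with the Transit Gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId="11111111-2222-3333-4444-555555555555",
    gatewayId="tgw-0123456789abcdef0",
)
```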

 

 
Google Cloud Releases and Updates
Source: cloud.google.com

Anthos clusters on AWS
  • Anthos clusters on AWS (previous generation) aws-1.12.1-gke.0 is now available.

    You can now launch clusters with the following Kubernetes versions:

    • 1.23.8-gke.2000
    • 1.22.12-gke.300
    • 1.21.14-gke.2100

Anthos Service Mesh

Anthos Service Mesh 1.14.3-asm.0 includes the features of Istio 1.14.3 subject to the list of Anthos Service Mesh supported features.

Anthos Service Mesh 1.12.9-asm.0 includes the features of Istio 1.12.9 subject to the list of Anthos Service Mesh supported features.

Apigee Integration

On August 10, 2022, GCP released an updated version of the Apigee Integration software.

Support for VPC Service Controls (Preview)

VPC Service Controls lets you define a security perimeter around the Apigee Integration Google Cloud service. For more information, see Set up VPC Service Controls for Apigee Integration.

BigQuery

You can now set default configurations at a project or organization level. This feature is now generally available (GA).

Chronicle

The following changes are available in the Unified Data Model:

  • The File.ashash field was deprecated and replaced with the File.authentihash field.
  • The day_max field was added to the Prevalence type.

Descriptions of the File.FileType Enum values are now available in the Unified Data Model field list document.

For a list of all fields in the Unified Data Model, and their descriptions, see the Unified Data Model field list.

Cloud Bigtable

New tooling is available to help you migrate to Cloud Bigtable from HBase clusters that are hosted on another Google Cloud service. For more information, see Migrate from HBase on Google Cloud.

Cloud Domains

Importing a domain from Google Domains to Cloud Domains is available in Preview. 

Cloud Load Balancing

External TCP/UDP network load balancers can now be configured to handle IPv6 traffic from clients. To enable this, you must configure your subnet, backend VMs, and the forwarding rules to handle IPv6 traffic.

This feature is only available for backend service-based network load balancers.

This feature is available in General Availability.

Cloud Monitoring

You can now prevent Cloud Monitoring from sending notifications or creating incidents during specific time periods. For general information, see Snooze notifications and alerts. For information about how to create, view, and modify a snooze, see Create and manage snoozes.

You can now update older versions of the Ops Agent from the Cloud Monitoring VM Instances page and from the Details panel for a selected Compute Engine instance. The "Install" option for a new agent now also supports "update" for upgrading an older agent.

You can now create uptime checks for Cloud Run public endpoints by using the Monitoring API and specifying the Cloud Run Revision monitored-resource type.

The organization of the SLO monitoring Services Overview page has been improved. The new layout provides a better experience when you don't yet have any services. When you have services, the new Supported Services list indicates how many of each type you have. You can also use the list to filter the services table to include all services of a selected type. For more information, see Services Overview dashboard. 

Compute Engine

Generally Available: Internal and external IPv6 addresses for Google Compute Engine instances are available in all regions.

For more information, see Configuring IPv6 for instances and Creating instances with multiple network interfaces. 

GKE

Newly created GKE clusters on version 1.24 or later that use Services without the .spec.ports field defined will cause a crash-loop of the ingress-gce controller (l7lbcontroller pod). This will result in the cluster being unable to provide L7 Ingress, L4 Internal LoadBalancer Services with Subsetting turned on, or L4 Network LoadBalancers based on Regional Backend Services.

To recover from this situation, delete any Service that has no port specified, or recreate the cluster and ensure that no Service leaves .spec.ports undefined. A quick way to find such Services is sketched below.
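
A short sketch using the Kubernetes Python client to list Services that define no ports, so they can be fixed or removed before creating or upgrading to a 1.24+ cluster.

```python
from kubernetes import client, config

# Load credentials from kubeconfig (use load_incluster_config() inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# Print every Service whose .spec.ports is empty or undefined.
for svc in v1.list_service_for_all_namespaces().items:
    if not svc.spec.ports:
        print(f"Service with no ports: {svc.metadata.namespace}/{svc.metadata.name}")
```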

Network Intelligence Center

Connectivity Tests now includes a feature that performs live data plane analysis by testing connectivity between a VM and a Google network edge location. This feature is available for the following traffic flows:

  • Between VM and non-Google Cloud network
  • Between VM and Cloud SQL instances

In the Google Cloud console, you can see the results of this analysis in the column labeled Last live data plane analysis result. In the gcloud command-line and API responses, you can see the results in the probingDetails object.

Security Command Center

Event Threat Detection, a built-in service of Security Command Center, launched the following rules to Preview.

  • Discovery: Can get sensitive Kubernetes object check
  • Privilege Escalation: Changes to sensitive Kubernetes RBAC objects
  • Privilege Escalation: Create Kubernetes CSR for master cert
  • Privilege Escalation: Creation of sensitive Kubernetes bindings
  • Privilege Escalation: Get Kubernetes CSR with compromised bootstrap credentials
  • Privilege Escalation: Launch of privileged Kubernetes container

These rules detect scenarios where a malicious actor attempted to query for or escalate privileges in Google Kubernetes Engine. For more information, see Event Threat Detection rules.

Storage Transfer Service

Storage Transfer Service now supports transfers from AWS S3 using self-hosted transfer agents. This feature provides a way to configure the data transfer path between AWS and Google Cloud and offers more control over performance.

See the documentation for details.

VPC

Internal and external IPv6 addresses are available in all regions in General Availability.

VPC Service Controls

Beta stage support for the following integration:

 

 


Microsoft Azure Releases And Updates
Source: azure.microsoft.com

 

Generally available: Azure Site Recovery update rollup 63 - August 2022

This update provides the improvements for the latest version of Azure Site Recovery components.

Generally available: Azure Databricks in West US 3

Azure Databricks, a data analytics platform optimized for the Microsoft Azure cloud services platform, is now generally available in West US 3.

General availability: Upgrade VMware VMs protected by Site Recovery to modernized experience

Azure Site Recovery now provides a migration capability to move existing replications from the classic to the modernized experience for disaster recovery of VMware virtual machines.

Public preview: Update management center in Azure

Update management center is the next iteration of the Azure Automation Update Management solution. It works out of the box with zero onboarding steps and has no dependency on Azure Automation or Log Analytics.

 

Public preview: Serverless SQL for Azure Databricks

Serverless SQL for Azure Databricks provides instant compute to users for their BI and SQL workloads without waiting for clusters to start up or scale out.

 

Public preview: Azure Dedicated Host restart

Azure Dedicated Host restart is now in public preview.

 

General availability: Azure Lab Services August 2022 update

Improve performance, reliability, and scalability for Azure Lab Services with the latest update.

 


 

Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
 
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
 
Check out the 14 day free trial here: