This week's roundup of all the cloud news.
Here's a cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday August 19th 2022.
This week at Hava we've rolled out a stack of enhancements to the Azure diagrams, including new connections and the ability to lay out diagrams by resource group or virtual network.
You can see more about the Azure diagram enhancements in this post.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places, so come and say hello on our usual social channels.
AWS Updates and Releases
This week, Amazon Personalize is excited to announce the Trending-Now recommender for Video-on-Demand domain to highlight catalogue items that are gaining popularity at the fastest pace.
Amazon Personalize is a fully managed machine learning service that makes it easy for customers to deliver personalized experiences to their users. Recommenders help reduce the time needed for you to deliver and manage these personalized experiences, and help ensure that recommendations are relevant to your users. User interests can change based on a variety of factors, such as external events or the interests of other users.
It is critical to tailor recommendations to these changing interests to improve user engagement. With Trending-Now, you can surface items from your catalogue that are rising in popularity faster than other items, such as a newly released movie or show. Amazon Personalize looks for items that are rising in popularity at a faster rate than other catalogue items and highlights them to users to provide an engaging experience. Amazon Personalize automatically identifies trending items every 2 hours based on the most recent interactions data from your users.
Getting started with Trending Now is easy. You can create a Trending Now recommender in your existing domain dataset group, or create a new domain dataset group and then create a new recommender with the Trending Now use case.
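As a sketch of what that setup looks like with the AWS SDK for Python, the snippet below assembles the parameters for a create_recommender call; the recipe ARN and dataset group ARN are illustrative assumptions, so check the Personalize console for the exact values in your account.

```python
# Hedged sketch: building a Trending-Now recommender request for Personalize.
# The recipe ARN and dataset group ARN below are illustrative placeholders.

def build_trending_now_request(dataset_group_arn: str) -> dict:
    """Assemble create_recommender parameters for the VOD Trending-Now use case."""
    return {
        "name": "trending-now-recommender",
        "datasetGroupArn": dataset_group_arn,
        # Assumed recipe identifier for the Trending-Now VOD use case
        "recipeArn": "arn:aws:personalize:::recipe/aws-vod-trending-now",
    }

params = build_trending_now_request(
    "arn:aws:personalize:us-east-1:123456789012:dataset-group/my-vod-group"
)
# With credentials configured you would then call:
#   import boto3
#   boto3.client("personalize").create_recommender(**params)
```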
AWS App Mesh is now available in Asia Pacific (Osaka) and Asia Pacific (Jakarta) AWS Regions. AWS App Mesh is a service mesh that provides application-level networking to make it easier for your services to communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate, giving you end-to-end visibility and options to tune for high-availability of your applications.
Amazon DynamoDB now makes it easier for you to migrate and load data into new DynamoDB tables by supporting bulk data imports from Amazon S3. Now, you can import data directly into new tables to help you migrate data from other systems, load test data to help you build new applications, facilitate data sharing between tables and accounts, and simplify your disaster recovery and business continuity plans.
The new DynamoDB import from S3 feature simplifies the import process so you do not have to develop custom solutions or manage instances to perform imports. DynamoDB bulk import also does not consume your table’s write capacity so you do not need to plan for additional capacity during the import process.
Bulk import supports CSV, DynamoDB JSON and Amazon Ion as input formats. Combined with the DynamoDB to Amazon S3 export feature, you can now more easily move, transform, and copy your DynamoDB tables from one application, account, or region to another. You can get started with DynamoDB import with just a few clicks in the AWS Management Console or API calls.
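To illustrate the DynamoDB JSON input format, here is a minimal sketch that serializes flat items into import lines for S3; it handles only strings and numbers, and a real migration would need to cover the full set of DynamoDB attribute types.

```python
import json

def to_dynamodb_json(item: dict) -> str:
    """Serialize a flat item into a DynamoDB JSON line for S3 bulk import.
    Only string and numeric attributes are handled in this sketch."""
    attrs = {}
    for key, value in item.items():
        if isinstance(value, (int, float)):
            attrs[key] = {"N": str(value)}   # numbers are encoded as strings
        else:
            attrs[key] = {"S": str(value)}
    return json.dumps({"Item": attrs})

line = to_dynamodb_json({"pk": "user#42", "score": 7})
# One such line per item would be written to the S3 object used for import.
```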
Bottlerocket, a Linux-based operating system that is purpose built to run container workloads, now has a Center for Internet Security (CIS) Benchmark. The CIS Benchmark is a catalog of security-focused configuration settings that help Bottlerocket customers configure or document any non-compliant configurations in a simple and efficient manner. The CIS Benchmark for Bottlerocket includes both Level 1 and Level 2 configuration profiles.
AWS customers operating in industries with strict compliance requirements – such as financial services, healthcare, and federal government – need to show that their Bottlerocket hosts are compliant with a range of compliance certifications, one being the CIS Benchmark. Using the CIS Benchmark guidance document in PDF format, customers can determine the configuration requirements and complete the configuration and hardening process for Bottlerocket hosts before deploying them in production.
The CIS Benchmark can be accessed free of charge for non-commercial use from the CIS website. It applies to all official releases of Bottlerocket starting with version 1.9.0.
Bottlerocket is an open-source distribution with an open development model and community participation. It’s available at no additional cost and is fully supported by Amazon Web Services.
You can now enable CloudWatch Contributor Insights on your AWS PrivateLink-powered VPC Endpoint Services. AWS PrivateLink is a fully-managed private connectivity service that enables customers to access AWS services, third-party services or internal enterprise services hosted on AWS in a secure and scalable manner while keeping network traffic private.
CloudWatch Contributor Insights analyzes time-series data to report the top contributors and number of unique contributors in a dataset.
As a PrivateLink Service owner, you can use Contributor Insights rules to monitor and troubleshoot performance of your service. For example, in the event of a rapid increase in traffic, you can enable a rule for the BytesProcessed metric to discover the customer endpoints sending the highest traffic volume to your service.
Similarly, you can enable rules to track customer endpoints with the highest number of active connections, new connections, and resets (RSTs). Contributor Insights for AWS PrivateLink can also help you get data that can be used for cost allocation of your VPC Endpoint Service across different customer endpoints. You will pay a monthly charge for each rule. See the CloudWatch pricing page for details.
PrivateLink Contributor Insights rules can be enabled from the AWS Console, CLI, SDK and CloudFormation. The feature is available in all public AWS Regions and GovCloud Regions except Asia Pacific (Jakarta). To learn more, visit the AWS PrivateLink section of the Amazon VPC Developer Guide.
Amazon Chime SDK announces the launch of live connector pipelines that send real-time video from applications to streaming platforms such as Amazon Interactive Video Service (IVS) or AWS Elemental MediaLive. Amazon Chime SDK enables multi-party video sessions by letting developers add real-time voice and video to their web and mobile apps. Live connector helps to simplify the process of live streaming these sessions through a single API. Customers can send real-time video to streaming platforms such as AWS Elemental MediaLive, Amazon IVS, Twitch, YouTube Live, Facebook Live, and more.
AWS Customers can use live connector to broadcast to large audiences for use cases such as webinars, town hall meetings, events, online lectures and classes, and live product demonstrations. Live connector creates a single video stream of a WebRTC session and sends it to the streaming platform.
For example, customers who use AWS Elemental can now ingest WebRTC content using the Amazon Chime SDK live connector and AWS Elemental MediaLive. WebRTC content can be sourced from mobile applications and web browsers, making it easier to bring live guests into the production using whatever personal device they have available.
Live connector composites a multi-party WebRTC video session into a single RTMP stream and sends it to a streaming platform. Developers choose whether to livestream an active speaker video or a composited video stream. Developers can enhance the live streaming experience by using pre-formatted video layouts or combining multiple video streams into a single view. The video stream can be captured into a single file for video on demand (VoD) playback, offline consumption, or archiving, which simplifies the recording and distribution process.
AWS Cost Categories now support the categorization of Out-of-Cycle costs such as AWS Enterprise Support and AWS ProServe charges. Previously, cost categories bundled all Out-of-Cycle costs into the “Default value” grouping. With this launch, you can use cost category dimensions to capture and categorize Out-of-Cycle costs.
For example, you can now capture enterprise support costs by creating a cost category rule with the “Service” dimension and the service code “AWSSupportEnterprise”. You can then use existing cost category features, such as the split charge feature, to split the enterprise support charge across other cost categories.
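As a hedged sketch, a rule like the one described above might be expressed as follows when defining a cost category; the field names follow the Cost Explorer CreateCostCategoryDefinition API as I understand it and should be verified against the current API reference.

```python
# Illustrative cost category rule capturing Enterprise Support charges.
# Field names ("Value", "Rule", "Dimensions", "SERVICE_CODE") are assumptions
# based on the Cost Explorer API and should be checked before use.

enterprise_support_rule = {
    "Value": "Enterprise Support",          # the category value to assign
    "Rule": {
        "Dimensions": {
            "Key": "SERVICE_CODE",          # match on the service code dimension
            "Values": ["AWSSupportEnterprise"],
        }
    },
}
```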
AWS Cost Categories is a feature within AWS Cost Management product suite that enables you to group cost and usage information into meaningful categories based on your needs. You can create custom categories and map your cost and usage information into these categories based on the rules defined by you using various dimensions such as account, tag, service, charge type, and even other cost categories. Once cost categories are set up and enabled, you will be able to view your cost and usage information by these categories starting at the beginning of the month in AWS Cost Explorer, AWS Budgets, and AWS Cost and Usage Report (CUR).
This feature is available in all regions. There is no additional cost for using this feature. To get started with grouping your Out-of-Cycle costs today, visit the AWS Cost Categories page.
Amazon Rekognition Custom Labels is an automated machine learning (AutoML) service that allows customers to build custom computer vision models to classify and identify objects in images that are specific and unique to their business. Custom Labels does not require customers to have any prior computer vision expertise or knowledge.
Starting this week, AWS customers will have the ability to copy trained Custom Labels models from one AWS account to another AWS account within the same region. This new capability makes it easier for customers to move Custom Labels models through various environments such as development, quality assurance, integration, and production without having to copy the original training/test datasets or re-train the model. Custom Labels models can be copied across accounts in less than 10 minutes.
AWS partners and customers often use multiple AWS accounts that are provisioned based on software development phase (e.g. build, test, stage, deploy), business function (e.g. data science, engineering), or a combination of both. Previously, Custom Labels models could only be used in the AWS account in which they were trained. Customers who wanted to develop a model in a development environment and then deploy it in production had to copy their training dataset to each AWS account and train the model from scratch. This was a time-consuming process that slowed down model deployment to production.
With this new capability, AWS partners can develop and optimize a model in a development AWS account and copy the latest version of the model over to a customer-operated production account. Similarly, enterprise customers with global operations can incorporate ML-Ops best practices by developing and testing Custom Labels models in a development AWS account and moving them to a production account when ready for deployment, without having to retrain the model from scratch. Customers in regulated industries, such as financial services and insurance, can now share models across their development, testing, and production accounts without sharing sensitive training datasets.
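A minimal sketch of what the copy request could look like with the boto3 CopyProjectVersion call, using placeholder ARNs and bucket names; cross-account copies typically also require a project policy granting the destination account access, so consult the Rekognition Custom Labels documentation for the full workflow.

```python
# Hedged sketch: assembling parameters for Rekognition's CopyProjectVersion.
# All ARNs and the bucket name are placeholders, not real resources.

def build_copy_request(source_project_arn: str, source_version_arn: str,
                       dest_project_arn: str, version_name: str,
                       output_bucket: str) -> dict:
    """Build the parameter dict for copying a Custom Labels model version."""
    return {
        "SourceProjectArn": source_project_arn,
        "SourceProjectVersionArn": source_version_arn,
        "DestinationProjectArn": dest_project_arn,
        "VersionName": version_name,
        "OutputConfig": {"S3Bucket": output_bucket, "S3KeyPrefix": "copied-model/"},
    }

copy_params = build_copy_request(
    "arn:aws:rekognition:us-east-1:111111111111:project/dev-project/1",
    "arn:aws:rekognition:us-east-1:111111111111:project/dev-project/version/v1/1",
    "arn:aws:rekognition:us-east-1:222222222222:project/prod-project/1",
    "v1-promoted",
    "prod-model-artifacts",
)
# boto3.client("rekognition").copy_project_version(**copy_params)
```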
AWS is excited to announce that the AWS Well-Architected Tool is now available in AWS GovCloud (US) Regions. AWS GovCloud (US) is an isolated region designed to host sensitive data and regulated workloads in the cloud. The AWS Well-Architected Tool is used to help customers review the state of their applications and workloads against architectural best practices, and improve decision-making, minimize risks, and reduce costs. With this Region expansion, customers with specific regulatory and compliance requirements and AWS Partners in both the public and commercial sectors can now conduct self-service Well-Architected Reviews.
This new capability is available to customers and AWS partners at no additional charge in the AWS Management Console.
Amazon Web Services (AWS) announces expansion of wideband Digital Intermediate Frequency (DigIF) support for Software Defined Radios (SDRs) to enable customers to downlink more data in less time and reduce costs. AWS Ground Station regions Africa (Cape Town), Europe (Ireland), and Asia Pacific (Singapore) join the Middle East (Bahrain) region to offer customers a total of four locations where expanded support for SDRs is now available in Preview.
AWS Ground Station originally supported SDRs for narrowband frequencies less than 54 MHz. By expanding SDR support to 400 MHz, AWS can now support wideband customers who rely on new modulation and encoding schemes, helping customers like Earth imaging businesses, universities, and governments optimize their operational costs. The expansion to new regions provides customers with additional contact opportunities to align with satellite data processing requirements. With AWS Ground Station providing a digital intermediate frequency (DigIF) output, customers can select an SDR of their choice to work with AWS Ground Station and benefit from the speed of market innovation. These SDRs perform the modulation and encoding steps in the customer’s Virtual Private Cloud (VPC), giving the customer more control over their data and allowing more flexibility to move to different configurations, including higher data rates, as they scale their constellation.
AWS Ground Station is a fully managed service that lets customers control satellite communications, process satellite data, and scale their satellite operations. Customers can integrate their space workloads with other AWS services in real time using Amazon’s low-latency, high-bandwidth global network. Customers can stream their satellite data to Amazon EC2 for real-time processing, store data in Amazon S3 for low-cost archiving, or apply AI/ML algorithms to satellite images with Amazon SageMaker. With AWS Ground Station, customers pay only for the actual antenna time they use.
Starting this week, customers of AWS Cost Anomaly Detection will see a new interface in the console, where they can view and analyze anomalies and their root causes. AWS Cost Anomaly Detection monitors customers’ spending patterns to detect and alert on anomalous (increased) spend, and to provide root cause analyses.
The main benefits from this update are:
- Clearer separation between the sections in the Anomaly Details page that detail the identified anomaly and its potential underlying root causes
- Information on both the date where an anomaly was first identified and where it was last observed in the anomaly history sections of the Overview and Monitor Details pages
- An explicit on-screen error message calling out permission gaps in the Activate IAM Access setting of the root user (instead of a generic error message)
- Updated link to the relevant FAQ page in instances where no root causes are identified for a given anomaly
For more details, visit the Cost Anomaly Detection documentation.
Amazon Elastic Kubernetes Service (Amazon EKS) now allows you to more easily run workloads from various Kubernetes namespaces on AWS Fargate serverless compute with a single EKS Fargate Profile. Using Amazon EKS on AWS Fargate enables you to use Kubernetes without having to worry about compute infrastructure configuration and maintenance. Previously, you had to specify all the namespaces at the time you created the EKS Fargate Profile and were limited to a total of 5 namespace selectors or label pairs.
Many customers use the same Kubernetes cluster, but different namespaces for each team or application. With Fargate Profile selector wildcards, you can use simple wildcard characters, like * and ?, to specify that Kubernetes workloads in any matching namespace or with any matching label should run on Fargate serverless compute. For example, you can specify that all Kubernetes pods in any namespace matching *-staging run on Fargate. This namespace selector would match both my-team-staging and other-team-staging, since * matches any number of characters, including none.
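The wildcard semantics described above can be sketched with a small matcher; this is an illustrative reimplementation for clarity, not the EKS code itself.

```python
import re

def fargate_selector_match(pattern: str, namespace: str) -> bool:
    """Mimic Fargate profile wildcard matching: '*' matches any run of
    characters (including none) and '?' matches exactly one character."""
    regex = "".join(
        ".*" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in pattern
    )
    return re.fullmatch(regex, namespace) is not None

# Both staging namespaces match the *-staging selector from the example above.
assert fargate_selector_match("*-staging", "my-team-staging")
assert fargate_selector_match("*-staging", "other-team-staging")
assert not fargate_selector_match("*-staging", "production")
```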
AWS Trusted Advisor Priority is now generally available for AWS Enterprise Support customers, helping IT leaders focus on key cloud optimization opportunities through curated recommendations prioritized by their AWS account teams. AWS Trusted Advisor provides recommendations that help you follow AWS best practices across cost optimization, performance, security, reliability, and service quotas.
On 3/1/2022, Trusted Advisor Priority launched in preview, using the growing roster of best practice checks to determine the most helpful recommendations with business context. This week, Trusted Advisor Priority is generally available with new features. These features include allowing delegation of Trusted Advisor Priority admin rights to up to five AWS Organizations member accounts, sending daily or weekly email digests to alternate contacts in the account, allowing you to set IAM access policies for Trusted Advisor Priority, and more. Customers with an AWS Enterprise Support plan can now log in to their management or delegated administrator account and start viewing, tracking, and taking actions on curated, prioritized recommendations today.
You can now use AWS Resilience Hub with Elastic Load Balancing (ELB) and Amazon Route 53 Application Recovery Controller readiness checks to help meet your application’s recovery objectives. Resilience Hub provides you with a single place to define, validate, and track the resilience of your applications so that you can avoid unnecessary downtime caused by software, infrastructure, or operational disruptions.
Using Resilience Hub, you can now assess your application’s ELB configuration, including Application Load Balancer, Gateway Load Balancer, Network Load Balancer, and Classic Load Balancer. ELB automates the distribution of incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones. In addition to assessments, Resilience Hub provides ELB configuration recommendations to help meet your application’s recovery time objective (RTO) and recovery point objective (RPO).
Resilience Hub also integrates with Route 53 Application Recovery Controller readiness checks. Route 53 Application Recovery Controller gives you insights into whether your applications and resources are ready for recovery, and helps you manage and coordinate failover. With this integration, Resilience Hub can now assess your Route 53 Application Recovery Controller configuration and recommend improvements to help achieve your application’s RTO and RPO.
The newly supported services are available in all of the AWS Regions where Resilience Hub is supported. See the AWS Regional Services List for the most up-to-date availability information.
Amazon SageMaker Canvas now enables faster onboarding by allowing users to import data from their local disk without additional steps. SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code. SageMaker Canvas makes it easy to access and combine data from a variety of sources, automatically clean data, and build ML models to generate accurate predictions with a few clicks.
SageMaker Canvas allows users to import data from a variety of sources including Amazon S3, Amazon Redshift, Snowflake, and local disk. Starting today, users can directly upload datasets from their local disk to SageMaker Canvas, without the need to contact their administrators as the required permissions are enabled by default. For administrators, the “Enable Canvas permissions” setting is enabled while setting up a domain, allowing SageMaker to attach a cross-origin resource sharing (CORS) policy to the default Amazon S3 bucket for uploading local files. If administrators don’t want domain users to upload local files automatically, they can choose to disable this setting.
Amazon OpenSearch Service now provides improved visibility into validation failures during domain updates. You can monitor the progress of a domain update, which could involve a blue/green deployment, from the OpenSearch Service console, or through the configuration APIs. OpenSearch Service will publish any validation failure events to Amazon EventBridge. You can also view these validation events in the Notifications tab of the OpenSearch Service console.
Depending on the type of change being made, OpenSearch Service might use a blue/green deployment process to update a domain. A blue/green deployment is the practice of creating a new environment for domain updates, and routing users to the new environment after the updates are complete. This practice minimizes downtime and maintains the original environment in the event that deployment to the new environment is unsuccessful. Blue/green deployments are required in OpenSearch Service for activities such as service software updates and for certain types of configuration changes, such as changing the instance type or modifying advanced settings. For a complete list of changes that require a blue/green deployment, see Making configuration changes in Amazon OpenSearch Service.
In January 2022, AWS started providing better visibility into the progress of blue/green deployments and their various stages, such as creating a new environment, provisioning instances, and copying shards. With this launch, AWS has introduced a new validation stage, where OpenSearch Service checks your domain for common issues that might cause a blue/green deployment to fail. If your domain matches any of these conditions, the blue/green deployment is not triggered. Some of these scenarios include, but are not limited to, the presence of red indices in the cluster, unavailability of a chosen instance type, low disk space, and configuration issues related to Amazon Cognito or AWS Key Management Service (AWS KMS).
You can view the progress of a domain update, including details of any validation failures, under the Domain status tab in the OpenSearch Service console, or using the DescribeDomainChangeProgress API. You can also use the events that OpenSearch Service publishes to Amazon EventBridge to monitor failures. After you address the issue that caused the validation to fail, you can retry the configuration change from the OpenSearch Service console, or resubmit the change using the configuration APIs.
AWS is excited to announce the availability of Amazon EC2 P4d instances in the AWS GovCloud (US) Region. P4d instances are optimized for applications in Machine Learning (ML) and High Performance Computing (HPC).
P4d instances are powered by NVIDIA A100 Tensor Core GPUs and feature second generation Intel® Xeon® Scalable (Cascade Lake) processors, 1.1TB of system memory, and 8TB of local NVMe storage. These instances deliver 400 Gbps instance networking with support for Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect RDMA (remote direct memory access) to enable efficient scale-out of multi-node ML training and HPC workloads.
EC2 P4d instances are optimized for machine learning training workloads such as natural language understanding, perception model training for autonomous vehicles, image classification, object detection and recommendation engines. HPC customers can use P4d’s increased processing performance and GPU memory for applications such as computational fluid dynamics simulations, seismic analysis, drug discovery, DNA sequencing, and financial risk modeling.
SageMaker Pipelines is a tool that helps you build machine learning pipelines that take advantage of direct SageMaker integration. SageMaker Pipelines now supports creating and testing pipelines on your local machine (e.g. your computer). With this launch, you can test your SageMaker Pipelines scripts and parameter compatibility locally before running them on SageMaker in the cloud.
SageMaker Pipelines local mode supports the following steps: processing, training, transform, model, condition, and fail. These steps give you the flexibility to define various entities in your machine learning workflow. Using Pipelines local mode, you can quickly and efficiently debug errors in your scripts and pipeline definition. You can seamlessly switch your workflows from local mode to SageMaker's managed environment by updating the session.
Amazon Redshift Query Editor v2, a free web-based tool for data exploration and analysis using SQL, is now enhanced with additional ease-of-use and security capabilities. Amazon Redshift Query Editor v2 simplifies the process for admins and end users to connect to Amazon Redshift clusters using their Identity Provider (IdP). As an administrator, you can now integrate your Identity Provider (IdP) with the AWS console to access Query Editor v2 as a federated user. You need to configure your IdP to pass in the database user and (optionally) database groups by adding specific principal tags as SAML attributes.
Amazon Redshift Query Editor v2 supports running multiple queries concurrently. You can now run queries in parallel while others perform long-running tasks, such as copying large files into a table.
In addition to the above enhancements, administrators can now change global settings in Query Editor v2, such as limiting the rows displayed per page, capping the maximum database connections per user, enabling or disabling the export result sets feature for users in the account, and setting the display limit with the limit switch. With this release, Amazon Redshift Query Editor v2 also provides a simpler way for admins to check the current KMS key ARN via account settings.
Amazon Aurora now offers customers the option to use Internet Protocol version 6 (IPv6) addresses in their Amazon Virtual Private Cloud (VPC) on new and existing Amazon Aurora instances. Customers moving to IPv6 can simplify their network stack by running their databases on a network that supports both IPv4 and IPv6.
The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude and customers no longer need to manage overlapping address spaces in their VPCs. Customers can standardize their applications on the new version of Internet Protocol by moving to IPv6 with a few clicks in the AWS Management Console.
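A minimal sketch of opting an instance into dual-stack networking, assuming the RDS API's NetworkType parameter accepts DUAL for Aurora as described in the announcement; the instance identifier is a placeholder.

```python
# Hedged sketch: enabling dual-stack (IPv4 + IPv6) networking on an Aurora
# instance through the RDS API's NetworkType parameter.

dual_stack_params = {
    "DBInstanceIdentifier": "my-aurora-instance",  # placeholder identifier
    "NetworkType": "DUAL",  # assumed values: "IPV4" (default) or "DUAL"
}
# With credentials and a dual-stack VPC subnet group in place you would call:
#   import boto3
#   boto3.client("rds").modify_db_instance(**dual_stack_params,
#                                          ApplyImmediately=True)
```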
Amazon Lex is a service for building conversational interfaces into any application using voice and text. With Amazon Lex, you can quickly and easily build conversational bots (“chatbots”), virtual agents, and interactive voice response (IVR) systems. Today, we are excited to announce support for conditional branching capability so you can easily design conversations for your users. Instead of implementing custom code, you can add simple conditions directly to your Lex bot, and manage the conversation path dynamically based on user input and business knowledge.
Conditions can be configured to handle scenarios at every turn of the conversation. For example, consider a bot designed to handle car rental calls. You can configure a customized welcome greeting based on customer profile (if customer = premium) or branch the interaction based on the user input (if rental = SUV). You can add up to four conditions and configure the bot response and next step for each condition. You can use a default condition to manage the conversation in case none of the conditions are triggered. By incorporating conditional branching in your bot, you reduce dependency on custom code and expedite the design and delivery of conversational interfaces.
AWS App Mesh adds support for multiple listeners, allowing you to run applications with several open ports in a mesh. This enables you to control and secure inbound and outbound traffic for different application ports, as well as to collect port-specific metrics for this traffic. AWS App Mesh is a service mesh that provides application-level networking to make it easier for your services to communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate, giving you end-to-end visibility and options to tune for high-availability of your applications.
Now you can define a listener for each application port on your AWS App Mesh Virtual Gateways, Virtual Nodes and Virtual Routers, and define traffic routes to a specific listener. You can configure each listener independently, secure them with individual TLS certificates, and collect traffic metrics separately for each application port.
Starting today, Amazon EC2 High Memory instances with 12TB of memory (u-12tb1.112xlarge) are available in the US East (Ohio) region. Additionally, high memory instances with 6TB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge) are now available in the South America (Sao Paulo) region and instances with 3TB of memory (u-3tb1.56xlarge) are now available in the South America (Sao Paulo) and Asia Pacific (Sydney) regions.
Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.
All High Memory instances are available in US East (N. Virginia), US West (Oregon), and Europe (Frankfurt, Ireland) AWS regions. Additionally, instances with 3TB of memory (u-3tb1.56xlarge) are now available in the South America (Sao Paulo), Asia Pacific (Sydney) and Europe (Milan) regions. Instances with 6TB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge) are also available in the US East (Ohio), AWS GovCloud (US), South America (Sao Paulo), Asia Pacific (Mumbai, Seoul, Singapore, Sydney), and Europe (Paris, Stockholm) regions. Instances with 12TB of memory (u-12tb1.112xlarge) are additionally available in AWS GovCloud (US-West) and Asia Pacific (Singapore) regions. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.
AWS Lambda now supports custom Consumer Group IDs when using Amazon Managed Streaming for Apache Kafka (MSK) or self-managed Kafka as an event source. Kafka uses Consumer Group IDs to identify consumer membership and record consumer checkpoints. Using a custom Consumer Group ID is ideal for customers with workloads that require disaster recovery or failover support.
Lambda makes it easy to consume events from Kafka topics at scale. When Lambda starts consuming from a topic, it presents a Consumer Group ID. This has always been a randomly generated unique value, which ensures that Lambda is identified as a new consumer group for the topic and that processing starts at the specified position (Latest or Trim horizon). Now, with a specified Consumer Group ID, Lambda no longer needs to be identified as a new consumer group. When Kafka identifies Lambda as an existing consumer group, consumption instead starts from where the consumer group left off, or from the Trim horizon if the recorded offset is no longer valid. In disaster recovery workflows, customers using Apache MirrorMaker 2 can use Lambda with a custom Consumer Group ID to resume processing from a mirrored Kafka cluster.
You can get started with custom Consumer Group IDs for Amazon MSK and Self-Managed Kafka via AWS Management Console, AWS CLI, AWS SAM, or AWS SDK for Lambda. It can be used at no additional cost in all regions where AWS Lambda is available.
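As an illustrative sketch, an MSK event source mapping with a custom Consumer Group ID might be assembled like this with boto3; the function name, cluster ARN, and topic are placeholders.

```python
# Hedged sketch: event source mapping parameters for MSK with a custom
# Consumer Group ID. All resource names below are placeholders.

msk_mapping = {
    "FunctionName": "my-consumer-fn",
    "EventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/demo/abc",
    "Topics": ["orders"],
    "StartingPosition": "TRIM_HORIZON",
    # The custom group ID lets Lambda resume from this group's recorded offsets
    "AmazonManagedKafkaEventSourceConfig": {"ConsumerGroupId": "orders-processor"},
}
# boto3.client("lambda").create_event_source_mapping(**msk_mapping)
# (Self-managed Kafka uses SelfManagedKafkaEventSourceConfig instead.)
```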
AWS App Mesh introduces support for customizable Envoy access log format for Virtual Nodes and Virtual Gateways. It enables you to diagnose your services with customized logging focusing on specific aspects that are important to you. AWS App Mesh is a service mesh that provides application-level networking to make it easier for your services to communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate, giving you end-to-end visibility and options to tune for high-availability of your applications.
Now you can specify the desired access log pattern using any Envoy command operators. You can also use JSON or text format to define the pattern. This configuration makes it easier to export the Envoy access log file to other tools for further analysis, which may require a specific format or pattern.
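To make the idea concrete, here is a sketch of what a virtual node's logging section might look like with a JSON-format access log built from Envoy command operators. The field names and nesting are illustrative of the App Mesh spec shape, not a verbatim API document:

```python
# Sketch: a virtual node logging section using Envoy command operators
# (%START_TIME%, %REQ(...)%, %RESPONSE_CODE%). Treat the exact key names
# as an assumption to verify against the App Mesh API reference.
logging_spec = {
    "accessLog": {
        "file": {
            "path": "/dev/stdout",
            "format": {
                "json": [
                    {"key": "start_time", "value": "%START_TIME%"},
                    {"key": "method", "value": "%REQ(:METHOD)%"},
                    {"key": "response_code", "value": "%RESPONSE_CODE%"},
                ]
            },
        }
    }
}
print(logging_spec["accessLog"]["file"]["format"])
```

A text pattern would replace the "json" list with a single "text" string of the same operators.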
AWS Glue has upgraded its serverless Python Shell jobs to add Python 3.9 support and an updated bundle of pre-loaded libraries. These jobs allow you to write complex data integration and analytics jobs in pure Python.
AWS Glue Python Shell jobs now offer 19 common analytics libraries out of the box, including Pandas, NumPy, and AWS Data Wrangler. AWS Glue also updated its existing libraries to newer versions that deliver bug fixes and performance enhancements. Customers can use the bundled functionality to connect to a variety of databases, data warehouses, and AWS services. They can also load more libraries at runtime, including high-performance libraries written in C.
You can now use an Amazon Interactive Video Service (Amazon IVS) basic channel for HD (720p) and Full HD (1080p) quality streams, in addition to SD (480p). The expanded functionality is designed to enable streamers and viewers globally to enjoy higher video quality when using a basic channel. A basic channel will deliver only the original input video quality to viewers, whereas a standard channel provides multiple qualities of output, allowing better playback quality across a range of devices and network conditions.
Pricing for video inputs for a basic channel remains the same whether using SD, HD, or Full HD inputs. Video output pricing will match the quality of the input used.
Amazon Rekognition Custom Labels is an automated machine learning (AutoML) service that allows customers to build custom computer vision models to detect objects and scenes specific to their business without in-depth machine learning expertise. Starting today, Custom Labels can automatically scale inference units of a trained model based on customer workload. This reduces model inference cost as customers no longer need to over-provision inference units to support spiky or fluctuating image volumes.
Previously, Custom Labels customers with unpredictable workloads had to set minimum inference units to support the peak volume of images they expected to process. This resulted in higher costs as the minimum inference units were consumed even if volumes were lower or absent. With autoscaling support, customers can now set both minimum and maximum inference units. Custom Labels dynamically adjusts inference units up or down within the specified minimum and maximum inference units based on image volumes. Customers are only charged for the inference units they consume.
Note that the minimum inference unit allowed is 1. As an example, say a customer specifies a minimum of 1 inference unit and a maximum of 5. If the workload consumes 5 inference units for 5 hours and 1 inference unit for the rest of the day, the customer is charged only $176 (19 hours x $4 per hour x 1 inference unit + 5 hours x $4 per hour x 5 inference units). Without autoscaling, the customer would have been charged $480 (24 hours x $4 per hour x 5 inference units).
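The arithmetic above can be sketched as a small helper, using the $4 per inference unit per hour rate from the example:

```python
RATE = 4  # USD per inference unit (IU) per hour, as in the example above

def daily_cost(usage):
    """usage: list of (hours, inference_units) segments covering the day."""
    return sum(hours * RATE * ius for hours, ius in usage)

# Autoscaling between 1 and 5 IUs: 19 h at 1 IU, 5 h at the 5 IU peak.
with_autoscaling = daily_cost([(19, 1), (5, 5)])
# Fixed provisioning at the 5 IU peak for the full 24 hours.
without_autoscaling = daily_cost([(24, 5)])

print(with_autoscaling, without_autoscaling)  # 176 480
```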
Amazon Managed Streaming for Apache Kafka Serverless (Amazon MSK Serverless) is now integrated with AWS CloudFormation and HashiCorp Terraform, which allows customers to describe and provision Amazon MSK Serverless clusters using code. These services make it easy to provision and configure Amazon MSK Serverless clusters in a repeatable, automated, and secure manner.
AWS IoT SiteWise now supports bulk import of historical measurements from sensors with the launch of three new APIs. The new bulk import APIs help customers ingest historical operations data stored in diverse systems, such as historians and time series databases, into AWS IoT SiteWise.
With the bulk import APIs, industrial customers can perform file-based bulk data ingestion to ensure all the historical data for their assets is available in AWS IoT SiteWise. Once their data is in AWS IoT SiteWise, customers can visualize it in applications such as AWS IoT SiteWise Monitor and Grafana, or retrieve it using the AWS IoT SiteWise APIs for consumption by other analytical applications.
To start a bulk import, customers upload a CSV file with their historical data, organized in a predefined format, to Amazon S3. Once the CSV file is uploaded, customers start the asynchronous import using the CreateBulkImportJob API, and can monitor the progress of the job using the DescribeBulkImportJob and ListBulkImportJobs APIs.
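A minimal sketch of producing one row of that CSV is below. The column order shown (alias, data type, timestamp, nano offset, quality, value) is an assumption about the predefined format, so check the SiteWise bulk import documentation for the exact schema; the property alias is a made-up example:

```python
import csv
import io

# Sketch: one data point in the bulk-import CSV. Column order is assumed:
# ALIAS, DATA_TYPE, TIMESTAMP_SECONDS, TIMESTAMP_NANO_OFFSET, QUALITY, VALUE
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["/factory/line1/temperature", "DOUBLE", 1660000000, 0, "GOOD", 21.5])
print(buf.getvalue().strip())

# The uploaded S3 object is then referenced by the asynchronous job, e.g.:
# iotsitewise.create_bulk_import_job(jobName=..., files=[{"bucket": ..., "key": ...}], ...)
```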
AWS Compute Optimizer is a service that recommends optimal AWS resources for your workloads to reduce costs and improve performance by using machine learning to analyze historical utilization metrics.
Starting this week, you can designate a member account in your organization to retrieve Compute Optimizer recommendations and manage Compute Optimizer preferences, giving you greater flexibility to identify resource optimization opportunities centrally.
You can centrally manage and govern your environment as you grow and scale your AWS resources with AWS Organizations, giving you the ability to programmatically create new accounts and allocate resources, consolidate your billing, create groups of accounts to organize your workflows, and apply policies to these groups for governance.
Compute Optimizer continuously analyzes the resource utilization of your Amazon EC2 instances, Amazon EBS volumes, and AWS Lambda functions for all accounts within the organization, helping identify rightsizing opportunities to reduce costs and improve performance.
To get started, you can designate a member account as the delegated administrator for Compute Optimizer using the Compute Optimizer console. After the account is registered as the delegated administrator, you can use that account to set up Compute Optimizer to identify resource optimization opportunities for all accounts within the organization.
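For teams automating this, delegation is an AWS Organizations operation. A hedged sketch, with a placeholder account ID and assuming the standard Compute Optimizer service principal:

```python
# Sketch: register a member account as the Compute Optimizer delegated
# administrator. The account ID is a placeholder; the service principal
# string is assumed to be the Compute Optimizer one.
params = {
    "AccountId": "111122223333",
    "ServicePrincipal": "compute-optimizer.amazonaws.com",
}
print(params)
# The call would be made from the organization's management account, e.g.:
# boto3.client("organizations").register_delegated_administrator(**params)
```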
Amazon WorkSpaces Web is now generally available in AWS Canada (Central), Europe (Frankfurt), and Europe (London) regions. Creating a WorkSpaces Web portal in a local region provides a more responsive experience for users when streaming web content. It also enables customers to meet local data residency obligations. WorkSpaces Web is now available in a total of 10 regions.
Amazon WorkSpaces Web is a low-cost, fully managed workspace built specifically to facilitate secure access to internal websites and software-as-a-service (SaaS) applications from existing web browsers, without the administrative burden of appliances or specialized client software. With WorkSpaces Web, you can provide users with access to web-based productivity tools from any browser while protecting internal content with enterprise controls.
WorkSpaces Web offers low, predictable, pay-as-you-go pricing. Customers pay only a low monthly price for employees who actively use the service, eliminating the risk of over-buying. There are no up-front costs, licenses, or long-term commitments. For more information, see the Amazon WorkSpaces Web pricing page.
Amazon RDS for SQL Server expands support for M6i, R6i and R5b instances in additional AWS regions. M6i instances are available today in Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Canada (Central), Europe (London), Europe (Paris) and South America (Sao Paulo) regions. R6i instances are available today in Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (London), Europe (Paris), South America (Sao Paulo) and US West (N. California). R5b instances are available today in Asia Pacific (Seoul), Asia Pacific (Sydney), Canada (Central), Europe (Ireland), Europe (London), South America (Sao Paulo).
M6i and R6i instances are powered by 3rd generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.5 GHz, delivering improved compute price performance over equivalent M5 and R5 instances. To meet customer demands for increased scalability, M6i and R6i instances provide a new instance size of 32xlarge with 128 vCPUs and 33% more memory than the largest M5 and R5 instances. M6i.32xlarge has 512 GiB of memory and R6i.32xlarge has 1,024 GiB of memory. They also provide up to 20% higher memory bandwidth per vCPU compared to the previous 5th generation instances. These instances give customers up to 50 Gbps of networking speed and 40 Gbps of bandwidth to the Amazon Elastic Block Store, 2x that of M5 and R5 instances.
R5b instance types, powered by the AWS Nitro System, are available in 8 sizes with up to 96 vCPUs and 768 gigabytes of memory. R5b instance types are ideally suited for relational database workloads, delivering up to 60 gigabits per second of EBS-optimized bandwidth and 64,000 IOPS, an increase of up to 3X over R5 instance types. The increase in EBS-optimized performance enables large databases to migrate to Amazon RDS. In addition, existing Amazon RDS customers can migrate their DB instances to smaller-sized R5b instances and still meet their workload IOPS requirements, saving on compute and licensing costs.
Amazon CloudFront now supports HTTP version 3 (HTTP/3) requests over QUIC for end user connections. HTTP/3 uses QUIC, a user datagram protocol (UDP) based, stream-multiplexed, secure transport protocol that combines and improves upon the capabilities of existing transmission control protocol (TCP), TLS, and HTTP/2. HTTP/3 offers several benefits over previous HTTP versions, including faster response times and enhanced security.
Customers are constantly looking to deliver faster and more secure applications to their users. As internet penetration increases globally and more users come online via mobile and from remote networks, the need for improved performance and reliability is greater than ever. HTTP/3 is an improvement over previous HTTP versions, and helps customers improve performance and end-viewer experience by reducing connection times and eliminating head-of-line blocking. CloudFront's HTTP/3 support is built on top of s2n-quic, a new open-source QUIC protocol implementation in Rust, with a strong emphasis on efficiency and performance.
CloudFront’s HTTP/3 implementation supports client-side connection migration, allowing client applications to recover connections that are experiencing problematic events, such as Wi-Fi to cellular migration or persistent packet loss, with minimal or no interruption. Additionally, HTTP/3 provides enhanced security because QUIC encrypts the TLS handshake packets by default. CloudFront customers that have enabled HTTP/3 on their distributions have seen up to 10% improvement in time to first byte, and up to 15% improvement in page load times. Customers have also observed reliability improvements, with fewer handshake failures after enabling HTTP/3 on their distributions.
To enable HTTP/3 on your distributions, you can edit the distribution configuration through the CloudFront Console, the UpdateDistribution API action, or using a CloudFormation template. Clients that do not support HTTP/3 can still communicate with HTTP/3 enabled Amazon CloudFront distributions using previous HTTP versions.
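Programmatically, the change is a single field in the distribution configuration. A hedged sketch, with a placeholder distribution ID in the comments:

```python
# Sketch: the DistributionConfig fragment that turns on HTTP/3. CloudFront
# accepts "http3" (HTTP/3 only) and "http2and3" (negotiate HTTP/2 or HTTP/3);
# clients without HTTP/3 support fall back to earlier HTTP versions.
distribution_config_patch = {"HttpVersion": "http2and3"}
print(distribution_config_patch)

# The fetched config would be merged and sent back via UpdateDistribution, e.g.:
# cf = boto3.client("cloudfront")
# cfg = cf.get_distribution_config(Id="E123EXAMPLE")  # placeholder ID
# cfg["DistributionConfig"].update(distribution_config_patch)
# cf.update_distribution(Id="E123EXAMPLE", IfMatch=cfg["ETag"],
#                        DistributionConfig=cfg["DistributionConfig"])
```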
HTTP/3 is now available on all 410+ CloudFront edge locations worldwide, and there is no additional charge for using this feature. To learn more about Amazon CloudFront HTTP/3, refer to the CloudFront Developer Guide.
AWS Config now supports 20 new resource types, including Amazon SageMaker, Amazon Route 53, Amazon Elastic Kubernetes Service (Amazon EKS), AWS Global Accelerator, and AWS Glue, among others.
With this launch, you can now use AWS Config to monitor configuration data for the newly supported resource types in your AWS account. AWS Config provides a detailed view of the configuration of AWS resources in your AWS account, including how resources were configured and how the configuration changes over time.
Get started by enabling AWS Config in your account using the AWS Config console or the AWS Command Line Interface (AWS CLI). Select the newly supported resource types for which you want to track configuration changes. If you previously configured AWS Config to record all resource types, the new resources will be recorded in your account automatically. AWS Config support for the new resources is available to AWS Config customers in all regions where the underlying resource type is available. To view a complete list of all supported types, see the supported resource types page.
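If you record only selected types rather than all resource types, the new types are opted into via the recorder's recording group. A sketch, where the specific resource type strings are illustrative examples to confirm against the supported resource types page:

```python
# Sketch: an AWS Config recording group that tracks a chosen subset of
# resource types instead of recording everything. Type names below are
# illustrative, not a verified list of the 20 new types.
recording_group = {
    "allSupported": False,
    "includeGlobalResourceTypes": False,
    "resourceTypes": [
        "AWS::SageMaker::Model",
        "AWS::Route53Resolver::ResolverEndpoint",
        "AWS::EKS::FargateProfile",
    ],
}
print(recording_group)

# Applied to the recorder via the Config API, e.g.:
# boto3.client("config").put_configuration_recorder(ConfigurationRecorder={
#     "name": "default", "roleARN": role_arn, "recordingGroup": recording_group})
```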
Google Cloud Releases and Updates
Anthos clusters on VMware 1.10.6-gke.36 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.10.6-gke.36 runs on Kubernetes 1.21.14-gke.2100.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.
Policy Controller has been updated to include a more recent build of OPA Gatekeeper (hash: 8f1ef8c).
Updated the built-in OpenTelemetry image to v0.54.0 to include a bug fix for using ADC with the Cloud Spanner receiver.
Fixed the reconciler Pod CrashLoopBackoff issue caused by the git-sync container's start ordering.
Added the cluster-autoscaler.kubernetes.io/safe-to-evict: "true" annotation to the reconciler Pod so that it does not block Cluster Autoscaler scale-down.
Anthos Service Mesh
1.14.3-asm.1 is now available.
This patch release contains a fix for the known issue with the signatures of the revisions released August 11, 2022.
On August 15, 2022, GCP released the preview version of the Oracle DB connector for Apigee. For more information, see Create an Oracle DB connection.
Cloud console updates: You can now copy BigQuery metadata to your clipboard by using the following options:
In the Schema view, to copy a table's schema, select any fields, and then click Copy.
In the Explorer pane, to copy the ID of a resource, click View actions, and then click Copy ID.
Cloud console updates: Improvements include the following:
Query results are now displayed in resizable columns.
Tab titles now expand when space is available for longer names.
Tooltips no longer display text immediately when you hold the pointer over them, avoiding unnecessary distraction.
In the Explorer pane, you can now access saved queries by expanding your project. The Saved Queries pane is no longer at the bottom of the console.
In the Explorer pane, you can now find a table by using search.
In the query editor, you can now press the F1 shortcut key to view more editor shortcuts.
Workforce identity federation lets you authenticate and authorize users from external identity providers to access supported Google Cloud products, including BigQuery resources. This feature is now in preview.
Previously, you could commit up to 100 GB in streaming bytes for every Storage Write API pending mode commit that you triggered in regions other than the US and EU multi-regions. This limit is now 1 TB. For more information, see Storage Write API quotas.
The following supported default parsers have changed. Each is listed by product name and ingestion label, if applicable.
- Akamai WAF (AKAMAI_WAF)
- Arista Switch (ARISTA_SWITCH)
- AWS CloudWatch (AWS_CLOUDWATCH)
- AWS GuardDuty (GUARDDUTY)
- AWS Macie (AWS_MACIE)
- AWS Route 53 DNS (AWS_ROUTE_53)
- AWS WAF (AWS_WAF)
- Azure AD (AZURE_AD)
- Azure AD Organizational Context (AZURE_AD_CONTEXT)
- Bitdefender (BITDEFENDER)
- Bluecat DDI (BLUECAT_DDI)
- Centrify (CENTRIFY_SSO)
- Check Point (CHECKPOINT_FIREWALL)
- Cisco Application Centric Infrastructure (CISCO_ACI)
- Cisco ISE (CISCO_ISE)
- Custom DNS (CUSTOM_DNS)
- Cylance Protect (CYLANCE_PROTECT)
- Elastic Windows Event Log Beats (ELASTIC_WINLOGBEAT)
- FireEye (FIREEYE_ALERT)
- Forcepoint Proxy (FORCEPOINT_WEBPROXY)
- FortiGate (FORTINET_FIREWALL)
- IBM z/OS (IBM_ZOS)
- Linux DHCP (LINUX_DHCP)
- Microsoft AD FS (ADFS)
- Microsoft Azure Resource (AZURE_RESOURCE_LOGS)
- Microsoft Defender for Endpoint (MICROSOFT_DEFENDER_ENDPOINT)
- Microsoft SQL Server (MICROSOFT_SQL)
- Nasuni File Services Platform (NASUNI_FILE_SERVICES)
- Palo Alto Prisma Cloud (PAN_PRISMA_CLOUD)
- Ping Identity (PING)
- Riverbed Steelhead (STEELHEAD)
- SiteMinder Web Access Management (CA_SSO_WEB)
- Snoopy Logger (SNOOPY_LOGGER)
- Stormshield Firewall (STORMSHIELD_FIREWALL)
- Symantec Endpoint Protection (SEP)
- Tanium Stream (TANIUM_TH)
- VMware ESXi (VMWARE_ESX)
- VMware Horizon (VMWARE_HORIZON)
- Windows Event (WINEVTLOG)
- Windows Sysmon (WINDOWS_SYSMON)
For details about changes in each parser, see Supported default parsers.
Chronicle curated detections provide out-of-the-box threat detection content curated, built, and maintained by Google Cloud Threat Intelligence (GCTI) researchers. This release of curated detections covers the following range of threats:
- Windows-based threats: Coverage for several classes of threats including infostealers, ransomware, RATs, misused software, and crypto activity.
- Cloud attacks and cloud misconfigurations: Secure cloud workloads with additional coverage around exfiltration of data, suspicious behavior, and additional vectors.
You can now configure new data feeds for your Chronicle account using Feed Management. This feature makes it possible for you to set up your own data feeds without the assistance of Chronicle support personnel. You can set up new data feeds using either the Feed Management user interface or the Feed Management API. Chronicle returns error messages if you have misconfigured a feed and need to make changes.
Cloud Bigtable-BigQuery federation is now generally available (GA). You can use BigQuery to query data from Cloud Bigtable and blend it with data from other federated data sources. For more information, see Querying Cloud Bigtable data.
Cloud Data Fusion
Changes in 6.7.1:
Enhanced the Dataproc provisioner to prevent unneeded Compute Engine calls, depending on the configuration settings.
For new Dataproc compute profiles, changed the default value of Master Machine Type from
Health checks for internal load balancers and automatic failovers in Cloud DNS routing policies are now available in Preview.
You can now manage an alias record, which maps an alias domain name to a canonical name at the zone apex, by using Cloud DNS.
Dedicated Interconnect support is available in the following colocation facilities:
- DATA4 Milan-Cornaredo, Milan
- Telehouse - Paris 2 (Voltaire - Léon Frot), Paris
For more information, see the Locations table.
Cloud Monitoring is introducing pricing for uptime checks, effective October 1, 2022. For more information, see Cloud Monitoring pricing summary.
The GKE Clusters List page now includes a new Observability tab that displays Monitoring data. This tab shows infrastructure health metric trends such as CPU, memory, container restarts and control-plane metrics. It also provides visibility into ingestion into Google Cloud Managed Service for Prometheus and Cloud Logging. For more information, see View observability metrics.
The DISABLE_INLINE hint is now available for use in a Google Standard SQL function call. It allows a function to be computed once instead of each time another part of a query references it.
Cloud SQL for MySQL
By enabling instance deletion protection, you can prevent the accidental removal of Cloud SQL instances. This functionality is generally available.
For more information, see Prevent deletion of an instance.
Cloud SQL for PostgreSQL / SQL Server
By enabling instance deletion protection, you can prevent the accidental removal of Cloud SQL instances. This functionality is generally available.
For more information, see Prevent deletion of an instance.
Deep Learning Containers
- TensorFlow has been updated to 2.9.1, 2.8.1, and 2.6.5 to include upstream changes.
- Regular package refreshment and bug fixes.
Deep Learning VM Images
- TensorFlow has been updated to 2.9.1, 2.8.1, and 2.6.5 to include upstream changes.
- Updated to the latest NVIDIA driver version: 510.47.03.
- The latest NVIDIA driver version does not support K80 GPUs. To use K80 GPUs, you must use an M94 or earlier environment.
- Fixed a bug in which the user was prompted with the warning "JupyterLab build is suggested" on startup for TensorFlow Deep Learning VMs.
- Regular package refreshment and bug fixes.
Dialogflow CX and ES have new tutorials that walk through the steps of deploying a Dialogflow agent on Google Cloud, integrating with Cloud Functions, Spanner, and App Engine.
Version 1.21.13-gke.900 is now the default version in the Stable channel.
Version 1.20.15-gke.11400 is now available in the Stable channel.
Version 1.20.15-gke.9900 is no longer available in the Stable channel.
Control planes and nodes with auto-upgrade enabled in the Stable channel will be upgraded from version 1.19 to version 1.20.15-gke.11400 with this release.
Control planes and nodes with auto-upgrade enabled in the Stable channel will be upgraded from version 1.20 to version 1.21.13-gke.900 with this release.
Control planes and nodes with auto-upgrade enabled in the Stable channel will be upgraded from version 1.21 to version 1.21.13-gke.900 with this release.
The GKE Clusters List page now includes a new Observability tab. This tab shows infrastructure health metric trends such as CPU, Memory, container restarts and Control Plane metrics. It also provides visibility into ingestion into Google Cloud Managed Service for Prometheus and Cloud Logging. For more information, see View observability metrics.
Workforce identity federation now lets users from external identity providers sign in to the Google Cloud workforce identity federation console, also known as the console (federated). The console (federated) provides UI access to supported Google Cloud products. This feature is available in Preview.
Cloud IoT Core will be retired on August 16, 2023. After August 15, 2023, the documentation for IoT Core will no longer be available.
- deps: allow protobuf < 5.0.0 (#762) (260bd18)
- deps: require proto-plus >= 1.22.0 (260bd18)
- set stream_ack_deadline to max_duration_per_lease_extension or 60 s, set ack_deadline to min_duration_per_lease_extension or 10 s (#760) (4444129)
- Update stream_ack_deadline with ack_deadline (#763) (e600ad8)
Edge Appliance is now generally available (GA).
Edge Appliance is a Google Cloud-managed, secure, high-performance appliance for edge locations. It provides local storage, ML inference, data transformation, and export.
You can now place your Transfer Appliance into suspend mode before moving it to a new location. Suspend mode removes access to data on the device and suspends any transfers.
Learn more from the Suspend section of the documentation.
Vertex Explainable AI
Vertex Explainable AI now offers Preview support for example-based explanations. For more information, see Configure example-based explanations for custom training.
Microsoft Azure Releases And Updates
JetStream DR enables cost-effective disaster recovery by consuming minimal resources at the DR site and using cost-effective cloud storage. JetStream DR now also automates recovery to Azure NetApp Files datastores.
Network security groups (NSGs) support for private endpoints is now generally available.
User-defined routes (UDRs) support for private endpoints is now generally available.
General availability enhancements and updates released for Azure SQL.
Hierarchical forecasting, now generally available, offers you the capability to produce consistent forecasts for all levels of your data.
The latest PostgreSQL minor versions 14.5, 13.8, 12.12, and 11.17 are available for Azure Database for PostgreSQL – Hyperscale (Citus).
Enable server logs and log types for detailed insights into server activity, helping you identify and troubleshoot potential issues.
Gain up to a 55 percent discount with reserved pricing on the Enterprise tiers of Azure Cache for Redis by committing to a one- or three-year term.
Use S3-compatible object storage as a backup and restore destination.
Standard tier subscribers can synchronize any updates from the configuration store to replicas in other Azure regions.
Announcing public preview for enabling container insights with managed identity authentication.
Azure Monitor Logs is in public preview in four new regions: US Gov Virginia, US Gov Arizona, China East 3, and China North 3.
Container insights agent is being renamed from OMSAgent to Azure Monitor agent, along with some related resource name changes. Please update your alerts, queries, and scripts to avoid any issues.
Easily create Dockerfiles, deployment files, and GitHub Actions through Visual Studio Code using the AKS DevX extension.
You can now use Azure Dedicated Hosts with AKS.
Securely manage your keys in production using bring-your-own-key (BYOK) encryption for etcd with a key management service (KMS).
You can now use built-in storage classes for NFS and BlobFuse to provision new persistent volumes backed by Azure Blob Storage.
You can now hook up a GitHub Action to your Kubernetes application and deploy it to AKS.
Support for 11 policy definitions increases the security of Azure API Management services and the APIs they expose.
Enable a secure OAuth 2.0 authorization code flow when using Azure Active Directory or Azure Active Directory B2C identity providers.
Investigate alert incidents with the Azure Monitor Logs connector scoped to the exact time range of the alert.
You can now use Dapr release 1.8 with Azure Container Apps.
Self-service, high-performance workstations in the cloud that are preconfigured and ready-to-code for developers’ projects and tasks.
The first hyperscale datacenter region in Qatar is now available.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here: