This week's roundup of all the cloud news.
Here's a cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday 24th June 2022.
AWS Updates and Releases
Amazon SageMaker Ground Truth helps you build high-quality training datasets for your machine learning (ML) models. With SageMaker Ground Truth, you can use workers from Amazon Mechanical Turk, a vendor company that you choose, or your own private workforce to create labeled datasets for training ML models.
Starting this week, you can now use SageMaker Ground Truth to create and run a labeling job inside an Amazon Virtual Private Cloud (VPC), instead of connecting over the internet. This allows you to use Ground Truth while keeping your data in S3 buckets that are logically isolated and secure in your Amazon VPC.
Starting this week, compute-optimized Amazon EC2 C7g instances are available in US East (Ohio) and Europe (Ireland). C7g instances are the first instances powered by the latest AWS Graviton3 processors and deliver up to 25% better performance over Graviton2-based C6g instances for a broad spectrum of applications such as application servers, microservices, batch processing, electronic design automation (EDA), gaming, video encoding, scientific modelling, distributed analytics, high performance computing (HPC), CPU-based machine learning (ML) inference, and ad serving.
AWS Graviton3 processors are the latest generation of custom-designed AWS Graviton processors that enable the best price performance for workloads in Amazon Elastic Compute Cloud (Amazon EC2). They offer up to 2x better floating-point performance, up to 2x faster crypto performance, and up to 3x better ML performance, including support for bfloat16, compared to AWS Graviton2 processors. Graviton3-based C7g instances are the first generally available instances in the cloud to feature the latest DDR5 memory, which provides 50% more memory bandwidth than DDR4 to enable high-speed access to data in memory. Graviton3-based instances also use up to 60% less energy than comparable EC2 instances for the same performance, enabling you to reduce your carbon footprint in the cloud. Amazon EC2 C7g instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. They offer up to 30 Gbps of enhanced networking bandwidth and up to 20 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).
Amazon QuickSight now supports monitoring of QuickSight assets by sending metrics to Amazon CloudWatch. QuickSight developers and administrators can use these metrics to observe and respond to the availability and performance of their QuickSight ecosystem in near real time. They can monitor dataset ingestions, dashboards, and visuals to provide their readers with a consistent, performant, and uninterrupted experience on QuickSight.
There are four categories of QuickSight CloudWatch metrics:
- Ingestion metrics can be used to monitor performance and availability of data ingestions (IngestionInvocationCount, IngestionRowCount, IngestionErrorCount and IngestionLatency).
- Dashboard metrics can be used to monitor performance of dashboards when viewed by readers (DashboardViewLoadTime and DashboardViewCount).
- Visual metrics can be used to monitor the performance and availability of visuals (VisualLoadTime and VisualLoadErrorCount).
- Aggregate metrics can be used to monitor performance and availability of all ingestions, dashboards, and visuals in a Region of the account.
Administrators and developers can also use the CloudWatch console to graph metric data generated by Amazon QuickSight. For more information, see Graphing metrics in the Amazon CloudWatch User Guide. They can also create a CloudWatch alarm that monitors CloudWatch metrics for their QuickSight assets. CloudWatch will automatically send a notification when the metric reaches a specified threshold.
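As a minimal sketch, the alarm step described above might look like the following with boto3. The dataset ID is a placeholder and the dimension name is an assumption; check the metrics your account actually emits before relying on them:

```python
# Sketch: alarm on QuickSight ingestion errors via CloudWatch.
# The dimension name and dataset ID below are assumptions/placeholders.
alarm_params = {
    "AlarmName": "quicksight-ingestion-errors",
    "Namespace": "AWS/QuickSight",
    "MetricName": "IngestionErrorCount",
    "Dimensions": [{"Name": "DatasetId", "Value": "example-dataset-id"}],  # placeholder
    "Statistic": "Sum",
    "Period": 300,               # evaluate over 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",  # alarm on any error
}

# With credentials configured, you would pass these parameters to boto3:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```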
Starting this week, Amazon Elastic Compute Cloud (Amazon EC2) G5 instances powered by NVIDIA A10G Tensor Core GPUs are available in Asia Pacific (Mumbai, Tokyo), Europe (Frankfurt, London), and Canada (Central). G5 instances can be used for a wide range of graphics-intensive and machine learning use cases. They deliver up to 3x higher performance for graphics-intensive applications and machine learning inference, and up to 3.3x higher performance for training simple to moderately complex machine learning models when compared to Amazon EC2 G4dn instances.
G5 instances feature up to 8 NVIDIA A10G Tensor Core GPUs and 2nd generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.6 TB of local NVMe SSD storage. With eight G5 instance sizes that offer access to single or multiple GPUs, customers have the flexibility to pick the right instance size for their applications.
Amazon SageMaker Ground Truth now supports generating labeled synthetic data, so you can train ML models without collecting large amounts of real-world, manually labeled data. Amazon SageMaker provides two data labeling offerings, Amazon SageMaker Ground Truth Plus and Amazon SageMaker Ground Truth. You can use both options to identify raw data (such as images, text files, and videos) and add informative labels to create high-quality training datasets for your machine learning (ML) models.
SageMaker Ground Truth can generate labeled synthetic data on your behalf so that you can use synthetic data with real-world data to train ML models across a wide range of computer vision use cases. You specify your synthetic image requirements or provide 3D assets and baseline images, and AWS digital artists can generate hundreds of thousands of synthetic images that are automatically labeled. The generated images imitate pose and placement of objects, include object or scene variations, and optionally add specific inclusions, such as scratches, dents, and other alterations that are not often included in ML training datasets.
This week AWS announced the preview of Amazon CodeWhisperer, an ML-powered coding companion. When writing code, developers must keep up with multiple programming languages, frameworks, software libraries, and popular cloud services. However, they can accelerate the development process with CodeWhisperer by simply writing a comment in their IDE’s code editor. CodeWhisperer automatically analyzes the comment, determines which cloud services and public libraries are best suited for the specified task, and recommends a code snippet directly in the source code editor. CodeWhisperer code recommendations are based on ML models trained on various data sources, including Amazon and open-source code. Developers can accept the top recommendation, view more recommendations, or continue writing their own code.
CodeWhisperer provides security scans (for Java and Python) to help developers detect vulnerabilities in their projects and build applications responsibly. The service also includes a reference tracker that detects whether a code recommendation might be similar to particular training data. Developers can then easily find and review the code example and decide whether to use the code in their project. Additionally, CodeWhisperer empowers developers to avoid bias by removing code recommendations that might be considered biased and unfair.
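To illustrate the comment-driven workflow, a developer might write an intent comment and receive a suggestion like the one below. This is an illustrative example of the interaction pattern, not actual CodeWhisperer output:

```python
# Developer writes an intent comment; CodeWhisperer proposes a snippet.
# The function below is the kind of completion the service might suggest.

# find the n most common words in a piece of text
from collections import Counter

def most_common_words(text: str, n: int = 3) -> list:
    """Return the n most frequent words with their counts."""
    words = text.lower().split()
    return Counter(words).most_common(n)
```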
Starting this week, Amazon EC2 C6gd instances are available in Asia Pacific (Seoul) Region. C6gd instances are ideal for compute-intensive workloads such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference. C6gd instances offer up to 50% more NVMe storage GB/vCPU over comparable x86-based instances and are ideal for applications that need high-speed, low latency local storage.
Amazon EC2 C6gd instances are powered by AWS Graviton2 processors. The AWS Graviton processors are custom-designed by AWS to enable the best price performance in Amazon EC2. AWS Graviton2 processors are the second-generation Graviton processors and deliver a major leap in performance and capabilities over first-generation AWS Graviton processors, with 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory. AWS Graviton2 processors feature always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to the first-generation AWS Graviton processors. C6gd instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. These instances offer up to 25 Gbps of network bandwidth, up to 19 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS), and up to 3.8 TB of NVMe-based SSD storage.
The Amazon Relational Database Service (Amazon RDS) Multi-AZ deployment option with one primary and two readable standby database (DB) instances across three Availability Zones (AZs) now supports M5d and R5d instances. This deployment option gives you up to 2x lower transaction commit latency, automated failovers typically under 35 seconds, and readable standby instances.
M5d and R5d DB instances are powered by Intel Xeon Platinum 8000 series processors and increase the DB instance choices for Amazon RDS Multi-AZ with two readable standbys. R5d DB instances offer up to 768 GiB of memory for applications that balance high-performance writes and reads. Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Amazon RDS DB instances, making them a natural fit for production database workloads. While continuing to use network storage for durability, Multi-AZ deployments with two readable standbys optimize transaction commit performance using local instance storage. This configuration supports up to 2x faster transaction commits than a Multi-AZ DB instance deployment with one standby, without compromising data durability. Automated failovers in this configuration typically take under 35 seconds. In addition, the standby DB instances can also serve read traffic without needing to attach additional read replica DB instances. This deployment option is ideal when your workloads require lower write latency, automated failovers, and more read capacity.
Starting this week, AWS Site-to-Site VPN supports the ability to deploy IPSec VPN connections over Direct Connect using private IP addresses. With this change, customers can encrypt DX traffic between their on-premises network and AWS without the need for public IP addresses, thus enabling enhanced security and network privacy at the same time.
AWS Site-to-Site VPN is a fully-managed service that creates a secure connection between your data center or branch office and your AWS resources using IP Security (IPSec) tunnels. Until now, you were required to use a public IP address to connect your on-premises networks to AWS VPCs. Many customers require robust encryption of network traffic over Direct Connect and at the same time are not allowed to use public IP addresses for this communication. With this launch, you can configure private IP addresses (RFC1918) on your IPSec VPN tunnels over Direct Connect and ensure that traffic between AWS and on-premises networks is both encrypted and private. This feature improves your overall security posture and allows you to better comply with any regulatory or security mandates.
To get started, create a private IP VPN connection to an AWS transit gateway over Direct Connect, and specify the outside IP address type to be a private IP. You need to specify the appropriate Transit Gateway Direct Connect attachment that you wish to use as transport for this private IP VPN connection. You can route traffic over the private IP VPN connection between AWS and your remote network using either BGP (dynamic) or by configuring static routes in Transit Gateway route tables. This feature is available through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the AWS SDKs.
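As a sketch, the setup steps above map to parameters like these for the EC2 CreateVpnConnection API; all IDs are placeholders, and field names should be verified against the current boto3 documentation:

```python
# Sketch: private IP VPN connection over a Direct Connect attachment.
# All resource IDs below are placeholders.
vpn_params = {
    "Type": "ipsec.1",
    "CustomerGatewayId": "cgw-0123456789abcdef0",   # placeholder
    "TransitGatewayId": "tgw-0123456789abcdef0",    # placeholder
    "Options": {
        "OutsideIpAddressType": "PrivateIpv4",      # use private outside IPs
        # The Transit Gateway DX attachment used as transport (placeholder):
        "TransportTransitGatewayAttachmentId": "tgw-attach-0123456789abcdef0",
    },
}

# With credentials configured, you would pass these parameters to boto3:
# import boto3
# boto3.client("ec2").create_vpn_connection(**vpn_params)
```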
Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL major version 14 (14.3). PostgreSQL 14 includes performance improvements for parallel queries, heavily concurrent workloads, partitioned tables, logical replication, and vacuuming. PostgreSQL 14 also improves functionality with new capabilities. For example, you can cancel long-running queries if a client disconnects, and you can close idle sessions if they time out. Range types now support multiranges, allowing representation of non-contiguous data ranges, and stored procedures can now return data via OUT parameters. This release includes new features for Babelfish for Aurora PostgreSQL version 2.1. Please refer to Amazon Aurora PostgreSQL updates for more information.
To use the new version, create a new Aurora PostgreSQL-compatible database instance with just a few clicks in the Amazon RDS Management Console. You can also upgrade existing database instances. Please review the Aurora documentation to learn more about upgrading. PostgreSQL 14 is available in all regions supported by Aurora PostgreSQL. Refer to the Aurora version policy to help you to decide how often to upgrade and how to plan your upgrade process.
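A minimal sketch of the upgrade path for an existing cluster, assuming boto3; the cluster identifier is a placeholder, and you should review the Aurora upgrade documentation (and test in a staging environment) first:

```python
# Sketch: in-place major version upgrade of an existing Aurora
# PostgreSQL cluster to 14.3. The identifier is a placeholder.
upgrade_params = {
    "DBClusterIdentifier": "my-aurora-cluster",  # placeholder
    "EngineVersion": "14.3",
    "AllowMajorVersionUpgrade": True,   # required for a major version jump
    "ApplyImmediately": True,           # or False to wait for the maintenance window
}

# With credentials configured, you would pass these parameters to boto3:
# import boto3
# boto3.client("rds").modify_db_cluster(**upgrade_params)
```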
Amazon Quantum Ledger Database (Amazon QLDB) launches a new Console Query Editor providing an improved interface for authoring queries, debugging transactions, and exploring results. The new editor supports tabs for simple management of multiple queries, PartiQL syntax highlighting, query performance statistics, multi-statement transactions, and a timer to track the transaction duration limit. You can search and filter your results across the table view, Ion document view, or CSV view for easy exploration of your results in the format you prefer. Query results can also be downloaded in Ion and CSV formats.
Amazon Relational Database Service (Amazon RDS) Custom is now available in the Asia Pacific (Mumbai) and Europe (London) AWS Regions.
Amazon RDS Custom is a managed database service for legacy, custom, and packaged applications that require access to the underlying OS and DB environment. Amazon RDS Custom is available for the Oracle and SQL Server database engines. Amazon RDS Custom automates setup, operation, and scaling of databases in the cloud while granting access to the database and underlying operating system to configure settings, install drivers, and enable native features to meet the dependent application's requirements.
Amazon Relational Database Service (Amazon RDS) for PostgreSQL and for MySQL now supports the Multi-AZ deployment option with one primary and two readable standby database (DB) instances in the Europe (Frankfurt) and Europe (Stockholm) Regions. This deployment option gives you up to 2x lower transaction commit latency, automated failovers typically under 35 seconds, and readable standby instances.
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Amazon RDS DB instances, making them a natural fit for production database workloads. While continuing to use network storage for durability, Multi-AZ deployments with two readable standbys optimize transaction commit performance using local instance storage on M5d and R5d instances. This configuration supports up to 2x faster transaction commits than a Multi-AZ DB instance deployment with one standby, without compromising data durability. Automated failovers in this configuration typically take under 35 seconds. In addition, the standby DB instances can also serve read traffic without needing to attach additional read replica DB instances. This deployment option is ideal when your workloads require lower write latency, automated failovers, and more read capacity.
Amazon Relational Database Service (Amazon RDS) Custom for Oracle now supports Oracle Database versions 12.2 and 18c. Amazon RDS Custom is a managed database service for applications that require customization of the underlying operating system and database environment. With support now added for 12.2 and 18c, you can now run your legacy, packaged and customized applications that are dependent on these database versions on Amazon RDS Custom for Oracle.
To create an Amazon RDS Custom for Oracle DB instance, you start by building a custom engine version (CEV) by supplying your own database installation media files for a given version. You can create a CEV by choosing the ‘Create’ operation under Custom engine version menu in AWS Management Console. Alternatively, you can create a CEV using the AWS Command Line Interface (AWS CLI).
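The CEV creation step above can be sketched as follows with boto3. The bucket, KMS key, and version string are illustrative placeholders; the manifest must list the actual installation media files you uploaded:

```python
import json

# Sketch: creating a custom engine version (CEV) for RDS Custom for
# Oracle. Bucket, KMS key, and version string are placeholders.
cev_params = {
    "Engine": "custom-oracle-ee",
    "EngineVersion": "18.0.0.0.ru-2022-01.rur-2022-01.r1",  # example version string
    "DatabaseInstallationFilesS3BucketName": "my-cev-media-bucket",   # placeholder
    "KMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/example-key", # placeholder
    # The manifest describes your uploaded installation media; the
    # template version shown here is illustrative.
    "Manifest": json.dumps({"mediaImportTemplateVersion": "2020-08-14"}),
}

# With credentials configured, you would pass these parameters to boto3:
# import boto3
# boto3.client("rds").create_custom_db_engine_version(**cev_params)
```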
This week, AWS announced AWS Direct Connect support for all AWS Local Zones in the United States. Your network traffic now takes the shortest path between Direct Connect point of presence (PoP) locations and AWS resources running in Local Zones. This feature reduces the distance network traffic must travel, decreasing latency and helping make applications more responsive.
With Direct Connect, you can transfer data privately and directly from your data center, office, or colocation environment into and out of AWS. Connections through Direct Connect bypass the public internet to help decrease network congestion and unpredictability. Local Zones are a type of infrastructure deployment that places compute, storage, database, and other select AWS services close to large population and industry centers. Previously, connectivity from a Direct Connect PoP to a resource running in a Local Zone, except for the Los Angeles Local Zone, would flow through the parent AWS Region of the Local Zone. With today’s release, network traffic takes the shortest path between Direct Connect PoPs and Local Zones in the United States, helping you deliver applications that require single-digit millisecond latency to end-users or on-premises resources.
AWS Lake Formation is a service that allows you to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.
Creating a data lake with Lake Formation allows you to define where your data resides and what data access and security policies you want to apply. Lake Formation then collects and catalogs data from databases and object storage, moves the data into your new Amazon S3 data lake, cleans and classifies data using machine learning algorithms, and secures access to your sensitive data. Your users can then access a centralized catalog of data which describes available data sets and their appropriate usage. Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift Spectrum, AWS Glue, Amazon QuickSight, and Amazon Athena.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka versions 3.1.1 and 3.2.0 for new and existing clusters. Apache Kafka 3.1.1 and Apache Kafka 3.2.0 include several bug fixes and new features that improve performance. Some of the key features include enhancements to metrics and the use of topic IDs. MSK will continue to use and manage Apache ZooKeeper for quorum management in this release for stability. For a complete list of improvements and bug fixes, see the Apache Kafka release notes for 3.1.1 and 3.2.0.
Amazon MSK is a fully managed service for Apache Kafka that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is 100% compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on applications and less time managing clusters. To learn how to get started, see the Amazon MSK Developer Guide.
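Upgrading an existing cluster to one of the new versions might look like the following sketch; the ARN and cluster version are placeholders (the current cluster version comes from a DescribeCluster call):

```python
# Sketch: upgrading an existing MSK cluster to Apache Kafka 3.2.0.
# The ARN and CurrentVersion values are placeholders; CurrentVersion
# is the cluster's configuration revision, not the Kafka version.
update_params = {
    "ClusterArn": "arn:aws:kafka:us-east-1:111122223333:cluster/demo/abc-123",  # placeholder
    "CurrentVersion": "K3AEGXETSR30VB",  # placeholder, from describe_cluster
    "TargetKafkaVersion": "3.2.0",
}

# With credentials configured, you would pass these parameters to boto3:
# import boto3
# boto3.client("kafka").update_cluster_kafka_version(**update_params)
```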
AWS CodeBuild is now available in the AWS Asia Pacific (Jakarta) Region. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. Using CodeBuild, you are charged by the minute for the compute resources you use.
AWS Well-Architected Tool now allows customers to preview custom lens content before publishing, add additional URLs to helpful resources and improvement plans, and use tags to assign metadata to their custom lenses.
AWS Well-Architected Tool is designed to help you review the state of your applications and workloads, and it provides a central place for architectural best practices and guidance. Customers who use the AWS Well-Architected Tool often have internal best practices they follow, and custom lenses provide the ability to create content based on their internal best practices. With a custom lens, customers can create their own pillars, questions, best practices, helpful resources, and improvement plans. Customers can then share them across their entire organization to measure workloads consistently, specify rules to determine which options result in high or medium risk, and provide guidance on how to resolve those risks.
To help lens authors validate their lenses, they can now preview their custom lens content before publishing. To allow authors to provide more resource information for best practices, they can now add additional URLs to helpful resources and improvement plans within a custom lens. Additionally, customers can now use AWS tags to assign metadata to a custom lens. With custom lenses, AWS Well-Architected Tool becomes a single place for customers to review and measure best practices while performing associated operational reviews for all technology across their organization.
AWS App2Container (A2C) now supports Azure DevOps for setting up a CI/CD pipeline to automate building and deploying container applications on AWS. With this release, customers can use App2Container to automate the setup of an Azure DevOps pipeline for managing automated build and deployment of containerized applications. App2Container automates the build pipeline setup by installing the required tooling, such as the AWS Toolkit and the Docker engine. In addition, App2Container also sets up the release pipeline using existing Azure DevOps Services accounts to deploy the containerized image to AWS container services. This is in addition to the AWS CodePipeline and Jenkins support already included in App2Container.
AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on premises, or in the cloud. You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions.
Today, Amazon Elastic Container Registry (Amazon ECR) launched support for AWS PrivateLink in the Asia Pacific (Osaka) Region. Now you can access the Amazon ECR API from your Amazon Virtual Private Cloud (Amazon VPC) in the Osaka Region without using public IPs and without requiring the traffic to traverse the internet.
AWS PrivateLink is a networking technology designed to enable access to AWS services in a highly available and scalable manner, while keeping the network traffic within the AWS network. When you create an AWS PrivateLink endpoint for Amazon ECR in the Osaka Region, the service endpoints appear as elastic network interfaces with a private IP address in your Amazon VPC. Using the AWS PrivateLink endpoint, workloads can send traffic to and from the ECR APIs within the Osaka Region without exposing it to the public internet.
We are excited to announce the general availability of hardware connectivity modules powered by AWS IoT ExpressLink, developed and offered by AWS Partners such as Espressif, Infineon, and u-blox. These modules enable easy AWS cloud connectivity and implement AWS-mandated security requirements for device-to-cloud connections. By integrating these wireless modules into their hardware designs, customers can now accelerate the development of their Internet of Things (IoT) products, including consumer products and industrial and agricultural sensors and controllers.
Developers of all skill levels can now quickly and easily transform their products into IoT devices without having to merge large amounts of code or have a deep understanding of the underlying implementation. The connectivity modules come pre-provisioned with security credentials, allowing you to off-load complex networking and cryptography tasks to the module and develop IoT products that connect securely to the cloud in weeks rather than months. Through seamless integration with a range of AWS IoT services such as AWS IoT Core, AWS IoT Device Shadow, and more, modules that use AWS IoT ExpressLink can easily access over 200 AWS cloud services.
Amazon ECS now fully supports multiline logging powered by AWS for Fluent Bit for both AWS Fargate and Amazon EC2. AWS for Fluent Bit is an AWS distribution of the open-source project Fluent Bit, a fast and lightweight log forwarder. Amazon ECS users can use this feature to re-combine partial log messages produced by containerized applications running on AWS Fargate or Amazon EC2 into a single message for easier troubleshooting and analytics.
The best practice for containerized applications is to send logs to the standard output streams of the operating system (stdout or stderr). The AWS Fargate container runtime splits log messages exceeding the 16 KB buffer size into partial messages for optimal performance. As a result, users can face challenges working with long application log messages, such as stack traces, when they arrive at the final destination, like an analytics solution or log storage.
AWS for Fluent Bit now supports a multiline filter, a capability that helps concatenate partial log messages that originally belong to one context but were split across multiple records or log lines, for both ECS on EC2 and Fargate. Customers can use AWS for Fluent Bit to route logs from their containerized applications to AWS services, such as Amazon CloudWatch and Amazon Kinesis Data Firehose, or partner solutions for log analytics and storage. Amazon ECS customers can use FireLens to configure AWS for Fluent Bit, or set up AWS for Fluent Bit as a sidecar or daemon manually.
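A minimal sketch of the multiline filter configuration, assuming the log key is `log` and using the built-in `java` parser for stack traces; match pattern and parser choice are illustrative:

```
[FILTER]
    Name                  multiline
    Match                 *
    multiline.key_content log
    multiline.parser      java
```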
Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. We continuously improve the underlying machine learning models based on customer feedback to provide even better accuracy. Today, we are pleased to announce a quality enhancement to our Forms extraction feature.
Amazon Textract now provides enhanced key-value pair extraction accuracy for standardized documents with consistent layouts, like select CMS (Centers for Medicare & Medicaid Services) healthcare, IRS tax, and ACORD insurance forms. These documents have traditionally been challenging to extract information from due to their dense and complex layouts. Textract is now able to utilize its knowledge of these standardized forms to provide higher accuracy in key-value pair extraction. Customers across industries like insurance, healthcare, and banking utilize these documents in their business processes and will automatically see the benefits of this update when they use Textract’s Forms extraction feature.
Amazon QuickSight launches custom subtotals at all levels on pivot tables. QuickSight authors can now customize how subtotals are displayed in a pivot table, with options to display subtotals for the last level, all levels, or a selected level. This customization is available for both rows and columns.
Additionally, Amazon QuickSight introduces the ability to show or hide columns in pivot tables. QuickSight authors can now hide column, row, and value fields in a pivot table from the field wells context menu, similar to tables, to support advanced analysis use cases.
AWS WAF Captcha is now available for all customers. AWS WAF Captcha helps block unwanted bot traffic by requiring users to successfully complete challenges before their web requests are allowed to reach AWS WAF protected resources. You can configure AWS WAF rules to require WAF Captcha challenges to be solved for specific resources that are frequently targeted by bots, such as login, search, and form submissions. You can also require WAF Captcha challenges for suspicious requests based on the rate, attributes, or labels generated from AWS Managed Rules, such as AWS WAF Bot Control or the Amazon IP Reputation list. WAF Captcha challenges are simple for humans while remaining effective against bots. WAF Captcha includes an audio version and is designed to meet WCAG accessibility requirements.
AWS WAF Captcha launched on 4th Nov 2021 in the US East (N. Virginia), US West (Oregon), Europe (Frankfurt), South America (Sao Paulo), and Asia Pacific (Singapore) AWS Regions and supports Application Load Balancer, Amazon API Gateway, and AWS AppSync resources. AWS WAF Captcha is now available in all commercial AWS regions, AWS GovCloud (US) Regions and supports Amazon CloudFront resources.
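A rule requiring a CAPTCHA on a login path, as described above, can be sketched like this. The rule name, priority, and URI are illustrative; the shape follows the wafv2 API:

```python
# Sketch of a WAF rule that requires a CAPTCHA challenge for requests
# whose URI path starts with /login. Name, priority, and path are
# illustrative only.
captcha_rule = {
    "Name": "captcha-on-login",
    "Priority": 10,
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": b"/login",          # boto3 expects bytes here
            "FieldToMatch": {"UriPath": {}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "STARTS_WITH",
        }
    },
    "Action": {"Captcha": {}},                  # challenge instead of block
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "captcha-on-login",
    },
}
```

This rule would go in the `Rules` list of a web ACL passed to `create_web_acl` or `update_web_acl`.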
Anyone can now search and find publicly available data sets on AWS Data Exchange along with more than 3,000 existing data products from category-leading data providers across industries, all in one place.
Researchers and data enthusiasts can now find on AWS Data Exchange more than 100 petabytes of high-value, cloud-optimized data sets available for public use from leading organizations such as NOAA, NASA, or the UK Met Office. These include open data sets hosted by the AWS Open Data Sponsorship Program and in the Amazon Sustainability Data Initiative (ASDI) catalog, which aims to accelerate sustainability research and innovation by minimizing the cost and time required to acquire and analyze large sustainability data sets.
Once on AWS Data Exchange, you can explore the data catalog to find open data sets, along with other no-cost and paid products, all in one place to reduce time and without needing an AWS account. You can filter on the affiliated program of your choice to view data products part of the AWS Open Data Sponsorship Program or ASDI.
AWS WAF now supports evaluating multiple headers in the HTTP request, without the need to specify each header individually in AWS WAF rules. You can also use this new capability to easily inspect all cookies in the HTTP request, without the need to specify each cookie in WAF rules. This capability helps you protect your applications or API endpoints from attacks that try to exploit a custom header or cookie, or a common header for which you may not have created a WAF rule. You can also limit the scope of inspection to only included or excluded headers, and inspect only the keys or only the values for the headers or cookies you want to inspect.
For HTTP requests that may include more headers than WAF can inspect, you can provide oversize handling instructions when you define your rule statement. Oversize handling tells WAF what to do with a web request when the number or size of request headers is over the limits. With oversize handling, you can choose whether to continue inspection or skip inspection and mark the request as matching or not matching the WAF rule.
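The header inspection and oversize handling options above can be sketched as a `FieldToMatch` fragment for a wafv2 rule statement; the chosen values are illustrative:

```python
# Sketch: inspect all request headers (keys and values), and keep
# inspecting whatever fits in the limits when headers are oversize.
field_to_match = {
    "Headers": {
        "MatchPattern": {"All": {}},     # no per-header enumeration needed
        "MatchScope": "ALL",             # inspect both keys and values
        "OversizeHandling": "CONTINUE",  # or "MATCH" / "NO_MATCH"
    }
}
```

This fragment would be used as the `FieldToMatch` inside a statement such as a ByteMatchStatement or RegexPatternSetReferenceStatement.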
Starting today, the AWS Bills page has a re-designed user experience, making it easier to understand your AWS spend. The Bills page provides an overview of your AWS charges and the ability to drill into key details; the re-design provides a refreshed user interface, new views of your savings and taxes, and enhanced sorting and filtering capabilities.
The re-designed Bills page makes it easier to understand the usage amounts, prices, and discounts that comprise your AWS charges. Drill in by service, Region, or usage type (such as instance type, request type, or database engine) to view the details of your charges. New sorting and filtering capabilities make it easier to find the exact information you’re looking for, and new views make it easier to understand your savings and taxes. You can download invoice documents for your records or export a CSV file for additional analysis. For users of AWS Organizations, you can view aggregated charges for your Organization or view charges by member account. For users of AWS Billing Conductor, the AWS Bills page provides pro forma data to member accounts and the primary accounts of a billing group; management accounts can toggle between chargeable and pro forma data views.
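For additional analysis of an exported CSV, a few lines of Python can total charges by service. The column names below ("Service", "Region", "Charges") are assumptions for illustration; check the header row of your actual export before adapting this sketch.

```python
import csv
import io
from collections import defaultdict

# Stand-in for a CSV exported from the Bills page; the columns and figures
# here are invented for illustration.
sample_export = """Service,Region,Charges
Amazon EC2,us-east-1,120.50
Amazon S3,us-east-1,14.25
Amazon EC2,eu-west-1,60.10
"""

# Total the charges per service across all Regions.
totals = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample_export)):
    totals[row["Service"]] += float(row["Charges"])

# Print services in descending order of spend.
for service, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{service}: ${amount:.2f}")
```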
AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) now provides you the flexibility to update your directory settings. This makes it easier to meet your specific security and compliance requirements across all new and existing directories. Starting today, you can update your directory settings and AWS Managed Microsoft AD applies the updated settings to all domain controllers, automatically. You accomplish this using the AWS console or automating with AWS Command Line Interface (AWS CLI) and/or API.
Now, you can update fine-grained secure channel configuration for protocols and ciphers of your directory. For example, you can enable or disable individual encryption ciphers, such as RC4, and secure channel protocols, such as TLS 1.0, based on your security and compliance requirements.
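A sketch of what that settings update might look like with boto3 follows. The setting names ("RC4", "TLS_1_0") and the directory ID are assumptions for illustration; list the real setting names supported by your directory (for example with `aws ds describe-settings`) before adapting this.

```python
# Hypothetical directory ID for illustration only.
directory_id = "d-1234567890"

# Assumed setting names: disable the RC4 cipher and the TLS 1.0 protocol
# across all domain controllers.
settings = [
    {"Name": "RC4", "Value": "Disable"},
    {"Name": "TLS_1_0", "Value": "Disable"},
]

# With AWS credentials configured, the call would look like:
#   import boto3
#   ds = boto3.client("ds")
#   ds.update_settings(DirectoryId=directory_id, Settings=settings)
print(settings)
```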
We are excited to announce the general availability of the automated chatbot designer in Amazon Lex, enabling developers to automatically design chatbots from conversation transcripts in hours rather than weeks. Introduced at re:Invent in December 2021, the automated chatbot designer enhances the usability of Amazon Lex by automating conversational design, minimizing developer effort and reducing the time it takes to design a chatbot.
The automated chatbot designer uses machine learning (ML) to analyze conversation transcripts and provide a bot design. Developers can iterate on the design, add chatbot prompts and responses, integrate business logic to fulfill user requests, and then build, test, and deploy the chatbot in Amazon Lex. Since the preview launch, we have improved the quality of intent recommendations and diversity of utterances, introduced a click-through experience, and added usability enhancements. These updates further reduce the time and effort it takes to design a chatbot.
Starting this week, Amazon EC2 D3 instances, the latest generation of the dense HDD-storage instances, are available in the AWS Canada (Central) Region. D3 instances are powered by 2nd generation Intel Xeon Scalable Processors (Cascade Lake) with a sustained all core frequency up to 3.1 GHz. D3 instances provide up to 2.5x higher networking speed and 45% higher disk throughput compared to D2 instances. These instances are an ideal fit for workloads including distributed / clustered file systems, big data and analytics, and high capacity data lakes. With D3 instances, you can easily migrate from previous generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads.
D3 instances are available in 4 sizes ranging from 4 to 32 vCPUs, 32 to 256 GiB of memory, and 6 to 48 TB of local HDD storage. D3 instances include up to 25 Gbps of network bandwidth and up to 4.6 GiB/s of disk throughput and are optimized for access to the Amazon Elastic Block Store (EBS).
With this regional expansion, D3 instances are now available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and AWS GovCloud (US-West).
Amazon Connect Cases provides built-in case management capabilities that make it easy for your contact center agents to create, collaborate on, and quickly resolve customer issues that require multiple customer conversations and follow-up tasks, all without having to build custom applications or integrate with third-party products. Cases provides your agents with a unified timeline view of all activities associated with a customer case, including individual tasks that can be assigned and tracked across multiple agents. Additionally, case information can be used to answer customer questions in self-service IVR and chatbot interactions.
With Cases, businesses have the tools and information they need to be more productive, resolve issues faster, and improve customer satisfaction. For example, when a call or chat comes in, the flow can identify the customer, find the relevant case, and provide an update to the customer without agent interaction. Contact center managers can access these capabilities and configure case templates, case fields, and permissions from the Amazon Connect administrator website.
Amazon Connect outbound campaigns now offers organizations an embedded, cost-effective way to contact millions of customers daily for communications such as delivery notifications, marketing promotions, appointment reminders, or debt collection, without having to integrate with third-party tools. With outbound campaigns, formerly known as high-volume outbound communications, you can proactively communicate across voice, SMS, and email to quickly serve your customers and improve agent productivity. The new communication capabilities also include features to support compliance with local regulations such as TCPA through point-of-dial checks and calling controls for time of day, time zone, number of attempts per contact, and time required to connect to an available agent.
Additionally, outbound campaigns includes a predictive dialer and machine learning (ML)–powered answering machine detection, which optimize agent productivity and increase live-party connections by not wasting agents’ time with unanswered calls. Furthermore, you can use outbound campaigns to measure and monitor compliance metrics for regulatory requirements.
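To make the calling controls above concrete, here is a conceptual sketch of a point-of-dial check. This is not the Amazon Connect API, just an illustration of the logic; the calling window and attempt cap are invented values.

```python
from datetime import datetime, time
from typing import Optional
from zoneinfo import ZoneInfo

# Assumed TCPA-style local calling hours and per-contact attempt cap.
CALLING_WINDOW = (time(8, 0), time(21, 0))
MAX_ATTEMPTS = 3

def may_dial(contact_tz: str, attempts_so_far: int,
             now: Optional[datetime] = None) -> bool:
    """Return True only if the contact's local time is inside the calling
    window and the attempt cap has not been reached."""
    now = now or datetime.now(ZoneInfo(contact_tz))
    local = now.astimezone(ZoneInfo(contact_tz)).time()
    in_window = CALLING_WINDOW[0] <= local <= CALLING_WINDOW[1]
    return in_window and attempts_so_far < MAX_ATTEMPTS

# A contact in New York at 21:00 UTC (17:00 local) with one prior attempt:
print(may_dial("America/New_York", 1,
               datetime(2022, 6, 24, 21, 0, tzinfo=ZoneInfo("UTC"))))
```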
Amazon QuickSight Q can now accept full questions as input in embedded mode, without requiring users to type them in. This new feature allows developers to place questions as widgets at appropriate points in their web applications, making it easy for users to ask questions about their data within the context of their user journey.
Developers who have embedded Q in their web application can now optionally call setQBarQuestion() to submit a question to the Q bar based on user interactions such as clicks. Such questions are submitted to Q and answered instantly, opening the Q answer panel automatically. Developers can also close the answer panel programmatically by calling closeQPopover().
Google Cloud Releases and Updates
Anthos clusters on bare metal
Apigee Integrated Portal
On June 21, 2022, GCP released an updated version of the Apigee integrated portal.
On June 21, 2022, GCP also released an updated version of the Apigee UI.
The Data Collectors UI is now generally available.
A search bar has been added to the new Proxy Editor Develop view. This lets you search for items within a proxy or sharedflow bundle.
Query queues are now available in preview for on-demand and flat-rate customers. When query queues are enabled, BigQuery automatically determines the query concurrency rather than setting a fixed limit. Flat-rate customers can override this setting with a custom concurrency target. Additional queries beyond the concurrency target are queued until processing resources become available.
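The queueing behavior described above can be illustrated with a toy scheduler: queries beyond the concurrency target wait in a FIFO queue until a running slot frees up. The numbers here are invented; BigQuery determines the real concurrency dynamically.

```python
from collections import deque

def schedule(queries, concurrency_target):
    """Admit queries up to the concurrency target; queue the rest FIFO.
    A toy model of the behavior, not BigQuery's actual scheduler."""
    running, queued = [], deque()
    for q in queries:
        if len(running) < concurrency_target:
            running.append(q)
        else:
            queued.append(q)
    return running, list(queued)

running, queued = schedule(["q1", "q2", "q3", "q4", "q5"], concurrency_target=3)
print(running)  # admitted immediately
print(queued)   # wait until a running query finishes
```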
Preview: You can now get cost insights in the Recommender API, and use them to detect anomalies in your costs. For example, you see a cost insight in the API if your costs for a day are significantly higher or lower than your typical daily costs.
- Learn about using cost insights to detect anomalies in your costs.
- Read an overview of insights in Recommender.
In July 2022, Cloud Composer 2 environments created in Cloud console will use Private Service Connect configuration by default.
In July 2022, Cloud Composer 1 environments created in Cloud console will use the latest available version of Airflow 2 by default.
Cloud Load Balancing
Cloud Load Balancing introduces a new version of the external HTTP(S) load balancer. The new global external HTTP(S) load balancer with advanced traffic management capabilities contains many of the features of our existing classic HTTP(S) load balancer, but with an ever-growing list of traffic management capabilities such as weighted traffic splitting, request mirroring, outlier detection, fault injection, and so on.
For details on the new load balancer, see:
- External HTTP(S) Load Balancing overview
- Load balancer features (External HTTP(S) > Global)
- Setting up a global external HTTP(S) load balancer
- Traffic management for global external HTTP(S) load balancers
This load balancer is available in General Availability.
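Weighted traffic splitting, one of the traffic management capabilities listed above, can be sketched conceptually: requests are distributed across backend services in proportion to configured weights. The backend names and weights below are invented, and this models the idea rather than the load balancer's configuration API.

```python
import random

def pick_backend(weights, rng):
    """Choose a backend with probability proportional to its weight."""
    backends = list(weights)
    return rng.choices(backends, weights=[weights[b] for b in backends], k=1)[0]

# A hypothetical 95/5 canary rollout.
weights = {"stable": 95, "canary": 5}
rng = random.Random(42)  # seeded for reproducibility

sample = [pick_backend(weights, rng) for _ in range(1000)]
print(sample.count("canary"))  # roughly 5% of 1000 requests
```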
The PostgreSQL interface is now generally available, making the capabilities of Cloud Spanner accessible from the PostgreSQL ecosystem. It includes a core subset of the PostgreSQL SQL dialect, support for the psql command-line tool, native language clients, and integration into existing Google tools. For more information, see PostgreSQL interface.
Cloud SQL for PostgreSQL
Cloud SQL for PostgreSQL now supports replication from an external server.
The following PostgreSQL minor versions and extension versions are now available:
- 14.2 is upgraded to 14.3.
- 13.6 is upgraded to 13.7.
- 12.10 is upgraded to 12.11.
- 11.15 is upgraded to 11.16.
- 10.20 is upgraded to 10.21.
If you use maintenance windows, then you might not yet have these versions. In this case, you'll see the new versions after your maintenance update occurs. To find your maintenance window or to manage maintenance updates, see Find and set maintenance windows.
Cloud SQL for SQL Server
You can enable an instance to publish to a subscriber that is external (or internal) to Cloud SQL. In this scenario, Cloud SQL for SQL Server can act as a publisher to an external subscriber. This functionality, which is generally available, uses transactional replication.
For more information, see Configure external replicas.
In Cloud SQL, you can use SQL Server Audit capabilities to track and log server-level and database-level events. This functionality is generally available.
To deliver better default price-performance for applications, all GKE clusters created with control plane version 1.24 and later use Balanced Persistent Disk (PD) by default for attached volumes. Additionally, the node boot disk default has also been changed to Balanced Persistent Disk (PD).
The new default for attached volumes is applied to all clusters running control plane version 1.24 and later. The new default node boot disk is applied to all new node pools of any node pool version created in a cluster with control plane version 1.24 and later. Existing preferences will not be changed.
For more information on boot disks, see Configuring a custom boot disk.
For more information on attached volumes see Persistent volumes and dynamic provisioning.
The Recommendations AI documentation set at https://cloud.google.com/retail/recommendations-ai/docs will be removed on July 5, 2022. This documentation set describes how to use the Recommendations console to manage and monitor Recommendations AI. Google no longer recommends this console. After July 5, 2022, links to this documentation will redirect to the equivalent page in the Retail documentation at https://cloud.google.com/retail/docs.
Google recommends that you use the Retail console to manage Recommendations AI. Find the documentation for the Retail console at https://cloud.google.com/retail/docs.
If you have not yet switched from the Recommendations console to the Retail console, see Switch to the Retail console.
Private Service Connect supports publishing a service that is hosted on an internal TCP proxy load balancer in a service producer VPC network. The backends can be located in Google Cloud, in other clouds, in an on-premises environment, or any combination of these locations.
This feature is available in Preview.
Microsoft Azure Releases And Updates
Public preview updates made in late June 2022 for Azure SQL.
Generally available updates made in late June 2022 for Azure SQL.
Enable fetching secrets from an Azure Key Vault directly into the workload running on an Arc connected Kubernetes cluster.
Windows IoT on Arm64 brings increased functionality to devices.
New features include MLflow enhancements and train and deploy models in Azure hybrid and multi-cloud.
SQL Server 2016 customers can now use Azure without migrating, by offloading analytics or read-only workloads to Azure SQL Managed Instance (MI).
Get personalized recommendations based on your Azure Database for MySQL Flexible Server usage.
Use the new Azure Data Studio MongoDB extension for Azure Cosmos DB to manage all your MongoDB resources.
Azure Cosmos DB accounts can now take advantage of continuous backup with seven-day data retention and point-in-time restore capabilities.
Use new Go SDK features in your Azure Cosmos DB SQL API account including authentication with Azure Active Directory, the ability to execute single partition queries, and transactional batch support.
Query your Azure Cosmos DB containers even more efficiently with new query engine optimizations.
Get started using Azure Cosmos DB for free.
AKS release tracker allows you to see the status of AKS releases across Azure regions.
You can now use App Service, Container Apps, or API Management as an API backend in Azure Static Web Apps.
Develop Python 3.10 apps locally and deploy them to all Azure Functions plans.
You can now add custom certificate authorities to your AKS cluster(s).
Benefit from the latest features in Kubernetes release 1.24.
You can now use the HTTP proxy feature with all AKS add-ons.
Deploy and scale .NET 7-based web apps on an enterprise-grade service.
Linux users can now use hybrid connections manager without maintaining a Windows client.
Microsoft and STMicroelectronics have partnered to address the growing concern for security as a key barrier for cloud IoT market adoption with a security platform targeting microcontroller-based devices (MCUs).
The Embedded Wireless Framework for Azure RTOS defines a platform consistent API for embedded IoT developers to use wireless services in their applications.
Edge Secured-Core is a certification program that extends the Secured-Core label into IoT and Edge devices.
Send, receive, and peek messages on queues, topics, and subscriptions.
Support for OpenTelemetry and a new deployment option with Helm.
API Management Content Security Policy detects and mitigates common attacks in the developer portal and enables Captcha and OAuth in self-hosted portals.
You can now read and write properties without having to first create a device model/template.
Version 2 of the Node.js SDK for Durable Functions is now available.
Durable Functions are now supported when building Java applications in Azure Functions.
New feature added to Azure NetApp Files.
Independently throttle your event streaming workloads using application groups and client applications information.
Load real-time streaming data in Azure Event Hubs to data lakes, warehouses, and other storage services in Parquet format.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts, or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here: