This week's roundup of all the cloud news.
Here's a cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday September 30th 2022.
Well, the massive news from Hava this week is that you can use Hava for free with a single data source. This new tier is part of an overhaul of our pricing and plans which gives much more flexibility. You can now add as many data sources as you want on a low pay-per-source model, and you can extend the data retention of versioning.
You can find out more about the plan details and new flexible pricing here: https://www.hava.io/blog/pricing-and-plan-updates
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places.
AWS Updates and Releases
Source: aws.amazon.com
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now available in the Asia Pacific (Jakarta) AWS Region.
Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easy for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is fully compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch.
With Amazon MSK, you spend more time building innovative applications and less time managing clusters.
AWS X-Ray launches support for tracing PHP applications via OpenTelemetry in public preview
This week, AWS X-Ray adds support for tracing end-to-end requests in PHP applications via the AWS Distro for OpenTelemetry (ADOT) in public preview.
ADOT is a secure, AWS-supported distribution of the OpenTelemetry project. With this launch, you can now use the OpenTelemetry PHP SDK to instrument your PHP applications for tracing with X-Ray. X-Ray will help you to visualize and debug your distributed PHP applications in the CloudWatch console with its service map and trace analytics tools.
AWS customers can now use the AWS-supported OpenTelemetry components necessary to export trace data to AWS X-Ray, such as the X-Ray ID generator, X-Ray propagator, and resource detectors for Amazon ECS, Amazon EKS, Amazon EC2, and AWS Lambda services.
AWS customers can also use the AWS SDK for PHP to trace requests made to the AWS SDK without the need to manually record each request. By using ADOT, applications can be easily configured to export traces to AWS X-Ray.
AWS CloudShell is now generally available in the South America (São Paulo), Canada (Central), and Europe (London) regions.
AWS CloudShell is a browser-based shell that makes it easier to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials.
Common development tools are pre-installed so no local installation or configuration is required. With CloudShell you can run scripts with the AWS Command Line Interface (AWS CLI), define infrastructure with the AWS Cloud Development Kit (AWS CDK), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to increase your productivity.
Amazon SageMaker Canvas supports mathematical functions and operators for richer data exploration
Amazon SageMaker Canvas now supports mathematical functions and operators for richer data exploration, allowing you to define new features in your data. SageMaker Canvas is a visual point-and-click service that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code.
SageMaker Canvas supports a number of data transformations to filter, join, and modify datasets, and advanced visualizations to understand the relationships between variables in your data.
Starting this week, you can use mathematical functions and logical operators with these data transformations to understand the distribution of your data better prior to building ML models. The results from these functions and operators allow you to create new features that can be visualized for particular attributes.
Supported mathematical functions include add, subtract, multiply, divide, mean, standard deviation, variation, exponent, and log. Additionally, SageMaker Canvas supports logical operators such as if-then-else statements to define specific conditions and gives you the flexibility to understand, distribute, and explore your data better.
With this new capability, SageMaker Canvas enables binning - a data pre-processing technique. Binning is a method to group related numerical or categorical values into a smaller number of sets called bins.
As an example, if you have a dataset tracking furniture items, you can group them in different bins such as office furniture, living room furniture, or bedroom furniture. Binning helps you identify outliers, invalid values, and reduces non-linearity in datasets, thereby improving the accuracy of your ML models.
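Canvas exposes binning through its point-and-click UI, but the underlying technique is simple to sketch in code. In this minimal Python sketch, the bin edges and labels are illustrative (a price-range example rather than the furniture example above):

```python
# A minimal sketch of numeric binning: group continuous values into a small
# number of labeled sets (bins). Edges and labels here are illustrative.
def assign_bin(value, edges, labels):
    """Return the label of the first bin whose upper edge exceeds value."""
    for edge, label in zip(edges, labels):
        if value < edge:
            return label
    return labels[-1]  # anything above the last edge falls in the final bin

prices = [45, 120, 850, 1900]
edges = [100, 1000]                      # upper edges of the first two bins
labels = ["budget", "mid-range", "premium"]

binned = [assign_bin(p, edges, labels) for p in prices]
print(binned)  # ['budget', 'mid-range', 'mid-range', 'premium']
```

Grouping values this way makes outliers and invalid values easy to spot, since they land in unexpected or empty bins.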
Bottlerocket is now supported by Amazon Inspector
Bottlerocket, a Linux-based operating system that is purpose built to run container workloads, is now integrated with Amazon Inspector. Customers that have Inspector EC2 scanning already enabled do not need to take any additional action. If Amazon Inspector discovers a vulnerability, it will recommend an update to the version of Bottlerocket that fixes that vulnerability.
Amazon Inspector is a vulnerability management service that scans EC2 and container workloads for software vulnerabilities and unintended network exposure. Amazon Inspector leverages the AWS Systems Manager (SSM) agent to scan for vulnerabilities. On Bottlerocket hosts, the SSM agent runs within the control host container, so you need to make sure it is enabled on your hosts.
Integration with Amazon Inspector is available in AWS Commercial Regions for Bottlerocket versions starting from 1.7.0. Standard pricing rates for Amazon Inspector apply. Bottlerocket is an open-source Linux distribution with an open development model and community participation. It’s available at no additional cost and is fully supported by Amazon Web Services.
You can learn more about Bottlerocket by visiting the AWS product page and Bottlerocket’s GitHub repository. For support, please contact the Bottlerocket team through your designated AWS representative or by opening a new issue on GitHub.
AWS announces Amazon WorkSpaces Core
Amazon WorkSpaces Core is a new, fully managed Virtual Desktop Infrastructure (VDI) service that combines the security, global reliability, and cost efficiency of AWS with existing VDI management solutions.
WorkSpaces Core removes the need for capacity planning or infrastructure refreshes, and simplifies the provisioning, deployment, and management of VDI environments. With WorkSpaces Core, you can migrate your on-premises VDI solutions to AWS or deploy in a hybrid environment spanning on-premises resources and WorkSpaces Core infrastructure.
Manage your entire user base from a single console without disrupting end users or retraining your IT staff.
WorkSpaces Core allows you to migrate to the cloud at your own pace. You can extend the life of existing infrastructure investments while benefiting from the administrative and end user experience of your existing VDI software solution.
WorkSpaces Core comprises purpose-built VDI infrastructure that helps you optimize spending with predictable, fixed rate, pay-as-you-go pricing, and with no upfront costs.
Amazon WorkSpaces Core is available in all AWS Regions except Africa (Cape Town).
AWS Cloud Control API now supports AWS PrivateLink
AWS Cloud Control API now supports AWS PrivateLink, providing access for customers to leverage AWS Cloud Control API through private Virtual Private Cloud (VPC) endpoints within their virtual private network.
Customers can now manage their cloud infrastructure in a consistent manner and use the latest AWS capabilities faster using Cloud Control API’s common application programming interfaces (APIs) through private IP addresses in their Amazon VPC. These customers can use AWS Cloud Control API without having to use public IPs, firewall rules, or an internet gateway.
With AWS PrivateLink, you can provision and use VPC endpoints to access supported AWS services. AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public internet.
For example, customers from regulated industries that prefer to keep their VPCs private with no internet connectivity can now use Cloud Control API to create, read, update, delete, and list (CRUDL) AWS and third-party service resources with a consistent API.
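Because PrivateLink only changes how the Cloud Control API endpoint is resolved, the CRUDL calls themselves are unchanged. As a hedged sketch (the bucket name and properties below are illustrative, not from the announcement), the desired state for a resource is passed as a JSON document:

```python
import json

# Illustrative desired state for an AWS::S3::Bucket resource; the property
# names follow the CloudFormation resource schema for that type.
desired_state = json.dumps({
    "BucketName": "my-private-bucket",
    "VersioningConfiguration": {"Status": "Enabled"},
})

# With a VPC endpoint for Cloud Control API in place, the same boto3 call
# resolves to a private IP -- no application code change is needed:
#   client = boto3.client("cloudcontrol")
#   client.create_resource(TypeName="AWS::S3::Bucket",
#                          DesiredState=desired_state)
print(desired_state)
```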
These customers can now benefit from a uniform method to manage hundreds of AWS resources and over a dozen third-party solutions available on the CloudFormation Registry spanning monitoring, databases, or security management resources.
Furthermore, as Cloud Control API is up to date with the latest AWS resources as soon as they are available on the CloudFormation Registry, you can now adopt the latest AWS innovation through private VPC endpoints.
AWS Cloud Control API support for AWS PrivateLink is generally available in all AWS Regions where Cloud Control API is available. These include the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon, N. California), Canada (Central), Europe (Ireland, Frankfurt, London, Stockholm, Paris, Milan), Asia Pacific (Jakarta, Hong Kong, Mumbai, Osaka, Singapore, Sydney, Seoul, Tokyo), South America (Sao Paulo), Middle East (Bahrain), Africa (Cape Town), and AWS GovCloud (US).
There is no additional charge for using AWS Cloud Control API with native AWS resource types. You will only pay for the usage of underlying AWS resources. When you use AWS Cloud Control API with third-party resource types, you will incur charges based on the number of handler operations you run per month and handler operation duration (refer to the pricing page for more details). To learn more about AWS PrivateLink pricing, please refer to its pricing page.
AWS Compute Optimizer now provides cost and performance optimization recommendations for 37 new EC2 instance types, including bare metal instances (m6g.metal) and compute optimized instances (c7g.2xlarge, hpc6a.48xlarge).
Additionally, AWS Compute Optimizer now analyzes the “Available MBytes”, “Available KBytes”, and “Available Bytes” metrics to deliver more accurate recommendations for EC2 Windows instances.
With this launch, AWS Compute Optimizer now supports a total of 425 EC2 instance types. Previously, AWS Compute Optimizer analyzed only “Memory % Committed Bytes in Use” as the memory metric for EC2 Windows instances.
AWS Compute Optimizer now uses “Available MBytes”, “Available KBytes”, and “Available Bytes” as the preferred memory metrics, which are more accurate measures of physical memory utilization because they exclude the impact of the pagefile. When these metrics are not available, AWS Compute Optimizer falls back to the “Memory % Committed Bytes in Use” metric.
You can configure the Amazon CloudWatch agent to report “Available MBytes”, “Available KBytes”, and “Available Bytes” to AWS Compute Optimizer.
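As a hedged sketch, a CloudWatch agent configuration fragment for a Windows instance that collects these counters might look like the following; the counter names follow Windows Performance Monitor conventions and should be verified against the CloudWatch agent documentation:

```python
import json

# Illustrative CloudWatch agent config fragment for EC2 Windows: collect
# both the legacy fallback counter and the now-preferred physical-memory
# counter so Compute Optimizer has the more accurate signal.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "Memory": {
                "measurement": [
                    "% Committed Bytes In Use",  # previous fallback metric
                    "Available MBytes",          # preferred memory metric
                ]
            }
        }
    }
}

print(json.dumps(agent_config, indent=2))
```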
To learn more about the new feature updates, please visit AWS Compute Optimizer’s product page and user guide. Get your optimal performance and saving recommendations today on AWS resources via AWS Compute Optimizer Console.
Snow Amazon Linux 2 (AL2) Amazon Machine Image (AMI) available on all Snow Family jobs
AWS Snow Family now offers the Snow Amazon Linux 2 (AL2) Amazon Machine Image (AMI) for all Snowball Edge and Snowcone jobs. With this launch, customers can get started running edge workloads on Snow devices using the Snow AL2 AMI.
The Snow AL2 AMI is maintained and provided by AWS Snow, making it easier for customers who do not want to create their own AMI for Snow devices.
Prior to this launch, customers had a two-step process for provisioning AMIs on Snow jobs.
First, customers had to create their own AMI from the Ubuntu or CentOS offerings on AWS Marketplace, and then select these AMIs as part of the Snow job order. This two-step process gave customers more control in selecting the right AMIs for their jobs.
For customers not familiar with the AMI creation process, however, it required more preparation before placing a Snow job order. Now, customers who do not want to bring their own AMIs can add the Snow AL2 AMI to their Snow job with a 1-click selection at the time of ordering.
To use this feature, select the Snow AL2 AMI in the ‘Compute using EC2 instances’ section of the Snow console and the AMI will be installed on the device. Once the device reaches the customer site, customers have the option to inject new SSH keys or use existing SSH keys via the OpsHub software or the CLI.
Amazon S3 Replication Time Control (S3 RTC) now provides a predictable replication time backed by a Service Level Agreement (SLA) in the AWS China (Beijing) and AWS China (Ningxia) Regions. S3 RTC helps customers meet compliance or business requirements for data replication, and provides visibility into the replication process with Amazon CloudWatch Metrics.
Customers use S3 Replication to replicate billions of objects across buckets to the same or different AWS Regions.
S3 Replication Time Control is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of new objects replicated in seconds. S3 RTC is backed by an SLA with a commitment to replicate 99.9% of objects within 15 minutes during any billing month.
S3 RTC also provides S3 Replication metrics (via CloudWatch) that allow customers to monitor the time it takes to complete replication, as well as the total number and size of objects that are pending replication.
Using S3 RTC, customers in China can now replicate objects to buckets in the same or different AWS China Regions, and to one or more destination buckets. S3 RTC can be configured through the S3 console, SDK, or API.
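As a hedged sketch of the API configuration (role ARN, rule ID, and bucket ARN below are illustrative), RTC is enabled per replication rule via the `ReplicationTime` and `Metrics` blocks on the destination:

```python
import json

# Illustrative replication configuration for put_bucket_replication with
# S3 Replication Time Control enabled; ARNs and IDs are placeholders.
replication_config = {
    "Role": "arn:aws-cn:iam::111122223333:role/replication-role",
    "Rules": [{
        "ID": "rtc-rule",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {},                                  # replicate all objects
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws-cn:s3:::destination-bucket",
            # RTC: replicate within 15 minutes, with CloudWatch metrics
            "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
            "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
        },
    }],
}

print(json.dumps(replication_config))
```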
With Amazon EBS Snapshots, AWS customers can now archive 25 snapshots concurrently to the Snapshots Archive tier by default, an increase from 5. This default limit increase makes it easier to move snapshots to the Snapshot Archive tier at scale.
EBS Snapshots Archive provides a low-cost storage tier to archive full, point-in-time copies of EBS Snapshots that you must retain for 90 days or more for regulatory and compliance reasons, or for future project releases. Customers use Snapshots Archive to save up to 75% on snapshot storage costs when archiving from the standard snapshots tier. With the new default limit, customers can now move more of their rarely accessed snapshots to the EBS Snapshots Archive at the same time, without requesting a service quota limit increase.
The new default limit applies to all AWS Regions where EBS Snapshots Archive is available. The higher default limit is automatically reflected in your account. If your account already has an approved limit for in-progress snapshot archives per account that is higher than 25, you will continue to have your higher approved limit. If you require more than 25 in-progress snapshot archives per account in a region, you can submit a limit increase request using the AWS Support Center.
AWS is excited to announce that Amazon SageMaker Canvas is now generally available in the AWS Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions.
Amazon SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate machine learning (ML) predictions on their own - without writing any code or requiring ML expertise.
SageMaker Canvas makes it easy to access and combine data from a variety of sources, automatically clean data and apply a variety of data adjustments, and build ML models to generate accurate predictions with a few clicks. You can also easily publish results, explain and interpret models, and share models with others within your organization to review.
AWS Certificate Manager Private Certificate Authority is now AWS Private Certificate Authority
This week, AWS renamed AWS Certificate Manager Private Certificate Authority to AWS Private Certificate Authority (AWS Private CA).
This change helps customers differentiate between AWS Certificate Manager (ACM) and AWS Private CA.
ACM and AWS Private CA have distinct roles in the process of creating and managing the digital certificates used to identify resources and secure network communications over the internet, in the cloud, and on private networks.
ACM manages the lifecycle of certificates: creating, storing, deploying, and managing renewals for AWS services such as Elastic Load Balancing, Amazon CloudFront, and Amazon API Gateway.
AWS Private CA enables customers to create customizable private certificates for a broad range of scenarios. AWS services such as ACM, Amazon Managed Streaming for Apache Kafka (MSK), IAM Roles Anywhere and Amazon Elastic Kubernetes Service (EKS) can all leverage private certificates from Private CA.
It also supports creating private certificates for Internet of Things (IoT) devices as well as enterprise users, systems and services.
This launch coincides with the launch of AWS Private CA’s updated console.
The workflow of creating CAs has been simplified to a single page wizard, the listing CAs view now supports filtering and search, and all pages have a sidebar with contextual documentation help. The console also has accessibility improvements to enhance screen reader support and additional tab key navigation for people with motor impairment.
Announcing unique place IDs for Amazon Location Service
Amazon Location Service now enables developers to retrieve known geographical locations using a unique identifier. For example, when using autocomplete functionality to help end-users complete their searches, developers can use the unique identifier of the end-user’s selection to retrieve the location in future queries. Using this unique identifier, developers can create consistent search experiences, filter search results, and display relevant search results on a map.
Developers can use the new getPlace API with the unique PlaceID to retrieve the details of the end-user’s selected location, providing a consistent search experience, every time. Furthermore, they can identify unique places, filter search categories, and present relevant end-user search results on a map. Developers can also use the getPlace API to retrieve latitude and longitude coordinates of the location and then, request the drive time using the Amazon Location route calculator.
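The two-step flow above can be sketched as follows; the place index name and PlaceId below are hypothetical placeholders (a real PlaceId is an opaque identifier returned by a prior search call):

```python
# Illustrative request parameters for the getPlace API: a search or
# autocomplete call returns results carrying a PlaceId, which getPlace
# then resolves to the full place details.
get_place_params = {
    "IndexName": "my-place-index",   # hypothetical place index name
    "PlaceId": "example-place-id",   # hypothetical; returned by a search call
}

# With boto3:
#   location = boto3.client("location")
#   place = location.get_place(**get_place_params)["Place"]
#   lon, lat = place["Geometry"]["Point"]  # coordinates for route calculation
print(sorted(get_place_params))
```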
Amazon Location Service is a location-based service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Amazon Location Service provides a consistent API across high-quality location-based service data providers (Esri and HERE), all managed through one AWS console.
Amazon EMR Serverless is now available in 12 additional AWS Regions
AWS is excited to announce that Amazon EMR Serverless is now available in the Asia Pacific (Mumbai, Seoul, Singapore, and Sydney), Canada (Central), Europe (Frankfurt, London, Paris, and Stockholm), South America (Sao Paulo), US West (N. California), and US East (Ohio) regions. These are in addition to the existing US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) regions.
Amazon EMR Serverless is a serverless deployment option in Amazon EMR that makes it easy and cost effective for data engineers and analysts to run petabyte-scale data analytics in the cloud.
With EMR Serverless, you can run your Spark and Hive applications without having to configure, optimize, tune, or manage clusters. EMR Serverless offers fine-grained automatic scaling, which provisions and quickly scales the compute and memory resources required by the application.
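As a hedged sketch of submitting a Spark job to an EMR Serverless application (the application ID, role ARN, and script path below are illustrative placeholders):

```python
import json

# Illustrative parameters for the EMR Serverless StartJobRun API; with no
# cluster to size, you only describe the job itself.
job_run_params = {
    "applicationId": "00f1example",   # illustrative application ID
    "executionRoleArn": "arn:aws:iam::111122223333:role/emr-serverless-job-role",
    "jobDriver": {
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/scripts/etl.py",   # illustrative
            "sparkSubmitParameters": "--conf spark.executor.cores=2",
        }
    },
}

# With boto3:
#   client = boto3.client("emr-serverless")
#   client.start_job_run(**job_run_params)
print(json.dumps(job_run_params))
```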
Amazon EC2 Is4gen and Im4gn Instances are now available in Asia Pacific (Tokyo) region
Starting this week, Amazon EC2 Is4gen and Im4gn instances, the latest generation of storage-optimized instances, are available in the Asia Pacific (Tokyo) region.
Is4gen and Im4gn instances are built on the AWS Nitro System and are powered by AWS Graviton2 processors. They feature up to 30TB of storage with the new AWS Nitro SSDs that are custom-designed by AWS to maximize the storage performance of I/O intensive workloads such as SQL/NoSQL databases, search engines, distributed file systems and data analytics which continuously read and write from the SSDs in a sustained manner.
AWS Nitro SSDs enable up to 60% lower latency and up to 75% reduced latency variability in Im4gn and Is4gen instances compared to the third generation of storage optimized instances.
These instances maximize the number of transactions processed per second (TPS) for I/O intensive workloads such as relational databases (e.g. MySQL, MariaDB, PostgreSQL), and NoSQL databases (KeyDB, ScyllaDB, Cassandra) which have medium-large size data sets and can benefit from high compute performance and high network throughput. They are also an ideal fit for search engines, and data analytics workloads that require very fast access to data sets on local storage.
The Im4gn instances provide the best price performance for storage-intensive workloads in Amazon EC2. They offer up to 40% better price performance and up to 44% lower cost per TB of storage compared to I3 instances for running applications such as MySQL, NoSQL, and file systems, which require dense local SSD storage and higher compute performance. They also feature up to 100 Gbps networking and support for Elastic Fabric Adapter (EFA) for applications that require high levels of inter-node communication.
The Is4gen instances provide the lowest cost per TB and highest density per vCPU of SSD storage in Amazon EC2 for applications such as stream processing and monitoring, real-time databases, and log analytics, that require high random I/O access to large amounts of local SSD data. These instances enable 15% lower cost per TB of storage and up to 48% better compute performance compared to I3en instances.
With this regional expansion, Im4gn and Is4gen instances are now available in the following AWS regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Tokyo) and Europe (London) Regions and are purchasable On-Demand, as Reserved instances, as Spot instances, or as part of Savings Plans. Im4gn is available in 6 sizes providing up to 64 vCPUs, 30 TB SSD storage, 256 GB memory, 100 Gbps of networking bandwidth, and 38 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth. The Is4gen instances will be available soon in 6 sizes providing up to 32 vCPUs, 30 TB SSD storage, 192 GB memory, 50 Gbps of networking bandwidth, and 19 Gbps of Amazon EBS bandwidth.
AWS IoT FleetWise is now generally available
AWS IoT FleetWise is now generally available to help automotive companies collect, transform, and transfer vehicle data to the cloud in near real time. Automotive companies can use the data in the cloud to extensively analyze vehicle fleet health. Insights gained allow companies to quickly identify potential maintenance issues, make in-vehicle infotainment systems smarter, or use analytics and machine learning (ML) to improve autonomous driving and advanced driver assistance systems (ADAS).
Automakers, suppliers, fleet operators, and technology solution vendors in the automotive industry can use AWS IoT FleetWise to collect and organize vehicle data, and store the data in a standardized way for analysis in the cloud. With AWS IoT FleetWise intelligent data collection controls, you can limit the amount of data transferred to the cloud by creating conditional rules (for example, sensor data from hard-braking events associated with a vehicle make and model).
AWS IoT FleetWise with standard vehicle data collection is generally available today in US East (N. Virginia) and Europe (Frankfurt), with availability in additional AWS Regions coming soon.
Announcing 1-Click templates and tutorials in AWS Budgets
Starting this week, you can create budgets using a simplified 1-click workflow for common budgeting scenarios as well as take step-by-step tutorials to learn about creating different kinds of budgets. AWS Budgets helps you track against expected spend by sending alerts when your cost and usage exceeds (or is forecasted to exceed) thresholds you define, or when your Reserved Instances and Savings Plans' utilization and/or coverage rate drops below a threshold. 1-click templates and tutorials are intended for new and existing users seeking guidance on what budgets they can set to control costs in AWS.
With this launch, you can now choose from three templates: 1/ Zero spend budget, 2/ Monthly cost budget, and 3/ Daily Savings Plans coverage budget. We’ve also provided tutorials on how to use each template that are available in the info panel on the right side of the AWS Budgets console.
Using the “Create a monthly budget (simplified)” tutorial, you can step through the “Monthly cost budget” template experience to learn how to create monthly budgets quickly. The simplified experience also suggests how to use AWS Budgets to automate daily monitoring of your Savings Plans coverage, which helps you save money by alerting you when drops occur.
In that scenario, you might consider purchasing a new Savings Plan to cover the eligible compute or SageMaker usage. AWS Budgets 1-click templates and tutorials are generally available in all public AWS Regions.
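The console templates are 1-click, but the “Monthly cost budget” template corresponds roughly to what you could create yourself through the Budgets API. As a hedged sketch (the account ID, amount, and email address are illustrative):

```python
import json

# Illustrative CreateBudget request: a monthly cost budget that alerts by
# email when actual spend passes 80% of the limit.
budget_request = {
    "AccountId": "111122223333",
    "Budget": {
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100.0", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    "NotificationsWithSubscribers": [{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,              # alert at 80% of the limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [
            {"SubscriptionType": "EMAIL", "Address": "ops@example.com"},
        ],
    }],
}

print(json.dumps(budget_request))
```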
Amazon Kendra releases Dropbox connector
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees when they need it. Starting this week, AWS customers can use the Amazon Kendra Dropbox connector to index and search documents from a Dropbox data source.
Critical information can be scattered across multiple data sources in an enterprise, including collaborative content-sharing platforms. For many organizations, Dropbox is a core part of their content storage and lifecycle management strategy. A Dropbox account often contains artifacts such as documents, presentations, and knowledge articles.
Enterprise users use Dropbox to upload, transfer, and store documents to the cloud. Now, with the new Amazon Kendra data source connector for Dropbox, these documents (HTML, PDF, MS Word, MS PowerPoint, and Plain Text) can be indexed by Amazon Kendra’s intelligent search service to allow users to search through this content for answers.
Amazon CloudWatch Container Insights launches lifecycle events for Amazon ECS
Amazon CloudWatch Container Insights now provides lifecycle events for Amazon Elastic Container Service (ECS). With Container Insights you can monitor, isolate, and diagnose your containerized applications running on ECS. Now with the addition of lifecycle events, you can easily correlate metrics, logs and events in a single view for more complete operational visibility.
Container environments have short-lived resources, and lifecycle events help track the state changes of these resources throughout their lifespan, for example when tasks start or stop. ECS lifecycle events provide useful signals that indicate task health, such as tasks successfully moving from the pending state to started.
You can now display ECS lifecycle events in Container Insights, which provides a centralized health and performance view for ECS containers, and correlate spikes in the CPU or memory of your ECS clusters, tasks, or services with lifecycle events of the related resources.
To get started, first enable Container Insights on your ECS cluster. Once enabled, navigate to the CloudWatch Container Insights console and toggle to Performance Monitoring view in the dropdown. The Lifecycle Events table is displayed on the cluster, service or task views. Click the Configure Lifecycle Events button to enable the event stream.
Clicking the configure button will automatically create the event rules and log groups to collect lifecycle events for your ECS clusters. You can also select Log Insights from the Lifecycle Events table to run custom analytics and create additional visualizations.
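Under the hood, ECS lifecycle events are delivered as EventBridge events. As a hedged sketch, the rules the console creates for you match an event pattern along these lines (the cluster ARN is illustrative):

```python
import json

# Illustrative EventBridge event pattern for ECS lifecycle events scoped
# to a single cluster; the console's "Configure Lifecycle Events" button
# creates equivalent rules and log groups automatically.
event_pattern = {
    "source": ["aws.ecs"],
    "detail-type": [
        "ECS Task State Change",
        "ECS Container Instance State Change",
        "ECS Deployment State Change",
    ],
    "detail": {
        "clusterArn": ["arn:aws:ecs:us-east-1:111122223333:cluster/demo"],
    },
}

print(json.dumps(event_pattern))
```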
Amazon Connect launches updated flows UI and improved flow language
Amazon Connect now has an updated flow designer UI that makes it easier and faster to build personalized and automated end-customer experiences.
With this launch, you can now easily filter the available blocks to quickly find what you need to build the flow, create color coded connections between blocks, view additional block metadata (e.g., the full text prompt in the Play Prompt block) at a glance, and more.
Additionally, with the improved flow language, you can now import and export flows in your Amazon Connect instance, making it easy to build and edit flows interchangeably between the UI and the APIs.
AWS Cost Categories now support retroactive rules application
AWS Cost Categories now support retroactive application of cost category rules. Previously, when creating or editing a cost category, the rules applied only from the start of the current month.
With this launch, when creating or editing a cost category, you can apply its rules starting from any month within the previous 12 months. These changes are also reflected in AWS Cost Explorer.
AWS Cost Categories is a feature within the AWS Cost Management product suite that enables you to group cost and usage information into meaningful categories based on your needs. You can create custom categories and map your cost and usage information into these categories based on the rules defined by you using various dimensions such as account, tag, service, charge type, and even other cost categories.
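As a hedged sketch of a cost category definition with a retroactive start (the category name, account ID, and the exact timestamp format of `EffectiveStart` should be checked against the Cost Explorer API documentation):

```python
import json

# Illustrative cost category definition: map a linked account into a
# "Teams" category, applying the rules retroactively from a past month.
cost_category = {
    "Name": "Teams",
    "RuleVersion": "CostCategoryExpression.v1",
    "EffectiveStart": "2022-01-01T00:00:00Z",   # up to 12 months back
    "Rules": [{
        "Value": "data-platform",
        "Rule": {
            "Dimensions": {
                "Key": "LINKED_ACCOUNT",
                "Values": ["111122223333"],     # illustrative account ID
            }
        },
    }],
}

print(json.dumps(cost_category))
```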
Amazon DevOps Guru for RDS now available in seven more regions
Amazon DevOps Guru for RDS is now available in the Europe (London and Paris), Asia Pacific (Mumbai and Seoul), US West (N. California), South America (São Paulo), and Canada (Central) regions.
Amazon DevOps Guru for RDS is a Machine Learning (ML) powered capability for Amazon Relational Database Service (Amazon RDS) that automatically detects and diagnoses database performance and operational issues, enabling you to resolve bottlenecks in minutes rather than days.
Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which detects operational and performance related issues for Amazon RDS engines and dozens of other resource types. Amazon DevOps Guru for RDS expands upon the existing capabilities of Amazon DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries.
When an issue occurs, Amazon DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue. Amazon DevOps Guru for RDS is available for Amazon Aurora MySQL and PostgreSQL–Compatible Editions in these regions.
Amazon DevOps Guru is an ML-powered service that helps improve an application’s operational performance and availability. By analyzing application metrics, logs, events, and traces, Amazon DevOps Guru identifies behaviours that deviate from normal operating patterns and creates an insight that alerts developers with issue details. When possible, Amazon DevOps Guru also provides proposed remedial steps via Amazon Simple Notification Service (Amazon SNS), Amazon EventBridge, and partner integrations. To learn more, visit the Amazon DevOps Guru product and documentation pages.
AWS Copilot, a CLI for containerized apps, adds IAM permission boundaries and more
AWS customers can now download and use the latest version, v1.22, of AWS Copilot, a command line interface (CLI) for containerized applications.
AWS Copilot makes it easier for AWS customers to build, deploy, and operate containerized applications on AWS by providing a common application architecture and infrastructure patterns, user-friendly operational workflows, and set up of continuous delivery pipelines.
The new release helps Copilot users leverage IAM permission boundaries to comply with their organization's security requirements and service control policies, build event-driven services when the order of operations and events is critical, and achieve low-latency content delivery for their public HTTPS web services.
With the new AWS Copilot release (v1.22), users now can enable IAM permission boundaries for their applications by passing an IAM policy name to the --permissions-boundary flag of the app init CLI command.
With this, Copilot will properly configure the IAM permission boundaries for all IAM roles it creates, enabling companies with service control policies to use Copilot. In addition, this release adds support for FIFO SNS/SQS for the Copilot worker-service pattern.
This feature allows users to create ordered publish/subscribe architectures using Amazon SNS FIFO and Amazon SQS FIFO. Finally, with v1.22, customers can leverage Amazon CloudFront for low-latency content delivery and fast TLS termination for public load-balanced web services.
Customers can add cdn.tls_termination: true to an environment’s manifest file to enable the feature.
Amazon MSK Serverless is now HIPAA eligible
Amazon MSK Serverless is now a HIPAA (Health Insurance Portability and Accountability Act) eligible service. This enables you to use Apache Kafka managed by Amazon MSK Serverless to store, process, and access protected health information (PHI) and power secure healthcare and life sciences applications.
MSK Serverless is a cluster type for Amazon MSK that makes it easy for you to run Apache Kafka without having to manage and scale cluster capacity.
HIPAA eligibility applies to all AWS Regions where MSK Serverless is available.
Amazon Connect now provides a queue dashboard
Amazon Connect now provides a dashboard that enables you to view and compare real-time queue performance through time series graphs.
You can review key metrics including service level, contacts queued, and average handle time to track and improve queue performance. For example, if the current week's service level is lower than the prior week's, you can check whether the drop correlates with an increase in incoming contacts or average handle time, and then implement corrective actions.
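The week-over-week comparison described above boils down to simple metric deltas. A toy Python illustration (all numbers invented):

```python
# Illustrative only: the week-over-week check the dashboard surfaces,
# expressed as simple metric deltas. All numbers are invented.

def service_level_delta(current_week, prior_week):
    """Percentage-point change in service level between two weeks."""
    return current_week["service_level"] - prior_week["service_level"]

prior = {"service_level": 92.0, "contacts_queued": 480, "avg_handle_time": 310}
current = {"service_level": 85.5, "contacts_queued": 610, "avg_handle_time": 335}

delta = service_level_delta(current, prior)          # -6.5 points
volume_up = current["contacts_queued"] > prior["contacts_queued"]
handle_time_up = current["avg_handle_time"] > prior["avg_handle_time"]
# A negative delta alongside rising volume or handle time suggests
# where to focus corrective action (staffing vs. process).
```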
AWS App Runner now supports Node.js 16 managed runtime
AWS App Runner adds a Node.js 16 managed runtime for building and running Node.js-based web applications and APIs.
App Runner allows you to deploy your application from source code or a container image directly in the AWS cloud. Regardless of the source type, App Runner takes care of starting, running, scaling, and load balancing your service. App Runner provides convenient platform-specific managed runtimes.
Each one of these runtimes builds a container image from your source code and adds language runtime dependencies into your image. App Runner uses the managed runtime to build and deploy your application alleviating the need for you to develop and manage your own Dockerfiles.
Starting today, you can build and run your Node.js 16-based web applications and APIs directly from your source code on App Runner. Node.js 16 is the current active long-term support (LTS) major version of Node.js.
Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. This week, AWS announced self-service quota management support for Amazon Textract through enhanced integration with AWS Service Quotas, a new self-service Amazon Textract quota calculator, and higher default Transactions Per Second (TPS) throughput quotas in select AWS Regions.
Below is a detailed description of these new capabilities and enhancements:
Enhanced integration with AWS Service Quotas - Customers told AWS they want to easily self-manage TPS configuration changes and want quicker turnaround times to process their quota increase requests so they can continue to scale their Amazon Textract usage.
With this launch, you can now proactively manage all your Amazon Textract service quotas via the AWS Service Quotas console. Using Service Quotas, your quota increase requests can now be processed automatically, speeding up approval times in most cases.
In addition to viewing default quota values, you can now view the applied quota values for your accounts in a specific region, the historical utilization metrics per quota, and set up alarms to notify you when the utilization of a given quota exceeds a configurable threshold.
Launch of Textract Service Quota Calculator - You can now use the Amazon Textract Quota Calculator to easily estimate the quota requirements for your workload prior to submitting a quota increase request directly from the AWS Service Quotas console.
Increased default service quotas for Amazon Textract - Amazon Textract now has higher default service quotas for several asynchronous and synchronous API operations in multiple major AWS Regions.
Specifically, higher default service quotas are now available for AnalyzeDocument and DetectDocumentText API asynchronous and synchronous operations in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), and Europe (Ireland) Regions.
For example, the default TPS quota value for the synchronous AnalyzeDocument API in US East (Ohio) has increased by 900% to 10 TPS. Similarly, the default TPS quota value for the asynchronous StartDocumentTextDetection API in Asia Pacific (Mumbai) has increased by 400% to 5 TPS.
In effect, customers using Textract in these Regions will receive a 50%-900% increase in the default quota values across these API operations. A table summarizing the before and after default quota values can be found in the AWS blog. The new default values have automatically been applied to all accounts and no further action is required to obtain them.
Amazon Textract self-service quota management and the quota calculator are now available in all AWS Regions where Amazon Textract is available, at no additional cost.
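To give a feel for what the quota calculator estimates, here is a simplified Python sketch. The formula is our own back-of-the-envelope method, not the calculator's actual logic.

```python
# Our own back-of-the-envelope estimate, not the calculator's method:
# average throughput over a day, scaled by an assumed peak factor.
import math

def estimated_tps(docs_per_day, peak_factor=2.0):
    """Rough TPS quota needed for a daily document volume."""
    avg_tps = docs_per_day / 86_400          # seconds per day
    return math.ceil(avg_tps * peak_factor)

# e.g. 500,000 documents/day with 2x peak traffic
needed = estimated_tps(500_000)              # -> 12 TPS
```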
Amazon Aurora supports in-place upgrades from MySQL 5.7 to 8.0
Starting this week, you can perform an in-place upgrade of your Amazon Aurora database cluster from Amazon Aurora MySQL-Compatible Edition 2 (with MySQL 5.7 compatibility) to Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility). Instead of backing up and restoring the database to the new version, you can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI.
MySQL 8.0 offers improved performance functionality from enhancements such as instant DDL and adds developer productivity features such as window functions. It also includes JSON functionality additions, new security capabilities, and more. MySQL 8.0 on Aurora MySQL-Compatible Edition supports popular Aurora features including Aurora Serverless v2, Global Database, RDS Proxy, Performance Insights, and Parallel Query.
To upgrade to Aurora MySQL Version 3 (compatible with MySQL 8.0), select the "Modify" option on the AWS Management Console corresponding to the database cluster you want to upgrade, choose the version of Aurora MySQL Version 3 you want to upgrade to, and proceed with the wizard.
The upgrade may be applied immediately (if you select the "Apply Immediately" option), or during your next maintenance window (by default). Please note that in either case, your database cluster will be unavailable during the upgrade and your database instances will be restarted. This feature is available in all AWS regions with Aurora MySQL.
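The same in-place upgrade can be driven from the AWS SDK. A hedged Python sketch follows; the cluster identifier and engine version are placeholders (check the Aurora docs for valid MySQL 8.0-compatible version strings), and the call itself is commented out because it restarts a live cluster.

```python
# Sketch only: an Aurora MySQL 5.7 -> 8.0 in-place upgrade via boto3.
# The cluster identifier and engine version are placeholders.

upgrade_kwargs = {
    "DBClusterIdentifier": "my-aurora-cluster",   # placeholder
    "EngineVersion": "8.0.mysql_aurora.3.02.0",   # example target version
    "AllowMajorVersionUpgrade": True,
    "ApplyImmediately": True,   # False waits for the maintenance window
}

# Commented out: this modifies and restarts a live cluster.
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_cluster(**upgrade_kwargs)
```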

Google Cloud Releases and Updates
Source: cloud.google.com
Anthos clusters on Azure
You can now launch clusters with the following Kubernetes versions:
- 1.24.3-gke.2100
- 1.23.9-gke.2100
- 1.22.12-gke.2300
In Kubernetes version 1.24 and later, Google Cloud Managed Service for Prometheus (GMP) is available as an invite only private preview. GMP lets you monitor and alert on workloads, using Prometheus, without having to manually manage and operate Prometheus at scale.
Anthos clusters on Azure now supports Cloud Monitoring for Windows node pools from Kubernetes version 1.24 and later. To learn more about monitoring in Anthos clusters on Azure, see Cloud monitoring.
Starting from Kubernetes version 1.24, virtual machines launched by Anthos clusters on Azure support System Assigned Managed Identities.
Anthos clusters on bare metal
Anthos clusters on bare metal 1.11.6 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.11.6 runs on Kubernetes 1.22.
Anthos clusters on bare metal 1.13.0 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.13.0 runs on Kubernetes 1.24.
The dockershim component in Kubernetes enables cluster nodes to use the Docker Engine container runtime. However, Kubernetes 1.24 removed the dockershim component. Since Anthos clusters on bare metal version 1.13 runs on Kubernetes 1.24, version 1.13 and higher clusters can no longer use Docker Engine as a container runtime. All clusters must use the default container runtime containerd.
Improved cluster lifecycle functionalities:
- Upgraded from Kubernetes version 1.23 to 1.24:
  - Reverted some of the changes Kubernetes and the kubeadm tool made to certain labels and taints on control plane nodes. Changes were reverted so that older versions of Anthos clusters on bare metal remain supported. As a result, control plane nodes have the following labels and taints: the node-role.kubernetes.io/master label, the node-role.kubernetes.io/control-plane label, and the node-role.kubernetes.io/master:NoSchedule taint.
  - Upgraded from kubeadm.k8s.io/v1beta2 to kubeadm.k8s.io/v1beta3, since the former is deprecated.
  - Stopped automatic generation of Secret API objects containing service account tokens for every ServiceAccount. For more information, see the LegacyServiceAccountTokenNoAutoGeneration section of the upgrade notes.
- Breaking change: Version 1.12 clusters that use Docker Engine can upgrade to 1.13 only if the new container runtime is specified as containerd. Blocked the creation of new 1.13 clusters that use Docker Engine as the container runtime.
- Preview: Added a feature so that upgrades of an admin/hybrid/standalone cluster can proceed without a bootstrap cluster. Management of Anthos clusters on bare metal is now fully conformant to the Kubernetes Resource Model.
- Added support for Red Hat Enterprise Linux (RHEL) 8.6.
- Removed an erroneous CustomResourceDefinition (app.k8s.io.Application) from the cluster creation process.
- Fixed a vulnerability to YAML injection by switching to safetext/yamltemplate.
- GA: Added support for installing Anthos clusters on bare metal using your own registry service instead of gcr.io. For instructions and additional information, see Use a registry mirror to create clusters.
- Eliminated false error messaging when bmctl create cluster is run. The message erroneously reported an Invalid value in the spec.labels field of NodePool specifications.
- Added a feature so that resetting a user cluster doesn't require the cluster configuration file.
- Reduced containerd disk usage by having containerd store just the uncompressed layers of an image rather than both the compressed and uncompressed layers.
- Upgraded containerd to version 1.6.6.
Networking:
- GA: Enabled Dynamic Flat IP with Border Gateway Protocol (BGP) support. This feature lets you configure flat mode in clusters using BGP, by leveraging Network Gateway Group and BGP. In this mode the Pod's IP address is visible and routable without masquerading across multiple subdomains. Currently supports advertising IPv4 and IPv6 routes over IPv4 sessions.
- GA: Added BGP-based load balancer support for IPv6.
- Added the ability to disable the Bundled Ingress feature. Customers should disable this feature if they are using full Anthos Service Mesh (ASM) instead, since Bundled Ingress is unnecessary when full ASM is installed.
Observability:
- Preview: Added support for multi-line parsing of Go and Java logs.
- GA: Added support for Google Cloud Managed Service for Prometheus (GMP) for application metrics.
- Refined kube-state-metrics so that only core metrics are collected by default.
Security:
- GA: Added Google Groups support for Connect Gateway.
- Switched to a distroless base image for Node Problem Detector.
- Changed anet-operator/cilium-operator to run as non-root containers.
- Secured communication between metrics-server and api-server using the Transport Layer Security (TLS) protocol.
VM Runtime:
- Fixed a memory leak in libvirt-go, which caused unbounded memory growth and risked crashing long-running VMs.
- Provided guaranteed compute support so that customers can get Guaranteed Quality of Service (QoS) for the VM when needed.
- Preview: Enabled Anthos VMs to be allocated dedicated host cores. Each VM virtual core can be pinned to a dedicated host core.
- Separated GPU installation and deletion logic. If only the container GPU workload is needed, customers can enable the GPU without having to enable VM Runtime.
- Added support for the T4 GPU card.
- Enabled automatic use of the VirtualMachineDisk name as the disk serial number. This change makes it easier for customers to identify the disk in the VM.
- Enabled the KubeVM cloud-init API and startup script API.
- Added a new CLI command (virtctl) for resetting Windows VM passwords.
- Fixed the following container image security vulnerability: CVE-2022-1798.
- Added a feature that stops NVIDIA device plugins from crashing if a GPU card hasn't been allocated to a container.
- Added support for automatic VM restarts after a configuration update. Previously, customers needed to stop the VM, apply the change, and then restart the VM. To use the feature, set the autoRestartOnConfigurationChange flag to true in the VirtualMachine custom resource.
- Improved the Kubernetes audit log of VM operations so that it contains detailed VM configuration and update information.
- Fixed flooding of logs with cluster events that arise when a VM encounters disk I/O errors.
- Added KubeVM roles. By binding with these roles, customers are granted permission to resources that manage VMs.
Anthos clusters on VMware
Anthos clusters on VMware 1.12.2-gke.21 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.2-gke.21 runs on Kubernetes 1.21.4-gke.200.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.
BigQuery
In addition to standard rounding, BigQuery now supports the rounding mode ROUND_HALF_EVEN for parameterized NUMERIC or BIGNUMERIC columns. The ROUND() function also accepts the rounding mode as an optional argument. This feature is now in preview.
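ROUND_HALF_EVEN is the familiar "banker's rounding", where ties round to the nearest even digit. Python's decimal module implements the same mode, which makes the behaviour easy to verify locally:

```python
# Banker's rounding: ties go to the nearest even digit. Python's
# decimal module implements the same ROUND_HALF_EVEN mode BigQuery
# now offers for parameterized NUMERIC/BIGNUMERIC columns.
from decimal import Decimal, ROUND_HALF_EVEN

one_dp = Decimal("0.1")   # round to one decimal place
a = Decimal("2.25").quantize(one_dp, rounding=ROUND_HALF_EVEN)  # 2.2
b = Decimal("2.35").quantize(one_dp, rounding=ROUND_HALF_EVEN)  # 2.4
```

Note how 2.25 rounds down while 2.35 rounds up: each tie lands on the even final digit, which avoids the systematic upward bias of always rounding halves up.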
With Datastream for BigQuery, you can now replicate data and schema updates from operational databases directly into BigQuery. This feature is now in preview.
The totalItems field returned by the projects.list API method now returns the number of items per page, rather than an approximate total number of projects across all pages.
In the Explorer pane, you can now open tables in Connected Sheets. This feature is now generally available (GA).
Chronicle
The following changes are available in the Unified Data Model:
- A new field, risk_score, was added to Noun.investigation.
- A new field, data_tap_config_name, was added to Event.metadata.tags.
- The following new fields were added to Network:
- application_protocol_version
- sent_packets
- received_packets
- A new ENUM value, CHALLENGE, was added to SecurityResult.Action.
- A new ENUM value, ANALYST_UPDATE_RISK_SCORE, was added to Metadata.EventType.
For a list of all fields in the Unified Data Model, and their descriptions, see the Unified Data Model field list.
Context Aware Detections - Risk Dashboard
The Context Aware Detections - Risk dashboard provides insight into the current threat status of assets and users in your enterprise.
Contextual enrichment in events and entities
To enable a security investigation, Chronicle provides additional context about artifacts in a customer environment by calculating prevalence statistics, enriching events with geolocation data based on IP address, and ingesting data from Safe Browsing threat lists related to file hashes. For more information, see the Chronicle documentation.
Cloud Bigtable
The Cloud Bigtable observability metric high-granularity CPU utilization of hottest node is now generally available (GA). Because of more frequent sampling, this metric is more accurate than CPU utilization of hottest node. For more information on using Bigtable metrics, see Monitoring.
Cloud Interconnect
Dedicated Interconnect support is available in the following colocation facilities:
- True IDC - North Muang Thong, Bangkok
For more information, see the Locations table.
Cloud Logging
Using Log Analytics, you can run SQL queries that analyze your log data to generate useful insights. Log Analytics also lets you use BigQuery to query your log data. For more information, see Log Analytics.
Cloud Monitoring
You can now collect additional Elasticsearch metrics from the Ops Agent, starting with version 2.21.0. For more information, see Monitoring third-party applications: Elasticsearch.
You can now collect additional PostgreSQL metrics from the Ops Agent, starting with version 2.21.0. For more information, see Monitoring third-party applications: PostgreSQL.
You can now use Prometheus Query Language (PromQL) when creating charts and dashboards in Cloud Monitoring. For more information, see PromQL in Cloud Monitoring.
Cloud Spanner
The following SPANNER_SYS statistical tables have been enhanced with new columns:
- Transaction statistics: TOTAL_LATENCY_DISTRIBUTION, OPERATIONS_BY_TABLE, and ATTEMPT_COUNT.
- Query statistics: LATENCY_DISTRIBUTION and RUN_IN_RW_TRANSACTION_EXECUTION_COUNT.
- Read statistics: RUN_IN_RW_TRANSACTION_EXECUTION_COUNT.
The number of mutations per commit that Cloud Spanner supports has increased from 20,000 to 40,000. For more information, see Quotas and limits.
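With the higher limit, client-side batching can use larger commits. A generic Python sketch of chunking a bulk write so no commit exceeds the limit follows; note that a single inserted row can count as several mutations (roughly one per affected column), so real code should count mutations rather than rows.

```python
# Generic sketch: chunk a bulk write so no single commit exceeds the
# per-commit mutation limit. One inserted row can count as several
# mutations (one per affected column), so count mutations, not rows.

SPANNER_MUTATION_LIMIT = 40_000

def chunk_mutations(mutations, limit=SPANNER_MUTATION_LIMIT):
    """Split a flat list of mutations into commit-sized batches."""
    return [mutations[i:i + limit] for i in range(0, len(mutations), limit)]

batches = chunk_mutations(list(range(90_000)))
# -> 3 commits of 40,000 + 40,000 + 10,000 mutations
```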
The ARRAY_SLICE function is now available in Google Standard SQL. This function returns an ARRAY containing zero or more consecutive elements from an input array.
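In Python terms, ARRAY_SLICE is roughly an inclusive slice. A small sketch (negative offsets, which the SQL function also handles, are omitted here):

```python
# Rough Python analogue of ARRAY_SLICE: consecutive elements from a
# start offset through an end offset, inclusive. Negative offsets,
# which the SQL function also supports, are omitted from this sketch.

def array_slice(arr, start, end):
    """Return arr[start..end] inclusive (non-negative offsets only)."""
    return arr[start:end + 1]

part = array_slice([10, 20, 30, 40, 50], 1, 3)   # [20, 30, 40]
```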
Cloud SQL for MySQL
Query insights is now generally available. Query insights helps you detect, diagnose, and prevent query performance problems for Cloud SQL databases. It provides self-service, intuitive monitoring, and diagnostic information that goes beyond detection to help you to identify the root cause of performance problems. To learn more, see Use Query insights to improve query performance.
Cloud SQL for MySQL now supports high-availability for self-service migration. Before starting replication, check the outgoing IP addresses of the Cloud SQL instance and make sure that the appropriate IP addresses are allowlisted on the external source. For more information, see Start replication on the Cloud SQL instance.
Cloud Storage
New buckets created using the Cloud Console now have public access prevention enabled by default. During the bucket creation process, you can choose to change this setting.
Config Connector version 1.95.0 is now available.
- Added support for the DLPDeidentifyTemplate resource.
- Added enableServiceLinks: false to all the Pod configurations in the Config Connector installation bundle. This is to fix the potential issue standard_init_linux.go:228: exec user process caused: argument list too long in Config Connector Pods.
Config Controller now uses the following versions of its included products:
- Anthos Config Management v1.13.0, release notes
- Config Connector v1.94.0, release notes
Dataproc Auto Zone Placement now takes ANY reservation into account by default.
Dataproc Serverless for Spark now uses runtime versions 1.0.19 and 2.0.0-RC4, which upgrade the Cloud Storage Connector in both runtimes to version 2.2.8.
Deep Learning Containers
M97 Release
- Regular package updates.
The Calico CNI authentication errors that caused pods to get stuck in Terminating
or Pending
state (see August 19, 2022 release notes) are fixed in the following GKE versions in the Rapid release channel:
- 1.24.4-gke.500 or later
- 1.23.11-gke.300 or later
- 1.22.14-gke.300 or later
To fix the issue, upgrade your control plane to any of these versions. If you prefer not to use the Rapid channel, open a Google Cloud Support ticket to have your cluster patched internally.
GKE control plane metrics are now available for clusters running Kubernetes control plane version 1.22.13 or later.
The rule source for Cloud Armor preconfigured rules now includes ModSecurity Core Rule Set (CRS) 3.3 in General Availability. For more information, see Tuning Google Cloud Armor WAF rules.
Added new Memorystore region: Dallas (us-south1
).
Connectivity Tests now includes a feature that verifies connectivity from a Cloud Function (1st gen) to a VM or public IP address. For more information, see Create and run Connectivity Tests.
The Monitoring & Analytics page has been split into two separate pages. The contents of the old Monitoring tab appear on the new Monitoring page, and the contents for the old Analytics tab appear on the new Analytics page.
Security Command Center
The parentDisplayName attribute was added to the Finding object of the Security Command Center API. The parentDisplayName attribute provides the display name of the Security Command Center service or source that produced a finding. For more information, see the Security Command Center API documentation for the Finding object.
Vertex AI
Vertex AI Model Monitoring now offers Preview support for batch prediction jobs. For more details, see Vertex AI Model Monitoring for batch predictions.
Feature value monitoring is now generally available (GA). (Feature Store)
General Availability: You can monitor the following Private Service Connect producer metrics using Cloud Monitoring:
- Connected consumer forwarding rules
- Used NAT IP addresses
For more information, see Monitor Private Service Connect published services.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
Public preview: Policy analytics for Azure Firewall
Policy analytics helps you manage Azure Firewall rules over time by providing insights and analysis of the traffic flowing through Azure Firewall.
General availability: ExpressRoute FastPath support for VNet peering and UDRs
FastPath support for virtual network peering and user defined routes (UDRs) enables high-bandwidth connectivity to workloads deployed across hub-and-spoke architectures.
Azure SQL—General availability updates for late September 2022
General availability enhancements and updates released for Azure SQL in late September 2022.
General availability: Azure SQL Database Hyperscale reverse migration to general purpose tier
Easily move your previously migrated databases on Azure SQL Database Hyperscale back to the general purpose tier.
Azure SQL—Public preview updates for late September 2022
Public preview enhancements and updates released for Azure SQL in late September 2022.
General availability: Backup and restore in Azure Database for PostgreSQL – Flexible Server
Restore a server to another geo-paired region with geo redundant and restore for Flexible Server on Azure Database for PostgreSQL, a managed service running the open-source Postgres database on Azure.
Public preview: Azure AD authentication with Azure Database for MySQL – Flexible Server
Simplify permission management by using Microsoft Azure Active Directory identities to authenticate with instances of Azure Database for MySQL – Flexible Server.
Generally available: Azure Functions Linux Elastic Premium plan increased maximum scale-out limits
Maximum scale-out limits for Functions Linux Premium plans have been increased in a number of regions.
Generally available: Azure Functions .NET Framework support in the isolated worker model
Build your Serverless Apps with Azure Functions in isolated worker model with .NET Framework 4.8.
Public preview: Java 17 Support in Azure Functions
Azure Functions now supports Java 17 in public preview.
Public preview: Automatic backup for App Service Environment V2 and V3
Automatic backup and restore is in preview for App Service Environment V2 and V3.
Generally available: Backup and restore updates for App Service
Utilize automatic backups or make on-demand custom backups in Azure App Service.
Generally available: Azure Red Hat OpenShift landing zone accelerator
The Azure Red Hat OpenShift guide delivers resources that help design, deploy, and maintain well architected Azure Red Hat OpenShift platforms.
Public preview: Billing has started for Azure Monitor Logs data archive
Billing has started for Azure Monitor Logs data archive on September 1, 2022.
The 22.09 release includes a decrease in cold boot time and optimized manufacturing scripts.
Public preview: 128 vCore option for Azure SQL Database standard-series hardware
Scale up your computing power by 60% with a new 128 vCore compute size option on Azure SQL Database.
General availability: Azure Policy built-in definitions for Azure NetApp Files
The new Azure Policy built-in definitions for Azure NetApp Files enable organization admins to restrict creation of unsecure volumes or audit existing volumes.
General availability: Azure NetApp Files new regions and cross-region replication
Azure NetApp Files is now available in additional regions and supports new cross-region replication pairs.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag and drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):