We're nearly at the end of January already. Where did that go? Hopefully you're getting back into the swing of things. If you have any questions about automating your cloud infrastructure or security diagrams, you can hit up the chat icon at the bottom of this page and ask away.
Anyhoo, here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday January 27th 2023.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello.
AWS Managed Services (AMS) achieves FedRAMP High Authorization
AWS Managed Services (AMS) Accelerate has achieved FedRAMP High authorization in AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, which are operated by employees who are U.S. citizens on U.S. soil. You can now use AMS Accelerate with workloads that require FedRAMP High categorization level.
AWS Managed Services (AMS) helps you adopt AWS at scale and operate more efficiently and securely. We leverage standard AWS services and offer operational guidance with specialized automations, skills, and experience that are contextual to your environment and applications.
AMS provides proactive, preventative, and detective capabilities that raise the operational bar and help reduce risk without constraining agility, allowing you to focus on innovation. The AMS Accelerate Operations Plan extends your team with operational capabilities including monitoring, incident detection, security, patch, backup, and cost optimization.
Amazon Personalize simplifies onboarding with data insights
AWS are excited to announce that Amazon Personalize now provides analysis on your data to make onboarding easier than ever. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required.
Amazon Personalize trains custom models for each customer using their unique data. With this launch, Amazon Personalize now analyzes the data you provide and offers suggestions to assist you in improving your data preparation.
The performance of personalization systems depends on providing models with high quality data about users and their interactions with items in your catalog. By identifying potential data deficiencies and providing suggestions to assist customers with remediation, Amazon Personalize makes training performant models easier and reduces the need for troubleshooting.
Generating insights on your data is easy. Simply visit the Amazon Personalize console, open a Dataset from within your Dataset groups, and then choose "Data analysis".
Amazon SageMaker is now available in AWS GovCloud (US-East) Region
Amazon SageMaker is now available in AWS GovCloud (US-East) Region. Starting today, you can build, train, and deploy machine learning (ML) models in the region.
Amazon SageMaker is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.
AWS Compute Optimizer is now available in AWS GovCloud (US) Region
AWS GovCloud (US) Region customers can now use AWS Compute Optimizer to reduce costs and improve performance.
By using machine learning to analyze historical utilization metrics, AWS Compute Optimizer helps customers choose optimal configurations for three types of AWS resources: Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block Store (EBS) volumes, and AWS Lambda functions.
AWS Compute Optimizer is now available in a total of 23 AWS Regions through the AWS Management Console, AWS CLI, or AWS SDK.
AWS Storage Gateway is now available in AWS Europe (Spain) and AWS Europe (Zurich) Regions
AWS Storage Gateway expands availability to the AWS Europe (Spain) and AWS Europe (Zurich) Regions, enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads.
AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud.
AWS Conversational AI Competency Partners implement high-quality chatbot solutions
AWS are excited to highlight AWS Conversational AI Competency Partners, who enable enterprises to implement high-quality, highly effective chatbot, virtual assistant, and Interactive Voice Response (IVR) solutions.
The demand for conversational AI interfaces, like chatbots and voice assistants, continues to grow as users prefer to interact with businesses on digital channels. Organizations of all sizes are developing chatbots, voice assistants, and IVR solutions to increase user satisfaction, reduce operational costs, and streamline business processes. COVID-19 has further accelerated the adoption, due to social distancing rules and shelter-in-place orders.
AWS Conversational AI partners provide domain expertise, tools, and services to aid in selecting use cases, defining Natural Language Understanding (NLU) Intents and training phrases, designing effective conversational flows, integrating backend services, and testing, monitoring, and measuring in an iterative approach. AWS Conversational AI Competency partners have been vetted by AWS Partner Solution Architects to ensure customers have a high quality experience.
AWS Conversational AI partners enable customers to deploy high quality solutions on AWS, while accelerating time to market.
Announcing Porting Advisor for Graviton
AWS announces the general availability of Porting Advisor for Graviton. The Porting Advisor for Graviton is an open-source command line tool that analyzes source code and generates a report highlighting missing and outdated libraries and code constructs that may require modification along with recommendations for alternatives. It accelerates your transition to AWS Graviton-based instances by reducing the iterative process of identifying and resolving source code and library dependencies.
The Porting Advisor for Graviton is freely available in the AWS GitHub repository to be built from source. The tool scans source code; it does not scan binary files. The Porting Advisor for Graviton does not make any code modifications or API-level recommendations, and it does not send any data back to AWS. The tool supports C/C++, Fortran, Go 1.11+, Java 8+, and Python 3+ and can run on x86-based and Arm64-based machines.
Amazon FSx for NetApp ONTAP is now available in the AWS Middle East (UAE) Region
AWS customers can now create Amazon FSx for NetApp ONTAP file systems in the AWS Middle East (UAE) Region.
Amazon FSx makes it easy and cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities.
Amazon FSx for NetApp ONTAP is a storage service that provides the familiar features, performance, capabilities, and APIs of ONTAP file systems with the agility, scalability, and simplicity of a fully managed AWS service.
Announcing the general availability of AWS Local Zones in Lagos, Lima, and Querétaro
AWS Local Zones are now available in three new metro areas—Lagos, Lima, and Querétaro. You can now use these AWS Local Zones to deliver applications that require single-digit millisecond latency or local data processing.
In early 2022, AWS announced plans to launch AWS Local Zones in over 30 metro areas across 27 countries outside of the US.
AWS Local Zones are also generally available in 12 metro areas outside of the US (Bangkok, Buenos Aires, Copenhagen, Delhi, Helsinki, Hamburg, Kolkata, Muscat, Perth, Santiago, Taipei, and Warsaw) and 16 metro areas in the US (Atlanta, Boston, Chicago, Dallas, Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, Minneapolis, New York City, Philadelphia, Phoenix, Portland, and Seattle).
Amazon VPC IP Address Manager (IPAM) now manages IP Addresses in your network outside your AWS Organization
Amazon VPC IP address manager (IPAM) now lets you manage IP addresses in accounts outside of your AWS Organization using AWS Resource Access Manager (RAM). This simplifies your IP management workflows (e.g., plan, track, and monitor) by enabling you to use a single IPAM across all your AWS accounts.
Your network can extend to VPCs that are in accounts outside your organization. For example, these accounts can represent another line of business in your company or a managed service hosted by a partner on your behalf. With today’s release, you can share your IPAM managed IP addresses with accounts in other organizations.
Using your shared IP addresses, owners of these accounts can create VPCs or Elastic IP addresses that conform to your connectivity needs. Today's release also lets you monitor IP addresses (including Elastic IPs) in other organizations, helping you avoid connectivity issues caused by misallocated IP addresses, such as overlapping ranges, and improving your auditing workflows.
This new feature is available in all AWS Regions where VPC IP Address Manager (IPAM) is available: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Canada (Central), Europe (Dublin), Europe (Frankfurt), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US West (Northern California), US East (N. Virginia), US East (Ohio), US West (Oregon), AWS GovCloud (US-East), and AWS GovCloud (US-West).
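As a sketch of how the cross-organization sharing fits together, the dictionary below mirrors the parameter shape of AWS RAM's CreateResourceShare API (for example via boto3's `ram` client). The pool ARN and account IDs are placeholders; the key point is that sharing with an account outside your own AWS Organization requires allowing external principals:

```python
# Hedged sketch: parameter shape for AWS RAM's CreateResourceShare API
# (boto3-style keys). All identifiers below are placeholders.
share_params = {
    "name": "ipam-pool-share",
    # ARN of the IPAM pool you want to share (placeholder).
    "resourceArns": [
        "arn:aws:ec2::123456789012:ipam-pool/ipam-pool-0example"
    ],
    # An account ID that lives in a *different* AWS Organization.
    "principals": ["210987654321"],
    # Required when sharing with principals outside your organization.
    "allowExternalPrincipals": True,
}
print(share_params["allowExternalPrincipals"])  # True
```

The same share can list multiple principals, so a single IPAM can serve accounts across several organizations.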
AWS announces Amazon-provided contiguous IPv6 CIDR blocks
AWS announces the general availability of Amazon-provided contiguous IPv6 Classless Inter-Domain Routing (CIDR) blocks with Amazon VPC IP Address Manager (IPAM). Within IPAM, customers can create IPv6 public-scoped pools and provision them with Amazon-provided contiguous CIDR blocks.
Now, AWS customers can provision Amazon provided IPv6 CIDR blocks from /52 up to /40 in size into separate pools for association to Virtual Private Clouds (VPCs). Contiguous CIDR blocks can be used for sequential VPC creation. CIDRs can then be aggregated in a single entry across networking and security constructs like access control lists, route tables, security groups, and firewalls.
Before today, AWS customers could use bring your own IP addresses (BYOIP) to obtain sequential VPC CIDRs. This meant purchasing an IP range from your Regional Internet Registry (RIR), with AWS validating ownership before the CIDR could be used. Alternatively, customers could create a VPC directly with an Amazon-provided IPv6 CIDR block.
In this case, the CIDR is not sequential with any existing customer CIDRs, and it is ephemeral, existing within the customer account for the life of the VPC. With Amazon provided contiguous IPv6 CIDR blocks, network administrators can provision an IPv6 CIDR block with just a few clicks.
Then they can plan, segment, and allocate IP space based on different use cases such as applications, teams, or environments. Now, customers will retain these CIDR blocks beyond the life of the VPC. Amazon provided IPv6 CIDR blocks are available in a default size of /52, which supports addressing for up to 16 VPCs. Customers can receive additional and larger CIDR block allocations by request.
Amazon provided contiguous IPv6 CIDR blocks are now available in Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Canada (Central), Europe (Dublin), Europe (Frankfurt), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US West (Northern California), US East (N. Virginia), US East (Ohio), US West (Oregon), AWS GovCloud (US-East), and AWS GovCloud (US-West). It comes at no additional cost.
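To see the arithmetic behind the default allocation, a /52 block subdivides into 16 contiguous /56 VPC CIDRs, and because they are contiguous the whole set aggregates back to a single route or ACL entry. The prefix below is a made-up documentation value, not a real allocation:

```python
import ipaddress

# Hypothetical contiguous IPv6 block at the default /52 size.
block = ipaddress.ip_network("2600:1f13:a0b:5000::/52")

# Each VPC takes a /56, so a /52 yields 2**(56 - 52) = 16 sequential CIDRs.
vpc_cidrs = list(block.subnets(new_prefix=56))
print(len(vpc_cidrs))   # 16
print(vpc_cidrs[0])     # 2600:1f13:a0b:5000::/56

# Contiguous CIDRs collapse to one aggregate entry for route tables,
# security groups, ACLs, and firewalls.
summary = list(ipaddress.collapse_addresses(vpc_cidrs))
print(summary)          # [IPv6Network('2600:1f13:a0b:5000::/52')]
```

That single aggregate entry is the practical payoff: one firewall rule instead of sixteen.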
Amazon OpenSearch Serverless is now generally available
Now generally available, Amazon OpenSearch Serverless is a new serverless option for Amazon OpenSearch Service. OpenSearch Serverless streamlines the process of running petabyte-scale search and analytics workloads without having to configure, manage, or scale OpenSearch clusters. OpenSearch Serverless automatically provisions and scales the underlying resources to deliver fast data ingestion and query responses for even the most demanding and unpredictable workloads. With OpenSearch Serverless, you pay only for the resources consumed.
OpenSearch Serverless decouples compute and storage. It also separates the indexing (ingest) components from the search (query) components, with Amazon Simple Storage Service (Amazon S3) as the primary data storage for indexes. With this decoupled architecture, OpenSearch Serverless can scale search and index functions independently of each other and independently of the indexed data in Amazon S3.
Previously available in preview, OpenSearch Serverless introduces a number of enhancements with this general availability release, including scale-in support and availability in three additional Regions. OpenSearch Serverless is now available in eight AWS Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).
AWS Managed Services (AMS) customers can now change response for Config Rules
We’re excited to announce the ability to change the default AMS response for Config Rules in the Accelerate operations plan. With this release, customers can choose whether they want AMS to remediate, ask for customer approval, or simply add alerts from AMS-supported security Config Rules to a monthly report.
By adjusting the default response, you can increase conformance by setting more Config Rules for remediation. When you select remediation of a finding, AMS response is quick and consistent. Findings can also create a case asking for your approval or just be reported during your next Monthly Business Review (MBR). You can set up multiple responses for a Config Rule that are matched to the account and resources based on tags.
With this launch, Accelerate customers can enforce the remediation of non-compliant resources and request to be contacted only when they want to take a second look. For example, customers can change the default response for unencrypted S3 buckets to ‘ask for approval’ for specific accounts.
You can also add additional responses, such as ‘remediate’ for unencrypted S3 buckets with the tag key-value pair “Regulated: True” and ‘report-only’ for S3 buckets tagged “Regulated: False”. You can start with the default configuration provided by AMS while your Cloud Architect helps you modify responses per account according to your preferences.
AWS Pricing Calculator now supports optimized pricing estimation for EC2 Dedicated Hosts
The Windows Server and SQL Server on Amazon EC2 calculator now supports Dedicated Hosts price estimation. With this feature, you can generate an optimized price estimate for bring-your-own-license (BYOL) scenarios on Dedicated Hosts.
You can use the Windows Server and SQL Server on Amazon EC2 Pricing Calculator to calculate price estimates for shared tenancy EC2 instances and for BYOL scenarios on Dedicated Hosts. To estimate prices for BYOL scenarios, you first need to provide a list of machine configurations, along with vCPU and memory requirements, as input to the calculator.
Based on this input, you will receive recommended Dedicated Hosts, such as the c5 or r5 instance family. The recommendations also include optimized instance packing within each Dedicated Host. On choosing a pricing model such as On-Demand, Savings Plans, or Reserved Instances, you will be able to estimate an optimized price. This price estimate can be exported or shared with other users.
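To make the "optimized instance packing" idea concrete, here is a toy first-fit-decreasing sketch of the general bin-packing technique such a calculator could apply. This is an illustration only, not the calculator's actual algorithm, and the host capacity and VM sizes are invented:

```python
def pack_first_fit_decreasing(vcpu_demands, host_capacity):
    """Toy bin-packing: place each VM on the first host with room,
    considering the largest VMs first."""
    hosts = []  # each host is a list of placed VM vCPU counts
    for demand in sorted(vcpu_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # open a new Dedicated Host
    return hosts

# Hypothetical BYOL inventory: vCPUs needed per VM, on 72-vCPU hosts.
placement = pack_first_fit_decreasing([8, 16, 36, 36, 8, 32], 72)
print(len(placement))  # 2 hosts, far fewer than one VM per host
```

Packing more VMs per host directly lowers the estimate, since Dedicated Hosts are billed per host rather than per instance.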
Increased field limits and performance improvements for Pivot table in Amazon QuickSight
Amazon QuickSight pivot tables now support more data than ever before with new field limits, unlocking advanced use cases. The value field well limit has been increased from 20 to 40, and the Rows and Columns field wells have moved from 20 fields each to a combined limit of 40. For example, if you have 34 fields in Rows, you can add at most 40 - 34 = 6 fields to the Columns field well.
To support this increase in field limits, we are also launching performance improvements that load pivot tables faster. Instead of fetching all the data in the viewport, QuickSight now fetches only the data for visible (expanded) fields, along with a small subset of values under each collapsed field.
This ensures the data fetched by each new query is used to render values that can be displayed immediately. We have seen customers improve their load times by 2x to 10x, depending on the complexity of their dataset.
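The combined Rows/Columns limit is easy to reason about as simple arithmetic. The 40-field figures come straight from the announcement; the helper itself is just an illustrative sketch:

```python
COMBINED_ROW_COLUMN_LIMIT = 40  # Rows + Columns now share one limit of 40
VALUES_LIMIT = 40               # value field well limit, up from 20

def remaining_column_fields(row_fields: int) -> int:
    """How many Columns fields a pivot table can still accept,
    given the number of Rows fields already placed."""
    return max(COMBINED_ROW_COLUMN_LIMIT - row_fields, 0)

print(remaining_column_fields(34))  # 6, matching the example above
```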
Announcing comprehensive controls management with AWS Control Tower
This week AWS are excited to announce the launch of comprehensive controls management in AWS Control Tower, a set of new features that enhances AWS Control Tower’s governance capabilities. You can now programmatically implement controls at scale across your multi-account AWS environments within minutes, so you can more quickly vet, allow-list, and begin using AWS services.
With comprehensive controls management in AWS Control Tower, you can reduce the time it takes to define, map, and manage the controls required to meet your most common control objectives such as enforcing least privilege, restricting network access, and enforcing data encryption.
As customers begin to use AWS services, many take an allow-list approach — only allowing use of AWS services that have been vetted and approved — to balance their security and compliance requirements with the need to be agile. This restricts developer access to AWS services until risks are defined and controls implemented. AWS Control Tower’s new proactive control capabilities leverage AWS CloudFormation Hooks to identify and block noncompliant resources before they are provisioned by CloudFormation.
AWS Control Tower’s new proactive controls complement AWS Control Tower’s existing control capabilities, enabling you to disallow actions that lead to policy violations and detect noncompliance of resources at scale. AWS Control Tower provides updated configuration and technical documentation so you can more quickly benefit from AWS services and features. AWS Control Tower provides you a consolidated view of compliance status across your multi-account environment.
Announcing runtime management controls for AWS Lambda
This week AWS announced the general availability of runtime management controls for AWS Lambda. The operational simplicity of automatic runtime updates is one of the features customers most like about Lambda. This release provides customers running critical production workloads with more visibility and control over when runtime updates are applied to their functions.
For each runtime, Lambda provides a managed execution environment, which includes the underlying Amazon Linux OS, programming language runtime, and AWS SDKs. Lambda takes care of applying patches and security updates to all of these components, allowing customers to delegate patching responsibility to Lambda.
With this release, the updates made to the managed runtimes provided by Lambda are now visible to customers as distinct runtime versions. Customers also have more control over when Lambda updates their functions to a new runtime version, either automatically or synchronized with customer-driven function updates. In the very rare event of an unexpected runtime incompatibility with an existing function, they can also roll back to an earlier runtime version.
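A hedged sketch of the new control surface: these are the parameters you would pass to Lambda's PutRuntimeManagementConfig API (for example via boto3). The function name and runtime version ARN are placeholders; `UpdateRuntimeOn` accepts `Auto`, `FunctionUpdate`, or `Manual`:

```python
# Sketch: request shape for Lambda's PutRuntimeManagementConfig API
# (boto3-style keys). Identifiers below are placeholders.
# "Auto" keeps the default automatic updates; "FunctionUpdate" applies
# runtime updates only when you next deploy the function; "Manual" pins
# a specific runtime version, e.g. to roll back after an incompatibility.
runtime_config = {
    "FunctionName": "my-critical-function",  # placeholder
    "UpdateRuntimeOn": "Manual",
    # Only used with "Manual": the runtime version to pin (placeholder ARN).
    "RuntimeVersionArn": "arn:aws:lambda:us-east-1::runtime:0example",
}
print(runtime_config["UpdateRuntimeOn"])  # Manual
```

Most workloads can stay on the default automatic mode; the pinning modes exist for the rare incompatibility case the announcement mentions.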
Amazon Polly launches five new male NTTS voices
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and build entirely new categories of speech-enabled products. Today, we are excited to announce the general availability of five new male neural Text-to-speech (NTTS) voices: Sergio for Castilian Spanish, Andrés for Mexican Spanish, Rémi for French, Adriano for Italian, and Thiago for Brazilian Portuguese.
This update leverages cutting-edge technology that uses characteristics of existing NTTS voices to build new voice options in different languages. We applied some of the vocal characteristics of the US English Matthew voice to five new language variants, creating an opportunity for customers to serve content in different languages using the same voice persona.
Amazon Polly now offers both male and female voices for each locale: Spain, Mexico, France, Italy, and Brazil. Andrés is our first male Mexican Spanish voice, while Sergio, Rémi, Adriano, and Thiago are new neural conversational male voices that are available alongside existing standard voices - Enrique, Mathieu, Giorgio, and Ricardo.
Amazon MQ now supports RabbitMQ version 3.8.34
Amazon MQ now provides support for RabbitMQ version 3.8.34, which includes several fixes to the previously supported version, RabbitMQ 3.8.30. Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. You can reduce your operational burden by using Amazon MQ to manage the provisioning, setup, and maintenance of message brokers. Amazon MQ connects to your current applications with industry-standard APIs and protocols to help you easily migrate to AWS without having to rewrite code.
If you are running RabbitMQ 3.8.30 or earlier, we encourage you to upgrade to RabbitMQ 3.8.34 or RabbitMQ 3.9.24. This can be accomplished with just a few clicks in the AWS Management Console. If your broker has automatic minor version upgrade enabled, AWS will automatically upgrade the broker to version 3.8.34 during a future maintenance window. To learn more about upgrading, please see - Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide.
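For those automating the upgrade rather than clicking through the console, the change boils down to a single UpdateBroker call. Sketched below as boto3-style parameters; the broker ID is a placeholder:

```python
# Sketch: parameter shape for Amazon MQ's UpdateBroker API
# (boto3-style keys). The broker ID is a placeholder. The new engine
# version takes effect at the next maintenance window unless the broker
# is rebooted sooner.
update_params = {
    "BrokerId": "b-0example-1234-5678-9abc-def000000000",  # placeholder
    "EngineVersion": "3.8.34",  # or "3.9.24" to jump to the 3.9 line
}
print(update_params["EngineVersion"])  # 3.8.34
```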
Amazon SageMaker Automatic Model Tuning now allows you to specify environment variables for your tuning jobs
SageMaker Automatic Model Tuning allows you to find the most accurate version of your machine learning model by searching for the optimal set of hyperparameter configurations. Previously, you could only specify environment variables for your algorithm runtime in your SageMaker Training jobs, but not in your tuning jobs. Starting today, you have the flexibility to specify runtime environment variables for your scripts in your CreateTuningJob API.
With this launch, you can specify different behavior and configurations for your training jobs through the environment variables you pass in the CreateTuningJob request. This also makes it easier to reuse your training job definitions to start a tuning job. For example, you can enable more fine-grained logging for all your training jobs by setting an environment variable at the tuning level, or you can specify the source of your data and customize the train/test split directly through environment variables.
The ability to provide environment variables in SageMaker Automatic Model Tuning is now available in all commercial AWS regions and applicable to all tuning jobs.
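A minimal sketch of where the new field sits in the request (the underlying boto3 call is `create_hyper_parameter_tuning_job`). The job name and the environment variables themselves are hypothetical placeholders:

```python
# Sketch: the new Environment map inside TrainingJobDefinition for
# SageMaker hyperparameter tuning jobs. Identifiers are placeholders.
tuning_request = {
    "HyperParameterTuningJobName": "demo-tuning-job",  # placeholder
    "TrainingJobDefinition": {
        # New with this launch: runtime environment variables for your
        # training script, set once at the tuning-job level and applied
        # to every training job the tuner launches.
        "Environment": {
            "LOG_LEVEL": "DEBUG",        # hypothetical logging variable
            "TRAIN_TEST_SPLIT": "0.8",   # hypothetical split variable
        },
        # ...AlgorithmSpecification, RoleArn, resource and input config
        # would complete the definition here...
    },
}
print(sorted(tuning_request["TrainingJobDefinition"]["Environment"]))
```

Because the variables live on the training job definition, the same definition can be reused across tuning jobs with different runtime behavior.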
Amazon RDS Blue/Green Deployments now supports Aurora MySQL 3 (with MySQL 8.0 compatibility) as a source cluster
Amazon Aurora now supports Aurora MySQL 3 (with MySQL 8.0 compatibility) as a source cluster or blue environment within Amazon RDS Blue/Green Deployments. This enables you to use Blue/Green Deployments for minor version upgrades for Aurora MySQL 3 (with MySQL 8.0 compatibility).
Blue/Green Deployments help you with safer, simpler, and faster updates to your Amazon Aurora and Amazon RDS databases. Blue/Green Deployments create a fully managed staging environment that allows you to deploy and test production changes, keeping your current production database safe. With a single click, you can promote the staging environment to be the new production system in as fast as a minute, with no changes to your application and no data loss.
Amazon RDS Blue/Green Deployments is available for all versions of Amazon Aurora with MySQL, Amazon RDS for MySQL 5.7 and higher, and Amazon RDS for MariaDB 10.2 and higher in all AWS Regions, including the AWS GovCloud (US) Regions. In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about Blue/Green Deployments on the Amazon RDS features page and AWS News Blog.
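As a sketch of the API path, a Blue/Green minor version upgrade starts from a CreateBlueGreenDeployment call; the boto3-style parameters are shown below, with a placeholder source cluster ARN and an example target version:

```python
# Sketch: parameter shape for RDS's CreateBlueGreenDeployment API
# (boto3-style keys). The source ARN is a placeholder.
bluegreen_params = {
    "BlueGreenDeploymentName": "aurora-mysql3-minor-upgrade",
    # The "blue" environment: your current Aurora MySQL 3 cluster.
    "Source": "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-mysql3",
    # The "green" staging environment is created at this engine version
    # (example Aurora MySQL 3 / MySQL 8.0-compatible version string).
    "TargetEngineVersion": "8.0.mysql_aurora.3.03.0",
}
print(bluegreen_params["BlueGreenDeploymentName"])
```

Once the green environment has been tested, a separate switchover call promotes it to production.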
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
Amazon RDS Multi-AZ with two readable standbys for RDS PostgreSQL now supports inbound replication
Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports inbound replication from Amazon RDS Single-AZ database (DB) instances and Amazon RDS Multi-AZ DB instances with one standby to Amazon RDS Multi-AZ deployments with two readable standbys. You can use this inbound replication to help migrate your existing Amazon RDS PostgreSQL deployments to Amazon RDS Multi-AZ deployments with two readable standbys, which have one writer instance and two readable standby instances across three availability zones.
By creating a Multi-AZ deployment with two readable standbys as a read replica of your existing RDS PostgreSQL database instance, you can promote the read replica to be your new primary, typically within minutes.
Amazon RDS Multi-AZ deployments provide enhanced availability and durability, making them a natural fit for production database workloads. Deployment of Amazon RDS Multi-AZ with two readable standbys supports up to 2x faster transaction commit latencies than a Multi-AZ deployment with one standby instance.
In this configuration, automated failovers typically take under 35 seconds. In addition, the two readable standbys can also serve read traffic without needing to attach additional read replicas.
For the full list of Amazon RDS Multi-AZ with two readable standbys regional availability and supported engine versions, refer to the Amazon RDS User Guide.
Amazon AppFlow announces 10 new connectors
Amazon AppFlow announces the release of 10 new data connectors for Software-as-a-Service (SaaS) applications. The new connectors enable you to transfer your data from Asana, Delighted, Google Calendar, Intercom, JDBC, PayPal, Pendo, Smartsheet, Snapchat Ads, and WooCommerce. These new connectors make it easier for customers to access their data for use cases such as data lake hydration, analytics and machine learning, and data retention.
Amazon AppFlow is a fully-managed integration service that enables you to securely transfer your data between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Google Analytics, Facebook Ads, ServiceNow, and AWS services like Amazon S3 and Amazon Redshift without writing code.
AWS Elemental MediaLive adds timecode burn-in
AWS Elemental MediaLive now supports burning visible timecode into video outputs. While visual timecode is not generally used when delivering video to viewers, this feature is useful for testing, monitoring, and compliance. MediaLive allows configuration on an individual output basis, so you can separate audience-delivered outputs from outputs with burned-in timecode for technical monitoring. For more information on how to enable this feature, visit the MediaLive documentation for timecode configuration. Timecode burn-in in MediaLive is available at no additional cost.
AWS Elemental MediaLive is a broadcast-grade live video processing service. It lets you create high-quality live video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes.
The MediaLive service functions independently or as part of AWS Media Services, a family of services that form the foundation of cloud-based workflows and offer you the capabilities you need to transport, create, package, monetize, and deliver video. Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaLive is available.
Amazon Detective adds Amazon VPC Flow Logs visualizations for Amazon EKS workloads
Amazon Detective now adds visual summaries and analytics about your Amazon Virtual Private Cloud (VPC) flow logs from your Amazon Elastic Kubernetes Service (EKS) workloads. This new capability visualizes all network traffic from your EKS workloads and allows you to quickly answer questions like “what ports or network services were in use by my EKS workloads?”, “were there any large data transfers from my EKS workloads?”, and “what IP addresses were connected to my EKS workloads?” These details help security analysts investigate potential security issues, diagnose unexpected network behavior, and identify other AWS resources that might be affected.
Amazon Detective automatically collects VPC flow logs from your monitored AWS accounts. Before today, Detective would allow you to interactively examine VPC flow log information for your Amazon Elastic Compute Cloud (Amazon EC2) instances. Now Detective allows you to examine VPC flow log information for your EKS workloads, display visual summaries about these network flows, and aggregate information by EKS pods.
To take advantage of this new capability, enable EKS audit logs as a data source in your Detective behavior graph. The first 30 days of EKS audit logs as a data source are available at no additional charge for existing Detective accounts. For new accounts, EKS audit logs as a data source is automatically enabled as part of the 30-day free trial.
During the trial period, you can see the estimated cost after the trial period ends in the Detective Management Console. If you have already enabled EKS audit logs as a data source, then you’ll see network visualizations under the profile panel for your EKS pods.
Amazon Aurora Supports PostgreSQL 14.6, 13.9, 12.13, 11.18
Following the announcement of updates to the PostgreSQL database by the open source community, AWS have updated Amazon Aurora PostgreSQL-Compatible Edition to support PostgreSQL 14.6, 13.9, 12.13, and 11.18. These releases contain product improvements and bug fixes made by the PostgreSQL community, along with Aurora-specific improvements. Refer to the Aurora version policy to help you decide how often to upgrade and how to plan your upgrade process. As a reminder, if you are running any version of Amazon Aurora PostgreSQL 10, you must upgrade to a newer major version by January 31, 2023.
You can initiate a minor version upgrade manually by modifying your DB cluster. We recommend you enable the Auto minor version upgrade option when creating or modifying a DB cluster. For more details, see Automatic Minor Version Upgrades for PostgreSQL.
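For the manual path, the modification boils down to a ModifyDBCluster call targeting the new minor version; sketched below as boto3-style parameters with a placeholder cluster identifier:

```python
# Sketch: parameter shape for RDS's ModifyDBCluster API (boto3-style
# keys) to move an Aurora PostgreSQL cluster to a new minor version.
# The cluster identifier is a placeholder.
modify_params = {
    "DBClusterIdentifier": "my-aurora-pg-cluster",  # placeholder
    "EngineVersion": "14.6",
    # Apply now rather than waiting for the next maintenance window.
    "ApplyImmediately": True,
}
print(modify_params["EngineVersion"])  # 14.6
```

With `ApplyImmediately` left off, the change is instead queued for the next maintenance window.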
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
AI Platform Training
Runtime version 2.11 is available. You can use runtime version 2.11 to train with TensorFlow 2.11, scikit-learn 1.0.2, or XGBoost 1.6.1. Runtime version 2.11 supports training with CPUs, GPUs, or TPUs.
See the full list of updated dependencies in runtime version 2.11.
Access Transparency supports Cloud NAT in GA stage. For the complete list of services that Access Transparency supports, see Supported services.
Anthos Clusters on VMware
Anthos clusters on VMware 1.12.5-gke.34 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.5-gke.34 runs on Kubernetes 1.23.15-gke.2400.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.
In the vSphere CSI driver, async-query-volume is now enabled and trigger-csi-fullsync is disabled. This change ensures volume operations are idempotent.
If you specify a CIDR range (subnet) in the IP block file for your cluster nodes, the broadcast IP of the subnet, the network CIDR IP, and the network gateway IP will be excluded from the pool of addresses that get assigned to nodes.
Fixed a known issue where CIDR ranges cannot be used in the IP block file.
Fixed a bug where CA rotation appeared as an unsupported change during admin cluster update.
Anthos Config Management
The constraint template library's K8sPSPForbiddenSysctls template now supports an allow-list of sysctls using the new allowedSysctls parameter. For reference, see Constraint template library.
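As a sketch of how the new parameter could be used (the constraint name and sysctl values here are made up for illustration; check the constraint template library reference for the authoritative schema), you might forbid all sysctls while allow-listing a specific one:

```shell
# Hypothetical constraint: forbid every sysctl except an explicit allow-list.
kubectl apply -f - <<'EOF'
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPForbiddenSysctls
metadata:
  name: forbid-sysctls-with-allowlist
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    forbiddenSysctls: ["*"]                           # forbid everything by default
    allowedSysctls: ["net.ipv4.ip_local_port_range"]  # except this allow-listed entry
EOF
```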
Config Sync now includes resource-related metrics labels in Google Cloud Monitoring. These labels were previously added to the Prometheus monitoring pipeline in Config Sync version 1.14.0. The labels are available under the "Group By" filter options in the Google Cloud Console. For more information on metrics, see Monitoring Config Sync.
Policy Controller has been updated to include a more recent build of OPA Gatekeeper (hash: c61db24).
Fixed an issue where the nomos image did not contain the nomos CLI.
Anthos Service Mesh
You can now download 1.13.9-asm.10 for in-cluster Anthos Service Mesh. It includes the features of Istio 1.13.9 subject to the list of supported features.
Cloud Build repositories (2nd gen) lets you easily create and manage repository connections, not only through the Cloud Console but also through gcloud and the Cloud Build API. Cloud Build repositories (2nd gen) is available for GitHub and GitHub Enterprise repositories at the preview release stage. To learn more, see the Repositories overview page.
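A sketch of the gcloud flow might look like the following (the connection name, repository name, region, and remote URI are all placeholders, and exact flags may differ while the feature is in preview):

```shell
# Create a 2nd-gen connection to GitHub in a region.
# Completing the connection requires authorizing the Cloud Build GitHub app.
gcloud builds connections create github my-github-connection \
    --region=us-central1

# Link a repository through that connection.
gcloud builds repositories create my-repo \
    --connection=my-github-connection \
    --region=us-central1 \
    --remote-uri=https://github.com/my-org/my-repo.git
```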
The Cloud Composer 1.20.4 and 2.1.4 release started on January 25, 2023. Get ready for upcoming changes and features as the new release rolls out to all regions. The rollout is currently in progress, so listed changes and features might not be available in some regions yet.
Airflow 2.4.3 is available in Cloud Composer images.
Images with Airflow 2.4.3 use the public version 8.6.0 of the apache-airflow-providers-google package. For more information about changes, see the package's page in the Airflow documentation.
(Airflow 2.2.5 only) The apache-airflow-providers-google package in images with Airflow 2.2.5 was upgraded to 2022.12.20+composer. Changes compared to version
Alerting policies can now forecast, or predict, that a threshold will be violated within a configurable time window. These policies are designed to monitor constraint metrics, like those that record quota, memory, and storage usage. Forecast alerting is in Public Preview. For more information, see Forecast condition.
Announcing the General Availability (GA) release of Dataproc driver node groups.
New Dataproc Serverless for Spark runtime versions:
New sub-minor versions of Dataproc images:
Added support for enabling Hive Metastore OSS metrics by passing hivemetastore to the --metric-sources property during cluster creation.
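For example (the cluster name and region below are placeholders), Hive Metastore metrics could be enabled at cluster creation roughly like this:

```shell
# Create a Dataproc cluster with Hive Metastore OSS metrics collection enabled.
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --metric-sources=hivemetastore
```

Multiple metric sources can be passed as a comma-separated list if you also want other OSS metrics collected.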
Added support for Dataproc Metastore integration with Trino.
Upgraded Parquet to 1.12.2 for 2.1 images.
The value of hive.server2.builtin.udf.blacklist is now set by default in hive-site.xml to prevent arbitrary code execution.
The Balanced compute class is now generally available in Autopilot clusters running GKE version 1.25 and later.
You can now specify a minimum CPU platform in the Balanced compute class in Autopilot clusters running GKE version 1.25 and later if your workloads have specialized CPU requirements such as a high base frequency or optimized power management functionality. For instructions, refer to Choose a minimum CPU platform.
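As an illustrative sketch (the node selector keys below are assumptions based on GKE's documented compute-class selectors; verify the exact labels in the linked instructions), a Pod might request the Balanced compute class with a minimum CPU platform like so:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: balanced-example
spec:
  nodeSelector:
    cloud.google.com/compute-class: Balanced                 # request the Balanced compute class
    cloud.google.com/gke-min-cpu-platform: "Intel Ice Lake"  # assumed label for min CPU platform
  containers:
    - name: app
      image: nginx
EOF
```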
You can now expose randomly assigned host ports in Pods on GKE Autopilot running version 1.24.7-gke.1200 and later or 1.25.3-gke.1100 and later.
Google Cloud VMware Engine
Removed the ability to create stateful outbound firewall rules for new projects and for projects that have not yet created stateful outbound rules. Customers can continue to create firewall rules in the NSX-T Gateway Firewall or NSX-T Distributed Firewall to limit or control outbound access.
Network Intelligence Center
Connectivity Tests now includes a feature that verifies connectivity from a Cloud Run revision to a VM instance, an IP address, or a Google-managed service. For more information, see Create and run Connectivity Tests.
Connectivity Tests now includes a feature that verifies connectivity from an App Engine standard environment version to a VM instance, an IP address, or a Google-managed service. For more information, see Create and run Connectivity Tests.
You can now migrate your producers from Apache Kafka to Pub/Sub Lite with only configuration changes. To check the feasibility of the migration and to perform the migration workflow, refer to the Kafka to Pub/Sub Lite migration guide. This feature is in public preview.
VPC Service Controls
General availability for the following integration:
Microsoft Azure Releases And Updates
Public preview: Incremental snapshots for Ultra Disk Storage
To ensure business continuity, incremental snapshots for Ultra Disks are now available in public preview in the Sweden Central and US West 3 Azure regions.
Generally available: Indirect enterprise agreement on Azure Cost Management and Billing
Manage your enrollment hierarchy, view account usage, and monitor costs directly from the Azure Cost Management and Billing menu in the Azure portal (for indirect enterprise agreement customers).
General availability: Application security groups support for private endpoints
Application security groups (ASGs) support for private endpoints is now generally available.
General availability: Mount Azure Storage as a local share in App Service Windows Code
Mounting an Azure Storage file share as a network share in Windows code (non-container) apps in App Service is now generally available.
General Availability: 5 GB Put Blob
TARGET AVAILABILITY: Q4 2022
The maximum Put Blob size is increasing to 5 GB.
Public Preview: Container insights support for AKS hybrid clusters
Container insights now supports AKS hybrid clusters.
Classic VM retirement: extending retirement date to September 1st 2023
TARGET RETIREMENT DATE: SEPTEMBER 01, 2023
The migration period for your IaaS VMs from Azure Service Manager to Azure Resource Manager has been extended. Please complete the migration by 1 September 2023.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):