All the team at Hava wish you a very merry Christmas and a Happy New Year.
There's lots of exciting things in the pipeline that will start to drop in the new year, so we hope you have a great break and are ready to hit the ground running (with the help of Hava) in 2023.
Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday December 23rd 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
Red Hat OpenShift Service on AWS (ROSA) now provides an AWS Management Console experience that simplifies the process for satisfying the AWS account prerequisites for provisioning and operating ROSA clusters. The new AWS ROSA console page automatically checks whether ROSA prerequisites are met and provides automated configuration and step-by-step guidance where manual configuration is required.
Before a cluster administrator can provision ROSA clusters, their AWS account must have ROSA enabled, adequate service quotas, and the Elastic Load Balancing service-linked role. The console verifies each of these prerequisites for you, helping you provision clusters faster.
AWS Security Hub has released 9 new controls for its AWS Foundational Security Best Practices standard (FSBP) to enhance your cloud security posture management (CSPM). These controls conduct fully automatic checks against security best practices for your AWS account settings and for services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon SageMaker, Amazon API Gateway, Amazon CloudFront, AWS WAF, and AWS CodeBuild. If you have Security Hub set to automatically enable new controls and are already using AWS Foundational Security Best Practices, these new controls will run without any additional action on your part.
With this release, Security Hub now supports 237 security controls to automatically check your security posture in AWS.
The 9 new FSBP controls are:
Starting this week, AWS are launching the AWS Network Firewall service in the AWS Asia Pacific (Jakarta) Region, enabling customers to deploy essential network protections for all their Amazon Virtual Private Clouds (VPCs).
AWS Network Firewall is a managed firewall service that is easy to deploy. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up and maintain the underlying infrastructure. It is integrated with AWS Firewall Manager to give you central visibility and control of your firewall policies across multiple AWS accounts.
This week, AWS are enhancing the AWS Organizations console to enable you to centrally view and update the region opt-in settings for your AWS accounts. With this launch, you can now use the console to easily perform these operations without logging into each account separately.
AWS already launched Organizations console support for alternate contacts and primary contact information, and support for additional account settings will be available in future releases.
AWS Organizations helps you centrally manage and govern your environment as you grow and scale your AWS resources. Your organization’s administrators can now use the console UI in the management account to centrally manage region opt-in settings for member accounts without requiring credentials for each AWS account.
AWS License Manager announces support for cross-region, cross-account tracking of commercial Linux subscriptions you run on AWS. This includes subscriptions purchased as part of EC2 subscription-included AMIs, through AWS Marketplace, or brought to AWS via the Red Hat Cloud Access Program.
You can track subscription usage by number of instances for Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and Ubuntu Pro distributions in the Linux subscriptions tab of the AWS License Manager console.
Once data is discovered and aggregated, you gain insight into all of your instances using commercial Linux subscriptions. You can view historical usage patterns as Amazon CloudWatch dashboards in the AWS License Manager console, and set CloudWatch alarms from within the console to be notified when key thresholds are met for a particular subscription type.
For data analysis, you can view and export the list of EC2 instances by commercial Linux subscription type (or across all subscription types) with instance attributes such as subscription type, AMI ID, instance ID, account ID, region, usage operation, and product code.
This feature is available in all AWS Regions where AWS License Manager is available, except the AWS GovCloud (US) Regions.
AWS Cloud Map is now available in the Europe (Spain), Europe (Zurich) and Asia Pacific (Hyderabad) AWS Regions. AWS Cloud Map is a cloud resource discovery service. With AWS Cloud Map, you can define custom names for your application resources, such as Amazon Elastic Container Service (Amazon ECS) tasks, Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB tables, or other cloud resources.
You can then use these custom names to discover the location and metadata of cloud resources from your applications using AWS SDK and authenticated API queries.
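As a sketch, discovering resources by custom name comes down to a DiscoverInstances call; the namespace, service, and attribute names below are illustrative assumptions, not values from the announcement:

```python
# A minimal sketch of resource discovery with AWS Cloud Map's
# DiscoverInstances API. Assumptions: boto3 is installed and credentials are
# configured; the "example.local" namespace, "payments" service, and "stage"
# attribute are illustrative names you would have registered beforehand.
request = {
    "NamespaceName": "example.local",
    "ServiceName": "payments",
    "QueryParameters": {"stage": "prod"},  # optional attribute filter
}
# With live credentials the call would look like:
# import boto3
# sd = boto3.client("servicediscovery")
# for inst in sd.discover_instances(**request)["Instances"]:
#     print(inst["InstanceId"], inst["Attributes"])
print(sorted(request))
```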
Amazon Relational Database Service (Amazon RDS) now supports renaming Amazon RDS Multi-AZ deployments with two readable standbys. Renaming lets you replace a deployment without changing any application code that references the deployment by name, such as the connection endpoint.
Renaming is used when you restore data from a DB snapshot, or when you do a point-in-time recovery (PITR).
Amazon RDS Multi-AZ deployments provide enhanced availability and durability, making them a natural fit for production database workloads. Deployment of Amazon RDS Multi-AZ with two readable standbys supports up to 2x faster transaction commit latencies than a Multi-AZ deployment with one standby instance.
In this configuration, automated failovers typically take under 35 seconds. In addition, the two readable standbys can also serve read traffic without needing to attach additional read replicas.
For the full regional availability and supported engine versions of Amazon RDS Multi-AZ with two readable standbys, refer to the Amazon RDS User Guide.
Contact Lens for Amazon Connect now provides enhanced controls that enable businesses to select the specific personally identifiable information (PII) types that they want to redact from call and chat transcripts. Today, Contact Lens redacts PII types such as name, email, SSN, and others from the transcripts.
With enhanced PII redaction, businesses now have the flexibility to choose specific PII types to redact (e.g. SSN) while not redacting other data that they want to see. Following are some of the common PII types supported by Contact Lens: name, email, account number, routing number, credit card details, SSN, PIN, phone, and more.
AWS ParallelCluster 3.4 is now generally available and introduces the ability to create HPC clusters that can access and aggregate compute capacity across multiple AWS Availability Zones (AZ) in a Region. Other important features in this release include:
For more details on the release, review the AWS ParallelCluster 3.4 release notes.
Amazon Connect now supports barge-in, a contact center capability that enables managers to help agents resolve customer issues quickly and ensure a superior customer experience. Using barge-in, managers can join and participate in an ongoing customer service call between a contact center agent and customer.
After joining, a manager can speak with the customer, add participants, and even choose to remove an agent, if needed. Using Contact Lens for Amazon Connect, a manager can even define specific rules, such as low sentiment score, in order to receive a real-time alert to join a call.
You can get started with just a couple of clicks by enabling the feature in the AWS Console and then granting permissions to your managers or agents by updating their security profiles. To initiate a new barge-in session, select a contact to join from the real-time monitoring page on the Amazon Connect website, then click “barge” after being connected to the call.
Amazon Connect barge-in is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt) and Europe (London).
Amazon Connect Chat now supports JSON content-type messages, enabling rich, personalized experiences by allowing customers and agents to send and receive structured chat messages in JSON format.
For example, messages can be supplemented with additional data (metadata) to pass relevant information (e.g. recent purchases, order status), send custom interactive messages (e.g. buttons, carousels, cards), display language translations and more.
AWS Batch now enables you to gain visibility into the current state of a terminated or cancelled job. Once you terminate or cancel a job, an isTerminated or isCancelled flag can be observed in the job payload throughout its time in the queue, before the job moves to either a ‘Terminated’ or ‘Cancelled’ state.
With this new flag, you can easily track the status of all active jobs that will be run and separate them from the ones that will eventually be Terminated or Cancelled as they move to the head of the queue. This provides greater visibility over the jobs’ status and helps you track them better through their lifecycle.
You can use the DescribeJobs API operation to query the up-to-date job state. When you call this operation, two new fields, isCancelled and isTerminated, can be observed in the payload. With these fields, you can determine whether a job will be Terminated or Cancelled when it reaches the head of the queue.
Now you can query jobs with finer control over their current status. You no longer need to wait until a job moves to the head of the queue to determine whether it will be run or will be Terminated/Cancelled. The two new fields are now available in all AWS Regions where AWS Batch is currently available. To learn more about AWS Batch, see the AWS Batch User Guide. To learn more about the AWS Batch API, see the AWS Batch API Reference.
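A minimal sketch of how the new flags could be used when inspecting a DescribeJobs response; the job IDs and payload below are illustrative, and in practice the descriptions would come from boto3's `batch.describe_jobs(jobs=[...])` call:

```python
# Sketch: separating AWS Batch jobs that are still queued to run from those
# already flagged for cancellation or termination, based on the new
# isCancelled / isTerminated fields in the DescribeJobs payload.

def partition_jobs(job_descriptions):
    """Return (will_run, will_stop) lists of job IDs."""
    will_run, will_stop = [], []
    for job in job_descriptions:
        if job.get("isCancelled") or job.get("isTerminated"):
            will_stop.append(job["jobId"])
        else:
            will_run.append(job["jobId"])
    return will_run, will_stop

# Illustrative payload, as the flags appear while jobs are still queued:
jobs = [
    {"jobId": "a1", "status": "RUNNABLE", "isCancelled": False, "isTerminated": False},
    {"jobId": "b2", "status": "RUNNABLE", "isCancelled": True,  "isTerminated": False},
    {"jobId": "c3", "status": "PENDING",  "isCancelled": False, "isTerminated": True},
]
print(partition_jobs(jobs))  # → (['a1'], ['b2', 'c3'])
```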
Over-provisioning Amazon ECS task CPU and memory incurs unnecessary cost while under-provisioning them can lead to poor application performance. Compute Optimizer delivers actionable recommendations so you can optimize task CPU and memory for your Amazon ECS services running on Fargate.
Compute Optimizer also quantifies the cost impact of adopting these recommendations, so you can prioritize your optimization efforts based on the size of the savings opportunity. Where necessary, the recommendations include container-level CPU and memory configurations when downsizing tasks, to ensure compatibility between task and associated container configurations.
Compute Optimizer support for Amazon ECS services running on Fargate is now available in a total of 21 AWS Regions, including the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Middle East (Bahrain), Africa (Cape Town), Europe (Milan), and South America (São Paulo) Regions.
Amazon Relational Database Service (Amazon RDS) Optimized Writes now supports R6g and R6gd instances. With Optimized Writes you can improve write throughput by up to 2x at no additional cost. This is especially useful for Amazon RDS for MySQL customers with write-intensive database workloads, commonly found in applications such as digital payments, financial trading, and online gaming.
R6g and R6gd DB instances are powered by Arm-based AWS Graviton2 processors and expand the DB instance choices for Amazon RDS Optimized Writes. MySQL protects you from data loss due to unexpected events, such as a power failure, using a built-in feature called the “doublewrite buffer”. This protection takes up to twice as long, consumes twice as much I/O bandwidth, and reduces the throughput and performance of your database. Amazon RDS Optimized Writes provides up to 2x improvement in write transaction throughput on RDS for MySQL by writing only once, while still protecting you from data loss, and at no additional cost.
Amazon RDS Optimized Writes is available as a default option from RDS for MySQL version 8.0.30 and above. Support for R6g and R6gd DB instances in RDS Optimized Writes is now available in the US East (N. Virginia, Ohio) US West (N. California, Oregon), Canada (Central), Europe (London, Ireland, Paris, Frankfurt, Stockholm, Milan), South America (Sao Paulo), Asia Pacific (Seoul, Mumbai, Sydney, Tokyo, Singapore), and AWS GovCloud (US-East, US-West) AWS Regions.
Amazon RDS now supports integration with AWS Secrets Manager to streamline how you manage your master user password for your RDS database instances. With this feature, RDS fully manages the master user password and stores it in AWS Secrets Manager whenever your RDS database instances are created, modified, or restored.
The new feature supports the entire lifecycle of your RDS master user password, including regular and automatic password rotations, removing the need for you to manage rotations using custom Lambda functions.
RDS integration with AWS Secrets Manager improves your database security by ensuring your RDS master user password is never visible in plaintext to administrators or engineers during your database creation workflow. Furthermore, you have the flexibility to encrypt the secret using your own customer managed KMS key or the KMS key provided by AWS Secrets Manager.
RDS and AWS Secrets Manager provide you the ease and security in managing your master user password for your database instances, relieving you from complex credential management activities such as setting up custom Lambda functions to manage password rotations.
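As a sketch, creating an instance with a Secrets Manager-managed password comes down to setting ManageMasterUserPassword instead of supplying a password yourself; the instance settings below are illustrative assumptions:

```python
# Sketch (assumption: boto3 is installed and credentials are configured).
# ManageMasterUserPassword tells RDS to generate the master user password
# and store it in AWS Secrets Manager, so no MasterUserPassword is supplied.
params = {
    "DBInstanceIdentifier": "mydb",      # illustrative name
    "Engine": "mysql",
    "DBInstanceClass": "db.r6g.large",
    "AllocatedStorage": 20,
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,    # replaces MasterUserPassword
    # Optional: encrypt the secret with your own KMS key instead of the
    # default key provided by Secrets Manager:
    # "MasterUserSecretKmsKeyId": "arn:aws:kms:...",
}
# With live credentials:
# import boto3
# boto3.client("rds").create_db_instance(**params)
print("MasterUserPassword" in params, params["ManageMasterUserPassword"])
```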
Protobuf (Protocol Buffers) is Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data. Protobuf is a popular messaging format among IoT customers in industries like fintech, automotive, and telecommunications because of its ability to efficiently encode device messaging payloads with low overhead and little CPU usage.
AWS IoT Core is a fully managed service that allows you to connect billions of IoT devices to the AWS cloud without provisioning and managing cloud infrastructure. Rules Engine is a feature of AWS IoT Core that allows you to filter, decode, and process IoT device data and route it to 15+ AWS and third-party services.
To get started, create and upload a Protobuf descriptor file to one of your S3 buckets. The descriptor file contains the schema used to transform Protobuf to JSON. You can then ingest Protobuf-encoded data from IoT devices and decode it to JSON format using the decode function in AWS IoT Core’s Rules Engine before routing the data to different AWS and third-party services.
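As a rough sketch, a rule's SQL statement would invoke the decode function along these lines; the bucket name, object key, proto file name, message type, and topic filter are all placeholders, and the exact argument order should be checked against the AWS IoT SQL reference:

```python
# Illustrative AWS IoT rule SQL that decodes a Protobuf payload to JSON.
# All names below are placeholders (assumptions), and the decode() argument
# order should be verified against the AWS IoT SQL reference documentation.
rule_sql = (
    "SELECT VALUE decode(*, 'proto', 'my-descriptor-bucket', "
    "'descriptors/filedescriptor.desc', 'telemetry.proto', 'Telemetry') "
    "FROM 'device/+/data'"
)
print(rule_sql)
```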
AWS Transfer Family announces built-in support for PGP decryption of files uploaded over SFTP, FTPS or FTP to Amazon S3 or Amazon EFS. Customers can now configure and automate decryption of files that are encrypted using PGP keys by their users before upload, making it easy to meet their data protection and compliance requirements when exchanging sensitive data with third parties.
AWS Transfer Family provides managed workflows that allow you to create, automate and monitor a linear sequence of steps for post-upload processing of files received via AWS Transfer Family. With this launch, you can use a new, built-in and fully managed workflow step to automatically decrypt uploaded files using PGP keys.
You can configure your PGP decryption tasks with just a few clicks in the AWS console, without writing any code or licensing third-party solutions. Using AWS Secrets Manager, you can set up a single PGP key to decrypt all files received via AWS Transfer Family, or specify user-specific PGP keys. In addition, you can monitor and audit your file decryption tasks using Amazon CloudWatch logs.
This week, Amazon Connect announces general availability of Microsoft Edge Chromium browser support. Edge Chromium has grown in popularity and many companies have selected it as their preferred or enterprise-wide standard browser. With this launch, customers now have the option to use the Edge Chromium browser in all regions where Amazon Connect is available.
Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers.
This week, AWS introduced a new account-level, vCPU-based quota for EMR Serverless. The account-level, per-Region vCPU-based quota allows you to control the maximum number of aggregate vCPUs your applications can scale up to within a Region. Configuring such quotas can help you avoid runaway workloads or inefficient queries.
With this feature, Amazon EMR Serverless provides you with two quota controls to help you manage costs. You can manage this account level vCPU-based quota in the AWS Service Quotas Management console to set the maximum vCPUs for all applications in your account. You can also configure an application level maximum capacity configuration to set the maximum vCPUs an individual application can scale up to.
Amazon Aurora Serverless v2, the next version of Aurora Serverless, is now available in 22 regions, including Africa (Cape Town) and Europe (Milan).
Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. Aurora Serverless v2 scales instantly to support even the most demanding applications. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. You don’t need to manage database capacity, and you pay for only the resources consumed by your application.
Aurora Serverless v2 provides the full breadth of Amazon Aurora capabilities, including Multi-AZ support, Global Database, RDS Proxy, and read replicas. Amazon Aurora Serverless v2 is ideal for a broad set of applications. For example, enterprises that have hundreds of thousands of applications, or software as a service (SaaS) vendors that have multi-tenant environments with hundreds or thousands of databases, can use Aurora Serverless v2 to manage database capacity across the entire fleet.
Now, you can use PartiQL, a SQL-compatible query language, to query, insert, update, and delete data in DynamoDB tables. PartiQL for DynamoDB lets you interact with DynamoDB tables and run ad-hoc queries from the AWS Management Console and the AWS Command Line Interface.
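A minimal sketch of running a PartiQL statement through DynamoDB's ExecuteStatement API; the "Music" table, its attributes, and the parameter value are illustrative assumptions:

```python
# Sketch (assumption: boto3 is installed and credentials are configured).
# PartiQL statements run through the ExecuteStatement API; Parameters
# substitute for the '?' placeholders, in DynamoDB attribute-value format.
request = {
    "Statement": 'SELECT * FROM "Music" WHERE Artist = ?',
    "Parameters": [{"S": "Acme Band"}],
}
# With live credentials:
# import boto3
# items = boto3.client("dynamodb").execute_statement(**request)["Items"]
print(request["Statement"])
```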
Amazon RDS now supports X2iedn DB instances for RDS for MySQL and RDS for PostgreSQL. X2iedn instances are ideal for read-intensive and high-throughput write workloads, and support recently announced features such as Amazon RDS Optimized Writes and Amazon RDS Multi-AZ deployments with two readable standbys.
X2iedn instances are powered by 3rd generation Intel Xeon Scalable processors (Ice Lake). X2iedn offers 4x the memory of R6i, up to a maximum of 4 TiB, and up to 100 Gbps of network throughput to Amazon Elastic Block Store (Amazon EBS). Compared to equivalent R6i instances, X2iedn instances offer up to 3.5x the transaction throughput for RDS for MySQL and up to 1.8x the transaction throughput for RDS for PostgreSQL.
Amazon RDS X2iedn DB instances are supported on RDS for MySQL version 8.0.28 and higher and RDS for PostgreSQL version 13.7 and higher, and version 14.5 and higher. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. Get started by creating a fully managed X2iedn database instance using the Amazon RDS Management Console.
This week, AWS were excited to announce that Amazon Transcribe now supports batch transcription in two new languages - Swedish and Vietnamese. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications.
Amazon Transcribe enables organizations to increase the accessibility and discoverability of their audio and video content, serving a breadth of use cases. For instance, contact centers can transcribe recorded calls for downstream analysis to better understand conversation insights to improve customer experience and agent productivity.
Content producers and media distributors can automatically generate transcriptions for subtitles to improve content accessibility. Enterprises can transcribe meetings to make the content accessible and searchable by detecting key terms.
These new languages are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (São Paulo).
AWS are excited to announce support for the Amazon Graviton2 instance family in four additional regions. Supported instance types include general purpose (M6g), compute optimized (C6g), and memory optimized (R6g and R6gd) instances. Support for C6g, M6g, R6g and R6gd is available in the Europe (Paris) region. In addition, the Asia Pacific (Mumbai), South America (Sao Paulo), and Canada (Central) regions already supported the C6g, M6g and R6g instance families, and support for R6gd has now been added in these regions.
Amazon OpenSearch Service Graviton2 instances support all OpenSearch versions and Elasticsearch versions 7.9 and above. With Amazon OpenSearch Service, Graviton-based instances provide up to 30% better price-performance than comparable x86-based Amazon Elastic Compute Cloud instances. Further savings are available through reserved instance (RI) pricing for these instances.
Amazon EC2 M6g, C6g, R6g, R6gd instances are powered by Amazon Graviton2 processors that are built utilizing 64-bit Arm Neoverse cores and custom silicon designed by Amazon. Amazon Graviton2 processors deliver a major leap in performance and capabilities.
Amazon Graviton2 instances are built on the Amazon Nitro System, a collection of Amazon-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. Amazon OpenSearch Service Graviton2 instances come in sizes large through 16xlarge and offer compute, memory, and storage flexibility.
AWS are excited to announce support for full cluster lifecycle automation through GitOps or Infrastructure as Code (IaC) tools like Terraform for Amazon EKS Anywhere (EKS-A). Previously, the only mechanism for customers to perform full EKS-A cluster lifecycle operations such as provisioning, updates, and deletion was the “eksctl” command-line interface (CLI) tool.
Now, EKS-A supports GitOps and IaC in the EKS-A management cluster so customers can automate EKS-A workload cluster lifecycle operations including cluster creation, update, and deletion.
An EKS-A management cluster is an on-premises Kubernetes cluster that creates and manages a fleet of EKS-A workload clusters where customers run their applications. GitOps and IaC are methods that allow customers to automate the management of IT infrastructure using code.
GitOps and IaC enable development best practices such as using Git as the single-source-of-truth, code reviews for infrastructure change tracking and auditing, and continuous integration/continuous delivery (CI/CD) processes.
Customers now get the benefits of GitOps and IaC such as standardized deployment workflows, improved deployment reliability and visibility, and increased operational efficiency during EKS-A workload cluster deployments.
This week, Amazon Neptune announced support for JupyterLab 3 on Neptune notebooks to boost the productivity of developers and data scientists using the Neptune Workbench. With this update, you can now launch Neptune notebooks in the AWS Management console using JupyterLab to access a modern interactive development environment (IDE) complete with developer tools such as code authoring, debugging, and support for the latest open source JupyterLab extensions.
With the JupyterLab interface, you can use IDE tools such as an integrated debugger to inspect variables and step through breakpoints while building code. For example, you can interactively build, test, and run a graph analytics workload using the Neptune Python integration on JupyterLab.
In addition, using the Language Server extension, you can enable modern IDE functionalities like tab-completion, syntax highlighting, jump to reference, and variable renaming across notebooks and modules, making you much more productive.
This feature is now available in all AWS regions where the Amazon Neptune Workbench is available. By default, new Neptune notebooks will be created using the Amazon Linux 2, JupyterLab 3 environment. Neptune notebooks are hosted by and billed as Amazon SageMaker Notebook Instances. Customers are charged for the notebook instance while the instance is in Ready state.
To get started, create a new Neptune notebook in the AWS Management Console and choose “Open Jupyter” or “Open JupyterLab” once the notebook is in the Ready state. You can walk through the 30+ included tutorial notebooks to learn about sample graph applications and graph query basics.
This week, Amazon Nimble Studio announced persistent storage has moved from public preview to general availability. This means faster workstation startup times, and simplified workstation storage volume management for administrators.
Additionally, Nimble Studio now supports configurable gp3 Amazon Elastic Block Store (EBS) volumes, offering up to a 20% lower price per GB than existing gp2 volumes. Customers can scale volume size, IOPS (input/output operations per second) and throughput, and only pay for the resources they need.
Persistence allows artist workstation sessions to “Stop,” preserving the workstation’s Amazon Elastic Block Store (EBS) volume, and “Start” from the preserved EBS volume, reducing workstation startup times. Customers using persistence only pay for workstations when they’re in use and are responsible for all EBS storage costs when persistence is enabled.
By default, persistent storage is enabled for Nimble Studio. To simplify EBS volume management and save costs, your persistent EBS volume is deleted automatically when you terminate the workstation session.
Amazon Nimble Studio absorbs all storage costs for workstation sessions when persistence is disabled. This non-persistent storage solution allows sessions to boot from the customer’s Amazon Machine Images (AMI), or a Nimble Studio provided base AMI, only.
When logging off a workstation, customers will only have the option to terminate their sessions. This non-persistent storage includes up to a 500GiB gp3 EBS volume per workstation.
This week, AWS are excited to announce the general availability of Amazon Elastic Kubernetes Service (EKS) Anywhere single-node clusters on bare metal. With single-node Amazon EKS Anywhere clusters, the cluster management components run alongside your applications on a single bare metal server.
Single-node clusters enable you to run Amazon EKS Anywhere clusters in resource-constrained development/test and edge environments. The minimum resource requirements for the cluster management components are 2 vCPU, 8GB RAM, and 25GB storage so you can conserve machine resources for your applications.
Single-node clusters bring the benefits of Amazon EKS Anywhere for a consistent, AWS-supported Kubernetes experience across on-premises data centers and edge environments.
Amazon Kinesis Video Streams now offers fully managed capabilities to stream video and audio in real time from Web Real-Time Communication (WebRTC) standards-compliant cameras, IoT devices and browsers to the cloud for secure storage, playback and analytical processing. Customers can now use the enhanced WebRTC SDK and cloud APIs to enable real-time streaming as well as media ingestion to the cloud.
To get started, you can install the Amazon Kinesis Video Streams with WebRTC SDK on any security camera or IoT device with a video sensor, and use the APIs to enable media streaming with sub 1-second latency, as well as ingestion and storage in the cloud.
Once ingested, you can access your data through our easy-to-use APIs. Amazon Kinesis Video Streams enables you to playback video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and Amazon SageMaker.
AWS are excited to announce the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere on Nutanix with AHV Virtualization, which expands the choice of infrastructure options for customers running Kubernetes on-premises. Nutanix joins the list of deployment options for Amazon EKS Anywhere customers, which already includes bare metal servers, VMware vSphere, and Apache CloudStack.
Nutanix provides hyperconverged infrastructure software with enterprise-grade data services for stateful application needs including Volumes, Objects, Files, and Nutanix Database Services (NDB). AWS collaborated with Nutanix to integrate Amazon EKS Anywhere with the Cluster API provider for Nutanix (CAPX) to provide customers with declarative, Kubernetes-style APIs for cluster creation, configuration, and management.
There are no upfront commitments or fees to use Amazon EKS Anywhere. Customers can optionally purchase Amazon EKS Anywhere Enterprise Subscriptions for access to Amazon EKS Anywhere Curated Packages as well as 24/7 support from AWS highly-trained subject matter experts for all bundled tooling.
Amazon Nimble Studio now supports Elastic Block Store (EBS) Snapshots with the new Auto Backup feature, providing a simple and secure data protection solution customized for your Amazon Nimble Studio workstations.
Auto Backup is intended for artists and administrators leveraging persistent storage for their workstations who wish to protect that data in the event of a recovery situation, such as data loss or misconfigured software.
Administrators can configure how many backups they want to keep per workstation, if any, and Auto Backup creates an EBS Snapshot every 4 hours while a workstation is running, and when a workstation session is stopped.
This ensures artists’ persistent local data is securely protected and can easily be retrieved. In the event that a workstation’s persistent EBS volume is corrupted, or reaches an unusable configuration, the artist can restore their workstation to a previous backup from Amazon Nimble Studio’s portal page by selecting the timestamp of the backup they wish to restore to. Administrators can also assist their artists by restoring workstations to backups through the AWS console. Auto Backup enhances persistent EBS volumes, and can be enabled and configured on Launch Profiles.
Auto Backup leverages EBS Snapshots to secure your data, which incurs a cost based on the size of data being backed up. This technology tracks the changes in EBS volume data, so you are only paying for changes to the backup, rather than a complete backup every time. For a full breakdown of the cost of Auto Backup consult Nimble Studio’s Pricing Page.
Amazon Relational Database Service (Amazon RDS) Custom for Oracle, a managed database service for legacy, custom, and packaged applications that require access to the underlying operating system and database environment, is now available in the AWS Regions of Asia Pacific (Seoul) and Asia Pacific (Osaka).
By using Amazon RDS Custom for Oracle, you can benefit from the agility of a managed database service, with features such as automated backups and point-in-time recovery, while also meeting your database applications’ customization requirements.
By allowing more applications to move to a managed database service, you can save time on the undifferentiated heavy lifting of database management and focus on higher level tasks.
AWS Migration Hub Orchestrator now supports importing your on-premises virtual machine (VM) images to AWS, with a console-based experience for generating an Amazon Machine Image (AMI) from a VM image that you have built to meet your IT security, configuration management, and compliance requirements.
With this new feature in AWS Migration Hub Orchestrator, you can automate manual tasks such as validating prerequisites to save time and effort, specify the correct license type for bring-your-own-license (BYOL) use, and easily track the status of the VM import process in the console.
AWS Migration Hub Orchestrator helps you simplify and accelerate the migrations of applications to AWS. Large migration projects involve selecting migration tools, step-by-step planning, and tracking the migration process across different tools and teams.
Migration Hub Orchestrator provides predefined and customizable workflow templates that offer a prescribed set of migration tasks, migration tools, and automation opportunities. With Orchestrator, you can customize the templates, automate the migration of your applications, and track your progress in one place.
On December 15, 2022, AWS Migration Hub Refactor Spaces launched support for AWS Lambda aliases as Refactor Spaces Service endpoints. You can now use Lambda aliases with Refactor Spaces to route traffic to specific versions of a Lambda function and configure provisioned concurrency to mitigate Lambda cold starts.
AWS Migration Hub Refactor Spaces is the starting point for incremental application refactoring to microservices in AWS. Refactor Spaces automates the creation of application refactor environments including all of the infrastructure, multi-account networking, and routing to incrementally modernize.
Use Refactor Spaces to help reduce risk when evolving applications into microservices or extending existing applications with new features written in microservices.
Amazon MQ now provides support for RabbitMQ version 3.9.24, which includes several fixes to the previously supported version, RabbitMQ 3.9.20.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. You can reduce your operational burden by using Amazon MQ to manage the provisioning, setup, and maintenance of message brokers. Amazon MQ connects to your current applications with industry-standard APIs and protocols to help you easily migrate to AWS without having to rewrite code.
If you are running RabbitMQ 3.9.20 or earlier, we encourage you to upgrade to RabbitMQ 3.9.24. This can be accomplished with just a few clicks in the AWS Management Console. If your broker has automatic minor version upgrade enabled, AWS will automatically upgrade the broker to version 3.9.24 during a future maintenance window.
To learn more about upgrading, please see: Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide.
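For scripted upgrades, the engine version can also be set through the UpdateBroker API. A minimal sketch of the request payload follows; the broker ID is a placeholder, and in practice the dict would be passed to boto3's mq.update_broker():

```python
# Sketch of an UpdateBroker request payload for Amazon MQ, assuming a
# RabbitMQ broker currently on 3.9.20. The broker ID is a placeholder.
update_broker_request = {
    "BrokerId": "b-1234a5b6-78cd-901e-2fgh-3i45j6k178l9",  # placeholder ID
    "EngineVersion": "3.9.24",  # target version with the latest fixes
    # Opt in to automatic minor version upgrades during maintenance windows.
    "AutoMinorVersionUpgrade": True,
}
```

The version change itself is applied during the broker's next maintenance window unless you request an immediate reboot.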
Amazon RDS Custom for SQL Server is a managed database service that allows administrative access to the operating system. Starting this week, you can deploy RDS Custom for SQL Server by using AWS CloudFormation templates. AWS CloudFormation simplifies provisioning and management of resources on AWS.
You can create templates for the RDS Custom for SQL Server service as per your required architecture, and have AWS CloudFormation use those templates for quick and reliable provisioning of your database instances. With CloudFormation templates, RDS Custom for SQL Server instance lifecycles can be managed in a repeatable, predictable, and safe way, while allowing for automatic rollbacks, automated state management, and management of instances across accounts and regions.
To get started with RDS Custom for SQL Server, read the setup guide in AWS Documentation. To get started on AWS CloudFormation, read the user guide in AWS Documentation. Amazon RDS Custom for SQL Server is generally available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Stockholm), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo) and South America (São Paulo).
Amazon Lookout for Equipment analyzes equipment sensor data to train and build a machine learning model for your equipment – with no ML expertise required. Lookout for Equipment uses your unique ML model(s) and in real-time helps accurately identify early warning signs that could lead to machine failures. This helps you detect equipment abnormalities with speed and precision, quickly diagnose issues, and take action to reduce expensive downtime.
We are excited to announce we now allow customers to provide feedback about events detected by Lookout for Equipment via labels and label groups. Developers, through the API, can create, delete, describe, list and update labels and label groups associated with specific events. This allows end users, through labels and label groups, to provide a list of acceptable fault codes, start and end time of event(s) of interest, target equipment, ratings, and notes.
Label groups can then be used directly and programmatically when training a model, instead of providing a CSV file located on S3. The information provided by these labels is still used to mark periods of time as abnormal, so models discard that data and won’t train against it.
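The label-group workflow above can be sketched with the request payloads below. The group name, fault codes, and equipment identifier are hypothetical; in practice these dicts would be passed to boto3's lookoutequipment.create_label_group() and create_label():

```python
from datetime import datetime, timezone

# Sketch of CreateLabelGroup / CreateLabel request payloads for Amazon
# Lookout for Equipment. Names and fault codes below are hypothetical.
label_group_request = {
    "LabelGroupName": "pump-3-events",             # hypothetical group name
    "FaultCodes": ["BEARING_WEAR", "SEAL_LEAK"],   # acceptable fault codes
}

label_request = {
    "LabelGroupName": "pump-3-events",
    "StartTime": datetime(2022, 12, 1, 8, 0, tzinfo=timezone.utc),
    "EndTime": datetime(2022, 12, 1, 14, 0, tzinfo=timezone.utc),
    "Rating": "ANOMALY",              # mark this window as a true anomaly
    "FaultCode": "BEARING_WEAR",
    "Equipment": "pump-3",            # target equipment
    "Notes": "Confirmed bearing vibration by on-site inspection.",
}
```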
You can now use Service Quotas in AWS Middle East (UAE) Region to view and manage your service quotas at scale as your AWS workloads grow.
Service Quotas enables you to manage your AWS service quotas from one central location. In addition to viewing service quota values, you can easily request and track quota increases. For supported services, you can proactively manage your quotas by configuring Amazon CloudWatch alarms that monitor usage and alert you to approaching quotas.
You can access Service Quotas through the AWS console, and via AWS APIs and AWS Command Line Interface (AWS CLI). Service Quotas is available at no additional charge.
You can now use AWS Resource Access Manager (AWS RAM) in the AWS Europe (Spain) Region.
AWS RAM helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), or with AWS Identity and Access Management (IAM) roles and users for supported resource types.
Amazon SageMaker Automatic Model Tuning now gives you the option to set the seed to generate random hyperparameters for more reproducible tuning results. This enables use cases where you need to be able to reproduce your tuning job results, such as for compliance or regulatory reasons.
SageMaker Automatic Model Tuning allows you to find the most accurate version of your machine learning model by searching for the optimal set of hyperparameter configurations. Previously, running the same tuning job more than once could lead to different recommended hyperparameter configurations due to the stochastic nature of the search strategies.
This meant that you would not always be able to reproduce your previous tuning results even when running a tuning job on the same algorithm, dataset and with the same configurations.
Starting this week, you can specify an integer as a random seed for hyperparameter tuning to generate hyperparameters. When running the same tuning job again, you can use the same seed to produce hyperparameter configurations that are more consistent with your previous results.
For the Random and Hyperband strategies, using the same random seed can give you up to 100% reproducibility of the previous hyperparameter configuration for the same tuning job. For the Bayesian strategy, using the same random seed will significantly improve reproducibility for the same tuning job.
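Concretely, the seed is a single integer in the tuning-job configuration. A minimal sketch is below; the objective metric and parameter range are illustrative, and in practice the dict would be passed as HyperParameterTuningJobConfig to boto3's sagemaker.create_hyper_parameter_tuning_job():

```python
# Sketch of a tuning-job configuration using the new RandomSeed field.
# Metric and hyperparameter names are illustrative placeholders.
tuning_job_config = {
    "Strategy": "Random",
    "RandomSeed": 42,  # fixed seed -> reproducible hyperparameter draws
    "HyperParameterTuningJobObjective": {
        "Type": "Maximize",
        "MetricName": "validation:accuracy",
    },
    "ResourceLimits": {
        "MaxNumberOfTrainingJobs": 20,
        "MaxParallelTrainingJobs": 2,
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            {"Name": "learning_rate", "MinValue": "0.001", "MaxValue": "0.1"}
        ]
    },
}
```

Re-running the job with the same RandomSeed value reproduces the hyperparameter draws for the Random and Hyperband strategies.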
This week, Amazon Elastic Container Service (Amazon ECS) announced a new feature that enables customers to add automated safeguards for Amazon ECS service rolling updates. You can now monitor and automatically react to changes during an Amazon ECS rolling update by using Amazon CloudWatch alarms. This enables you to more easily automate discovery and remediation for failed deployments and minimize the impact of a bad change.
Amazon ECS customers use the deployment circuit breaker to monitor task launch and health check failures which indicate that the deployment will not reach steady state. In some cases, even if containers start running successfully, the deployment can introduce regressions which get surfaced in the form of degradation in infrastructure metrics (e.g. CPU utilization) or service metrics (e.g. response latency).
To monitor and remediate such deployments, you can now create Amazon CloudWatch alarms which track the metrics most relevant to your application and configure Amazon ECS to monitor these for your deployment. If a metric breach occurs during the deployment, Amazon ECS is designed to stop the deployment and roll it back to the previous stable version.
When you use CloudWatch alarms to monitor your deployment, Amazon ECS is designed to add a ‘bake time’ to the deployment. The bake time is a period of time after the new service version has reached steady state, during which Amazon ECS continues to monitor the alarm associated with the deployment.
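Attaching alarms to a deployment is a service-level setting. The sketch below shows the shape of the request; cluster, service, and alarm names are placeholders, and in practice the dict would be passed to boto3's ecs.update_service():

```python
# Sketch of an ECS service update that attaches CloudWatch alarms to the
# rolling deployment. Cluster, service, and alarm names are placeholders.
update_service_request = {
    "cluster": "prod",                  # placeholder cluster name
    "service": "web",                   # placeholder service name
    "deploymentConfiguration": {
        "alarms": {
            "alarmNames": ["web-p99-latency-high"],  # alarm(s) to watch
            "enable": True,      # monitor these alarms during deployment
            "rollback": True,    # roll back automatically if one fires
        },
        # The circuit breaker still covers task launch/health-check failures.
        "deploymentCircuitBreaker": {"enable": True, "rollback": True},
    },
}
```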
Amazon EC2 announces pagination for the EC2 DescribeImages API. It allows you to describe your images over a number of API requests instead of in a single one. You can specify a page size when calling the API, which is used as the upper bound on the number of resources returned in a single request. A pagination token is returned in the response, which you can then include in your next API request to fetch the next page of images.
Pagination helps solve problems related to time-outs or response size when you have a large number of images in your account or shared with you. It allows you to fetch all your images across a single paginated workflow, helping to provide consistent and reliable response times.
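The NextToken loop described above can be sketched as follows. The stub client stands in for boto3's EC2 client (describe_images with MaxResults/NextToken), so the loop itself runs anywhere:

```python
# Sketch of the NextToken pagination loop for DescribeImages. The stub
# client below mimics the paginated API so the loop is runnable offline.
def describe_all_images(client, page_size=100):
    """Collect every image across paginated DescribeImages calls."""
    images, token = [], None
    while True:
        kwargs = {"MaxResults": page_size}
        if token:
            kwargs["NextToken"] = token
        page = client.describe_images(**kwargs)
        images.extend(page["Images"])
        token = page.get("NextToken")
        if not token:          # no token means this was the last page
            return images

class _StubEc2:
    """Returns two pages of fake images, mimicking the paginated API."""
    _pages = [
        {"Images": [{"ImageId": "ami-1"}, {"ImageId": "ami-2"}], "NextToken": "p2"},
        {"Images": [{"ImageId": "ami-3"}]},
    ]
    def describe_images(self, **kwargs):
        return self._pages[1] if kwargs.get("NextToken") == "p2" else self._pages[0]

all_images = describe_all_images(_StubEc2(), page_size=2)  # three images total
```

With the real boto3 client, the same loop is also available ready-made via client.get_paginator("describe_images").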
This week, AWS Systems Manager announced the launch of Quick Setup for Resource Scheduler, based on AWS Solutions’ Instance Scheduler. Quick Setup for Resource Scheduler provides a user interface in the console which enables you to easily configure schedules for your instances to optimize compute capacity and save costs across accounts and Regions in your Organization.
In this way, you can ensure that EC2 instances run only when they are really needed. For example, you can schedule targeted instances to stop running outside of weekly business hours, during weekends, or on bank holidays.
This week, AWS were excited to announce the launch of a new Security Hub widget on AWS Console Home, providing a summary of your security posture generated by the security checks that you have enabled in your account using AWS Security Hub.
With the Security Hub widget you can access a summary of your security posture, as captured by AWS Security Hub, and view key insights such as your security score, how many security controls have failed, and how many critical findings you have in your accounts.
Most security checks informing the widget are described in the AWS Foundational Security Best Practices standard, a curated set of security best practice checks vetted by AWS security experts that either run whenever there are changes to the associated resources or on a set periodic schedule.
The remaining checks are informed by industry frameworks, such as PCI DSS and CIS AWS Foundations Benchmark. You can access the full capability of Security Hub from the widget with a single click to view more detail on findings and review or make changes to your enabled security controls.
This week, AWS are excited to announce the launch of three new AWS Systems Manager widgets on Console Home. Customers can now view their operational status as soon as they sign in and take necessary action to remediate operational issues with one-click access to AWS Systems Manager features.
You can access the new Systems Manager widgets on Console Home, which include Managed instances, Patch compliance and Ops summary. The Managed instances widget provides insights into the visibility and control you have over your AWS resources; you can view the number and percentage of EC2 instances that you are managing using AWS Systems Manager and identify EC2 instances that are not managed by Systems Manager.
You can use the Patch compliance widget to view the total number of unpatched instances by severity and trends in patch compliance over time. The Ops summary widget provides a summary of your operational issues by severity. You can use the expanded view of the Ops summary widget to see status by category, including availability, cost, performance, security and recovery.
Starting today, AWS Resource Access Manager (AWS RAM) is available for use in the AWS Europe (Zurich) Region.
AWS RAM helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs), or with AWS Identity and Access Management (IAM) roles and users for supported resource types.
AWS are excited to announce the general availability of Fortuna, an open-source library for uncertainty quantification of ML models. Fortuna provides calibration methods, such as conformal prediction, that can be applied to any trained neural network to obtain calibrated uncertainty estimates. The library further supports a number of Bayesian inference methods that can be applied to deep neural networks written in Flax.
Accurate estimation of predictive uncertainty is crucial for applications that involve critical decisions. Uncertainty allows us to evaluate the reliability of model predictions, defer to human decision makers, or determine if a model can be safely deployed. The library makes it easy to run benchmarks and will enable practitioners to build robust and reliable AI solutions by taking advantage of advanced uncertainty quantification techniques.
Amazon Relational Database Service (Amazon RDS) on AWS Outposts now supports Read Replicas. Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB) instances. They provide the option to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
You can create one or more replicas of a given source DB instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.
Amazon RDS on AWS Outposts now also allows you to create Read Replicas of your DB instances on the same Outpost or across Outposts, providing both scale out read performance as well as part of a disaster recovery solution.
Additionally, you may also create Amazon RDS Read Replicas on Amazon RDS Multi-AZ instance deployments on Outposts, which can provide additional disaster recovery capabilities to your highly-available Amazon RDS on AWS Outposts instances.
Amazon FinSpace now provides customers with additional user activity monitoring options through logging of web application and data access events to AWS CloudTrail. Amazon FinSpace is a managed analytic data hub for capital markets customers that enables analysts and data engineers to access data from multiple sources and transform it using FinSpace’s managed Apache Spark Engine with Capital Markets Time Series Analytics Library.
When a user takes an action in the FinSpace web application or uses data stored in their FinSpace Environment, an event is published to their FinSpace environment’s audit repository. It can then be viewed using the Audit Reports viewer hosted in the FinSpace web application. This provides FinSpace administrators a convenient way to quickly view user activity and data access.
Additionally, some customers want to capture these access events for storage and analysis in their organization’s Security Information and Event Management (SIEM) tools, often for regulatory compliance reporting. These tools allow organizations to efficiently collect and analyze security event data in one place, giving them the ability to investigate past suspicious activity incidents or detect new ones.
For AWS customers, a common way to ingest data into SIEM tools is through CloudTrail.
AWS IAM Identity Center (successor to AWS Single Sign-On) is now available in the Africa (Cape Town), Asia Pacific (Jakarta), and US West (N. California) regions. IAM Identity Center helps you securely create or connect your workforce identities and centrally manage their access to AWS accounts and cloud applications across your AWS organization.
You can create user identities directly in IAM Identity Center or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With IAM Identity Center, you get a unified administration experience to define, customize, and assign fine-grained access. Your workforce users get a portal for access to all of their assigned AWS accounts and cloud applications.
AWS Glue crawlers now have enhanced support for Linux Foundation Delta Lake tables, making it easier to extract meaningful insights using analytics services such as Amazon Athena, Amazon EMR, and AWS Glue. This feature enables analytics services to scan Delta Lake tables without requiring Glue crawlers to create manifest files. Newly cataloged data is quickly made available for analysis using your preferred analytics and machine learning (ML) tools.
Previously, Glue crawlers supported Delta Lake tables by creating manifest files in Amazon S3 for different analytics services to consume. Glue crawlers needed to generate manifest files on a periodic basis to include newer transactions in the original Delta Lake tables, resulting in longer processing times.
With this week’s launch, you can create and schedule a Glue crawler with the option to create native Delta Lake tables, and provide a path to Amazon S3 where the Delta Lake tables are located. On each run, the crawler inspects and catalogs schema and partition information for Delta Lake tables in the Glue Data Catalog, including updates or deletes.
AWS Local Zones are now available in two new metro areas—Bangkok and Kolkata. You can now use these Local Zones to deliver applications that require single-digit millisecond latency or local data processing.
At the beginning of this year, AWS announced plans to launch AWS Local Zones in over 30 metro areas across 27 countries outside of the US. AWS Local Zones are also generally available in 8 metro areas outside of the US (Buenos Aires, Copenhagen, Delhi, Helsinki, Hamburg, Muscat, Taipei, and Warsaw) and 16 metro areas in the US (Atlanta, Boston, Chicago, Dallas, Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, Minneapolis, New York City, Philadelphia, Phoenix, Portland, and Seattle).
AWS Elemental MediaConvert is now available in the Africa (Cape Town) region. You may now configure and submit MediaConvert jobs using the console or API endpoints within the Africa (Cape Town) region.
With AWS Elemental MediaConvert, audio and video content providers with any size library can easily and reliably transcode on-demand content for broadcast and multiscreen delivery. MediaConvert functions independently or as part of AWS Media Services, a family of services that form the foundation of cloud-based workflows and offer the capabilities needed to transport, transcode, package, and deliver video.
With the general availability of Amazon Elastic Block Store (EBS) Snapshots Archive in AWS Middle East (UAE), Europe (Spain), Europe (Zurich), and Asia Pacific (Hyderabad) Regions, customers in these regions can save up to 75% on storage costs for Amazon EBS Snapshots that they rarely access and intend to retain for more than 90 days.
Amazon EBS Snapshots are incremental in nature, storing only the changes since the last snapshot. This makes them cost-effective for daily and weekly backups that need to be accessed frequently. If you have snapshots that you access every few months or years, and would like to retain them long-term for legal or compliance reasons, you can use Amazon EBS Snapshots Archive to store full, point-in-time snapshots at a lower cost than what you would incur if stored in the standard tier.
Snapshots in the Amazon EBS Snapshots Archive tier have a minimum retention period of 90 days. When you archive a snapshot, a full snapshot archive is created that contains all the data needed to create your Amazon EBS Volume. To create a volume from the snapshot archive, you can restore the snapshot archive to the standard tier, and then create an Amazon EBS volume from the snapshot in the same way you do today.
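The archive-then-restore flow described above can be sketched with the request payloads below. The snapshot ID is a placeholder; in practice these dicts would be passed to boto3's ec2.modify_snapshot_tier() and ec2.restore_snapshot_tier():

```python
# Sketch of the archive and restore requests for EBS Snapshots Archive.
# The snapshot ID is a placeholder.
archive_request = {
    "SnapshotId": "snap-0123456789abcdef0",  # placeholder snapshot ID
    "StorageTier": "archive",  # move the full snapshot to the archive tier
}

# Restoring temporarily brings the snapshot back to the standard tier so a
# volume can be created from it; the archive tier itself has a 90-day
# minimum retention period.
restore_request = {
    "SnapshotId": "snap-0123456789abcdef0",
    "TemporaryRestoreDays": 7,  # keep the restored copy for a week
}
```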
You can now use AWS PrivateLink to privately access the Amazon Elastic Kubernetes Service (Amazon EKS) management APIs from your Amazon Virtual Private Cloud (VPC). AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks.
You can now manage your Amazon EKS clusters in your VPC using AWS PrivateLink to help meet your organization’s security and compliance requirements. You can also access the VPC endpoint from on-premises environments or from other VPCs using AWS VPN, AWS Direct Connect, or VPC Peering. Creating VPC endpoints incurs charges; see the AWS PrivateLink pricing page for more information.
Amazon EKS is a managed Kubernetes service that makes it easier for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or worker nodes. Amazon EKS is certified Kubernetes conformant, so you can migrate standard Kubernetes applications to EKS without needing to refactor your code.
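Creating the interface endpoint for private EKS API access can be sketched with a request payload like the following. The VPC, subnet, and security group IDs and the region are placeholders; in practice the dict would be passed to boto3's ec2.create_vpc_endpoint():

```python
# Sketch of a CreateVpcEndpoint request for private access to the EKS
# management API. All resource IDs below are placeholders.
endpoint_request = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc1234",                       # placeholder VPC ID
    "ServiceName": "com.amazonaws.us-east-1.eks",  # EKS endpoint service
    "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
    "SecurityGroupIds": ["sg-0abc1234"],
    "PrivateDnsEnabled": True,  # resolve the public EKS DNS name privately
}
```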
Amazon Managed Service for Prometheus now supports VPC endpoint policies configured using Amazon Virtual Private Cloud (Amazon VPC). Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that monitors and alarms on operational metrics at scale.
It does this without you having to manage the underlying infrastructure required to scale and secure the ingestion, storage, alerting, and querying of metrics. With this feature, customers can now configure VPC endpoint policies in the Amazon VPC console, via the AWS Command Line Interface or SDK, or through AWS CloudFormation. These policies can restrict access to Amazon Managed Service for Prometheus endpoints to particular AWS accounts, IAM users, and IAM roles.
VPC endpoint policies you create can be applied to Amazon Managed Service for Prometheus workspaces in every region where Amazon Managed Service for Prometheus is generally available. Get started by checking out the user guide.
AWS IoT Device Client has released version 1.8, which provides Docker images, PowerPC architecture support, and support for the new “AWS-Run-Command” job template.
AWS IoT Device Client Docker images contain the latest release of the Device Client software for x86_64, aarch64, and armv7 architectures running Ubuntu, Amazon Linux, or Red Hat UBI8, distributed through the Elastic Container Registry (ECR). With this, customers can get started with the Device Client quickly. Customers can now use the Device Client on PowerPC64 and PowerPC64le architectures in addition to x86_64, aarch64, MIPS32, and ARMv7.
In addition, AWS IoT Device Client now supports the newly added AWS Managed Job Template “AWS-Run-Command”. AWS Managed Job Templates provide predefined configurations for frequently performed remote actions on IoT devices. “AWS-Run-Command” is the 8th AWS Managed Job template, and enables customers to instruct IoT devices to run shell commands using the AWS IoT Device Management Jobs feature.
The CRT-based S3 client is intended for users who want maximized throughput when transferring objects to and from Amazon S3, while the S3 Transfer Manager is a high-level transfer utility built on top of the CRT-based S3 client that provides additional functionality such as file directory transfers and progress monitoring.
The CRT-based S3 client allows you to transfer objects with enhanced performance and reliability by leveraging Amazon S3 multipart uploads and downloads. Using S3 Transfer Manager’s simple API, you can easily upload a local directory to Amazon S3 or download an entire S3 bucket to a local folder.
In addition, it enables you to manage an ongoing transfer by monitoring its progress or pausing it and resuming it at a later time.
This week AWS announced the general availability of Renate, an open-source Python library for automatic model re-training. The library implements continual learning algorithms to train deep neural networks incrementally when new data becomes available.
Applications of machine learning require updating models as new batches of data become available. Repeatedly re-training deep neural network models from scratch is costly, and fine-tuning them with only the new data leads to a phenomenon called “catastrophic forgetting”.
This means that the model will perform well on the most recent data, but its performance will degrade on older data. Renate provides algorithms that alleviate catastrophic forgetting and help automate the re-training process.
With Renate, users can run small-scale continual learning experiments on their local machine or run large continual learning jobs using Amazon SageMaker. Renate also supports state-of-the-art hyperparameter tuning out of the box, thanks to its integration with Syne Tune.
Anthos Clusters on Bare Metal
Anthos Clusters on AWS
A new vulnerability (CVE-2022-2602) has been discovered in the io_uring subsystem in the Linux kernel that can allow an attacker to potentially execute arbitrary code. For more information, see the GCP-2022-025 security bulletin.
Anthos Clusters on VMware
A new vulnerability (CVE-2022-2602) has been discovered in the io_uring subsystem in the Linux kernel that can allow an attacker to potentially execute arbitrary code.
For more information, see the GCP-2022-025 security bulletin.
Anthos clusters on VMware 1.14.0-gke.430 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.0-gke.430 runs on Kubernetes 1.25.5-gke.100.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.
A privateRegistry field has been added to the Secrets configuration file.
A privateRegistry section has been added to the user cluster configuration file. You can use different private registry credentials for the user cluster and admin cluster. You can also use a different private registry address for user clusters with Controlplane V2 enabled.
Private registry credentials can now be updated with the gkectl update credentials command. For more information, see Update private registry credentials.
cluster-health-controller is now integrated with health-check-exporter to emit metrics based on the periodic health check results, making it easy to monitor and detect cluster health problems.
You can now set maximumConcurrentNodePoolUpdate in the user cluster configuration file to 1. This configures the maximum number of additional nodes spawned during cluster upgrade or update, which can potentially avoid two issues: resource quota limits and PDB deadlocks. For more information, see Configure node pool update policy.
Admin and user cluster upgrade preflight checks now verify that node.Status.VolumesAttached is consistent with the actual PV/disk attachment states.
The --service-account-key-file flag is now optional. When the cluster is not registered correctly, and no additional service account key file is passed in through the flag, the gkectl diagnose snapshot command will use the GOOGLE_APPLICATION_CREDENTIALS environment variable to authenticate the request.
The loadbalancer.kind field is now prepopulated.
Anthos clusters on VMware 1.12.4-gke.42 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.4-gke.42 runs on Kubernetes 1.23.13-gke.1700.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
App Engine standard environment Go / .Net / Java / Node.js / PHP / Python / Ruby
The option to update a Serverless VPC Access connector is now available in preview. This feature allows you to edit the machine (instance) type, as well as the minimum and maximum number of instances.
Documentation has been updated to include new samples. The following samples are available in Go, Node.js, and Python:
The following sample is available in Go and Python:
For more information, see All Batch code samples.
(Available without upgrading) Fixed an issue where upgrading a Private IP environment with VPC peerings to Cloud Composer 2.0.31 and later versions resulted in intermittent issues with database connections.
Cloud Composer 1.20.2 and 2.1.2 are versions with an extended upgrade timeline.
The new Cloud Spanner Kafka connector publishes change streams records to Kafka for application integration and event triggering. For more information, see Build change streams connections to Kafka.
You can now use the ALTER INDEX statement to add columns to an index or drop non-key columns. For more information, see Alter an index.
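The new DDL can be sketched as follows; the SingersByName index and MarketingBudget column are hypothetical, and the statements would typically be submitted via the Cloud Spanner client library's update_ddl() batch:

```python
# Sketch of the new ALTER INDEX DDL for Cloud Spanner. The index and
# column names below are hypothetical examples.
ddl_statements = [
    # Store an extra non-key column in the index to serve covered reads.
    "ALTER INDEX SingersByName ADD STORED COLUMN MarketingBudget",
    # Drop a stored (non-key) column that is no longer needed.
    "ALTER INDEX SingersByName DROP STORED COLUMN MarketingBudget",
]
```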
Cloud SQL for MySQL
Cloud SQL for MySQL now supports using the lower_case_table_names flag for MySQL 8.0. For more information, see Configure database flags.
Generally available: N2 VMs with 64 or more vCPUs now support up to 4 GB/s (read) and 3 GB/s (write) throughput per instance with Extreme persistent disks (pd-extreme). Previously, the maximum was 2.2 GB/s per instance.
New sub-minor versions of Dataproc images:
Google are launching a public preview version of the Purchase Order (PO) processor, pretrained-purchase-order-v1.1-2022-06-17, with the following new features:
The Document AI OCR Processor has the following new features:
The OCR Processor now supports extracting embedded text from digital PDFs in public preview. A fallback to the optical OCR model is automatically triggered to extract text in regions where the PDF being processed contains non-digital text. To opt into this feature, set process_options.ocr_config.enable_native_pdf_parsing=true in your API request to the OCR Processor.
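A sketch of a process request with this option enabled is below. The project and processor values are placeholders, and the dict mirrors the camelCase REST ProcessRequest body (an assumption; the client libraries use the snake_case field names shown above):

```python
# Sketch of a Document AI process request that opts into native PDF text
# extraction. Project/processor names and the content value are placeholders.
process_request = {
    "name": "projects/my-project/locations/us/processors/abc123",  # placeholder
    "rawDocument": {
        "content": "<base64-encoded PDF bytes>",  # placeholder payload
        "mimeType": "application/pdf",
    },
    "processOptions": {
        "ocrConfig": {
            # Use embedded digital text where present; fall back to optical
            # OCR only for regions with non-digital text.
            "enableNativePdfParsing": True,
        }
    },
}
```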
Added advanced versioning support to Document AI OCR, which enables OCR users to pin to a historical model version. When enabled, OCR outputs are guaranteed to be consistent and virtually frozen, with zero behavioral drift. To enable advanced versioning, select the release candidate version pretrained-ocr-v1.2-2022-11-10 in your Document AI console.
Support for the australia-southeast2 (Melbourne) region.
Dual-stack clusters in GKE are now generally available. Dual-stack networking is supported on both Standard and Autopilot clusters. To learn more, see Use an IPv4/IPv6 dual-stack network to create a dual-stack cluster.
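As a minimal sketch (cluster name and region are hypothetical), a dual-stack Standard cluster can be created by setting the stack type on a VPC-native cluster:

```shell
# Hypothetical cluster name; dual-stack requires a VPC-native (IP alias) cluster
gcloud container clusters create dual-stack-cluster \
  --stack-type=ipv4-ipv6 \
  --ipv6-access-type=external \
  --enable-ip-alias \
  --region=us-central1
```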
A new vulnerability (CVE-2022-2602) has been discovered in the io_uring subsystem in the Linux kernel that can allow an attacker to potentially execute arbitrary code. For more information, see the GCP-2022-025 security bulletin.
You can now enable NCCL Fast Socket on your multi-GPU workloads. NCCL Fast Socket is a transport layer plugin designed to improve NVIDIA Collective Communication Library (NCCL) performance on Google Cloud. To enable NCCL Fast Socket, you must be using a GKE Standard cluster with control plane version 1.25.2-gke.1700 or later. For more information, see Improve workload efficiency using NCCL Fast Socket.
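As an illustrative sketch only (cluster, pool, and accelerator choices are hypothetical), NCCL Fast Socket is enabled per node pool, and it relies on gVNIC being enabled on the same pool:

```shell
# Hypothetical cluster/pool names; Fast Socket requires gVNIC on the node pool
gcloud container node-pools create gpu-pool \
  --cluster=my-cluster \
  --machine-type=a2-highgpu-2g \
  --accelerator=type=nvidia-tesla-a100,count=2 \
  --enable-fast-socket \
  --enable-gvnic
```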
CVE-2022-37434, CVE-2022-40674, CVE-2022-1586, and CVE-2022-1587 have been patched in the PD CSI driver in GKE versions 1.22, 1.23, and 1.24 for newly created clusters. CVE-2022-37434, CVE-2021-3999, CVE-2022-40674, CVE-2022-1586, and CVE-2022-1587 have been patched in the PD CSI driver in 1.25 for newly created clusters.
Google Cloud VMware Engine
VMware Engine nodes are now available in the following additional region:
Security Command Center
A userName attribute was added to the Finding object of the Security Command Center API.
The value of the userName attribute depends on the type of finding and is likely not an IAM principal. For example, it can be a system username if the finding is related to a virtual machine, or it can be an application login username.
For more information, see the Security Command Center API documentation for the Finding object.
Vertex AI TensorFlow Profiler
Vertex AI TensorFlow Profiler is now generally available (GA). You can use TensorFlow Profiler to debug model training performance for your custom training jobs.
For details, see Profile model training performance using Profiler.
Vertex AI Matching Engine
You can now override the default data retention limit of 4000 days for the online store and the offline store in Vertex AI Feature Store.
Virtual Private Cloud
You can use geo-location objects in firewall policy rules to filter external IPv4 and external IPv6 traffic based on specific geographic locations or regions.
You can use Threat Intelligence for firewall policy rules to secure your network by allowing or blocking traffic based on threat intelligence data.
You can use address groups to combine multiple IP addresses and IP ranges into a single named logical unit. You can then use this unit across multiple rules in the same or different firewall policies.
You can use fully qualified domain name (FQDN) objects in firewall policy rules to filter incoming traffic from, or outgoing traffic to, specific domain names.
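As a hedged sketch of how these objects combine (all names, CIDRs, and the policy are hypothetical), you might create an address group and then reference it alongside a geo-location match in a single network firewall policy rule:

```shell
# Hypothetical names throughout. Create an address group, then reference it
# together with geo-location codes in a global network firewall policy rule.
gcloud network-security address-groups create blocked-sources \
  --type=IPV4 \
  --capacity=100 \
  --items=203.0.113.0/24,198.51.100.7 \
  --location=global

gcloud compute network-firewall-policies rules create 1000 \
  --firewall-policy=example-policy \
  --global-firewall-policy \
  --direction=INGRESS \
  --action=deny \
  --layer4-configs=all \
  --src-region-codes=CN,RU \
  --src-address-groups=projects/my-project/locations/global/addressGroups/blocked-sources
```

The address group can then be reused across other rules or policies without repeating the IP ranges.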
Microsoft Azure Releases And Updates
Nothing to see here :-(
Have you tried Hava's automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag and drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):