Here's the weekly cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday March 31st 2023.
This week we released Azure self-hosted. If you are using Azure and would like to host Hava on your own infrastructure, please get in touch.
All the latest Hava news can be found in our LinkedIn newsletter.
Of course we'd love to keep in touch at the other usual places, so come and say hello.
AWS Updates and Releases
AWS Elastic Disaster Recovery (AWS DRS) now supports automated replication of new disks added to your source servers to help you maintain readiness of your AWS recovery site. Elastic Disaster Recovery helps minimize downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
Following this launch, you will no longer need to install the AWS Replication Agent when new disks are added to your source environment. The service initiates data replication to the staging area subnet in your AWS account. Automating replication helps you to maintain continuous data replication, save time and resources, and reduce the risk of data loss in the event of a disruption.
In addition, Elastic Disaster Recovery now supports Oracle ASM Filter Driver, allowing you to replicate source servers with databases running on Oracle ASM.
The new capabilities are available in all of the AWS Regions where Elastic Disaster Recovery is supported. See the AWS Regional Services List for the most up-to-date availability information. All new capabilities are available for the latest Agent version.
Amazon Connect now supports the use of JSON structures such as nested arrays as attributes in flows, enabling you to build more personalized and automated customer experiences.
For example, you can now build self-service experiences to help customers track their order status based on multiple purchases in the last month rather than the single most recent purchase. This new capability is supported in both the “Invoke AWS Lambda Function” and “Show View” flow blocks.
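As a rough sketch of what this enables, a Lambda function invoked from the "Invoke AWS Lambda Function" flow block can now return nested JSON that the flow references as attributes. The handler and order data below are purely illustrative, not taken from the announcement:

```python
# Hypothetical Lambda handler for the "Invoke AWS Lambda Function" flow block.
# Amazon Connect can now consume nested structures like this as flow
# attributes; the customer and order data are placeholders.
def lambda_handler(event, context):
    # event carries the contact details sent by the Amazon Connect flow
    return {
        "customer": {"name": "Jane Doe", "tier": "gold"},
        "orders": [
            {"orderId": "A-1001", "status": "shipped"},
            {"orderId": "A-1002", "status": "processing"},
        ],
    }
```

A flow could then branch on, say, the status of each order in the array rather than a single flattened attribute.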
AWS Network Firewall now supports Transport Layer Security (TLS) inspection for ingress VPC traffic. This new feature enables customers to decrypt, inspect, and re-encrypt TLS traffic without having to deploy and manage any additional network security infrastructure.
AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. Starting today, you can use AWS Network Firewall to decrypt TLS sessions and inspect inbound VPC traffic originating from the internet, another VPC, or another subnet. Encryption and decryption happen on the same firewall instance natively, so traffic doesn’t cross any network boundaries.
Ingress TLS inspection on AWS Network Firewall is available in the Asia Pacific (Sydney) Region and Europe (Ireland) Region.
AWS announced the availability of next-generation General Purpose gp3 storage volumes for Amazon Relational Database Service (Amazon RDS) Custom for Oracle and Amazon RDS Custom for SQL Server. Amazon RDS gp3 volumes give you the flexibility to provision storage performance independently of storage capacity, paying only for the resources you need.
You can choose gp3 storage type for your Amazon RDS Custom database instance with the ability to select from 40 GiB to 64 TiB (20 GiB to 16 TiB for Amazon RDS Custom SQL Server) of storage capacity, with a baseline storage performance of 12,000 IOPS (3,000 IOPS for Amazon RDS Custom SQL Server) included with the price of storage. For workloads that need even more performance, you can scale up to 64,000 IOPS (16,000 IOPS for Amazon RDS Custom SQL Server) for an additional cost.
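To make the gp3 ranges above concrete, here is a hedged sketch of the relevant parameters in a boto3-style `create_db_instance` request for RDS Custom for Oracle. The instance identifier and engine name are placeholders, and the range check mirrors the figures quoted above:

```python
# Illustrative parameters for an RDS Custom for Oracle instance on gp3
# storage (boto3 create_db_instance shape); identifiers are placeholders.
def gp3_instance_params(allocated_gib: int, iops: int = 12000) -> dict:
    # gp3 sizes IOPS independently of capacity; 12,000 IOPS is the included
    # baseline for RDS Custom for Oracle, scalable up to 64,000 for a cost.
    assert 40 <= allocated_gib <= 64 * 1024, "gp3 on RDS Custom for Oracle: 40 GiB to 64 TiB"
    return {
        "DBInstanceIdentifier": "my-custom-oracle",  # placeholder
        "Engine": "custom-oracle-ee",
        "StorageType": "gp3",
        "AllocatedStorage": allocated_gib,
        "Iops": iops,
    }

params = gp3_instance_params(400)
# boto3.client("rds").create_db_instance(**params)  # requires AWS credentials
```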
Amazon RDS offers storage types that differ in performance characteristics and price. General Purpose (SSD) storage offers a cost-effective option that is ideal for a broad range of small to medium sized database workloads and development and testing environments. Provisioned IOPS storage is designed for business-critical, performance-sensitive transactional database workloads, particularly workloads that require low I/O latency and consistent I/O throughput and are often running larger data sets.
R5b instances make 60 Gbps EBS bandwidth available to storage performance-bound workloads without requiring customers to use custom drivers or recompile applications. Customers can take advantage of this improved EBS performance to accelerate data transfer to and from Amazon EBS, reducing the data ingestion time for applications and speeding up delivery of results.
With R5b on EBS, customers have access to high-performance, scalable, durable, and highly available block storage. R5b instances are ideal for large relational database workloads such as Oracle Database, SQL Server, PostgreSQL, and MySQL, used to run applications like commerce platforms, ERP systems, and health record systems. R5b instances are also certified for production SAP workloads, including SAP NetWeaver-based applications and the in-memory SAP HANA database.
Amazon Relational Database Service (RDS) for Oracle now supports version 22.2 of Oracle Application Express (APEX) for the 19c and 21c versions of Oracle Database. Using APEX, developers can build applications entirely within their web browser. To learn more about the latest features of APEX 22.2, please refer to Oracle’s documentation.
For more details on supported APEX versions and how to add or modify APEX options for your RDS for Oracle database, please refer to the Amazon RDS for Oracle APEX Documentation.
AWS GovCloud U.S. Regions offer a logically isolated AWS cloud environment operated by U.S. citizens, on U.S. soil. They are designed to facilitate compliance with the FedRAMP High baseline; the DOJ’s Criminal Justice Information Systems (CJIS) Security Policy; U.S. International Traffic in Arms Regulations (ITAR); Export Administration Regulations (EAR); Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for Impact Levels 2, 4 and 5; FIPS 140-2; IRS-1075; and other requirements.
Customers can now track the current build status and image build steps for their image pipelines directly in EC2 Image Builder. This capability makes it easier for you to track image builds and troubleshoot build failures. This release also consolidates logs making it easier for you to audit and review builds. Additionally, workflow optimizations provide faster image builds, with internal tests showing up to 35% build speed improvements.
All your image build logs are consolidated in the workflow section of the EC2 Image Builder Console. With this launch, you can access detailed step-by-step image build logs for your custom images in your account. You can also use the AWS CLI, API, and CDK to monitor the progress of your image creation process.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it.
This week AWS were excited to announce the launch of Amazon Kendra Featured Results. Amazon Kendra administrators can now define search queries and associate a set of featured documents with each search query. When a user runs a search query with these specific keywords, the associated set of featured documents is displayed as the top search results.
Amazon Simple Notification Service (SNS) has introduced an open-source Extended Client Library for Python that enables you to publish and deliver large message payloads. Previously, only the Extended Client Library for Java was available.
This library is useful for messages that are larger than 256 KB, up to a maximum of 2 GB. The library automatically saves the actual payload to an Amazon S3 bucket and publishes the reference of the stored Amazon S3 object to the Amazon SNS topic.
Amazon SNS is a messaging service for Application-to-Application (A2A) and Application-to-Person (A2P) communication. The A2A functionality provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications.
These applications include Amazon Simple Queue Service, Amazon Kinesis Data Firehose, AWS Lambda, and HTTP/S endpoints. The A2P functionality enables you to communicate with your customers via mobile text messages (SMS), mobile push notifications, and email notifications. Now, with the Extended Client Library for Python, you can publish and deliver messages with payloads up to 2GB by having them automatically stored in Amazon S3 buckets.
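The payload-offloading pattern the Extended Client Library automates can be sketched as follows. This is a toy illustration of the mechanism, not the library's actual API; an in-memory dict stands in for the S3 bucket, and the bucket name is a placeholder:

```python
# Minimal sketch of SNS payload offloading: payloads over the 256 KB
# direct-publish limit are stored (here, in a dict standing in for S3)
# and a small JSON reference is published to the topic instead.
import json
import uuid

THRESHOLD = 256 * 1024  # SNS direct-publish payload limit

def publish_with_offload(message: str, bucket: dict, bucket_name: str = "my-payload-bucket"):
    if len(message.encode()) <= THRESHOLD:
        return message  # small enough to publish directly
    key = str(uuid.uuid4())
    bucket[key] = message  # the real library uploads the payload to Amazon S3
    # Subscribers use this reference to fetch the actual payload:
    return json.dumps({"s3BucketName": bucket_name, "s3Key": key})

bucket = {}
ref = publish_with_offload("x" * 300_000, bucket)  # offloaded: over 256 KB
```

The real library wires this behavior into the standard publish call, so producers and consumers don't handle the S3 round trip themselves.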
AWS Compute Optimizer now supports 61 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types. Newly supported instance types include the latest generation general purpose instance families (M6in, M6idn), compute optimized instance families (C6in), and memory optimized instance families (R6in, R6idn).
The newly launched recommendations help customers discover opportunities to optimize more EC2 instance types, ensuring high performance at the lowest cost. With this launch, AWS Compute Optimizer now supports a total of 486 EC2 instance types.
AWS introduced a new capability for AWS Batch to provide user-defined pod labels for jobs that run on your Amazon Elastic Kubernetes Service (Amazon EKS) clusters. Labels are key-value pairs that are used to specify identifying attributes of objects that are designed to be meaningful and relevant to users.
Using labels, customers can map their own organizational structures and bring better accountability, compliance, and cost visibility for their workloads.
This means that customers can now implement tagging best practices that provide infrastructure visibility and make issues easier to troubleshoot. Customers can specify pod labels within the eksProperties request for their jobs in AWS Batch, either when they register a job definition or when they submit the job request.
Once specified, user-defined labels can be viewed by describing the job. Overrides specified during job submission allow customers to override label values from the job definition and add new key-value pairs.
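A hedged sketch of what the eksProperties request might look like when registering a job definition (boto3 `register_job_definition` shape); the job name, image, and label values are placeholders:

```python
# Sketch of a RegisterJobDefinition request attaching user-defined pod
# labels under eksProperties; names and label values are placeholders.
def eks_job_definition(labels: dict) -> dict:
    return {
        "jobDefinitionName": "sample-eks-job",  # placeholder
        "type": "container",
        "eksProperties": {
            "podProperties": {
                "metadata": {"labels": labels},  # user-defined pod labels
                "containers": [
                    {
                        "image": "public.ecr.aws/amazonlinux/amazonlinux:2",
                        "command": ["echo", "hello"],
                    }
                ],
            }
        },
    }

job_def = eks_job_definition({"team": "analytics", "cost-center": "cc-1234"})
# boto3.client("batch").register_job_definition(**job_def)  # needs credentials
```

Labels like `team` or `cost-center` then flow through to the pods AWS Batch schedules on your EKS cluster, where standard Kubernetes label selectors and cost tooling can pick them up.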
AWS announces the general availability of the AWS Controllers for Kubernetes (ACK) for EventBridge and Pipes. This launch allows you to manage EventBridge resources, such as event buses, rules, and pipes, using the Kubernetes API and resource model (custom resource definitions).
Amazon EventBridge event buses and pipes are serverless event router and point-to-point integrations that enable you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services.
You can set up routing rules to determine where to send your events, allowing for application architectures to react to changes in your systems as they occur.
Amazon Athena announces that Amazon Athena for Apache Spark is now available in 4 new AWS regions: Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Mumbai). This release expands Amazon Athena for Apache Spark beyond the 5 regions available today: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland).
AWS Compute Optimizer now supports EC2 instances with 30 non-consecutive hours of utilization data within the 14-day lookback period (or up to 93 days for EC2 instances with enhanced infrastructure metrics). It no longer requires your EC2 instances to have 30 consecutive hours of utilization data to receive cost and performance optimization recommendations.
With this launch, you can unlock additional savings and performance improvement opportunities for EC2 instances that run scheduled workloads or were stopped during the lookback period. Compute Optimizer now generates rightsizing recommendations for every EC2 instance type it currently supports, provided the instance has accumulated more than 30 hours of utilization data during the data collection period.
You no longer have to worry about not having visibility into savings or performance improvement opportunities if your EC2 instances were stopped during the data collection period.
AWS is pleased to announce two new features in the AWS Well-Architected Tool (AWS WA Tool)—Consolidated Report and Enhanced Search—that will enable customers to quickly identify risk themes across their workloads and scale improvements across their organization.
Consolidated Report enables customers to see an overview of risks across all their workloads, helping them more easily identify risk trends. This macro-level view helps executive stakeholders understand where common issues lie, and prioritize team resources to drive widespread improvement.
Enhanced Search makes it easier to find relevant workloads quickly using additional search criteria, such as workload name, workload ID, and workload ARN within the AWS Management Console.
Amazon Omics now supports batch import of variant data into Omics variant stores. You can now import up to 1,000 variant call format (VCF) and genome VCF (gVCF) files in a single variant import job. This simplifies how you bring population-scale variant data into Omics and makes it available for analysis.
Amazon Omics is a fully managed service that helps healthcare and life science organizations and their software partners store, query, and analyze genomic, transcriptomic, and other omics data and then generate insights from that data to improve health and advance scientific discoveries.
We are excited to announce search and discovery of AWS resources and AWS documentation content in AWS Chatbot. The search feature allows customers to find their AWS resources and discover relevant AWS Documentation by simply typing queries in natural language.
When issues occur, customers need to look up product documentation or support articles to diagnose and remediate them. With this feature, customers can ask questions in natural language from chat channels to get answers without switching context to the AWS Console or other tools.
Customers can type queries such as “@aws How do I update Lambda concurrency settings?” to receive answer excerpts from the AWS Documentation and AWS Support articles. Similarly, customers may need to quickly find relevant AWS resources to troubleshoot the issues. Customers can now find AWS resources by typing “@aws show ec2 instances in us-east-1” in chat channels to retrieve the matching AWS resources.
This week, AWS Site-to-Site VPN announced Tunnel Endpoint Lifecycle Control, a new capability that provides better visibility and control of your VPN tunnel maintenance updates.
AWS Site-to-Site VPN is a fully-managed service that allows you to create a secure connection between your data center or branch office and your AWS resources using IP Security (IPSec) tunnels.
This feature provides you with added flexibility when using AWS Site-to-Site VPN by allowing you to apply updates to your tunnel endpoints at a time that best suits your business, ahead of the service-mandated deadline. Enabling this feature provides advance notice of upcoming maintenance updates, helping you plan for and minimize service disruptions to your VPN connections.
AWS Site-to-Site VPN offers two tunnels, and our best practice guidance is to configure both tunnels in your VPN connection for high availability. Customers that are sensitive to VPN tunnel state changes, or that can only support a single active tunnel at a time, can use this feature to reduce the operational pain caused by periodic maintenance-related VPN tunnel endpoint replacements.
Amazon DevOps Guru for Amazon Relational Database Service (Amazon RDS) now supports Amazon RDS for PostgreSQL. With this capability, you can resolve RDS for PostgreSQL related performance bottlenecks in minutes rather than days.
DevOps Guru for RDS for PostgreSQL supports both reactive insights (anomalous behavior that has already occurred) and proactive insights, which inform you of impending database performance and availability issues before they become critical.
Amazon DevOps Guru for RDS is a Machine Learning (ML) powered capability for Amazon RDS that automatically detects and diagnoses database performance and operational issues. DevOps Guru for RDS is a feature of DevOps Guru, which detects operational and performance related issues for Amazon RDS engines and dozens of other resource types.
DevOps Guru for RDS expands upon the existing capabilities of DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries. When an issue occurs, DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue. DevOps Guru for RDS is available in select AWS Regions; see the Amazon DevOps Guru documentation for the current list.
Amazon EventBridge Scheduler is now available in 18 additional regions: Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada (Central), Europe (Zurich), Europe (Milan), Europe (Spain), Europe (London), Europe (Paris), Middle East (UAE), Middle East (Bahrain), South America (Sao Paulo), and US West (N. California).
Amazon EventBridge Scheduler expands on its current scheduling capabilities by making it simple for developers to create, execute, and manage scheduled tasks at scale. Developers create schedules to arrange when events and tasks are triggered to automate IT and business processes and deliver scheduling functionality within their applications. EventBridge Scheduler enables customers to schedule millions of events and tasks across 200+ AWS services without provisioning or managing the underlying infrastructure.
EventBridge Scheduler supports one-time and recurring schedules that can be created using common scheduling expressions such as cron and rate, or set to run at a specific time, with support for time zones and daylight saving time.
Amazon GuardDuty expands threat detection coverage to continuously monitor and profile Amazon Elastic Kubernetes Service (Amazon EKS) container runtime activity to identify malicious or suspicious behavior within container workloads. GuardDuty EKS Runtime Monitoring introduces a new lightweight, fully-managed security agent that monitors on-host operating system-level behavior, such as file access, process execution, and network connections.
Once a potential threat is detected, GuardDuty generates a security finding that pinpoints the specific container, and includes details such as pod ID, image ID, EKS cluster tags, executable path, and process lineage.
GuardDuty EKS Runtime Monitoring includes over two dozen new detections at launch, which, when combined with GuardDuty EKS Audit Log Monitoring, amounts to more than 50 detections tailored to identify threats to Amazon EKS deployments.
GuardDuty EKS Runtime Monitoring can be enabled with a few steps in the GuardDuty console, and is integrated with Amazon EKS to allow for automated agent deployment to existing and new EKS clusters in your account. Leveraging AWS Organizations, you can centrally enable runtime threat detection coverage for accounts and workloads across the organization and maintain full security coverage. Current and new GuardDuty users can try GuardDuty EKS Runtime Monitoring at no cost with a 30-day free trial.
Amazon SageMaker Canvas now provides ready-to-use models so you can generate insights from thousands of documents, images, and lines of text in minutes. Additionally, you can now create custom models to address natural language processing (NLP) and computer vision (CV) use cases. SageMaker Canvas is a visual interface that enables business analysts to generate accurate machine learning (ML) predictions on their own—without requiring any ML experience or having to write a single line of code.
Business analysts are increasingly looking to accelerate their ability to generate insights from a variety of data and respond to ad-hoc analysis requests from business stakeholders. The process is often manual, time-consuming, and error-prone. ML can help business analysts analyze and generate insights from large volumes of data, but creating ML models often requires deep technical expertise.
Starting this week, you can now use SageMaker Canvas to access ready-to-use models or create custom models for specific image or text classification use cases. Ready-to-use models are powered by AWS AI services, including Amazon Rekognition, Amazon Textract, and Amazon Comprehend.
To create a custom model, you can import, prepare, explore, and label data. You can then train a custom model and evaluate the model’s performance. For custom image classification models, you can use heat maps to gain visibility into the training data that is impacting the model’s performance.
You can also correct the model predictions if incorrect, add the verified data back to the original training dataset, and re-train the model to iteratively improve the model’s performance. Finally, you can generate accurate predictions without writing a single line of code.
AWS Compute Optimizer now supports Hard Disk Drive (HDD) volumes and io2 Block Express EBS volume types. You can start saving cost and improving performance on those EBS volumes with the optimization recommendations.
With this launch, you can receive recommendations from Compute Optimizer to increase the size of Throughput Optimized HDD (st1) volumes and Cold HDD (sc1) volumes or switch from st1 volumes to General Purpose (gp3) volumes for improved IOPS or throughput. Compute Optimizer also provides recommendations for moving infrequently accessed data from st1 to sc1 to obtain cost savings.
Additionally, Compute Optimizer provides recommendations to migrate your EBS Magnetic (standard) volumes to current generation EBS volumes. If your Provisioned IOPS (io1/io2) EBS volumes are attached to io2 Block Express-enabled EC2 instances, Compute Optimizer can recommend increasing their size and provisioning higher IOPS to fully utilize the benefits of io2 Block Express.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6in instances are available in Europe (Stockholm), Middle East (Bahrain), Asia Pacific (Jakarta, Mumbai, Sydney), Africa (Cape Town), South America (Sao Paulo), Canada (Central), and AWS GovCloud (US-East) Regions. These instances are powered by 3rd Generation Intel Xeon Scalable processors with all-core turbo frequency of up to 3.5 GHz, and are the first x86-based Amazon EC2 instances to offer up to 200 Gbps network bandwidth.
C6in instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. You can take advantage of the higher network bandwidth to scale the performance of applications, such as network virtual appliances (firewalls, virtual routers, load balancers), Telco 5G User Plane Function (UPF), data analytics, high performance computing (HPC), and CPU based AI/ML workloads.
C6in instances are available in 10 different sizes with up to 128 vCPUs, including bare metal size. They also deliver up to 80 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth and up to 350K input/output operations per second (IOPS), the highest Amazon EBS performance across EC2 instances. C6in instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes.
Amazon DataZone is a new data management service to catalog, discover, analyze, share, and govern data across organizational boundaries. Visibility of and access to data are key drivers of innovation and value for business. To provide visibility and access between organizations, Amazon DataZone creates a usage flywheel. The flywheel is driven by data producers, who securely share data and its context, and data consumers, who find answers to business questions in the data.
With Amazon DataZone, data producers populate the business data catalog with structured data assets from AWS Glue Data Catalog and Amazon Redshift tables. Data consumers search and subscribe to data assets in the data catalog and share with other business use case collaborators.
Consumers can analyze their subscribed data assets with tools such as Amazon Redshift or Amazon Athena query editors that are directly accessed from the Amazon DataZone portal. The integrated publishing-and-subscription workflow provides access-auditing capabilities across projects.
Amazon DataZone (preview) root domains can be provisioned in the following AWS Regions: US East (N. Virginia), US West (Oregon), or Europe (Ireland). AWS IAM Identity Center, successor to AWS Single Sign-on, must be conﬁgured in the same AWS Region as the root domain. You can publish data from any of these Regions to the Amazon DataZone catalog. Subscribe to the data and consume it in the same Region as data in AWS analytics services such as Amazon Redshift and Athena.
Starting this week, you can use AWS WAF in the Europe (Zurich), Europe (Spain), Asia Pacific (Hyderabad), and Asia Pacific (Melbourne) regions. AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.
You can protect the following resource types: Amazon CloudFront distributions, Amazon API Gateway REST APIs, Application Load Balancer, AWS AppSync GraphQL API, and Amazon Cognito user pools.
With AWS WAF, you can control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, your protected resource responds to requests with the requested content, an HTTP 403 status code (Forbidden), or a custom response.
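As a hedged illustration of that kind of condition, here is what a WAFv2 rule blocking requests whose query string contains a given value might look like, expressed as the JSON structure the WAFv2 API uses. The rule and metric names are placeholders, and note that boto3 expects `SearchString` as bytes rather than the plain string used here for readability:

```python
# Sketch of a WAFv2 rule (API JSON shape) that blocks on a query-string
# match and returns a 403; names are placeholders.
def block_query_string_rule(needle: str) -> dict:
    return {
        "Name": "block-bad-query",  # placeholder
        "Priority": 0,
        "Statement": {
            "ByteMatchStatement": {
                "FieldToMatch": {"QueryString": {}},
                "SearchString": needle,  # boto3 expects bytes here
                "PositionalConstraint": "CONTAINS",
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }
        },
        "Action": {"Block": {"CustomResponse": {"ResponseCode": 403}}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "blockBadQuery",  # placeholder
        },
    }

rule = block_query_string_rule("forbidden-param")
```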
To see the full list of regions where AWS WAF is currently available, visit the AWS Region Table. Please note that only core AWS WAF features like AWS Managed Rules and rules are currently available in these new regions. For more information about the service, visit the AWS WAF page. AWS WAF pricing may vary between regions. For more information about pricing, visit the AWS WAF Pricing page.
The AWS Toolkits now support AWS SAM Accelerate in the IDE, letting you edit, deploy, and test your code iteratively for faster development. This update is supported in Visual Studio Code and JetBrains IDEs such as IntelliJ IDEA, Rider, WebStorm, PyCharm, CLion, RubyMine, GoLand, and PhpStorm.
Customers can quickly deploy and run code changes by selecting the ‘Sync AWS SAM Application’ option in their IDE and running the local changes in the AWS cloud without initiating an AWS CloudFormation deployment. For example, changes to AWS Lambda functions, Lambda Layer resources, AWS Step Functions state machines, and Amazon API Gateway definitions are deployed quickly without an infrastructure deployment.
With the ability to rapidly deploy to the AWS Cloud during development, you can iteratively identify issues with your application that are challenging to detect in your local environment. For example, testing in the AWS Cloud can help you identify issues with IAM roles or API authorization. Check out the Toolkit User Guide for JetBrains and VS Code to learn how to use this feature in your preferred IDE.
Amazon SageMaker Python SDK is an open source library for training and deploying machine-learning models on Amazon SageMaker. Amazon SageMaker Python SDK users can now configure default values for parameters such as IAM roles, VPC, and KMS keys. For the full list of supported parameters and APIs, see the SageMaker SDK defaults documentation page.
With this update, administrators and end users can initialize AWS infrastructure primitives with defaults specified in a configuration file. Multiple configuration files are supported: one at the admin level and another at the user level, which can be stored in Amazon S3, Amazon EFS, or the local file system.
Once configured, the defaults from these configuration files will be merged and inherited as inputs to the SDK with no additional actions needed. For example, users who may not be familiar with VPC and encryption settings don't need to specify these inputs each time they use a SageMaker API. This saves time for users and admins, eliminates the burden of repetitively specifying parameters, and results in leaner and more manageable code.
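The merge behavior described above can be illustrated with a toy sketch. This is not the SDK's real implementation or config schema; it only shows the layering idea, with placeholder role and key values, where user-level values override admin-level ones and explicit API arguments win over both:

```python
# Toy illustration of layered defaults: admin config first, then user
# config overrides, then explicit API arguments override everything.
def merged_defaults(admin: dict, user: dict, explicit: dict) -> dict:
    result = dict(admin)     # start from the admin-level config file
    result.update(user)      # user-level config overrides admin values
    result.update(explicit)  # arguments passed explicitly always win
    return result

admin_cfg = {
    "RoleArn": "arn:aws:iam::123456789012:role/admin-default",  # placeholder
    "KmsKeyId": "alias/sagemaker-default",                      # placeholder
}
user_cfg = {"RoleArn": "arn:aws:iam::123456789012:role/user-default"}  # placeholder
resolved = merged_defaults(admin_cfg, user_cfg, {})
```

In the real SDK the resolved values are fed into SageMaker API calls automatically, so a user who never specifies a role or KMS key still gets the organization's defaults.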
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Today, we are announcing that Amazon Translate asynchronous Batch Translation is now available in eight additional regions - US West (Northern California), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Paris) and Europe (Stockholm).
With this expansion, Amazon Translate asynchronous Batch Translation is supported in 15 regions. Now, AWS customers can reach a wider set of users in many geographies that are increasingly expecting to consume media and interact with organizations in the language of their choice.
With asynchronous Batch Translation, customers can translate a large collection of documents stored in a folder in an Amazon Simple Storage Service (Amazon S3) bucket. Asynchronous batch operations are particularly useful for translating a large collection of documents with one API call, when the application doesn't need a real-time response.
Asynchronous Batch Translation accepts a batch of up to 5 GB per API call, with each document not exceeding 20 MB and the number of documents in the S3 bucket folder not exceeding 1 million per batch. Refer to our Guidelines and Limits for more details.
AWS Service Catalog is now available to customers in four additional AWS Regions: Europe (Zurich), Europe (Spain), Asia Pacific (Hyderabad) and Asia Pacific (Melbourne).
AWS Service Catalog enables customers to create, govern, and manage a catalog of Infrastructure as Code (IaC) templates that are approved for use on AWS. These IaC templates can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
AWS Service Catalog helps you centrally curate and share commonly deployed templates across teams to achieve consistent governance and meet compliance requirements. End-users such as engineers, database administrators, and data scientists can then quickly discover and self-service approved AWS resources that they need to use to perform their daily job functions.
With AWS Service Catalog, organizations can control which IaC templates and versions are available, what is configured in each of the available services, and who can access each template, based on individual, group, department, or cost center. AWS Service Catalog is used by enterprises, system integrators, and managed service providers to organize, govern, and provision resources on AWS.
Starting today, you can use Amazon SageMaker to build, train, and deploy machine learning (ML) models in the Europe (Spain) Region.
Amazon SageMaker is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.
AWS DataSync supports copying data from Azure Blob Storage to AWS Storage in preview. Using DataSync, you can move your object data at scale from Azure Blob Storage to AWS Storage services such as Amazon S3. AWS DataSync supports all blob types within Azure Blob Storage and can also be used with Azure Data Lake Storage (ADLS) Gen 2.
AWS DataSync gives customers a purpose-built service to simplify and automate data transfers. AWS DataSync removes the operational burden of writing and maintaining code, buying and operating data transfer software, and manually ensuring data transfers are executed and verified. AWS DataSync integrates with Amazon CloudWatch and Amazon EventBridge for easy monitoring of metrics, logs, and events.
AWS DataSync also compresses data before transit, identifies only the objects that have changed and need to be copied, and automatically recovers from network interruptions. In addition to Azure Blob Storage, DataSync supports Google Cloud Storage and Azure Files storage locations as well as Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), and all Amazon FSx file systems.
Starting this week, you can use CloudFront Functions to further customize responses to viewers, including changing the HTTP status code and replacing the HTTP body of the response. CloudFront Functions is a serverless edge computing feature on CloudFront built for lightweight HTTP transformations that runs in the 450+ CloudFront edge locations globally.
Previously, CloudFront Functions allowed transforming the HTTP request and response attributes such as headers and cookies. With this launch, when CloudFront receives an HTTP response from your origin server or the cache, you can modify the HTTP response to override the HTTP status code and HTTP body.
For example, if you want to evaluate the headers returned from your origin to determine whether to block a request, you can change the HTTP status code to 403 and drop the HTTP body in the response. You can also use this capability to generate the HTTP body for each request, for example, you can evaluate a request and respond back to viewers with a customized webpage.
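CloudFront Functions are authored in JavaScript; purely to illustrate the decision logic described above, here is a Python sketch. The event shape loosely follows the viewer-response event, and the x-block header is an assumption for illustration:

```python
# Sketch of the viewer-response logic described above. Real CloudFront
# Functions are written in JavaScript; this mirrors the idea only.
# The event structure and the "x-block" header are illustrative assumptions.

def viewer_response_handler(event):
    response = event["response"]
    headers = response.get("headers", {})
    # If the origin flagged the request (hypothetical x-block header),
    # override the status code and drop the body before it reaches the viewer.
    if "x-block" in headers:
        response["statusCode"] = 403
        response["statusDescription"] = "Forbidden"
        response["body"] = ""
    return response
```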
Amazon GuardDuty has added new functionality to its integration with AWS Organizations to make it even simpler to enforce threat detection across all accounts in an organization. Since April 2020, GuardDuty customers have been able to leverage its integration with AWS Organizations to manage GuardDuty for up to 5,000 AWS accounts, as well as automatically apply threat detection coverage to new accounts added to the organization.
In some cases, this could still result in coverage gaps, for example, if GuardDuty was not applied to all existing accounts, or if it was unintentionally suspended in individual accounts. Now with a few steps in the GuardDuty console, or one API call, delegated administrators can enforce GuardDuty threat detection coverage for their organization by automatically applying the service to all existing and new accounts, as well as automatically identifying and remediating potential coverage drift. To learn more, see the Amazon GuardDuty account management User Guide.
Customers across industries and geographies use Amazon GuardDuty to protect their AWS environments, including over 90% of AWS’s 2,000 largest customers. GuardDuty continuously monitors for malicious or unauthorized behavior to help protect your AWS resources, including your AWS accounts, access keys, EC2 instances, EKS clusters, data stored in S3, and Aurora databases. You can begin your 30-day free trial of Amazon GuardDuty with a single click in the AWS Management Console.
AWS CodeBuild’s support for Arm using AWS Graviton2 is now available in: Europe (Milan), Middle East (Bahrain), Asia Pacific (Osaka), Asia Pacific (Hong Kong), and Asia Pacific (Jakarta).
In February 2021, CodeBuild launched an update for native Arm builds to use the second generation of AWS Graviton processors. Support for Graviton2 allows customers to build and test on Arm without the need to emulate or cross-compile. Now, with additional regions, more CodeBuild customers targeting Arm benefit from the capabilities of AWS Graviton2 processors.
This week, AWS released version 1.27 of AWS Copilot, which enables customers to fully customize the AWS CloudFormation templates that AWS Copilot uses to provision service, environment, pipeline, and job resources.
Customers can now use AWS Cloud Development Kit (CDK) or YAML patches to change any property of those AWS resources. AWS Copilot is a command-line interface (CLI) that makes it easier for customers to build, deploy, and operate containerized applications on AWS by providing common application architecture and infrastructure patterns, user-friendly operational workflows, and configuring deployment pipelines.
With the new AWS Copilot release (1.27), users can now run copilot svc override, copilot env override, or copilot job override to enable overrides of any property of a service, environment, or job. Users can choose between two override options: --tool cdk or --tool yamlpatch. With CDK overrides, AWS Copilot bootstraps a new CDK application inside a copilot/<resource name>/overrides/ directory of the user’s project and provides instructions inside the stack.ts file on how to use the CDK.
Customers can start by editing stack.ts to modify any properties of the AWS CloudFormation resources generated by AWS Copilot before a deployment. Customers who choose YAML patch overrides can override the AWS CloudFormation template via .yaml patch files that adhere to the JSON patch syntax. Both options give customers full control over the AWS resources and properties that AWS Copilot deploys.
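To illustrate what such a patch does, here is a minimal Python sketch of a JSON-patch-style replace operation applied to a CloudFormation template fragment. This is a simplified illustration, not Copilot's implementation, and the resource names are placeholders:

```python
# Minimal sketch of a JSON-patch "replace" operation (the syntax Copilot's
# YAML patches follow) rewriting one property of a CloudFormation template.
# Simplified for illustration; resource names are hypothetical.

def apply_replace(template, path, value):
    """Apply a single 'replace' op; path looks like /Resources/Service/Type."""
    keys = [k for k in path.split("/") if k]
    node = template
    for key in keys[:-1]:       # walk to the parent of the target key
        node = node[key]
    node[keys[-1]] = value      # overwrite the leaf value
    return template

template = {"Resources": {"Service": {"Type": "AWS::ECS::Service",
                                      "Properties": {"EnableExecuteCommand": False}}}}
apply_replace(template, "/Resources/Service/Properties/EnableExecuteCommand", True)
```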
Starting this week, you can use Amazon SageMaker to build, train, and deploy machine learning (ML) models in the Europe (Zurich) Region.
Amazon SageMaker is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models.
Amazon Athena has expanded its encryption settings to improve the security of your query results. With today’s launch, you can now ensure all query results are encrypted at or above a level of encryption that you specify.
When you query data, sensitive information may be included in the result. To reduce the impact of unauthorized access by an untrusted third-party, it is recommended that you encrypt your query results.
Previously, you could set a default encryption level for queries within a workgroup; however, users could, if permitted, override the default and use a different encryption level for individual queries. With this release, you can now ensure all query results are encrypted with a desired minimum level of encryption and choose one of several methods of varying strength to safeguard your data.
You can use the Athena console, AWS CLI, API, or SDK to configure the minimum level of encryption you want. To learn more, see Configuring minimum encryption for a workgroup.
AWS are excited to announce that Incident Manager now provides on-call schedules to help you maintain 24/7 coverage and responsiveness for critical issues. This extends Incident Manager’s capabilities for incident response, helping operations teams more quickly engage, respond, and resolve application availability and performance issues when they occur.
With on-call schedules, operations teams can configure rotations through a group of on-call contacts and use it as part of their Incident Manager response plan to ensure that there is always a contact on-call. You can view the on-call schedule and make configuration changes to accommodate out of office, or personal events at any time from the Incident Manager console.
Incident Manager helps you bring the right people and information together when a critical issue is detected, activating pre-configured response plans to engage responders using SMS, phone calls, and chat channels, as well as to run AWS Systems Manager Automation runbooks.
This new native feature adds another option for managing your on-call response. To learn how to get started with On-call Schedules in Incident Manager, see our documentation.
Bottlerocket, a Linux-based operating system that is purpose built to host container workloads, now supports FireLens. Customers using Bottlerocket with Amazon Elastic Container Service (Amazon ECS) can now benefit from a simpler way to collect logs from Bottlerocket nodes.
FireLens is a container log router for Amazon ECS that allows you to send logs to multiple destinations and that gives you extensibility to use the breadth of services at AWS or partner solutions for log analytics and storage. Please refer to the detailed guide on how to configure a custom log routing using FireLens.
NICE DCV version 2023.0 introduces multiple enhancements and features, such as support for Red Hat Enterprise Linux 9 and monitor selection for a full-screen remote session on Linux and macOS clients. NICE DCV is a high-performance remote display protocol that is designed to help customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.
NICE DCV version 2023.0 contains the following features and improvements:
- Monitor selection for Linux and macOS clients - users can now choose which local display should be used for a full-screen remote session
- Drag and drop file upload - to start a file upload, users can now use their mouse to drag and drop files from their local desktop to the DCV client window.
- Support for additional Linux distributions - the DCV server software adds support for Red Hat Enterprise Linux 9, Rocky Linux 9 and CentOS Stream 9. Please note that DCV only supports X11 display server protocol, the new Wayland protocol is not supported.
- Time zone redirection - users can now configure sessions on Linux hosts to use the client's local time zone
- QUIC transport updates - UDP-based QUIC transport was enhanced with image quality and performance optimizations.
- User interface changes - incremental changes to the user interface of macOS and Linux clients to unify the user experience across different platforms.
AWS Application Migration Service (AWS MGN) now supports new migration and modernization features, including import/export of source environment inventory lists, a server migration status dashboard, and new application modernization actions. Application Migration Service helps minimize manual processes by automating the conversion of your source servers to run natively on AWS with optional modernization features.
You can now use Application Migration Service to import your source environment inventory list from a CSV file. This file allows you to specify server attributes individually or in bulk. This can help accelerate migration by connecting the inventory list from your discovery tools with Application Migration Service.
After importing inventory with specified server attributes, you can start replicating these servers to AWS. You can also export source server inventory for reporting, offline review, and integration with other services.
Application Migration Service also supports a new source server migration status dashboard. Using this dashboard can simplify migration project management by providing an aggregated view of the migration lifecycle status of your source servers, including data replication status, test status, and cutover status.
Additional new features include support for 8 preconfigured application modernization actions, including upgrading a Windows Server version, converting a Windows MS-SQL BYOL to an AWS license, and installing the Amazon CloudWatch agent. Application Migration Service also now supports CentOS 5.5 and later and RHEL 5.5 and later operating systems.
Amazon Chime SDK now offers real-time call analytics to help businesses extract insights from voice conversations. Machine learning (ML)-powered call insights include speaker search and voice tone analysis. Customers can gain additional ML-powered intelligence, such as turn-by-turn transcripts and customer and agent sentiment, through integrations with Amazon Transcribe and Amazon Transcribe Call Analytics.
These insights can be consumed in real time and, after a call completes, by accessing a data lake, and they can be visualized using tools such as Amazon QuickSight. Additionally, businesses can record voice conversations to the S3 bucket of their choice.
Insights and call recordings apply across enterprise functions including sales and marketing, support, compliance and operations to improve customer experience, increase employee productivity, and reduce compliance costs.
For example, banks can use call analytics to record trader conversations and generate real-time transcriptions. Businesses can apply voice tone analysis to customer conversations to assess sentiment around products and services, and use speaker search to enrich call records and transcripts with identity attribution.
Call analytics works with any Amazon Chime Voice Connector. Customers can get started with just a few clicks in the AWS console or using Application Programming Interfaces (APIs). You can use call analytics in US East (N. Virginia), US West (Oregon), and Europe (Frankfurt) AWS Regions.
Amazon Elastic Kubernetes Service (EKS) announces domainless Group Managed Service Account (gMSA) support for Windows containers. This helps customers easily authenticate applications hosted on Amazon EKS with Microsoft Active Directory (AD) using a portable user identity and a plug-in mechanism to retrieve the gMSA credentials for their Windows containers. Now, customers can run containers that require AD authentication without joining the EKS nodes to the domain, even during autoscaling events.
Group Managed Service Account (gMSA) is a managed domain account that provides automatic password management, service principal name (SPN) management, and the ability to delegate management to other administrators across multiple servers or instances. This allows multiple containers or resources to share an AD account without having to authenticate each container or resource individually, while still being able to access network-shared resources such as SQL Server hosts or file shares.
Since the launch of EKS version 1.14, customers can run EKS Windows containers with gMSA by joining underlying nodes to a target AD domain. Now customers can also use a built-in plugin on the latest EKS-Optimized Windows AMIs (versions 1.22 and above) that enables non-domain-joined Windows nodes to retrieve gMSA credentials with a portable user identity instead of a host computer account. Read this blog for a step-by-step guide on how to get started.
Amazon Connect agent workspace now provides programmatic step-by-step guidance that agents can use to identify customer issues and then recommends which actions to take to resolve them. Using Flows, the Amazon Connect no-code/low-code drag-and-drop design interface, you can design a guide that presents what the agent should review or do at a given moment during a customer interaction. This guidance helps decrease the time it takes to train new agents and helps all agents become more productive.
With step-by-step guides, you can help infer customer intent and recommend discrete next actions through business rules configured in flows. For example, when a customer calls, the agent workspace will present the agent with the likely solution based on the customer’s history or current context such as a lost order.
Then, the flow guides the agent through each action needed to complete the solution such as initiate a replacement order. You can design flows for various types of customer interactions and present agents with different guides based on context such as call queue, customer information, and interactive voice response (IVR).
We are pleased to announce that as of today, new Cost Explorer users will automatically receive the benefit of Cost Anomaly Detection. Cost Anomaly Detection uses machine learning to continuously monitor, detect, and alert customers of unexpected cost increases. The default configuration allows new Cost Explorer users to quickly improve cost controls with zero effort.
Up until today, any user who wanted to use Cost Anomaly Detection had to first enable Cost Explorer and then set up cost monitors and alerting preferences to start using the service. The new automatic configuration removes the manual process. With this launch, an AWS service monitor and a daily email subscription will be created for new Cost Explorer users (enabled on and after March 27, 2023) with a regular standalone account or a management account.
If the actual spend is over $100 and exceeds 40% of expected spend, a daily summary will be sent to the primary email address associated with the account. Users are encouraged to create/modify additional cost monitors and alerting preferences for their specific needs. Users who are not interested in Cost Anomaly Detection can opt-out via the console or API.
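Reading that default rule literally (AWS's exact evaluation logic may differ), it can be expressed as:

```python
# Literal reading of the default alerting rule described above: a daily
# summary is sent when actual spend is over $100 AND exceeds 40% of
# expected spend. AWS's precise anomaly evaluation may differ; this is
# only a sketch of the stated thresholds.

def should_alert(actual_spend, expected_spend):
    """Return True if the default daily summary would be triggered."""
    return actual_spend > 100 and actual_spend > 0.40 * expected_spend
```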
Anthos Service Mesh
The control_plane field in the service mesh fleet feature API (for example, gcloud container fleet mesh update --control-plane ...) is deprecated. Instead, use the management field. For more information, see Provision managed Anthos Service Mesh.
On March 27, 2023 we released an updated version of the Apigee Integration.
Artifact Registry is now available in the me-central1 region (Doha, Qatar).
Artifact Registry is now available in the europe-west12 region (Turin, Italy).
Artifact Registry repositories with gcr.io domain support are now generally available. These repositories can host your existing Container Registry images and automatically redirect requests for gcr.io hosts to corresponding Artifact Registry repositories.
BigQuery ML documentation is now integrated with BigQuery documentation to unify resources for data analysis and machine learning tasks such as inference. BigQuery ML documentation resources include:
BigQuery Partner Center, which can be used to discover and try validated partner applications, is now generally available (GA). In addition, the Google Cloud Ready - BigQuery initiative has added 14 new partners.
Compute (analysis) is now generally available (GA) in three new BigQuery editions: Standard, Enterprise, and Enterprise Plus. These editions support the slots autoscaling model to meet your organization's needs and budget.
You can now use the tf_version training option to specify the TensorFlow (TF) version during model training. By default, tf_version is set to '1.15'. If you want to use TF2 with the Keras API, you can add tf_version = '2.8.0' when creating the model.
You can now use the xgboost_version training option to specify the XGBoost version during model training. By default, xgboost_version is set to '0.9'. You can choose XGBoost version 1.1 by specifying xgboost_version = '1.1'.
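As a sketch, these training options slot into a CREATE MODEL statement that you might build and submit via the BigQuery Python client. The dataset, model, and table names below are placeholders:

```python
# Sketch of assembling a BigQuery ML CREATE MODEL statement that pins the
# TensorFlow version via the tf_version option described above. The dataset,
# model, and table names are placeholders; in practice you would run the DDL
# with google.cloud.bigquery's Client.query().

def create_model_ddl(model_type="DNN_CLASSIFIER", tf_version="2.8.0"):
    return (
        "CREATE OR REPLACE MODEL `mydataset.mymodel` "
        f"OPTIONS(model_type='{model_type}', tf_version='{tf_version}') "
        "AS SELECT * FROM `mydataset.training_data`"
    )
```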
You can now use the instance_weight_col training option to identify the column containing weights for each data point in the training dataset. Currently, the instance_weight_col option is only available for boosted tree and random forest models with non-array feature types.
You can now import model artifacts saved in ONNX, XGBoost, and TensorFlow Lite formats into BigQuery for inference, allowing you to leverage models built in popular frameworks directly within the BigQuery ML inference engine.
You can also host models remotely on Vertex AI Prediction and do inference with BigQuery ML, removing the need to build data pipelines manually.
You can do inference with Google Cloud's state-of-the-art pretrained models using Cloud AI service table-valued functions (TVFs) to get insights from your data. The TVFs work with the Cloud Vision API, Cloud Natural Language API, and Cloud Translation API.
Cloud Composer 2 now supports access with external identities through workforce identity federation.
Fixed a problem where upgrade checks were failing for some Cloud Composer 2 environments. This issue was affecting environments where Cloud Build can't be used to install PyPI packages.
The default value for the dag_dir_list_interval Airflow configuration option has changed.
Increased the timeout for environment operations performed by Cloud Build to 35 minutes.
Cloud Data Loss Prevention
The legacy version of the STREET_ADDRESS infoType detection model will stay available until further notice. Previously, this legacy model was scheduled to be removed on 19 June 2023.
Dedicated Cloud Interconnect support is available in the following colocation facilities:
- Ooredoo QDC5 (Qatar Data Center Ooredoo), Doha
- Quantum Switch (QSDC), Doha
For more information, see the Locations table.
Cloud KMS is available in the following region:
For more information, see Cloud KMS locations.
When you create a log view and use the source() function in your filter, the argument to the function is now validated to ensure that it is a single string representing a project, folder, billing account, or organization.
The Cloud Logging API now supports the following region:
The link for the Managed Prometheus page in Cloud Monitoring now goes to the PromQL tab on the Metrics Explorer page.
The following new region is now available:
Cloud SQL for MySQL / PostgreSQL / SQL Server
Support for me-central1 (Doha) region.
The changes in the September 15, 2022 Release Notes entry for read replica maintenance are now available. Cloud SQL read replicas follow the maintenance settings for the primary instance, including the maintenance window, rescheduling, and the deny maintenance period. During the maintenance event, Cloud SQL maintains the replicas before maintaining the primary instance. For more information, see How does maintenance affect read replicas?
Cloud SQL now exposes 38 new metrics. These metrics improve observability of Cloud SQL for SQL Server instances, helping you investigate performance issues and resource bottlenecks. You can find these metrics in the Metrics explorer within the Monitoring dashboard.
For more information about these metrics, see Cloud SQL Metrics.
The rollout of the following PostgreSQL minor versions, extension versions, and plugin versions is currently underway:
- 10.21 is upgraded to 10.22.
- 11.16 is upgraded to 11.17.
- 12.11 is upgraded to 12.12.
- 13.7 is upgraded to 13.8.
- 14.4 is upgraded to 14.5.
Extension and plugin versions
- plv8 is upgraded from 3.1.2 to 3.1.4.
- wal2json is upgraded from 2.3 to 2.4.
- pgTAP is upgraded from 1.1.0 to 1.2.0.
- PostGIS is upgraded from 3.1.4 to 3.1.7.
- pg_partman is upgraded from 4.5.1 to 4.7.0.
- pg_wait_sampling is upgraded from 1.1.3 to 1.1.4.
- pg_hint_plan is upgraded from 1.3.7 to 1.4.
- pglogical is upgraded from 2.4.1 to 2.4.2.
If you use a maintenance window, then the updates to the minor, extension, and plugin versions happen according to the timeframe that you set in the window. Otherwise, the updates occur within the next few weeks.
The new maintenance version is [PostgreSQL version].R20230316.02_02. To learn how to check your maintenance version, see Self-service maintenance. To find your maintenance window or to manage maintenance updates, see Find and set maintenance windows.
Cloud SQL now supports the Linked Servers functionality of SQL Server. You can use this capability to integrate data from multiple sources and distribute queries across multiple servers. To learn more, see About linked servers.
The Cloud SQL Active Directory (AD) Diagnosis tool helps you troubleshoot issues that you might face while connecting to AD-enabled Cloud SQL for SQL Server instances, using an on-premises AD domain.
You can create Cloud Spanner regional instances in Doha, Qatar (me-central1).
Cloud TPU now supports TensorFlow 2.12.0. For more information, see the TensorFlow 2.12 release notes.
Cloud Workstations is available in the following regions:
- asia-south1 (India)
- us-east4 (Virginia, North America)
For more information, see Locations.
Generally available: The Doha, Qatar, Middle East region (me-central1-a,b,c) has launched with E2 and N2 VMs available in all three zones.
See VM instance pricing for details.
Preview: Persistent Disk Asynchronous Replication (PD Async Replication) provides low recovery point objective (RPO) and low recovery time objective (RTO) block storage replication for cross-region active-passive disaster recovery. For more information, see About Persistent Disk Asynchronous Replication.
Generally Available: You can test how workloads running on sole-tenant nodes behave during a host maintenance event, and see the effects of the sole-tenant VM's host maintenance policy on the applications running on the VMs.
For more information, see Simulate host maintenance events on sole-tenant nodes.
Confidential Space is now generally available.
Confidential Space is designed to let parties share sensitive data with a mutually agreed upon workload, while they retain confidentiality and ownership of that data. Such data might include personally identifiable information (PII), protected health information (PHI), intellectual property, cryptographic secrets, and more. Confidential Space helps create isolation so that data is only visible to the workload and the original owners of the data.
Confidential Space: The assertion.swversion attestation assertion now verifies the Confidential Space image version number the workload is being run on, with the result returned as a list. Previously, the assertion was used to determine whether the workload was running on a production or debug Confidential Space image, and the result was returned as an integer.
Confidential Space: The assertion.submods.confidential_space.support_attributes assertion can now be used to verify the support status of the Confidential Space image being used, for example, to ensure that the workload is running on the latest version of the image, and to determine whether a production or debug image is in use.
Config Connector version 1.102.0 is now available.
Changes in this release include:
- Introduced a configurable reconciliation interval feature.
- Fixed a bug causing diff detection on the reservedIpRange field.
- Added virtualRepositoryConfig fields to ArtifactRegistryRepository.
- Added the scheduling.maintenanceInterval field.
- Added the groupPlacementPolicy.maxDistance field.
- Added the deletionPolicy field.
- Added the protectConfig field.
- Added transferSpec.sourceAgentPoolName fields.
- Added spec.github.enterpriseConfigResourceNameRef fields to CloudBuildTrigger.
- Added the spec.diskEncryptionKey.rsaEncryptedKey field to ComputeDisk.
- Added the spec.rateLimitOptions.enforceOnKeyConfigs field to ComputeSecurityPolicy.
- Added the spec.kubeletConfig.podPidsLimit field to ContainerCluster.
- Added the spec.kubeletConfig.podPidsLimit field to ContainerNodePool.
- Added the spec.instanceType field to SQLInstance.
Dataflow is now available in Doha (me-central1).
The Dataflow VM image has been updated to include mitigations for multiple vulnerabilities by upgrading to cos-97-16919-235-30. For the full list of mitigations, see the Container-Optimized OS release notes.
Dataflow jobs started on or after March 29, 2023 will run VM instances that use this image.
Vertical Autoscaling now supports batch jobs.
Dataproc cluster creation now supports the pd-extreme disk type.
Dataproc on GKE now disallows update operations.
Dialogflow CX now provides the TO_NUMBER system function.
Document AI Warehouse
Allow users to upload and view TIFF file types in the UI.
DocAI Warehouse Pipelines (preview):
BigQuery Connector (preview): Supports batch exports of document metadata into BigQuery, which enables users to do data analysis and create reports and dashboards, for example, data visualization using BI dashboards.
Firestore no longer limits the number of writes that can be passed to a Commit operation or performed in a transaction. Previously, the limit was 500. Limits for request size and the transaction time limit still apply.
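Previously, client code commonly chunked writes into groups of 500 to stay under the old per-commit limit; a helper like the following sketch is no longer needed for the write-count limit (request-size and transaction-time limits still apply):

```python
# Sketch of the batching workaround the old 500-write limit used to require.
# With the limit removed, splitting a large set of Firestore writes like this
# is no longer necessary for the write count alone.

def chunk(writes, size=500):
    """Yield successive slices of at most `size` writes."""
    for i in range(0, len(writes), size):
        yield writes[i:i + size]
```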
The me-central1 region (Doha, Qatar) is now available.
Starting from GKE 1.26, cluster autoscaler can drain Pods from multiple nodes in parallel. The removal criteria are not changing, so the end state after scale down is going to be the same, but it will be achieved faster.
Google Distributed Cloud Edge
This is a patch release of Google Distributed Cloud Edge (version 1.3.1).
The following changes have been introduced in this release of Distributed Cloud Edge:
- The Kubernetes control plane has been updated to version 1.24.9-gke.2500.
- The Kubernetes container daemon (containerd) has been updated to version 1.6.6-gke.1.
- The Kubernetes worker node agent (kubelet) has been updated to version 1.24.7-gke.5.
The following issues have been resolved in this release of Distributed Cloud Edge:
- Errors in the NodeSystemConfigUpdate custom resource definition that shipped with Distributed Cloud Edge 1.3.0 have been corrected. The outputs of the affected status fields are now accurate.
Play Integrity is now supported for client-side authentication on Android applications. For more information, see Authenticate with Firebase on Android using a Phone Number.
Memorystore for Redis
Self-service maintenance is now Generally Available for Memorystore for Redis.
Migrate to Containers
On March 27, 2023 we released version 1.1.0 of the Migrate to Containers modernization plugins.
Learn how to Upgrade Migrate to Containers plugins.
Preview: Added support for refactoring WordPress Servers running on Apache2 Linux to containers, which lets you deploy WordPress sites as containers on GKE, GKE Autopilot clusters, Anthos clusters, and Cloud Run.
For more information, see Migrate a WordPress site.
Introduced the following features for JBoss migration:
- Support for JBoss versions has been extended: Migrate to Containers now supports migration of JBoss EAP versions 7.0 - 7.4 to equivalent WildFly community-based container images, in addition to migrations of WildFly versions 8.1.0 - 26.1.1.
- Secrets are now automatically created from extracted security realms configuration and key-stores. This new feature fixes potential security risks and lets you update secrets without having to recreate images.
- A targetImageHome property has been added to the migration plan to allow users to specify an alternative container image.
- An ExcludeFiles property has been added to the migration plan, which lets you explicitly exclude files and directories from the container image.
- The data migration feature now automates the creation and mounting of a Persistent Volume Claim (PVC) for the $JBOSS_HOME/standalone/data directory. This directory is available for use by services that require storing content in the file system.
Files located at /tmp are now filtered out when discovering Tomcat application dependencies.
Docker images may contain broken symlinks. Ensure that the tar archive artifacts added to dockerfile don't contain symlinks that don't resolve to another file in the archive. If they do, either retrieve the files from the source VM and add them to the dockerfile manually, or replace the symlinks in the source VM and perform extraction again.
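A check along these lines can be done with Python's tarfile module. This is a simplified sketch that only verifies symlink targets against other archive members (it ignores some relative-path edge cases):

```python
# Sketch: find symlinks in a tar archive that don't resolve to another member
# of the same archive, as the guidance above suggests checking. Simplified;
# absolute link targets and some path edge cases are not handled.

import os
import tarfile

def broken_symlinks(tar_path):
    """Return the names of symlink members whose targets are not in the archive."""
    with tarfile.open(tar_path) as tar:
        names = {m.name for m in tar.getmembers()}
        broken = []
        for m in tar.getmembers():
            if m.issym():
                # Resolve the link target relative to the symlink's directory.
                target = os.path.normpath(
                    os.path.join(os.path.dirname(m.name), m.linkname))
                if target not in names:
                    broken.append(m.name)
        return broken
```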
Vertex AI Pipelines cost showback with billing labels is now generally available (GA). You can now use billing labels to review the cost of a pipeline run, along with the cost of individual resources generated from Google Cloud Pipeline Components in the pipeline run. For more information, see Understand pipeline run costs.
Vertex AI Workbench
The M105 release of Vertex AI Workbench managed notebooks includes the following:
- Fixed an issue wherein a runtime with idle shutdown enabled doesn't detect activity and shuts down.
- Fixed an issue wherein the runtime data disk runs out of space and prevents access.
- Fixed an issue wherein end user credentials are not preserved after shutdown.
- Changed Health Agent logging levels.
For auto mode VPC networks, added a new subnet 10.212.0.0/20 for the Doha me-central1 region. For more information, see Auto mode IP ranges.
Microsoft Azure Releases And Updates
This release includes quality improvements.
Azure HDInsight for Apache Kafka 3.2.0 is now available for public preview and ready for production workloads.
Azure HDInsight for Apache Spark 3.3.0 is under development.
Persist data in Azure Container Apps via mounted volumes.
Azure Maps is now HIPAA compliant, supporting workloads that handle protected health information (PHI), such as geocoding patient home addresses.
Flush all data from active geo-replicated caches using the Azure portal, PowerShell, or the CLI.
Scale enterprise caches up or out without disrupting the operation of the current cache using in-place scaling.
You can now try out Kubernetes v1.26 features with Azure Kubernetes Service (AKS).
Storage in-place data sharing with Microsoft Purview (public preview) is now supported for Azure Data Lake Gen2 and Blob storage accounts in East US, East US 2, North Europe, South Central US, West Central US, West Europe, West US, and West US 2.
Log all connections to your cache.
App Service Environment v3 now offers larger SKUs, enabling enterprises to run more demanding workloads more securely.
Encrypt Redis persistence files using customer-managed keys.
General availability enhancements and updates released for Azure SQL in late-March 2023
Now available in the East US 2, North Europe, and West US 2 regions, Azure Premium SSD v2 Disk Storage offers the most advanced general-purpose block storage solution with the best price-performance.
Azure Monitor alert rules can now be duplicated via the Azure portal.
Multi-Column Distribution (MCD) eases migrations, promotes faster query performance, and reduces data skew.
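To see why hashing several columns reduces skew, consider this hypothetical sketch. The engine's real distribution hash is internal, so this only illustrates the idea: a skewed single column lands every row in one bucket, while hashing multiple columns together spreads them out.

```python
import zlib

def distribution_bucket(row: dict, columns: list[str], n_buckets: int = 60) -> int:
    """Pick a distribution bucket by hashing the values of several columns together.

    Illustrative only: CRC32 stands in for the warehouse's internal hash function.
    """
    key = "|".join(str(row[c]) for c in columns)
    return zlib.crc32(key.encode()) % n_buckets
```

With rows that all share one region but have distinct order IDs, distributing on `["region"]` alone concentrates everything in a single bucket, while `["region", "order_id"]` spreads the rows across many buckets.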
Discovery of Java & ASP.NET web apps and assessment of ASP.NET web apps to Azure App Service code (native) on VMware, Hyper-V and Physical servers is now in public preview.
Purview DevOps policies can be used to assign permissions needed for performance monitoring and security auditing at scale and without direct connection to the database.
Azure Monitor is announcing the release of new Azure Policy built-in policies and initiatives for enabling platform logging of audit events for Azure services.
Use the new migration tool to migrate workloads from Single Server to Flexible Server on Azure Database for PostgreSQL, a managed service running the open-source Postgres database on Azure.
Unlock the full potential of the Azure ecosystem with a MongoDB compatible database that simply works. A new architecture designed to provide seamless integration and unbeatable performance.
TARGET AVAILABILITY: Q1 2023
Azure Files NFS v4.1 shares now support the nconnect mount option.
This article describes a new feature update for Azure Site Recovery.
Independently throttle your event streaming workloads using application groups and client application information at the namespace or event hub/Kafka topic level.
Azure Chaos Studio is now available in Brazil South region.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here (No credit card required and includes a forever free tier):