This week's roundup of all the cloud news.
Here's a cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday 15th April 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
AWS Updates and Releases
Source: aws.amazon.com
AWS DataSync can now copy data to and from Amazon FSx for OpenZFS
AWS DataSync now supports transferring files to and from Amazon FSx for OpenZFS, a fully managed service that offers highly reliable, scalable, performant, and feature-rich file storage built on the open-source OpenZFS file system. Using DataSync, you can easily and securely migrate your on-premises file or object storage to FSx for OpenZFS or perform ongoing transfers of your data between FSx for OpenZFS and your on-premises storage or AWS Storage services. You can also use DataSync to move data between FSx for OpenZFS file systems.
AWS DataSync is an online data movement service that provides you with a simple way to automate and accelerate copying data over the internet or AWS Direct Connect. DataSync securely and seamlessly connects to your Amazon FSx for OpenZFS file system, and copies files and folders together with their associated metadata. DataSync automates data transfers, including scheduling, monitoring, encryption, and data integrity validation. Support for Amazon FSx for OpenZFS complements existing DataSync capabilities to copy data between Network File System (NFS) shares, Server Message Block (SMB) shares, Hadoop Distributed File Systems (HDFS), self-managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, Amazon FSx for Windows File Server file systems, and Amazon FSx for Lustre file systems.
This new capability can be used in all regions where AWS DataSync and Amazon FSx for OpenZFS are available. Learn more by reading the DataSync User Guide and visiting the DataSync website.
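If you prefer to script the setup, here's a minimal boto3 sketch (assuming a reasonably current boto3 release) that registers an FSx for OpenZFS file system as a DataSync location and points a task at it. The ARNs, security group and subdirectory below are placeholders for illustration only.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Register the FSx for OpenZFS file system as a DataSync location.
# ARNs, security groups and the subdirectory are placeholders.
fsx_location = datasync.create_location_fsx_open_zfs(
    FsxFilesystemArn="arn:aws:fsx:us-east-1:123456789012:file-system/fs-0abc123def4567890",
    Protocol={"NFS": {"MountOptions": {"Version": "AUTOMATIC"}}},
    SecurityGroupArns=["arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0"],
    Subdirectory="/fsx/",
)

# Point a task at an existing on-premises NFS location (created separately)
# so DataSync copies data into the FSx for OpenZFS file system.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-onpremnfs",
    DestinationLocationArn=fsx_location["LocationArn"],
    Name="onprem-to-fsx-openzfs",
)
print(task["TaskArn"])
```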
AWS Fargate now delivers faster scaling of applications
AWS Fargate, the serverless compute engine for Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS), now enables customers to scale applications faster, improving performance and reducing wait time. We have made several improvements over the last year that enable you to scale applications up to 16X faster, making it easier to build and run applications at a larger scale on Fargate.
When using the Amazon ECS service scheduler for running web and other long-running applications, you will be able to launch up to 500 tasks in under a minute per service, 16X faster than last year. Previously you would have waited for nearly 15 minutes to scale an application to 500 tasks at steady state. When running one-time or periodic batch jobs using ECS RunTask API, Fargate now allows customer accounts to burst up to 100 On-Demand or Spot tasks, with a sustained task launch rate of 20 tasks per second, 20X faster than last year. For example, if you run a batch job with 1,200 On-Demand tasks, you can now launch your job in under a minute, while previously it would have taken about 20 minutes. Similarly, EKS Fargate customers will now observe up to 20X faster scaling when using the Platform Versions referenced in the release notes.
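To put the new burst rates in context, here's a rough boto3 sketch of launching a batch of Fargate tasks with the ECS RunTask API. The cluster, task definition and subnet IDs are placeholders, and RunTask accepts at most 10 tasks per call, hence the loop.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Launch a 100-task batch job on Fargate by calling RunTask in chunks of 10
# (the per-call maximum). Cluster, task definition and subnet are placeholders.
total_tasks, per_call = 100, 10
for _ in range(total_tasks // per_call):
    ecs.run_task(
        cluster="batch-cluster",
        taskDefinition="batch-job:1",
        launchType="FARGATE",
        count=per_call,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0abc123def4567890"],
                "assignPublicIp": "DISABLED",
            }
        },
    )
```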
Amazon FSx for NetApp ONTAP introduces a single Availability Zone deployment option
Amazon FSx for NetApp ONTAP now supports Single Availability Zone (AZ) deployments. Single-AZ file systems are designed for use cases that do not require the data resiliency model of a Multi-AZ file system, such as running development and test workloads, or storing secondary copies of data that is already stored on premises or in other AWS Regions. Single-AZ file systems provide a lower-cost storage option than Multi-AZ file systems for these use cases, while offering all the same data management capabilities and features.
Production workloads typically require high availability and durability across multiple AZs. FSx for ONTAP Multi-AZ file systems offer a simple solution for these workloads by providing storage with built-in replication across AZs.
However, with some workloads you may already have another copy of your data stored elsewhere (e.g., disaster recovery from on premises to AWS or across AWS Regions), and you don’t need your Amazon FSx file system to also offer Multi-AZ resiliency. Non-production workloads such as development and testing also don’t always require high availability across AZs. Single-AZ file systems offer a cost-optimized solution for these use cases by only replicating data within an AZ. Like Multi-AZ file systems, Single-AZ file systems also offer automatic backups that are stored across multiple AZs for high durability.
You can create Single-AZ file systems in all Regions where Amazon FSx for NetApp ONTAP is available. To learn more about this deployment option, please visit the FSx for NetApp ONTAP documentation, the FSx for NetApp ONTAP product page, and the AWS News Blog.
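As a rough illustration, the sketch below creates a Single-AZ FSx for ONTAP file system with boto3. The subnet ID and sizing values are placeholders; SINGLE_AZ_1 is the single-AZ deployment type introduced with this launch.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

# Create a Single-AZ FSx for ONTAP file system. Subnet ID and sizing values
# are placeholders chosen for illustration.
response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,           # GiB
    SubnetIds=["subnet-0abc123def4567890"],
    OntapConfiguration={
        "DeploymentType": "SINGLE_AZ_1",
        "ThroughputCapacity": 128,  # MB/s
    },
)
print(response["FileSystem"]["FileSystemId"])
```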
Announcing the Amazon Chime SDK for JavaScript 3.0 and React Components 3.0
The Amazon Chime SDK lets developers add intelligent real-time audio, video, and screen share to their web and mobile applications. The Amazon Chime SDK for JavaScript 3.0 and React Components 3.0 provides developers of web and Electron applications with a simplified developer experience for faster implementation and consistency across popular browsers and Chromium runtimes.
Each user’s audio and video devices are now fully decoupled from their connection to the WebRTC media session. A user can select their preferred devices in a preview window prior to joining the session, and the devices do not have to be programmatically reselected when joining the session. Once connected to the session, the user can switch devices instantly and without interrupting their WebRTC connection.
The 3.0 client libraries now use the standardized WebRTC metrics for all browsers. Safari 12 and Plan B Session Description Protocol (SDP) negotiation are no longer supported. Client event logging to destinations including Amazon CloudWatch is now available in both client libraries. The AWS SDK dependency has been updated from 2.x to 3.x to help avoid using multiple versions in a single application and to minimize application footprint.
Amazon EFS integration with the new and improved launch experience on the EC2 Console
Starting this week, you can automatically attach Amazon EFS file systems to new Amazon EC2 instances from the Configure storage section of the new and improved instance launch experience, making it simpler to use serverless, elastic file storage with your EC2 instances. This integration simplifies the process of configuring EC2 instances to mount EFS file systems at launch time with recommended mount options. In addition, from the Configure storage section of the launch experience, you can also create new EFS file systems using the recommended settings without having to leave the Amazon EC2 console.
Amazon EFS is a simple, serverless, set-and-forget, elastic file system that makes it easy to set up, scale, and cost-optimize file storage in the AWS Cloud. It is built to scale on demand to store petabytes of data without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS elastically scales to support tens of thousands of connections from any AWS compute service, including Amazon EC2 instances, serverless functions, and containers. Now, you can automatically mount your EFS file systems on newly created Amazon EC2 instances using the new and improved EC2 launch experience.
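Under the hood, the launch experience essentially injects user data that mounts the file system at boot with the recommended options. For comparison, a roughly equivalent scripted launch might look like the hedged boto3 sketch below, where the AMI, subnet and file system IDs are placeholders and the amazon-efs-utils package is assumed to be installable on the chosen AMI.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data that installs the EFS mount helper and mounts the file system at
# boot. The file system ID is a placeholder for illustration.
user_data = """#!/bin/bash
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs -o tls fs-0abc123def4567890:/ /mnt/efs
"""

# Launch an instance with that user data. AMI and subnet IDs are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # e.g. an Amazon Linux 2 AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc123def4567890",
    UserData=user_data,
)
```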
Amazon FSx integration with the new and improved launch experience on the EC2 Console
You can now automatically attach Amazon FSx file systems to new Amazon EC2 instances you create in the new EC2 launch experience, making it simple to use feature-rich and highly performant FSx shared file storage with your EC2 instances.
The Amazon FSx family of services makes it easy to launch, run, and scale shared storage powered by popular commercial and open-source file systems. Amazon FSx for NetApp ONTAP provides fully managed shared storage in the AWS Cloud with the popular data access and management capabilities of ONTAP. Amazon FSx for OpenZFS provides fully managed cost-effective shared storage powered by the popular OpenZFS file system. Now, you can automatically attach your FSx for ONTAP and FSx for OpenZFS file systems on newly created EC2 instances using the new launch experience on the EC2 Console.
To get started, simply select your FSx for ONTAP and FSx for OpenZFS file systems in the Storage section of the new EC2 launch experience. This capability is supported in all regions where these FSx services are available. See the Amazon EC2 documentation for more information.
Amazon Connect Wisdom now supports PrivateLink
You can now use AWS PrivateLink to privately access Amazon Connect Wisdom instances from your Amazon Virtual Private Cloud (Amazon VPC) without using public IPs and without requiring the traffic to traverse the internet.
AWS PrivateLink provides private connectivity among VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public internet. You can now manage your Amazon Connect Wisdom agent assistants without requiring an Internet Gateway in your VPC, instead using AWS PrivateLink with private IP connectivity and security groups to help meet your compliance requirements.
Amazon Connect Wisdom delivers the information contact center agents need to solve customer issues while they are actively speaking with customers. Contact centers often use knowledge management systems and document repositories that are separate from their agents' desktop application, which forces agents to spend valuable time searching for answers while speaking with customers, leading to poor customer experiences. With Amazon Connect Wisdom, agents receive content recommendations personalized for the call and can search across connected repositories from within their agent desktop to find answers.
To use AWS PrivateLink, create an interface VPC endpoint for Amazon Connect Wisdom in your VPC using the Amazon VPC console, SDK, or CLI. You can also access the VPC endpoint from on-premises environments or from other VPCs using AWS VPN, AWS Direct Connect, or VPC Peering.
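If you script this with boto3, a minimal sketch might look like the following. The VPC, subnet and security group IDs are placeholders, and rather than hard-coding the Wisdom endpoint service name (which varies by region), the sketch looks it up from DescribeVpcEndpointServices.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up the Wisdom endpoint service name rather than hard-coding it.
# The "wisdom" substring filter is an assumption; inspect the returned names.
services = ec2.describe_vpc_endpoint_services()["ServiceNames"]
wisdom_services = [s for s in services if "wisdom" in s]

# Create the interface endpoint. VPC, subnet and security group IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0abc123def4567890",
    ServiceName=wisdom_services[0],
    SubnetIds=["subnet-0abc123def4567890"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```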
Amazon Connect Wisdom is available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (London) AWS regions.
Amazon WorkSpaces launches new Graphics G4dn bundles to improve performance and optimize costs
Amazon WorkSpaces is introducing two new graphics bundles based on the EC2 G4dn family: Graphics.g4dn and GraphicsPro.g4dn. These bundles let you run graphics- and compute-intensive workloads on desktops in the cloud, providing a cost-effective option for graphics applications optimized for NVIDIA GPUs and NVIDIA libraries such as CUDA, cuDNN, OptiX, and the Video Codec SDK. They come with the NVIDIA T4 Tensor Core GPU featuring multi-precision Turing Tensor Cores and RT Cores, AWS custom second-generation Intel® Xeon® Scalable (Cascade Lake) processors, and local NVMe storage designed for applications that require fast access to locally stored data.
The Graphics.g4dn bundle provides 4 vCPUs, 16 GB of RAM, 16 GB of video memory, 125 GB of temporary NVMe SSD local instance store, and a minimum of 100 GB of persistent storage for the user and root volumes. It is ideal for customers seeking low-cost GPU-enabled virtual desktops to run mainstream graphics-intensive applications, such as engineering, design, and architectural applications.
The GraphicsPro.g4dn bundle offers 16 vCPUs, 64 GB of RAM, 16 GB of video memory, 225 GB of temporary NVMe SSD local instance store, and a minimum of 100 GB of persistent storage for the user and root volumes. It enables high-end workstation workflows on WorkSpaces and is ideal for media production, seismic visualization, GIS data processing, data intelligence, small-scale ML model training, and ML inference.
You can deploy Graphics.g4dn or GraphicsPro.g4dn bundles with the Windows 10 desktop experience powered by Windows Server 2019, or you can bring your own Windows licenses. You can launch the new graphics bundles by selecting their names in the Amazon WorkSpaces management console, or through the Amazon WorkSpaces APIs, and use them with Amazon WorkSpaces client applications over the PCoIP streaming protocol. The new graphics bundles will be available in US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland, London), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, and Tokyo), and South America (São Paulo).
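For API users, here's a hedged boto3 sketch that finds a G4dn bundle and launches a WorkSpace from it. The directory ID and user name are placeholders, and the name filter is only an assumption about how the new bundles are labelled, so inspect the returned bundle names before relying on it.

```python
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Find the new G4dn bundles among the Amazon-owned bundles. The name filter
# is an assumption; check the returned names to pick the exact bundle.
bundles = workspaces.describe_workspace_bundles(Owner="AMAZON")["Bundles"]
g4dn_bundles = [b for b in bundles if "g4dn" in b["Name"].lower()]

# Launch a WorkSpace from the chosen bundle. Directory ID and user name are placeholders.
workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-0123456789",
            "UserName": "alice",
            "BundleId": g4dn_bundles[0]["BundleId"],
        }
    ]
)
```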
AWS Single Sign-On is now HIPAA eligible
This week, AWS announced that AWS Single Sign-On (AWS SSO) is now HIPAA (Health Insurance Portability and Accountability Act) eligible. AWS SSO is where customers create, or connect, workforce identities and manage their access centrally across AWS accounts. HIPAA eligibility means that customers subject to HIPAA - including health insurance companies, healthcare providers, healthcare clearinghouses, government programs that pay for healthcare, military and veterans' health programs, as well as their associates - can now use AWS SSO for authentication and authorization of users who configure or manage AWS workloads that store, process or transmit Protected Health Information (PHI) and users who sign into applications integrated with AWS SSO that utilize PHI.
If you have a HIPAA Business Associate Addendum (BAA) in place with AWS, you can now start using AWS SSO for HIPAA eligible workloads or use cases. With just a few clicks in the AWS SSO management console you can create users in AWS SSO, or connect your existing identity source, and configure permissions that grant your users access across AWS accounts and hundreds of pre-integrated cloud applications. For information and best practices about configuring AWS HIPAA Eligible Services, see the Architecting for HIPAA Security and Compliance on Amazon Web Services Whitepaper.
Amazon Personalize now supports resource tagging
This week AWS announced support for tagging of Amazon Personalize resources to help simplify organization and cost management of resources. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required.
Tags are labels in the form of key-value pairs that can be attached to individual Amazon Personalize resources to search and filter resources or to allocate costs. For example, if you have multiple campaigns linked to different use cases on your website, you can set the key to "Website" and the value to "Home Page" or "Details Page" to understand the individual spend for each of the two campaigns.
This functionality also allows customers with multi-tenant deployments to track spend among their end customers. Tagging is available for several Amazon Personalize resources, such as dataset groups, solutions, campaigns, recommenders, import jobs, batch inference jobs, batch segment jobs and other resources.
For a complete list of the supported resources and to learn more on how to tag your Amazon Personalize resources, see the Amazon Personalize Developer Guide. Tagging support for Amazon Personalize resources is available in all Amazon Personalize regions.
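As a quick illustration of the website example above, the boto3 sketch below tags an existing campaign and then lists its tags. The campaign ARN is a placeholder, and the field names follow the Personalize tagging API as documented at the time of writing.

```python
import boto3

personalize = boto3.client("personalize", region_name="us-east-1")

campaign_arn = "arn:aws:personalize:us-east-1:123456789012:campaign/homepage-recs"  # placeholder

# Attach the example key/value pair to the campaign.
personalize.tag_resource(
    resourceArn=campaign_arn,
    tags=[{"tagKey": "Website", "tagValue": "Home Page"}],
)

# Audit the tags on the resource (activate the "Website" tag in the Billing
# console to break out spend per page in Cost Explorer).
print(personalize.list_tags_for_resource(resourceArn=campaign_arn)["tags"])
```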
Amazon VPC Prefix Lists are now available in the Asia Pacific (Osaka) region
Starting this week, Amazon Virtual Private Cloud (VPC) customers can create their own Prefix Lists in the AWS Asia Pacific (Osaka) region. A prefix list is a collection of CIDR blocks that can be used to configure VPC security groups and route tables and can be shared with other AWS accounts using Resource Access Manager (RAM). Customers can easily audit and apply prefix lists across all their accounts to maintain a consistent security posture and routing behavior.
VPC security groups and route tables are used to control access and routing policies. Customers often have a common set of CIDR blocks for security group and route table configurations. Prefix lists allow customers to group multiple CIDR blocks into a single object, and use it as a reference in their security groups or route tables. This makes it easier for customers to roll out changes and maintain consistency in security groups and route tables across multiple VPCs and accounts.
With this region expansion, prefix lists are now available in all AWS Regions except Asia Pacific (Jakarta), AWS GovCloud (US-East), AWS GovCloud (US-West), the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. There is no additional charge to use prefix lists.
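For example, the following boto3 sketch creates a customer-managed prefix list in the Osaka region and references it in a security group rule instead of repeating the CIDR blocks. The CIDRs, names and security group ID are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-3")  # Asia Pacific (Osaka)

# Group a common set of CIDR blocks into a managed prefix list.
prefix_list = ec2.create_managed_prefix_list(
    PrefixListName="corp-office-ranges",
    AddressFamily="IPv4",
    MaxEntries=10,
    Entries=[
        {"Cidr": "10.10.0.0/16", "Description": "Head office"},
        {"Cidr": "10.20.0.0/16", "Description": "Branch office"},
    ],
)
pl_id = prefix_list["PrefixList"]["PrefixListId"]

# Reference the prefix list in a security group rule instead of repeating CIDRs.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "PrefixListIds": [{"PrefixListId": pl_id, "Description": "Corp ranges"}],
        }
    ],
)
```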
AWS AI & ML Scholarship Program opens applications for underrepresented and underserved students
The AWS Artificial Intelligence (AI) and Machine Learning (ML) Scholarship program, in collaboration with Intel and Udacity, opens applications for underserved and underrepresented students in tech globally to gain hands-on learning, tailored mentoring, and up to two of the 2,500 Udacity Nanodegree scholarships awarded annually.
The AWS AI & ML Scholarship program brings ML concepts to life by leveraging AWS DeepRacer Student to get hands-on in learning how to train ML models that drive a virtual race car. AWS DeepRacer Student teaches foundational ML concepts by providing educational content centered on ML fundamentals. Students who successfully complete learning module quizzes and meet a designated lap time performance will be prequalified and receive a unique code to complete their scholarship application through Udacity.
Starting this week, applicants can log into AWS DeepRacer Student, opt into the scholarship program, and begin tracking scholarship prerequisites on the home screen. Up to 2,000 students will be awarded a scholarship for the Udacity AI Programming with Python Nanodegree, a four-month course in a collaborative virtual class environment. Throughout the course, students will receive weekly support from Udacity mentors in small group sessions and monthly mentoring events with AWS and Intel experts on topics ranging from career development to networking and more. The top 500 performing students in the first Nanodegree will be eligible to earn a second scholarship for an advanced Udacity Nanodegree specifically curated for the AI & ML scholarship recipients. Students in the advanced Nanodegree will also receive one-on-one mentoring from AWS and Intel experts to further prepare them in their career journey.
Students over the age of 16 who are currently enrolled in high school or higher education institutions around the globe are encouraged to apply today at awsaimlscholarship.com.
Google Cloud Releases and Updates
Source: cloud.google.com
Access Transparency
Access Transparency supports Secret Manager in GA stage. For the complete list of services that Access Transparency supports, see Supported services.
Anthos Clusters on AWS
Anthos Clusters on AWS now supports Kubernetes versions 1.22.8-gke.200 and 1.21.11-gke.100. For more information, see the open source release notes for Kubernetes 1.22.8 and Kubernetes 1.21.11.
Kubernetes 1.22 removes support for several deprecated v1beta1 APIs. Before upgrading your clusters to v1.22, you must upgrade your workloads to use the stable v1 APIs and confirm their compatibility with v1.22. For more information, see Kubernetes 1.22 Deprecated APIs.
When you create a new cluster using Kubernetes version 1.22, you can now configure custom logging parameters.
As a preview feature, you can now choose Windows as your node pool image type when you create node pools with Kubernetes version 1.22.8.
You can now set the autoscaler's minimum node count to zero.
This release of Anthos Clusters on AWS improves your ability to update your cluster configuration, including:
- control plane security group IDs
- control plane proxy
- control plane and node pool SSH
- node pool security group IDs
- node pool root volume
- node pool encryption
- node pool proxy
You can now view the most common asynchronous cluster and node pool boot errors in the long-running operation error field.
As a preview feature, you can now configure nodes to be dedicated hosts.
To create new 1.22 clusters, you need to add the ec2:GetConsoleOutput permission to your Anthos Multi-Cloud API role.
Anthos clusters on Azure
Anthos Clusters on Azure now supports Kubernetes versions 1.22.8-gke.200 and 1.21.11-gke.100. For more information, see the open source release notes for Kubernetes 1.22.8 and Kubernetes 1.21.11.
Kubernetes 1.22 removes support for several deprecated v1beta1 APIs. Before upgrading your clusters to v1.22, you must upgrade your workloads to use the stable v1 APIs and confirm their compatibility with v1.22. For more information, see Kubernetes 1.22 Deprecated APIs.
When you create a new cluster using Kubernetes version 1.22, you can now configure custom logging parameters.
As a preview feature, you can now choose Windows as your node pool image type when you create node pools with Kubernetes version 1.22.8.
You can now set the autoscaler's minimum node count to zero.
This release of Anthos Clusters on Azure adds the ability to update your:
- control plane and node pool VM size
- cluster annotations
- Azure admin users
- control plane root volume size
You can now view the most common asynchronous cluster and node pool boot errors in the long-running operation error field.
Apigee Connectors
Preview release of new Connectors for Apigee
On April 12, 2022, GCP released the preview version of new Connectors for Apigee.
A number of new connectors are now available for Apigee; see the Apigee release notes for the full list.
BigQuery
Starting in July 2022, the projects.list API method will return results in unsorted order. Currently, the API returns the results in sorted order, although this is not a documented behavior of the API.
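If your code relied on the previously sorted responses, sorting client-side is a simple workaround. A minimal sketch using the google-cloud-bigquery Python client:

```python
from google.cloud import bigquery

client = bigquery.Client()

# projects.list will no longer guarantee ordering, so sort client-side
# if your code depended on the previously sorted responses.
projects = sorted(client.list_projects(), key=lambda p: p.project_id)
for project in projects:
    print(project.project_id, project.friendly_name)
```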
Cloud Database Migration Service
Database Migration Service now supports migrating Oracle workloads into Cloud SQL for PostgreSQL. Click here to access the documentation.
Cloud Logging
You can now add indexed LogEntry fields to your Cloud Logging buckets to make querying your logs data faster.
Cloud SQL for MySQL
Customer-managed encryption key (CMEK) organization policy constraints are now available in Preview.
- constraints/gcp.restrictNonCmekServices allows you to control which resources require the use of CMEK.
- constraints/gcp.restrictCmekCryptoKeyProjects allows you to control the projects from which a Cloud KMS key can be used to validate requests.
You can use both constraints together to enforce the use of CMEK from allowed projects.
To learn more, see Customer-managed encryption keys (CMEK) organization policies. To add CMEK organization policies now, see Add Cloud SQL organization policies.
Cloud SQL for PostgreSQL
Customer-managed encryption key (CMEK) organization policy constraints are now available in Preview.
- constraints/gcp.restrictNonCmekServices allows you to control which resources require the use of CMEK.
- constraints/gcp.restrictCmekCryptoKeyProjects allows you to control the projects from which a Cloud KMS key can be used to validate requests.
You can use both constraints together to enforce the use of CMEK from allowed projects.
To learn more, see Customer-managed encryption keys (CMEK) organization policies. To add CMEK organization policies now, see Add Cloud SQL organization policies.
Cloud SQL for PostgreSQL supports in-place major version upgrades in Preview. You can upgrade your instance's major version to a more recent version. For more information, see Upgrade the database major version in-place.
Cloud SQL for SQL Server
Customer-managed encryption key (CMEK) organization policy constraints are now available in Preview.
- constraints/gcp.restrictNonCmekServices allows you to control which resources require the use of CMEK.
- constraints/gcp.restrictCmekCryptoKeyProjects allows you to control the projects from which a Cloud KMS key can be used to validate requests.
You can use both constraints together to enforce the use of CMEK from allowed projects.
To learn more, see Customer-managed encryption keys (CMEK) organization policies. To add CMEK organization policies now, see Add Cloud SQL organization policies.
Cloud SQL for SQL Server supports in-place upgrades in Preview. You can upgrade your instance's major version or edition. For more information, see Upgrade the database major version in-place.
Compute Engine
Tau T2D VMs are now available in the following regions and zones:
- Las Vegas, NV (us-west4-a,b)
- São Paulo, Brazil, South America (southamerica-east1-a,b,c)
- St. Ghislain, Belgium (europe-west1-c)
N2 general-purpose VMs are available in Salt Lake City, UT (us-west3-a,b,c).
See VM instance pricing for details.
Config Connector
Config Connector version 1.81.0 is now available.
- Added support for the ApigeeEnvironment resource.
- Added field spec.cluster[].autoscalingConfig to the BigtableInstance resource.
- Added field spec.edgeSecurityPolicy to the ComputeBackendBucket resource.
- Added field spec.type to the ComputeSecurityPolicy resource.
- Added field spec.schedule.repeatInterval to the StorageTransferJob resource.
Fixed a bug introduced in version 1.62.0 that prevented list fields from being set to empty lists.
Dataproc
Announcing the General Availability (GA) release of Dataproc on GKE, which allows you to execute Big Data applications using the Dataproc jobs API on GKE clusters.
The dataproc:dataproc.performance.metrics.listener.enabled cluster property, which is enabled by default, listens on port 8791 on all master nodes to extract performance-related Spark telemetry metrics. The metrics are published to the Dataproc service, which uses them to set better defaults and improve the service. To opt out of this feature, set dataproc:dataproc.performance.metrics.listener.enabled=false when creating a Dataproc cluster.
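For reference, here's a hedged google-cloud-dataproc sketch that creates a cluster with the listener opted out by setting the property at creation time. The project, region and cluster name are placeholders.

```python
from google.cloud import dataproc_v1

# Use the regional Dataproc endpoint for the target region.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": "us-central1-dataproc.googleapis.com:443"}
)

# Cluster definition with the metrics listener disabled via a cluster property.
cluster = {
    "cluster_name": "example-cluster",
    "config": {
        "software_config": {
            "properties": {
                "dataproc:dataproc.performance.metrics.listener.enabled": "false"
            }
        }
    },
}

operation = client.create_cluster(
    project_id="my-project", region="us-central1", cluster=cluster
)
operation.result()  # wait for cluster creation to finish
```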
New sub-minor versions of Dataproc images:
1.5.62-debian10, 1.5.62-ubuntu18, and 1.5.62-rocky8
2.0.36-debian10, 2.0.36-ubuntu18, and 2.0.36-rocky8
Changed the owner of /usr/lib/knox/conf/gateway-site.xml from root:root to knox:knox.
Fixed an issue in which the Dataproc autoscaler would sometimes try to scale down a cluster by more than one thousand secondary worker nodes at one time. Now, the autoscaler scales down at most one thousand nodes at a time; in cases where it previously would have scaled down more, it scales down at most one thousand nodes and writes a note of this occurrence to the autoscaler log.
Fixed bugs that could cause Dataproc to delay marking a job cancelled.
Eventarc
Eventarc is now available in the following regions:
- australia-southeast2 (Melbourne, Australia)
- northamerica-northeast2 (Toronto, Ontario, North America)
- southamerica-west1 (Santiago, Chile, South America)
Filestore
You can now use customer-managed encryption keys (CMEK) to protect data at rest in Filestore's High Scale SSD Tier instances.
- High Scale SSD instances stop and restart automatically when the state of an associated key changes
- This feature is currently in Preview
GKE
Egress NAT policy to configure IP masquerade is now generally available on GKE Autopilot clusters with Dataplane v2 in versions 1.22.7-gke.1500+ or 1.23.4-gke.1600+. For configuration examples of Egress NAT policy, see Egress NAT Policy documentation.
- Version 1.20.15-gke.2500 is now the default version in the Stable channel.
- The following versions are now available in the Stable channel:
Storage Transfer Service
Storage Transfer Service now offers a predefined role to simplify permission assignment to transfer agents. The roles/storagetransfer.transferAgent role grants a minimum set of permissions required for the service to communicate with agents and eliminates the need to assign each permission individually.
The role should be granted to the user account or service account being used to authenticate the agents. See the On-premises agent account documentation for more details.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
In development: Azure Media Services low-latency live streaming
Azure Media Services is announcing the preview of low latency HLS with glass-to-glass latency as low as 3 seconds with support for automatic transcriptions and DRM.
Generally available: Azure SQL Migration extension for Azure Data Studio
Assess, receive Azure recommendations, and then migrate your SQL Server databases using the Azure SQL Migration extension in Azure Data Studio.
General availability: Azure Cosmos DB autoscale RU/s entry point is 4x lower
Set autoscale on your database or container with a new scale range of 100 RU/s – 1000 RU/s.
Public preview: Azure Cosmos DB API for MongoDB unique index re-indexing
Unique indexes can now be created in existing collections with data in the API for MongoDB.
General availability: Zone redundancy for Azure SQL Database general purpose tier
Make your general purpose provisioned and serverless Azure SQL databases and elastic pools more resilient to catastrophic datacenter outages, without any changes to application logic, by selecting zone redundancy.
Azure SQL—Public preview updates for mid-April 2022
Public preview enhancements and updates released for Azure SQL.
Public preview: Azure SQL Database Hyperscale reverse migration to general purpose tier
Easily move your previously migrated databases on Azure SQL Database Hyperscale back to the general purpose tier.
Public preview: Azure Monitor agent supports custom and IIS logs
The Azure Monitor agent now supports new collection types that extend beyond the standard data available today.
Generally available: Azure Static Web Apps support for private endpoints
Azure Static Web Apps support for private endpoints is now generally available with a 99.95% SLA for production and staging environments.
Public preview: Metrics and alerts support in Azure Container Apps
You can now view metrics for your Azure Container Apps and set up alerts for metric thresholds and Log Analytics queries.
Public preview: Health probes support in Azure Container Apps
You can now use readiness, liveness, and startup probes in your Azure Container Apps.
Public preview: Visual Studio support for Azure Container Apps
You can now publish .NET Core applications to Azure Container Apps from Visual Studio 2022 Preview 2.
Public preview: Visual Studio Code extension for Azure Container Apps
You can now deploy and manage Azure Container Apps from Visual Studio Code.
Generally available: IBM WebSphere on Azure with evaluation licensing
Use evaluation licenses, instead of full entitlements, for WebSphere on Azure Virtual Machines for trial and prototyping.
Public preview: Managed identities support in Azure Container Apps
You can now use managed identities with Azure Container Apps instead of passing secrets in connection strings.
Public preview: Support for Windows clients using Azure Monitor agent
Using the new Windows client installer launched today, you can now get the benefits of the new agent and data collection rules on your Windows 10 and 11 client devices.
Generally available: Java 17 and Tomcat 10.0 on Azure App Service
Use new supported versions of Java and Tomcat to run web applications on Azure App Service.
Generally available: new result set grid in Azure Monitor log analytics
The new result set grid in log analytics boasts a brand-new experience, introduces numerous new features, and is much easier to use!
Public preview: Azure Monitor activity log insights
Azure Monitor activity log insights let you view information about changes to resources and resource groups in a subscription.
Public preview: Redesign of alerts summary (landing) page
The summary (landing) page for alerts has been simplified to improve usability and actionability.
Have you tried Hava automated diagrams for AWS, Azure and GCP? Reclaim your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here: