This week at Hava we are pleased to announce that Hava is now SOC 2 compliant. We also shipped a minor release. Full details are in the LinkedIn newsletter.
Here's the weekly cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday, March 17th 2023.
All the latest Hava news can be found on our LinkedIn Newsletter.
Of course we'd love to keep in touch at the other usual places. Come and say hello on:
AWS Updates and Releases
Amazon GuardDuty broadens threat detection coverage to help you protect your data residing in Amazon Aurora databases. GuardDuty RDS Protection is designed to profile and monitor access activity to Aurora databases in your AWS account without impacting database performance.
Using tailored machine learning models and integrated threat intelligence, GuardDuty can detect potential threats such as high severity brute force attacks, suspicious logins, and access by known threat actors.
Current GuardDuty users, including those in the public preview, can activate RDS protection with a single step in the GuardDuty console and, leveraging AWS Organizations, across all accounts in an organization.
If you’re new to GuardDuty, you will have RDS Protection turned on by default. All GuardDuty users can try RDS Protection at no cost with a 30-day free trial. For a full list of Regions where RDS Protection is available, visit Region-specific feature availability.
Amazon GuardDuty is a threat detection service that continuously monitors for malicious behavior to help protect your AWS resources, including your AWS accounts, access keys, EC2 instances, EKS clusters, data stored in S3, and now Aurora databases. Aurora is a fully managed MySQL- and PostgreSQL-compatible relational database built for the cloud as part of the Amazon Relational Database Service (RDS).
This week, AWS announced the general availability of AWS Chatbot for Microsoft Teams, which enables AWS customers to securely monitor their AWS resources and respond to operational events in their AWS infrastructure from Microsoft Teams channels, where their entire team can quickly review, collaboratively diagnose, and securely run common DevOps commands.
AWS Customers can now use AWS Chatbot for Microsoft Teams to implement ChatOps for AWS in their Microsoft Teams channels. Customers can set up Microsoft Teams channels to receive notifications from any of the 200+ AWS services that publish to Amazon CloudWatch or Amazon Simple Notification Service.
AWS Customers can retrieve diagnostic information, configure AWS environments, and take actions to remediate incidents without switching away from their Teams channels.
To enable notifications, customers subscribe their Microsoft Teams channels to SNS topics or CloudWatch Events in AWS Chatbot. When an alarm or an event occurs, AWS Chatbot delivers the notification directly to the subscribed Microsoft Teams channel.
Microsoft Teams channel members can also run AWS CLI commands to diagnose issues and configure AWS resources by tagging the AWS Chatbot in the channel.
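To make the notification path concrete, the sketch below builds the parameters for a CloudWatch alarm whose action is an SNS topic that a Teams channel is subscribed to via AWS Chatbot. The alarm name, threshold, and topic ARN are hypothetical placeholders, and the actual API call is left as a comment so the snippet stays self-contained:

```python
# Sketch: a CloudWatch alarm that publishes to an SNS topic which an
# AWS Chatbot Microsoft Teams channel configuration is subscribed to.
# All names and the topic ARN below are hypothetical placeholders.

def build_alarm_params(topic_arn: str) -> dict:
    """Parameters for CloudWatch's PutMetricAlarm; the AlarmActions
    entry is what routes the notification to the SNS topic that
    Chatbot forwards into the Teams channel."""
    return {
        "AlarmName": "high-cpu-demo",           # hypothetical alarm name
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Statistic": "Average",
        "Period": 300,                          # evaluate over 5-minute windows
        "EvaluationPeriods": 2,
        "Threshold": 80.0,                      # percent CPU
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],            # SNS topic Chatbot is subscribed to
    }

params = build_alarm_params("arn:aws:sns:us-east-1:123456789012:chatbot-alerts")
# With credentials configured, this would create the alarm:
# boto3.client("cloudwatch").put_metric_alarm(**params)
```

Once an alarm like this fires, AWS Chatbot delivers the SNS message to every Teams channel subscribed to the topic, so no per-alarm Chatbot configuration is needed.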
AWS Cloud Map is now available in the Asia Pacific (Melbourne) AWS Region. AWS Cloud Map is a cloud resource discovery service. With AWS Cloud Map, you can define custom names for your application resources, such as Amazon Elastic Container Service (Amazon ECS) tasks, Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon DynamoDB tables, or other cloud resources.
You can then use these custom names to discover the location and metadata of cloud resources from your applications using AWS SDK and authenticated API queries.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting this week, AWS customers can use the Amazon Kendra Microsoft SharePoint Cloud Connector to index and search documents from SharePoint Cloud.
Critical information can be scattered across multiple data sources in an enterprise, including collaboration platforms like Microsoft SharePoint. Amazon Kendra customers can now use the Kendra Microsoft SharePoint Cloud Connector to index documents and search for information across this content using Kendra Intelligent Search.
The Amazon Kendra Microsoft SharePoint Cloud connector is available in all AWS regions where Amazon Kendra is available.
AWS Storage Gateway expands availability to the AWS Asia Pacific (Hyderabad) and AWS Asia Pacific (Melbourne) Regions enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads.
AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud.
This week, we are announcing the general availability of Amazon Linux 2023 (AL2023), our new Linux-based operating system for AWS that is designed to provide a secure, stable, high-performance environment to develop and run your cloud applications. AL2023 provides seamless integration with various AWS services and development tools and offers optimized performance for Amazon Elastic Compute Cloud (EC2) Graviton-based instances and AWS Support at no additional licensing cost.
Starting with AL2023, a new Amazon Linux major release will be available every 2 years. This cadence provides you with a more predictable release cycle and up to 5 years of support, making it easier for you to plan your upgrades.
AL2023 offers several improvements over Amazon Linux 2 (AL2). For example, AL2023 takes a security-by-default approach to help improve your security posture with preconfigured security policies, SELinux in permissive mode and IMDSv2 enabled by default, and the availability of kernel live patching.
With deterministic upgrades through versioned repositories, you can lock to a specific version of the Amazon Linux package repository, giving you control over how and when you absorb updates. With this capability, you can adhere to operational best practices more efficiently by ensuring consistency between package versions and updates across your environment. For a full comparison, see Comparing Amazon Linux 2 and Amazon Linux 2023.
Starting today, the general-purpose Amazon Elastic Compute Cloud (Amazon EC2) M6a instances and compute-optimized Amazon EC2 C6a instances are available in South America (Sao Paulo) region. These instances are powered by third-generation AMD EPYC processors with an all-core turbo frequency of up to 3.6 GHz, and they are built on the AWS Nitro System.
M6a instances deliver up to 35% better price performance than comparable M5a instances, while C6a instances deliver up to 15% better price performance than comparable C5a instances. Both instances offer 10% lower cost than comparable x86-based EC2 instances.
With these additional regions, M6a and C6a instances are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), and South America (Sao Paulo).
Amazon S3 Multi-Region Access Points now support datasets that are replicated across multiple AWS accounts. Cross-account Multi-Region Access Points simplify object storage access for applications that span both AWS Regions and accounts, avoiding the need for complex request routing logic in your applications.
They provide a single global endpoint for your multi-region applications, and dynamically route S3 requests based on policies that you define. This helps you to more easily implement multi-region resilience, latency-based routing, and active-passive failover, even when data is stored in multiple accounts.
Many customers use S3 Replication to replicate data to a bucket in a different AWS account, providing additional protection against accidental or unauthorized data deletion. S3 Multi-Region Access Points now support these multi-account configurations.
To get started, first set up cross-account S3 Replication. This will automatically maintain a replica of your data in one or more AWS Regions. Second, create a Multi-Region Access Point. The easiest way to do this is through the S3 console, which provides a step-by-step setup process, as well as an overview of your replication configuration and metrics.
Finally, update the bucket policy for any bucket that is in a different AWS Account than your Multi-Region Access Point to allow retrieval requests.
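As a rough sketch of that last step, the snippet below builds a destination-bucket policy that grants the account owning the Multi-Region Access Point read access to the bucket. The account ID and bucket name are hypothetical, and the exact actions and condition keys your setup needs should be checked against the S3 documentation:

```python
import json

# Sketch of a cross-account destination-bucket policy for a
# Multi-Region Access Point. The account ID and bucket name are
# hypothetical; consult the S3 documentation for the exact statement
# your replication setup requires.

def build_bucket_policy(bucket: str, mrap_owner_account: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowMrapAccountRead",
            "Effect": "Allow",
            # Delegate access to the account that owns the Multi-Region
            # Access Point (hypothetical account ID).
            "Principal": {"AWS": f"arn:aws:iam::{mrap_owner_account}:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",        # bucket-level (ListBucket)
                f"arn:aws:s3:::{bucket}/*",      # object-level (GetObject)
            ],
        }],
    }

policy = build_bucket_policy("replica-bucket", "123456789012")
policy_json = json.dumps(policy, indent=2)  # ready to attach via PutBucketPolicy
```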
We are pleased to announce the alpha release of Mountpoint for Amazon S3, a new open source file client that delivers high throughput access, lowering compute costs for data lakes on Amazon S3. Mountpoint for Amazon S3 is a file client that translates local file system API calls to S3 object API calls like GET and LIST.
It is ideal for read-heavy data lake workloads that process petabytes of data and need Amazon S3’s high elastic throughput to scale up and down across thousands of instances.
Mountpoint for Amazon S3 supports random and sequential read operations on files, and listing files and directories. For details on supported file system operations with this release, read the documentation.
Mountpoint for Amazon S3 is an open source project. We are making the alpha release of Mountpoint for Amazon S3 available to the community to collect feedback early and incorporate your input into the design and implementation.
AWS welcomes your contributions and your feedback on the roadmap, which outlines the plan for adding additional features to Mountpoint for Amazon S3. To get started with Mountpoint for Amazon S3, visit the GitHub page and the technical blog.
Amazon S3 on Outposts now supports S3 Replication on Outposts. This extends S3’s fully managed approach to replication to S3 on Outposts buckets, helping you meet data residency and data redundancy requirements. With local S3 Replication on Outposts, you can create and configure replication rules to automatically replicate your S3 objects to another Outpost, or to another bucket on the same Outpost.
During replication, your S3 on Outposts objects are always sent over your local gateway (LGW), and objects do not travel back to the AWS Region. S3 Replication on Outposts provides an easy and flexible way to automatically replicate data within a specific data perimeter to address your data redundancy and compliance requirements.
With S3 Replication on Outposts, you can replicate objects to a single Outpost destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Outposts or within the same Outpost as the source bucket.
As with S3 Replication in AWS Regions today, you can choose to replicate all the objects in your bucket, or filter to select objects based on prefix, object tags, or a combination of both. S3 Replication on Outposts also provides detailed metrics and notifications to monitor the status of your object replication.
You can monitor replication progress by tracking bytes pending, operations pending, and replication latency between your source and destination Outposts buckets using Amazon CloudWatch. You can also set up EventBridge Notifications to receive replication failure notifications to quickly diagnose and correct configuration issues.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra Confluence Server Connector to index and search documents from Confluence Server.
Critical information can be scattered across multiple data sources in an enterprise, including collaboration platforms like Confluence Server. Amazon Kendra customers can now use the Kendra Confluence Server Connector to index documents and search for information across this content using Kendra Intelligent Search.
We are excited to announce the launch of the updated Amazon GameLift console experience to help customers more intuitively and efficiently manage and scale their game servers. Amazon GameLift is a fully managed solution that allows you to manage and scale dedicated game servers for session-based multiplayer games.
With this release, customers can more easily monitor and manage their game server instances and settings from a single interface.
The updated experience includes redesigned navigation that makes it easier to manage Amazon GameLift features such as builds, scripts, fleets, and FlexMatch rules, and adds over 80 new info panels to help developers configure their Amazon GameLift resources without ever needing to leave the console.
With the extended Amazon CloudWatch integration, customers can create their own dashboards and custom views such as instance performance, utilization/capacity, and player sessions.
AWS added new pages to the console to provide customers with information on their game server groups, instance pricing history, and samples to set up Amazon GameLift resources and FlexMatch rule sets. The updated console experience gives game developers more visibility, flexibility, and speed when it comes to managing their Amazon GameLift resources.
Application Auto Scaling customers can now use arithmetic operations and mathematical functions to customize the metrics used with Target Tracking policies. Target Tracking works like a thermostat - it continuously changes the capacity of the scaled resource to maintain the scaling metric at the customer-defined target level.
Customers can use arithmetic operators (such as +, -, /, and *) and mathematical functions (such as Sum and Average) to easily create custom metrics based on existing CloudWatch metrics. Application Auto Scaling offers support to automatically scale capacity of 13 supported AWS services, including Amazon Elastic Container Service (ECS) services.
Today’s release makes it easier and cheaper to configure Target Tracking with custom metrics.
Target Tracking offers out-of-the-box support for the most common metrics, such as CPU utilization of ECS services. In some cases, customers want to scale based on their own application-specific metrics, such as the number of requests served, or based on metrics published by other AWS services, such as Amazon Simple Queue Service (SQS).
Until today, you would have had to create and pay for custom CloudWatch metrics for Target Tracking to consume. Now, if the custom metric is a simple function of other existing metrics, you can use CloudWatch Metric Math in the Target Tracking policy, instead of publishing (and paying for) a new custom CloudWatch metric.
For example, to define a custom metric representing SQS messages per task in an ECS service, you could take the existing SQS queue-depth metric and divide it by the service's capacity directly in the Target Tracking policy configuration using Metric Math.
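A sketch of what such a policy configuration could look like for the Application Auto Scaling PutScalingPolicy API is below. The queue, cluster, and service names are hypothetical, and the metric namespaces assume CloudWatch's standard SQS metrics and Container Insights; adjust them to whatever metrics your account actually publishes:

```python
# Sketch of a TargetTrackingScalingPolicyConfiguration that uses
# CloudWatch Metric Math ("m1 / m2") to scale an ECS service on SQS
# messages per task. All resource names are hypothetical.

def build_tracking_config(queue: str, cluster: str, service: str) -> dict:
    return {
        "TargetValue": 10.0,  # aim for ~10 messages per running task
        "CustomizedMetricSpecification": {
            "Metrics": [
                {   # m1: number of messages waiting in the queue
                    "Id": "m1",
                    "MetricStat": {
                        "Metric": {
                            "Namespace": "AWS/SQS",
                            "MetricName": "ApproximateNumberOfMessagesVisible",
                            "Dimensions": [{"Name": "QueueName", "Value": queue}],
                        },
                        "Stat": "Sum",
                    },
                    "ReturnData": False,  # input to the expression only
                },
                {   # m2: running task count (assumes Container Insights)
                    "Id": "m2",
                    "MetricStat": {
                        "Metric": {
                            "Namespace": "ECS/ContainerInsights",
                            "MetricName": "RunningTaskCount",
                            "Dimensions": [
                                {"Name": "ClusterName", "Value": cluster},
                                {"Name": "ServiceName", "Value": service},
                            ],
                        },
                        "Stat": "Average",
                    },
                    "ReturnData": False,
                },
                {   # e1: the expression Target Tracking actually follows
                    "Id": "e1",
                    "Expression": "m1 / m2",
                    "ReturnData": True,
                },
            ],
        },
    }

config = build_tracking_config("orders", "prod-cluster", "worker-service")
# With credentials configured, this dict would be passed as
# TargetTrackingScalingPolicyConfiguration to put_scaling_policy.
```

The point of the `ReturnData` flags is that only the expression's result (`e1`) is fed to Target Tracking; `m1` and `m2` exist purely as inputs, so no separate custom metric ever needs to be published or paid for.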
Amazon OpenSearch Service announces security analytics that provides new threat monitoring, detection, and alerting features. These capabilities help you to detect and investigate potential security threats that may disrupt your business operations or pose a threat to sensitive organizational data.
Security analytics is built on open source OpenSearch and comes pre-packaged with over 2200 open source Sigma security rules. These rules help you find potential security threats from your event logs in real time. Previously users needed to have prior security knowledge and expertise on multiple products to generate actionable security alerts and insights.
With security analytics, users with no prior security experience can now leverage simplified workflows to correlate multiple security logs and investigate security incidents without leaving OpenSearch.
To get started, you can create detectors by using pre-packaged rule sets that automatically detect and generate findings. You can use OpenSearch Dashboards to create visualizations, dashboards or reports to help generate additional insights for further security investigation.
Additionally, you can create custom rules, customize security alerts based on threat severity, and receive automated notifications at your preferred destination such as email or a Slack channel.
Amazon EMR is excited to announce a new capability that enables users to apply AWS Lake Formation-based table and column-level permissions on Amazon S3 data lakes for write operations (i.e., INSERT INTO, INSERT OVERWRITE) with Apache Hive jobs submitted using the Amazon EMR Steps API.
This feature allows data administrators to define and enforce fine-grained table and column level security for customers accessing data via Apache Hive running on Amazon EMR.
Amazon EMR integration with AWS Lake Formation allows you to define and enforce database, table, and column-level permissions with open source data processing engines such as Apache Spark and Apache Hive running on Amazon EMR.
Prior to this release, data administrators could define and enforce Lake Formation-based permissions on databases, tables, and columns for read-only workloads with Apache Hive on EMR. With the current release, you can now use Hive to write to or alter Lake Formation-enabled tables. This means you can enforce Lake Formation-based database, table, and column-level permissions when your customers run INSERT INTO, INSERT OVERWRITE, and ALTER TABLE queries. To use Lake Formation-based permissions, customers must use the AWS Glue Data Catalog as the metastore.
Using Amazon S3 Object Lambda, you can add your own code to S3 GET, HEAD, and LIST API requests to modify data as it is returned to an application. You can now use an S3 Object Lambda Access Point alias as an origin for your Amazon CloudFront distribution to tailor or customize data to end users. For example, you can resize an image depending on the device that an end user is visiting from.
Beginning now, S3 Object Lambda Access Point aliases are automatically generated and are interchangeable with S3 bucket names for data accessed through S3 Object Lambda.
For existing S3 Object Lambda Access Points, aliases are automatically assigned and ready for use. With aliases for S3 Object Lambda Access Points, any application that requires an S3 bucket name can easily present different views of data depending on the requester.
Starting today, AWS customers can update the Apple macOS operating system from within the guest environment on their Amazon Elastic Compute Cloud (EC2) M1 Mac instances. With this capability, customers can now update their guest environments to a specific or the latest (non-beta) macOS version without having to tear down their existing macOS environments, launch new instances, and reinstall libraries, tooling, and dependencies such as Apple Xcode.
Virtual Private Cloud (VPC) interface endpoints for Amazon S3 now offer private DNS options that can help you more easily route S3 requests to the lowest-cost endpoint in your VPC. With private DNS for S3, your on-premises applications can use AWS PrivateLink to access S3 over an interface endpoint, while requests from your in-VPC applications access S3 using gateway endpoints.
Routing requests like this helps you take advantage of the lowest-cost private network path without having to make code or configuration changes to your clients.
To get started with private DNS for S3, first create an inbound resolver endpoint in your VPC and point your on-premises resolver to it. Then, go to the VPC console and use the enable DNS name option when you create or modify an interface endpoint. To automatically route requests from on-premises applications over interface endpoints, select Enable private DNS only for inbound endpoint.
With this option, S3’s regional DNS names (*.s3.region.amazonaws.com) will resolve to the private IP addresses on your interface endpoints for on-premises clients. Your in-VPC clients will be unaffected, and will continue to use S3’s public IP addresses. This means applications will use interface endpoints for your on-premises traffic, while in-VPC traffic will use lower-cost gateway endpoints.
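The sketch below builds the parameters such a configuration would pass to the EC2 ModifyVpcEndpoint API, assuming the DNS options shape described above; the endpoint ID is a hypothetical placeholder:

```python
# Sketch: parameters for EC2's ModifyVpcEndpoint that enable private
# DNS for an S3 interface endpoint, restricted to requests arriving
# through an inbound Route 53 Resolver endpoint. The endpoint ID is a
# hypothetical placeholder.

def build_endpoint_params(endpoint_id: str) -> dict:
    return {
        "VpcEndpointId": endpoint_id,
        "PrivateDnsEnabled": True,
        "DnsOptions": {
            # Only on-premises requests resolved via the inbound
            # resolver endpoint get the interface endpoint's private
            # IPs; in-VPC clients keep resolving S3's public addresses
            # and continue using the gateway endpoint.
            "PrivateDnsOnlyForInboundResolverEndpoint": True,
        },
    }

params = build_endpoint_params("vpce-0123456789abcdef0")
# With credentials configured:
# boto3.client("ec2").modify_vpc_endpoint(**params)
```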
AWS Data Exchange for Amazon S3 is now generally available, helping customers easily find, subscribe to, and use third-party data files for faster time to insight, storage cost optimization, simplified data licensing management, and more.
This feature is intended for subscribers who want to use third-party data files directly from data providers’ Amazon Simple Storage Service (Amazon S3) buckets, without needing to create or manage data copies, as well as data providers who want to offer in-place access to data hosted in their Amazon S3 buckets.
Subscribers access the same S3 objects that the data provider maintains, helping them optimize storage costs and use the most up-to-date data available, without additional engineering or operational work.
Data providers can easily set up AWS Data Exchange for Amazon S3 on top of their existing S3 buckets to share direct access to an entire S3 bucket or specific prefixes and objects; these S3 objects can be server-side encrypted with either customer-managed keys stored in AWS Key Management Service or Amazon S3 managed keys.
After setup, AWS Data Exchange automatically manages subscriptions, entitlements, billing, and payment.
AWS Data Exchange for Amazon S3 is available in all AWS Regions where AWS Data Exchange is available today.
Amazon Chime SDK now supports Amazon Voice Focus background noise reduction for telephone carrier deployments. Amazon Voice Focus is an award-winning, machine learning-based noise suppression algorithm that reduces unwanted environmental noises such as wind, barking dogs, keyboard typing, and car horns from phone calls.
Now telecom providers can deploy background noise reduction, at scale, across their voice network for both landline and mobile subscribers.
Using Amazon Voice Focus for carriers, telecom providers can offer background noise reduction as a consumer and business-oriented voice calling feature to help improve the fidelity of voice conversations. For example, fixed-line operators can now provide Amazon Voice Focus to customers across their Session Initiation Protocol (SIP) trunking services to on-premises phone systems.
In this case, Amazon Voice Focus can help reduce background noise interrupting conversations between contact center agents and callers. Mobile Network Operators (MNOs) can also offer Amazon Voice Focus to their subscribers to make it easier for callers to communicate in noisy environments like busy city streets or loud vehicles.
Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service.
This week, Amazon Keyspaces added support for client-side timestamps. Client-side timestamps are Cassandra-compatible timestamps that are persisted for each cell in your table. You can use client-side timestamps for cell level conflict resolution by letting applications determine the order of writes.
For example, when multiple applications make updates to the same data or when write operations arrive out of order due to variable network latency, Amazon Keyspaces uses these timestamps to process the writes based on the write timestamps of individual cells within rows.
To use client-side timestamps, use the USING TIMESTAMP clause in your Data Manipulation Language (DML) CQL query. With this launch, you can also use the WRITETIME function to see the timestamp value that is stored for a specific cell on tables using client-side timestamps.
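As a sketch of the syntax, the statements below use a hypothetical keyspace and table; USING TIMESTAMP takes a client-supplied timestamp (conventionally microseconds since the epoch), and WRITETIME reads back the value stored for a cell:

```sql
-- Hypothetical keyspace/table; the timestamp value is illustrative.
INSERT INTO store.catalog (id, price)
VALUES (42, 19.99)
USING TIMESTAMP 1679000000000000;

-- WRITETIME returns the timestamp persisted for the price cell.
SELECT id, price, WRITETIME(price)
FROM store.catalog
WHERE id = 42;
```

With timestamps supplied by the application, the write carrying the highest timestamp wins when conflicting updates arrive, which is the Cassandra-compatible conflict-resolution behavior described above.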
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra SharePoint OnPrem Connectors (2013, 2016, 2019) and Subscription Edition to index and search documents from SharePoint OnPrem.
Critical information can be scattered across multiple data sources in an enterprise, including platforms like SharePoint OnPrem. Amazon Kendra customers can now use the Amazon Kendra Microsoft SharePoint OnPrem Connectors to index documents and search for information across this content using Amazon Kendra Intelligent Search.
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting this week, AWS customers can use the Amazon Kendra Confluence Cloud Connector to index and search documents from Confluence Cloud.
Critical information can be scattered across multiple data sources in an enterprise, including messaging platforms like Confluence Cloud. Amazon Kendra customers can now use the Kendra Confluence Cloud Connector to index documents and search for information across this content using Kendra Intelligent Search.
The Amazon Kendra Confluence Cloud connector is available in all AWS regions where Amazon Kendra is available.
Anthos clusters on VMware
Fixed an issue where API Gateway used the IP address of the Google Cloud Load Balancer (GCLB) (specifically the address of the forwarding rule) to validate IP-restricted API keys in requests proxied by a GCLB. API Gateway now correctly validates IP-restricted API keys using the IP address of the client calling the GCLB.
App Engine flexible environment Python
In the Google Cloud console, the Job details page has been updated to include an Events tab, which lists the job's status events and contains a link to the job's logs.
To view the Events tab, follow the steps to describe a job using the console.
All public SKU groups, including 8 Google Cloud Marketplace SKU groups, are now available for repricing in the Partner Sales Console (PSC). You can use the new SKU groups in repricing configurations to pass the granular margin to your customers. You can also view and download the list of SKUs in these SKU groups.
You can search for SKU groups by both name and ID.
When you restore a backup, if the destination cluster doesn't have enough nodes to store the new table, Cloud Bigtable returns a FAILED_PRECONDITION error message. Previously, a RESOURCE_EXHAUSTED error was returned.
The Cloud Composer 2.1.9 and 1.20.9 release started on March 13, 2023. Get ready for upcoming changes and features as we roll out the new release to all regions. The rollout is currently in progress, so the listed changes and features might not yet be available in some regions.
Fixed the issue where BigQuery tasks in the deferrable mode failed when data lineage was enabled.
Cloud Composer 2.1.9 and 1.20.9 images are now available.
The Logging Query Language now supports a built-in SEARCH function that you can use to find strings in your log data. The SEARCH function is in preview.
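As an illustrative sketch, a SEARCH call can be combined with ordinary field comparisons in a query; the search string and resource type below are hypothetical examples:

```
SEARCH("timed out") AND resource.type = "gce_instance"
```

This would match log entries from Compute Engine instances whose text contains the phrase "timed out", without needing to know which field holds it.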
Cloud SQL for MySQL
Cloud SQL for MySQL now supports 106 new database flags. See supported flags for more information.
Generally available: Hyperdisk provides the fastest block storage for Compute Engine for your high-end, memory-intensive workloads. Hyperdisk volumes are durable network storage devices that your VMs can access, similar to Persistent Disk. For more information, see About Hyperdisk.
Workforce identity federation now supports browser-based sign-in. The feature is generally available (GA). To use it, see Browser-based sign-in in Obtain short-lived tokens for workforce identity federation, or locate the Browser-based sign-in section in the configuration guide for your identity provider.
Identity Platform has updated several quotas. View the updated quotas from Identity Toolkit API > Quotas on the APIs & Services page in the Google Cloud console.
Secure Web Proxy
Cloud Secure Web Proxy supports TLS inspection, which helps you intercept TLS traffic, inspect encrypted requests, and enforce security policies. This feature is supported in Preview.
Cloud Text-to-Speech now offers Long Audio Synthesis. This new API can be used to synthesize texts longer than 5 KB. For more information about API usage using the command line, see Create long audio from text by using the command line.
Hybrid subnets are available in Preview. A hybrid subnet combines an on-premises subnet and a VPC subnet into a single logical subnet. You can migrate individual workloads and instances from the on-premises subnet to the VPC subnet over time without needing to change IP addresses.
Microsoft Azure Releases And Updates
Work with JSON-style documents synced across multiple regions.
Monitor the PgBouncer process health for Azure Database for PostgreSQL – Flexible Server via Azure Monitor metrics and write custom alert rules on these metrics.
New troubleshooting experiences via monitor workbooks for Azure PostgreSQL - Flexible Server
General availability enhancements and updates released for Azure SQL in mid-March 2023
Azure Load Testing now uses Apache JMeter version 5.5 for running load tests.
New features now available in GA include the ability to visualize timeseries models, and create a Compute Instance on behalf of another user.
You can now choose different destinations for your Azure Container Apps logs.
Azure Kubernetes Service Edge Essentials is a Microsoft-supported lightweight Kubernetes distribution that is fine-tuned to run on edge devices with constrained resources.
The Azure SQL Migration extension allows offline database migrations of on-premises and cloud SQL Server databases to Azure SQL Database, providing a comprehensive modernization experience in Azure Data Studio.
Use the Azure Database for MySQL - Flexible Server connector to connect to MySQL data and build with Power Apps and Logic Apps.
You can now use an identity-based connection to access Azure Storage, instead of embedding secrets in connection strings.
Take advantage of the features in OpenShift version 4.11.
Customers can now collect Syslog from their AKS clusters using Azure Monitor container insights.
Improve development velocity and efficiency via modern REST and GraphQL API using the new data API builder with support for multiple Azure databases.
Now available in the Brazil Southeast, South Africa North, and UAE North regions, Azure Ultra Disk Storage provides high performance along with sub-millisecond latency for your most demanding workloads.
Expanded regional availability and several new features for workload optimization, enhanced security, and performance.
Boost Azure Disk Storage IOPS and throughput with Performance Plus
Build CDC data flows from Azure Cosmos DB analytical store, at no RUs cost, with Azure Synapse Analytics or Azure Data Factory.
Public preview enhancements and updates released for Azure SQL in mid-March 2023
Azure Monitor container insights now offers a new, lightweight schema for the container logs in ContainerLogV2
Azure Portal metric charts will now enable customers to analyze metrics on their Azure resources by supporting the split-by operator on more than one dimension.
Azure Firewall Basic provides cost-effective, enterprise-grade network security for small and medium businesses (SMBs).
You can now make REST or GraphQL requests to a built-in `/data-api` endpoint to retrieve and modify contents of a connected database, without having to write backend code.
Customers can now take advantage of the unlimited virtualization licensing capability included with the SQL Server Software Assurance with Azure Hybrid Benefit for SQL Server on Azure VMware Solution.
Use the newly available pg_hint_plan extension to tweak PostgreSQL execution plans and semver extension to do semantic versioning in Azure Database for PostgreSQL – Flexible Server.
You can now add an apex custom domain to your Static Web Apps with A records.
Azure Machine Learning is now Generally Available in the UK West region.
New feature now available in Public Preview includes the ability to receive troubleshooting documentation on failed environment builds and shorten the training phase of large-scale distributed PyTorch models.
We are adding the “Selective Disk Backup and Restore” capability to the Enhanced Policy of Azure VM Backup.
Azure Digital Twins data history now supports historizing twin property updates, twin lifecycle events, and relationship lifecycle events.
Illumio for Azure Firewall enables organizations to understand application traffic and dependencies and apply consistent protection across environments - limiting exposure, containing breaches, and improving efficiency.
Extract robust insights from images and video content across multiple industry domains
Immutable vaults help you protect your backups against threats like ransomware attacks and malicious actors by ensuring that your backup data cannot be deleted before its intended expiry time.
Azure Chaos Studio is now available in East Asia region.
Create Ephemeral VMs with customer managed key encryption types for encryption at host.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts, or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here (No credit card required and includes a forever free tier):