64 min read

In Cloud Computing This Week [Apr 14th 2023]

April 14, 2023

 


Hello,

Here's the weekly cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday April 14th 2023.

This week we released Azure self-hosted. If you are using Azure and would like to host Hava on your own infrastructure, please get in touch.

All the latest Hava news can be found on our LinkedIn newsletter.

Subscribe on LinkedIn

Of course we'd love to keep in touch at the other usual places. Come and say hello on:

Facebook. LinkedIn. Twitter.



AWS Updates and Releases

Source: aws.amazon.com

Amazon EC2 Trn1n instances, optimized for network-intensive generative AI models, are now generally available

This week, AWS announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn1n instances, which are powered by AWS Trainium accelerators. Building on the capabilities of Trainium-powered Trn1 instances, Trn1n instances double the network bandwidth to 1600 Gbps of second-generation Elastic Fabric Adapter (EFAv2).

With this increased bandwidth, Trn1n instances deliver up to 20% faster time-to-train for training network-intensive generative AI models such as large language models (LLMs) and mixture of experts (MoE). Similar to Trn1 instances, Trn1n instances offer up to 50% savings on training costs over other comparable Amazon EC2 instances.

To support large-scale deep learning (DL) models, Trn1n instances are deployed in EC2 UltraClusters with high-speed EFAv2 networking. EFAv2 speeds up distributed training by delivering up to 50% improvement in collective communications performance over first-generation EFA.

You can use the UltraClusters to scale to up to 30,000 Trainium accelerators and get on-demand access to a supercomputer with 6.3 exaflops of compute performance.

Similar to Trn1, each Trn1n instance has up to 512 GB of high-bandwidth memory, delivers up to 3.4 petaflops of FP16/BF16 compute power, and features NeuronLink, an intra-instance high-bandwidth nonblocking interconnect. AWS Neuron SDK integrates natively with popular machine learning (ML) frameworks, such as PyTorch and TensorFlow, so that you can continue using your existing frameworks and application code to train DL models on Trn1n.

Developers can run DL training workloads on Trn1n instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS ParallelCluster, Amazon SageMaker, and AWS Batch.
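
If you want to experiment outside of those managed services, a Trn1n instance can be launched like any other EC2 instance. Below is a minimal boto3 sketch; the AMI ID, subnet ID, and the trn1n.32xlarge size are assumptions to replace with values from your own account and region.

```python
# Hedged sketch: launching a single Trn1n instance with boto3.
# The AMI and subnet IDs are placeholders; use a Deep Learning AMI from your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder Deep Learning AMI ID
    InstanceType="trn1n.32xlarge",          # assumed Trn1n instance size
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",    # placeholder subnet
)
print(response["Instances"][0]["InstanceId"])
```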

Amazon EC2 Inf2 instances, optimized for generative AI, are now generally available

This week, AWS announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instances. These instances deliver high performance at the lowest cost in Amazon EC2 for generative AI models including large language models (LLMs) and vision transformers.

Inf2 instances are powered by up to 12 AWS Inferentia2 chips, the latest AWS designed deep learning (DL) accelerator. They deliver up to four times higher throughput and up to 10 times lower latency than first-generation Amazon EC2 Inf1 instances.

You can use Inf2 instances to run popular applications such as text summarization, code generation, video and image generation, speech recognition, personalization, and more. Inf2 instances are the first inference-optimized instances in Amazon EC2 to introduce scale-out distributed inference supported by NeuronLink, a high-speed, nonblocking interconnect.

You can now efficiently deploy models with hundreds of billions of parameters across multiple accelerators on Inf2 instances. Inf2 instances deliver up to three times higher throughput, up to eight times lower latency, and up to 40% better price performance than other comparable Amazon EC2 instances. To help you meet your sustainability goals, Inf2 instances offer up to 50% better performance per watt compared to other comparable Amazon EC2 instances.

Inf2 instances offer up to 2.3 petaflops of DL performance and up to 384 GB of total accelerator memory with 9.8 TB/s bandwidth. AWS Neuron SDK integrates natively with popular machine learning frameworks, such as PyTorch and TensorFlow. So, you can continue using your existing frameworks and application code to deploy on Inf2.

Developers can get started with Inf2 instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), and Amazon SageMaker.

Amazon CodeWhisperer is now generally available

AWS is announcing the general availability of Amazon CodeWhisperer. This artificial intelligence (AI) coding companion generates real-time single-line or full function code suggestions in your integrated development environment (IDE) to help you more quickly build software. With general availability, AWS is introducing two tiers: CodeWhisperer Individual and CodeWhisperer Professional.

CodeWhisperer Individual is free to use for generating code. You can sign up with an AWS Builder ID based on your email address. The Individual Tier provides code recommendations, reference tracking, and security scans. 

CodeWhisperer Professional—priced at $19 per user, per month—additionally offers enterprise administration capabilities to organizations that want to provide their developers with access to CodeWhisperer. Administrators get organizational license management to centrally manage which developers in the organization should have access to CodeWhisperer. They also get organizational policy management to set service policies at the organizational level. 

CodeWhisperer programming language support is also expanding. In addition to supporting Python, Java, JavaScript, TypeScript, and C#, CodeWhisperer can now also generate code suggestions for Go, Rust, Kotlin, Scala, Ruby, PHP, SQL, C, C++, and Shell Scripting.

Announcing AWS Elemental MediaConnect Gateway

This week, AWS announced the general availability of AWS Elemental MediaConnect Gateway, a new cloud-connected software application that transmits live video between your on-premises multicast network and AWS Elemental MediaConnect. You can easily build end-to-end live video contribution and distribution workflows in AWS at scale.

MediaConnect Gateway is deployed on hardware you supply using Amazon Elastic Container Service (ECS) Anywhere. Getting started with MediaConnect Gateway takes just a few minutes. Once the installation is complete, management of MediaConnect Gateway is done with the AWS Management Console, MediaConnect API, or AWS CDK.

You create an ingress bridge to move a feed from on-premises to a MediaConnect flow in the cloud or an egress bridge to move content from a cloud flow back to an on-premises network. You can use MediaConnect Gateway in conjunction with all other features of MediaConnect, allowing you to build out live video contribution and distribution workflows. 

Amazon EC2 I4i instances available in additional regions

Starting this week, storage-optimized Amazon EC2 I4i instances are also available in Asia Pacific (Melbourne, Mumbai, Osaka), Europe (Milan, Stockholm), and Middle East (Bahrain) regions. Amazon EC2 I4i instances are powered by 3rd generation Intel Xeon Scalable processors (Ice Lake) and deliver the highest local storage performance within Amazon EC2 using AWS Nitro NVMe SSDs. 

Amazon EC2 I4i instances are designed for databases such as MySQL, Oracle DB, and Microsoft SQL Server, and NoSQL databases such as MongoDB, Couchbase, Aerospike, and Redis where low latency local NVMe storage is needed in order to meet application service level agreements (SLAs). I4i instances are available in 8 sizes - large, xlarge, 2xlarge, 4xlarge, 8xlarge, 16xlarge, 32xlarge and metal. 

With this regional expansion, I4i instances are now available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Europe (Ireland, Frankfurt, London, Milan, Paris, Stockholm), Asia Pacific (Hong Kong, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain) and South America (São Paulo). Customers in these regions can purchase the I4i instances via Savings Plans, Reserved, On-Demand, and Spot instances. 

Amazon Redshift enhances string query performance by up to 63x

This week, Amazon Redshift introduced additional performance enhancements that speed up string-based data processing by 5x to 63x compared to alternative compression encodings such as LZO or ZSTD. Amazon Redshift achieves this through vectorized scans over lightweight, CPU-efficient, dictionary-encoded string columns that allow the database engine to operate directly over compressed data.

These techniques are optimal on low cardinality string columns (CHAR or VARCHAR). Low cardinality string columns are columns that have up to a few hundred unique string values.

You can automatically benefit from this new high performance string enhancement by enabling Automatic Table Optimization (ATO) in your Amazon Redshift data warehouse. If you do not have ATO enabled on your tables, you can receive recommendations from the Amazon Redshift Advisor in the Amazon Redshift Console on a string column’s suitability for BYTEDICT encoding.

You can also define new tables that have low cardinality string columns with BYTEDICT encoding. String enhancements in Amazon Redshift are now available in all Amazon Web Services (AWS) regions where Amazon Redshift is available.
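
As a rough illustration of opting in manually, here is a sketch that creates a table with an explicit BYTEDICT encoding on a low-cardinality string column through the Redshift Data API. The cluster identifier, database, user, and table definition are placeholders.

```python
# Hedged sketch: explicit BYTEDICT encoding on a low-cardinality VARCHAR column.
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

ddl = """
CREATE TABLE web_events (
    event_id   BIGINT,
    country    VARCHAR(64) ENCODE BYTEDICT,  -- low-cardinality string column
    event_time TIMESTAMP
);
"""

redshift_data.execute_statement(
    ClusterIdentifier="my-redshift-cluster",  # placeholder cluster
    Database="dev",
    DbUser="awsuser",
    Sql=ddl,
)
```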

AWS Lambda now supports SnapStart for Java functions in 6 additional regions

AWS Lambda now supports SnapStart for Java functions in 6 additional AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), Europe (London), South America (São Paulo), US West (N. California). AWS Lambda SnapStart for Java delivers up to 10x faster function startup performance at no extra cost. Lambda SnapStart is a performance optimization that makes it easier for you to build highly responsive and scalable Java applications using AWS Lambda, without having to provision resources or spend time and effort implementing complex performance optimizations.

You can activate Lambda SnapStart for new or existing Java-based Lambda functions running on Amazon Corretto 11 using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDK, and AWS Cloud Development Kit (AWS CDK).
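
For example, a minimal boto3 sketch for turning SnapStart on for an existing Java function might look like the following; the function name is a placeholder, and SnapStart only applies to versions you publish after the change.

```python
# Hedged sketch: enable SnapStart on an existing Java function, then publish a version.
import boto3

lambda_client = boto3.client("lambda", region_name="sa-east-1")

lambda_client.update_function_configuration(
    FunctionName="my-java-function",             # placeholder function name
    SnapStart={"ApplyOn": "PublishedVersions"},  # activate SnapStart
)

# Publish a new version so a snapshot is created for it.
lambda_client.publish_version(FunctionName="my-java-function")
```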

Amazon FSx for Windows File Server is now available in three additional regions

Customers can now create Amazon FSx for Windows File Server file systems in three new AWS Regions: Europe (Zurich), Europe (Spain), and Asia Pacific (Hyderabad).

Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for Windows File Server, a member of the Amazon FSx family, provides fully managed, highly reliable file storage built on Windows Server and can be accessed via the industry-standard Server Message Block (SMB) protocol. 

Amazon MSK is now available in Hyderabad, Spain, and Zurich Regions

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now available in Asia Pacific (Hyderabad), Europe (Spain), and Europe (Zurich) Regions. Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store.

Amazon MSK is fully compatible with Apache Kafka, which enables you to more quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you spend more time building innovative streaming applications and less time managing Kafka clusters. 

Amazon Chime SDK updates Service Level Agreement

AWS has updated the Service Level Agreement (SLA) for Amazon Chime SDK to include a Monthly Uptime Percentage of 99.99% with a completed AWS Well-Architected Review. The Amazon Chime SDK provides builders with an easy way to add real-time voice, video, and messaging, powered by machine learning, into their applications.

SIP Trunking enables telephony administrators to connect on-premises phone systems to the public telephone network and Amazon machine learning services, such as Amazon Transcribe and Amazon Comprehend. PSTN Audio enables builders to create custom telephony applications like voice prompts, call routing, and recording.

WebRTC media sessions bring people together with real-time audio, video, and screen and content sharing in web and mobile applications. The integration with PSTN Audio allows users to participate via the public telephone network. Messaging provides secure chat that scales to millions of users with customizable retention, inspection, and redaction.

The updated SLA is available immediately in all regions where Amazon Chime SDK is available. Refer to the Available Regions in the Developer Guide for a list of available regions. 

Announcing updated video background blur and replacement in Amazon Chime SDK

Developers can now use updated video background blur and replacement with improved quality in Amazon Chime SDK client library for JavaScript. The Amazon Chime SDK lets developers add real-time audio, video, and screen share to their web applications. Video background blur and replacement obscure users’ surroundings to help increase visual privacy. 

The updated background blur and replacement incorporates visual improvements such as a new image segmentation model that can help distinguish more clearly between a meeting participant and their background.

The updated background blur and replacement is available in the Amazon Chime SDK client library for JavaScript, and works with meetings in all AWS Regions where Amazon Chime SDK meetings are available.

AWS Service Management Connector introduces AWS Support and Automation integrations in Jira Cloud

Starting this week, AWS customers can use Atlassian’s Jira Service Management (JSM) Cloud as a single place to track and manage cases (incidents) from AWS Support via the AWS Service Management Connector for JSM. AWS Support enables users to create, track and resolve cases related to AWS resources in a central place, helping customers reduce the time to issue resolution.

This dual sync integration between AWS Support cases and JSM incidents (issues) enables users to manage AWS Support cases while using their existing workflows in JSM Cloud.

In addition to the AWS Support integration, this launch also introduces AWS Systems Manager Automation integration which allows customers to automate common and repetitive IT operations and management tasks using predefined or custom built automated runbooks. The AWS Service Management Connector also provides existing integrations with AWS Service Catalog, AWS Security Hub and AWS Systems Manager Incident Manager. 

It’s easy to get started. The AWS Service Management Connector for Jira Service Management is available as an app (plug-in) to install at no-cost from the Atlassian Marketplace. Customers may incur cost for the AWS services used as well as the licensing for the IT service management (ITSM) tools.

These new features are available in all AWS Regions where AWS Support, AWS Service Catalog, AWS Security Hub and AWS Systems Manager services are available. For more information, please visit the documentation on the AWS Service Management Connector.

Amazon GameLift adds support for Unreal Engine 5

We are excited to announce Amazon GameLift now supports games built on Unreal Engine 5 with the latest update to the Amazon GameLift Server SDK. Amazon GameLift is a fully managed solution that allows you to manage and scale dedicated game servers for session-based multiplayer games. The latest version of the Amazon GameLift Server SDK 5.0 now supports Unity 2020.3, Unreal 4.26, Unreal 5.0, Go language, and custom C++ and C# engines. 

With this release, customers can integrate their Unreal 5 based game servers with the Amazon GameLift service. In addition, the latest Amazon GameLift Server SDK with Unreal 5 plugin is built to work with Amazon GameLift Anywhere, enabling developers to test and iterate Unreal game builds faster and manage game sessions across any server hosting infrastructure.

The Amazon GameLift Server SDK 5.0 now gives game developers even more flexibility, power, and ease of use when it comes to managing Unreal based game servers. 

Amazon FSx for Lustre is now available in three additional regions

Customers can now create Amazon FSx for Lustre file systems in three new AWS Regions: Europe (Zurich), Europe (Spain), and Asia Pacific (Hyderabad).

Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities.

Amazon FSx for Lustre, a member of the Amazon FSx family, provides fully-managed shared storage built on the world’s most popular high-performance file system, designed for fast processing of workloads such as machine learning, high performance computing (HPC), video processing, financial modeling, and electronic design automation (EDA). 

Amazon FSx for NetApp ONTAP is now available in three additional regions

Customers can now create Amazon FSx for NetApp ONTAP file systems in three new AWS Regions: Europe (Zurich), Europe (Spain), and Asia Pacific (Hyderabad).

Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities.

Amazon FSx for NetApp ONTAP, a member of the Amazon FSx family, provides the first and only complete, fully managed NetApp ONTAP file systems in the cloud. It offers the familiar features, performance, capabilities, and APIs of ONTAP with the agility, scalability, and simplicity of an AWS service.

AWS IoT Core announces general availability for MQTT5 Shared Subscriptions and new CloudWatch metrics

AWS IoT Core, a managed cloud service that lets customers connect billions of Internet of Things (IoT) devices and route trillions of messages to AWS services, announces support for Shared Subscriptions to its MQTT-based messaging broker service.

MQTT is a device-to-device messaging communication standard widely used in IoT applications for device-to-cloud message delivery. AWS IoT Core supports both MQTT3.1.1 and the newer MQTT5 industry specifications, allowing clients with either version to take advantage of this new feature seamlessly.

Using Shared Subscriptions, MQTT3 or MQTT5 clients can distribute a large volume of inbound messages to a group of subscribers for processing messages in a more efficient manner. This is achieved by delivering each message randomly to one of the subscribers, thus spreading the message processing load across a larger set of processors.
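
As a rough sketch of what a shared subscriber looks like, the example below uses the open-source paho-mqtt client (1.x constructor style) with MQTT 5 and the standard $share/&lt;group&gt;/&lt;topic&gt; filter syntax. The endpoint, certificate paths, group name, and topic are all placeholders.

```python
# Hedged sketch: one worker in a shared subscription group using paho-mqtt.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="worker-1", protocol=mqtt.MQTTv5)  # paho-mqtt 1.x style
client.tls_set(
    ca_certs="AmazonRootCA1.pem",   # placeholder certificate paths
    certfile="device.pem.crt",
    keyfile="private.pem.key",
)

def on_message(client, userdata, msg):
    # Each message is delivered to only one subscriber in the "workers" group.
    print(msg.topic, msg.payload)

client.on_message = on_message
client.connect("your-iot-endpoint.iot.us-east-1.amazonaws.com", 8883)  # placeholder endpoint
client.subscribe("$share/workers/sensors/telemetry", qos=1)            # shared subscription filter
client.loop_forever()
```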

In addition, AWS IoT Core is also enabling new CloudWatch metrics to help with observability of messages queued for persistent sessions in the MQTT broker. New metrics will indicate Success, ServerError, or Throttling of queued messages. These new metrics will simplify the debugging of any persistent message deliveries.

To get started and to learn more about MQTT5 features supported by AWS, refer to the technical documentation.

EC2 Image Builder supports vulnerability detection with Amazon Inspector for custom images

Customers can now use EC2 Image Builder to easily scan custom Amazon Machine Images (AMIs) and container images in their image pipelines to evaluate the impact of CVEs (Common Vulnerabilities and Exposures). You no longer have to manage custom scripts that identify CVEs on your images during the image build process in order to analyze next steps and mitigate their impact.

With this feature, powered by Amazon Inspector, you are provided a security overview of your AMIs and Container images that details the affected resources, vulnerability details, and known remediations.

To access this feature, you need to enable Amazon Inspector for your AWS account. You can go to the Image Builder Console to activate security scanning for your AWS account. Once Amazon Inspector scanning is activated, Image Builder will generate a security overview of your images on the next build of image pipelines in that account.

You can also manually run required image pipelines to generate the latest security overview of images. Security findings are accessible in the Console, as well as via CLI, API, CloudFormation, and CDK.

AWS Ground Station now supports Wideband Digital Intermediate Frequency

Amazon Web Services (AWS) announces the general availability of Wideband Digital Intermediate Frequency (DigIF) for satellite operators using Software Defined Radios (SDRs) with AWS Ground Station. With Wideband DigIF, satellite operators can use a SDR of their choice to perform demodulation and decoding of data in their Amazon Virtual Private Cloud (Amazon VPC), resulting in more control and flexibility of downlink data.

Wideband DigIF provides satellite operators the ability to downlink up to five channels, 400 MHz total, per polarity. This feature delivers Wideband DigIF data across AWS’s low-latency, high-bandwidth global network into a satellite operator’s Amazon VPC.

To use Wideband DigIF, satellite operators utilize Amazon CloudFormation to set up the required resources to deliver data to Amazon Elastic Compute Cloud (Amazon EC2) for real-time processing or store data in Amazon Simple Storage Service (Amazon S3) for asynchronous processing.

AWS launches Split Cost Allocation Data for Amazon ECS and AWS Batch

Starting this week, AWS customers can receive cost data for Amazon Elastic Container Service (Amazon ECS) tasks and AWS Batch jobs in the AWS Cost and Usage Reports (CUR), enabling you to analyze, optimize, and chargeback cost and usage for your containerized applications. With AWS Split Cost Allocation Data, customers can now allocate application costs to individual business units and teams based on how containerized applications consume shared compute and memory resources.

Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. Using Amazon ECS, customers create resources such as services, tasks, and clusters, spread across different teams, products, business units, and environments.

With visibility into the cost of the underlying workloads, customers can allocate costs to individual business units or teams. Previously, only Amazon ECS task usage data was available in the AWS CUR for customers using the EC2 launch type. Now you can view cost data for your Amazon ECS tasks in the AWS CUR. Customers using the AWS Fargate launch type can also view their actual CPU and memory usage in the AWS CUR.

AWS Batch customers running workloads using Amazon ECS on Amazon EC2 and AWS Fargate compute environments will also see their cost and usage in the AWS CUR.

AWS Lake Formation is now available in AWS Europe (Spain) Region

AWS Lake Formation is a service that allows you to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

Creating a data lake with Lake Formation allows you to define where your data resides and what data access and security policies you want to apply. Your users can then access the centralized AWS Glue Data Catalog which describes available data sets and their appropriate usage.

Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift Spectrum, AWS Glue, Amazon QuickSight, and Amazon Athena.

AWS Lambda adds support for Node.js 18 in the AWS GovCloud (US) Regions

AWS Lambda now supports Node.js 18 as a managed runtime in the AWS GovCloud (US) Regions. Developers creating serverless applications in Lambda with Node.js 18 in the AWS GovCloud regions can take advantage of new features such as an upgrade of the bundled AWS SDK for JavaScript to v3 and improved support for deploying ES Modules using Lambda layers.

This release also provides access to Node.js 18 language enhancements, including the experimental ‘fetch’ API. For more information on Lambda’s support for Node.js 18, see our blog post at Node.js 18.x runtime now available in AWS Lambda.

To deploy Lambda functions using Node.js 18, upload the code through the Lambda console and select the Node.js 18 runtime. You can also use the AWS CLI, AWS Serverless Application Model (AWS SAM) and AWS CloudFormation to deploy and manage serverless applications written in Node.js 18.

Additionally, you can also use the AWS-provided Node.js 18 base image to build and deploy Node.js 18 functions using a container image. To migrate existing Lambda functions running earlier Node versions, review your code for compatibility with Node.js 18 and then update the function runtime to Node.js 18.
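
The same runtime update can also be made programmatically; below is a minimal boto3 sketch, assuming an existing zip-packaged function whose code you have already verified against Node.js 18. The function name and region are placeholders.

```python
# Hedged sketch: switch an existing function's managed runtime to Node.js 18.
import boto3

lambda_client = boto3.client("lambda", region_name="us-gov-west-1")

lambda_client.update_function_configuration(
    FunctionName="my-node-function",   # placeholder function name
    Runtime="nodejs18.x",              # Node.js 18 managed runtime identifier
)
```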

Node.js 18 is the latest long-term support (LTS) release of Node.js and will be supported for security and bug fixes until April 2025. AWS will automatically apply updates to the Node.js 18 managed runtime and to the AWS-provided Node.js 18 base image, as they become available.

AWS Lake Formation is now available in AWS Europe (Zurich) Region

AWS Lake Formation is a service that allows you to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores all your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

Creating a data lake with Lake Formation allows you to define where your data resides and what data access and security policies you want to apply. Your users can then access the centralized AWS Glue Data Catalog which describes available data sets and their appropriate usage.

Your users then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift Spectrum, AWS Glue, Amazon QuickSight, and Amazon Athena.

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.26

Kubernetes 1.26 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon EKS and Amazon EKS Distro to run Kubernetes version 1.26. Starting today, you can create new 1.26 clusters or upgrade your existing clusters to 1.26 using the Amazon EKS console, the eksctl command line interface, or through an infrastructure-as-code tool.
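
For instance, a control plane upgrade to 1.26 with boto3 might look like the sketch below; the cluster name is a placeholder, and managed node groups still need to be upgraded separately.

```python
# Hedged sketch: upgrade an existing EKS cluster's control plane to Kubernetes 1.26.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

eks.update_cluster_version(
    name="my-cluster",   # placeholder cluster name
    version="1.26",
)
```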

Highlights of this Kubernetes version include stable support for Windows privileged containers and mixed protocols in services of type LoadBalancer, as well as scheduling improvements including controlling whether to take taints/tolerations into consideration when calculating Pod Topology Spread skew. For detailed information on major changes in Kubernetes 1.26, see the Amazon EKS blog post and the Kubernetes project release notes.

Kubernetes 1.26 support for Amazon EKS is available in all AWS Regions where Amazon EKS is available, including the AWS GovCloud (US) Regions.

Amazon EFS is now available in the AWS Asia Pacific (Melbourne) region

Customers can now create file systems using Amazon Elastic File System (Amazon EFS) in the AWS Asia Pacific (Melbourne) Region.

Amazon EFS is designed to provide serverless, fully elastic file storage that lets you share file data without provisioning or managing storage capacity and performance. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files.

Because Amazon EFS has a simple web services interface, you can create and configure file systems quickly and easily. The service is designed to manage file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations. 

AWS WAF increases web ACL capacity units limits

AWS WAF customers can now use up to 5,000 Web ACL capacity units (WCU) per web ACL. WAF uses WCU to calculate and control the operating resources that are required to run WAF rules, rule groups, and web ACLs. Previously, WAF permitted a maximum of 1,500 WCU per web ACL.

Customers who needed more than 1,500 WCU for their web ACLs needed to request manual limit increases. As WAF has launched new features and improved core product capabilities over the past few years, customers have asked us to increase the WCU limit so that it’s easier for them to use these new features without worrying about running out of capacity. 

Now, customers can use up to 5,000 WCU in their web ACLs without the need to request a limit increase. Web ACLs that use more than 1,500 WCUs will incur an additional charge per million requests processed for each 500 WCU the Web ACL uses beyond the default allocation of 1,500. For more information about pricing, visit the AWS WAF Pricing page.

AWS Firewall Manager is now available in three more regions

AWS Firewall Manager is now available in the Europe (Zurich), Europe (Spain), and Asia Pacific (Hyderabad) Regions, bringing AWS Firewall Manager to a total of 28 AWS commercial regions, two GovCloud regions, and all Amazon CloudFront edge locations.

AWS Firewall Manager is a security management service that enables customers to centrally configure and manage firewall rules across their accounts and resources. Using AWS Firewall Manager, customers can manage AWS WAF rules, AWS Shield Advanced protections, AWS Network Firewall, and VPC security groups across their entire AWS Organization.

AWS Firewall Manager ensures that all firewall rules are consistently enforced, even as new accounts and resources are created.

Amazon AppFlow announces 6 new connectors

Amazon AppFlow announces the release of 6 new data connectors for Software-as-a-Service (SaaS) applications. The new connectors enable you to transfer your data from Aftership, BambooHR, Freshsales, Google Sheets, Kustomer, and Pipedrive, providing connectivity to CRM, HR, and shipment tracking applications. The Amazon AppFlow integrations make it easier for you to access your data, gain actionable insights, and streamline analysis and reporting.

Amazon AppFlow is a fully-managed integration service that enables you to securely transfer your data between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Google Analytics, Facebook Ads, ServiceNow, and AWS services like Amazon S3 and Amazon Redshift without writing code.

AWS AppSync now supports publishing events to Amazon EventBridge

AWS AppSync is a managed service for building scalable APIs that connect applications to data and events. Developers can now use the new AppSync EventBridge data source target in their AppSync API to easily publish events generated from their applications, such as shopping cart actions, to subscribers of an event bus powered by Amazon EventBridge. 

With AppSync, developers write functions contained in pipeline resolvers to connect the types, fields, or operations defined in a GraphQL schema to their data sources. Native AppSync data sources for common targets, like Amazon DynamoDB, Amazon RDS, Amazon OpenSearch, and AWS Lambda, handle the heavy lifting of building the business logic those functions require to query and update data.

With the new AppSync EventBridge data source, it is now easier to write functions that publish API updates as events to an event bus powered by EventBridge. 
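
As a hedged sketch, registering an EventBridge data source on an existing AppSync API with boto3 could look like the following; the API ID, IAM role ARN, and event bus ARN are placeholders, and the role needs permission to call events:PutEvents on the bus.

```python
# Hedged sketch: add an EventBridge data source to an existing AppSync API.
import boto3

appsync = boto3.client("appsync", region_name="us-east-1")

appsync.create_data_source(
    apiId="abcdefghijklmnopqrstuvwxyz",   # placeholder AppSync API ID
    name="OrdersEventBus",
    type="AMAZON_EVENTBRIDGE",
    serviceRoleArn="arn:aws:iam::111122223333:role/AppSyncEventBridgeRole",  # placeholder role
    eventBridgeConfig={
        "eventBusArn": "arn:aws:events:us-east-1:111122223333:event-bus/orders"  # placeholder bus
    },
)
```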

AWS Lambda supports Maximum Concurrency for Amazon SQS as an event source in AWS GovCloud (US)

AWS Lambda now supports setting Maximum Concurrency to the Amazon SQS event source in the AWS GovCloud (US) Regions, which allows customers to control the maximum concurrent invokes by the Amazon SQS event source. When multiple Amazon SQS event sources are configured to a function, customers can control the maximum concurrent invokes of an individual SQS event source.

AWS Lambda makes it easier to consume events from Amazon SQS at scale. A Lambda function subscribes to an SQS queue using an event source mapping (ESM). The ESM consists of processing instances that poll the queue for messages and invoke the Lambda function. Processing instances scale up when there are more messages to process, and scale down when the number of messages in the queue drops or when they encounter function errors. 

Previously, customers looking to limit the maximum concurrent invokes by the ESM needed to set a reserved concurrency limit which would limit the concurrency used by the function, but at the cost of less consistent throughput and retrying messages due to function throttling. This new control on the event source mapping directly limits the number of concurrent invokes without customers having to configure reserved concurrency to perform a similar action. 
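
For example, capping an SQS event source at 10 concurrent invocations when creating the event source mapping with boto3 might look like this sketch; the queue ARN and function name are placeholders.

```python
# Hedged sketch: set Maximum Concurrency on an SQS event source mapping.
import boto3

lambda_client = boto3.client("lambda", region_name="us-gov-west-1")

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws-us-gov:sqs:us-gov-west-1:111122223333:orders-queue",  # placeholder queue
    FunctionName="process-orders",                                               # placeholder function
    ScalingConfig={"MaximumConcurrency": 10},  # cap concurrent invokes for this source
)
```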

Amazon EC2 Serial Console is now available on EC2 bare metal instances

Starting this week, EC2 Serial Console is now generally available on EC2 bare metal instances in addition to Nitro virtual instances. EC2 Serial Console provides a simple and secure way to troubleshoot boot and network connectivity issues interactively, by establishing a connection to the serial port of an instance.

Previously, on bare metal instances, you could get serial console output as logs or a screenshot through the EC2 management console, API or the CLI. With the EC2 Serial Console feature, you can interactively run troubleshooting commands for resolving boot and network configuration issues.

EC2 Serial Console is ideal for situations where you are unable to connect to your instance via normal SSH or RDP. Access to EC2 Serial Console is not permitted by default at the account level, and customers can grant access to EC2 Serial Console on their account by turning on the “EC2 Serial Console” account attribute from the AWS CLI or Console. It is also integrated with IAM and AWS Organizations policies for fine-grained access control.
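
As a small sketch, enabling the account-level attribute with boto3 could look like the following; IAM policies still control which principals can actually open serial console sessions.

```python
# Hedged sketch: turn on the account-level EC2 Serial Console attribute.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.enable_serial_console_access()
print(response["SerialConsoleAccessEnabled"])   # True once enabled
```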

Amazon MWAA now supports Apache Airflow version 2.5

You can now create Apache Airflow version 2.5 environments on Amazon Managed Workflows for Apache Airflow (MWAA). Apache Airflow 2.5 is the latest minor release of the popular open-source tool that helps customers author, schedule, and monitor workflows.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. With Apache Airflow version 2.5 on Amazon MWAA, customers can enjoy the same scalability, availability, security, and ease of management that Amazon MWAA offers, with the improvements of Apache Airflow 2.5, such as annotations for DAG runs and task instances, auto-refresh for task log view, and a better dataset user interface.

Apache Airflow version 2.5 on Amazon MWAA includes Python version 3.10 and comes pre-installed with recently released Amazon Provider Package version 7.1.0, enabling access to new AWS integrations, such as Amazon SageMaker Pipelines, Amazon SageMaker Model Registry, and Amazon EMR Notebooks.

AWS WAF supports larger request body inspections for Amazon CloudFront distributions

Starting this week, AWS WAF supports inspecting the body of incoming requests to protected CloudFront distributions, up to 64KB. The default inspection size of the body of an HTTP/S request has been increased from 8KB to 16KB. This new default will be applied to all new and existing WAF web access control lists, free of charge. 

AWS WAF is a web application firewall that enables you to monitor the HTTP(S) requests that are made to your protected web application resources. The inspection limit on the body defines the portion of each request payload WAF will inspect for application threats. Customers can continue to choose to allow, block, or count requests that exceed the limit they define.

AWS WAF previously had a maximum request body inspection of 8KB. For CloudFront distributions, AWS WAF will now support inspecting up to 64KB of request bodies. 

Amazon Pinpoint now supports AWS PrivateLink

You can now use AWS PrivateLink to privately access Amazon Pinpoint from your Amazon Virtual Private Cloud (Amazon VPC) without using public IPs, and without requiring the public internet.

AWS PrivateLink provides private connectivity among VPCs, AWS services, and your on-premises networks, without exposing your traffic to the public internet. Amazon Pinpoint is a multi-channel communications service that enables you to create personalized communications to engage targeted audiences across SMS, email, push notifications, in-app messaging, and voice.

Now, you can manage your Amazon Pinpoint segments, campaigns, and journeys without requiring an internet gateway in your VPC. AWS PrivateLink comes with private internet connectivity, security groups, and VPC endpoint policies to help meet your compliance requirements.

To use AWS PrivateLink, create an interface VPC endpoint for Amazon Pinpoint in your VPC using the Amazon VPC console, EC2 SDK, or CLI. You can also access the VPC endpoint from on-premises environments or from other VPCs using AWS VPN, AWS Direct Connect, or VPC Peering.
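
A rough boto3 sketch of creating the interface endpoint is shown below; the VPC, subnet, and security group IDs are placeholders, and the Pinpoint service name shown is an assumption you should confirm with describe_vpc_endpoint_services for your region.

```python
# Hedged sketch: create an interface VPC endpoint for Amazon Pinpoint.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                   # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.pinpoint",  # assumed service name; verify for your region
    SubnetIds=["subnet-0123456789abcdef0"],          # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],       # placeholder security group
    PrivateDnsEnabled=True,
)
```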

Amazon Rekognition launches Face Liveness to deter fraud in facial verification

This week, AWS announced the general availability of Amazon Rekognition Face Liveness, a new feature to deter fraud in facial verification. Face Liveness helps customers detect in seconds that real users, and not bad actors using spoofs, are accessing their services.

Financial services, gig economy, telco, healthcare, social media, and other customers use facial verification during online onboarding, step-up authentication, and age-based access restriction. These customers need to deter bad actors that use spoofs for unauthorized access. Face Liveness analyzes a short user selfie video to detect whether the user is real or a spoof.

Face Liveness detects spoofs presented to the camera (e.g. printed photos, digital photos or videos, or 3D masks) and spoofs that bypass the camera (e.g. pre-recorded real or deepfake videos). Face Liveness returns a high-quality selfie frame for downstream Amazon Rekognition Face Matching or Age Estimation analysis.

Customers can easily add Face Liveness to their React web, native iOS, and native Android applications using open-source AWS Amplify SDKs. Face Liveness automatically scales up or down based on demand and customers pay only for the face liveness checks performed. No infrastructure management, hardware-specific implementation, or machine learning (ML) expertise is required. Face Liveness uses ML models trained on diverse datasets to support high accuracy across user skin tones, ancestries, and devices.

Amazon ECS on AWS Fargate now supports FIPS 140-2 on AWS Fargate in AWS GovCloud (US) Regions

Starting this week, AWS customers can deploy their workloads on Amazon ECS on AWS Fargate in a manner compliant with Federal Information Processing Standard (FIPS) 140-2. FIPS is a U.S. and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information.

Until now, customers could manage FIPS mode themselves on Amazon ECS tasks running on Amazon EC2 as they had complete control over customizing infrastructure on EC2. With this launch, customers can now run their workloads in a FIPS-compliant manner on Amazon ECS on AWS Fargate.

You can use the new Amazon ECS account-level setting fargateFipsMode to specify that all ECS tasks running on Fargate should be configured to be FIPS compliant. With this setting enabled, ECS and Fargate communicate over FIPS-compliant endpoints, use the appropriate cryptographic modules, and boot the underlying kernel in FIPS mode.
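
As an illustrative sketch, setting the account setting with boto3 in a GovCloud region might look like the following; confirm the exact setting name and casing against the Amazon ECS API reference before relying on it.

```python
# Hedged sketch: enable the Fargate FIPS account setting in a GovCloud region.
import boto3

ecs = boto3.client("ecs", region_name="us-gov-west-1")

ecs.put_account_setting(
    name="fargateFIPSMode",   # confirm exact name/casing in the ECS API reference
    value="enabled",
)
```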

Amazon GuardDuty Adds Three New Threat Detections to Alert Customers on Suspicious DNS Traffic

Amazon GuardDuty adds three new threat detections to help detect suspicious DNS traffic indicative of potential attempts by malicious actors to evade detection when performing activities such as exfiltrating data, or using command & control servers to communicate with malware.

The newly added finding types are:

  1. DefenseEvasion:EC2/UnusualDNSResolver
  2. DefenseEvasion:EC2/UnusualDoHActivity
  3. DefenseEvasion:EC2/UnusualDoTActivity

Amazon GuardDuty monitors DNS traffic from EC2 instances that use the Amazon DNS resolvers to detect potential malicious actor activities. However, malicious actors may attempt to mask their activity by using external DNS providers, or by using techniques such as sending DNS traffic over HTTPS (DoH), or over TLS (DoT).

The newly added GuardDuty threat detections help detect this type of activity. GuardDuty learns the expected DNS traffic patterns for the AWS environment to only alert when the activity is suspicious and indicative of potential malicious activity.

The new threat detections are available to all existing and new Amazon GuardDuty customers at no additional cost and require no action to activate. The finding type DefenseEvasion:EC2/UnusualDNSResolver is available in all Amazon GuardDuty supported regions, and the DefenseEvasion:EC2/UnusualDoHActivity and DefenseEvasion:EC2/UnusualDoTActivity threat detections are available in all Amazon GuardDuty supported regions, excluding the AWS Asia Pacific (Osaka), AWS Asia Pacific (Jakarta), AWS Asia Pacific (Seoul), China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD) regions, which will be added at a later date.

Customers across industries and geographies use Amazon GuardDuty to protect their AWS environments, including over 90% of AWS’s 2,000 largest customers. GuardDuty continuously monitors for malicious or unauthorized behavior to help protect your AWS resources.

You can begin your 30-day free trial of Amazon GuardDuty with a single click in the AWS Management Console. To receive programmatic updates on new GuardDuty features and threat detections, subscribe to the Amazon GuardDuty SNS topic.

AWS Firewall Manager adds support for six additional AWS WAF features

AWS Firewall Manager now supports AWS WAF Bot Control for Targeted Bots, AWS WAF Fraud Control - Account Takeover Prevention, AWS WAF Rules action overrides for managed rule groups, centralized AWS WAF logging directly to S3 buckets and new logging filters, and AWS WAF Captcha Configuration, Challenge configuration, and Token Domains.

With support for AWS WAF Bot Control for Targeted Bots, customers can easily enable and scale advanced bot detection techniques, such as browser interrogation, fingerprinting, and behavioral analysis to protect against targeted bot attacks. With support for AWS WAF Fraud Control - Account Takeover Prevention, customers can scale and protect their application’s login page against credential stuffing attacks, brute force attempts, and other anomalous login activities.

This includes support for ATP managed rules that can inspect application request and response data and block login attempts based on customer-defined login failure conditions, as well as support for ATP origin response inspection. With support for AWS WAF rule action overrides, customers can create rule action overrides for managed rule groups. With support for AWS WAF Captcha Config, Challenge Config, and Token Domains, customers can use Captcha and Challenge configuration to specify web ACL-level timeouts on Captcha and Challenge actions.

With this launch we’re adding one new optional destination that allows customers to send AWS WAF logs directly to an Amazon S3 bucket. In addition, customers can use three additional rule action log filters: CAPTCHA, CHALLENGE, and EXCLUDED AS COUNT.

Amazon Connect Voice ID now supports multiple fraudster watchlists per Voice ID domain

Amazon Connect Voice ID now enables customers to maintain multiple fraudster watchlists for their Voice ID domains, with each watchlist supporting up to 500 fraudsters. Previously, each Voice ID domain supported only one fraudster watchlist for known fraudster detection.

With the availability of multiple watchlists, customers can configure which fraudster watchlist within their Voice ID domain is to be used for a specific contact in the Amazon Connect Contact Flow. Voice ID APIs allow customers to specify which watchlist a fraudster will be associated with when registering fraudsters, as well as manage individual fraudsters within watchlists.

Multiple fraudster watchlist support for Voice ID will enable Connect customers to help improve contact center security by fine-tuning the fraudster watchlists for different lines of business in their organization. Additionally, customers who have more fraudsters targeting their contact center than the default limit of 500 can now create multiple fraudster watchlists to manage them.
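
A hedged sketch of creating an additional watchlist with boto3 is shown below; the domain ID, watchlist name, and description are placeholders.

```python
# Hedged sketch: create an additional fraudster watchlist in a Voice ID domain.
import boto3

voiceid = boto3.client("voice-id", region_name="us-east-1")

response = voiceid.create_watchlist(
    DomainId="abcdefghij1234567890ab",   # placeholder domain ID
    Name="retail-line-of-business",
    Description="Known fraudsters targeting the retail contact center",
)
print(response["Watchlist"]["WatchlistId"])
```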

Amazon QuickSight now supports Row Level Security tags with OR condition

Amazon QuickSight now supports nested conditions within Row Level Security (RLS) tags where you can combine AND and OR conditions to simplify multi-tenant access patterns. You can use RLS with tag-based rules to restrict access to a dataset when embedding dashboards for anonymous users.

Previously, you could only combine RLS tags using the AND condition with a value you assign to the tags at run time. To learn more about setting up row-level security using session tags with the OR condition, see the Amazon QuickSight documentation.
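
For context, the sketch below shows how session tags are passed when generating an anonymous embed URL with boto3; the account ID, dashboard ARN, and tag keys are placeholders, and the AND/OR combination itself is defined in the dataset's RLS tag rules.

```python
# Hedged sketch: pass session tags when embedding a dashboard for an anonymous user.
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

response = quicksight.generate_embed_url_for_anonymous_user(
    AwsAccountId="111122223333",                       # placeholder account
    Namespace="default",
    AuthorizedResourceArns=[
        "arn:aws:quicksight:us-east-1:111122223333:dashboard/sales-dashboard"  # placeholder
    ],
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "sales-dashboard"}                  # placeholder
    },
    SessionTags=[
        {"Key": "region", "Value": "EMEA"},   # evaluated against the dataset's RLS tag rules
        {"Key": "tenant", "Value": "acme"},
    ],
)
print(response["EmbedUrl"])
```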

Amazon Connect now enables agents to handle voice calls, chats, and tasks concurrently

Amazon Connect now supports the ability to concurrently offer agents contacts across multiple channels, including voice, chat, and tasks. Now contact center managers can configure an agent’s routing profile to receive contacts from multiple channels at the same time.

For example, an agent currently handling a chat could be offered a voice call from a high-priority queue when other agents aren't available. Contact center managers can also choose which channels cannot be interrupted. For example, they can allow chats to be interrupted by a phone call, but then prevent offering any further contacts until the agent completes that phone call.

This feature is supported in all AWS regions where Amazon Connect is offered, and there is no additional charge beyond standard pricing for the Amazon Connect service usage and associated telephony charges.

To learn more, see the API documentation or the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Amazon RDS Optimized Reads now offers up to 2X faster queries on RDS for PostgreSQL

Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports Amazon RDS Optimized Reads for up to two times faster query processing compared to previous generation instances. Complex queries that utilize temporary tables, such as queries involving sorts, hash aggregations, high-load joins, and Common Table Expressions (CTEs) can now execute up to two times faster with Optimized Reads on RDS for PostgreSQL.

Optimized Read-enabled instances achieve faster query processing by placing temporary tables generated by PostgreSQL on the local NVMe-based SSD block-level storage, thereby reducing your traffic to Amazon Elastic Block Store (Amazon EBS) over the network. Refer to our recent blog post to learn more about performance improvements using local disk-based database instances for workloads that have highly concurrent read/write processing.

Amazon RDS Optimized Reads is available by default on RDS for PostgreSQL versions 15.2 and higher, 14.7 and higher, and 13.10 and higher. This feature is now available on Intel-based M5d and R5d instances with up to 3,600 GiB of NVMe-based-SSD block-level storage and AWS Graviton2-based M6gd and R6gd database (DB) instances with up to 3,800 GiB of NVMe-based SSD block-level storage and up to 25 Gbps of network bandwidth.

You can configure these disk-based DB instances as Multi-AZ DB clusters, Multi-AZ DB instances, and Single-AZ DB instances. 

You can launch a new Optimized Reads-enabled workload with RDS for PostgreSQL in the Amazon RDS Management Console or using the AWS CLI. To learn more about the Amazon RDS Optimized Reads feature on Amazon RDS for PostgreSQL, refer to the Amazon RDS for PostgreSQL User Guide.

AWS Well-Architected Framework strengthens prescriptive guidance

AWS is pleased to announce an update to the AWS Well-Architected Framework, which will provide customers and partners with more prescriptive guidance on building and operating in the cloud, and enable them to stay up-to-date on the latest architectural best practices in a constantly evolving technological landscape.

The enhanced prescriptive guidance included in this update provides customers and partners with new and refreshed best practices, implementation steps, architectural patterns, and more outcome-driven improvement plans, helping them more easily identify and mitigate risks.

The new version of the AWS Well-Architected Framework is now available within the AWS Well-Architected whitepapers and in the AWS Well-Architected Tool in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, Stockholm), South America (Sao Paulo), Asia Pacific (Singapore, Sydney, Tokyo, Seoul, Mumbai, Hong Kong), Middle East (Bahrain), and AWS GovCloud (US) Regions.

Amazon SageMaker Inference Recommender improves usability and launches new features

Amazon SageMaker Inference Recommender (IR) helps customers select the best instance type and configuration (such as instance count, container parameters, and model optimizations) for deploying their ML models on SageMaker.

This week, AWS is announcing deeper integration with Amazon CloudWatch for logs and metrics, Python SDK support for running IR jobs, the ability for customers to run IR jobs within a VPC subnet of their choice, support for running load tests on an existing endpoint via a new API, and several usability improvements for getting started with IR more easily.

CloudWatch integration provides IR logs under a new log group for identifying any errors with IR execution. IR now also publishes key metrics such as concurrent users and CPU and memory utilization at P99 latency, in addition to throughput and latency. Python SDK support lets customers trigger an IR job from Jupyter notebooks to get instance type recommendations. We also launched new APIs that provide detailed visibility into all execution steps of an IR job and an option to load test the model against an existing endpoint.

To improve usability, we made several previously mandatory input parameters optional, and customers are no longer required to register a model or provide inputs such as domain to run an IR job.


 

Google Cloud Releases and Updates
Source: cloud.google.com

 

Anthos Clusters on AWS

As of March 21, 2023, traffic to k8s.gcr.io is redirected to registry.k8s.io, following the community announcement. This change is happening gradually to reduce disruption, and should be transparent for most Anthos clusters.

To check for edge cases and mitigate potential impact to your clusters, follow the step-by-step guidance in k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know.

Anthos Clusters on VMware

 

Anthos clusters on VMware 1.12.7-gke.20 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.7-gke.20 runs on Kubernetes 1.23.17-gke.900.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

  • Added admin cluster CA certificate validation to the admin cluster upgrade preflight check.

  • We now allow Storage DRS to be enabled in manual mode.

  • Fixed an issue where using gkectl update to enable Cloud Audit Logs did not work.

  • We now backfill the OnPremAdminCluster OSImageType field to prevent an unexpected diff during update.

  • Fixed an issue where a preflight check for Seesaw load balancer creation failed if the Seesaw group file already existed.

Anthos clusters on VMware 1.13.7-gke.29 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.7-gke.29 runs on Kubernetes 1.24.11-gke.1200.

The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.

App Engine flexible environment Node.js

Node.js 18 is now generally available. This version requires you to specify an operating system version in your app.yaml. Learn more.

Apigee X

New features now supported in Apigee in VS Code for local development

The following features are now supported with Apigee in VS Code for local development as part of the Insiders build (as of v1.22.1-insiders.3):

  • Create multi-repository workspaces - Choose individual storage locations for artifacts, such as API proxies that are stored as individual SCMs, but develop them together using a single workspace. You no longer have to create a single repository that contains all of your API proxies. See Understanding the structure of an Apigee multi-repository workspace.
  • Use keystore - Introduces a new environment-level setting for creating the required keystores in the Apigee Emulator by using locally available keys. See Configuring the keystores (keystores.json).
  • Test API proxies that require service accounts (for example, calling a cloud logging process as part of an API proxy flow) - Set up your Apigee Emulators with a service account key to enable service accounts, add policies and targets that rely on service accounts, and deploy the API proxies to the Apigee Emulator to test them. See Customizing the Apigee Emulator to support service account-based authentication.

Assured Open Source Software

Assured Open Source Software is generally available. For information about the product, see Overview of Assured Open Source Software.

Bare Metal Solution

You can now skip the cooling-off period while deleting a LUN or a storage volume. This feature is generally available (GA). For more information, see Delete LUNs from a storage volume and Delete a storage volume.

Batch

Documentation for pricing has been added to explain how you can visualize the costs associated with your Batch jobs by using Cloud Billing reports. For more information, see Pricing.

Documentation has been added to explain networking concepts and how to configure networking for Batch. For more information, see the Batch networking documentation.

Batch is available in the following regions:

  • asia-northeast1 (Tokyo)
  • europe-west4 (Netherlands)

For more information, see Locations.

BigQuery

BigQuery supports setting the rounding mode to ROUND_HALF_EVEN or ROUND_HALF_AWAY_FROM_ZERO for parameterized NUMERIC or BIGNUMERIC columns at the column level. You can specify a default rounding mode at the table or dataset level that is automatically attached to any columns added within those entities. The ROUND() function also accepts the rounding mode as an optional argument. This feature is generally available (GA).
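
As a quick illustration with the google-cloud-bigquery client, the query below passes the rounding mode as the optional third argument to ROUND(); treat the exact argument form as something to verify against the current BigQuery SQL reference.

```python
# Hedged sketch: compare the two rounding modes on a NUMERIC value.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  ROUND(NUMERIC '2.25', 1, 'ROUND_HALF_EVEN') AS half_even,
  ROUND(NUMERIC '2.25', 1, 'ROUND_HALF_AWAY_FROM_ZERO') AS away_from_zero
"""

for row in client.query(query).result():
    print(row.half_even, row.away_from_zero)
```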

The limit for maximum result size (20 GiB logical bytes) when querying Azure or Amazon Simple Storage Service (S3) data is now generally available (GA). Querying Azure and Amazon S3 data is now subject to the following quotas and limitations:

  • The maximum row size is 10 MiB. For more information, see Quotas for query jobs.

  • If your query uses the ORDER BY clause and has a result size larger than 256 MB, then your query fails. Previously, this limit was 2 MB. For more information, see Limitations.

Chronicle

The following supported default parsers have changed. Each is listed by product name and ingestion label, if applicable.

  • Akamai WAF (AKAMAI_WAF)
  • Area1 Security (AREA1)
  • Atlassian Confluence (ATLASSIAN_CONFLUENCE)
  • AWS VPC Flow (AWS_VPC_FLOW)
  • Cisco Firepower NGFW (CISCO_FIREPOWER_FIREWALL)
  • Cloud Audit Logs (N/A)
  • Cloud Intrusion Detection System (GCP_IDS)
  • Cloud Load Balancing (GCP_LOADBALANCING)
  • Cloud NAT (N/A)
  • Cloudflare (CLOUDFLARE)
  • F5 ASM (F5_ASM)
  • Security Command Center Threat (N/A)
  • GMAIL Logs (GMAIL_LOGS)
  • JumpCloud Directory Insights (JUMPCLOUD_DIRECTORY_INSIGHTS)
  • Kubernetes Node logs (KUBERNETES_NODE)
  • Linux Auditing System (AuditD) (AUDITD)
  • Microsoft Graph API Alerts (MICROSOFT_GRAPH_ALERT)
  • Mimecast (MIMECAST_MAIL)
  • NetApp ONTAP (NETAPP_ONTAP)
  • Office 365 (OFFICE_365)
  • Okta (OKTA)
  • Ping Identity (PING)
  • SentinelOne Deep Visibility (SENTINEL_DV)
  • Sophos Firewall (Next Gen) (SOPHOS_FIREWALL)
  • Symantec Endpoint Protection (SEP)
  • Trustwave SEC MailMarshal (MAILMARSHAL)
  • Unix system (NIX_SYSTEM)

For details about changes in each parser, see Supported default parsers.

Cloud Logging

The Logging Query Language now supports a built-in SEARCH function that you can use to find strings in your log data; this function is now generally available (GA). For more information, see SEARCH function.
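
For programmatic queries, a minimal sketch using the google-cloud-logging Python client (the search string is a hypothetical example) might look like this:

```python
from google.cloud import logging

client = logging.Client()

# SEARCH() looks for the given string across the fields of each log entry;
# "connection timed out" is a hypothetical example string.
log_filter = 'SEARCH("connection timed out")'

for entry in client.list_entries(filter_=log_filter):
    print(entry.timestamp, entry.payload)
```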

Cloud Monitoring  

Chart legends in select Cloud Monitoring pages have been updated. The default chart legend is simplified, with the option to expand the legend to view more details about your metrics. For more information, see Configure legends.

Cloud Run

Startup CPU boost for Cloud Run services is now at general availability (GA).

When deploying a new revision, Cloud Run now starts a sufficient number of instances of the new revision before directing traffic to it. This reduces the impact of new revision deployments on request latencies, notably when serving high levels of traffic.
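
A rough sketch of enabling startup CPU boost through the Cloud Run v2 Admin API Python client is shown below; the startup_cpu_boost field and the project, region, and service names are assumptions for illustration (the setting can also be enabled in the console or with gcloud).

```python
from google.cloud import run_v2

client = run_v2.ServicesClient()

# Hypothetical project, region, and service names.
name = "projects/my-project/locations/us-central1/services/my-service"

service = client.get_service(name=name)

# Assumed field: ResourceRequirements.startup_cpu_boost on the v2 Admin API.
# This gives each new instance extra CPU while it starts up.
service.template.containers[0].resources.startup_cpu_boost = True

# Updating the service creates a new revision; per the note above, Cloud Run
# starts instances of that revision before shifting traffic to it.
operation = client.update_service(service=service)
operation.result()
```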

Cloud Spanner

Cloud Spanner integration with Data Catalog is now available in Preview in the europe-central2 region.

For more information, see Manage resources using Data Catalog.

Config Controller

Config Controller has been updated to use newer versions of its included products.

Data Catalog

Data Catalog is now available in the Turin (europe-west12) and Doha (me-central1) regions. For more information on region and feature availability, see regions.

Dataflow

Dataflow cost monitoring is now available in preview.

Dialogflow

Dialogflow CX now supports flexible webhooks, where you can define the request HTTP method, request URL parameters, and fields of the request and response messages.

GKE 

Two new vulnerabilities, CVE-2023-0240 and CVE-2023-23586, have been discovered in the Linux kernel that could allow an unprivileged user to escalate privileges. For more information, see the GCP-2023-003 security bulletin.

In GKE 1.27 and later, GKE nodes will no longer keep compressed image layers in containerd's content store once they have been unpacked, because discard_unpacked_layers=true is set in the containerd configuration. This change will not affect workloads running as Kubernetes Pods and containers. However, if your workload relies on image layers in containerd's content store, make sure that it can handle the case where image layers are missing.

The new release of the GKE Gateway controller (2023-R01) is now generally available. With this release, the GKE Gateway controller will provide the following new capabilities:

  • Gateway API on Autopilot clusters by default (GKE 1.26+)
  • The Global External HTTP(S) Load Balancer GatewayClass graduates to GA
  • Global Access for the gke-l7-rilb GatewayClass
  • SSL Policies
  • HTTP-to-HTTPS redirect
  • Cloud Armor integration

You can check all the supported capabilities per GatewayClass on this page.

Google Cloud Armor

Advanced rule tuning features for preconfigured WAF rules are now Generally Available. For more information about the new tuning features, see Tune Google Cloud Armor preconfigured WAF rules.

Network Intelligence Center

Network Analyzer now includes an insight that gives a summary of the IP address utilization of all the subnet ranges in the analyzed project. For more information, see IP address utilization summary insights.

Security Command Center

The custom modules feature for Security Health Analytics is now generally available (GA). Custom modules allow you to define custom detectors for Security Health Analytics.

For more information, see Overview of custom modules for Security Health Analytics.

Event Threat Detection, a built-in service of Security Command Center, launched the following new rules to General Availability.

  • Privilege Escalation: Anomalous Impersonation of Service Account for Admin Activity
  • Privilege Escalation: Anomalous Multistep Service Account Delegation for Admin Activity
  • Privilege Escalation: Anomalous Multistep Service Account Delegation for Data Access
  • Privilege Escalation: Anomalous Service Account Impersonator for Admin Activity
  • Privilege Escalation: Anomalous Service Account Impersonator for Data Access

These rules detect anomalous activity by principals that use impersonated service accounts to access Google Cloud. For more information, see Event Threat Detection rules.

Storage Transfer Service

Transfers from S3-compatible storage to Cloud Storage are now generally available (GA). This feature builds on support for multipart uploads and ListObjectsV2, which makes Cloud Storage suitable for running applications written for the S3 API.

With this new feature, customers can seamlessly copy data from self-managed object storage to Google Cloud Storage. For customers moving data from AWS S3 to Cloud Storage, this feature provides an option to control network routes to Google Cloud, resulting in considerably lower egress charges.

See Transfer from S3-compatible sources for details.
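
As a rough, hedged sketch of creating such a transfer with the Storage Transfer Service Python client (the AwsS3CompatibleData field names are assumptions, and every project, agent pool, bucket, and endpoint name below is hypothetical):

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

transfer_job = {
    "project_id": "my-project",
    "status": storage_transfer.TransferJob.Status.ENABLED,
    "transfer_spec": {
        # S3-compatible sources are read through a transfer agent pool
        # that you have already created and populated with agents.
        "source_agent_pool_name": "projects/my-project/agentPools/my-pool",
        "aws_s3_compatible_data_source": {
            "bucket_name": "source-bucket",
            "endpoint": "s3.storage.example.com",
            "region": "us-east-1",
        },
        "gcs_data_sink": {"bucket_name": "destination-bucket"},
    },
}

job = client.create_transfer_job(request={"transfer_job": transfer_job})
print(job.name)
```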

Vertex AI

The Timeseries Insights API is now Generally Available. With the Timeseries Insights API, you can forecast and detect anomalies over billions of events in real time. For more information, see Timeseries Insights.

VPC

The Private Service Connect documentation has been updated.

Workload Manager

Workload Manager is now generally available (GA) for evaluating SAP workloads. It is a rule-based, cross-project validation service for evaluating workloads running on Google Cloud.

You can use Workload Manager to evaluate your SAP HANA and SAP NetWeaver workloads, and detect deviations from key best practices that SAP, OS vendors, and Google Cloud prescribe. This helps you improve the quality, reliability, and performance of your SAP workloads.

The set of rules provided will continue to evolve to cover new machine types and storage options as they become available, and extend SAP HANA and SAP NetWeaver best practices as relevant for your SAP workloads.

For more information, see the Product overview.


Getting_Started_Azure_Logo
Microsoft Azure Releases And Updates
Source: azure.microsoft.com

 
 

Connect Azure Stream Analytics to Azure Data Explorer using managed private endpoint.

 

You can connect your Azure Stream Analytics job to Azure Data Explorer / Kusto clusters using managed private endpoints

Use Stream Analytics to process exported data from Application Insights

Process telemetry exported from Application Insights by using an Azure Stream Analytics job.

Generally available: Azure Cosmos DB for PostgreSQL REST APIs

Use REST APIs for all cluster operations to streamline your ongoing management experience with one or many clusters.

Public Preview: Performance troubleshooting workbooks for Azure Database for PostgreSQL Flexible Server

Save time by analyzing and troubleshooting common performance issues with Performance troubleshooting workbooks.

Generally available: Azure Cosmos DB for PostgreSQL cluster compute start and stop

Stop and start compute on all cluster nodes to optimize cost of your Azure Cosmos DB for PostgreSQL clusters.

Azure SQL—General availability updates for mid-April 2023

General availability enhancements and updates released for Azure SQL for mid-April 2023

Azure Machine Learning - General Availability for April

New features now available in GA include the ability to customize your compute instance and configure a compute instance to automatically stop if it is inactive.

Public preview: Node Resource Group (NRG) lockdown

You can use this feature to stop unintended changes to AKS resources and reduce support issues.

General Availability: Azure App Health Extension - Rich Health States

Application Health Extension, Rich Health States allows for more detailed health reporting on your VM applications.

General availability: Improved scaling model for Azure Functions with Target Based Scaling

Scaling improvements for Service Bus, Event Hubs, Storage Queues, and Cosmos DB are now available for the Azure Functions Consumption and Premium plans.

General availability: Azure DevOps 2023 Q1

In Q1 we delivered multiple features across our services, including security improvements and new features prioritized based on customer feedback.

General availability: Read replicas for Azure Database for PostgreSQL Flexible Server

Improve performance and scale of read-intensive workloads with read replicas for Azure Database for PostgreSQL Flexible Server.

Azure SQL—Public preview updates for mid-April 2023

Public preview enhancements and updates released for Azure SQL in mid-April 2023

Generally Available: Azure Database for PostgreSQL - Flexible Server in the Australia Central region.

Introducing Azure Database for PostgreSQL – Flexible Server in the Australia Central region. This expansion provides a simplified provisioning experience with the openness of the PostgreSQL database community version and more.

Generally Available: New burstable SKUs for Azure Database for PostgreSQL - Flexible Server

Introducing new burstable SKUs (B4, B8, B12, B16, B20) for Azure Database for PostgreSQL Flexible Server, providing a cost-effective deployment solution for burstable workloads.

Public preview: Azure Container Apps offers new plan and pricing structure

You can now run your apps using serverless, consumption-based compute, and optionally run your apps on dedicated workload profiles that offer more CPU and memory if needed.

Public Preview of query performance insight for Azure Database for PostgreSQL- Flexible Server

Introducing the public preview of query performance insight for Azure Database for PostgreSQL - Flexible Server, a new feature to help you improve the overall performance of your database.

Hotpatch now available for Windows Server VMs on Azure with desktop experience

Hotpatch is now available for Windows Server Azure edition VMs running the desktop experience, allowing you to patch and install updates to Windows Server virtual machines on Azure without rebooting.

Generally Available: Azure Cosmos DB serverless container with 1 TB storage

Get more flexibility and scalability with Azure Cosmos DB Serverless containers, now with expanded storage up to 1 TB and increased RU burstability.

Generally available: Static Web Apps support for Python 3.10

 

You can now build Static Web Apps applications using Python 3.10

Public preview: Azure Functions V4 programming model for Node.js

Introducing a more intuitive and idiomatic experience for JavaScript and TypeScript developers writing Azure Functions apps

Public preview: Azure Container Apps supports user defined routes (UDR) and smaller subnets

You can now define UDRs to manage how outbound traffic is routed for your container app environment’s subnet

General Availability: App Configuration geo-replication

Replicate your application configuration data across supported regions to create redundancy, reduce latency, and distribute request load.

Public Preview: Database-is-alive metrics for monitoring Azure Postgres Flexible Server database availability.

Monitor database availability for Azure Database for PostgreSQL – Flexible Server via the Database-is-alive metric.

Private Preview: Enable Trusted launch on your existing Azure Gen2 VMs

Use this preview to enable Trusted launch on your existing Gen2 VMs and improve their foundational security.

General Availability: NGINXaaS - Azure Native ISV Service

Natively integrated software as a service (SaaS) solution for advanced traffic management and monitoring on Azure.

Public Preview: Azure Chaos Studio is now available in Sweden Central region

 

Azure Chaos Studio is now available in Sweden Central region.

Azure Monitor managed service for Prometheus has updated its AKS add-on to support Windows nodes

Azure Monitor managed service for Prometheus has updated its AKS metrics add-on to support Prometheus metric collection from the Windows nodes in your AKS clusters.


  

All_Hava_Diagrams

Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.

Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
 
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
 
Check out the 14 day free trial here (No credit card required and includes a forever free tier):


Learn More!

 

Topics: aws azure gcp news
Team Hava

Written by Team Hava

The Hava content team

Featured