Here's the weekly cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday March 24th 2023.
All the latest Hava news can be found on our LinkedIn Newsletter.
Of course we'd love to keep in touch at the other usual places. Come and say hello on:
AWS Updates and Releases
This week, AWS announced the release of an allow listing feature that helps you test and enable the new fine-grained IAM permissions for AWS Billing, Cost Management, and Account services.
On January 11, 2023, AWS announced the retirement, by July 6, 2023, of the existing IAM actions for the AWS Billing, Cost Management, and Account consoles under the aws-portal service prefix, along with two actions under the purchase-orders namespace: purchase-orders:ViewPurchaseOrders and purchase-orders:ModifyPurchaseOrders.
New fine-grained IAM actions were launched to allow specific access control to services that are needed for an individual user’s role. The self-service allow listing feature enables you to switch AWS accounts within your organization between the new fine-grained IAM actions and the existing IAM actions.
You can also test the new fine-grained actions in your management account or across member accounts within your organization. Based on the testing outcome, you can continue to use the new fine-grained actions or revert to the existing IAM actions (to be deprecated by July 6, 2023).
With this feature, you can decide to complete the migration to new IAM actions ahead of the July 6, 2023 retirement date or extend (until July 6, 2023) the use of existing IAM actions for AWS accounts or AWS Organizations created on or after March 6, 2023. This self-service allow listing feature will also be retired on July 6, 2023.
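To make the migration concrete, here is a minimal sketch of what swapping a legacy aws-portal statement for a fine-grained one looks like in an IAM policy document. The fine-grained action names below are illustrative assumptions; consult AWS's published mapping table for the exact equivalents of the aws-portal actions in your own policies.

```python
import json

# Legacy statement: aws-portal actions retire on July 6, 2023
legacy_statement = {
    "Effect": "Allow",
    "Action": ["aws-portal:ViewBilling", "aws-portal:ViewPaymentMethods"],
    "Resource": "*",
}

# Fine-grained replacement. These action names are illustrative placeholders;
# check the official AWS mapping documentation for the real equivalents.
fine_grained_statement = {
    "Effect": "Allow",
    "Action": ["billing:GetBillingData", "payments:ListPaymentPreferences"],
    "Resource": "*",
}

policy = {"Version": "2012-10-17", "Statement": [fine_grained_statement]}
print(json.dumps(policy, indent=2))
```

During the testing window described above, you can flip an account between the two styles and verify that console access still works before committing to the fine-grained actions.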
AWS are excited to announce that Amazon GameLift now supports per-second billing for both Linux and Windows instances. Amazon GameLift is a fully managed solution that allows you to manage and scale dedicated game servers for session-based multiplayer games. With this update, customers will only pay for Amazon GameLift usage in one second increments, with a minimum of 1 minute.
Customers needing low-latency game servers able to scale with fluctuating player demand have benefited from per second billing for Amazon GameLift Linux instances. Based on customer feedback, we have now extended per second billing to Amazon GameLift Windows instances so that developers can focus on building their games instead of trying to maximize their usage to the hour across a large number of instances running for irregular time periods.
Amazon DocumentDB (with MongoDB compatibility) Elastic Clusters are now available in three Asia Pacific Regions: Singapore, Sydney, and Tokyo. DocumentDB Elastic Clusters are a new type of DocumentDB cluster that enables you to elastically scale your document database to handle millions of reads and writes per second with petabytes of storage.
With Amazon DocumentDB Elastic Clusters, you can leverage the MongoDB Sharding API to create scalable collections that can be petabytes in size. You can start with Amazon DocumentDB Elastic Clusters for small applications and scale the clusters to handle millions of reads and writes per second, and petabytes of storage capacity as applications grow.
Scaling Amazon DocumentDB Elastic Clusters is as simple as changing the number of cluster shards in the console and the rest is handled by the Amazon DocumentDB service, and can be as fast as minutes compared to hours when done manually. You can also scale down to save on cost at any time.
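Since Elastic Clusters expose the MongoDB Sharding API, creating a sharded collection is just the standard shardCollection admin command. The database, collection, and key names below are illustrative; a sketch of the command document a MongoDB client would send:

```python
# The shardCollection admin command as issued through the MongoDB API.
# "shop.orders" and "customerId" are hypothetical names for illustration.
shard_cmd = {
    "shardCollection": "shop.orders",   # <database>.<collection>
    "key": {"customerId": "hashed"},    # hashed shard key spreads writes evenly
}

# With pymongo connected to an Elastic Cluster, this would be run as:
#   client.admin.command(shard_cmd)
print(shard_cmd)
```

A hashed shard key is a common choice here because it distributes writes across shards without hot-spotting on monotonically increasing keys.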
AWS are excited to announce the availability of AWS Service Catalog in the AWS Middle East (UAE) Region. AWS Service Catalog enables customers to create, govern, and manage a catalog of Infrastructure as Code (IaC) templates that are approved for use on AWS. These IaC templates can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
They are easily excited.
Service Catalog helps you centrally curate and share commonly deployed templates across teams to achieve consistent governance and meet compliance requirements. End-users such as engineers, database administrators, and data scientists can then quickly discover and self-service approved AWS resources that they need to use to perform their daily job functions.
With Service Catalog, organizations can control which IaC templates and versions are available, what is configured in each of the available services, and who can access each template, based on individual, group, department, or cost center.
Service Catalog is used by enterprises, system integrators, and managed service providers to organize, govern, and provision resources on AWS.
Amazon Interactive Video Service (Amazon IVS) now lets developers combine video from multiple hosts into the source for a live stream. With this feature, Amazon IVS adds a new resource called a stage. A stage is a virtual space where participants can exchange audio and video in real time.
You then broadcast a stage to Amazon IVS channels to reach a larger audience. You can also build applications where audience members can be brought “on stage” to contribute to the live conversation.
When using the multiple host feature, you are charged for the duration of time each host is participating or “on stage” in addition to normal rates for video input and output for your live channel. Visit the Amazon IVS pricing page for more information.
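As a sketch of how the new stage resource is wired up, the request shapes below target the Amazon IVS real-time APIs (the boto3 client name is "ivs-realtime", with CreateStage and CreateParticipantToken operations). The stage name, ARN, and user IDs are placeholders.

```python
# Request shape for creating a stage (a virtual space hosts join).
create_stage_params = {"name": "game-show-stage"}  # placeholder name

# One participant token per host that will publish audio/video to the stage.
# The stage ARN and userId below are hypothetical placeholders.
token_params = {
    "stageArn": "arn:aws:ivs:us-east-1:123456789012:stage/EXAMPLE",
    "userId": "host-1",
    "capabilities": ["PUBLISH", "SUBSCRIBE"],  # host can send and receive media
}

# With credentials configured, the calls would look like:
#   ivs = boto3.client("ivs-realtime")
#   stage = ivs.create_stage(**create_stage_params)
#   token = ivs.create_participant_token(**token_params)
print(create_stage_params, token_params)
```

Audience members you bring "on stage" would get their own tokens; you then broadcast the composited stage to an IVS channel as described above.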
Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
Amazon Simple Notification Service (Amazon SNS) now supports setting content-type request headers for HTTP/S notifications. This enables your topic subscribers to create a DeliveryPolicy that specifies the content-type value that Amazon SNS assigns to their HTTP/S notifications, such as application/json, application/xml, or text/plain. With this launch, applications can receive their notifications in a more predictable format.
Amazon SNS is a messaging service for Application-to-Application (A2A) and Application-to-Person (A2P) communication. The A2A functionality provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications.
These applications include Amazon Simple Queue Service (SQS), Amazon Kinesis Data Firehose, AWS Lambda, and HTTP/S endpoints. The A2P functionality enables you to communicate with your customers via mobile text messages (SMS), mobile push notifications, and email notifications.
Amazon SNS HTTP content-type header support is available in all public AWS Regions and the AWS GovCloud (US) Regions. You can start using this new capability today, via the AWS Management Console, AWS Software Development Kit (SDK), Amazon SNS Command Line Interface (CLI), and the Amazon SNS Application Programming Interface (API). You may also provision your DeliveryPolicy for an Amazon SNS subscription via AWS CloudFormation.
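A minimal sketch of what such a subscription DeliveryPolicy looks like, assuming the requestPolicy/headerContentType shape from the SNS delivery policy schema (the subscription ARN in the comment is a placeholder):

```python
import json

# Subscription DeliveryPolicy asking SNS to deliver HTTP/S notifications
# with a JSON content-type header instead of the default.
delivery_policy = {
    "requestPolicy": {"headerContentType": "application/json"}
}
attribute_value = json.dumps(delivery_policy)

# Applied to a subscription via boto3 (ARN is a placeholder):
#   sns = boto3.client("sns")
#   sns.set_subscription_attributes(
#       SubscriptionArn="arn:aws:sns:us-east-1:123456789012:my-topic:abcd-1234",
#       AttributeName="DeliveryPolicy",
#       AttributeValue=attribute_value,
#   )
print(attribute_value)
```

Subscribers that parse notifications strictly by content type (for example, JSON-only webhook receivers) no longer need to special-case SNS's default header.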
Deadline 10.2.1 is now generally available and adds new functionality with support for launching and managing Spot Fleets in multiple AWS regions from the same Spot Event Plugin and a new worker version tag for instances running on Amazon EC2.
Having Spot Fleets in multiple AWS Regions from the same Spot Event Plugin allows you to easily scale your rendering to additional capacity in more Regions while managing it from a single Deadline installation. The new version tag communicates your Deadline version to AWS and will be automatically added to Deadline worker instances running on Amazon EC2.
The tag will help AWS notify you via the AWS Health Dashboard about worker instances running versions of Deadline that have known issues, relevant fixes, or are no longer supported.
Starting this week, Amazon Security Lake (Preview) is available in AWS Regions Asia Pacific (Singapore), Europe (London), and South America (Sao Paulo). You can now automatically centralize your security data from cloud, on-premises, and custom sources into a purpose-built data lake stored in your account.
Security Lake makes it easier to analyze security data so that you can get a more complete understanding of your security across the entire organization. Security Lake automatically gathers and manages all your security data across accounts and Regions. You can use your preferred analytics tools while retaining control and ownership of your security data.
Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard. This standard helps normalize and combine security data from AWS and a broad range of enterprise security data sources. Now, your analysts and security engineers can get broad visibility to investigate and respond to security events and improve your security across the cloud and on premises.
With this Region expansion, Security Lake is now available in ten regions globally, including US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and South America (Sao Paulo).
AWS IoT TwinMaker is now available in the AWS GovCloud (US-West) Region, extending the service footprint to seven AWS Regions.
AWS IoT TwinMaker is a managed service that makes it easier for developers to build digital twins of real-world systems such as buildings, factories, industrial equipment, and production lines. AWS IoT TwinMaker helps you optimize building operations, increase production output, and improve equipment performance.
With the ability to use existing data from multiple sources, create virtual representations of any physical environment, and combine existing 3D models with real-world data, you can now harness digital twins to create a holistic view of your operations faster and with less effort.
To get started, log in to the AWS Management Console, navigate to the AWS IoT TwinMaker console, and check out a digital twin dashboard to see what you can achieve with AWS IoT TwinMaker. For a full list of AWS Regions where AWS IoT TwinMaker is available, visit the AWS Region table. To learn more, please visit the AWS IoT TwinMaker website or the developer guide.
AWS Resilience Hub has added Amazon Elastic Kubernetes Service (Amazon EKS) as a supported resource. Resilience Hub provides a single place to define, validate, and track the resilience of your applications so that you can avoid unnecessary downtime caused by software, infrastructure, or operational disruptions. Amazon EKS is a managed service to run Kubernetes in the AWS cloud and on-premises data centers.
You can now add Amazon EKS clusters to new or existing Resilience Hub applications and receive assessments and recommendations for improving resilience. Resilience Hub lets you add the clusters directly or through AWS CloudFormation, Terraform, AWS Resource Groups, and AWS Service Catalog AppRegistry.
You can add one or more Amazon EKS clusters in one or more AWS Regions with one or more namespaces in each cluster. This allows Resilience Hub to provide single-Region and cross-Region assessments and recommendations. Resilience Hub can analyze the overall resilience of an Amazon EKS cluster and can examine Kubernetes deployments, replicas, ReplicationControllers, and pods. Resilience Hub also supports stateless Amazon EKS workloads.
In addition, Resilience Hub will now assess Amazon Elastic File System Replication and Availability Zone configuration, as well as Amazon S3 Multi-Region Access Points, Amazon S3 Replication Time Control, and AWS Backup for S3 point-in-time recovery configuration.
The new capabilities are available in all of the AWS Regions where Resilience Hub is supported. See the AWS Regional Services List for the most up-to-date availability information.
Amazon Redshift launched a new getting-started experience with the Amazon Redshift Serverless free trial. This will be the recommended path for first-time users in Regions where Amazon Redshift Serverless is already available, and will replace the Redshift DC2.large-based free trial for provisioned clusters. In Regions where Amazon Redshift Serverless is not yet available, customers can continue to use the two-month free trial of a DC2.large node in provisioned clusters.
First-time users of Amazon Redshift Serverless continue to be eligible for a $300 credit, with a 90-day expiration date, towards their compute usage. For an easier path to get started, Amazon Redshift also recently launched a new, lower data warehouse base capacity configuration of 8 RPUs (Redshift Processing Units) in Amazon Redshift Serverless, so you have more flexibility to start small and scale as needed to try out a variety of workloads leveraging the free trial.
AWS announced the availability of AWS Backup for Amazon S3 in Asia Pacific (Jakarta) and Middle East (UAE) Regions. AWS Backup is a policy-based, fully managed, cost-effective solution that enables you to centralize and automate data protection of Amazon S3 along with other AWS services (spanning compute, storage, and databases) and third-party applications.
Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.
With this launch Amazon S3 backups is now available in the following Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain, UAE), Africa (Cape Town). For more information on regional availability and pricing, see AWS Backup pricing page.
Amazon Aurora now supports the disaster recovery capabilities of Global Database and cross-Region DB cluster snapshot copying in the Asia Pacific (Melbourne) Region.
Amazon Aurora Global Database is a single database that can span up to six AWS Regions, enabling disaster recovery from region-wide outages and low latency global reads. Starting today, you can create an Aurora global database in the Asia Pacific (Melbourne) Region for replicating writes to other secondary AWS Regions with a typical latency of less than one second, enabling both fast failover with minimal data loss and low latency global reads.
Aurora Global Database is available for both the MySQL-compatible and PostgreSQL-compatible editions of Aurora. For information about Aurora global database versions supported, see Using Amazon Aurora global databases.
With this release, cross-Region copying of DB cluster snapshots, created either automatically or manually, is now also available in Asia Pacific (Melbourne) for your data retention, compliance, and/or disaster recovery needs. For more information, see Copying a DB cluster snapshot.
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
Amazon SageMaker Data Wrangler now supports OAuth-based authentication with identity providers including Okta, Microsoft Azure AD, and Ping Federate to access data in Snowflake for machine learning (ML). Data Wrangler reduces the time it takes to aggregate and prepare data for ML from weeks to minutes using a visual interface in Amazon SageMaker Studio.
This launch benefits customers who want to use a single identity provider to manage their users, groups, and access control across all applications, including Snowflake. Once your administrators configure Snowflake OAuth access for Data Wrangler, you can log in using your organization's identity provider when connecting from Data Wrangler to Snowflake to bring in data for ML.
You can join data from other popular data sources such as Amazon S3, Amazon Athena, Amazon Redshift, Amazon EMR and over 40 SaaS applications supported by Data Wrangler to create the right data set for ML. You can quickly understand data quality, clean the data, and create features with 300+ built in analysis and data transformations using Data Wrangler’s visual interface.
You can also train and deploy models with SageMaker Autopilot, and operationalize the data preparation process in a feature engineering, training, or inference pipeline using the integration with SageMaker Pipelines, all from Data Wrangler.
AWS Security Hub is now available in the Asia Pacific (Hyderabad), Europe (Spain), and Europe (Zurich) AWS Regions. You can now use Security Hub to centrally view and manage the security posture of your AWS accounts in those Regions and take advantage of more than 80 security controls to automatically check your security posture.
Available globally, Security Hub gives you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security findings. You can also continuously monitor your environment using automated security checks based on standards such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, the Payment Card Industry Data Security Standard, and the National Institute of Standards and Technology (NIST) SP 800-53.
You can also take action on these findings by using Amazon CloudWatch Events rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or to custom remediation playbooks.
AWS are announcing the availability of Route 53 Resolver Query Logging in the Asia Pacific (Jakarta) Region. Route 53 Resolver Query Logging enables you to log DNS queries that originate in your Amazon Virtual Private Clouds (Amazon VPCs). With query logging enabled, you can see which domain names have been queried, the AWS resources from which the queries originated - including source IP and instance ID - and the responses that were received.
Route 53 Resolver is the Amazon DNS server that is available by default in all Amazon VPCs. Route 53 Resolver responds to DNS queries from AWS resources within a VPC for public DNS records, Amazon VPC-specific DNS names, and Amazon Route 53 private hosted zones.
With Route 53 Resolver Query Logging, customers can log DNS queries and responses for queries originating from within their VPCs, whether those queries are answered locally by Route 53 Resolver, are resolved over the public internet, or are forwarded to on-premises DNS servers via Resolver Endpoints.
You can share your query logging configurations across multiple accounts using AWS Resource Access Manager (RAM). You can also choose to send your query logs to Amazon S3, Amazon CloudWatch Logs, or Amazon Kinesis Data Firehose.
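A sketch of the request parameters for creating a query logging configuration with the Route 53 Resolver API (boto3's create_resolver_query_log_config); the S3 bucket ARN and names below are hypothetical placeholders:

```python
# Parameters for route53resolver.create_resolver_query_log_config.
# The destination ARN may also point at a CloudWatch Logs log group
# or a Kinesis Data Firehose delivery stream.
config_params = {
    "Name": "vpc-query-logs",                                   # placeholder
    "DestinationArn": "arn:aws:s3:::example-dns-query-log-bucket",
    "CreatorRequestId": "2023-03-24-roundup-example",           # idempotency token
}

# With credentials configured:
#   r53r = boto3.client("route53resolver")
#   config = r53r.create_resolver_query_log_config(**config_params)
# Then associate the config with each VPC you want logged via
#   r53r.associate_resolver_query_log_config(
#       ResolverQueryLogConfigId=..., ResourceId=vpc_id)
print(config_params)
```

After association, queries from the VPC start appearing at the chosen destination without any changes to the workloads inside it.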
AWS CloudFormation has expanded the availability of the 'AWS::LanguageExtensions' language transform to the Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich), and Middle East (UAE) Regions. When declared in a template, the transform enables extensions to the template language in AWS CloudFormation.
The language extensions transform expands the functionality of the base CloudFormation JSON/YAML template language.
With this launch, you can use intrinsic functions for length (Fn::Length) and JSON string conversion (Fn::ToJsonString), along with support for intrinsic functions and pseudo-parameter references in update and deletion policies, in these five new Regions. You use intrinsic functions in your templates to assign values to properties that are not available until runtime.
For example, you can use the Fn::ToJsonString intrinsic function to convert an object or array to its corresponding JSON string. You can read our AWS blog post on language extensions transform for detailed use cases.
Additionally, the language extension transform supports default parameter values and additional intrinsic functions in Fn::FindInMap. You can use these features to minimize the size of your CloudFormation templates, and improve their readability.
For example, you can combine intrinsic functions such as Fn::Select and Fn::Split to dynamically extract and return a string from a given parameter. This returned string value can then be used within Fn::FindInMap to map to the desired Mappings section.
You can automate inputs to your Mapping logic with fewer lines of code, instead of declaring multiple conditions. To see other examples of Fn::FindInMap enhancements, refer to our user guide.
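As a sketch, the minimal JSON template below (built as a Python dict for illustration) declares the transform and uses Fn::ToJsonString to serialize a structured value into the string a tag value requires at deploy time. The resource and tag names are hypothetical.

```python
import json

# Minimal template declaring the AWS::LanguageExtensions transform so
# Fn::ToJsonString can embed a structured value where a string is required.
template = {
    "Transform": "AWS::LanguageExtensions",
    "Resources": {
        "DemoTopic": {                      # hypothetical resource name
            "Type": "AWS::SNS::Topic",
            "Properties": {
                "Tags": [
                    {
                        "Key": "Config",
                        # Serialized into a JSON string at deploy time
                        "Value": {"Fn::ToJsonString": {"env": "test", "ttl": 300}},
                    }
                ]
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

Without the transform, embedding that object would require hand-maintaining an escaped JSON string in the template.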
Amazon Web Services (AWS) announces the expansion of Amazon CloudFront in Peru with a new edge location in Lima. Customers in Peru can expect up to a 50 percent improvement in latency, on average, for data delivered through the new edge location, giving end users faster, more responsive applications.
The new AWS edge location brings the full suite of benefits provided by Amazon CloudFront, a highly distributed and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video. Amazon CloudFront uses a global network of 450+ points of presence (POPs) and 13 regional edge caches in 90+ cities across 49 countries to deliver content to end users.
All Amazon CloudFront edge locations are protected against infrastructure-level DDoS threats with AWS Shield Standard that uses always-on network flow monitoring and in-line mitigation to minimize application latency and downtime. You also have the ability to add additional layers of security for applications to protect them against common web exploits and bot attacks by enabling AWS WAF.
Amazon WorkDocs now offers a Search Resources API that lets builders programmatically search files and folders they have permission to access. The new Search Resources API provides keyword search and a wide array of search-result filtering capabilities.
Customers can apply filters such as content type, last modified date range, file owner, and labels. The new WorkDocs Search Resources API is available via AWS CLI, SDK and REST endpoints.
Amazon WorkDocs Search Resources API is billed at the rate for a SEARCH request. Please refer to the Amazon WorkDocs API Pricing page for the current pricing information. The ability to perform search programmatically is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland).
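A sketch of a request to boto3's workdocs search_resources operation combining a keyword query with a label filter and a modified-date range; the query text, label, and dates are illustrative, and the exact filter field names should be confirmed against the SearchResources API reference:

```python
# Request sketch for workdocs.search_resources (field names assumed from
# the SearchResources API; values are hypothetical examples).
search_params = {
    "QueryText": "quarterly report",
    "Filters": {
        "Labels": ["finance"],
        "ModifiedRange": {
            "StartValue": "2023-01-01T00:00:00Z",
            "EndValue": "2023-03-24T00:00:00Z",
        },
    },
}

# With credentials configured:
#   workdocs = boto3.client("workdocs")
#   results = workdocs.search_resources(**search_params)
print(search_params)
```

Each call is billed as a SEARCH request, so batching filters into one query is cheaper than issuing several narrower ones.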
AWS Batch now enables you to configure ephemeral storage up to 200 GiB in size on AWS Fargate type jobs. This means that you can expand the available ephemeral storage beyond the default value of 20 GiB for your jobs when using Fargate as the compute platform on AWS Batch.
With this launch, you no longer need to limit the size of the Docker images you use to run machine learning inferences, or the size of the data sets in the container for data processing, to under 20 GiB. You can configure the ephemeral storage to be as large as 200 GiB to run larger workloads.
You can define the size of the ephemeral volume needed by an AWS Batch job using the ephemeralStorage parameter, which lets you set the ephemeral storage anywhere from 21 GiB up to 200 GiB in job definitions. This parameter is available only when using Fargate as the compute platform and can be set using the RegisterJobDefinition API operation.
Now you can configure the size of the ephemeral storage needed for a job using Fargate as the compute platform on AWS Batch. You no longer have the restriction of using the default limit of 20 GiB. This configuration is now available in all AWS Regions where AWS Batch is currently available. To learn more about AWS Batch, see the AWS Batch User Guide. To learn more about the AWS Batch API, see the AWS Batch API Reference.
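A sketch of a Fargate job definition for boto3's register_job_definition with expanded ephemeral storage; the job name, image URI, and role ARN are placeholders:

```python
# Job definition sketch for batch.register_job_definition.
# Image URI and role ARN are hypothetical placeholders.
job_def = {
    "jobDefinitionName": "large-scratch-job",
    "type": "container",
    "platformCapabilities": ["FARGATE"],
    "containerProperties": {
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},
        ],
        "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        # New: expand scratch space beyond the 20 GiB default (valid: 21-200)
        "ephemeralStorage": {"sizeInGiB": 100},
    },
}

# With credentials configured:
#   boto3.client("batch").register_job_definition(**job_def)
print(job_def["containerProperties"]["ephemeralStorage"])
```

Jobs launched from this definition get the larger scratch volume automatically; omit the parameter to keep the 20 GiB default.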
ECS capacity providers are designed to automatically scale your Amazon EC2 instances based on the capacity requirements of your applications, so that you can seamlessly scale your applications without having to manage scaling of the underlying infrastructure. When you use capacity providers, ECS schedules tasks to run even when there’s no capacity available, by automatically adding capacity in the auto scaling group; without capacity providers, task launches would fail immediately in this scenario.
However, even when you use capacity providers, there may be circumstances when your tasks fail to start, for example if none of the EC2 instances match your task placement constraints, or if the instances in your Auto Scaling group have fewer resources than the resource requirements (CPU or memory) in your task definition.
You can now use Amazon S3 Event Notifications with Amazon EventBridge in the AWS GovCloud (US) Regions to build, scale, and deploy event-driven applications based on changes to the data you store in Amazon S3.
This makes it easier to act on new data in Amazon S3, build multiple applications that react to object changes simultaneously, and replay past events, all without creating additional copies of objects or developing new software.
With Amazon EventBridge, you can use advanced filtering and routing capabilities, sending events to targets including AWS Lambda, Amazon Kinesis, AWS Step Functions, and Amazon SQS. S3 Event Notifications with EventBridge can simplify your architecture by allowing you to match any attribute, or a combination of attributes, for objects in an S3 event.
This allows you to filter events by object size, time range, or other event metadata fields before invoking a target AWS Lambda function or other destinations. For example, if millions of audio files are uploaded to an S3 bucket, you can filter for specific files and send an event notification to multiple workflows.
Through these multiple workflows, the same event can be used to transcribe an audio file, change its media format for streaming, and apply machine learning to generate a sentiment score. You can also archive and replay S3 events, giving you the ability to reprocess an event in case of an error or if a new application module is added.
Amazon S3 Event Notifications in Amazon EventBridge are now available in all AWS Regions.
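The audio-file scenario above can be sketched as a single EventBridge event pattern that matches "Object Created" events only for .wav uploads over 1 MiB; the bucket name is a hypothetical example:

```python
import json

# EventBridge event pattern for S3 Object Created events, filtered by
# key suffix and object size before any target is invoked.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["example-audio-bucket"]},   # placeholder bucket
        "object": {
            "key": [{"suffix": ".wav"}],                # audio files only
            "size": [{"numeric": [">", 1048576]}],      # larger than 1 MiB
        },
    },
}
print(json.dumps(pattern, indent=2))
```

Attached to a rule (for example via events.put_rule with EventPattern=json.dumps(pattern)), this fans the matching uploads out to your transcription, transcoding, and sentiment-scoring workflows without the bucket needing per-application notification configs.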
Amazon Relational Database Service (RDS) Proxy now supports PostgreSQL major version 15. PostgreSQL 15 includes new features such as the SQL standard "MERGE" command for conditional SQL queries, performance improvements for both in-memory and disk-based sorting, support for two-phase commit, and row/column filtering for logical replication.
The PostgreSQL 15 release also adds support for server-side compression with Gzip, LZ4, or Zstandard (zstd) using pg_basebackup. For more details about this release, refer to the PostgreSQL community announcement. You can also enforce SCRAM (Salted Challenge Response Authentication Mechanism) password-based authentication for your proxy.
RDS Proxy is a fully managed and highly available database proxy for Aurora and RDS databases. RDS Proxy helps improve application scalability, resiliency, and security.
Application Load Balancer (ALB) now supports version 1.3 of the Transport Layer Security (TLS) protocol, enabling you to optimize the performance of your backend application servers while helping to keep your workloads secure.
TLS 1.3 on ALB works by offloading encryption and decryption of TLS traffic from your application servers to the load balancer. TLS 1.3 is optimized for performance and security by using one round trip (1-RTT) TLS handshakes, and only supporting ciphers that provide perfect forward secrecy.
Using TLS with ALB provides you with the tools to more easily manage your application security, enabling you to improve the security posture of your applications. ALB allows you to centralize the deployment of SSL certificates using ALB’s integration with AWS Certificate Manager (ACM) and AWS Identity and Access Management (IAM).
You can also analyze TLS traffic patterns and troubleshoot issues using ALB TLS metrics and access logs. ALB also allows you to use predefined security policies, which control the ciphers and protocols that your ALB presents to your clients.
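Once a TLS 1.3 security policy is attached to a listener, you can verify it from the client side; the sketch below uses Python's standard ssl module to build a context that refuses anything below TLS 1.3 (the ALB hostname in the comment is a placeholder):

```python
import ssl

# Client context that only negotiates TLS 1.3, useful for confirming an
# ALB listener configured with a TLS 1.3 security policy.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Against a live listener (hostname is a placeholder):
#   import socket
#   with socket.create_connection(("my-alb.example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="my-alb.example.com") as tls:
#           print(tls.version())   # a TLS 1.3 listener reports "TLSv1.3"
```

If the handshake fails with this context, the listener's security policy is still capped at TLS 1.2 or below.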
AWS announced the general availability of AWS Clean Rooms, a fully managed analytics service that helps customers collaborate with their partners without sharing or copying one another’s raw data. Companies can create a clean room in minutes, removing the need to build, manage, or maintain their own solutions or move data outside of AWS.
Companies across multiple industries need to complement their data with external partners’ data to build a more complete view of their business. For example, brands and media publishers want to collaborate with their partners using datasets stored across many marketing channels and applications to improve consumer engagement and deliver better, more relevant campaigns.
Consumer brands also want to collaborate with retailers to learn insights about purchase trends. AWS Clean Rooms makes these collaborations possible while protecting underlying data with a broad set of privacy-enhancing capabilities and cryptographic controls.
Amazon VPC Reachability Analyzer now allows you to view the network reachability between your source and destination in your virtual private clouds (VPCs) through Gateway Load Balancers, AWS Network Firewalls, and AWS PrivateLink services. In addition, you can also check network reachability between a source resource and your specified destination IP address.
Reachability Analyzer is a configuration analysis tool that enables you to check network reachability between a source resource and a destination resource in your virtual private clouds (VPCs). With support for Gateway Load Balancers and AWS Network Firewall, you can now check whether the network reachability between your source and destination is potentially being blocked due to a firewall rule in your AWS Network Firewall or firewall appliance behind the Gateway Load Balancer.
You can also trace and troubleshoot network reachability to AWS PrivateLink services and their target EC2 instances within your AWS Organization. Lastly, you can now trace and troubleshoot network reachability across your VPCs to hosts based on their IP address. In cases where the destination IP address is outside AWS, Reachability Analyzer determines reachability between the source and the relevant network gateway on AWS, such as an Internet Gateway or AWS VPN Gateway.
AWS announced the opening of a new AWS Direct Connect location within the Equinix MC1 data center in Muscat, Oman. By connecting your network to AWS at the new location, you gain private, direct access to all public AWS Regions (except China (Beijing), operated by Sinnet, and China (Ningxia), operated by NWCD), AWS GovCloud (US) Regions, and AWS Local Zones.
The new Muscat location offers dedicated 1 Gbps and 10 Gbps connections, with MACsec encryption available for 10 Gbps connections. Using this new location to reach resources running in the Muscat AWS Local Zone is an ideal solution for applications that require single-digit millisecond latency or local data processing.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. Using the Direct Connect SiteLink feature, you can send data between Direct Connect locations to create private network connections between the offices and data centers in your global network.
Amazon Web Services announces the ability to run the Windows 11 operating system on your Amazon WorkSpaces hosted on hardware that is dedicated to you in the AWS Cloud. To do this, you must Bring Your Own License (BYOL) and provide a Windows 11 license.
You can now offer a consistent desktop experience to your users when they switch between on-premise and virtual desktop deployments. Earlier, BYOL option was only available for Windows 10 operating system.
While Windows Server 10 BYOL WorkSpaces are still supported, you can now choose to run WorkSpaces powered by Windows 11 and benefit from new features such as Trust Platform Module 2.0 (TPM 2.0), and Unified Extensible Framework Interface (UEFI) Secure Boot. Windows 11 is supported for Standard, Performance, Power and PowerPro bundles. For the best experience with video conferencing we recommend using Power or PowerPro bundles only.
To take advantage of this option, your organization must meet the licensing requirements set by Microsoft, and you must commit to running a minimum number of WorkSpaces in a given AWS Region each month. To learn more about this option and the eligibility requirements, please see the Amazon WorkSpaces BYOL documentation and FAQs.
You can now download Corretto 20 from the Corretto downloads page. This latest version supports the most recent OpenJDK feature release and is available on Linux, Windows, and macOS.
Highlights of OpenJDK 20 include a second preview of Record Patterns, which are used to more easily work with record-based objects. You can use record patterns and type patterns together to create more powerful data navigation.
Virtual threads are also in their second preview; they make it easier to write multi-threaded applications. OpenJDK 20 also introduces a new incubation feature, scoped values, which lets you share data between threads.
OpenJDK 20 includes a second preview of the Foreign Function & Memory API, which makes it easier to integrate with native code. There are updates to the Pattern Matching for switch statements preview feature and the incubating Vector API. Structured Concurrency, which makes tasks distributed over multiple threads appear as a single unit of work, is now in its second incubation release.
Application Auto Scaling customers can use tags to manage the AWS Identity and Access Management (IAM) permissions related to their auto scaled resources. Prior to this launch there was no way for customers to control IAM permissions based on custom resource tags. Now, permissions can be centrally managed for all resources tagged with the same key-value pairs.
Custom tags enable customers to categorize their AWS resources by purpose, owner, or environment. With this launch, tags can be used to drive IAM permissions based on those categories. For example, customers can now choose to give selected users the permissions to register or de-register a resource from Application Auto Scaling in only certain environments based on the tag values.
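To make this concrete, here is an illustrative IAM policy document (expressed as a Python dict) that permits registering and deregistering Application Auto Scaling targets only for resources carrying a specific tag. The tag key and value (`environment=staging`) are assumptions chosen for the example.

```python
import json

# Illustrative sketch only: an IAM policy that scopes Application Auto Scaling
# register/deregister permissions to resources tagged environment=staging.
# The tag key/value are hypothetical choices for this example.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "application-autoscaling:RegisterScalableTarget",
                "application-autoscaling:DeregisterScalableTarget",
            ],
            "Resource": "*",
            "Condition": {
                # Allow the action only when the resource carries this tag
                "StringEquals": {"aws:ResourceTag/environment": "staging"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to a user or role, a policy shaped like this would deny the same actions against resources tagged for other environments.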
Amazon Elastic Compute Cloud (Amazon EC2) C6in instances, Amazon EC2 M6in and M6idn instances, and Amazon EC2 R6in and R6idn instances are now available in bare metal sizes. These sixth-generation network optimized instances are powered by 3rd Generation Intel Xeon Scalable processors. They are the first x86-based instances offering up to 200 Gbps of network bandwidth. These instances deliver up to 2x more network bandwidth and up to 2x higher packet-processing performance than comparable fifth-generation instances.
The instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.
Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted business critical applications. Workloads on bare metal instances continue to take advantage of all the comprehensive services and features of the AWS Cloud, such as Elastic Load Balancing (ELB) and Amazon Virtual Private Cloud (VPC).
These instances deliver up to 80 Gbps of Amazon Elastic Block Store (EBS) bandwidth and up to 350K IOPS, the highest Amazon EBS performance across EC2 instances. Elastic Fabric Adapter (EFA) networking support is offered on 32xlarge and metal sizes. EFA-enabled instances provide lower latency and improved cluster performance for workloads such as distributed computing and high performance computing (HPC). For applications that require high-speed, low-latency local block storage, M6idn and R6idn instances are equipped with up to 7.6 TB of local NVMe-based solid state disk (SSD).
AWS are excited to announce support for Graviton3-based instances in Amazon EMR. You can now use Amazon EC2 C7g instances with EMR on EC2 and EMR on EKS. AWS Graviton3 processors are the latest in the AWS Graviton processor family. They provide better compute and floating point performance, and support DDR5 memory, which provides 50% more memory bandwidth than DDR4. Amazon EMR support for Amazon EC2 C7g (Graviton3) instances improves cost-performance for Apache Spark workloads by up to 13%.
You can improve the cost-performance of Spark workloads running on EMR on EKS by up to 15% by migrating to Graviton3-based instances. The cost-performance improvement is derived from the industry-standard TPC-DS benchmark, using queries from the Spark SQL Performance Tests GitHub repo with fixes applied. Performance on your workloads may vary and may not yield similar cost savings.
Amazon Connect now provides the ability to configure multiple IAM roles that can be assigned to a single user when using SAML 2.0, enabling you to support user access from multiple identity providers simultaneously.
For example, if you are migrating identity providers, you can configure multiple IAM roles associated to a single user and that user will be able to access Amazon Connect from both providers. To learn more about configuring IAM roles for SAML 2.0 in Amazon Connect, see the Amazon Connect Administrator Guide.
Anthos Config Management
Alpha release of AssignImage mutator, which allows mutation of Docker image paths. For reference, see AssignImage under Mutation in the OPA Gatekeeper documentation.
The constraint template library includes a new template: VerifyDeprecatedAPI. For reference, see the Constraint template library.
The constraint template library's K8sPodsRequireSecurityContext template now supports an exempt list of images using the new exemptImages parameter. For reference, see the Constraint template library.
The constraint template library's K8sRequireCosNodeImage template now supports an exempt list of OS images using the new exemptOsImages parameter. For reference, see the Constraint template library.
Policy Controller has been updated to include a more recent build of OPA Gatekeeper (hash: 8170c5f).
Stopped exposing the "unable to load /repo/source/error.json" transient error in the RootSync and RepoSync API.
Anthos Service Mesh
In April 2023, enabling mesh.googleapis.com will automatically enable networksecurity.googleapis.com. These APIs will be required for managed Anthos Service Mesh. You will be able to safely disable them on a project or fleet that has no managed Anthos Service Mesh clusters.
Configuring Certificate Authority connectivity through an HTTP CONNECT-based proxy is now generally available (GA). For more information, see Configure Certificate Authority connectivity through a proxy.
With Envoy versions 1.22 and later, the default minimum TLS version for servers changed from 1.0 to 1.2. Therefore, for Anthos Service Mesh version 1.14 and later, the default minimum TLS version for gateway servers is 1.2. If you need to configure the minimum TLS version on an Anthos Service Mesh gateway server to be lower than 1.2, you can configure the minProtocolVersion parameter.
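As a sketch of where that setting lives, the dict below mirrors the shape of an Istio-style Gateway manifest (normally written in YAML) with the TLS minimum version set explicitly. The gateway name, host, and credential name are made up for illustration.

```python
# Sketch only: an Istio-style Gateway manifest expressed as a Python dict,
# pinning the gateway server's minimum TLS version via minProtocolVersion.
# All names and hosts here are hypothetical.
gateway = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "Gateway",
    "metadata": {"name": "example-gateway"},
    "spec": {
        "selector": {"istio": "ingressgateway"},
        "servers": [
            {
                "port": {"number": 443, "name": "https", "protocol": "HTTPS"},
                "hosts": ["example.com"],
                "tls": {
                    "mode": "SIMPLE",
                    "credentialName": "example-cert",  # hypothetical TLS secret
                    # Explicit floor for negotiated TLS; lower values are
                    # possible here but weaken security.
                    "minProtocolVersion": "TLSV1_2",
                },
            }
        ],
    },
}

print(gateway["spec"]["servers"][0]["tls"]["minProtocolVersion"])
```

Serialized to YAML and applied with kubectl, a manifest of this shape controls the TLS floor for that gateway server only; workload-side minimums are managed by the mesh as described above.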
In Anthos Service Mesh versions 1.9 and earlier, the server-side minimum TLS version for Anthos Service Mesh workloads was 1.0. In Anthos Service Mesh versions 1.10 and later, the server-side minimum TLS version for Anthos Service Mesh workloads is configured to be 1.2 to improve TLS security. For better security, Anthos Service Mesh does not support configuring the minimum workload TLS version to be lower than 1.2.
Anthos clusters on AWS (previous generation) will be deprecated as of April 1, 2023. Therefore, Anthos Service Mesh will not support Anthos clusters on AWS (previous generation) starting April 1, 2023. For more information, see the deprecation announcement.
Apigee Integrated Portal
On March 23, 2023, we released an updated version of the Apigee integrated portal.
Users can now enable the content security policy feature for their portals in Apigee and Apigee hybrid. Previously, this feature was available in Apigee Edge only.
On March 22, GCP released an updated version of Apigee X.
Customize SSL certs for access routing when provisioning Apigee Pay-as-you-go organizations.
Users can now select existing self-managed SSL certs when customizing access routing during Apigee Pay-as-you-go provisioning. For more information, see Step 4: Customize access routing.
Receive Cloud console notifications when Pay-as-you-go provisioning completes.
While provisioning is in progress, users can navigate away from the Apigee provisioning page and monitor notifications in the Cloud console for updates when provisioning completes.
App Engine flexible environment Go
Backup and DR
If you are in a region where Hyperdisk Extreme is available, a mount as a new Compute Engine instance may fail unless you change the boot disk type away from Hyperdisk Extreme. This is because images cannot be created using Hyperdisk Extreme disks. In addition, the target instance requires 64 CPUs or more, and each disk being created must be 64 GB or larger.
Bare Metal Solution
You can now provision multiple storage volumes to attach to existing servers in a single configuration request through the Google Cloud console intake form.
BigQuery now supports Unicode column naming, including international character sets, alphanumeric characters, and special characters. Existing columns can adopt these new capabilities using the RENAME COLUMN command. This feature is now in preview.
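A minimal sketch of what such a rename might look like, building the DDL statement as a string; the dataset, table, and column names are invented for the example.

```python
# Hedged sketch: renaming an existing column to a Unicode name via BigQuery
# DDL. The dataset/table/column names below are hypothetical.
table = "mydataset.mytable"
old_col, new_col = "total_cost", "coût_total"  # Unicode target name

# Backticks quote the new name since it contains a non-ASCII character.
ddl = f"ALTER TABLE `{table}` RENAME COLUMN {old_col} TO `{new_col}`"
print(ddl)

# With google-cloud-bigquery installed and credentials configured,
# the statement could be executed like so:
# from google.cloud import bigquery
# bigquery.Client().query(ddl).result()
```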
Cloud Bigtable is now available in the europe-west12 (Turin) region. For more information, see Bigtable locations.
You now have the option to use default logs buckets stored within your own project in the same region as your build. You can enable this feature by setting the defaultLogsBucketBehavior option in your build config file. When you use this option, you gain more control over data residency. Using logs within your own project also allows you to fine-tune access permissions and object lifecycle settings for your build logs. This feature is generally available. For more information, see the Store and manage build logs page.
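A short sketch of what that opt-in looks like, showing a Cloud Build config as a Python dict (in practice this would be a cloudbuild.yaml file); the build step shown is an arbitrary placeholder.

```python
import json

# Sketch of a Cloud Build config (normally cloudbuild.yaml, shown here as a
# dict) that opts in to a project-owned, regional default logs bucket.
# The build step is a placeholder for illustration.
build_config = {
    "steps": [
        {"name": "gcr.io/cloud-builders/gcloud", "args": ["version"]}
    ],
    "options": {
        # Store logs in a default bucket in your own project and region
        "defaultLogsBucketBehavior": "REGIONAL_USER_OWNED_BUCKET"
    },
}

print(json.dumps(build_config, indent=2))
```

With this option set, log objects land in a bucket your project owns, so you can attach your own IAM bindings and lifecycle rules to them.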
In addition to the existing value of 1500, Cloud Interconnect now lets you configure your VLAN attachments with an MTU value of 1460. This configuration setting is available for all VLAN attachments for both Partner Interconnect and Dedicated Interconnect.
To minimize the risk of packet loss, Google recommends that you configure the same MTU value on your VPC network, on-premises routers, and associated VLAN attachments whenever possible.
The default MTU for VLAN attachments that you create for Cloud Interconnect is still 1440.
Cloud KMS is available in the following region:
For more information, see Cloud KMS locations.
Cloud Load Balancing
Network Load Balancing now supports user-specified weights on the backend service. This allows you to manage the load distribution across your load balancer's backends and avoid overloading individual backends.
This feature is in General Availability.
You can create Cloud Spanner regional instances in Turin, Italy (europe-west12).
You can now use Google Cloud tags to group and organize your Cloud Spanner instances, and to condition Identity and Access Management (IAM) policies based on whether an instance has a specific tag. For more information, see Control access and organize instances with tags.
Cloud SQL for MySQL
Cloud SQL for MySQL now supports minor version 8.0.32. To upgrade your existing instance to the new version, see Upgrade the database minor version.
The changes listed in the June 10 Release Notes entry for faster machine type changes have been postponed for Cloud SQL for MySQL.
The following functions and expressions have been added to the GoogleSQL dialect:
Cloud Storage is now available in Turin, Italy (europe-west12).
Objects smaller than 128KiB stored in buckets with Autoclass enabled are no longer managed by Autoclass.
- Such objects are not subject to the Autoclass management fee and are statically set to Standard Storage.
- Any such objects in Autoclass buckets that are currently stored in a different storage class are being transitioned to Standard Storage automatically and free of charge.
Your automated processes might fail if they use API response data about your resource-based commitment quotas. For more information, see Known issues.
Newly created clusters write disk_assignments platform logs to Cloud Logging, indicating when VM instances and persistent disks are allocated to a workstation.
Dataflow is now available in Turin (europe-west12).
Dataform in Preview is available in the following regions:
Dataproc is now available in the europe-west12 region (Turin).
Document AI Warehouse
- The RuleSet API logic now auto-populates the RuleId field during the create RuleSet call and allows rule updates using an existing RuleId.
- Published action messages now include the schema name, document name, RuleSet name, Rule ID, Action ID, and trigger type by default.
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in control plane upgrades and node upgrades for existing clusters. For more information on versioning and upgrades, see GKE versioning and support and Upgrades.
Starting on March 21, 2023, traffic to k8s.gcr.io will be redirected to registry.k8s.io, following the community announcement. This change will happen gradually to reduce disruption, and should be transparent to the majority of GKE clusters.
To check for edge cases, and mitigate a potential impact, follow the step-by-step guidance in k8s.gcr.io Redirect to registry.k8s.io - What You Need to Know.
Google Cloud Armor
Pub/Sub is now available in Turin, Italy (europe-west12).
Generally available: In projects protected by a service perimeter, when using Eventarc to route events to Workflows destinations, you can create a new push subscription through Eventarc where the endpoint is set to a Workflows execution. To learn more, see Set up a service perimeter using VPC Service Controls.
The ability to dismiss a recommendation is now generally available via the Recommender API.
The export to BigQuery feature now supports custom pricing and non-project scoped recommendations.
The global Recommender Viewer role is now available, granting view access to all insights and recommendations.
Secret Manager is now available in the following region:
For more information, see Secret Manager locations.
Vertex AI Vision
Model event management with Cloud Functions and Pub/Sub
The Vertex AI Vision event management feature lets you generate and send event notifications through Pub/Sub topics by:
- Enabling supported models* to output to a Cloud Function for data processing and event generation.
- Providing in-product support for sending generated events to configured Pub/Sub topics.
- Offering easy configuration of the event management system in Vertex AI Vision Studio.
* GA event management is available for the following models:
- Occupancy analytics pre-trained model
- Vertex AI custom-trained models imported into a Vertex AI Vision application
For more information, see Enable model event notification with Cloud Functions and Pub/Sub.
Vertex AI Prediction
You can now use N2, N2D, C2, and C2D machine types to serve predictions.
Microsoft Azure Releases And Updates
Workspaces for Azure API Management is now in public preview. This new capability enables granular access control in multi-team Azure API Management deployments.
An updated Conversion service now allows you to easily onboard indoor maps, and a new Features API lets you read and edit your map features.
Azure Virtual Network Manager (AVNM) is now generally available. AVNM is a one-stop shop for managing the connectivity and security of your network resources at scale.
Azure Traffic Manager now enables reserving domain labels across all traffic manager profiles.
Azure Active Directory (AAD) support has been announced for the JMS 2.0 API of Azure Service Bus.
ASP.NET web app migration to Azure App Service using PowerShell scripts is now generally available.
Azure Maps is now HIPAA compliant for protected health information (PHI), supporting scenarios such as calculating travel time and distance from a patient's location to the nearest healthcare facility.
You can now use separate encryption keys for each customer in a single hierarchical namespace enabled storage account.
Azure Data Explorer now supports ingestion of data from .NET Applications via the Serilog Sink.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there, and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts or stand-alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (No credit card required and includes a forever free tier):