Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday, November 11th 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
AWS Updates and Releases
AWS AppConfig, a feature of AWS Systems Manager, has achieved FedRAMP High authority to operate in AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions. You can now use AWS AppConfig to more quickly and safely update software, and build applications for workloads that require FedRAMP High authorization.
AWS AppConfig allows customers to release new features and capabilities faster and more safely. Customers can configure, validate, and deploy feature flags and dynamic configuration data to update application behavior at runtime, without doing a code deployment.
Feature flags allow customers to deploy new capabilities to production but hide them behind a feature flag, which is a simple configuration value. When engineers are ready to start releasing the feature, they can simply enable the flag for a subset of users and measure the impact.
Subsequently, features can be rolled out slowly or quickly to all users. AWS AppConfig provides safety guardrails to help you release the new capability with confidence.
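As a simplified, local illustration of the percentage-rollout pattern described above (not the AppConfig API itself, and with hypothetical flag fields), a flag evaluation might hash each user ID into a stable bucket so the same users stay in the rollout as the percentage grows:

```python
import hashlib

def flag_enabled(flag: dict, user_id: str) -> bool:
    """Evaluate a simple feature flag: fully off, fully on,
    or rolled out to a deterministic percentage of users."""
    if not flag.get("enabled", False):
        return False
    rollout = flag.get("rollout_percent", 100)
    # Hash the user id so each user consistently lands in the
    # same bucket from 0-99 across evaluations.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout

print(flag_enabled({"enabled": True, "rollout_percent": 100}, "user-123"))  # True
```

Because the bucket is derived from a hash rather than a random draw, raising `rollout_percent` from 25 to 50 only adds users; nobody who already had the feature loses it.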
Starting this week, Amazon SageMaker JumpStart provides two additional state-of-the-art foundational models, Bloom for text generation and Stable Diffusion for image generation. Customers can access newly added models through the SageMaker Python SDK APIs and SageMaker JumpStart UI inside SageMaker Studio.
Bloom can be used to complete sentences or generate long paragraphs in 46 different languages, and the generated text often appears indistinguishable from human-written text. This release includes the Bloom-560m, Bloom-1b1, and Bloom-1b7 models for text generation. Stable Diffusion generates images from a given text prompt, and is known for producing realistic images that closely match the input description.
Amazon SageMaker JumpStart is the Machine Learning (ML) hub of SageMaker that offers 350+ built-in algorithms, pre-trained models, and pre-built solution templates to help customers get started with ML fast. Pre-trained models hosted in JumpStart are State-of-the-Art (SOTA) open source models from popular model hubs such as TensorFlow, PyTorch, Hugging Face and MXNet, and support popular ML tasks such as object detection, text classification, and text generation.
To help data scientists and ML practitioners get started quickly and securely, model artifacts are stored in AWS-managed repositories and come with training and inference scripts compatible with SageMaker features. Customers can fine-tune models using their own data or deploy them as-is for inference.
AWS Storage Gateway expands availability to the AWS Middle East (UAE) Region, enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads.
AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud.
Amazon Simple Email Service (Amazon SES) is now available in the Asia Pacific (Jakarta) AWS Region. Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more, visit the Amazon SES product page.
With this launch, Amazon SES is available in 22 AWS regions globally: US East (Ohio, N. Virginia), US West (N. California, Oregon), AWS GovCloud (US-West), Asia Pacific (Osaka, Mumbai, Sydney, Singapore, Seoul, Tokyo, Jakarta), Canada (Central), Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan), Middle East (Bahrain), South America (São Paulo), and Africa (Cape Town).
The Amazon Time Sync Service is now available over the Internet. Built on Amazon's network infrastructure, the Amazon Time Sync Service utilizes a global fleet of redundant satellite-connected and atomic reference clocks in AWS regions to deliver current time readings of the Coordinated Universal Time (UTC) global standard. Previously, Amazon Time Sync was available through EC2 instances.
Now, you can access the Amazon Time Sync Service at time.aws.com as a publicly available NTP service, in addition to the connection provided directly to EC2 instances. This means your devices and infrastructure outside of AWS, such as IoT devices and on-premises infrastructure, can synchronize to the same highly available time sources that were previously accessible only from within our data centers.
In the event of a leap second, the Amazon Time Sync service automatically handles this for you by smoothing out the addition, or removal, of the leap second with a 24-hour linear smear from noon to noon UTC.
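The smear arithmetic is straightforward: the clock absorbs the extra (or removed) second linearly over the 86,400 seconds between noon and noon. A small sketch of the offset calculation:

```python
def smear_offset(seconds_into_smear: float, leap: int = 1) -> float:
    """Offset (in seconds) applied to the smeared clock at a given
    point in the 24-hour (86,400 s) noon-to-noon linear smear.
    leap is +1 for an inserted leap second, -1 for a removed one."""
    SMEAR_WINDOW = 24 * 60 * 60  # 86,400 seconds
    # Clamp so the offset is 0 before the smear and the full leap
    # second after it completes.
    frac = min(max(seconds_into_smear / SMEAR_WINDOW, 0.0), 1.0)
    return leap * frac

# Halfway through the smear the clock has absorbed half the leap second.
print(smear_offset(43_200))  # 0.5
```

The benefit of a smear is that no client ever observes a 61-second minute or a repeated second; the clock simply runs imperceptibly slow (or fast) for 24 hours.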
Amazon Aurora Serverless v2, the next version of Aurora Serverless, is now available in Asia Pacific (Osaka), Asia Pacific (Jakarta) and Middle East (Bahrain).
Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. Aurora Serverless v2 scales instantly to support even the most demanding applications, delivering up to 90% cost savings compared to provisioning for peak capacity. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. You don’t need to manage database capacity, and you pay for only the resources consumed by your application.
Aurora Serverless v2 provides the full breadth of Amazon Aurora capabilities, including Multi-AZ support, Global Database, RDS Proxy, and read replicas. Amazon Aurora Serverless v2 is ideal for a broad set of applications. For example, enterprises that have hundreds of thousands of applications, or software as a service (SaaS) vendors that have multi-tenant environments with hundreds or thousands of databases, can use Aurora Serverless v2 to manage database capacity across the entire fleet.
Aurora Serverless v2 is available for the MySQL 8.0-, PostgreSQL 13- and PostgreSQL 14-compatible editions of Amazon Aurora.
Starting this week, AWS customers can run Apple macOS Ventura (13.0) as Amazon Machine Images (AMIs) on Amazon EC2 Mac instances. Apple macOS Ventura is the current major macOS release from Apple, and introduces multiple new capabilities and performance improvements over prior macOS versions. Apple macOS Ventura supports running Xcode versions 14.0 and later, which include the latest SDKs for iOS, iPadOS, macOS, tvOS, and watchOS.
Apple macOS Ventura AMIs are AWS supported images that are backed by Amazon Elastic Block Store (EBS). These AMIs include the AWS Command Line Interface, Command Line Tools for Xcode, Amazon SSM Agent, and Homebrew. The AWS Homebrew Tap includes the latest versions of AWS packages included in the AMIs.
EC2 Mac instances enable customers to run on-demand macOS workloads in the AWS cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. With EC2 Mac instances, developers creating apps for iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari can provision and access macOS environments within minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing.
Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor version 8.0.31. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the numerous fixes, performance improvements, and new functionality added by the MySQL community.
You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including automatic minor version upgrades, in the Amazon RDS User Guide.
Amazon WorkSpaces now offers an API to quickly modify the streaming protocol on your existing WorkSpaces without requiring you to go through the WorkSpace migration process. This feature allows you to retain the root volumes after you switch your WorkSpaces streaming protocol from PC-over-IP (PCoIP) to WorkSpaces Streaming Protocol (WSP), or vice versa.
End users of WorkSpaces can now easily take advantage of WSP features like Webcam and Smart Card support. WorkSpaces administrators will benefit from this feature because they will not have to manage custom images and bundles configured solely for the purpose of utilizing multiple streaming protocols supported by WorkSpaces.
This new API is available in 13 regions, including Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), South America (São Paulo), US East (N. Virginia), US West (Oregon), and Africa (Cape Town).
This week, Amazon Elastic Container Service (Amazon ECS) has announced the availability of ECS task scale-in protection, a new feature that enables customers to protect long-running tasks from being terminated by scale-in events and deployments.
This feature helps simplify the orchestration of queue-processing asynchronous applications, such as video transcoding jobs, where some tasks may run for hours even when cumulative service utilization is low, or where a new code version is being deployed while some tasks have not yet finished ongoing work that would be expensive to reprocess.
Amazon ECS has a built-in service auto scaling feature that lets you set policies to adjust the desired count of tasks of an ECS service in response to changes in traffic patterns. This enables customers to build applications that scale for peak traffic conditions and reduce compute costs during periods of low utilization.
Customers have told AWS that certain applications require a mechanism to safeguard mission-critical tasks from termination by scale-in events during times of low utilization or during service deployments. Customers can now use a new Amazon ECS container agent endpoint or the new Amazon ECS UpdateTaskProtection API to protect tasks from termination by auto scaling and deployment events.
With task scale-in protection, customers have a simplified mechanism for orchestrating their long-running applications with ECS, while also benefiting from the performance and cost-savings of service auto scaling, without needing to invest in custom tooling.
AWS IoT Device Defender, a fully managed service for auditing and monitoring devices connected to AWS IoT, now supports a new audit check for revoked intermediate Certificate Authority (CA). If a CA revokes an intermediate CA because it is potentially compromised, then all certificates issued by that intermediate CA are also potentially compromised and invalid. This new audit check identifies active device certificates issued by a revoked intermediate CA, and helps customers review and replace these active device certificates.
To use this feature, you can enable the new audit check in the Device Defender audit section. If you have not enabled Device Defender audit, you can do so with one click in Device Defender to help secure your IoT devices. If the CA certificates have an issuer endpoint declared in an X.509 extension, this audit check identifies the revoked intermediate CAs and reports the active device certificates issued by them.
You can disable the compromised device certificates using a pre-built mitigation action or initiate a custom mitigation through a Lambda function. More detail on the Device Defender audit CA check can be found in the AWS IoT Device Defender documentation. This feature is available in all regions where AWS IoT Device Defender is available.
Amazon Simple Notification Service (Amazon SNS) now supports a higher default quota for subscription filter policies. With the increased quota, you can now have up to 10,000 subscription filter policies per account, and can apply up to 200 subscription filter policies per topic. By default, topic subscribers receive every message published to a topic. With subscription filter policies, subscribers can filter out unwanted messages, simplifying their architecture and optimizing the utilization of their resources.
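As a simplified sketch of how exact-match filter policies behave (real SNS policies also support prefix, numeric-range, and anything-but operators, which are omitted here), a subscriber-side view of the matching logic might look like this:

```python
def matches_filter_policy(policy: dict, attributes: dict) -> bool:
    """Simplified SNS-style filter policy check: every policy key
    must be present in the message attributes, and its value must be
    one of the allowed values listed for that key."""
    for key, allowed in policy.items():
        if attributes.get(key) not in allowed:
            return False
    return True

# Subscription only wants order lifecycle events it cares about.
policy = {"event_type": ["order_placed", "order_cancelled"]}
print(matches_filter_policy(policy, {"event_type": "order_placed"}))   # True
print(matches_filter_policy(policy, {"event_type": "order_shipped"}))  # False
```

In the actual service this evaluation happens inside SNS before delivery, which is what lets subscribers avoid paying to receive and discard unwanted messages.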
Amazon SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication. The A2A pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. The A2P functionality enables you to send messages to users at scale, via SMS, mobile push, and email.
Amazon QuickSight now supports monitoring of SPICE consumption by sending metrics to Amazon CloudWatch. QuickSight developers and administrators can use these metrics to observe SPICE consumption and proactively detect when a QuickSight account is approaching its SPICE capacity limit, which might otherwise result in failed dataset ingestions. This allows them to provide their readers with a consistent and uninterrupted experience on QuickSight. For more information, see the Amazon QuickSight documentation.
Administrators and developers can also use the CloudWatch console to graph metric data generated by Amazon QuickSight. For more information, see Graphing metrics in the Amazon CloudWatch User Guide. They can also create a CloudWatch alarm that monitors CloudWatch metrics for their QuickSight assets. CloudWatch will automatically send a notification when the metric reaches a specified threshold. For examples, see Creating Amazon CloudWatch Alarms in the Amazon CloudWatch User Guide.
This week, AWS announced the general availability of AWS Wavelength on the Vodafone 4G/5G network in Manchester, United Kingdom. Wavelength Zones are now available in two locations in the UK, including the previously announced Wavelength Zone in London.
AWS Wavelength Zones embed AWS compute and storage services at the edge of communications service providers’ 5G networks while providing seamless access to cloud services running in an AWS Region. By doing so, AWS Wavelength minimizes the latency and network hops required to connect from a 5G device to an application hosted on AWS. With AWS Wavelength and Vodafone 5G, application developers can now build the ultra-low latency applications needed for use cases like smart factories, interactive live streaming, autonomous vehicles, video analytics, machine learning inference at the edge, and augmented and virtual reality-enhanced experiences.
Customers in the AWS Asia Pacific (Jakarta) Region can now use AWS Transfer Family.
AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (EFS). It supports file transfers over SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS), and Applicability Statement 2 (AS2), in addition to common file processing steps, helping customers simplify and accelerate the migration of B2B file transfer workflows to AWS.
This week, AWS announced the availability of next-generation General Purpose gp3 storage volumes for Amazon Relational Database Service (Amazon RDS). Amazon RDS gp3 volumes give you the flexibility to provision storage performance independently of storage capacity, paying only for the resources you need.
Every gp3 volume provides you the ability to select from 20 GiB to 64 TiB of storage capacity, with a baseline storage performance of 3,000 IOPS included with the price of storage. For workloads that need even more performance, you can scale up to 64,000 IOPS for an additional cost.
Amazon RDS offers storage types that differ in performance characteristics and price. General Purpose (SSD) storage offers a cost-effective option that is ideal for a broad range of small to medium sized database workloads and development and testing environments. Provisioned IOPS storage is designed to meet the needs of business-critical, performance-sensitive transactional database workloads, particularly workloads that require low I/O latency and consistent I/O throughput and are often running larger data sets.
You can launch a database with a gp3 volume with a few clicks in the Amazon RDS Management Console or via a single command in the AWS Command Line Interface (AWS CLI). gp3 volumes are available in the AWS commercial and AWS GovCloud (US) Regions and are supported on Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server.
AWS Security Hub now supports automated security checks aligned to the Center for Internet Security’s (CIS) AWS Foundations Benchmark version 1.4.0 requirements, Level 1 and 2 (CIS v1.4.0). Security Hub’s CIS v1.4.0 standard includes up to 39 automated rules that conduct continuous checks against 38 CIS v1.4.0 requirements across 8 AWS services. The CIS v1.4.0 standard is supported in addition to the CIS v1.2.0 standard which was previously available in Security Hub.
The new standard is now available in all public AWS Regions where Security Hub is available and in AWS GovCloud (US). To see and enable the new standard and the checks within it, visit the Standards page in Security Hub. You can also enable the standard using the BatchEnableStandards API or use our example script to enable the standard across many accounts.
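A sketch of the request payload for the `BatchEnableStandards` API mentioned above, as it would be passed to `boto3.client("securityhub").batch_enable_standards(**request)`. The standards ARN follows the documented format for the CIS v1.4.0 standard, but verify it (and the region) against the current Security Hub documentation before use:

```python
region = "us-east-1"  # example region; substitute your own

# securityhub:BatchEnableStandards takes a list of standards
# subscription requests, each identified by a standards ARN.
request = {
    "StandardsSubscriptionRequests": [
        {
            "StandardsArn": (
                f"arn:aws:securityhub:{region}::"
                "standards/cis-aws-foundations-benchmark/v/1.4.0"
            )
        }
    ]
}
print(request["StandardsSubscriptionRequests"][0]["StandardsArn"])
```

Enabling the standard this way across many accounts is what the example script referenced above automates.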
You can enable your 30-day free trial of AWS Security Hub with a single-click in the AWS Management console. Please see the AWS Regions page for all the regions where AWS Security Hub is available. To learn more about AWS Security Hub capabilities, see the AWS Security Hub documentation, and to start your 30-day free trial see the AWS Security Hub free trial page.
Starting this week, AWS customers can create custom line items in AWS Billing Conductor (ABC) that span across multiple billing periods. For customers who want to apply consistent discounts or fees to specific billing groups, recurring custom line items remove the need to create a new custom line item each month for the same purpose.
For example, customers can use a flat recurring custom line item to distribute credits or use percent-based recurring custom line items to apply a managed service fee or tax. ABC applies recurring custom line items at the beginning of each new billing period.
AWS Billing Conductor is available in all public AWS Regions, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD.
Starting this week, Amazon EC2 C6id, M6id and R6id instances are available in AWS Region Asia Pacific (Sydney). These instances are powered by 3rd generation Intel Xeon Scalable Ice Lake processors with an all-core turbo frequency of 3.5 GHz and up to 7.6 TB of local NVMe-based SSD block-level storage.
Compared to previous-generation instances, all three instance families deliver up to 15% better price performance. C6id offers up to 138% more storage (TB per vCPU) and 56% lower cost per TB, while M6id and R6id offer up to 58% more storage per vCPU and 34% lower cost per TB.
C6id, M6id and R6id are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of high-speed, low-latency local storage to scale the performance of applications such as video encoding, image manipulation and other forms of media processing, data logging, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics.
Amazon MQ now provides support for ActiveMQ 5.17.2, which includes several fixes and enhancements over the previously supported version, ActiveMQ 5.17.1. The default connection limits for ActiveMQ brokers on Amazon MQ have been increased to 300 connections per transport protocol for mq.t2.micro and mq.t3.micro broker types, and 2,000 connections per transport protocol for all other supported broker types.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. You can reduce your operational burden by using Amazon MQ to manage the provisioning, setup, and maintenance of message brokers. Amazon MQ connects to your current applications with industry-standard APIs and protocols to help you easily migrate to AWS without having to rewrite code.
The new default limits allow you to safely increase the number of connections to your ActiveMQ broker for higher performance. These new default limits will be applied to all new brokers and existing ones after the next restart or maintenance window, regardless of the version of ActiveMQ on the broker.
Starting this week, you can configure the set of instance types used when requesting EC2 capacity with attribute-based instance type selection.
Attribute-based instance type selection is a feature for Amazon EC2 Auto Scaling, EC2 Fleet, and Spot Fleet that makes it easy to create and manage instance type flexible capacity requests. With attribute-based instance type selection, you can define your instance requirements such as number of vCPUs and memory, and let EC2 Auto Scaling or Fleet pick the instances that fit the requirements you provided.
Until now, attribute-based instance type selection allowed you to exclude specific instance types from the selection through the excluded instance types list, but did not allow the reverse: restricting consideration to only specific instance types. For example, to allow only the M6i instance family, you would have had to exclude over 400 instance types.
Now, you can simply put M6i in the allowed instance types list to narrow down the selection set before other selection attributes, such as vCPU and memory, are applied. This is useful for workloads that have some degree of instance type flexibility but are still limited to specific instance types and need more granular control over which EC2 instance types to run on.
To get started, create or modify an Auto Scaling group or Fleet and specify allowed instance types list and other attributes such as the number of vCPUs and memory that you need. You can specify a list of instance types, families, or generations. Once you are done, EC2 Auto Scaling or Fleet will only consider the instance types that you explicitly included when selecting and launching instances.
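As a sketch, the allowed-instance-types narrowing described above maps to the `InstanceRequirements` structure used in EC2 Auto Scaling and EC2 Fleet launch template overrides. Field names follow the documented API shape, but verify them against your SDK version before relying on them:

```python
# InstanceRequirements block restricting selection to the M6i family
# before the vCPU and memory attributes are applied. The wildcard
# "m6i.*" covers every size in the family, replacing the need to
# exclude 400+ other instance types.
instance_requirements = {
    "VCpuCount": {"Min": 4, "Max": 16},
    "MemoryMiB": {"Min": 16_384},  # at least 16 GiB
    "AllowedInstanceTypes": ["m6i.*"],
}
print(sorted(instance_requirements))
```

You can also list whole generations (for example `"m6i"`, `"c6i"`) or individual sizes, and the service intersects the allowed list with the other attributes when picking instances.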
Starting this week, you can request EC2 instances based on your workload’s network bandwidth requirements through attribute-based instance type selection.
Attribute-based instance type selection is a feature for Amazon EC2 Auto Scaling, EC2 Fleet, and Spot Fleet that makes it easy to create and manage instance type flexible fleets without researching and selecting EC2 instance types. With attribute-based instance type selection, you can simply define your instance requirements, such as number of vCPUs and memory, and let EC2 Auto Scaling or Fleet pick the instances that fit the requirements you provided.
As an added benefit, when EC2 releases new instances, they are automatically considered too. Until now, attribute-based instance type selection did not support selecting instances based on network bandwidth. Now, you can specify your desired network bandwidth as one of the instance selection criteria. This is useful for workloads that depend on network bandwidth such as video streaming, network appliances (e.g., firewalls), or any workloads that require timely transfer of large amounts of data to or from an instance.
To get started, create or modify an Auto Scaling group or Fleet with attribute-based instance type selection and specify your workload’s network bandwidth requirements and other attributes such as the number of vCPUs and memory that you need. You can specify a range for network bandwidth in Gbps or a specific value by setting both minimum and maximum to the same number. Once you are done, EC2 Auto Scaling or Fleet will select and launch instances based on the attributes, purchase option, and allocation strategy you selected.
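A sketch of the same `InstanceRequirements` structure with the new network bandwidth attribute added. The `NetworkBandwidthGbps` field name follows the documented API shape for this launch; confirm it against your SDK version:

```python
# InstanceRequirements block for a bandwidth-sensitive workload.
instance_requirements = {
    "VCpuCount": {"Min": 8, "Max": 32},
    "MemoryMiB": {"Min": 32_768},  # at least 32 GiB
    # Require at least 25 Gbps of network bandwidth. Setting both
    # "Min" and "Max" to the same value pins selection to instances
    # advertising exactly that bandwidth figure.
    "NetworkBandwidthGbps": {"Min": 25},
}
print(instance_requirements["NetworkBandwidthGbps"])
```

Leaving `Max` unset keeps the request open-ended upward, so newly released higher-bandwidth instance types are automatically considered as they launch.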
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Starting this week, you can now use Amazon Polly inside an Amazon Virtual Private Cloud (VPC), instead of connecting over the internet, which allows you to have better control over your network environment.
Starting this week, Amazon Polly customers can use AWS PrivateLink to privately access the Polly APIs from their virtual private cloud. This is available in all AWS Regions where Amazon Polly is supported.
AWS PrivateLink provides private connectivity between VPCs, AWS services, and your networks. You can now manage your AWS resources in your VPC using AWS PrivateLink and meet your organization’s security and compliance requirements. To use AWS PrivateLink, create an interface VPC endpoint for Amazon Polly in your VPC using the Amazon VPC console, SDK, or CLI.
You can also access the VPC endpoint from on-premises environments or from other VPCs using AWS VPN, AWS Direct Connect, or VPC peering. You no longer need to use an internet gateway, Network Address Translation (NAT) devices, or firewall proxies to connect to Amazon Polly.
Amazon Web Services (AWS) announces the preview of Customer Provided Ephemeris support for AWS Ground Station, empowering space vehicle owners to provide their own position and trajectory information for a satellite as it orbits the Earth. Over the next year, satellite rideshare providers are planning to launch more than 100 satellites from around the world.
With Customer Provided Ephemeris, flight operations crews can conduct tracking and telemetry operations for space vehicles until entry into final orbit. Customers can also use this feature to improve the quality of tracking of satellites already in orbit and to modify antenna pointing based on orbital maneuvers.
For satellites that are in operations today, AWS Ground Station provides the data required to point the antenna to the correct satellite during a contact. With Customer Provided Ephemeris, satellite vehicle owners can upload their own position data using either the Two Line Element (TLE) or Orbital Ephemeris Message (OEM) formats for each contact.
Governments, businesses, and universities can use Customer Provided Ephemeris to provide more timely and accurate satellite position information during the Launch and Early Operations (LEOPs) of a new satellite, when no historical data is available. Customer Provided Ephemeris can also be used to account for the use of satellite propulsion systems, which would make previously known trajectory information incorrect.
AWS Ground Station is a fully managed service that lets customers control satellite communications, process satellite data, and scale their satellite operations. Customers can integrate their space workloads with other AWS services in real time using Amazon's low-latency, high-bandwidth global network.
Customers can stream their satellite data to Amazon Elastic Compute Cloud (Amazon EC2) for real-time processing, store data in Amazon Simple Storage Service (Amazon S3) for low cost archiving, or apply AI/ML algorithms to satellite images with Amazon SageMaker. With AWS Ground Station, customers pay only for the actual antenna time that they use.
AWS Ground Station antennas are now available in North America (Ohio, Hawaii, Oregon), Middle East (Bahrain), Europe (Ireland, Stockholm), Asia Pacific (Seoul, Singapore, Sydney), Africa (Cape Town), and South America (Punta Arenas).
Amazon Keyspaces (for Apache Cassandra), a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service, now supports the Murmur3Partitioner.
Partitioners create a numeric token using a hashed value of the partition key. Keyspaces uses this token to distribute data across nodes. Clients can also use these tokens in SELECT operations and WHERE clauses to optimize read and write operations. The Murmur3Partitioner is the preferred partitioner for Cassandra developers.
For existing keyspaces, you can safely change your account-level partitioner at any time. You do not need to reload your Amazon Keyspaces data when you change the partitioner setting. Clients will automatically use the new partitioner setting the next time they connect.
Amazon ElastiCache for Redis now supports Redis 7. This release brings several new features to Amazon ElastiCache for Redis:
- Redis Functions: ElastiCache for Redis 7 adds support for Redis Functions and provides a managed experience enabling developers to execute Lua scripts with application logic stored on the ElastiCache cluster, without requiring clients to re-send the scripts to the server with every connection.
- ACL improvements: ElastiCache for Redis 7 adds support for the next version of Redis Access Control Lists (ACLs). With ElastiCache for Redis 7, clients can now specify multiple sets of permissions on specific keys or keyspaces in Redis.
- Sharded Pub/Sub: Amazon ElastiCache for Redis 7 now gives you the ability to run Redis' Pub/Sub functionality in a sharded way when running ElastiCache in Cluster Mode Enabled (CME). Redis' Pub/Sub capabilities enable publishers to issue messages to any number of subscribers on a channel. With Amazon ElastiCache for Redis 7, channels are bound to a shard in the ElastiCache cluster, eliminating the need to propagate channel information across shards and resulting in improved scalability.
For more details about Amazon ElastiCache for Redis 7 (Enhanced), refer to Supported ElastiCache for Redis versions. You can upgrade the engine version of your cluster or replication group by modifying it and specifying 7 as the engine version.
To learn more about upgrading engine versions, refer to Version Management.
Amazon CloudWatch Logs now supports exporting logs to Amazon Simple Storage Service (S3) buckets encrypted using server-side encryption with KMS keys (SSE-KMS).
CloudWatch customers can export logs within a selected time-range from CloudWatch to S3 buckets in their own or another AWS account. With today’s launch, customers can leverage the enhanced protection and audit trail offered by Amazon S3 buckets encrypted using SSE-KMS as part of their logs exports. Customers can create and manage customer managed keys or use AWS managed keys that are unique to each customer, their service, and their Region.
Customers can use the AWS Management Console or AWS CLI to set up log exports to SSE-KMS encrypted S3 buckets in all AWS regions.
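A sketch of the parameters for the CloudWatch Logs `CreateExportTask` API targeting an SSE-KMS encrypted bucket. The task, log group, and bucket names below are hypothetical, and the bucket's KMS key policy must allow the CloudWatch Logs service principal to use the key:

```python
from datetime import datetime, timezone, timedelta

def ms(dt: datetime) -> int:
    """CloudWatch Logs export timestamps are epoch milliseconds."""
    return int(dt.timestamp() * 1000)

end = datetime(2022, 11, 11, tzinfo=timezone.utc)
start = end - timedelta(days=1)

# Parameters for logs:CreateExportTask, e.g. passed as
# boto3.client("logs").create_export_task(**export_params).
export_params = {
    "taskName": "nightly-export",        # hypothetical
    "logGroupName": "/app/prod",         # hypothetical
    "fromTime": ms(start),
    "to": ms(end),
    "destination": "my-sse-kms-bucket",  # hypothetical bucket name
    "destinationPrefix": "cw-exports",
}
print(export_params["to"] - export_params["fromTime"])  # 86400000
```

Export tasks are asynchronous and one runs per account at a time, so a daily window like this is a common way to batch log archival.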
Starting this week, AWS Firewall Manager supports an Import Existing Network Firewall feature that enables customers to discover existing AWS Network Firewalls and bring them under the central management of AWS Firewall Manager. With this feature, you can see the security coverage provided by existing firewalls across your AWS organization and manage those firewalls without having to instantiate new ones.
Prior to this week, AWS customers using Firewall Manager did not have direct visibility into existing firewall coverage across their accounts. Security teams and individual accounts often ended up duplicating firewall coverage or overriding existing protections, resulting in additional overhead for the teams to resolve. Now, using this feature, you can discover pre-existing firewalls, associate a set of rules with them, and centrally manage those rules across all associated firewalls to meet your security standards.
You can now accelerate repeat queries in Amazon Athena with Query Result Reuse, a new caching feature released today. Repeat queries are SQL queries submitted within a short period of time and produce the same results as one or more previously run queries. In use cases like business intelligence, where interactive analysis in a dashboard can cause multiple identical queries to be run, repeat queries can increase time to insight as each query needs time to read and process data before returning results to the user.
Query Result Reuse works by returning a previously stored query result when a repeat query is submitted. Athena identifies repeat queries automatically for you, so you don’t need to change your existing queries or modify any application code. With Query Result Reuse, repeat queries run up to 5x faster and don’t scan any data, giving you increased productivity for interactive data analysis and improved performance at lower cost.
Using Query Result Reuse is simple and intuitive. From the Athena console, turn on Query Result Reuse using the *Reuse query results* toggle in the query editor. By default, results from previous queries can be reused by new queries for 60 minutes, but you can choose an expiration time that works best for your use case and frequency of updates in your data lake. Query Result Reuse is available through the Athena console, API, AWS SDK, and compatible applications connecting through Athena’s JDBC or ODBC drivers. Query Result Reuse requires Athena engine version 3, which is more performant and offers additional features not available in version 2.
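Conceptually, Query Result Reuse behaves like a time-bounded cache keyed on the query text. The following pure-Python sketch illustrates the idea; Athena's actual matching and storage are managed server-side, and the whitespace normalization here is only an illustrative assumption:

```python
import time

class ResultReuseCache:
    """Illustrative sketch of result reuse: a TTL cache keyed on query text."""

    def __init__(self, max_age_minutes: int = 60):
        self.max_age = max_age_minutes * 60
        self._store = {}  # normalized query text -> (timestamp, result)

    def run(self, sql: str, execute):
        key = " ".join(sql.split())  # normalization is an assumption of this sketch
        hit = self._store.get(key)
        if hit and time.time() - hit[0] < self.max_age:
            return hit[1]            # repeat query: served from cache, no data scanned
        result = execute(sql)        # first run: scan data as usual
        self._store[key] = (time.time(), result)
        return result

calls = []
def fake_engine(sql):
    """Stand-in for the query engine; records how often it actually runs."""
    calls.append(sql)
    return [("row", 1)]

cache = ResultReuseCache(max_age_minutes=60)
first = cache.run("SELECT * FROM sales", fake_engine)
second = cache.run("SELECT *  FROM sales", fake_engine)  # repeat: engine not invoked again
```

The default 60-minute window above mirrors Athena's default reuse age, which you can tune per workload.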
This week, AWS announced AWS Resource Explorer, a managed capability that simplifies the search and discovery of resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Kinesis streams, and Amazon DynamoDB tables, across AWS Regions in your AWS account. AWS Resource Explorer is available at no additional charge to you.
Start your resource search in the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console. From the search results displayed in the console, you can go to your resource’s service console and Region with a single step and take action.
AWS Resource Explorer is generally available in the following AWS Regions, with more Regions coming soon: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (São Paulo).
You can now share Amazon EC2 placement groups across multiple AWS accounts using AWS Resource Access Manager (RAM). When a placement group is shared, instances launched by one AWS account can utilize a placement group created by another account. RAM helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs) in AWS Organizations, and with IAM roles and IAM users for supported resource types.
Placement groups influence the placement of interdependent EC2 instances on the underlying hardware. You can select a placement group strategy that best meets the needs of your workload, whether that be low network latency or increased fault tolerance. With placement group sharing, the benefits offered by placement groups are no longer limited to the boundaries of a single account. For example, if you have workloads under different accounts and want low latency between these workloads, then you can use placement group sharing (with cluster placement strategy) to place the instances across accounts closer to each other.
To share a placement group, create a resource share through AWS RAM. Then, add the resources (placement groups) to the resource share and specify the target accounts that you wish to share the resources with. You can see placement groups that have been shared with your account in the EC2 console’s placement groups pane or by using the describe-placement-groups API. To use a shared placement group, specify the placement group Id while launching your instances.
This feature is now available in all commercial AWS Regions for free.
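Assuming placeholder account IDs, ARNs, and AMI ID, the share-and-launch flow above can be sketched as the two request payloads you would hand to boto3 (shown without live calls):

```python
# Hypothetical ARNs/IDs for illustration only.
PLACEMENT_GROUP_ARN = "arn:aws:ec2:us-east-1:111111111111:placement-group/pg-0abc"
CONSUMER_ACCOUNT = "222222222222"

# Step 1: the owning account creates a RAM resource share, e.g.
# boto3.client("ram").create_resource_share(**share_request)
share_request = {
    "name": "shared-cluster-pg",
    "resourceArns": [PLACEMENT_GROUP_ARN],
    "principals": [CONSUMER_ACCOUNT],
}

# Step 2: the consumer account launches instances into the shared group,
# referencing it by Id, e.g. boto3.client("ec2").run_instances(**launch_request)
launch_request = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "c5.large",
    "MinCount": 2,
    "MaxCount": 2,
    "Placement": {"GroupId": "pg-0abc"},
}
```

Shared placement groups are referenced by group Id rather than name, since names are only unique within the owning account.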
Today Amazon Polly, a service that turns text into lifelike speech, announced the general availability of three new neural Text-to-speech (NTTS) voices: Elin for Swedish, Ida for Norwegian and Suvi for Finnish.
We have designed all three voices to sound friendly and welcoming, making them suitable for general published content, interactive voice response services, and educational use cases. With this launch, Amazon Polly now offers Neural TTS support for 26 languages and language variants.
This week, AWS CloudTrail announced support for a delegated administrator account, which provides customers with the ability to manage organization trails and CloudTrail Lake event data stores from an account other than the management account in AWS Organizations.
Delegated administrator support enables flexibility for customers by allowing the management account to delegate CloudTrail administrative actions to an organization member account, such as their security and logging member account. With this feature, the management account of an organization remains the owner of all CloudTrail organization resources, even when those organization trails or CloudTrail Lake event data store resources are created and managed through the delegated administrator account.
This helps customers maintain continuity of organization-wide CloudTrail audit logs, avoiding disruption when changes are made to their organization in AWS Organizations.
The management account can register or deregister a member account as a delegated administrator for CloudTrail from the Settings page in the CloudTrail console, or through the AWS CLI or API. Once the management account designates a member account as a delegated administrator, users and roles in the delegated administrator account can perform administrative operations such as create, update, query, and delete on their organization’s event data stores and organization trails.
Amazon OpenSearch Service now supports managed VPC endpoints (powered by AWS PrivateLink) to connect to your Amazon OpenSearch Service VPC-enabled domain in a Virtual Private Cloud (VPC). With an Amazon OpenSearch Service managed endpoint, you can now privately access your OpenSearch Service domain within your VPC from your client applications in other VPCs, within the same or across AWS accounts, without using public IPs or requiring traffic to traverse the Internet.
With this release, OpenSearch Service allows you to create an endpoint to an OpenSearch Service domain from another VPC in the same account or in another AWS account. You can either use the OpenSearch Service console or OpenSearch Service APIs to create an OpenSearch Service managed VPC endpoint.
Amazon OpenSearch Service managed VPC endpoints are powered by AWS PrivateLink. If you use OpenSearch Service managed VPC endpoints to access your OpenSearch Service domain from your client applications in another VPC, you will incur standard AWS data transfer charges and the associated cost for the AWS PrivateLink interface endpoint.
You can now register domain names and auto-configure domain name systems (DNS) on Amazon Lightsail. Amazon Lightsail is the easiest way to get started with AWS for users who need a secure, high-performance and reliable virtual private server (VPS) solution with a simple management interface and predictable pricing. With the addition of domain registration, Lightsail users are able to create a unique online address for their website or web application to establish their own personal or business identity on the internet. With domain registration, users also get Lightsail’s DNS management functionality.
Previously, when users built a website or web application in Lightsail, they would need to purchase a domain elsewhere and then use Lightsail’s DNS management to set up their DNS records for a Lightsail website or application.
Now, with the launch of domain registration in Lightsail, users can create and register an Amazon Route 53 domain directly in the Lightsail console with a few clicks. In addition, for domains registered in AWS, Lightsail autoconfigures DNS.
Lightsail now also automates DNS validation of Secure Sockets Layer (SSL) / Transport Layer Security (TLS) certificates. Through this simplified registration and DNS autoconfiguration experience, Lightsail makes it easier to build, configure, and manage application infrastructure for you to get started in the cloud.
You can now use AWS Certificate Manager (ACM) to request and use Elliptic Curve Digital Signature Algorithm (ECDSA) P-256 and P-384 Transport Layer Security (TLS) certificates to secure your network traffic. TLS certificates are used to secure network communications and to establish the identity of websites over the internet as well as resources on private networks. ACM lets you easily provision, manage, and deploy public and private TLS certificates. You can learn more about ECDSA security, performance and compatibility in this AWS Security blog post.
You can use either the ACM console or the request-certificate API with the key-algorithm parameter to issue public and private ECDSA P-256 and P-384 TLS certificates. AWS customers who need TLS certificates with 120+ bit security strength can now use these ECDSA certificates to help meet their compliance needs. ECDSA P-256 and P-384 certificates have a security strength of 128 and 192 bits respectively, compared with the 112-bit strength of the RSA 2048 certificates that you can also issue from ACM.
Security strength is a measure of resilience against brute-force attacks. ACM-issued ECDSA public certificates can be used with supported integrated services such as Application Load Balancer (ALB) and Amazon CloudFront. When used with integrated services, you also get the benefit of managed renewals: ACM will attempt to renew ACM-issued, in-use certificates before expiry and automatically bind the renewed certificates to the integrated service.
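The strengths quoted above follow the usual rule of thumb that an elliptic-curve key provides security of roughly half its bit length, which a quick sketch makes concrete:

```python
def ecdsa_security_bits(curve_bits: int) -> int:
    """Rule of thumb: ECC security strength is roughly half the curve size."""
    return curve_bits // 2

# NIST SP 800-57 rates RSA 2048 at ~112 bits of security strength.
RSA_2048_STRENGTH = 112

p256 = ecdsa_security_bits(256)  # P-256 -> 128-bit strength
p384 = ecdsa_security_bits(384)  # P-384 -> 192-bit strength
```

When requesting such a certificate from ACM, the key-algorithm parameter takes values such as `EC_prime256v1` (P-256) and `EC_secp384r1` (P-384).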
Sensitive data detection and processing in AWS Glue can now detect PII and other sensitive data specific to the United Kingdom (UK) and Japan.
Sensitive data detection in AWS Glue identifies a variety of PII and other sensitive data like credit card numbers. It helps customers take action, such as tracking it for audit purposes or redacting the sensitive information before writing records into a data lake. AWS Glue Studio’s visual, no-code interface lets users include Sensitive Data Detection as a step in a data integration job. Customers can also define their own custom detection patterns for their unique needs.
Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in two additional AWS Regions: US West (N. California) and Europe (Paris). With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds.
Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.
With a few clicks in the AWS Management Console, you can get started with querying data using Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node counts, workload management, scaling, or other manual configurations.
You can take advantage of preloaded sample datasets along with sample queries to kick-start analytics immediately. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot.
With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes and data in your operational databases, such as Amazon Aurora and Amazon Relational Database Service (Amazon RDS). Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.
AWS is pleased to announce the availability of two new features in the AWS Well-Architected Tool (AWS WA Tool)—integrations with AWS Trusted Advisor and AWS Service Catalog AppRegistry—that will help you more easily discover the information needed to answer Well-Architected review questions and shorten your review time.
Previously, customers conducting AWS Well-Architected Framework reviews had to spend time verifying their answers by double-checking their workloads against best practices; there wasn’t a clear link between the workload being reviewed and its associated resources.
With the AWS Trusted Advisor integration, the AWS WA Tool now presents findings based on automated Trusted Advisor resource checks, providing you with further contextual information during your reviews. This change improves the accuracy of your answers and the speed of your review.
The AWS WA Tool is also now integrated with AWS Service Catalog AppRegistry. AWS Service Catalog AppRegistry lets you store your AWS applications, their associated resource collections, and application attribute groups. The integration with AWS Service Catalog AppRegistry provides you with better visibility into which applications are associated with which workloads during the Well-Architected review process, saving you time by tracking and organizing your workload's associated resources.
AWS Private 5G is a managed service that makes it easier to deploy, operate, and scale your own private mobile network, with all required hardware and software provided by AWS. To set up a private mobile network and connect devices, AWS delivers and maintains the following required components: small-cell radio unit, subscriber identity modules (SIM cards), and mobile network software running in the AWS cloud.
This week, AWS is pleased to announce support for scaling your network anytime you need, by ordering additional small-cell radio units to extend coverage or additional SIMs to connect more devices.
From the AWS Management Console, you can order a private mobile network that meets your connectivity requirements. With support for scaling, customers can start with a single radio unit and order additional radio units as their needs grow, or order multiple radio units right away to meet their coverage or capacity requirements. With multiple radio units spanning your network, end devices provisioned within the network have seamless connectivity from any location within the coverage area.
Mobile enterprise network coverage and capacity requirements can vary due to seasonal demands and other factors, requiring the network to evolve with the needs of the business. AWS Private 5G gives enterprises the ability to easily scale the network up or down. To get started, use the AWS Management Console to order the required number of radio units and specify the number of devices you want to connect.
Amazon SageMaker Canvas announces support for correlation matrices for advanced data analysis, thereby expanding capabilities to get insights from your data prior to building ML models. SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code.
SageMaker Canvas provides capabilities to analyze and explore your data such as the ability to impute missing values and outliers with standard or custom values, using mathematical functions and operators to define and create new features, and visual exploration of data through box plots, bar graphs, and scatter plots.
Starting this week, SageMaker Canvas supports correlation matrices, allowing you to summarize a dataset into a matrix that shows the correlations between two or more variables and how they relate to one another. This helps you identify and visualize patterns in a given dataset for advanced analysis.
You can now generate correlation matrices for numerical, categorical, and a combination of both variables. Datasets can be analyzed using Pearson or Spearman correlations for numerical values, or Mutual Information for categorical values, giving you choice and flexibility.
The output from these matrices can be used to impute missing data, assign weights to values to understand variance, and support other advanced analyses. Correlation matrices apply to many use cases, such as analyzing price variance based on supply and demand, forecasting rainfall based on weather patterns, and understanding propensity to buy based on new capabilities of a product or service.
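As a rough illustration of what the Pearson and Spearman options compute, here is a self-contained sketch with toy data (SageMaker Canvas computes these for you; the tie-free ranking is a simplification):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(xs):
    """Rank values (1-based); ties are not handled in this sketch."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman correlation is the Pearson correlation of the ranks."""
    return pearson(ranks(xs), ranks(ys))

# Toy supply-and-demand example: price falls linearly as demand rises.
demand = [10, 20, 30, 40]
price = [100, 90, 80, 70]
```

For this perfectly linear, monotone-decreasing relationship both coefficients are -1.0.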
Amazon SageMaker Canvas announces support for encryption at rest for datasets and machine learning (ML) models for time series forecasting use cases, on both quick and standard builds. SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code.
Previously, SageMaker Canvas supported encryption at rest using customer managed keys (CMK) with AWS Key Management Service (KMS) for binary classification problems with 2 category predictions, multi-class classification problems with 3+ category predictions, and regression problems with numeric predictions. With this announcement, the support for encryption at rest using CMK with AWS KMS is also available for time-series forecasting, thereby covering all problem types currently supported by SageMaker Canvas.
You can enable encryption at rest for SageMaker Canvas by using your own keys to encrypt the file systems on the instances used to train models and generate insights, and the model data in your Amazon S3 bucket. You can continue to import, rotate, disable, delete, define usage policies for, and audit the use of your keys giving you full control and flexibility for your encryption policies.
Anthos Clusters on AWS
For more information, see the GCP-2022-024 security bulletin.
Anthos Clusters on Bare Metal
Anthos Clusters on VMware
Anthos clusters on VMware 1.11.5-gke.14 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.5-gke.14 runs on Kubernetes 1.22.15-gke.2200.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
Anthos Service Mesh
The rollout of version 1.15 for managed Anthos Service Mesh has completed in all regions.
App Engine flexible environment Go / Java / Node.js / PHP / Python / Ruby
The option to set the IP mode to `internal` for App Engine flexible environment instances is now generally available.
Bare Metal Solution
Enhancements to Bare Metal Solution resource management–Adds the following self-service functionality:
- Manage networks–You can create, attach, detach, and delete networks. You can also add, update, and delete VLAN attachments.
- Manage boot volume snapshots–You can create, delete, and restore boot volume snapshots.
- Manage NFS file storage–You can create, update, and delete NFS storage volumes.
- Advanced networking–You can add connections to multiple networks on a single server. You can now view advanced networking information through the Google Cloud console too.
- Labels–You can organize your Bare Metal Solution resources by using labels. You can add labels to servers, networks, storage volumes, and NFS file storage.
- Manage the power state of servers–You can turn power on and off for your server and restart your server. You can also check the status of a server.
- Transfer files that are hive partitioned.
- Load semi-structured JSON source data into BigQuery without providing a schema by using JSON columns in the destination table.
- Encrypt destination tables using customer managed encryption keys.
- Transfer data to
Chronicle Curated Detections has been enhanced with the following additional detection content:
- Windows-based threats:
- Security Posture Downgrade: detects activity attempting to disable or decrease the effectiveness of security tools.
- Cloud threats:
- Suspicious Behavior: detects activity that is thought to be uncommon and suspicious in most environments.
- Service Disruption: detects destructive or disruptive actions that, if performed in a functioning production environment, may cause a significant outage.
- Suspicious Infrastructure Change: detects modifications to production infrastructure that align with known persistence tactics.
The Cloud Composer 1.19.14 and 2.0.31 release started on November 7, 2022. The rollout to all regions is in progress, so the listed changes and features might not be available in some regions yet.
Cloud Composer 1.19.14 and 2.0.31 images are available:
- composer-1.19.14-airflow-1.10.15 (default)
Cloud Data Fusion
DNS Resolution is generally available (GA). You can use domain or hostnames for sources instead of IP addresses for pipeline design-time activities, such as getting schema, wrangling, and previewing pipelines.
Cloud Data Loss Prevention
- Exclude a column from inspect findings if the column name matches a regular expression.
- Exclude a finding from inspect findings if that finding is proximate to a string that matches a regular expression.
Previously, you could do these only by setting up a hotword rule that lowers the likelihood of the matching findings.
For more information on excluding findings, see Exclusion rules.
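Conceptually, a column-name exclusion rule filters findings before they are reported. This standalone sketch mimics the behavior with a made-up findings list and regular expression:

```python
import re

# Hypothetical inspect findings: (column_name, matched_value) pairs.
findings = [
    ("customer_email", "alice@example.com"),
    ("test_email", "bob@example.com"),
    ("phone", "555-0100"),
]

# Exclude any finding whose column name matches this regular expression,
# analogous to Cloud DLP's column-name exclusion rule.
exclude_columns = re.compile(r"^test_")

kept = [(col, val) for col, val in findings if not exclude_columns.search(col)]
```

Unlike the hotword-rule workaround, an exclusion rule drops the finding outright instead of merely lowering its likelihood.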
You can now dynamically include your log content in your alert notifications for easier troubleshooting. For more information about extracting log content into labels, see Create a log-based alert (Monitoring API).
Cloud Spanner now supports cross-region and cross-project backup use cases. You can copy a backup of your database from one instance to another instance in a different region or project to provide additional data protection and compliance capabilities.
Expanded Cloud Storage monitoring dashboards are now available in Preview.
- Available metrics include server and client error rates, write request counts, network ingress rates, and network egress rates.
- Dashboards can be filtered by bucket location.
- Dashboards are customizable, including the ability to set up alerts.
In addition to the project-wide dashboard, per-bucket dashboards are available in a new Observability tab in the Bucket Details for each bucket.
The Autoclass feature is now available.
- When enabled, Autoclass transitions the storage classes of your objects automatically based on their access patterns.
- Currently, Autoclass can only be enabled at the time of bucket creation.
gcloud storage GA release 1.1 is now available.
- The 1.1 release adds more support for managing bucket settings, including lifecycle configurations, CORS configurations, and labels.
The Trace scatterplot now indicates traces with error codes as red. For more information, see Finding and viewing traces.
Per VM Tier_1 networking performance now includes up to 25 Gbps egress for traffic going to public IP addresses (increased from 7 Gbps).
Share sole-tenant node groups with other projects or with your entire organization. For more information, see Share sole-tenant node groups.
- Frankfurt, Germany (europe-west3-a,b)
- Eemshaven, Netherlands (europe-west4-a,b)
- Council Bluffs, Iowa, USA (us-central1-a,b)
- Las Vegas, Nevada, USA (us-west4-a,b)
See VM instance pricing for details.
New sub-minor versions of Dataproc images:
1.5.76-debian10, 1.5.76-rocky8, 1.5.76-ubuntu18
2.0.50-debian10, 2.0.50-rocky8, 2.0.50-ubuntu18
preview 2.1.0-RC3-debian11, preview 2.1.0-RC3-rocky8, preview 2.1.0-RC3-ubuntu20
Document AI Warehouse
The validation check for Enum property values is now enabled by default. Enum values that are not defined in the schema can no longer be set on the corresponding Enum document properties. The `validationCheckDisabled` flag in `EnumTypeOptions` disables Enum validation.
Enabled the text extraction feature.
Fixed a partial document update issue that could cause loss of the raw document name entry.
Fixed unintended movement of `plain_text` in API response messages.
Fixed an issue where the service returned an error when a user supplied multiple property filters for the same schema in a search query.
You can now use compact placement for node auto-provisioning in Standard clusters with GKE version 1.25 and later. To learn more, see Use compact placement for node auto-provisioning.
GKE Gateway for Single Cluster is now generally available in GKE version 1.24 and later. Use the Gateway API to express the intent of your inbound HTTP(S) traffic into your GKE cluster, and the Gateway controller will instrument and fully manage the external and/or internal HTTP(S) load balancers that forward traffic to your applications. For complete details, refer to the GKE Gateway controller documentation.
A security vulnerability, CVE-2022-39278, has been discovered in Istio, which is used in Anthos Service Mesh, that allows a malicious attacker to crash the control plane. GKE doesn't ship with Istio and isn't affected by this vulnerability. However, if you separately installed Anthos Service Mesh or Istio in your GKE cluster, refer to the Anthos Service Mesh security bulletin for more information.
When you create a LoadBalancer service in GKE, the Google Cloud controllers automatically create the following firewall rules and apply them to the GKE nodes to allow inbound connections on the Service port:
- Internal load balancer with GKE subsetting or external load balancer with regional backend services (RBS):
- Internal load balancer without GKE subsetting or external load balancer with target pool:
These rules now include the load balancer IP address in the destination ranges field to further control inbound connections to the nodes. You can use the `gcloud compute firewall-rules describe` command to check a relevant firewall rule.
Google Distributed Cloud Edge
This is a minor release of Google Distributed Cloud Edge (version 1.2.0).
The following new features have been introduced in this release of Google Distributed Cloud Edge:
- Anthos VM Runtime replaces Kubevirt in Google Distributed Cloud Edge starting with this release. To continue using your existing virtual machines, you must shut them down and back them up before your Distributed Cloud Edge deployment is upgraded to release 1.2.0, and then re-create them as described in Manage virtual machines.
- A new Google Distributed Cloud Edge hardware configuration is available. This new configuration supports GPU-based workloads that run on NVIDIA Tesla T4 GPUs in both containers and virtual machines. To order a GPU-enabled configuration, see Order Google Distributed Cloud Edge. To learn more about running workloads on GPUs, see Manage GPU workloads.
- Google Distributed Cloud Edge now supports the following networking features:
- Cross-project VPN Connections. To learn more, see Manage cross-project VPN Connections.
- Layer 3 load balancing using Anthos on bare metal load balancers. To learn more, see Layer 3 load balancing with Anthos on bare metal load balancers.
- MacVLAN driver support for creating secondary network interfaces for Pods running containerized workloads. The MacVLAN driver is not supported on Pods running virtual machines. To learn more, see Configure a secondary network interface on a Pod using the MacVLAN driver.
- Multi-network support for creating secondary network interfaces for Pods. To learn more, see Configure a secondary network interface on a Pod using Distributed Cloud Edge multi-networking.
- NodePort service. To learn more, see NodePort service.
- `ClusterDNS` resource. To learn more, see ClusterDNS resource.
- Virtual Routing and Forwarding (VRF). This feature will be enabled by Google once your Distributed Cloud Edge deployment has been upgraded to release 1.2.0.
The following changes have been introduced in this release of Google Distributed Cloud Edge:
- Google Distributed Cloud Edge now ships with the NVIDIA Tesla T4 GPU driver version 470.63.01.
- The Network Function operator feature of Google Distributed Cloud Edge has been updated as follows. To learn more, see Network Function operator.
- The `NodeSystemConfigUpdate` resource now supports additional fields for specifying the IP address lists and domain lists of private image registries.
- The `CustomNetworkInterfaceConfig` resource no longer supports certain previously supported fields.
- You can now scope both safe and unsafe `sysctls` parameters to a specific Pod or namespace using the `tuning` Container Networking Interface (CNI) plug-in.
- Webhook-level enforcement of valid field values is now in effect.
- The Kubernetes control plane has been updated to version 1.23.5-gke.1505.
- The `coredns` service has been updated to version 1.8.6-gke.0.
The following issues have been resolved in this release of Google Distributed Cloud Edge:
- Google Distributed Cloud Edge nodes no longer become temporarily unresponsive due to excessive memory utilization.
You can use the Google Cloud console to view authentication activities, which indicate when your service accounts and keys were last used to call a Google API.
Network Connectivity Center
The Google Cloud console now lets you do all of the following:
- See a list of existing hubs
- Create multiple hubs
- Edit an existing hub's description and/or labels
Previously, you could complete these actions only by using the Google Cloud CLI or the API.
Also, the Network Connectivity Center Quotas page has been updated to describe the limit of 60 hubs per project.
For more information about creating and managing hubs, see Work with hubs and spokes.
Security Command Center
Security Command Center released two new error detectors:
- KTD blocked by admission controller
- KTD image pull failure
These detectors report configuration errors that prevent the Container Threat Detection service from functioning properly.
Remediation guidance is provided for each finding type. For more information, see Security Command Center errors.
Text-to-Speech now offers these new voices. See the supported voices page for a complete list of voices and audio samples.
Users can now use SMB to transfer data by enabling SMB file share.
AutoML Image Classification Error Analysis
Error analysis allows you to examine error cases after training a model from within the model evaluation page. This feature is available in Preview.
For each image you can inspect similar images from the training set to help identify the following:
- Label inconsistencies between visually similar images
- Outliers if a test sample has no visually similar images in the training set
After fixing any data issues, you can retrain the model to improve model performance.
Preview: You can use the `restricted.googleapis.com` VIPs to access Google APIs and services using IPv6 addresses. For more information, see the following pages:
- Use the VIPs from VMs with internal IPv6 addresses
- Use the VIPs from VMs with external IPv6 addresses
- Use the VIPs from on-premises hosts with IPv6 addresses
Workflows is available in the following additional regions:
- `asia-east2` (Hong Kong, China)
- `us-east5` (Columbus, United States)
- `us-south1` (Dallas, United States)
- `us-west2` (Los Angeles, United States)
- `us-west3` (Salt Lake City, United States)
Microsoft Azure Releases And Updates
Azure Managed HSM now supports SSL/TLS Offload for F5 and Nginx.
Azure Front Door Standard and Premium supports enabling managed identities for Azure Front Door to access Azure Key Vault.
Wayfinding in Azure Maps Creator enables you to provide your users with the shortest path between two points within a facility.
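Conceptually, wayfinding computes a shortest path over a routable graph of the facility. As a toy illustration of that underlying idea (this is plain Dijkstra over a made-up indoor graph, not the Azure Maps Creator wayfinding API):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a weighted adjacency dict.

    graph: {node: [(neighbor, distance_in_meters), ...]}
    Returns (total_distance, [path nodes]), or (inf, []) if unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical indoor graph: rooms/corridors with walking distances in meters.
facility = {
    "lobby":    [("corridor", 10), ("cafe", 25)],
    "corridor": [("room101", 5), ("cafe", 8)],
    "cafe":     [("room101", 20)],
    "room101":  [],
}

distance, route = shortest_path(facility, "lobby", "room101")
print(distance, route)  # 15.0 ['lobby', 'corridor', 'room101']
```

The real service additionally accounts for floors, doors and obstacles, but the path returned is the same kind of node-by-node route.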
A new migration capability lets you migrate Azure Front Door (classic) to Azure Front Door Standard and Premium with zero downtime.
You can also now upgrade an Azure Front Door Standard profile to the Premium tier without downtime.
New GA features include the ability to automate auto-shutdown/auto-start schedules, configure and customize a compute instance, seamlessly build NLP/vision models, and assess AI systems.
Azure Percept DK and support from associated Azure services will be retired on March 30th, 2023.
New public preview features include reading Delta Lakes in fewer steps, debugging training jobs, and performing data wrangling.
Deploy Azure Database for PostgreSQL – Flexible Server workloads in two new China regions.
Multivariate Anomaly Detection will allow you to evaluate multiple signals and the correlations between them to find sudden changes in data patterns.
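The service learns those correlations from your data; as a toy illustration of the idea only (not the Anomaly Detector API), one simple approach flags points where two normally correlated signals suddenly diverge:

```python
import statistics

def correlated_anomalies(x, y, threshold=2.0):
    """Flag indices where two normally correlated signals diverge.

    Examines the residual (y - x) and flags points whose residual is more
    than `threshold` standard deviations from the mean residual.
    """
    residuals = [b - a for a, b in zip(x, y)]
    mu = statistics.mean(residuals)
    sigma = statistics.stdev(residuals)
    return [i for i, r in enumerate(residuals) if abs(r - mu) > threshold * sigma]

# Two hypothetical sensors that normally track each other;
# sensor_b glitches at index 6 while sensor_a stays normal.
sensor_a = [10, 11, 12, 11, 10, 11, 12, 11, 10, 11]
sensor_b = [12, 13, 14, 13, 12, 13, 40, 13, 12, 13]
print(correlated_anomalies(sensor_a, sensor_b))  # [6]
```

Note that neither signal is anomalous on its own here; it is the broken correlation between them that stands out, which is the premise of multivariate detection.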
You can now assess on-premises SQL Server instances, get right-sized recommendations for Azure migration targets, and migrate databases offline to Azure SQL Database.
Develop your applications with more flexibility with retryable writes in Azure Cosmos DB for MongoDB.
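Retryable writes are opted into through the standard MongoDB connection-string option. A minimal sketch (the account name and key below are placeholders; the host format follows the Cosmos DB for MongoDB convention):

```python
# Hypothetical Cosmos DB for MongoDB connection string with retryable writes.
account = "myaccount"        # placeholder Cosmos DB account name
password = "<primary-key>"   # placeholder credential, never hard-code real keys

uri = (
    f"mongodb://{account}:{password}@{account}.mongo.cosmos.azure.com:10255/"
    "?ssl=true&retryWrites=true"  # retryWrites=true enables retryable writes
)
print(uri)

# With a MongoDB driver (e.g. pymongo), this URI would be passed straight to
# the client constructor: MongoClient(uri)
```

With this option set, eligible single-document writes that fail with a transient error are automatically retried by the driver rather than surfacing the error to your application.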
Intra-account container copy helps you create offline copies of containers within Azure Cosmos DB SQL API and Cassandra API accounts.
New cost-optimization recommendations are now available for Virtual Machine Scale Sets.
Deploy applications to staging environments using Azure DevOps.
Deploy Static Web Apps using Gitlab and Bitbucket as CI/CD providers.
Use stable URLs with Azure Static Web Apps preview environments.
You can skip the default API builds via GitHub Actions and Azure pipelines.
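For GitHub Actions, skipping the API build is a single input on the deploy step. A sketch of a workflow step, assuming the Azure/static-web-apps-deploy action and placeholder paths:

```yaml
# Hypothetical workflow step: upload a prebuilt app and skip the default API build.
- name: Deploy to Azure Static Web Apps
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: upload
    app_location: dist        # placeholder: prebuilt app output folder
    api_location: api         # placeholder: prebuilt API folder
    skip_api_build: true      # skip the default build for the API
```

This is useful when you build the API yourself in an earlier workflow step and only want the action to upload the artifacts.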
Azure Static Web Apps now supports building and deploying full-stack Node 18 applications.
Azure NetApp Files for datastores is now generally available in Azure VMware Solution.
You can now update SSH keys on existing AKS nodepools post deployment.
The update brings enhancements to IoT Edge 1.4 and more.
Log Analytics Tables can now be managed from the tables entry of Azure Portal in Log Analytics workspaces.
Azure Automation now supports availability zones, providing improved resiliency and reliability for the service, its runbooks, and other automation assets.
Faster Azure Synapse Analytics Spark performance at no cost.
All newly created Azure Front Door, Azure Front Door (classic) or Azure CDN Standard from Microsoft (classic) resources will block any HTTP request that exhibits domain fronting behavior.
Create applications with .NET 7 runtime to take advantage of serverless functions in Azure.
Azure Static Web Apps now supports building and deploying full-stack .NET 7.0 isolated applications.
Optimize workloads with new node sizes, AV52, and AV36P, now generally available in Azure VMware Solution.
You can now save on Virtual Machine software from third-party publishers by purchasing software reservations.
Logic Apps Standard VS Code Extension now allows you to export groups of logic apps workflows deployed to Azure.
Increase your security posture and reduce false positives with Default Rule Set 2.1, now generally available on Azure's global Web Application Firewall running on Azure Front Door.
Have you tried Hava's automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts, or to standalone Kubernetes clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):