Hava Blog and Latest News

In Cloud Computing This Week [Jan 13th 2023]

Written by Team Hava | January 13, 2023


Happy New Year.

Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday January 13th 2023.

To stay in the loop, make sure you subscribe using the box on the right of this page.

Of course we'd love to keep in touch at the usual places. Come and say hello on:

Facebook | LinkedIn | Twitter

AWS Updates and Releases

Source: aws.amazon.com


AWS AppConfig allows tracking of stale feature flags, improving code hygiene

This week, AWS AppConfig announced new options for customers to track and clean up stale feature flags. Previously, customers needed to build their own schedules for managing stale feature flags. Now, customers can do this through AWS AppConfig. Feature flagging is a powerful technique that allows engineering teams to change application behavior on production without pushing out new code.

By using feature flags, engineers can develop new capabilities, but hide them behind a feature flag configuration. Once ready to launch, AWS AppConfig allows you to roll flags out slowly. However, a common pain point with feature flags is the management of stale or unused flags. After a feature is launched, the flag may no longer be needed and becomes stale. Stale flags add clutter to your application code and configuration, and can make debugging your application challenging. Cleaning up unused flags improves application hygiene.

AWS AppConfig already allowed customers to designate a flag as ‘short-term.’ By indicating which flags are short-term, customers can find which flags are candidates for deprecation and removal. Now, engineers can also set an optional target deprecation date for each flag, and can sort and filter flags.

Furthermore, flags that have passed their deprecation dates are labelled as overdue. The AWS AppConfig console has also been updated with improved search and filtering capabilities as part of this release. These tools allow customers to better manage their flags and keep their application code and configuration easier to maintain.
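The clean-up this enables can be pictured with a small sketch (plain Python, not the AppConfig API; the flag names and dates below are invented) that finds short-term flags whose deprecation date has passed:

```python
from datetime import date

def find_overdue_flags(flags, today):
    """Return names of short-term flags whose deprecation date has passed.

    `flags` maps a flag name to an optional deprecation date, mirroring
    the kind of metadata AppConfig now tracks (illustrative only).
    """
    return sorted(
        name for name, deadline in flags.items()
        if deadline is not None and deadline < today
    )

flags = {
    "new-checkout-flow": date(2023, 1, 1),  # feature launched; flag now overdue
    "beta-search": date(2023, 6, 30),       # still within its window
    "dark-mode": None,                      # long-term flag, no deadline
}
print(find_overdue_flags(flags, today=date(2023, 1, 13)))  # ['new-checkout-flow']
```

In AppConfig itself the equivalent information now surfaces directly in the console's search and filter views, so no such custom tooling is needed.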

This new capability is available for any workload using AWS AppConfig Feature Flags, including AWS AppConfig’s Feature Flags for AWS Lambda and Feature Flags for Amazon ECS and Amazon EKS.

Announcing the general availability of AWS Local Zones in Perth and Santiago

AWS Local Zones are now available in two new metro areas—Perth and Santiago. You can now use these Local Zones to deliver applications that require single-digit millisecond latency or local data processing.

In early 2022, AWS announced plans to launch AWS Local Zones in over 30 metro areas across 27 countries outside of the US. AWS Local Zones are also generally available in 10 metro areas outside of the US (Bangkok, Buenos Aires, Copenhagen, Delhi, Helsinki, Hamburg, Kolkata, Muscat, Taipei, and Warsaw) and 16 metro areas in the US (Atlanta, Boston, Chicago, Dallas, Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, Minneapolis, New York City, Philadelphia, Phoenix, Portland, and Seattle).

AWS Clean Rooms is now available in preview

This week, AWS announced the preview release of AWS Clean Rooms, a new analytics service that helps customers collaborate with their partners to more easily and securely analyze their collective datasets—without sharing or revealing underlying raw data. Instead of spending weeks or months developing clean room solutions, customers can use AWS Clean Rooms to create their own clean rooms in minutes and collaborate with any other company on the AWS Cloud to generate unique insights about advertising campaigns, investment decisions, and research and development.

Customers can use a broad set of built-in, privacy-enhancing controls for clean rooms—including query controls, query output restrictions, and query logging—that allow companies to customize restrictions on the queries run by each clean room participant. AWS Clean Rooms also includes advanced cryptographic computing tools that keep data encrypted—even as queries are processed—to help comply with stringent data handling policies. To learn more, visit AWS Clean Rooms.

AWS Clean Rooms (Preview) is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm).

AWS Clean Rooms is part of AWS for Advertising and Marketing, an industry initiative offering a comprehensive set of purpose-built services, solutions, and partner offerings for advertising intelligence and measurement, advertising platforms, audience and customer data management, privacy-enhanced data collaboration, and customer experience. To learn more about the initiative, visit AWS for Advertising and Marketing.

AWS Config supports 22 new resource types

AWS Config now supports 22 more resource types for services including Amazon MQ, AWS AppConfig, AWS Cloud9, Amazon EventBridge, Amazon Fraud Detector, AWS IoT, AWS IoT Analytics, Amazon Lightsail (Virtual Server), AWS Elemental MediaPackage, Amazon Route 53 Recovery Readiness, AWS Resilience Hub, and AWS Transfer.

With this launch, customers can now use AWS Config to monitor configuration data for the following newly supported resource types:

1. AWS::AmazonMQ::Broker

2. AWS::AppConfig::Environment

3. AWS::AppConfig::ConfigurationProfile

4. AWS::Cloud9::EnvironmentEC2

5. AWS::EventSchemas::Registry

6. AWS::EventSchemas::RegistryPolicy

7. AWS::EventSchemas::Discoverer

8. AWS::FraudDetector::Label

9. AWS::FraudDetector::EntityType

10. AWS::FraudDetector::Variable

11. AWS::FraudDetector::Outcome

12. AWS::IoT::Authorizer

13. AWS::IoT::SecurityProfile

14. AWS::IoT::RoleAlias

15. AWS::IoT::Dimension

16. AWS::IoTAnalytics::Datastore

17. AWS::Lightsail::Bucket

18. AWS::Lightsail::StaticIp

19. AWS::MediaPackage::PackagingGroup

20. AWS::Route53RecoveryReadiness::RecoveryGroup

21. AWS::ResilienceHub::ResiliencyPolicy

22. AWS::Transfer::Workflow

Amazon Kendra releases Microsoft Exchange Connector to enable email-messaging search

Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra Microsoft (MS) Exchange Connector to index and search emails from MS Exchange.

Critical information can be scattered across multiple data sources in an enterprise, including messaging platforms like MS Exchange. Amazon Kendra customers can now use the Kendra MS Exchange Connector to index messages and search for information across this content using Kendra Intelligent Search. The Kendra MS Exchange Connector indexes e-mail messages, attachments, OneNote documents, contacts, and calendar events in mailboxes and public folders from MS Exchange.

AWS Resource Groups now emits lifecycle events

This week, AWS Resource Groups is launching a new feature that emits lifecycle events when resources are added or removed from your groups and when resource groups are created, updated or deleted. These events allow you to initiate automated, event driven workflows for your applications. For example, with these events you can automate initiation of common operational tasks such as installing software packages, creating backups, or creating Amazon Elastic Block Store snapshots.

You can use AWS Resource Groups to model logical collections of resources such as applications, projects, and cost centers, and act on them using AWS services such as AWS Systems Manager and Amazon CloudWatch. With this new feature, AWS Resource Groups is now integrated with Amazon EventBridge as a new event source.

This integration enables you to invoke over 35 AWS services, including AWS Lambda, Amazon Simple Notification Service (Amazon SNS), and Amazon Kinesis. As actionable event notifications are pushed to you via Amazon EventBridge, they eliminate the need for complex polling mechanisms to monitor for changes to your resource groups.
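An EventBridge rule that fans these lifecycle events out to a target might use a pattern of roughly this shape (the source and detail-type strings here are assumptions based on EventBridge naming conventions; confirm the exact values in the AWS Resource Groups documentation):

```json
{
  "source": ["aws.resource-groups"],
  "detail-type": ["ResourceGroups Group Membership Change"]
}
```

Attaching this pattern to a rule with, say, a Lambda function or SNS topic as the target replaces any polling loop you previously ran against your groups.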

Amazon ElastiCache for Redis data tiering is now available in the AWS Europe (Stockholm) Region

You can now use data tiering for Amazon ElastiCache for Redis as a lower cost way to scale your clusters to up to hundreds of terabytes of capacity in the Europe (Stockholm) Region. Data tiering provides a new price-performance option for Redis workloads by utilizing lower-cost solid state drives (SSDs) in each cluster node in addition to storing data in memory. It is ideal for workloads that access up to 20% of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSD.

When using clusters with data tiering, ElastiCache is designed to automatically and transparently move the least recently used items from memory to locally attached NVMe SSDs when available memory capacity is completely consumed. When an item that moves to SSD is subsequently accessed, ElastiCache moves it back to memory asynchronously before serving the request. Assuming 500-byte String values, you can expect an additional 300µs latency on average for requests to data stored on SSD compared to requests to data in memory.
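The tiering behavior described above can be sketched as a toy least-recently-used cache (illustrative Python only, not ElastiCache internals):

```python
from collections import OrderedDict

class TieredCache:
    """Toy sketch of data tiering: hot items in memory, LRU overflow on 'SSD'."""

    def __init__(self, memory_capacity):
        self.memory_capacity = memory_capacity
        self.memory = OrderedDict()  # most recently used last
        self.ssd = {}

    def put(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)
        # When memory is full, move the least recently used item to SSD.
        while len(self.memory) > self.memory_capacity:
            cold_key, cold_val = self.memory.popitem(last=False)
            self.ssd[cold_key] = cold_val

    def get(self, key):
        if key in self.memory:
            self.memory.move_to_end(key)
            return self.memory[key]
        # SSD hit: promote the item back to memory before serving it
        # (ElastiCache does this move asynchronously).
        value = self.ssd.pop(key)
        self.put(key, value)
        return value

cache = TieredCache(memory_capacity=2)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
print(sorted(cache.ssd))  # ['a'] -- least recently used item was tiered out
print(cache.get("a"))     # 'A' -- promoted back; another key moves to SSD
```

The real service adds the latency figures quoted above for SSD-resident items, but the memory/SSD split and promote-on-access flow follow this shape.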

ElastiCache data tiering is available when using Redis version 6.2 and above on Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% savings when running at maximum utilization compared to R6g nodes (memory only).

AWS Lambda now supports Maximum Concurrency for Amazon SQS as an event source

AWS Lambda now supports setting Maximum Concurrency for the Amazon SQS event source. Maximum Concurrency allows customers to cap the number of concurrent invocations driven by an Amazon SQS event source. When multiple Amazon SQS event sources are configured for a function, customers can control the maximum concurrent invocations of each individual SQS event source.

Lambda makes it easy to consume events from Amazon SQS at scale. A Lambda function subscribes to an SQS queue using an event source mapping (ESM). The ESM consists of processing instances that poll the queue for messages and invoke the Lambda function. Processing instances scale up when there are more messages to process, and scale down when they encounter function errors or when the number of messages in the queue drops.

Previously, customers looking to limit the maximum concurrent invocations by the ESM needed to set a reserved concurrency limit on the function, at the cost of less consistent throughput and retried messages due to function throttling. The new control on the event source mapping directly limits the number of concurrent invocations without requiring reserved concurrency.
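Conceptually, the new cap means the poller never has more than the configured number of invocations in flight at once. A toy sketch (illustrative Python, not the Lambda poller implementation):

```python
def drain_queue(queue, max_concurrency):
    """Simulate an event source mapping that never runs more than
    `max_concurrency` concurrent function invocations."""
    waves = []
    while queue:
        # Take at most max_concurrency messages for this wave of invocations.
        batch, queue = queue[:max_concurrency], queue[max_concurrency:]
        waves.append(batch)
    return waves

waves = drain_queue(list(range(7)), max_concurrency=3)
print([len(w) for w in waves])  # [3, 3, 1] -- concurrency never exceeds 3
```

On the real service the cap is set via the event source mapping's ScalingConfig, e.g. `aws lambda update-event-source-mapping --uuid <your-esm-uuid> --scaling-config '{"MaximumConcurrency": 5}'`; check the Lambda documentation for the allowed range of values.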

Amazon Kendra releases the Microsoft Teams Connector to enable Microsoft Teams messaging search

Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting this week, AWS customers can use the Amazon Kendra Microsoft Teams Connector to index and search messages and other entities from Microsoft Teams.

Critical information can be scattered across multiple data sources in an enterprise, including messaging platforms like Microsoft Teams. Amazon Kendra customers can now use the Microsoft Teams Connector to index chat messages, channel posts, wikis, and attachments, as well as meeting chats, files, and notes, and search for information across this content using Kendra Intelligent Search.

This connector supports full and incremental sync. The connector helps to index documents and their access control information to limit the search results to only those documents the user is allowed to access. To show the search results based on user access rights and using only the user information, the connector provides an identity crawler to load principal information, such as user and group mappings, automatically into the Principal Store. 

AWS announces changes to AWS Billing, Cost Management, and Account consoles permissions

AWS announces the retirement of the IAM actions for the AWS Billing, Cost Management, and Account consoles under the aws-portal service prefix, along with purchase-orders:ViewPurchaseOrders and purchase-orders:ModifyPurchaseOrders, and is replacing them with fine-grained, service-specific actions. This launch gives AWS customers more control over access to the Billing, Cost Management, and Account services. These new permissions also provide a single set of IAM actions that govern both console and programmatic access to these services.

The fine-grained actions launch enables customers to provide individuals in their organization access to only services that are necessary for their job. For example, with these new permissions a customer can provide a developer access to Budgets and Cost Explorer, while denying access to Bills or Tax Settings services. This allows customers to put engineers and business unit leaders in charge of cost and usage optimization and control and implement decentralized cloud cost management.
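A policy in the new fine-grained style might look roughly like this (the allowed action names are real existing actions used as examples; the denied prefixes are assumptions about the new action set, so consult the migration guidance for the authoritative list):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCostVisibility",
      "Effect": "Allow",
      "Action": ["budgets:ViewBudget", "ce:GetCostAndUsage"],
      "Resource": "*"
    },
    {
      "Sid": "DenyBillsAndTaxSettings",
      "Effect": "Deny",
      "Action": ["billing:*", "tax:*"],
      "Resource": "*"
    }
  ]
}
```

This is the shape of the example in the announcement: a developer gets Budgets and Cost Explorer visibility while Bills and Tax Settings stay off limits.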

Starting today, the fine-grained actions are available in all commercial Regions except the China Regions. The IAM actions for the AWS Billing, Cost Management, and Account consoles under the aws-portal service prefix, together with purchase-orders:ViewPurchaseOrders and purchase-orders:ModifyPurchaseOrders, will no longer be available after July 6, 2023, so we encourage customers to update their policies.

To understand when the fine-grained actions will take effect for your accounts, which policies you need to update and how, and to learn more about the newly launched Affected Policies identification tool, please visit this blog post.

AWS CloudFormation enhances Fn::FindInMap language extension to support default values and additional intrinsic functions

This week, AWS CloudFormation updated the language extension transform to support default values and additional intrinsic functions in Fn::FindInMap. Customers can use these features to minimize the size of their CloudFormation templates, and improve their readability.

The language extension transform extends the CloudFormation template language with functions such as Fn::Length, Fn::JsonToString, and more. Customers can modularize their templates into groups with different attributes in Parameters and Mappings, and use Fn::FindInMap to refer to attributes of these groups. For example, you can use Fn::FindInMap with a Mappings section containing a single map, RegionMap, to assign Region-specific AMIs to your EC2 instances.

With these language enhancements, you can use intrinsic functions such as Fn::Split, Fn::Select, and others within Fn::FindInMap. Previously, Fn::FindInMap only supported the Ref intrinsic function. Additionally, you can define string or list type default values in Fn::FindInMap. To see the list of supported intrinsic functions and learn more about Fn::FindInMap, refer to the user guide.

You can use Fn::Select and Fn::Split with Fn::FindInMap for the AWS::KMS::Key resource type to enforce parameter value constraints, such as on the KeyUsage property, in fewer lines of code instead of declaring multiple conditions. Similarly, you can use default values in Fn::FindInMap to avoid specifying every possible permutation of values in a mapping. For example, you do not have to create a mapping entry for every Region; a default value covers the Regions you leave out. To see these and other examples, refer to the AWS GitHub repo.
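A minimal sketch of the default-value feature, assuming the AWS::LanguageExtensions transform (the mapping name, keys, and parameter here are invented for illustration):

```yaml
Transform: AWS::LanguageExtensions
Parameters:
  Environment:
    Type: String
    Default: dev
Mappings:
  KeyUsageMap:
    prod:
      Usage: SIGN_VERIFY
Resources:
  SigningKey:
    Type: AWS::KMS::Key
    Properties:
      KeySpec: RSA_2048
      KeyUsage: !FindInMap
        - KeyUsageMap
        - !Ref Environment
        - Usage
        - DefaultValue: ENCRYPT_DECRYPT
```

Because of the DefaultValue entry, only the environments that deviate from the default need a row in the mapping; every other Environment value falls through to ENCRYPT_DECRYPT without extra conditions.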

AWS WAF is now available in the AWS Middle East (UAE) Region

Starting this week, AWS WAF is available in the AWS Middle East (UAE) Region. This is the second Region where AWS WAF is available in the Middle East, joining the AWS Middle East (Bahrain) Region and giving customers more choice and flexibility.

AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. You can protect the following resource types: Amazon CloudFront distributions, Amazon API Gateway REST APIs, Application Load Balancers, AWS AppSync GraphQL APIs, and Amazon Cognito user pools.

With AWS WAF, you can control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, your protected resource responds to requests either with the requested content, with an HTTP 403 status code (Forbidden), or with a custom response.
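That decision flow can be sketched like this (a toy Python model of a web ACL decision; real AWS WAF rules are configured as web ACL statements, and the blocked range and query-string check below are invented):

```python
from ipaddress import ip_address, ip_network

# Example IP-set rule: block the RFC 5737 documentation range.
BLOCKED_NETWORKS = [ip_network("203.0.113.0/24")]

def evaluate_request(source_ip, query_string):
    """Return the HTTP status a protected resource would send back."""
    if any(ip_address(source_ip) in net for net in BLOCKED_NETWORKS):
        return 403  # Forbidden: matched an IP-based rule
    if "<script>" in query_string.lower():
        return 403  # Forbidden: naive query-string match
    return 200      # request passes through to the protected resource

print(evaluate_request("203.0.113.10", "q=shoes"))  # 403
print(evaluate_request("198.51.100.7", "q=shoes"))  # 200
```

In AWS WAF itself the 403 can also be replaced with a fully custom response, as the paragraph above notes.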

Amazon SageMaker Canvas announces up to 3x faster ML model training time

Amazon SageMaker Canvas now delivers up to 3x faster machine learning (ML) model training time, enabling rapid prototyping and faster time-to-value for business outcomes. SageMaker Canvas is a visual interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code. 

With SageMaker Canvas, you can build models using Quick build that focuses on speed or Standard build that focuses on accuracy. Both these methods give you a fully trained model with feature importance, explaining the impact of individual features in your data on the outcome of your model.

With improved performance optimizations, quick build models are now up to 3x faster and standard build models up to 2x faster than previous runs. With faster model building times, you can now generate a quick build model in roughly 7 minutes and a standard build model in roughly 75 minutes for datasets up to 100MB in size. Accelerated model training enables you to prototype and experiment rapidly, resulting in quicker time to value for generating predictions with SageMaker Canvas.

Announcing Amazon Pinpoint Singapore Sender ID registration workflow

This week Amazon Pinpoint is launching a new web form experience in the Amazon Pinpoint console to help make it easier for Amazon Pinpoint and Amazon SNS customers to register SMS Sender IDs in Singapore. Customers are no longer required to create a support ticket to register Sender IDs for Singapore. A Sender ID is an alphanumeric originator that identifies the sender of an SMS message, e.g. “AMAZON”. 

The new web form experience supports the new SMS Sender ID registration requirements introduced by Singapore's Infocomm Media Development Authority (IMDA), which come into effect on January 30, 2023. This feature is available in all regions where Amazon Pinpoint is available. If using Amazon SNS in a region where the Amazon Pinpoint console is not available, customers can create a support ticket through the AWS Support Center to register Sender IDs.

Amazon RDS Optimized Reads is now available for up to 2X faster queries on Amazon RDS for MariaDB

Amazon Relational Database Service (Amazon RDS) for MariaDB now supports Optimized Reads for up to 2X faster query processing compared to previous generation instances. Optimized Read-enabled instances achieve faster query processing by placing temporary tables generated by the MariaDB server on the NVMe SSD-based block-level instance storage that’s physically connected to the host server. 

Complex queries that utilize temporary tables, such as queries involving sorts, hash aggregations, high-load joins, and Common Table Expressions (CTEs) can now execute up to 2X faster with Optimized Reads on RDS for MariaDB.

Optimized Reads is available by default on RDS for MariaDB versions 10.4.25, 10.5.16, 10.6.7 and higher, on Intel-based X2iedn, M5d, and R5d instances and AWS Graviton2-based M6gd and R6gd database (DB) instances. R5d and M5d DB instances provide up to 3,600 GiB of NVMe SSD-based instance storage for low latency, high random I/O, and sequential read throughput.

X2iedn, M6gd and R6gd DB instances are built on the AWS Nitro System, and provide up to 3,800 GiB of NVMe-based SSD storage and up to 100 Gbps of network bandwidth. 

Optimized Reads for Amazon RDS for MariaDB is available today on M5d, R5d, M6gd, R6gd and X2iedn instances in the same AWS Regions where these instances are available. For complete information on pricing and regional availability, please refer to the Amazon RDS for MariaDB pricing page.

Amazon EC2 G5g instances now available in Frankfurt region

Starting this week, the Amazon Elastic Compute Cloud (Amazon EC2) G5g instances powered by AWS Graviton2 processors and featuring NVIDIA T4G Tensor Core GPUs are now available in Europe (Frankfurt). G5g instances can be used for a wide range of graphics intensive and machine learning use cases.

They provide the best price performance in Amazon EC2 for Android game streaming use-cases. With G5g instances, developers can run Android games natively, encode the rendered graphics, and stream the game over network to end user devices. This helps simplify development effort and can lower the hourly cost per stream by up to 30% compared to G4dn instances.

G5g instances are also ideal for machine learning developers who are looking for cost-effective inference, have ML models that are sensitive to CPU performance, and leverage NVIDIA’s AI libraries. Embedded developers using Arm-based compute with GPU acceleration can also leverage G5g instances to scale their CI/CD and simulation workloads in the cloud.

G5g instances are supported by popular Linux operating systems including Red Hat Enterprise Linux, SUSE, and Ubuntu. Many popular applications and services for security, monitoring and management, containers, and CI/CD from AWS and Independent Software Vendors also support AWS Graviton2-based instances.

The G5g instances are now available in 6 regions globally, including the AWS Europe (Frankfurt), US East (N. Virginia), US West (Oregon), and Asia Pacific (Tokyo, Seoul, Singapore) Regions, and are purchasable On-Demand, as Reserved instances, as Spot instances, or as part of Savings Plans. They are available in 6 sizes providing up to 64 vCPUs, 2 NVIDIA T4G Tensor Core GPUs, 32 GB memory, 25 Gbps of networking bandwidth, and 19 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth.

Amazon Kendra releases new Google Drive Connector to enable document indexing and search on Google Drive

Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the new Amazon Kendra Google Drive Connector to index and search documents from Google Drive.

Critical information can be scattered across multiple data sources in an enterprise, including document storage platforms like Google Drive. Amazon Kendra customers can now use the Kendra Google Drive Connector to index content in documents and their comments. Users can search for answers to their questions accurately and quickly across this content using Kendra Intelligent Search. In addition to supporting HTML, PDF, Microsoft Word, Microsoft PowerPoint and plain text, the connector also supports Google Docs, Google Presentations, and Google Forms as document types.

This connector supports full and incremental sync. The connector helps to index documents and their access control information to limit the search results to only those documents the user is allowed to access. To show the search results based on user access rights and using only the user information, the connector provides an identity crawler to load principal information, such as user and group mappings, automatically into the Principal Store.

Amazon Personalize now supports tag-based resource authorization

This week AWS announced support for tags in IAM policies to allow granular control over access to Amazon Personalize resources and operations. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required.

Tags are labels in the form of key-value pairs that can be attached to individual Amazon Personalize resources to manage resources or allocate costs. With this launch, customers can also perform tag-based access control for Amazon Personalize resources and operations such as modify, update, or delete.

For example, you can limit access to delete or update operations to specific individuals to avoid any accidental impact to your production environment. This functionality also allows customers with multi-tenant deployments to partition access to resources across their end customers.
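The delete/update guard in the example above might look roughly like this IAM policy (the personalize actions are real; the tag key and value are hypothetical and would match however you tag your production resources):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyChangesToProductionCampaigns",
      "Effect": "Deny",
      "Action": [
        "personalize:DeleteCampaign",
        "personalize:UpdateCampaign"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/environment": "production" }
      }
    }
  ]
}
```

Attaching this to a developer role allows day-to-day work on non-production campaigns while blocking accidental changes to anything tagged as production.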

This functionality is available for several Amazon Personalize resources such as dataset groups, solutions, campaigns, recommenders, import jobs, batch inference, batch segment jobs and other resources. 

For a complete list of the supported resources and to learn more on how to perform tag-based authorization for your Amazon Personalize resources, see the Amazon Personalize Developer Guide. Tagging support for Amazon Personalize resources is available in all Amazon Personalize regions.

Amazon Personalize launches new recipe “Trending-Now”

This week, Amazon Personalize announced a new Trending-Now recipe that helps customers recommend items gaining popularity at the fastest pace among their users. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required.

User interests can change based on a variety of factors, such as external events or the interests of other users. It is critical for customers to tailor their recommendations to these changing interests to improve user engagement. With Trending-Now, you can surface items from your catalogue that are rising in popularity faster than other items, such as breaking news articles, popular social content or newly released movies.

Amazon Personalize looks for items that are rising in popularity at a faster rate than other catalogue items to help provide an engaging experience. Amazon Personalize also allows customers to define the frequency at which it identifies trending items, with options for refreshing recommendations every 30 mins, 1 hour, 3 hours or 1 day, based on the most recent interactions data from users.

Getting started with Trending-Now is easy. You can create a Trending-Now solution in your existing custom dataset group, or create a new custom dataset group and a new solution with the Trending-Now use case.

Announcing the general availability of Amazon Route 53 Application Recovery Controller zonal shift

AWS is excited to announce the general availability of Amazon Route 53 Application Recovery Controller zonal shift, which helps you quickly recover from application failures in an AWS Availability Zone (AZ). You can now shift application traffic away from an AZ with a single action for multi-AZ resources, with support for Application Load Balancers and Network Load Balancers. This helps you quickly recover an unhealthy application in an AZ, and reduces the duration and severity of impact to the application due to events such as power outages and hardware or software failures.

To initiate a zonal shift you can simply go to the Amazon Route 53 Application Recovery Controller console to start a zonal shift for a load balancer in your AWS account, in an AWS Region. You can also use the AWS SDK to start a zonal shift and programmatically move application traffic out of an AZ, and move it back once the affected AZ is healthy. Zonal shift is available for Application Load Balancers and Network Load Balancers with cross-zone load balancing turned off.

There is no additional charge for using zonal shift. Zonal shift is now available in: US East (Ohio), US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Stockholm), and Asia Pacific (Jakarta).

Amazon RDS now supports restoring database snapshots from Multi-AZ with two readable standbys

Amazon Relational Database Service (Amazon RDS) now supports restoring database (DB) snapshots taken from Amazon RDS Multi-AZ deployments with two readable standbys to Amazon RDS Single-AZ DB instances and Amazon RDS Multi-AZ DB instances with one standby. You can now use the DB snapshot restore capability to create new MySQL or PostgreSQL DB instances for your development or test environments.

Amazon RDS Multi-AZ deployments provide enhanced availability and durability, making them a natural fit for production database workloads. Deployment of Amazon RDS Multi-AZ with two readable standbys supports up to 2x faster transaction commits than a Multi-AZ deployment with one standby instance. In this configuration, automated failovers typically take under 35 seconds. In addition, the two readable standbys can also serve read traffic without needing to attach additional read replicas. 

For a full list of Regions where Amazon RDS Multi-AZ with two readable standbys is available, and of supported engine versions, refer to the Amazon RDS User Guide.

Amazon Kendra releases S3 connector with VPC support to enable customers to index and search content from S3

Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting today, AWS customers can use the Amazon Kendra S3 Connector to index and search documents from S3 hosted in their VPC. 

Critical information can be scattered across multiple data sources in an enterprise, including data storage platforms like S3. Customers can host S3 buckets in their VPC for secure access to their data. Amazon Kendra customers can now use the Kendra S3 Connector to also crawl these S3 buckets in a secure environment such as in a customer’s VPC to index documents and search for information across this content using Kendra Intelligent Search.

Amazon Location Service adds GrabMaps in Southeast Asia

Amazon Location Service adds a new data source in Southeast Asia, GrabMaps, offering maps, search, and routing. Developers building applications in Southeast Asia can display their data on local up-to-date maps, use search boxes to locate end-user addresses and points of interest, and calculate routes using real-time traffic conditions.

GrabMaps, an enterprise division of Grab, is built on the principles of community-based mapping that leverages Grab’s consumers, merchants, and fleets of drivers and delivery partners. GrabMaps extracts data on a daily basis from millions of orders and rides, with real-time feedback from partners on traffic, road closures, business address changes and more.

This allows GrabMaps to remain highly cost effective while offering regional mapping data that leads in accuracy, coverage, and freshness. Today, GrabMaps supports eight Southeast Asian countries: Singapore, Cambodia, Malaysia, Vietnam, the Philippines, Indonesia, Myanmar, and Thailand.

GrabMaps for Amazon Location Service is available in the Asia Pacific (Singapore) Region. To learn more, visit the Amazon Location Service Developer Guide.

Amazon Location Service is a location-based service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks.

Amazon Location Service provides a consistent API across a range of location-based service data providers (Esri, HERE, Open Data Maps, and GrabMaps), all managed through one AWS console.

AWS Network Firewall adds support for reject action for TCP traffic

AWS Network Firewall now supports reject as a firewall rule action so you can improve performance of latency-sensitive applications and improve internal security operations.

AWS Network Firewall’s flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic. Before today, you could configure stateful rules to pass, drop, or alert on network traffic. When the drop action is configured, the firewall drops the traffic but sends no response to the source sender. This impacts TCP connections because sessions remain open until the time-to-live threshold is exceeded.

If you want to understand why packets were dropped then you need to spend additional time and effort to complete a traceroute test or review your logs. Starting today, AWS Network Firewall will allow you to configure a stateful rule and apply a reject action when the rule is matched for TCP traffic. The firewall drops the packet and sends a TCP reset (RST) to notify the sender that the TCP connection failed. You can apply the reject action to firewall rules using the default action order, or you can set an exact order using the strict rule ordering method.

There is no additional charge for using this new AWS Network Firewall feature, but you are responsible for any additional logging costs. This feature is available in all commercial AWS Regions and AWS GovCloud (US) Regions where AWS Network Firewall is available. AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up or maintain the underlying infrastructure.
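A minimal sketch of how the new action might be applied: Network Firewall stateful rule groups accept Suricata-compatible rules, and the `reject` keyword in the rule below is the new action that returns a TCP RST on match. The rule group name and rule content are illustrative only.

```python
# Sketch: a stateful rule group whose Suricata-compatible rule uses the new
# "reject" action, so matched TCP flows get a TCP RST instead of a silent drop.
rule = 'reject tcp $HOME_NET any -> any 23 (msg:"Reject outbound telnet"; sid:100001; rev:1;)'

rule_group = {
    "RuleGroupName": "reject-telnet",  # hypothetical name
    "Type": "STATEFUL",
    "Capacity": 10,
    "RuleGroup": {"RulesSource": {"RulesString": rule}},
}
# boto3.client("network-firewall").create_rule_group(**rule_group)
```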

Amazon S3 Storage Lens introduces tiered pricing for cost-effective monitoring at scale

Amazon S3 Storage Lens now offers lower pricing tiers for customers with 25 billion or more objects, making continuous monitoring of large storage footprints more cost effective. Customers with hundreds of billions of objects will now see progressively lower prices, saving up to 40% on organization-wide storage monitoring.

The first 25 billion objects are billed at $0.20 per million objects monitored per month, the next 75 billion at $0.16, and all objects beyond 100 billion at $0.12. This new pricing takes effect in the monthly billing cycle starting January 1, 2023. S3 Storage Lens is pre-configured with 28 free metrics by default for all customers, with 14 days of historical data.

By upgrading to S3 Storage Lens advanced metrics and recommendations, customers can receive 35 additional metrics with 15 months of historical data, providing insights related to activity, deeper cost optimization, data protection, and detailed status codes.
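The tiered rates above translate into a simple cost estimate. This sketch assumes the rates apply per million objects monitored per month, as quoted:

```python
def storage_lens_monthly_cost(objects: float) -> float:
    """Estimate S3 Storage Lens advanced-metrics cost in USD/month under the
    tiered rates quoted above ($ per million objects monitored per month)."""
    tiers = [(25e9, 0.20), (75e9, 0.16), (float("inf"), 0.12)]
    cost, remaining = 0.0, objects
    for tier_size, rate in tiers:
        billed = min(remaining, tier_size)
        cost += billed / 1e6 * rate
        remaining -= billed
        if remaining <= 0:
            break
    return cost

# 150B objects: 25B @ $0.20 + 75B @ $0.16 + 50B @ $0.12 = $23,000/month
estimate = storage_lens_monthly_cost(150e9)
```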

Amazon Kendra launches Kendra Intelligent Ranking for self-managed OpenSearch

Amazon Kendra is an intelligent search service powered by machine learning that enables organizations to provide more relevant information to customers and employees when they need it.

This week, AWS announced the launch of Amazon Kendra Intelligent Ranking for self-managed OpenSearch, allowing OpenSearch users with document search applications (e.g., help pages, documentation, informational web sites, internal wikis) to improve the quality of their search results.

With Amazon Kendra’s Intelligent Ranking plugin for self-managed OpenSearch, developers can leverage Kendra’s ML-powered semantic ranking technology to quickly improve the quality of their OpenSearch search results, without any ML expertise. To enable Intelligent Ranking, developers can either turn the feature on for a given OpenSearch index or apply intelligent ranking at query time via the query DSL, as needed.

Developers can also use the new search comparison UI in the OpenSearch dashboard to help them assess the search results improvements over the default OpenSearch ranker.
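A rough sketch of the query-time approach: the re-ranking request rides in the query body's `ext` section. The exact payload shape and the `title_field`/`body_field` property names here are assumptions based on the plugin's result-transformer interface, so verify against the plugin documentation before use.

```python
# Sketch of an OpenSearch query body requesting Kendra Intelligent Ranking
# at query time. The "ext" payload shape is an assumption; field names
# ("title", "content") are placeholders for your index mapping.
query_body = {
    "query": {"match": {"content": "how do I configure SSO?"}},
    "size": 25,
    "ext": {
        "search_configuration": {
            "result_transformer": {
                "kendra_intelligent_ranking": {
                    "order": 1,
                    "properties": {"title_field": "title", "body_field": "content"},
                }
            }
        }
    },
}
# opensearchpy.OpenSearch(...).search(index="docs", body=query_body)
```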

Introducing Amazon EMR Serverless Custom images: Bring your own libraries and application dependencies

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. This week AWS announced that EMR Serverless now allows you to customize images for Apache Spark and Hive. This means that you can package application dependencies or custom code in the image, simplifying running Spark and Hive workloads.

Running custom images simplifies many big data analytics use cases. For example, data engineers can customize the default release image to package common dependencies, custom code, specific Java or Python versions, or SSL certificates required by workloads. They can then store these customized images in Amazon Elastic Container Registry (Amazon ECR), making it easy to run Spark workloads with custom dependencies.

Security engineers can scan these images to comply with organizational standards. Data scientists can customize runtime images to include proprietary libraries or specific Python packages. Further, EMR Serverless releases can be integrated directly with your organization's Docker build, test, and deployment processes, simplifying continuous integration and continuous delivery (CI/CD) of applications.
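A minimal sketch of pointing an EMR Serverless application at a custom image, assuming the `imageConfiguration` parameter of `create_application` and a release label that supports custom images; the image URI and application name are placeholders.

```python
# Sketch: create an EMR Serverless Spark application from a custom ECR image.
# Image URI and name are hypothetical placeholders.
app_params = {
    "name": "spark-custom-deps",
    "releaseLabel": "emr-6.9.0",  # assumed release supporting custom images
    "type": "SPARK",
    "imageConfiguration": {
        "imageUri": "123456789012.dkr.ecr.us-east-1.amazonaws.com/emr-custom:latest"
    },
}
# boto3.client("emr-serverless").create_application(**app_params)
```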

Amazon EC2 Auto Scaling now forecasts frequently for more accurate predictive scaling

Amazon EC2 Auto Scaling now produces predictive scaling policy forecasts four times per day, up from once per day. Reducing the forecast interval from 24 hours to 6 hours produces more accurate predictive scaling policies that adapt quickly to changing demand trends. All customers will benefit from this new default for predictive scaling. Customers use predictive scaling policies to scale out the capacity of their Auto Scaling groups based on forecasted demand, improving application availability and alleviating the need for costly capacity buffers to accommodate spikes in demand.

Predictive Scaling policies are best-suited for applications that experience repeatable patterns of demand changes - such as daily spikes in user traffic or service demand. Predictive scaling learns from historical demand patterns and scales out capacity in advance of the forecasted demand. Predictive scaling policies are tuned to account for time consuming initialization steps to prepare instances to serve traffic - such as loading gigabytes of data, provisioning services, or running custom scripts. 

Using the Auto Scaling console, customers can validate the forecast accuracy by visually comparing the generated predictive scaling policy against actual demand. Predictive scaling can be enabled by customers through the Amazon EC2 Auto Scaling console, SDK, CLI, AWS CloudFormation, or AWS Cloud Development Kit.
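A sketch of what a predictive scaling policy looks like when attached via the API. The group and policy names are placeholders; `SchedulingBufferTime` is the knob that accounts for the slow initialization steps mentioned above.

```python
# Sketch: attach a predictive scaling policy to an Auto Scaling group.
# Group/policy names are placeholders.
policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "predictive-cpu",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [{
            "TargetValue": 50.0,  # target average CPU %
            "PredefinedMetricPairSpecification": {
                "PredefinedMetricType": "ASGCPUUtilization"
            },
        }],
        "Mode": "ForecastAndScale",       # forecast AND act on it
        "SchedulingBufferTime": 300,      # launch instances 5 min ahead of demand
    },
}
# boto3.client("autoscaling").put_scaling_policy(**policy)
```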

AWS App Runner now supports retrieving secrets and configuration from AWS Secrets Manager and AWS Systems Manager

AWS App Runner now supports referencing secrets and configuration data stored in AWS Secrets Manager and AWS Systems Manager Parameter Store as runtime environment variables in your App Runner services. This keeps sensitive values such as database credentials and API keys out of your container images and service configuration, and lets you manage them centrally, with rotation and access control handled by the source services.
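A minimal sketch of the service configuration, assuming the `RuntimeEnvironmentSecrets` field of App Runner's image configuration maps environment variable names to Secrets Manager or SSM Parameter Store ARNs; the ARNs below are placeholders.

```python
# Sketch: reference a Secrets Manager secret and an SSM parameter as runtime
# environment variables in an App Runner service. ARNs are placeholders.
image_config = {
    "Port": "8080",
    "RuntimeEnvironmentSecrets": {
        "DB_PASSWORD": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-pass",
        "API_ENDPOINT": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/api-endpoint",
    },
}
# Passed inside SourceConfiguration.ImageRepository.ImageConfiguration to
# boto3.client("apprunner").create_service(...) or update_service(...)
```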

Amazon MWAA now supports Apache Airflow version 2.4 with Python 3.10

You can now create Apache Airflow version 2.4 environments on Amazon Managed Workflows for Apache Airflow (MWAA) with Python 3.10 support.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. With Apache Airflow 2.4 on Amazon MWAA, customers can enjoy the same scalability, availability, security, and ease of management that Amazon MWAA offers, with the improvements of Apache Airflow 2.4. Further, with Airflow 2.4, Amazon MWAA has upgraded to Python 3.10, providing support for newer Python libraries, features, and improvements.

You can launch a new Apache Airflow 2.4 environment on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about Apache Airflow 2.4 visit the Amazon MWAA documentation and the Apache Airflow 2.4 change log in the Apache Airflow documentation.

Amazon S3 now automatically encrypts all new objects

Amazon S3 now automatically applies S3 managed server-side encryption (SSE-S3) as a base level of encryption to all new objects added to S3, at no additional cost and with no impact on performance. SSE-S3 uses 256-bit Advanced Encryption Standard (AES-256) and has been configured for trillions of objects by customers.

This new base level of encryption helps customers meet their encryption requirements, with no changes to applications. Alternatively, customers can still choose to update this default configuration using customer-provided encryption keys (SSE-C) or AWS Key Management Service keys (SSE-KMS).

Since 2017, AWS customers have used the S3 Default Encryption feature to apply a base level of encryption for every object added to their buckets. S3 Default Encryption is an optional bucket-level setting that customers use to establish a default level of encryption.

With this update, Amazon S3 will automatically apply SSE-S3 as the base level of Default Encryption setting for all new buckets and for existing buckets without any customer configured encryption setting. Existing buckets currently using S3 Default Encryption configuration will not change.

Customers can continue to update the Default Encryption configuration but can no longer remove this setting from any S3 bucket to disable automatic encryption on new objects. As a result, all new data uploaded to S3 will be encrypted at rest.

The automatic encryption status for new object uploads and S3 Default Encryption configuration is available in AWS CloudTrail logs. Over the next few weeks, this status will begin to show in the S3 management console, S3 Inventory, S3 Storage Lens, and as an additional S3 API header in the AWS CLI and AWS SDK.

AWS will update the S3 documentation once this additional information is available in all AWS Regions. This update is available in all AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions. For detailed information on the expected experience, see the AWS News Blog post for this new base level of encryption or visit the Amazon S3 encryption documentation.
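To see the new default in practice, a bucket's default-encryption configuration can be inspected with `get_bucket_encryption`. This sketch checks the response shape against SSE-S3; the sample response is hard-coded for illustration.

```python
# Sketch: check whether a bucket's default encryption is SSE-S3 ("AES256").
# With this change, buckets without a customer-configured setting report SSE-S3.
def uses_sse_s3(encryption_config: dict) -> bool:
    rules = encryption_config["ServerSideEncryptionConfiguration"]["Rules"]
    algo = rules[0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
    return algo == "AES256"

# cfg = boto3.client("s3").get_bucket_encryption(Bucket="my-bucket")  # placeholder bucket
sample = {"ServerSideEncryptionConfiguration": {"Rules": [
    {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}}
print(uses_sse_s3(sample))  # → True
```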

Amazon Route 53 Resolver endpoints for hybrid cloud are now available in the Middle East (UAE) Region

You can now use Amazon Route 53 Resolver endpoints for hybrid cloud configurations in the Middle East (UAE) Region.

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) service. Amazon Route 53 Resolver endpoints make hybrid cloud configurations easier to manage by enabling seamless DNS query resolution across your entire hybrid cloud. Create DNS endpoints and conditional forwarding rules to allow resolution of DNS namespaces between your on-premises data center and Amazon Virtual Private Cloud (Amazon VPC).

For detailed information on how you can manage Resolver endpoints using the console or the API, refer to our guides (Console Reference Guide and API Reference Guide).

Announcing Amazon Elastic Fabric Adapter Installer v1.21

AWS released a new version of the Elastic Fabric Adapter (EFA) Installer (v1.21). This version introduces support for the Rocky Linux 9.0 and OpenSUSE Leap 15.4 operating systems. Support for Rocky Linux 9.0 was requested by customers who deploy High Performance Computing applications as well as by customers of AWS Elemental.

The EFA Installer v1.21 can be used in any region on any of the many instance types where EFA is available. To learn how to use EFA for your workloads, please refer to the user guide.

EFA is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS, such as High Performance Computing and Machine Learning. Its custom-built operating system (OS) bypass hardware interface is designed to enhance the performance of inter-instance communications, which is critical to scaling these applications.

Amazon EC2 R6g, R6gd, and M6gd instances are now available in additional regions

Starting this week, Amazon Elastic Compute Cloud (Amazon EC2) R6g instances are available in AWS Region Asia Pacific (Osaka). R6gd instances are available in AWS Regions Asia Pacific (Osaka), Asia Pacific (Seoul) and Europe (Stockholm). 

M6gd instances are available in AWS Regions Asia Pacific (Osaka) and Africa (Cape Town). These instances are powered by AWS Graviton2 processors, and they are built on AWS Nitro System. The Nitro System is a collection of AWS designed hardware and software innovations that enables the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

R6g instances are built for running memory-intensive workloads such as open-source databases, in-memory caches, and real time big data analytics. R6gd instances provide local SSD storage and are ideal for memory-intensive workloads that need access to high-speed, low latency storage. M6gd offer a balance of compute, memory, networking, and local SSD resources for a broad set of workloads.

They are built for applications such as application servers, microservices, gaming servers, mid-size data stores, and caching fleets that also need access to high-speed, low latency storage.

Application Auto Scaling now offers better visibility into scaling decisions

Application Auto Scaling now offers customers more visibility into the scaling decisions it makes for an auto scaled resource. Application Auto Scaling (AAS) is a service that offers a standardized scaling experience across 13 AWS services beyond Amazon EC2, for example Amazon DynamoDB provisioned read and write capacity, and Amazon Elastic Container Service (ECS) services.

Application Auto Scaling takes scaling actions based on customer-defined scaling policies that act as a guideline for scaling decisions. Until now, customers only received details about successful scaling actions, not about deferred ones. With this feature, customers get more insight into scaling decisions that do not lead to a scaling action, in both descriptive and machine-readable formats.

The feature is primarily useful for understanding why a scaling action was not triggered. Sometimes the scaling policy decides not to take a scaling action, giving the impression that it ignored the customer's guideline; in fact, this is done to safeguard customers, or because of a configuration that customers need to change.

For example, AAS always respects the customer-defined minimum and maximum capacity range, which can block a scaling action. Until now, customers didn't have enough visibility into such decisions and often had to raise support tickets to understand the issue, delaying remediation.
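A sketch of retrieving these explanations, assuming the `IncludeNotScaledActivities` flag on `describe_scaling_activities` is what surfaces decisions that produced no scaling action; the DynamoDB table name is a placeholder.

```python
# Sketch: ask Application Auto Scaling why it did (or did not) scale.
# Activities returned with this flag can carry a NotScaledReasons explanation.
request = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/Orders",  # hypothetical table
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "IncludeNotScaledActivities": True,
}
# acts = boto3.client("application-autoscaling").describe_scaling_activities(**request)
# for a in acts["ScalingActivities"]:
#     print(a["StatusCode"], a.get("NotScaledReasons"))
```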

Amazon CloudWatch Logs removes Log Stream transaction quota and SequenceToken requirement

Starting this week, Amazon CloudWatch Logs is removing the 5 requests per second per log stream quota when calling the Amazon CloudWatch Logs PutLogEvents API. There is no new per-log-stream quota. With this change, AWS has removed the need to split your log ingestion across multiple log streams to prevent log stream throttling.

Amazon CloudWatch Logs is also removing the requirement to provide a sequence token when calling the PutLogEvents API. CloudWatch Logs will still accept PutLogEvents requests that include a sequence token, and will return a sequence token in the response, to maintain backwards compatibility.

Your PutLogEvents API calls will be accepted, and CloudWatch Logs won't return InvalidSequenceToken errors even if you provide an invalid sequence token. AWS expects this change to simplify your integration with CloudWatch Logs because you won't need to coordinate across different clients writing to the same log stream.
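The simplification is visible in the call itself: before this change, a writer had to thread the `sequenceToken` from each response into the next request. A minimal sketch of the new token-free call (group and stream names are placeholders):

```python
# Sketch: PutLogEvents no longer needs a sequenceToken, so multiple writers
# can target the same stream without coordinating. Names are placeholders.
import time

request = {
    "logGroupName": "/my-app/prod",
    "logStreamName": "instance-1",
    "logEvents": [
        {"timestamp": int(time.time() * 1000), "message": "app started"},
    ],
    # Note: no "sequenceToken" key; it was required for non-empty streams before.
}
# boto3.client("logs").put_log_events(**request)
```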

AWS Glue Studio Job Notebooks are now available in the Asia Pacific (Jakarta) AWS region

AWS Glue Studio Job Notebooks provide interactive job authoring in AWS Glue, which helps simplify the process of developing data integration jobs. Job Notebooks also provide a serverless, built-in interface for AWS Glue Interactive Sessions, a feature of AWS Glue that allows customers to run interactive Apache Spark workloads on demand.

AWS Glue Studio Job Notebooks need minimal setup so developers can get started quickly, and they feature one-click conversion of notebooks into AWS Glue data integration jobs. They also support live data integration directly from the notebook, fast startup times, and built-in cost management. Job Notebooks add no additional cost; customers pay only for the Interactive Sessions they use while authoring.

Backtrack Support for Aurora MySQL Version 3 (Compatible with MySQL 8.0) is generally available

Amazon Aurora MySQL Version 3 (Compatible with MySQL 8.0) now offers support for Backtrack. Backtrack allows you to move your database to a prior point in time without needing to restore from a backup, and it completes within seconds, even for large databases.

When you enable Aurora Backtrack in the RDS management console, you specify how long to retain data records, and you pay for the space these records use. For example, you could set up Backtrack to allow you to move your database up to 72 hours back. Backtrack is also useful for development and testing, particularly in situations where your test deletes or otherwise invalidates the data. Simply backtrack to the original database state, and you're ready for another test run.

Please refer to the Aurora documentation for more details and the AWS Region Table for complete regional availability.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
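The enable-then-rewind flow described above can be sketched with the RDS API, assuming the `BacktrackWindow` parameter of `modify_db_cluster` and the `backtrack_db_cluster` operation; the cluster identifier and target timestamp are placeholders.

```python
# Sketch: enable a 72-hour backtrack window, then rewind the cluster one hour.
# Cluster identifier and timestamps are placeholders.
from datetime import datetime, timedelta, timezone

enable = {
    "DBClusterIdentifier": "my-aurora-mysql8",
    "BacktrackWindow": 72 * 3600,  # seconds of change records to retain (billed)
}
rewind = {
    "DBClusterIdentifier": "my-aurora-mysql8",
    "BacktrackTo": datetime.now(timezone.utc) - timedelta(hours=1),
}
# rds = boto3.client("rds")
# rds.modify_db_cluster(**enable)        # turn Backtrack on
# rds.backtrack_db_cluster(**rewind)     # move the database back in time
```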

Amazon Kinesis Data Streams for Amazon DynamoDB is now available in 11 additional AWS Regions

Amazon Kinesis Data Streams for Amazon DynamoDB is now available in 11 additional AWS Regions around the world. With Amazon Kinesis Data Streams, you can capture item-level changes in your DynamoDB tables as a Kinesis data stream with a single click in the DynamoDB console, or by using the AWS API, CLI or CloudFormation templates.

You can use this capability to build advanced streaming applications with Amazon Kinesis services. For example, Amazon Kinesis Data Analytics reduces the complexity of building, managing, and integrating with Apache Flink, and provides built-in functions to filter, aggregate, and transform streaming data for advanced analytics.

You also can use Amazon Kinesis Data Firehose and take advantage of managed streaming delivery of DynamoDB table data to other AWS services such as Amazon OpenSearch Service, Amazon Redshift, and Amazon S3. You don’t have to write or maintain complex code to load and synchronize your data into these services. 

With this launch, Amazon Kinesis Data Streams for DynamoDB is available in the Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Osaka), Europe (Spain), Europe (Zurich), Middle East (Bahrain), Middle East (UAE) Regions, AWS GovCloud (US-East), and AWS GovCloud (US-West). 
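Via the API, turning on the integration is a single call pairing a table with a stream. A minimal sketch (table name and stream ARN are placeholders):

```python
# Sketch: stream item-level changes from a DynamoDB table into a Kinesis
# data stream. Table name and stream ARN are placeholders.
request = {
    "TableName": "Orders",
    "StreamArn": "arn:aws:kinesis:me-central-1:123456789012:stream/orders-changes",
}
# boto3.client("dynamodb").enable_kinesis_streaming_destination(**request)
```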

Amazon Neptune announces graph-explorer, an open-source visual exploration tool for low-code users

Today, Amazon Neptune announced a new open-source low-code visual exploration tool, the graph-explorer, that is available under the Apache-2.0 license. With this launch, customers can effortlessly browse either labeled property graphs (LPG) or Resource Description Framework (RDF) data in a graph database and discover connections between data without having to write graph queries. 

The graph-explorer open source tool provides a React-based web application that can be deployed as a Docker image to visualize graph data. You can connect to Amazon Neptune or another graph database that provides an Apache TinkerPop Gremlin or SPARQL 1.1 endpoint.

You can search the data quickly using faceted search filters and interactively explore connections around nodes and edges. You can also customize the graph layout, colors, icons, and default properties to display for nodes and edges, and save images for future use.

AWS Private Certificate Authority publishes Matter PKI Compliance Customer Guide

This week, the Matter PKI Compliance Customer Guide for AWS Private Certificate Authority (AWS Private CA) became available on AWS Artifact. This Guide provides information about how you can use AWS Private CA to help you create and operate Matter-compliant Certificate Authorities (CAs).

Matter is a new smart home connectivity standard, governed by the Connectivity Standard Alliance, that allows smart home devices from different vendors to work together. For smart home devices to be Matter-compliant, manufacturers are required to certify these devices and provision them with Device Attestation Certificates (DACs).

Matter has two sets of requirements for the issuance of DACs. First, the certificates provisioned on Matter enabled devices must adhere to the Matter format, standard X.509 certificates with additional, Matter-specific attributes. Second, the CAs issuing the device certificates must be operated in a manner compliant with Matter policies.

Customers can already use AWS Private CA to issue certificates that follow the Matter format. AWS Private CA customers can now use the Matter PKI Compliance Customer Guide to help identify operational controls needed to create and manage a CA compliant with Matter policies, saving time and development effort.

Amazon CloudFront now supports the removal of response headers

Amazon CloudFront now supports the removal of response headers using response header policies, giving customers a native capability to remove specified headers served from CloudFront. This new capability, along with the existing ability to add and override headers, provides comprehensive flexibility for customers to customize response headers.

Until today, response header policies have allowed customers to specify HTTP headers that Amazon CloudFront adds to responses sent to viewers, including CORS headers, security headers, or custom headers. Now, customers can also use response header policies to selectively remove headers, hiding headers that are needed for application logic or CDN-specific caching but should not be exposed to viewers.

For example, a customer may have a blog application that sends a "x-powered-by" header, which, if revealed, could be targeted by attackers for specific known vulnerabilities of the technology. To protect against this, the customer can use a response header policy to prevent it from being sent to viewers. Additionally, an origin may generate a "Vary" header to indicate headers that have influenced the origin response, but this information may not be needed for viewers and can be removed using a response header policy.

Removing headers using response header policies is now available through the CloudFront Console, AWS SDKs, and the AWS CLI. There are no additional fees associated with this feature.
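The "x-powered-by" example above can be sketched as a policy configuration, assuming the `RemoveHeadersConfig` section of a response headers policy; the policy name is a placeholder.

```python
# Sketch: a response headers policy that strips "x-powered-by" and "Vary"
# before responses reach viewers. Policy name is a placeholder.
policy_config = {
    "Name": "strip-internal-headers",
    "RemoveHeadersConfig": {
        "Quantity": 2,
        "Items": [{"Header": "x-powered-by"}, {"Header": "Vary"}],
    },
}
# boto3.client("cloudfront").create_response_headers_policy(
#     ResponseHeadersPolicyConfig=policy_config)
```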

Google Cloud Releases and Updates
Source: cloud.google.com

 

Apigee Integration

Cloud Scheduler trigger (Preview)

The Cloud Scheduler trigger lets you schedule your integration executions for defined time periods or regular intervals across multiple regions. Cloud Scheduler triggers leverage the Cloud Scheduler services to provide a fully managed enterprise-grade cron job scheduler within Apigee Integration.

For more information, see Cloud Scheduler trigger.

Apigee UI 

GA release of the new Proxy Editor

The new Proxy Editor simplifies the process of adding policies to an API proxy, configuring those policies, and then deploying the proxy. See Introducing the new Proxy Editor.

BigQuery 

Cloud Scheduler trigger (Preview)

The Cloud Scheduler trigger lets you schedule your integration executions for defined time periods or regular intervals across multiple regions. Cloud Scheduler triggers leverage the Cloud Scheduler services to provide a fully managed enterprise-grade cron job scheduler within Apigee Integration.

For more information, see Cloud Scheduler trigger.

More than 20 BigQuery ML components for Vertex AI Managed Pipelines are now generally available, letting AI/ML users run BigQuery ML operations as managed steps within their Vertex AI pipelines.

The following generally available (GA) features have been added for sessions:

  • In a session, temporary functions are now maintained until the session ends.

  • In a session, statements that include the TEMP keyword can also include the OR REPLACE and IF NOT EXISTS keywords.

Chronicle

Multiple enhancements were made to the UDM Search capability, including the additions of search templates and shared searches. You can now do the following in UDM Search:

  • Use Chronicle-provided pre-made search templates in Quick Searches and Search Manager
  • Create, edit, and share searches in Search Manager (an enhancement to Saved Searches)
  • Use reference lists in UDM searches
 

Cloud Composer

The Cloud Composer 1.20.3 and 2.1.3 release started on January 10, 2023. The rollout to all regions is still in progress, so the listed changes and features might not be available in some regions yet.

Fixed a problem where the number of active workers was reported as 0 after an environment's cluster update.

Cloud Composer 1.20.3 and 2.1.3 images are available:

  • composer-1.20.3-airflow-1.10.15 (default)
  • composer-1.20.3-airflow-2.2.5
  • composer-1.20.3-airflow-2.3.4
  • composer-2.1.3-airflow-2.2.5
  • composer-2.1.3-airflow-2.3.4 (default)

Cloud Composer versions 2.0.1, 2.0.0, and 1.17.8 have reached their end of full support period.

Cloud Functions

Cloud Functions has added support for a new runtime, Python 3.11, at the Preview release level.

Cloud Monitoring

Managed Service for Prometheus: Dashboards for exporter integrations are available and automatically installed when you configure the integration. You can also view static previews of dashboards without configuring the integration. For more information, see the exporter documentation at Set up commonly used exporters.

Charts defined by Prometheus Query Language (PromQL) now support dashboard-wide filters and can be configured to support template variables. For more information, see Create a permanent filter.

Cloud Run

Terraform resources for Cloud Run Services and Cloud Run Jobs based on the Cloud Run Admin API v2 are now generally available (GA).

Cloud SQL for PostgreSQL 

For new Cloud SQL instances that have point-in-time recovery enabled or for existing instances that enable point-in-time recovery, Cloud SQL for PostgreSQL now stores write-ahead logs in Google Cloud Storage.

Before this release, write-ahead logs, which are used to perform point-in-time recovery, were stored on disk. Now, logs are stored in Google Cloud Storage in the same region as the instances.

All other existing instances that have point-in-time recovery enabled will continue to have their logs stored on disk. The change to storing logs in Google Cloud Storage will be made available at a later time.

Cloud SQL for SQL Server

You can use striped import and striped export to reduce the time needed for BAK file operations. This feature is generally available.

Compute Engine 

Preview: You can now simulate host maintenance events on sole-tenant nodes.

For more information, see Simulate host maintenance events on sole-tenant nodes.

Preview: Use the Google Cloud console to rename VMs. For more information, see Rename a VM.

Config Connector 

Config Connector version 1.99.0 is now available.

Added support for DataCatalogPolicyTag resource. This resource has been auto-generated and is in alpha stability.

Added support for TagsTagKey resource. This resource has been auto-generated and is in alpha stability.

Added support for TagsTagValue resource. This resource has been auto-generated and is in alpha stability.

Fixed export error for IAMCustomRole in config-connector CLI with --resource-format=terraform.

Added fields spec.configmanagement.oci and spec.mesh.controlPlane in GKEHubFeatureMembership.

Added field spec.skipAwaitRollout in OSConfigOSPolicyAssignment.

Removed field spec.authorizationPolicyRef in NetworkServicesGateway (Alpha).

Added field spec.deletionPolicy in BigtableGCPolicy.

Added field spec.deletionProtection in BigtableTable.

Added field spec.cdnPolicy.cacheKeyPolicy.includeHttpHeaders in ComputeBackendService.

Added fields spec.privateIpAddressRef, spec.redundantInterfaceRef, spec.subnetworkRef in ComputeRouterInterface.

Added fields spec.recaptchaOptionsConfig, spec.rule.headerAction, spec.rule.preconfiguredWafConfig in ComputeSecurityPolicy.

Added fields spec.clusterAutoscaling.autoProvisioningDefaults.management, spec.clusterAutoscaling.autoProvisioningDefaults.shieldedInstanceConfig spec.clusterAutoscaling.autoProvisioningDefaults.upgradeSettings, spec.gatewayApiConfig, spec.masterAuthorizedNetworksConfig.gcpPublicCidrsAccessEnabled, spec.nodeConfig.loggingVariant, spec.nodeConfig.resourceLabels, spec.nodePoolDefaults.nodeConfigDefaults.loggingVariant, spec.privateClusterConfig.privateEndpointSubnetworkRef in ContainerCluster.

Added fields spec.networkConfig.enablePrivateNodes, spec.nodeConfig.loggingVariant, spec.nodeConfig.resourceLabels, spec.upgradeSettings.blueGreenSettings, spec.upgradeSettings.strategy in ContainerNodePool.

Added field spec.privateVisibilityConfig.gkeClustersRef in DNSManagedZone.

Added field spec.mesh.controlPlane in GKEHubFeatureMembership.

Added field spec.deletionPolicy in SQLDatabase.

Added fields spec.settings.connectorEnforcement, spec.settings.denyMaintenancePeriod, spec.settings.insightsConfig.queryPlansPerMinute in SQLInstance.

Added field spec.autoclass in StorageBucket.

Supported the regional spec.defaultRouteAction.requestMirrorPolicy.backendServiceRef, spec.defaultRouteAction.weightedBackendServices.backendServiceRef for the regional ComputeURLMap resources.

Field spec.labels in CloudIdentityGroup has become mutable.

Field spec.ipv6AccessType in ComputeSubnetwork has become mutable.

Extended faster reconciliation of resources with dependencies to support IAMPartialPolicy.

Datastream

The validate_only and force parameters were added to the projects.locations.connectionProfiles resource in the Datastream API. To learn more, see the Datastream API reference documentation.
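In the REST surface of the Datastream API, snake_case fields typically map to camelCase query parameters, so a request using the new parameters might be built as in this hedged sketch (the parameter names validateOnly and force, and the project/location values, are assumptions here; confirm them against the Datastream API reference):

```python
# Sketch: construct a Datastream connectionProfiles request URL that
# carries the new parameters as query parameters. No request is sent;
# this only shows where the parameters would go.
from urllib.parse import urlencode

def connection_profile_url(project: str, location: str,
                           validate_only: bool = False,
                           force: bool = False) -> str:
    base = (f"https://datastream.googleapis.com/v1/projects/{project}"
            f"/locations/{location}/connectionProfiles")
    params = {}
    if validate_only:
        params["validateOnly"] = "true"  # assumed camelCase mapping of validate_only
    if force:
        params["force"] = "true"
    return base + ("?" + urlencode(params) if params else "")

print(connection_profile_url("my-project", "us-central1", validate_only=True))
```

The exact semantics of each parameter (for example, whether validate_only performs a dry run) are documented in the API reference linked above.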

Document AI

The Form Parser Release Candidate version has been renamed to pretrained-form-parser-v2.0-2022-11-10. See the Document AI release notes for December 12, 2022 for more information about this release.

Error Reporting

Errors generated by GKE applications (that is, applications with a MonitoredResource type of k8s_container) now support additional labels for metadata extraction. The values of the following labels will now appear in the resource filter and in various tables and displays on Error Reporting pages in the Google Cloud console:

  • The app value from your GKE YAML configuration is now used for the primary resource label (also referred to as "service").
  • The pod_name label of the k8s_container monitored-resource type is now used for the secondary resource label (also referred to as "version").

GCP continues to support the YAML metadata labels k8s-pod/serving_knative_dev/service and k8s-pod/serving_knative_dev/revision. For users already using them, these labels take priority over the k8s-pod/app and pod_name labels described in this note, but Google encourages users to begin using the new labels.

If you do not set the k8s-pod/app label, GKE application errors continue to use the default service name of gke_instances.
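The label-priority rules described above can be summarised in a short illustrative sketch (this is not an official API, just a restatement of the precedence order in code):

```python
# Illustration of Error Reporting's service-name resolution for GKE
# applications: Knative labels win when present, then the new
# k8s-pod/app label, then the default service name.
def resolve_service(labels: dict) -> str:
    if "k8s-pod/serving_knative_dev/service" in labels:
        return labels["k8s-pod/serving_knative_dev/service"]
    if "k8s-pod/app" in labels:
        return labels["k8s-pod/app"]
    return "gke_instances"  # default when k8s-pod/app is not set

assert resolve_service({"k8s-pod/app": "checkout"}) == "checkout"
assert resolve_service({}) == "gke_instances"
```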

Firestore

The Firestore indexes pages in the Google Cloud and Firebase consoles now show the __name__ field in each composite index definition. The __name__ field is added by default to each index definition and affects the sorting of results. The __name__ field was always part of each index definition but was previously hidden by the console.
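To show where __name__ sits in a definition, here is a hypothetical composite index entry (expressed as a Python dict in the shape of a firestore.indexes.json field list; the collection and field names are made up for illustration):

```python
# Hypothetical composite index definition. The __name__ entry is the
# implicit final field the console now displays; it acts as the
# tie-breaker for documents with equal values in the other fields.
composite_index = {
    "collectionGroup": "orders",   # example collection, not from the release note
    "queryScope": "COLLECTION",
    "fields": [
        {"fieldPath": "status",   "order": "ASCENDING"},
        {"fieldPath": "created",  "order": "DESCENDING"},
        {"fieldPath": "__name__", "order": "DESCENDING"},  # previously hidden
    ],
}
```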

GKE

Two new vulnerabilities (CVE-2022-3786 and CVE-2022-3602) have been discovered in OpenSSL v3.0.6 that can potentially cause a crash. While these are rated High in the NVD database, GKE endpoints use BoringSSL or an older version of OpenSSL that is not affected, so the rating has been reduced to Medium for GKE. For more information, refer to the GCP-2022-026 security bulletin.

The release notes for 1.26 available in the Rapid channel were modified with an additional notable change:

Windows Server 2022 OS image is generally available on GKE. You can now create Windows Node pools with Windows Server 2022 OS images using the command line. For more information, see Creating a cluster using Windows Server node pools.


Network Intelligence Center

 You can now configure fine-grained permissions by using Identity and Access Management (IAM) to perform tasks in Network Topology. For more information, see Roles and permissions.

Retail API

Browse search is generally available using Retail Search. Typically, browsing products using site navigation produces results that are all of equal relevance or sorted by best-selling items. Retail Search leverages AI to optimize how browse results are sorted by considering popularity, buyability, and personalization. See About text search and browse search with Retail Search.

Retail Search can now automatically deliver personalized results for your text query searches and browse searches. Results are personalized for each end-user based on their behavior on your site, including each user's history of product views, clicks, additions to carts, and purchases.

You can use the Data Quality panel on the Retail console Data page to get an assessment of whether the data you have imported is sufficient to turn on automatic personalization. See Personalization.

The Page-level Optimization model is now generally available. Page-level Optimization extends Recommendations AI from optimizing for a single recommendation panel at a time to optimizing for an entire page with multiple panels. The Page-level Optimization model selects the contents for each panel and determines the panel order on your page. For more about this feature, see Page-level Optimization.

Traffic Director

gRPC Java releases 1.51.0, 1.51.1, and 1.52.0 have an important bug that can cause them to stop receiving updates from Traffic Director. We encourage users of gRPC Java to avoid these releases and use the older v1.50.x until patch releases with fixes are available. See the public gRPC announcement for more information.
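A build-time guard can catch the affected releases before they ship; this is just an illustration using the version strings from the announcement above, not an official check:

```python
# Flag the gRPC Java releases the announcement warns about.
AFFECTED = {"1.51.0", "1.51.1", "1.52.0"}

def is_safe_grpc_java(version: str) -> bool:
    """Return False for releases known to stop receiving Traffic Director updates."""
    return version not in AFFECTED

assert not is_safe_grpc_java("1.51.1")
assert is_safe_grpc_java("1.50.2")  # 1.50.x is the recommended fallback
```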

Vertex AI 

 Vertex AI Matching Engine is available in the following regions:

  • us-west2 – (Los Angeles)
  • us-west3 – (Salt Lake City)
  • northamerica-northeast1 – (Montréal)
  • northamerica-northeast2 – (Toronto)
  • europe-central2 – (Warsaw)
  • europe-west2 – (London)
  • europe-west3 – (Frankfurt)
  • europe-west6 – (Zurich)
  • asia-east1 – (Taiwan)
  • asia-east2 – (Hong Kong)
  • me-west1 – (Tel Aviv)

To see all of the available locations for Matching Engine, see the Vertex AI Locations page.

VPC Service Controls

 Support for Cloud Tasks is now at General Availability. To learn more, see the Cloud Tasks documentation on setting up a service perimeter using VPC Service Controls.

Workflows

The get_type function, which returns a string indicating an argument's data type, is now available.
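Workflows are written in YAML/JSON, so as an analogy only, get_type behaves roughly like this Python sketch: it maps a value to a string naming its type (the exact strings Workflows returns are documented in its standard library reference, so treat the ones below as assumptions):

```python
# Python analogy (not Workflows syntax) for a get_type-style helper:
# return a string naming the argument's data type.
def get_type(value) -> str:
    if value is None:
        return "null"
    return {bool: "boolean", int: "integer", float: "double",
            str: "string", list: "list", dict: "map"}[type(value)]

assert get_type("abc") == "string"
assert get_type({"a": 1}) == "map"
```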

 

 


Microsoft Azure Releases And Updates
Source: azure.microsoft.com

 

Public Preview: Azure Automation Visual Studio Code Extension

Use the Azure Automation extension to author and manage automation runbooks locally in VS Code.

New integration with Azure Active Directory (AD) to export and import data to Azure Managed Disks improves security

Generally available: Azure Ultra Disk Storage in Switzerland North and Korea South

Now available in the Korea South and Switzerland North regions, Azure Ultra Disk Storage provides high performance along with sub-millisecond latency for your most demanding workloads.

Public preview: Azure Synapse Runtime for Apache Spark 3.3

You can now create Azure Synapse Runtime for Apache Spark 3.3. The essential changes include features which come from upgrading Apache Spark to version 3.3.1, and upgrading Delta Lake.

At-scale monitoring for Azure Site Recovery with Backup center

The new enhancements released in Backup center enable you to monitor replicated items, jobs and manage them across subscriptions, resource groups and locations from a single view.

Public Preview: Azure Cosmos DB to Azure Data Explorer Synapse Link

TARGET AVAILABILITY: Q1 2023

Now in public preview: managed ingestion from Azure Cosmos DB to Azure Data Explorer in near real time.

Public preview: Capture Event Hubs data with Stream Analytics no-code editor in Delta Lake format

Stream Analytics no-code editor now supports capturing Event Hubs data into ADLS gen2 with Delta Lake format.

 

Generally Available: Azure Red Hat OpenShift in Brazil Southeast

TARGET AVAILABILITY: Q1 2023

Azure Red Hat OpenShift is now available in the Brazil Southeast region.

General availability: Apache log4J2 sink to Azure Data Explorer

Azure Data Explorer now supports ingestion of data from Apache Log4j 2.

General Availability: Azure Sphere support for European Data Boundary

Azure Sphere Security Service EU data processing and storage is now generally available.

Public Preview: IT Service Management Connector (ITSMC) is now certified with ServiceNow Tokyo version

The ITSM connector provides a bi-directional connection between Azure and ITSM tools to help track and resolve issues faster.

Azure Backup for SAP HANA: General availability updates for Dec 2022

In December 2022, the following updates and enhancements were made to Azure Backup for SAP HANA, a Backint-certified database backup solution for SAP HANA databases in Azure VMs. Long term retention for…

Private Preview: Featured Clothing

This insight provides information on key items worn by individuals within a video and the timestamps at which the clothing appears.

General availability: Encryption using CMK for Azure Database for PostgreSQL – Flexible Server

Use infrastructure encryption to add an additional layer of encryption for data at rest using customer-managed keys.

Public preview: Azure Cosmos DB V2 Connector for Power BI

Import data into Power BI dashboards using DirectQuery mode when filtering so aggregations can be pushed down to Azure Cosmos DB to reduce data movement and improve performance.

 

 


Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
 
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure and GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
 
Check out the 14 day free trial here (includes forever free tier):