Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday, November 4th 2022.
Effective immediately, AWS Lambda customers in AWS GovCloud (US) Regions can configure up to 10,240 MB of ephemeral storage for their Lambda functions, a 20x increase compared to the previous limit of 512 MB.
With this release, you can now control the amount of ephemeral storage a function uses for reading or writing data, enabling you to use Lambda functions for data intensive workloads such as ETL jobs, financial computations, and machine learning inferences. You can configure ephemeral storage (/tmp) between 512 MB and 10,240 MB using the AWS Management Console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS Cloud Development Kit (AWS CDK), AWS Lambda API, and AWS CloudFormation. With 10 GB container image support, 10 GB function memory, and now 10 GB of ephemeral function storage, you can bring larger files to /tmp, which makes it easier to run data intensive workloads.
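As a rough sketch, configuring the larger /tmp size with the AWS SDK for Python comes down to passing the new EphemeralStorage setting to update-function-configuration. The function name below is a placeholder, and the actual boto3 call is left commented so the snippet stays self-contained:

```python
def ephemeral_storage_params(function_name: str, size_mb: int) -> dict:
    """Build update-function-configuration parameters for a larger /tmp.

    Lambda accepts ephemeral storage sizes from 512 MB to 10,240 MB.
    """
    if not 512 <= size_mb <= 10240:
        raise ValueError("ephemeral storage must be between 512 and 10240 MB")
    return {
        "FunctionName": function_name,
        "EphemeralStorage": {"Size": size_mb},
    }

# Placeholder function name; apply with:
#   boto3.client("lambda").update_function_configuration(**params)
params = ephemeral_storage_params("my-etl-function", 10240)
```

The same EphemeralStorage setting is available through the CLI, SAM, CDK, and CloudFormation, as noted above.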
Amazon Simple Notification Service (Amazon SNS) recently launched the public preview of message data protection. Amazon SNS message data protection is a new set of capabilities that leverage pattern matching, machine learning models, and content policies to help security and engineering teams facilitate real-time data protection in their applications that use Amazon SNS to exchange high volumes of data. Now, with the general availability launch, you can de-identify data within a message payload in real-time via data redaction, or masking.
Amazon SNS is a fully managed, reliable, and highly available messaging service that enables you to connect distributed systems or send messages directly to users via SMS, mobile push, and email. With message data protection for Amazon SNS, you can discover and protect certain types of personally identifiable information (PII) and protected health information (PHI) data that is in motion between your applications. This can help support your compliance objectives, for example, with regulations such as the Health Insurance Portability and Accountability Act (HIPAA), General Data Protection Regulation (GDPR), Payment Card Industry Data Security Standard (PCI-DSS), and Federal Risk and Authorization Management Program (FedRAMP). Message data protection enables topic owners to define and apply data protection policies that scan messages in real-time for sensitive data to provide detailed audit reports of findings, block message delivery, and de-identify data within a payload via redaction or masking.
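To make the policy idea concrete, here is a sketch of what a masking policy might look like. The policy name, SID, and mask character are illustrative, and the exact statement field names should be checked against the SNS message data protection documentation before use:

```python
import json

# A data protection policy that masks credit card numbers found in
# inbound message payloads, replacing matched characters with "#".
policy = {
    "Name": "mask-pii-policy",
    "Description": "Mask credit card numbers on inbound publishes",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "mask-credit-cards",
            "DataDirection": "Inbound",
            "Principal": ["*"],
            "DataIdentifier": [
                "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"
            ],
            "Operation": {
                "Deidentify": {"MaskConfig": {"MaskWithCharacter": "#"}}
            },
        }
    ],
}

policy_json = json.dumps(policy)
# Attach to a topic (topic_arn is a placeholder) with:
#   boto3.client("sns").put_data_protection_policy(
#       ResourceArn=topic_arn, DataProtectionPolicy=policy_json)
```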
You can now use data tiering for Amazon MemoryDB for Redis as a lower cost way to scale your clusters to up to hundreds of terabytes of capacity. Data tiering provides a new price-performance option for MemoryDB by utilizing lower-cost solid state drives (SSDs) in each cluster node, in addition to storing data in memory. Data tiering is ideal for workloads that access up to 20% of their overall dataset regularly, and for applications that can tolerate additional latency when accessing data on SSD.
When using clusters with data tiering, MemoryDB is designed to automatically and transparently move the least recently used items from memory to locally attached NVMe SSDs when available memory capacity is exhausted. When an item that was moved to SSD is subsequently accessed, MemoryDB moves it back to memory asynchronously before serving the request. Assuming 500-byte String values, you can typically expect an additional 450µs latency for read requests to data stored on SSD compared to read requests to data in memory.
MemoryDB data tiering is available when using Redis version 6.2.6 and above on Graviton2-based R6gd nodes. R6gd nodes have nearly 5x more total capacity (memory + SSD) and can help you achieve over 60% storage cost savings when running at maximum utilization compared to R6g nodes (memory only).
To get started using MemoryDB data tiering, create a new cluster using one of the R6gd node types using the AWS Management Console for MemoryDB, the AWS CLI, or one of the SDKs. Data tiering on R6gd nodes is available in the Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Paris), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon) Regions.
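A minimal sketch of the create-cluster request follows. The cluster name, ACL, and shard count are placeholders; the data-tiering flag and R6gd node type are the parts this launch adds, and their exact parameter names should be verified against the MemoryDB API reference:

```python
# Parameters for a data-tiered MemoryDB cluster. Data tiering requires
# Graviton2-based R6gd nodes and Redis engine version 6.2.6 or above.
cluster_params = {
    "ClusterName": "my-tiered-cluster",      # placeholder
    "NodeType": "db.r6gd.xlarge",            # R6gd node with NVMe SSD
    "ACLName": "open-access",                # placeholder ACL
    "NumShards": 2,
    "EngineVersion": "6.2",
    "DataTiering": True,                     # enable memory + SSD tiering
}
# Create the cluster with:
#   boto3.client("memorydb").create_cluster(**cluster_params)
```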
This week, AWS are excited to announce operations support for SQL Server on Amazon EC2 through AWS Managed Services (AMS) Operations on Demand (OOD). This new offering reduces the undifferentiated heavy lifting by offloading database operations such as patching and backup to highly skilled AMS resources. With this launch, customers can now bring their existing SQL Server licenses (BYOL) to EC2 and take advantage of AMS to operate more efficiently.
AWS Managed Services Operations on Demand gives customers access to a full range of operational capabilities, above and beyond the extensive scope provided by AMS Operations Plans. With features like cluster-aware patching, ransomware backup protection, monitoring, and incident resolution, this new catalog item helps customers improve resilience and security posture of their SQL Server workloads. In addition, customers can now focus on strategic initiatives and drive greater business value, while AMS provides critical operational capabilities needed to support business continuity.
SQL Server on EC2 Operations is available to AMS customers for an additional fee and is available in all regions where AMS is available.
This week, AWS announced the general availability of AWS Copilot version 1.23 with support for AWS App Runner private services. App Runner makes it easier for developers to quickly deploy containerized web applications and APIs to the cloud, at scale, and without having to manage infrastructure. By default, App Runner services are accessible publicly over the internet. Now, with private services you can restrict network access to your internal websites, APIs, and applications to originate from within your Amazon VPC.
With AWS Copilot, you can quickly get started and deploy to Amazon ECS or AWS App Runner with a single command and a Dockerfile. Copilot provides a developer-focused interface and workflows, where users can focus on application architecture by choosing common application and service patterns. Copilot provisions and keeps up to date the necessary AWS infrastructure in your account, using best practices and infrastructure-as-code artifacts. Now, you have the option of toggling the request-driven Copilot services, powered by App Runner, to be private. Simply specify http.private: true in the Copilot service manifest and run the copilot deploy command. Copilot will take care of configuring AWS App Runner services to accept traffic only from within the Amazon VPC provisioned for your Copilot environment.
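As a sketch, a Request-Driven Web Service manifest with the private toggle set might look like the fragment below (the service name is illustrative; only the http.private key comes from this announcement):

```yaml
# manifest.yml for a request-driven Copilot service
name: api
type: Request-Driven Web Service

image:
  build: Dockerfile

http:
  private: true   # restrict access to the environment's VPC
```

After editing the manifest, a `copilot deploy` picks up the change and reconfigures the App Runner service.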
In addition, the new release adds support for Amazon Aurora Serverless v2 and makes it easier to enable Amazon VPC flow logs to monitor the network. For the full list of features in the new release see the release notes.
Amazon SageMaker Autopilot experiments using hyperparameter training are up to 2x faster to generate ML models on datasets greater than 100 MB running 100 or more trials. Amazon SageMaker Autopilot automatically builds, trains, and tunes the best ML models based on your data while allowing you to maintain full control and visibility.
SageMaker Autopilot offers two training modes - Hyperparameter optimization (HPO) and Ensemble. In HPO mode, SageMaker Autopilot selects the algorithms that are most relevant to your dataset and the best range of hyperparameters to tune your models using Bayesian optimization. However, for larger datasets (> 100 MB), the tuning time with Bayesian optimization can be longer. Starting today, SageMaker Autopilot will use a new multi-fidelity hyperparameter optimization (HPO) strategy that employs the state-of-the-art Hyperband tuning algorithm on datasets greater than 100 MB with 100 or more trials, while continuing to leverage the Bayesian optimization strategy for datasets smaller than 100 MB. With the multi-fidelity optimization strategy, trials that are performing poorly against the selected objective metric are stopped early, freeing up resources for well-performing trials. This in turn reduces the tuning time for HPO-mode SageMaker Autopilot experiments on large datasets.
With this release, model training and tuning time is up to 2x faster than before, enabling customers to deliver the best-performing ML model sooner. To evaluate the performance improvements, we used multiple OpenML benchmark datasets with sizes ranging from 100 MB to 10 GB. Based on our results, moderately large datasets (100 MB - 1 GB) saw a 41% runtime improvement (from an average of 345 minutes to 203 minutes), and very large datasets (> 1 GB) saw a 48% improvement (from an average of 2,010 minutes to 1,053 minutes). With this enhancement, you can run your SageMaker Autopilot experiments faster without making any changes to existing job configurations.
Amazon SageMaker Autopilot now provides the ability to perform feature selection and change auto inferred data types while creating an AutoML experiment, enabling you with the flexibility to choose which features to include while training your machine learning (ML) models. SageMaker Autopilot automatically builds, trains and tunes the best ML models based on your data, while allowing you to maintain full control and visibility.
The features you choose to include in your data have a significant effect on the model’s results and predictions. As part of its automated evaluation criteria, SageMaker Autopilot includes all features in the uploaded dataset. It now offers controls that let users who understand their data make their own feature selections. Starting today, when creating an Amazon SageMaker Autopilot experiment, you can not only select or deselect features from the training dataset but also change the data types that were automatically inferred by SageMaker Autopilot. This release also includes the ability to preview the uploaded input training dataset.
To get started, update Amazon SageMaker Studio to the latest release and launch SageMaker Autopilot either from SageMaker Studio Launcher or APIs.
With this week’s launch, you can automate the deployment and configuration of Amazon FSx for NetApp ONTAP volumes for running SAP HANA data, log, shared, and kernel file systems. After you answer a few questions about your SAP landscape, AWS Launch Wizard will recommend an Amazon EC2 instance type and minimum recommended capacity for Amazon FSx for NetApp ONTAP volumes based on AWS and SAP best practices. You can choose to use this capacity or change it based on your unique requirements.
This launch also allows you to take advantage of SAP HANA Host Auto-Failover, a built-in, fully automated high availability solution for recovering from the failure of a SAP HANA host in scale-out deployments.
AWS Launch Wizard offers a guided way of sizing, configuring, and deploying AWS resources for third party applications, such as Microsoft SQL Server Always On and HANA based SAP systems, without the need to manually identify and provision individual AWS resources. Amazon FSx for NetApp ONTAP provides fully managed shared storage in the AWS Cloud with the popular data access and management capabilities of ONTAP.
Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. We continuously improve the underlying machine learning models based on customer feedback to provide even better accuracy. Today, we are pleased to announce quality enhancements to our text and forms extraction feature available via the AnalyzeDocument API.
Amazon Textract now provides enhanced key-value pair extraction accuracy, specifically for single-character boxed forms commonly found in documents such as tax and immigration forms. These documents have traditionally been challenging to extract information from due to the complexity of how words are captured in individual boxes. Textract is now able to utilize its knowledge of these single-character boxed forms to provide higher accuracy in key-value pair extraction.
Additionally, AWS are pleased to announce support for E13B fonts commonly found in deposit checks/cheques, accuracy improvements for detecting International Bank Account Numbers (IBANs) in banking documents, and better extraction of long words (e.g., email addresses) via the AnalyzeDocument API. Customers across industries like insurance, healthcare, and banking utilize these documents in their business processes and will automatically see the benefits of this update when they use Textract’s AnalyzeDocument API.
This update will be available in US East (Ohio, N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney), Canada (Central), Europe (Frankfurt, Ireland, London, Paris), and AWS GovCloud (US-East, US-West) Regions starting October 31st.
Starting this week, the AWS WAF geographic match statement adds labels to each request, to indicate ISO 3166 country and region codes. Customers have asked for more control of geographic regions within a country, such as a specific state in the United States. With the updated geographic match rule statements, customers can control access at the region level.
The geographic match rule statement now automatically annotates a request from Texas, USA with the label awswaf:clientip:geo:region:US-TX, and a request from Queensland, Australia with the label awswaf:clientip:geo:region:AU-QLD. Customers can add label matching rules to capture region labels and block specific regions, without blocking the entire country.
Getting started with the updated geographic match rule statements is easy. The geographic match rule adds geographic region and country labels to every request that it evaluates, enabling customers to write label match statements according to the regions they wish to block or allow. Geographic match rule statements can be combined with other AWS WAF rules to build sophisticated filtering policies.
AWS customers who want to block certain geographies while still allowing certain developer IP addresses from those locations can combine geo and IP match conditions to allow only authorized users. Other customers who want to prioritize users in their primary geography to optimize resource consumption can combine geo match conditions with AWS WAF rate-based Rules. These customers can set a higher rate limit for end users in preferred countries or regions while setting a lower rate limit for others.
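A label match rule keyed on one of the new region labels can be sketched as below. The rule name, priority, and metric name are placeholders; the `awswaf:clientip:geo:region:US-TX` label is the one described above, and the rule assumes a geographic match statement elsewhere in the web ACL has already added the labels:

```python
def region_block_rule(region_label: str, priority: int = 0) -> dict:
    """Build an AWS WAF rule that blocks requests carrying a region label."""
    region_code = region_label.rsplit(":", 1)[-1]  # e.g. "US-TX"
    return {
        "Name": f"block-{region_code}",
        "Priority": priority,
        "Statement": {
            # Match the label added by the geographic match statement.
            "LabelMatchStatement": {"Scope": "LABEL", "Key": region_label}
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": f"block-{region_code}",
        },
    }

rule = region_block_rule("awswaf:clientip:geo:region:US-TX")
```

The rule dict slots into the Rules list of an UpdateWebACL request alongside the geographic match rule that emits the labels.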
There is no additional cost for using this feature. It is available in all AWS Regions where AWS WAF is available and for each supported service, including Amazon CloudFront, Application Load Balancer, Amazon API Gateway, AWS AppSync, and Amazon Cognito.
AWS Outposts rack can now be shipped and installed at your data center and on-premises locations in Bangladesh.
AWS Outposts rack, a part of the AWS Outposts family, is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any data center or co-location space for a truly consistent hybrid experience. Outposts rack is ideal for workloads that require low latency access to on-premises systems, local data processing, and migration of applications with local system interdependencies. Outposts rack can also help meet data residency requirements.
With the availability of Outposts rack in Bangladesh, you can use AWS services to run your workloads and data in country in your on-premises facilities and connect to your nearest AWS Region for management and operations.
AWS Migration Hub Orchestrator adds support for Microsoft SQL Server to help you simplify and accelerate SQL Server database migration to AWS. You can now easily create a SQL Server migration workflow, automate the manual tasks involved in the migration, and track the migration progress in the same console. With this capability, you can reduce SQL Server migration time and effort to avoid schedule and cost overruns.
AWS Migration Hub Orchestrator helps you optimize and scale your application migrations from start to finish with workflow templates. Now, Migration Hub Orchestrator allows you to rehost SQL Server to Amazon Elastic Compute Cloud (EC2) and replatform SQL Server to Relational Database Service (RDS) using native backup and restore with AWS prescribed workflow templates. You can choose a SQL Server workflow template based on your needs, create a migration workflow, backup on-premises SQL Server databases to Amazon S3, and restore to EC2 or RDS.
You can also customize the migration workflow by adding, reordering, and removing steps per specific needs like inserting an additional approval step for cutover. During migration, Migration Hub Orchestrator automates the manual tasks using various tools and scripts, manages dependencies across different steps, and orchestrates the migration workflow in one place.
Amazon S3 on Outposts now supports additional S3 Lifecycle rules to optimize capacity management. You can now optimize your storage capacity by expiring objects as they age or are replaced with newer versions. You can use S3 Lifecycle configurations for a whole bucket on your Outpost, or for a subset of the objects in the bucket by filtering by prefixes, object tags, or object sizes.
Prior to this release, you could use Lifecycle rules to initiate object deletion based on age or date. Now, you have additional fine-grained controls to expire current versions of objects and permanently delete non-current versions of objects. This allows you to more easily optimize your S3 on Outposts storage capacity.
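A lifecycle configuration combining the existing age-based expiration with the new noncurrent-version expiration might look like this sketch. The rule ID, prefix, and day counts are placeholders; on Outposts, lifecycle configurations are applied through the S3 Control PutBucketLifecycleConfiguration API:

```python
# Expire current objects under logs/ after 30 days, and permanently
# delete noncurrent versions 7 days after they are replaced.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Expiration": {"Days": 30},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 7},
        }
    ]
}
# Apply (account ID and bucket ARN are placeholders) with:
#   boto3.client("s3control").put_bucket_lifecycle_configuration(
#       AccountId=account_id, Bucket=outposts_bucket_arn,
#       LifecycleConfiguration=lifecycle_config)
```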
Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts rack environment to help you meet your low latency, local data processing, and data residency needs. Using the S3 APIs and features, S3 on Outposts makes it easier to store, secure, tag, retrieve, report on, and control access to the data on your Outposts. Outposts rack, as part of the AWS Outposts Family, is a fully managed service that extends AWS infrastructure, services, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience.
Amazon Connect Customer Profiles now enables agents to see additional customer information stored in Customer Profiles in the Connect Agent Application, enabling them to further personalize customer interactions and resolve their problems. For example, admins at retail companies can now add reward points information to profiles, and enable agents to inform customers of points redeemable towards their orders. As another example, admins at a financial services company can add credit limit information enabling agents to quickly help customers calling regarding a declined credit card transaction.
With Amazon Connect Customer Profiles companies can deliver faster and more personalized customer service by providing access to relevant customer information for agents and automated experiences. Companies can bring customer data from multiple SaaS applications and databases into a single customer profile, and pay only for what they use based on the number of customer profiles.
Amazon Connect Customer Profiles is available in US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), and Europe (London).
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. This week, AWS were excited to announce the general availability of Laura, a new female Dutch neural Text-to-speech (NTTS) voice.
Amazon Polly now offers three voices supporting Dutch: Ruben, Lotte, and the newly released Laura. Ruben and Lotte are standard voices, whereas Laura is the first neural voice. In the Netherlands, English is spoken by the vast majority of the population, and Amazon Polly’s Laura speaks both Dutch and English with native fluency.
Amazon Textract is a machine learning service that automatically extracts printed text, handwriting, and data from any document or image. AnalyzeExpense is a specialized API within Textract that understands the context of invoices and receipts and automatically extracts relevant data such as vendor name and invoice number.
This week, AWS were pleased to announce major enhancements to AnalyzeExpense that include support for new fields and higher accuracy for existing fields.
The latest AnalyzeExpense API provides support for 40+ normalized fields. The newly supported normalized fields include both summary fields, such as Vendor Address, and line item fields, such as Product Code. With this new capability, customers can directly extract their desired information and save the time of writing and maintaining complex post-processing code. Besides support for new fields, we have further improved the accuracy for fields such as Vendor Name and Total that were already supported in the previous version.
Along with normalized key-value pairs and regular key value pairs, AnalyzeExpense now provides the entire OCR output in the API response. Customers can obtain both key-value pairs and the raw OCR extract through a single API request.
This update will be available in US East (Ohio, N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney), Canada (Central), Europe (Frankfurt, Ireland, London, Paris), and AWS GovCloud (US-East, US-West) starting October 31st.
Amazon RDS Custom for SQL Server is a managed database service that allows administrative access to the operating system. Starting today, RDS Custom for SQL Server provides a simple way to scale your database disk storage as needed. With storage scaling, RDS Custom for SQL Server simplifies the burdensome process of storage configuration as your database size grows.
By using Amazon RDS Custom for SQL Server’s storage scaling feature, you can avoid the complexity of manually configuring larger disks for your database partition. In addition, it can result in cost savings as you can minimize overprovisioning of storage and scale when you need to.
Storage scaling is typically an online operation, which can allow your applications to keep running without interruption. By simplifying the database disk expansion process, RDS Custom for SQL Server further reduces the undifferentiated heavy lifting of database administration, letting you focus on other high value work.
To scale your storage, you can use the AWS Management Console or AWS CLI. Read the AWS Documentation on modifying storage for RDS Custom for details. To get started with RDS Custom for SQL Server, read the setup guide in AWS Documentation, and deploy using the AWS CLI or AWS Management Console today. Amazon RDS Custom for SQL Server is generally available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Ireland, London, Stockholm) and Asia Pacific (Mumbai, Singapore, Sydney, Tokyo).
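As a sketch, the storage scaling operation boils down to a modify-db-instance call with a larger allocated storage value. The instance identifier and target size are placeholders:

```python
# Scale the storage of an RDS Custom for SQL Server instance.
scale_params = {
    "DBInstanceIdentifier": "my-custom-sqlserver",  # placeholder
    "AllocatedStorage": 1000,                       # target size in GiB
    "ApplyImmediately": True,
}
# Apply with:
#   boto3.client("rds").modify_db_instance(**scale_params)
```

Because storage scaling is typically an online operation, the application can usually keep running while the change is applied.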
Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. Analyze ID is a specialized API within Textract that extracts data from identity documents, such as U.S. Driver’s Licenses and U.S. Passports. This week, AWS are pleased to announce updates to the Analyze ID extraction API.
Amazon Textract now provides data extraction for the machine readable zone, or MRZ code, on U.S. Passports. This is in addition to the other fields you can extract on U.S. passports today, such as document number, date of birth, and date of issue, for a total of 10 fields on U.S. passports. You can continue to extract 19 fields from U.S. Driver Licenses including inferred fields, such as first name, last name, and address. Besides support for the new MRZ code field, we have further improved the accuracy for fields such as expiration date and place of birth that were already supported in the previous version.
Along with normalized key-value pairs, Analyze ID now provides the entire OCR output in the API response. Customers can obtain both key-value pairs and the raw OCR extract through a single API request. This update will be available in US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), AWS GovCloud (US-East), AWS GovCloud (US-West), Canada (Central), Europe (London), Europe (Paris), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), and Asia Pacific (Mumbai) starting November 1st.
AWS App Runner now supports private services which enables access to App Runner services from within an Amazon Virtual Private Cloud (VPC). App Runner makes it easier for developers to quickly deploy containerized web applications and APIs to the cloud, at scale, and without having to manage infrastructure. By default, App Runner services are accessible publicly over the internet. Now, with private services you can restrict network access to your internal websites, APIs, and applications to originate from within your VPC.
Private services in App Runner leverage AWS PrivateLink Interface VPC Endpoints, which provide highly available and scalable networking technology. You can specify which Amazon VPC you would like your App Runner service to be accessible in by passing an Interface VPC Endpoint. You can also add security groups, which act as a virtual firewall, to your Interface VPC Endpoints to further restrict network traffic. This also enables you to monitor your network traffic via VPC Flow Logs.
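The two pieces involved can be sketched as request payloads: an ingress configuration that turns off public access, and a VPC Ingress Connection that ties the service to an Interface VPC Endpoint. Every identifier below is a placeholder, and the exact shapes should be checked against the App Runner API reference:

```python
# 1) Make the service private by disabling public ingress.
network_config = {
    "IngressConfiguration": {"IsPubliclyAccessible": False}
}

# 2) Connect the private service to a VPC via an interface endpoint.
vpc_ingress_params = {
    "VpcIngressConnectionName": "my-private-ingress",
    "ServiceArn": "arn:aws:apprunner:us-east-1:111122223333:service/example",
    "IngressVpcConfiguration": {
        "VpcId": "vpc-0abc1234",
        "VpcEndpointId": "vpce-0abc1234",
    },
}
# Apply with:
#   client = boto3.client("apprunner")
#   client.update_service(ServiceArn=service_arn,
#                         NetworkConfiguration=network_config)
#   client.create_vpc_ingress_connection(**vpc_ingress_params)
```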
You can create both the Interface VPC Endpoint and the App Runner service in a single workflow using the App Runner console. By default, you get a domain name to access your App Runner service that can be customized based on your needs. To learn more about App Runner private services, see the Networking section in the developer guide and the feature deep dive blog post. To learn more about App Runner, see the AWS App Runner Developer Guide.
Apache Hudi 0.11.1 on Amazon EMR 6.8 includes support for Spark 3.3.0. It adds Multi-Modal Index support and Data Skipping with the Metadata Table, which allows adding bloom filter and column stats indexes to tables and can significantly improve query performance. It also adds an Async Indexer service, which allows users to create different kinds of indices (e.g., files, bloom filters, and column stats) in the metadata table without blocking ingestion. The release includes Spark SQL improvements, adding support for updating or deleting records in Hudi tables using non-primary-key fields and for time travel queries via the timestamp-as-of syntax, as well as Flink integration improvements with support for both Flink 1.13.x and 1.14.x and for complex data types such as Map and Array. In addition, Hudi 0.11.1 includes bug fixes over Hudi 0.11.0 (available in Amazon EMR release 6.7). For more details, refer to the OSS Hudi release docs.
Apache Iceberg 0.14.0 on Amazon EMR 6.8 includes support for Spark 3.3.0. It adds merge-on-read support for MERGE and UPDATE statements, and adds support for rewriting partitions using Z-order, which re-organizes partitions to be efficient with query predicates on multiple columns and keeps similar data together. The release also includes several performance improvements for scan planning in Spark queries and adds support for row group skipping using Parquet bloom filters. For more details, refer to the OSS Iceberg release docs.
This week, AWS announced Elastic IP transfer, a new Amazon VPC feature that allows you to transfer your Elastic IP addresses from one AWS Account to another, making it easier to move Elastic IP addresses during AWS Account restructuring.
Prior to this, when moving applications to a new AWS Account, you had to allocate new Elastic IP addresses for your applications, then allowlist those new addresses in your connectivity resources, such as routers and firewalls, which slowed down migrations.
With Elastic IP transfer, you are now able to re-use the same Elastic IP addresses for your applications even after you move them to a new AWS Account, eliminating the need to update allowlists in your connectivity resources and accelerating your migrations.
Additionally, if you are using Amazon VPC IP Address Manager (IPAM), you can now track your Elastic IP transfers using IPAM.
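The transfer is a two-step handshake: the source account enables the transfer, and the target account accepts it. The allocation ID, account ID, and address below are placeholders, and the API names should be checked against the EC2 API reference:

```python
# Step 1 (source account): offer the Elastic IP to the target account.
enable_params = {
    "AllocationId": "eipalloc-0abc1234",   # placeholder
    "TransferAccountId": "111122223333",   # placeholder target account
}
# Step 2 (target account): accept the transfer by address.
accept_params = {"Address": "203.0.113.7"}  # placeholder EIP

# Apply with:
#   boto3.client("ec2").enable_address_transfer(**enable_params)   # source
#   boto3.client("ec2").accept_address_transfer(**accept_params)   # target
```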
Starting this week, you can use certificate-based authentication with Amazon AppStream 2.0 fleets that are joined to Active Directory to remove the logon prompt for the domain password.
By using certificate-based authentication, you can rely on the security and logon experience features of your SAML 2.0 identity provider, such as passwordless authentication, to access AppStream 2.0 resources. Certificate-based authentication with AppStream 2.0 enables a single sign-on logon experience to access domain-joined desktop and application streaming sessions without separate password prompts for Active Directory.
AppStream 2.0 certificate-based authentication integrates with AWS Private Certificate Authority (AWS Private CA) to automatically issue short-lived certificates when users sign in to their sessions. AWS Private CA is a highly available, pay-as-you-go private CA service without the upfront investment and ongoing maintenance costs of operating your own public key infrastructure (PKI) in the cloud.
When you configure your private CA as a third-party root CA in Active Directory or as a subordinate to your Active Directory Certificate Services enterprise CA, AppStream 2.0 with AWS Private CA can enable rapid deployment of end user certificates to seamlessly authenticate users.
There are no additional AppStream 2.0 charges for using certificate-based authentication. AWS Private CA now offers separate pricing for short-lived certificate use cases, which can help lower the monthly cost of the CA and the price per certificate. See AWS Private CA Pricing for more information. Certificate-based authentication is available in all AWS Regions where AppStream 2.0 and AWS Private CA are offered. Learn more about how to get started with AppStream 2.0 certificate-based authentication by visiting the AppStream 2.0 Administration Guide.
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at massive scale. Data Viewer is a new capability for Amazon Kinesis Data Streams that allows viewing data records directly from AWS Management Console. As a result, you can easily inspect the data records without programming a dedicated consumer app just to view the data, quickly check the data structure of an unfamiliar stream, or query specific records for QA and troubleshooting.
To access the Data Viewer, log in to the AWS Management Console for Amazon Kinesis Data Streams and navigate to the detail page of your data stream. In the Data Viewer tab, you can select a shard and then specify the starting position of the records that you want to view.
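For comparison, this is roughly the consumer code the Data Viewer spares you from writing: a GetShardIterator call pinned to a shard and starting position, followed by GetRecords. Stream and shard names are placeholders:

```python
# Parameters for reading one shard from a chosen starting position.
iterator_params = {
    "StreamName": "my-stream",                 # placeholder
    "ShardId": "shardId-000000000000",         # placeholder
    "ShardIteratorType": "TRIM_HORIZON",       # or LATEST, AT_TIMESTAMP, ...
}
# Fetch a batch of records with:
#   kinesis = boto3.client("kinesis")
#   it = kinesis.get_shard_iterator(**iterator_params)["ShardIterator"]
#   records = kinesis.get_records(ShardIterator=it, Limit=25)["Records"]
```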
AWS IoT Core announces Location Action - a capability to route latitude and longitude data from IoT devices to Amazon Location Service, making it easier for software developers to add geospatial data and location functionality to IoT applications. With this launch, you can route live location data of an IoT device to Amazon Location Service for tracking and geo-fencing use cases, such as tracking the live location of a device or receiving alerts when a device crosses a geo-fence.
AWS IoT Core is a fully managed service that allows you to connect billions of IoT devices and route trillions of messages to AWS services without managing cloud infrastructure. Rules Engine is a feature in AWS IoT Core that allows you to filter and process IoT device data and route the data to 15+ AWS and third party services. You can now use Rules Engine to filter and process latitude-longitude data from messages generated by IoT devices.
You can then use the Location Action feature in Rules Engine to route the latitude-longitude data to Amazon Location Service, thus gaining the capability to track, record, and visualize IoT devices through Amazon Location Service.
To get started, connect your IoT devices to AWS IoT Core, define Rules to filter latitude-longitude data from your messages, and create a Location Action using Rules Engine console, CLI, or SDKs to monitor location data for your device.
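A minimal sketch of such a rule created through boto3's `create_topic_rule` might look like the following. The topic filter, tracker name, and substitution templates are illustrative, and the exact shape of the `location` action fields is an assumption; check the AWS IoT Core documentation for the definitive schema:

```python
def location_rule_payload(role_arn, tracker_name):
    """Topic rule payload that forwards device latitude/longitude to an
    Amazon Location Service tracker. The 'location' action field names
    below are an assumption, not taken from the announcement."""
    return {
        "sql": "SELECT * FROM 'devices/+/location'",  # hypothetical topic
        "actions": [
            {
                "location": {
                    "roleArn": role_arn,
                    "trackerName": tracker_name,
                    "deviceId": "${topic(2)}",   # device id from the topic
                    "latitude": "${latitude}",   # fields in the message
                    "longitude": "${longitude}",
                }
            }
        ],
    }

# With AWS credentials configured:
# import boto3
# iot = boto3.client("iot")
# iot.create_topic_rule(
#     ruleName="RouteToLocationService",
#     topicRulePayload=location_rule_payload(
#         "arn:aws:iam::123456789012:role/iot-location-role", "my-tracker"))
```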
AWS Launch Wizard now supports placing Microsoft SQL Server tempdb in instance store volumes during SQL Server deployment on Amazon EC2. With this launch, you can save time and effort by configuring tempdb with one click during deployment, without needing to configure it manually afterwards.
AWS Launch Wizard enables you to easily size, configure, and deploy SQL Server single node and high availability deployments on EC2. Customers usually choose to place SQL Server tempdb in non-volatile memory express (NVMe) solid state drives (SSD) instance store volumes to optimize performance.
You can achieve this by selecting the tempdb check box in the Launch Wizard console and then specifying an EC2 instance type with NVMe SSDs based on your needs. In addition, you can enable one-click monitoring of SQL Server with CloudWatch Application Insights to simplify monitoring. You can also choose to save the CloudFormation templates and associated configuration scripts to your Amazon S3 bucket for repeated deployments.
AWS are excited to announce that Amazon EMR release 6.8 includes Apache Flink 1.15.1. This feature is available on EMR on EC2.
Apache Flink is an open source framework and engine for processing data streams. Apache Flink 1.15.1 on EMR 6.8 includes 62 bug fixes, vulnerability fixes, and minor improvements over Flink 1.15.0. Key features include:
For more details, refer to the OSS Flink release docs.
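As a hedged sketch, an EMR 6.8 cluster with Flink installed can be requested through boto3's `run_job_flow`; the instance types, counts, log bucket, and role names below are placeholders, not values from the release notes:

```python
def flink_cluster_config(log_uri):
    """Cluster request for EMR release 6.8.0 with Flink installed.
    Instance types/counts and role names are illustrative placeholders."""
    return {
        "Name": "flink-on-emr-68",
        "ReleaseLabel": "emr-6.8.0",       # ships Apache Flink 1.15.1
        "Applications": [{"Name": "Flink"}],
        "LogUri": log_uri,
        "Instances": {
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
                 "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

# With AWS credentials configured:
# import boto3
# emr = boto3.client("emr")
# emr.run_job_flow(**flink_cluster_config("s3://my-bucket/emr-logs/"))
```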
Anthos Clusters on AWS
You can now launch clusters with the following Kubernetes versions:
Anthos on AWS node pools now include the iptables utility, resolving an issue with the installation of Anthos Service Mesh.
On clusters at version 1.24.3-gke.2200, the IMDS emulator fails to start. This issue is fixed for clusters at version 1.24.5-gke.200 and later.
Anthos Clusters on Bare Metal
Cluster lifecycle improvements in 1.13 and later
Preview: You can use the Google Cloud console to create user clusters, delete user clusters, and to add and remove node pools from a user cluster. To explore the new feature, try out the tutorial Create an Anthos on bare metal user cluster on Compute Engine VMs using the console.
Anthos Clusters on VMware
Anthos clusters on VMware 1.13.1-gke.35 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.1-gke.35 runs on Kubernetes 1.24.2-gke.1900.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
Anthos Service Mesh
Anthos Service Mesh 1.15.3-asm.1 includes the features of Istio 1.15.3 subject to the list of Anthos Service Mesh supported features. If you've installed in-cluster 1.15.2, please update to 1.15.3 right away. Google will automatically upgrade customers running managed Anthos Service Mesh.
VPC-SC for managed Anthos Service Mesh is generally available (GA) in the rapid channel.
Version 1.15 is now available for managed Anthos Service Mesh and is rolling out to the Rapid Release Channel.
Upon rollout completion, the managed Anthos Service Mesh channels will contain the following versions:
Note that regions will have mixed availability during the 1.15 rollout. Additionally, stable and regular channel promotion occurs before 1.15 rolls out to rapid channel.
See Select a managed Anthos Service Mesh release channel for more information.
End-user authentication is being made available to managed Anthos Service Mesh in the rapid release channel. See the preceding release note for rollout timelines.
On November 2, 2022 GCP released an updated version of the Apigee hybrid software, v1.7.5.
For information on upgrading, see Upgrading Apigee hybrid to version 1.7.
App Engine standard environment Go / Java / Node.js / PHP / Python / Ruby
Build environment variables support is now generally available.
Included with this release are the following new key management functions:
The query execution graph is now in preview. You can use the query execution graph to diagnose query performance issues, and to receive query performance insights.
Enhancements to the Detection Engine API
The StreamDetectionAlerts method in the Detection Engine API has been enhanced to return detections generated by both user-created rules and Chronicle Curated Detections. For more information about this method, see StreamDetectionAlerts.
The Ingestion API udmevents and createentities methods now accept both uppercase and lowercase characters in the following fields:
<_Noun_>.mac: defined when calling the udmevents method, where Noun is either principal, src, target, observer, intermediary, or about.
entity.asset.mac: defined when calling the createentities method.
These fields are defined in the UDM record in the request body when calling the method. For more information about these methods, see Chronicle Ingestion API documentation. For more information about UDM fields, see the Unified Data Model field list.
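For illustration, a udmevents request body carrying a MAC address in either case might look like the following. The event shape is a simplified assumption and the customer ID is a placeholder; consult the Chronicle Ingestion API documentation for the full UDM schema:

```python
import json

def udm_event(device_mac, event_type="NETWORK_CONNECTION"):
    """Minimal UDM event with a principal.mac value. Since the update,
    both uppercase and lowercase MAC forms are accepted."""
    return {
        "metadata": {
            "event_type": event_type,
            "event_timestamp": "2022-11-04T00:00:00Z",
        },
        "principal": {"mac": [device_mac]},
    }

request_body = {
    "customer_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "events": [
        udm_event("00:1A:2B:3C:4D:5E"),  # uppercase, now accepted
        udm_event("00:1a:2b:3c:4d:5e"),  # lowercase, also accepted
    ],
}
print(json.dumps(request_body, indent=2))
```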
Chronicle Feed Management added a hostname field to the configuration workflow of certain log types. The hostname field enables you to configure the API endpoint for the feed. If you do not define a value for this field, the following default values are used:
Chronicle Feed Management API was also updated to support the hostname field for these log types.
The Cloud Composer 1.19.13 and 2.0.30 release started rolling out on October 31, 2022. Get ready for the upcoming changes and features as the new release rolls out to all regions. The rollout is still in progress, so the listed changes and features might not be available in some regions yet.
The apache-airflow-providers-google package in images with Airflow 2.1.4 and 2.2.5 was upgraded to 2022.10.17+composer. Changes compared to the previous version include the GKEHook and GKE unit tests from #22852, without pulling in later changes.
The google-api-core package was downgraded to 2.8.1. This change fixes integration with Cloud Spanner.
Cloud Composer 1.19.13 and 2.0.30 images are available:
Cloud Load Balancing
Cloud Load Balancing introduces the internal regional TCP proxy load balancer. This is an Envoy proxy-based regional layer 4 load balancer that enables you to run and scale your TCP service traffic behind an internal regional IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
The internal regional TCP proxy load balancer distributes TCP traffic to backends hosted on Google Cloud, on-premises, or other cloud environments.
For details, see the following:
This capability is in General Availability.
You can now collect Aerospike logs from the Ops Agent, starting with version 2.22.0. For more information, see Monitoring third-party applications: Aerospike.
You can now add table widgets to custom dashboards that let you limit the number of table rows, persist specific columns, display only the rows with the highest or lowest values, and display a visual indicator of a value as compared to the range of possible values. For more information, see Display data in tabular form on a dashboard.
Cloud Secure Web Gateway
Support for the NHibernate ORM is now generally available, enabling you to use Cloud Spanner as a backend database for the NHibernate framework. For more information, see NHibernate Dialect for Cloud Spanner.
Dataproc Serverless for Spark now allows the customization of driver and executor memory using the following properties:
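Assuming the customization uses the standard Spark property names (spark.driver.memory and spark.executor.memory), a batch request might set them as follows; the bucket, file path, and project details are placeholders:

```python
def batch_request(main_file_uri, driver_mem="8g", executor_mem="16g"):
    """Dataproc Serverless batch with customized driver/executor memory.
    Property names are assumed to be the standard Spark ones."""
    return {
        "pyspark_batch": {"main_python_file_uri": main_file_uri},
        "runtime_config": {
            "properties": {
                "spark.driver.memory": driver_mem,
                "spark.executor.memory": executor_mem,
            }
        },
    }

# With Google Cloud credentials configured:
# from google.cloud import dataproc_v1
# client = dataproc_v1.BatchControllerClient(client_options={
#     "api_endpoint": "us-central1-dataproc.googleapis.com:443"})
# client.create_batch(
#     parent="projects/my-project/locations/us-central1",
#     batch=batch_request("gs://my-bucket/job.py"))
```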
Deep Learning Containers
Fixed a bug where Jupyter widgets provided through ipywidgets were causing errors and not displaying.
Regular package updates.
Deep Learning VM Images
Fixed a bug where Jupyter widgets provided through ipywidgets were causing errors and not displaying.
Updated TPU versions for TensorFlow 2.8, 2.9, and 2.10 Deep Learning VMs.
Improved error messages for debugging custom container Deep Learning VMs that were instantiated with a GPU but without installing NVIDIA drivers.
Regular package updates.
A new Release Candidate (RC) version of the Document OCR Processor, pretrained-ocr-v1.1-2022-09-12, is available in the US and EU. This RC can detect document defects, which are reported in the image_quality_scores field on the Page object in the returned JSON. This additional feature adds latency comparable to that of OCR processing.
You can now easily identify clusters that use certificates incompatible with Kubernetes version 1.23. Kubernetes 1.23 deprecation insights are now available in Preview for clusters of at least version 1.22.6-gke.1000.
BigQuery subscriptions now support the Avro logical types timestamp-micros, date, and time-micros. For more information about schema compatibility between a Pub/Sub topic and a BigQuery table, see Schema compatibility.
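To illustrate, an Avro schema using these logical types (which map to BigQuery TIMESTAMP, DATE, and TIME columns respectively; the record and field names here are made up) could be declared as:

```python
import json

# Hypothetical Avro schema exercising the newly supported logical types.
avro_schema = {
    "type": "record",
    "name": "Event",  # illustrative record name
    "fields": [
        # timestamp-micros -> BigQuery TIMESTAMP
        {"name": "created_at",
         "type": {"type": "long", "logicalType": "timestamp-micros"}},
        # date -> BigQuery DATE
        {"name": "event_date",
         "type": {"type": "int", "logicalType": "date"}},
        # time-micros -> BigQuery TIME
        {"name": "event_time",
         "type": {"type": "long", "logicalType": "time-micros"}},
    ],
}
print(json.dumps(avro_schema, indent=2))
```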
The feature for listing all tags that are attached to or inherited by your resources has entered general availability. For more information, see Creating and managing tags.
You can now use the Cloud Console UI to create and manage tags. For more information, see Creating and managing tags.
Private Service Connect supports internal regional TCP proxy load balancers as a service attachment target in General Availability. This lets you create hybrid TCP/UDP services where clients in a VPC network can connect to an on-premises service by going through Private Service Connect and a TCP proxy with hybrid NEGs to reach a hybrid endpoint.
VPC Service Controls
Microsoft Azure Releases And Updates
Create confidential VMs using Ephemeral OS disks for your stateless workloads.
Azure Maps Creator enables you to create Indoor Maps with GeoJSON.
Azure Automation now supports runbooks in latest Runtime versions - PowerShell 7.2 and Python 3.10 in public preview.
All Azure Windows VMs provisioned in Azure US Government Cloud after February 1, 2023, will be activated via azkms.core.usgovcloudapi.net, which points to one new KMS IP address, 188.8.131.52.
Logic Apps Standard support for Azure Functions runtime version 4.x is now generally available.
Have you tried Hava's automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams remain interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):