Here's the weekly cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday, May 5th 2023.
Three weeks ago we released Architectural Monitoring Alerts, and we're pleased to announce it is now GA. This new capability brings a whole new level to Hava, giving you the ability to see the changes in your cloud environments on diff diagrams delivered directly to your inbox.
You can add your security team into the loop so they get to see ALL the changes no matter which team or client is making changes.
All the latest Hava news can be found on our LinkedIn Newsletter.
Of course we'd love to keep in touch at the other usual places. Come and say hello on:
AWS Updates and Releases
AWS Service Management Connector now supports provisioning with Terraform
Starting this week, AWS customers can use AWS Service Catalog to enable self-service provisioning of Terraform configurations in the AWS Service Management Connector for ServiceNow. These Terraform products will render as ServiceNow Service Catalog items that can leverage approvals and business workflows within ServiceNow. Customers can now use AWS Service Catalog as the single tool to organize, govern, and distribute their Terraform configurations within AWS at scale.
In addition to supporting Terraform open source through AWS Service Catalog, this launch also enables discovery of Amazon WorkSpaces, Amazon ECS, Amazon EKS, and Amazon EFS resources in the ServiceNow CMDB (Configuration Management Database) through AWS Config integration. The launch also introduces a dashboard that gives you quick access to reports and charts for the AWS Service Catalog, AWS Config, and AWS Security Hub integrations.
The Connector also includes existing integrations with AWS Systems Manager Incident Manager, AWS Health, AWS Support, AWS Systems Manager Automation, and AWS Systems Manager Change Manager. These AWS Cloud Operations integrations help simplify cloud provisioning, operations, and resource management, as well as streamline Service Management governance and oversight over AWS services.
Amazon CodeWhisperer now available as extension in JupyterLab and Amazon SageMaker Studio
This week, AWS announced that data scientists can use CodeWhisperer at no additional charge to generate real-time code suggestions for Python notebooks in JupyterLab and Amazon SageMaker Studio. With CodeWhisperer, you can write a comment in natural language that outlines a specific coding task in English, and CodeWhisperer will recommend one or more code snippets directly in the notebook that can accomplish the task.
With the CodeWhisperer extension, SageMaker Studio and JupyterLab users will be able to quickly develop code in their Python notebook to accelerate data analysis.
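To make the workflow concrete, here is an illustrative sketch of the comment-to-code flow in a notebook cell. The comment is what you would type; the function below is the kind of suggestion CodeWhisperer might produce (this is a hand-written illustration, not actual CodeWhisperer output).

```python
# You write the natural-language comment; CodeWhisperer proposes an
# implementation inline, which you accept or refine:

# load a CSV file and return the mean of a numeric column
def column_mean(path, column):
    import csv
    with open(path, newline="") as f:
        values = [float(row[column]) for row in csv.DictReader(f)]
    return sum(values) / len(values)
```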
Amazon Kinesis Data Firehose adds support for document ID that is auto-generated by Amazon OpenSearch Service
Amazon Kinesis Data Firehose customers can now send data to Amazon OpenSearch Service using OpenSearch Service auto-generated document ID option. This configuration option enables write-heavy operations, such as log analytics and observability, to consume fewer CPU resources at the OpenSearch domain, resulting in improved performance.
Amazon Kinesis Data Firehose makes it easier to reliably load streaming data into Amazon OpenSearch Service and Amazon OpenSearch Serverless. With Amazon Kinesis Data Firehose, you don't need to write applications or manage resources. You can configure your producers to automatically deliver your data to the destination that you specify.
You can also configure Amazon Kinesis Data Firehose to transform your data before delivering it. Kinesis Data Firehose is a fully managed service that automatically scales to match the throughput of your data and without ongoing administration.
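As a rough sketch of how the new option surfaces in the Firehose API, the destination configuration below shows the auto-generated document ID setting. The stream name, role, and domain ARNs are placeholders, and the `DocumentIdOptions` shape should be verified against the current SDK documentation.

```python
# Firehose OpenSearch destination parameters (illustrative names).
destination_config = {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",
    "DomainARN": "arn:aws:es:us-east-1:123456789012:domain/weekly-logs",
    "IndexName": "app-logs",
    # Let OpenSearch generate document IDs instead of Firehose, reducing
    # CPU load on the domain for write-heavy workloads like log analytics.
    "DocumentIdOptions": {"DefaultDocumentIdFormat": "NO_DOCUMENT_ID"},
}
# import boto3
# firehose = boto3.client("firehose")
# firehose.create_delivery_stream(
#     DeliveryStreamName="weekly-logs",
#     AmazonopensearchserviceDestinationConfiguration=destination_config,
# )
```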
AWS announces Amazon Aurora I/O-Optimized
This week, AWS announced the general availability of Amazon Aurora I/O-Optimized, a new configuration that provides improved price performance and predictable pricing for customers with I/O-intensive applications. Aurora I/O-Optimized offers improved performance, increasing throughput and reducing latency for customers' most demanding workloads.
With Aurora I/O-Optimized, there are zero charges for read and write I/O operations—you only pay for your database instances and storage usage, making it easy to predict your database spend up front. Aurora I/O-Optimized offers up to 40% cost savings for I/O-intensive applications where I/O charges exceed 25% of the total Aurora database spend.
You can now choose between two configurations: Aurora Standard or Aurora I/O-Optimized. For applications with low-to-moderate I/Os, Aurora Standard is a cost-effective option. For applications with high I/Os, Aurora I/O-Optimized provides improved price performance, predictable pricing, and up to 40% cost savings.
You can switch your cluster with a single click in the AWS Management Console or with a command through the AWS Command Line Interface. Aurora I/O-Optimized configuration is supported on Aurora Serverless v2 and provisioned instances, both on demand and reserved including existing Aurora reserved instances. You can switch existing database clusters to Aurora I/O-Optimized once every 30 days and switch back to Aurora Standard at any time.
Aurora I/O-Optimized is supported on Amazon Elastic Compute Cloud (Amazon EC2) R7g instances in the AWS Regions where R7g instances are currently available. R7g instances are powered by the latest generation AWS Graviton3 processors, delivering up to 30% performance gains and up to 20% improved price performance for Aurora, as compared to Graviton2 R6g instances. Aurora I/O-Optimized is available for Amazon Aurora PostgreSQL-Compatible Edition and Amazon Aurora MySQL-Compatible Edition in most AWS Regions where Aurora is available.
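The switch mentioned above is a single cluster modification. A minimal sketch, assuming a hypothetical cluster name and the `aurora-iopt1` storage type used by the RDS API for I/O-Optimized (worth verifying against the current documentation):

```python
# Parameters for switching an existing Aurora cluster to I/O-Optimized.
params = {
    "DBClusterIdentifier": "my-aurora-cluster",  # placeholder name
    "StorageType": "aurora-iopt1",  # use "aurora" to switch back to Standard
}
# import boto3
# boto3.client("rds").modify_db_cluster(**params)
```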
AWS App Mesh now supports AWS PrivateLink
You can now use AWS PrivateLink to privately access AWS App Mesh APIs from your Amazon Virtual Private Cloud (VPC) without exposing your data to the public internet. Creating VPC Endpoints incurs charges. See the AWS PrivateLink pricing page for more information.
AWS App Mesh is a service mesh that provides application-level networking to make it easier for your services to communicate with each other across multiple types of compute infrastructure. AWS App Mesh standardizes how your services communicate, giving you end-to-end visibility and options to tune for high-availability of your applications.
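In practice this means creating an interface VPC endpoint for the App Mesh API. A sketch with placeholder VPC, subnet, and security group IDs; the service name follows the usual com.amazonaws.&lt;region&gt;.&lt;service&gt; pattern and should be confirmed for your Region:

```python
# Interface endpoint so App Mesh API calls stay on AWS PrivateLink.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",          # placeholder
    "ServiceName": "com.amazonaws.us-east-1.appmesh",
    "SubnetIds": ["subnet-0123456789abcdef0"],  # placeholder
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "PrivateDnsEnabled": True,  # resolve the public API name to the endpoint
}
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```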
Amazon ElastiCache for Redis now supports enabling Cluster Mode configuration on existing clusters
You can now update your Amazon ElastiCache for Redis cluster configuration to enable cluster mode without needing to rebuild your cluster, migrate data, or affect application availability. Cluster mode allows you to scale beyond the performance limitations of a single-shard Redis cluster. For more information about cluster mode in ElastiCache for Redis, see the documentation.
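As a sketch of the migration, AWS documents a two-step flow: first set cluster mode to "compatible" (clients can connect in either mode), then to "enabled" once all clients reconnect with cluster-mode-aware connections. The replication group name is a placeholder.

```python
# Step 1: put the replication group into cluster-mode "compatible".
step_1 = {
    "ReplicationGroupId": "my-redis-group",  # placeholder
    "ClusterMode": "compatible",
    "ApplyImmediately": True,
}
# Step 2: once clients use cluster-mode connections, finish the switch.
step_2 = {**step_1, "ClusterMode": "enabled"}
# import boto3
# ec = boto3.client("elasticache")
# ec.modify_replication_group(**step_1)
# ...after all clients reconnect in cluster mode...
# ec.modify_replication_group(**step_2)
```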
Amazon Aurora MySQL and PostgreSQL support for Graviton3 based R7g instance family
AWS Graviton3-based R7g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in US East (N. Virginia, Ohio), US West (Oregon), and Europe (Ireland) regions. Graviton3 instances provide up to 30% performance improvement and up to 20% price-performance improvement over Graviton2 instances for Aurora depending on database engine, version, and workload.
AWS Graviton3 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. The Graviton3 processors offer several improvements over the second-generation Graviton processors. Graviton3-based R7g are the first AWS database instances to feature the latest DDR5 memory, which provides 50% more memory bandwidth compared to DDR4, enabling high-speed access to data in memory. R7g database instances offer up to 30Gbps enhanced networking bandwidth and up to 20 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).
You can spin up Graviton3 R7g database instances in the Amazon RDS Management Console or using the AWS CLI. Graviton3 is supported by Aurora MySQL version 3.03.1 and higher, and by Aurora PostgreSQL versions 13.10, 14.7, and 15.2 and higher. Upgrading a database instance to Graviton3 requires a simple instance type modification. For more details, refer to the Aurora documentation.
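That instance type modification can be sketched as a single API call; the instance identifier and class below are illustrative:

```python
# Move an Aurora instance to a Graviton3 (r7g) instance class.
params = {
    "DBInstanceIdentifier": "my-aurora-instance",  # placeholder
    "DBInstanceClass": "db.r7g.large",
    "ApplyImmediately": True,  # or defer to the next maintenance window
}
# import boto3
# boto3.client("rds").modify_db_instance(**params)
```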
Monitor the health of your Amazon IVS channels with multiple hosts
Amazon IVS now offers developers the ability to monitor the health of live streams with multiple hosts. Hosts are connected to each other using the stage resource, which allows them to exchange audio and video with each other in real time. This release gives developers building with Amazon IVS the tools needed to better understand what is happening with stage resources and with each participant.
As part of this update, Amazon IVS has added new APIs and Amazon EventBridge events to give developers insight into the health and lifecycle of their stage resources. This new functionality can be used to diagnose and troubleshoot issues with sessions as they happen or after the sessions have ended.
The Amazon IVS console has also been updated to include a health dashboard to enable developers to see both current and past sessions, and details about each participant to help understand how each participant experienced their session.
Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video.
Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
Amazon EMR Serverless is now available in Amazon Web Services China Regions
This week, AWS announced that Amazon EMR Serverless is now available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and in the Amazon Web Services China (Ningxia) Region, operated by NWCD.
Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple and cost effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. With Amazon EMR Serverless, you can run your Spark and Hive applications without having to configure, optimize, tune, or manage clusters.
Amazon EMR Serverless offers fine-grained automatic scaling, which provisions and quickly scales the compute and memory resources required by the application.
AWS Lambda now supports Kafka and Amazon MQ event sources in AWS GovCloud (US) Regions
AWS Lambda now supports Amazon MSK, self-managed Apache Kafka, and Amazon MQ for Apache ActiveMQ and RabbitMQ as event sources in the AWS GovCloud (US) Regions. This gives customers more choices for how they want to send messages to Lambda.
Customers can build applications quickly and easily with Lambda functions that are invoked based on messages from these event sources connected to an event source mapping.
Lambda supports messaging event sources such as Amazon Simple Queue Service (SQS) and Amazon Kinesis Data Streams. Now, it is also easy to read from Amazon MQ message brokers or Kafka topics to process messages without needing to create and manage a consumer application.
The Lambda function is invoked when the messages exceed the batch size, the payload exceeds 6MB, or when the batch window expires.
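Wiring a Kafka topic to a function is done with an event source mapping. A sketch with placeholder ARNs, function and topic names; `MaximumBatchingWindowInSeconds` is the batch window described above:

```python
# Event source mapping from an Amazon MSK topic to a Lambda function.
mapping = {
    "EventSourceArn": "arn:aws-us-gov:kafka:us-gov-west-1:123456789012:cluster/demo/abc",
    "FunctionName": "process-orders",   # placeholder
    "Topics": ["orders"],               # placeholder topic
    "StartingPosition": "LATEST",
    "BatchSize": 100,                   # invoke when this many messages accumulate
    "MaximumBatchingWindowInSeconds": 5,  # ...or when this window expires
}
# import boto3
# boto3.client("lambda").create_event_source_mapping(**mapping)
```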
Amazon FSx for NetApp ONTAP is now available in US West (N. California) and Asia Pacific (Jakarta)
Customers can now create Amazon FSx for NetApp ONTAP file systems in two additional AWS Regions: US West (N. California) and Asia Pacific (Jakarta).
Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities.
Amazon FSx for NetApp ONTAP, a member of the Amazon FSx family, provides the first and only complete, fully managed NetApp ONTAP file systems in the cloud. It offers the familiar features, performance, capabilities, and APIs of ONTAP with the agility, scalability, and simplicity of an AWS service.
Introducing Cedar, an open-source language for access control
This week, AWS open-sourced the Cedar policy language and authorization engine. You can use Cedar to express fine-grained permissions as easy-to-understand policies enforced in your applications, and you can decouple access control from your application logic.
Cedar supports common authorization models such as role-based access control and attribute-based access control. It follows a new verification-guided development process to give you high assurance of Cedar’s correctness and security: AWS formally models Cedar's authorization engine and other tools, proves safety and correctness properties about them using automated reasoning, and rigorously tests that the model matches the Rust implementation.
Amazon Verified Permissions uses Cedar to allow you to manage fine-grained permissions in your custom applications. With Amazon Verified Permissions, you can store Cedar policies centrally, have low latency with millisecond processing, and audit permissions across different applications.
And now with the Cedar open-source libraries, you can test and validate Cedar policies on a local computer before deploying them with Amazon Verified Permissions. You can also adapt these open-source libraries for your requirements and implement use cases such as running applications that are disconnected from the network.
Amazon CloudFront announces one-click security protections
You can now secure your web applications and APIs with AWS WAF with a single click in the Amazon CloudFront console. CloudFront can create and configure out-of-the-box AWS WAF protection for your application as a first line of defense against common web threats. Optionally, you can later configure additional security protections against bots and fraud or other threats specific to your application in the AWS WAF console.
Previously, you could secure your CloudFront distributions with AWS WAF by preconfiguring an AWS WAF web access control list (web ACL) containing the security rules you wanted to enable. While this approach offers flexibility, you had to decide which initial security rules to enable, and you needed to interact with both the CloudFront and AWS WAF management consoles.
Now, CloudFront handles creating and configuring AWS WAF for you with out-of-the-box protections recommended by AWS for all applications. This simple and convenient way to protect your web applications and APIs is available in the CloudFront console at the time you create or edit your distribution. Customers who prefer to use an existing web ACL may continue to select a preconfigured web ACL instead.
AWS Lambda now supports AWS X-Ray tracing for SnapStart-enabled functions
You can now use AWS X-Ray to trace and analyze your Lambda functions enabled with Lambda SnapStart. You can use X-Ray traces to gain deeper visibility into your function’s performance and execution lifecycle, helping you identify errors and performance bottlenecks for your latency-sensitive Java applications built using SnapStart-enabled functions.
Lambda SnapStart makes it easier for you to build highly responsive and scalable Java applications using Lambda by delivering up to 10x faster function startup performance at no extra cost. SnapStart reduces startup latency (the delay known as a cold start) by initializing the function code ahead of time, taking a snapshot of the initialized execution environment, and caching it.
When the function is invoked and subsequently scales up, SnapStart restores new execution environments from the cached snapshot instead of initializing them from scratch, significantly improving startup latency. With X-Ray support for SnapStart-enabled functions, you can now see trace data about the restoration of the execution environment and execution of your function code. X-Ray also enables you to visualize the trace data, which helps you identify root cause of errors and performance issues in your function.
X-Ray support for Lambda SnapStart-enabled functions is available in all regions where Lambda SnapStart is available. For more information, see the AWS Region table.
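Enabling both features together is a function configuration update. A sketch with a placeholder function name; SnapStart applies to published versions, so a new version must be published after the change:

```python
# Turn on SnapStart and active X-Ray tracing for a Java function.
config = {
    "FunctionName": "my-java-function",  # placeholder
    "SnapStart": {"ApplyOn": "PublishedVersions"},
    "TracingConfig": {"Mode": "Active"},
}
# import boto3
# boto3.client("lambda").update_function_configuration(**config)
# ...then publish a version so a snapshot is taken with tracing enabled.
```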
Announcing Provisioned Concurrency for Amazon SageMaker Serverless Inference
This week, AWS announced the general availability of Provisioned Concurrency support for Amazon SageMaker Serverless Inference. Provisioned Concurrency allows you to deploy models on serverless endpoints with predictable performance and high scalability.
You can add provisioned concurrency to your serverless endpoints, and for the pre-defined amount of provisioned concurrency SageMaker will keep the endpoints warm and ready to respond to requests instantaneously. Provisioned Concurrency is ideal for customers who have predictable traffic, with low throughput.
With on-demand serverless endpoints, if your endpoint does not receive traffic for a while and then your endpoint suddenly receives new requests, it can take some time for your endpoint to spin up the compute resources to process the requests. This is called a cold start.
A cold start can also occur if your concurrent requests exceed the current concurrent request usage. To reduce variability in your latency profile, you can optionally enable Provisioned Concurrency for your serverless endpoints. With provisioned concurrency, your serverless endpoints are always ready and can instantaneously serve bursts in traffic up to the configured amount of provisioned concurrency, without any cold starts.
You can enable Provisioned Concurrency for serverless endpoints from the AWS console, AWS SDKs, or the AWS Command Line Interface (AWS CLI). Provisioned Concurrency for SageMaker Serverless Inference is generally available in all AWS Regions where SageMaker Serverless Inference is generally available.
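As a sketch of what the endpoint configuration looks like, the variant below sets both the concurrency ceiling and the provisioned (kept-warm) portion. Names are placeholders, and the exact field shape should be checked against the current SageMaker API documentation:

```python
# Serverless endpoint config with provisioned concurrency.
endpoint_config = {
    "EndpointConfigName": "my-serverless-config",  # placeholder
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",                   # placeholder
        "ServerlessConfig": {
            "MemorySizeInMB": 2048,
            "MaxConcurrency": 10,        # upper bound on concurrent invocations
            "ProvisionedConcurrency": 2,  # kept warm; no cold starts up to 2
        },
    }],
}
# import boto3
# boto3.client("sagemaker").create_endpoint_config(**endpoint_config)
```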
AWS Systems Manager now allows customers to optimize the compute costs of their applications
Application Manager, a capability of AWS Systems Manager that helps DevOps engineers to investigate and remediate issues in the context of their applications, now enables customers to optimize the cost of compute resources associated with their applications. Customers can now view the cost of their applications in Application Manager and also take recommended actions, such as right sizing instances, to save costs.
Application Manager enables customers to monitor the operational status, metrics, and compliance of their applications from a central console. With this new feature, customers can now manage costs through integration with AWS Cost Explorer and AWS Compute Optimizer. Further, customers can now explore the cost trends of their applications, get recommendations to save costs, and take action to implement those with a few clicks – all from a single console.
AWS announces new AWS Direct Connect location in Lagos, Nigeria
This week, AWS announced the opening of a new AWS Direct Connect location within the Rack Center data center in Lagos, Nigeria. By connecting your network to AWS at the new location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones.
The new Lagos location offers dedicated 1 Gbps and 10 Gbps connections, with MACsec encryption available for 10 Gbps. Using this new Direct Connect location to reach resources running in the Lagos AWS Local Zone is an ideal solution for applications that require single-digit millisecond latency or local data processing.
The Direct Connect service establishes a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
Using the Direct Connect SiteLink feature, you can send data between Direct Connect locations to create private network connections between the offices and data centers in your global network.
AWS Backup now supports AWS User Notifications
This week, AWS Backup announced support for managing your backup notifications from the AWS User Notifications console. AWS Backup is a fully managed service that centralizes and automates data protection across AWS services and hybrid workloads. This launch enables you to easily configure, monitor, and manage your notifications related to AWS Backup from a central location.
With this launch, you can view the progress of your backup, copy, and restore jobs, as well as changes to your backup policies, vaults, recovery points, and settings, from the User Notifications Notification Center, simplifying notification management and allowing you to respond to your backup events faster.
You can now consolidate your existing and new AWS Backup notifications such as Amazon CloudWatch and Amazon EventBridge alarms, and AWS Support case updates. Moreover, you can set up multiple delivery channels, such as email, AWS Chatbot chat notifications, or AWS Console Mobile App push notifications to receive your AWS Backup notifications.
Amazon SageMaker notebooks now support ml.p4d, ml.p4de and ml.inf1 instances
Amazon SageMaker Studio notebooks and Notebook Instances now support ml.p4d and ml.p4de GPU-based instances that provide the best performance for interactive machine learning (ML) workloads in the cloud for applications such as large language models with billions of parameters, natural language processing, object detection and classification, seismic analysis, genomics research, and more. These instances are powered by the latest Intel® Cascade Lake processors and eight NVIDIA A100 Tensor Core GPUs.
ml.p4d instances are available in US East (N. Virginia), US West (Oregon), US East (Ohio), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Seoul) AWS regions for both Studio notebooks as well as Notebook Instances. ml.p4de instances are available in US East (N. Virginia) and US West (Oregon) AWS regions for both Studio notebooks as well as Notebook Instances.
Additionally, SageMaker Notebook Instances now support ml.inf1 instances, which provide high performance and are cost effective. ml.inf1 instances are built from the ground up to support machine learning applications and feature up to 16 AWS Inferentia chips, machine learning chips designed and built by AWS to optimize cost for deep learning inference.
Amazon CodeGuru Security plugin for SageMaker Studio and Jupyter Notebooks now in preview
Amazon CodeGuru Security now supports security and code quality scans for Amazon SageMaker Studio and Jupyter notebooks. This new capability assists notebook users in detecting security vulnerabilities such as injection flaws, data leaks, weak cryptography, or missing encryption within the notebook cells.
Users can also detect many common issues that affect the readability, reproducibility, and correctness of computational notebooks, such as misuse of ML library APIs, invalid execution order, and nondeterminism. When vulnerabilities or quality issues are identified in the notebook, CodeGuru generates recommendations that enable users to remediate those issues based on AWS security best practices.
Notebook users on SageMaker Studio and Jupyter can start scanning their code for security and quality issues today by installing the Amazon CodeGuru plugin for notebooks, currently in preview.
Amazon CodeGuru Security is a developer tool that provides intelligent recommendations to improve code security and quality. CodeGuru uses machine learning and automated reasoning to identify critical issues, security vulnerabilities, and hard-to-find bugs during application development and provide recommendations to assist users in correcting the identified issues.
Amazon SageMaker Studio is a web-based, integrated development environment (IDE) for machine learning that lets you build, train, debug, deploy, and monitor your machine learning models.
Amazon SageMaker Canvas can now operationalize ML models in production
You can now register machine learning (ML) models built in Amazon SageMaker Canvas with a single click to SageMaker Model Registry, enabling you to operationalize ML models in production. SageMaker Canvas is a visual interface that enables business analysts to generate accurate ML predictions on their own — without requiring any ML experience or having to write a single line of code.
With SageMaker Canvas, you can automatically create ML models to run what-if analysis and generate single or bulk predictions. Now, with SageMaker Model Registry integration, you can store all model artifacts, including metadata and performance metric baselines, in a central repository and plug them into your existing model deployment CI/CD processes.
A model registry plays a key role in the model deployment process because it packages all model information and enables the automation of model promotion to production environments. Starting today, you can select a model version in SageMaker Canvas, register it to SageMaker Model Registry in your own account, and track its approval status.
Rejecting a model in the registry prevents it from being deployed to an escalated environment, whereas approving a model can trigger a model promotion pipeline that automatically copies the model to the pre-production AWS account, getting it ready for production inference workloads.
Amazon MemoryDB for Redis adds support for Redis 7
Amazon MemoryDB for Redis now supports Redis 7. This release brings several new features to MemoryDB:
- Redis Functions: MemoryDB adds support for Redis Functions and provides a managed experience, enabling developers to execute Lua scripts with application logic durably stored on the MemoryDB cluster.
- ACL improvements: MemoryDB adds support for the next version of Redis Access Control Lists (ACLs). With MemoryDB, clients can now specify multiple sets of permissions on specific keys or keyspaces in Redis.
- Sharded Pub/Sub: MemoryDB now gives you the ability to run Redis' Pub/Sub functionality in a sharded way. With MemoryDB, channels are bound to a shard in the MemoryDB cluster, eliminating the need to propagate channel information across shards, resulting in improved scalability.
- Enhanced I/O Multiplexing: MemoryDB now includes enhanced I/O multiplexing, which delivers significant improvements to throughput and latency at scale. As an example, when using an r6g.4xlarge node and running 5,200 concurrent clients, you can achieve up to 46% increased throughput (read and write operations per second) and up to 21% decreased P99 latency compared with MemoryDB for Redis 6.
Amazon MemoryDB for Redis 7 is available in all regions where MemoryDB is generally available. For more details about MemoryDB, refer to Supported MemoryDB for Redis versions.
You can upgrade the engine version of your cluster or replication group by modifying it and specifying 7 as the engine version. To learn more about upgrading engine versions, refer to Version Management.
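The upgrade described above can be sketched as a single cluster update; the cluster name is a placeholder and the exact engine version string should be confirmed in the MemoryDB documentation:

```python
# Upgrade a MemoryDB cluster to the Redis 7 engine.
params = {
    "ClusterName": "my-memorydb-cluster",  # placeholder
    "EngineVersion": "7.0",
}
# import boto3
# boto3.client("memorydb").update_cluster(**params)
```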
AWS announces new AWS Direct Connect location in Atlanta, Georgia
This week, AWS announced the opening of a new AWS Direct Connect location within the QTS Atlanta DC1 data center in Atlanta, Georgia. By connecting your network to AWS at the new location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones.
The new location is the second in Atlanta and the 34th Direct Connect location in the United States. It offers dedicated 1 Gbps, 10 Gbps, and 100 Gbps connections, with MACsec encryption available at 10 Gbps and 100 Gbps speeds. Hosted connections with additional speeds may be available from Direct Connect Service Delivery Partners.
AWS Direct Connect enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet. Using the Direct Connect SiteLink feature, you can send data between Direct Connect locations to create private network connections between the offices and data centers in your global network.
Amazon MemoryDB for Redis now supports IAM Authentication
Amazon MemoryDB for Redis now supports AWS Identity and Access Management (IAM) authentication access to its clusters. With this launch, you can associate IAM users and roles with MemoryDB users and manage their cluster access.
You can configure IAM authentication by creating an IAM-enabled MemoryDB user and then assigning this user to an appropriate MemoryDB user group via the AWS Management Console, AWS CLI, or the AWS SDK. Using IAM policies, you can grant or revoke cluster access to different IAM identities. Redis applications can now use IAM credentials to authenticate to your MemoryDB clusters while connecting to them.
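A sketch of creating an IAM-enabled MemoryDB user: the user name and access string below are illustrative, and with authentication type "iam" no password is stored (the user name must correspond to the IAM identity used to connect).

```python
# Create a MemoryDB user that authenticates with IAM credentials.
user_params = {
    "UserName": "app-reader",                 # placeholder
    "AuthenticationMode": {"Type": "iam"},    # IAM auth instead of a password
    "AccessString": "on ~app:* +@read",       # read-only access to app:* keys
}
# import boto3
# boto3.client("memorydb").create_user(**user_params)
# ...then add the user to a user group associated with the cluster.
```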
Private Access to the AWS Management Console is generally available
This week, AWS announced the general availability of AWS Management Console Private Access, a new security feature that allows customers to limit access to the AWS Management Console from their Virtual Private Cloud (VPC) or connected networks to a set of trusted AWS accounts and organizations.
AWS Management Console Private Access is built on VPC Endpoints, which uses AWS PrivateLink to establish a private connection between a customer VPC and the AWS Management Console. Customers can designate which networks are allowed to access their accounts and AWS Organizations from the AWS Management Console. It also denies attempts to access any other AWS accounts in the AWS Management Console from their network.
Amazon SNS now supports faster automatic deletion of unconfirmed subscriptions
Amazon Simple Notification Service (Amazon SNS) now supports automatic deletion of unconfirmed subscriptions once they have been in a pending confirmation state for 48 hours, down from the previous 72-hour period. This applies to all new subscriptions and does not require any onboarding.
Amazon SNS is a messaging service for Application-to-Application (A2A) and Application-to-Person (A2P) communication. The A2A functionality provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. The A2P functionality enables you to communicate with your customers via mobile text messages (SMS), mobile push notifications, and email notifications.
Unconfirmed Amazon SNS subscriptions are subscriptions in a pending confirmation state. When you subscribe an endpoint to a topic in another AWS account, or if the endpoint type is either HTTP/S or email, your Amazon SNS subscription will be in a pending confirmation state until the subscription owner performs the ConfirmSubscription API action to confirm the subscription. If the subscription owner fails to confirm the subscription, Amazon SNS automatically removes it.
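As a sketch of where the pending state comes from: subscribing an HTTP/S endpoint returns a "pending confirmation" subscription until the endpoint confirms. The topic ARN and URL below are placeholders.

```python
# Subscribe an HTTPS endpoint; the subscription stays pending until the
# endpoint calls ConfirmSubscription with the token SNS delivers to it.
subscribe_params = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:alerts",  # placeholder
    "Protocol": "https",
    "Endpoint": "https://example.com/sns-handler",            # placeholder
    "ReturnSubscriptionArn": False,  # returns "pending confirmation" until confirmed
}
# import boto3
# resp = boto3.client("sns").subscribe(**subscribe_params)
```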
The 48-hour deletion of unconfirmed subscriptions is available in all AWS Regions, including the AWS GovCloud (US) Regions.
To learn more about Amazon SNS subscription management, see the following:
- Subscribe, ConfirmSubscription, and Unsubscribe API actions in the Amazon SNS API Reference
- Cross-account message delivery in the Amazon SNS Developer Guide
- Subscribing an HTTP/S endpoint to a topic in the Amazon SNS Developer Guide
- Subscribing an email to a topic in the Amazon SNS Developer Guide
SageMaker Autopilot supports training ML models with weights, eight additional objective metrics
Amazon SageMaker Autopilot, a low-code machine learning (ML) service that automatically builds, trains, and tunes the best ML models, now supports training with weighted objective metrics in Ensemble mode, along with eight additional objective metrics. Assigning a weight to each data sample in the training dataset can improve overall model performance by helping the model learn better, reducing bias towards a particular class, and increasing stability.
When training on imbalanced datasets, where some classes have significantly fewer data samples than others, assigning higher weights to the under-represented classes can help the model learn better and reduce bias towards the majority classes. Starting today, you can pass a weight column name in your input dataset when creating an Autopilot experiment. SageMaker Autopilot uses these weight values to learn more about your dataset and applies the learnings while training the ML model.
SageMaker Autopilot now also supports eight additional objective metrics: RMSE, MAE, R2, Balanced Accuracy, Precision, Precision Macro, Recall, and Recall Macro (documented here). The selected objective metric is optimized during training to provide the best estimate for model parameter values from the data. If you do not specify a metric explicitly, the default is MSE for regression, F1 for binary classification, and Accuracy for multi-class classification.
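As a rough sketch of how the weight column and a non-default objective metric might be wired up via the CreateAutoMLJobV2 API with boto3 (the column names `label` and `weight_col` are hypothetical placeholders for your own dataset):

```python
def build_autopilot_job(job_name, s3_input, s3_output, role_arn):
    # 'label' and 'weight_col' are hypothetical column names in your data.
    return {
        "AutoMLJobName": job_name,
        "AutoMLJobInputDataConfig": [
            {"DataSource": {"S3DataSource": {"S3DataType": "S3Prefix",
                                             "S3Uri": s3_input}}}
        ],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "AutoMLProblemTypeConfig": {
            "TabularJobConfig": {
                "TargetAttributeName": "label",
                "SampleWeightAttributeName": "weight_col",  # per-row weights
                "Mode": "ENSEMBLING",  # weighted metrics apply in Ensemble mode
            }
        },
        # One of the eight newly supported objective metrics
        "AutoMLJobObjective": {"MetricName": "BalancedAccuracy"},
        "RoleArn": role_arn,
    }

def create_job(params):
    import boto3  # assumes credentials are configured
    return boto3.client("sagemaker").create_auto_ml_job_v2(**params)
```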
AWS IoT SiteWise now supports 15-minute intervals for automatically computing aggregated asset property values
AWS IoT SiteWise now provides the ability for customers to automatically compute aggregated asset property values every 15 minutes. Customers can now get aggregated values for asset properties, metrics and transforms (mathematical expressions that map asset properties' data points from one form to another) for every minute, 15-minute, hour, and day. AWS IoT SiteWise computes commonly used statistical aggregates such as average, count, maximum, minimum, standard deviation, and sum over multiple time intervals.
Customers can map data streams and define static or computed equipment and process properties across all facilities so they're readily available for analysis. For non-numeric properties, such as Strings and Booleans, AWS IoT SiteWise computes only the count aggregate.
The ability to pre-compute 15-minute aggregated asset property values has been a highly requested feature among customers who use AWS IoT SiteWise for energy and manufacturing industrial applications. Customers automatically benefit from this feature by using the AWS IoT SiteWise GetAssetPropertyAggregates API.
To use this API, customers specify asset property identifiers, the time interval (1 minute, 15 minutes, 1 hour, or 1 day), and a list of aggregates such as AVERAGE, COUNT, MAXIMUM, MINIMUM, STANDARD_DEVIATION, and SUM to retrieve aggregated values for an asset property. Customers can visualize 15-minute auto-computed asset property aggregates using AWS IoT SiteWise Monitor web applications, or retrieve them to feed analytics applications.
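A minimal sketch of such a request with boto3, using the new 15-minute resolution over the last 24 hours (the asset and property IDs are placeholders):

```python
from datetime import datetime, timedelta, timezone

def build_aggregates_request(asset_id, property_id, end=None):
    # Resolution "15m" is the newly supported interval; "1m", "1h",
    # and "1d" remain available.
    end = end or datetime.now(timezone.utc)
    return {
        "assetId": asset_id,
        "propertyId": property_id,
        "aggregateTypes": ["AVERAGE", "MAXIMUM", "MINIMUM"],
        "resolution": "15m",
        "startDate": end - timedelta(hours=24),
        "endDate": end,
    }

def fetch_aggregates(params):
    import boto3  # assumes credentials are configured
    return boto3.client("iotsitewise").get_asset_property_aggregates(**params)
```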
AWS IoT SiteWise enhances optimized storage in hot path data for higher throughput
This week, AWS IoT SiteWise announced new performance enhancements that will significantly improve the retrieval of industrial data from its hot storage tier. We have improved the GetAssetPropertyValueHistory and BatchGetAssetPropertyValueHistory APIs by increasing the maximum number of results for each paginated response from 250 up to 20K.
With these upgrades, developers can now retrieve asset property data, including measurements, attributes, metrics, and transforms, at high velocity to build industrial applications.
AWS increased the maximum page size limit for the GetAssetPropertyValueHistory and BatchGetAssetPropertyValueHistory APIs, which means you can specify a higher maxResults in the API request and start retrieving data with fewer API calls. A reduced number of API calls results in faster response times for analytics and data visualization applications, for example applications that need to retrieve property data for hundreds of assets to create interactive dashboards showing historical trends from multiple machines on a single screen.
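The pagination pattern described above might look like this with a boto3 IoT SiteWise client, using the raised page size:

```python
def fetch_history(client, asset_id, property_id, page_size=20000):
    # maxResults can now be up to 20,000 per response instead of 250,
    # so far fewer round trips are needed for large time ranges.
    values, token = [], None
    while True:
        kwargs = {"assetId": asset_id, "propertyId": property_id,
                  "maxResults": page_size}
        if token:
            kwargs["nextToken"] = token
        resp = client.get_asset_property_value_history(**kwargs)
        values.extend(resp["assetPropertyValueHistory"])
        token = resp.get("nextToken")
        if not token:
            return values
```

Pass in `boto3.client("iotsitewise")` as `client`; keeping the client an argument also makes the loop easy to unit test with a stub.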
Amazon QuickSight adds new scatterplot options supporting additional use cases
Amazon QuickSight now supports advanced use cases for scatterplots with new options, including support for visualizing unaggregated data, a new 'Label' field well, and performance improvements.
With the new 'None' aggregate option in the field menu, users can plot unaggregated values even when using a field on Color. When one value is aggregated, the other value is automatically set as aggregated, and the same applies to unaggregated values.
Furthermore, users can now leverage the new ‘Label’ field well alongside the existing Color field to add more flexibility in data visualization by allowing them to color by one field and label by another. Lastly, the performance of scatterplots has been improved to load up to six times faster, which applies to both new and existing use cases. For more information, please visit the documentation page.
AWS Glue large instance types are now generally available
This week, AWS announced the general availability of AWS Glue G.4X and G.8X, the next series of AWS Glue workers for your most demanding serverless data integration workloads. Glue G.4X and G.8X workers provide higher compute, memory, and storage resources than current Glue workers.
These new types of workers help you scale and run your most demanding data integration workloads, such as memory-intensive data transforms, skewed aggregations, machine learning transforms, and entity detection checks with petabytes of data.
G.4X and G.8X workers provide the most Data Processing Units (DPUs) of any Glue workers, with each G.4X worker providing 4 DPUs and each G.8X worker providing 8 DPUs. To use G.4X and G.8X workers, set the worker type parameter of your jobs to G.4X or G.8X from the CLI, API, or Glue Studio.
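Selecting the new worker type via the CreateJob API might look like this sketch (the job name, script location, and Glue version are assumptions to adapt to your environment):

```python
def build_glue_job(name, script_s3_uri, role_arn):
    # WorkerType "G.4X" gives 4 DPUs per worker; swap in "G.8X" for 8.
    # GlueVersion "4.0" is an assumption; check which Glue versions
    # support the large worker types in your region.
    return {
        "Name": name,
        "Role": role_arn,
        "Command": {"Name": "glueetl",
                    "ScriptLocation": script_s3_uri,
                    "PythonVersion": "3"},
        "GlueVersion": "4.0",
        "WorkerType": "G.4X",
        "NumberOfWorkers": 10,
    }

def create_job(params):
    import boto3  # assumes credentials are configured
    return boto3.client("glue").create_job(**params)
```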
Introducing Amazon EC2 I4g storage-optimized instances
Amazon Elastic Compute Cloud (EC2) I4g storage-optimized instances powered by AWS Graviton2 processors are now generally available. I4g instances deliver the best compute price performance for a storage-optimized instance versus comparable x86-based storage optimized instances, and the best storage performance per TB for a Graviton-based storage instance.
Based on AWS Nitro SSDs that are custom built by AWS and reduce both latency and latency variability, I4g instances are optimized for workloads that perform a high mix of random read/write and require very low I/O latency, such as transactional databases (Amazon DynamoDB, MySQL, and PostgreSQL) and real-time analytics such as Apache Spark.
I4g instances improve real-time storage performance by up to 2x compared to prior-generation storage-optimized instances. Offering 8 GB of memory per vCPU, I4g instances help you maximize storage application throughput. I4g instances deliver higher compute performance and a lower cost per TB compared to existing I3 and I4i instances with similar memory and storage ratios. This helps you effectively use Graviton-based storage instances for your existing storage workloads and lower your costs.
I4g instances are available in six sizes: I4g.large, I4g.xlarge, I4g.2xlarge, I4g.4xlarge, I4g.8xlarge, and I4g.16xlarge. You can use I4g instances in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), and Europe (Ireland). I4g instances are available through Savings Plans, Reserved Instances, On-Demand, and Spot Instances.
AWS Network Firewall ingress TLS inspection is now available in all regions
Ingress Transport Layer Security (TLS) inspection for AWS Network Firewall is now available in all AWS Regions where AWS Network Firewall is available today, including the AWS GovCloud (US) Regions. With this launch, you can use AWS Network Firewall to decrypt, inspect, and re-encrypt TLS traffic originating from the internet, another VPC, or another subnet.
AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. This new feature enables customers to decrypt TLS sessions and inspect inbound VPC traffic without having to deploy and manage any additional network security infrastructure.
AWS CloudTrail Lake enhances query support for all Presto SQL SELECT functions
AWS CloudTrail Lake, a managed data lake that lets organizations aggregate, immutably store, and query their audit and security logs for auditing, security investigations, and operational troubleshooting, now supports all Presto SQL SELECT query functions for easy and flexible querying of data. This release includes support for popular query functions such as REGEXP_EXTRACT, for sophisticated pattern matching such as finding all S3 requests made on a specific S3 bucket prefix, and UNNEST, for expanding an array such as resources to query over its objects like resourceType. With this release, you can also add comments within a query for better readability.
To help you get started, the sample queries page in the CloudTrail Lake console provides sample queries leveraging the new supported Presto functions. For more information, see Viewing sample queries in the CloudTrail console.
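A sketch of a CloudTrail Lake query using REGEXP_EXTRACT via boto3's StartQuery. The event data store ID and key prefix are placeholders, and the exact predicate is illustrative rather than a documented recipe:

```python
def build_lake_query(event_data_store_id, key_prefix):
    # Field names below follow the CloudTrail event schema, but adapt
    # the predicate to your own events; comments inside queries are
    # now supported.
    statement = f"""
        -- S3 data events whose object key starts with the given prefix
        SELECT eventTime, eventName
        FROM {event_data_store_id}
        WHERE eventSource = 's3.amazonaws.com'
          AND REGEXP_EXTRACT(element_at(requestParameters, 'key'),
                             '^{key_prefix}') IS NOT NULL
    """
    return {"QueryStatement": statement}

def start_query(params):
    import boto3  # assumes credentials are configured
    return boto3.client("cloudtrail").start_query(**params)
```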
AWS IoT Device Defender and secure tunneling is now available in Middle East (UAE) region
AWS is excited to announce the general availability of AWS IoT Device Defender and the secure tunneling feature of AWS IoT Device Management in the AWS Middle East (UAE) Region, providing flexibility to customers and extending the service footprint to an additional location in the Middle East beyond the AWS Middle East (Bahrain) Region.
AWS IoT Device Defender is a fully managed AWS IoT service that makes it easy to audit configurations, detect anomalies, and receive alerts to help secure your IoT device fleet. With AWS IoT Device Defender, you can improve the security posture of your device fleet and automate your fleet’s security assessment. With this launch, customers in Middle East (UAE) region will have access within their region to AWS IoT Device Defender features, including audit, rule detect and ML detect.
AWS IoT Device Management helps you register, organize, monitor, and remotely manage IoT devices at scale. The secure tunneling feature of AWS IoT Device Management enables you to establish bidirectional communication to remote devices that are behind a firewall over a secure connection managed by AWS IoT. With this launch, AWS Middle East (UAE) Region customers can use the secure tunneling feature to easily build a secure connection between source and destination devices that is brokered through a cloud proxy service on AWS.
Amazon Athena now supports Apache Hudi 0.12.2
You can now use Amazon Athena to query tables created with Apache Hudi 0.12.2, which includes support for improved scalability of queries accessing datasets in your Amazon S3 data lake. The updated integration enables you to use Athena to query Hudi 0.12.2 tables managed via Amazon EMR, Apache Spark, Apache Hive, or other compatible services.
Apache Hudi is an open-source data management framework used to simplify incremental data processing in S3 data lakes. Hudi provides record-level data processing that can help you simplify development of Change Data Capture (CDC) pipelines, comply with GDPR-driven updates and deletions, and better manage streaming data from sensors or devices that require data insertion and event updates.
The 0.12.2 release includes support for Metadata Tables which are designed to eliminate the requirement for the “list files” operation in order to better support efficient scaling over larger data sets. The Metadata Table will instead proactively maintain the list of files and remove the need for recursive file listing operations to avoid running into request limits in the case of storage systems like Amazon S3.
Amazon MemoryDB for Redis simplifies creating new clusters in the AWS Management Console
Amazon MemoryDB for Redis now makes it simpler and faster for you to get started with setting up a new MemoryDB cluster. The new console experience offers streamlined navigation and minimal settings required to configure a cluster, in just a few clicks.
You can now create a MemoryDB cluster by selecting from one of the three pre-defined configurations: Production, Dev/Test, and Demo. Each configuration includes default cluster settings based on best practices that are tailored to your use case.
The new console experience is now available in all regions where MemoryDB is generally available.
Amazon QuickSight now supports State Persistence and Bookmarks for embedded dashboards
Amazon QuickSight now supports State Persistence for embedded dashboards (and consoles) for registered users. Embedded QuickSight dashboards now remember filter selections between visits when accessed as a registered QuickSight user. Registered users can also Bookmark specific views of the embedded dashboard, making it simple to access your preferences from one convenient location.
For example, you can create a Bookmark for an embedded dashboard with a specific filter setting that differs from the original dashboard. This allows you to quickly switch between relevant views without having to re-initialize the filters. You can enable Bookmarks and State Persistence using the GenerateEmbedUrlForRegisteredUser API. To learn more, please visit our documentation.
State Persistence and Bookmarks for embedded dashboards are only available with the Amazon QuickSight Enterprise Edition.
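Enabling both features when generating the embed URL might be sketched like this with boto3 (account, user ARN, and dashboard ID are placeholders):

```python
def build_embed_request(account_id, user_arn, dashboard_id):
    # FeatureConfigurations switches on the new capabilities; both
    # require QuickSight Enterprise Edition.
    return {
        "AwsAccountId": account_id,
        "UserArn": user_arn,
        "ExperienceConfiguration": {
            "Dashboard": {
                "InitialDashboardId": dashboard_id,
                "FeatureConfigurations": {
                    "StatePersistence": {"Enabled": True},
                    "Bookmarks": {"Enabled": True},
                },
            }
        },
    }

def generate_url(params):
    import boto3  # assumes credentials are configured
    qs = boto3.client("quicksight")
    return qs.generate_embed_url_for_registered_user(**params)
```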
Amazon Redshift Data Sharing now available in AWS China Regions
Amazon Redshift Data Sharing, a secure way to share live data across Amazon Redshift clusters, is now available in AWS China regions. Customers can now share data to provisioned clusters and serverless workgroups in the same account and different accounts, as long as they are within the same region.
Redshift Data Sharing enables instant, granular, and high-performance data access across Redshift clusters without the need to copy or move data. Redshift Data Sharing provides live access to the data so that your users always see the most up-to-date and consistent information as it is updated in the data warehouse. Redshift Data Sharing can be used on your Amazon Redshift RA3 clusters at no additional cost.
With Redshift Data Sharing, you can isolate diverse workloads across different Redshift clusters while still sharing live, transactionally consistent data by leveraging Redshift Managed Storage across these clusters without the complexity and delays associated with data copies and data movement.
Queries accessing shared data run using the compute resources of the consumer cluster and do not impact the performance of the producer cluster. Redshift Data Sharing enables seamless collaboration across business groups for broader analytics, such as analyzing cross-product impact, as well as secure data sharing with your business ecosystem external to your organization.
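Setting up a datashare on the producer side can be sketched via the Redshift Data API. The share name, schema, table, and consumer namespace GUID below are all placeholders:

```python
# Producer-side setup; 'sales_share', 'public.sales', and the consumer
# namespace GUID are placeholders for your own objects.
DATASHARE_SQL = [
    "CREATE DATASHARE sales_share",
    "ALTER DATASHARE sales_share ADD SCHEMA public",
    "ALTER DATASHARE sales_share ADD TABLE public.sales",
    "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE 'consumer-namespace-guid'",
]

def run_on_producer(client, cluster_id, database, db_user):
    # client is a boto3 'redshift-data' client; statements run in order.
    for sql in DATASHARE_SQL:
        client.execute_statement(ClusterIdentifier=cluster_id,
                                 Database=database, DbUser=db_user, Sql=sql)
```

The consumer side would then create a database from the share within the same region, since cross-region sharing is not part of this launch.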
Amazon QuickSight now supports VPC Connections via public APIs with Multi-AZ support
This week, Amazon QuickSight announced the general availability of managing Virtual Private Cloud (VPC) connections via public APIs and an enhanced UX with Multi-AZ support. APIs enable you to create, update, delete, list and describe VPC connections. This launch enables you to create private VPC connections as part of your Infrastructure as Code (IaC) efforts with full support for AWS CloudFormation.
With VPC connection management via APIs, administrators who need to automate deployment processes via IaC can now use these APIs as part of their CI/CD pipelines, removing the need to manually create, delete and update these assets via console. In addition, Amazon QuickSight is also introducing Multi-AZ support for VPC to improve availability in the event of AZ failure.
With the enhanced UX and API offering, customers can now update VPC connections in addition to creating and deleting them, and can describe the state of VPC connection operations in both the console and the API.
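Creating a connection via the new APIs might be sketched like this with boto3; passing subnets in two or more Availability Zones is what provides the Multi-AZ resilience (all identifiers are placeholders):

```python
def build_vpc_connection(account_id, conn_id, subnet_ids, sg_ids, role_arn):
    # Supply subnets in at least two Availability Zones for Multi-AZ.
    return {
        "AwsAccountId": account_id,
        "VPCConnectionId": conn_id,
        "Name": conn_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        "RoleArn": role_arn,
    }

def create_connection(params):
    import boto3  # assumes credentials are configured
    return boto3.client("quicksight").create_vpc_connection(**params)
```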
AlloyDB for PostgreSQL
The storage per cluster limit has increased to 32 TiB.
The columnar engine now supports columns with jsonb data types.
App Engine flexible environment Ruby
Ruby 3.2 is now generally available. This version requires you to specify an operating system version in your app.yaml file. Learn more.
Documentation has been added to explain how to run Nextflow pipelines on Batch. For more information, see Orchestrate jobs by running Nextflow pipelines on Batch.
Object tables are now generally available (GA).
Object tables are read-only tables containing metadata for unstructured data stored in Cloud Storage. They enable you to analyze and perform inference on images, audio files, documents and other file types by using BigQuery ML and BigQuery remote functions. Object tables extend the data security and governance best practices currently applied to structured data to unstructured data as well.
The GA release includes the following new and updated functions:
- ML.DECODE_IMAGE: Decodes image data so that it can be interpreted by the ML.PREDICT function.
- ML.CONVERT_COLOR_SPACE: Converts images with an RGB color space to a different color space.
- ML.CONVERT_IMAGE_TYPE: Converts the data type of the pixel values in an image.
- ML.RESIZE_IMAGE: Resizes images.
- ML.DISTANCE: Computes the distance between two vectors.
- ML.LP_NORM: Computes the Lᵖ norm for a vector, where p is the degree.
BigQuery is now available in the Dallas (us-south1) region.
You can now view BI Engine Top Tables Cached Bytes, BI Engine Query Fallback Count, and Query Execution Count as dashboard metrics for BigQuery. This feature is now generally available (GA).
EXTERNAL_QUERY SQL pushdown optimizes data retrieval from external sources like Cloud SQL or Cloud Spanner databases. Transferring less data reduces execution time and cost. SQL pushdown encompasses both column pruning (SELECT clauses) and filter pushdowns (WHERE clauses). SQL pushdown applies to SELECT * FROM T queries, a significant percentage of all federated queries. Pushdowns have limitations; for example, not all data types are supported for filter pushdowns. This feature is generally available (GA).
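A sketch of a federated query that benefits from pushdown: the outer column list and WHERE filter are candidates to be pushed into the external database, so only matching rows and columns leave Cloud SQL. The connection ID, table, and column names are placeholders:

```python
def build_federated_query(connection_id):
    # The outer SELECT list and WHERE clause can be pushed down into
    # the external source; 'orders' and its columns are placeholders.
    return (
        "SELECT name, created_at "
        f"FROM EXTERNAL_QUERY('{connection_id}', 'SELECT * FROM orders') "
        "WHERE created_at > '2023-01-01'"
    )

def run_query(connection_id):
    # Assumes google-cloud-bigquery is installed and ADC is configured.
    from google.cloud import bigquery
    client = bigquery.Client()
    return list(client.query(build_federated_query(connection_id)).result())
```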
You can now restrict the creation of Cloud Build builds, triggers, and repositories to a particular location using an Organization Policy Service constraint. This feature is generally available. To learn more, see Restricting Resource Locations.
The Cloud Router custom learned routes feature is in Preview. This feature lets you configure a Border Gateway Protocol (BGP) session to include learned routes that you manually specify. Cloud Router then behaves as if it learned the routes from the BGP peer.
Custom learned routes can be helpful if you want to avoid the limitations of static routes. For example:
Static routes can't detect a loss of reachability in the next hop of a route. In contrast, custom learned routes can detect a loss of reachability, and they react accordingly to avoid dropping traffic without notification.
Static routes do not support using HA VPN tunnels or Cloud Interconnect VLAN attachments as next hops. Custom learned routes do.
For more information, see Custom learned routes.
Cloud Run integrations (Preview) are now available in additional regions.
Cloud Run services can now connect to Firebase Hosting for custom domains and CDN capabilities, using Integrations (Preview).
Cloud Run now logs container health check failures, including default TCP startup probe failures.
Support for logging the processing duration of your Cloud Spanner read and write requests is now available in Cloud Audit Logs. For more information, see Processing duration.
Custom audit logging for Cloud Storage is now available in Preview.
- JSON API requests now support user-defined headers that are prefixed with
- Cloud Audit Logs can subsequently include these headers as part of your request's audit log entry.
Cloud Workstations is generally available (GA) and is backed by a Service Level Agreement (SLA).
This release includes support for the following features:
- API and gcloud support for the me-west1 region.
- API support for GPUs is available in preview.
- Terraform support is available in preview.
- Posit Workbench (including RStudio Pro) integration is available in preview.
- BeyondCorp Enterprise integration for the Cloud Workstations API is available in preview.
GKE cluster versions have been updated.
- Version 1.25.8-gke.500 is now the default version.
Now in GA for both GKE Standard and Autopilot clusters with GKE version 1.26 and later, you can add more IPv4 secondary Pod ranges to a new or existing cluster with the --additional-pod-ipv4-ranges flag. To learn more, see Adding Pod IP addresses.
Looker 23.8 includes the following changes, features, and fixes.
Expected rollout start: Monday, May 15, 2023
Expected final deployment and download available: Thursday, May 24, 2023
Previously, a LookML validation error occurred when a project_name parameter was added to a project manifest file that also defined a Looker extension. This LookML error was triggered when the Local Project Import Labs feature was disabled for the Looker instance. Looker extensions do not require local project import, so with this bug fix this scenario will no longer trigger a LookML validation error.
The API3 keys setting on the Admin API page is now named API keys, in preparation for the deprecation of API3 in June 2023.
Users will now be warned when text on a dashboard tile is close to reaching the maximum length of 256 characters.
The Hide dashboard filters feature is now generally available.
The New Explore Visualizations Labs feature is now generally available. The Explore page, Looks, embedded Looks or Explores, and dashboard tile edit windows will display the same style of funnel chart, timeline, single value, and table visualizations as those that appear on dashboard tiles. Additionally, the drill overlay that appears when you drill into an Explore will match the style of the drill overlay that appears in dashboards, instead of the style that appears in Looks.
Customers who do not have the oem_jar license feature enabled can now access the set_smtp_settings API endpoint.
The Looker IDE will now display an error when incompatible types are being compared in Liquid statements.
The Source column in the Admin > Queries panel now correctly displays the API version for queries that are initiated from the Looker API.
Cookieless embed API endpoints are now marked as stable.
When the filter definition for matches_filter is empty, 1=1 will be added to the WHERE clause so that there are no SQL errors and the query can run. This functionality mirrors the is equal to [empty] standard filter option.
Generative AI Support for Vertex AI
Generative AI Support for Vertex AI is now available in Preview. With this feature launch, you can leverage the Vertex AI PaLM API to build generative AI models that you can test, tune, and deploy in your AI-powered applications.
Features and models in this release include:
- PaLM 2 for Text: text-bison@001
- PaLM 2 for Chat: chat-bison@001
- Embedding for Text: textembedding-gecko@001
- Generative AI Studio for Language
- Tuning for PaLM 2
- Vertex AI SDK v1.25, which includes new features such as TextGenerationModel(text-bison@001), ChatModel(chat-bison@001), TextEmbeddingModel(textembedding-gecko@001)
You can interact with the generative AI features on Vertex AI by using Generative AI Studio in the Google Cloud console, the Vertex AI API, and the Vertex AI SDK for Python.
- Learn more about Generative AI Support for Vertex AI
- See an Introduction to Generative AI Studio
- Get started with a Generative AI Studio quickstart
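A minimal text-generation call with the Vertex AI SDK might look like the sketch below. It assumes google-cloud-aiplatform >= 1.25 and a configured project; the preview module path reflects this release and may change as the SDK matures:

```python
MODEL_NAME = "text-bison@001"  # PaLM 2 for Text, per the release notes

def generate_text(prompt):
    # Deferred import so the module loads without the SDK installed;
    # temperature and max_output_tokens values are illustrative.
    from vertexai.preview.language_models import TextGenerationModel
    model = TextGenerationModel.from_pretrained(MODEL_NAME)
    return model.predict(prompt, temperature=0.2,
                         max_output_tokens=256).text
```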
Vertex AI Model Garden
The Vertex AI Model Garden is now available in Preview. Model Garden is a platform that helps you discover, test, customize, and deploy Vertex AI and select OSS models. These models range from tunable to task-specific, and all are available on the Model Garden page in the Google Cloud console.
- To get started, see Explore AI models and APIs in Model Garden.
You can apply call logging to a workflow definition as well as to the execution of a workflow, and specify the level of logging required. The execution log level takes precedence over any workflow log level, unless the execution log level is not specified.
Microsoft Azure Releases And Updates
Generally available: Azure Bastion now supports shareable links
Shareable links allow users to connect to target resources via Azure Bastion without access to the Azure portal.
General availability: Azure Sphere OS version 23.05 expected on May 24
Participate in the retail evaluation now to ensure full compatibility. The OS evaluation period provides 14 days for backward compatibility testing.
Public Preview: App Service Minimum TLS Cipher Suite Now on Azure Portal
Disable Weak TLS Cipher Suites for Your App Service Web Apps (Preview)
General availability: Inbound ICMPv4 pings are now supported on Azure Load Balancer
Inbound ICMPv4 pings are now supported on Azure Load Balancer.
Generally available: Azure DNS Private Resolver is available in 8 additional regions
Azure DNS Private Resolver is now available in West US, Canada East, Qatar Central, UAE North, Australia Southeast, Norway East, and Poland Central.
Generalized Azure Compute Gallery custom image support in Update management center
VM Image support for Update management center has expanded with the addition of support for generalized Azure Compute Gallery custom images.
Public preview: Always Serve for Azure Traffic Manager
Azure is excited to announce that the Always Serve feature for Azure Traffic Manager is now available in public preview. This feature allows customers to disable endpoint health checks on an Azure Traffic Manager profile and always serve traffic to a given endpoint.
Preview: Automatic Scaling for App Service Web Apps
App Service has an automatic scaling capability that adjusts the number of running instances of your application based on incoming HTTP requests.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes. Get back your precious time and sanity and rid yourself of manual drag and drop diagram builders forever.
Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (No credit card required and includes a forever free tier):