Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday November 18th 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places.
AWS Updates and Releases
Application Load Balancers now support the ability to route traffic only to targets in the same zone as load balancer nodes. This allows you to maintain zonal isolation of your software stacks, while using Application Load Balancers across multiple zones, providing increased availability during zonal failures.
You can turn off cross-zone load balancing on a per target group basis, per region. You can also use an AWS SDK to turn off cross-zone load balancing programmatically for multiple target groups associated with one or more load balancers. This feature works for auto-scaling scenarios in the background, and ensures that target applications are scaled on a per AZ basis. This feature also adds the required zonal dimensions for various existing load balancer metrics.
There is no additional charge for turning off cross-zone load balancing.
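As a sketch of what this looks like in practice, the snippet below builds the attribute payload you would pass to the elbv2 `ModifyTargetGroupAttributes` API to turn cross-zone load balancing off for a target group. The target group ARN is a placeholder; the attribute key follows the documented `load_balancing.cross_zone.enabled` naming.

```python
# Sketch: build the Attributes list for an elbv2 ModifyTargetGroupAttributes call.
# The attribute key "load_balancing.cross_zone.enabled" accepts "true", "false",
# or "use_load_balancer_configuration" (inherit from the load balancer setting).

def cross_zone_attributes(enabled: bool) -> list[dict]:
    """Build the Attributes list controlling cross-zone load balancing."""
    return [{
        "Key": "load_balancing.cross_zone.enabled",
        "Value": "true" if enabled else "false",
    }]

params = {
    # Placeholder ARN for illustration only.
    "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                      "targetgroup/my-targets/abcdef0123456789",
    "Attributes": cross_zone_attributes(enabled=False),
}
```

Passing this `params` dict to the API (for example via an AWS SDK) disables cross-zone routing for that one target group while leaving others untouched.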
This week, Amazon Simple Queue Service (SQS) announces support for attribute-based access control (ABAC) using queue tags, enabling customers to bolster their overall security postures with a flexible and scalable access control solution.
Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Amazon SQS significantly reduces the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.
ABAC is an authorization strategy that defines permissions based on tags attached to users and AWS resources. Today, you can already assign metadata to your SQS resources as tags. Each tag is a simple label comprising a customer-defined key and an optional value, making it easier to manage, search for, and filter resources.
With this release, you can now use your tags to configure access permissions and policies. ABAC leverages multiple dimensions of attributes on your SQS resources to determine access permissions. With the flexibility of using multiple custom resource tags in your security policies, you can now easily set more granular access permissions based on resource attributes reflecting your organizational structures.
This enhancement also allows you to easily scale your tag-based permissions to new employees and changing resource structures, without rewriting the permissions policy as organizations grow.
Getting started with ABAC for SQS is easy. SQS supports using tags while creating queues. You can simply add tags while creating your SQS resources and then create an IAM policy that allows or denies access to SQS resources based on your tags. You can use the AWS API, the AWS CLI, or the AWS Management Console to tag your resources.
Attribute-based access control (ABAC) for SQS is available in all AWS Commercial Regions where Amazon SQS is available.
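A minimal sketch of what such a tag-based policy might look like: the IAM policy below allows `sqs:SendMessage` only on queues tagged `environment=production`, using the `aws:ResourceTag` condition key. The tag key and value are illustrative choices, not prescribed names.

```python
import json

# Sketch of an ABAC-style IAM policy for SQS: allow sending messages only to
# queues carrying the tag environment=production. Tag key/value are examples.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sqs:SendMessage",
        "Resource": "*",
        "Condition": {
            "StringEquals": {"aws:ResourceTag/environment": "production"}
        },
    }],
}
policy_json = json.dumps(policy, indent=2)
```

Because the permission is expressed against the tag rather than individual queue ARNs, newly created production queues are covered automatically as long as they carry the tag.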
Amazon CloudFront now supports Cloudfront-viewer-ja3-fingerprint headers, enabling customers to access incoming viewer requests’ JA3 fingerprints. Customers can use the JA3 fingerprints to implement custom logic to block malicious clients or allow requests from expected clients only.
A Cloudfront-viewer-ja3-fingerprint header contains a 32-character hash fingerprint of the TLS Client Hello packet of an incoming viewer request. The fingerprint encapsulates information about how the client communicates and can be used to profile clients that share the same pattern.
You can add the Cloudfront-viewer-ja3-fingerprint header to an origin request policy and attach the policy to your CloudFront distributions. You can then inspect the header value in your origin applications or in your Lambda@Edge and CloudFront Functions, and compare the header value against a list of known malware fingerprints to block the malicious clients. You can also compare the header value against a list of expected fingerprints to allow only requests bearing the expected fingerprints.
There are no additional fees to use JA3 fingerprint headers. For more information, see the CloudFront Developer Guide.
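To make the allow/block pattern concrete, here is a small origin-side sketch that inspects the JA3 header CloudFront forwards. The fingerprint values are made-up placeholders; real deployments would populate these sets from threat intelligence or known-client profiles.

```python
# Sketch of origin-side logic inspecting the JA3 fingerprint header that
# CloudFront adds via an origin request policy. Fingerprints are placeholders.
BLOCKED_FINGERPRINTS = {"deadbeefdeadbeefdeadbeefdeadbeef"}
EXPECTED_FINGERPRINTS = {"0123456789abcdef0123456789abcdef"}

def decide(headers: dict) -> str:
    """Return a routing decision based on the viewer's JA3 fingerprint."""
    ja3 = headers.get("cloudfront-viewer-ja3-fingerprint", "")
    if ja3 in BLOCKED_FINGERPRINTS:
        return "block"      # known-bad client profile
    if ja3 in EXPECTED_FINGERPRINTS:
        return "allow"      # recognized, expected client
    return "challenge"      # unknown fingerprint: apply extra scrutiny
```

The same three-way decision could equally be implemented in a Lambda@Edge handler before the request ever reaches the origin.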
AWS IoT fleet indexing's search and aggregation queries, including fleet metrics, now support up to twelve query terms, up from the prior limit of seven. These queries enable multiple use cases, such as helping customers capture fleet-level insights, define and monitor alarms on specific attributes, and filter on a subset of devices for easier troubleshooting.
With this update, AWS IoT customers can enrich their fleet management capability and perform even more granular filtering through additional query terms.
Amazon Connect now provides the ability for contact center administrators to view and delete all saved reports in an instance, including reports created by users who may have left the organization.
Saved reports are custom real-time, historical, and login/logout reports that users can create to monitor contact center performance, as well as share and publish to other users in the organization.
Using these capabilities, administrators can identify and delete unused reports to help manage against their saved report limit.
Amazon Connect Voice ID can now be used by Amazon Connect customers in the Canada (Central) Region. Voice ID uses machine learning to offer both real-time caller authentication and fraud risk detection, to help make contact center voice interactions more secure.
In addition to the Canada (Central) region, Voice ID is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Europe (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Europe (London) AWS regions.
AWS Trusted Advisor adds two new checks from AWS Resilience Hub that enable customers to view their latest application resilience score and resilience policy status. You can find more information about the checks here.
AWS Resilience Hub provides customers with a place to define, track, and manage the resiliency and availability of all their applications. AWS Trusted Advisor evaluates your AWS account with automated best practice checks and provides recommendations to reduce costs, monitor service quotas, and improve resilience, performance and security.
With this launch, AWS Trusted Advisor customers can now view recommendations for applications associated with their account that have been assessed by AWS Resilience Hub. AWS Trusted Advisor shows you the latest resilience scores and indicates if your applications have met the recovery time objective (RTO) and recovery point objective (RPO) based on your resilience policy. Each time an assessment is run, AWS Resilience Hub updates AWS Trusted Advisor with the latest results.
AWS are excited to announce the new AWS IoT ExpressLink Technical Specification v1.1 for hardware connectivity modules, and to expand their qualified module list with Realtek's Ameba Z2 AWS IoT ExpressLink module, the first Wi-Fi connectivity module qualified with the new specification.
AWS IoT ExpressLink modules enable easy AWS cloud connectivity and implement AWS-mandated security requirements for device-to-cloud connections. By integrating these wireless modules into their hardware designs, customers can accelerate the development of their Internet of Things (IoT) products, including consumer products and industrial and agricultural sensors and controllers.
The v1.1 release adds integrated AWS IoT Device Defender support, new commands for AWS IoT Device Shadow, and a new onboarding-by-claim mechanism. With the new 'SHADOW' command, you can retrieve or update a (named or unnamed) device shadow document as a whole, or subscribe to receive automatic notifications when any part of it has been modified, making shadow access easier even for resource-constrained devices. Similarly, with the AWS IoT Device Defender service integration, the user can set a time-period parameter (DefenderPeriod), enabling the module to automatically send a number of Device Defender custom metrics.
The new AWS IoT ExpressLink onboarding by-claim process, offered in this release, makes it easier and more flexible for customers to associate a physical device with a thing in their AWS IoT account. At activation, each AWS IoT ExpressLink powered module automatically connects to an AWS staging-account endpoint.
Only later, when the end user registers the finished product, is the device automatically moved to the customer/OEM endpoint. This allows for greater flexibility in endpoint selection and can help streamline the manufacturing process. It also removes the need for the customer/OEM to share secrets (credentials) with any other party in the supply chain.
This week, AWS Amplify Hosting announces Next.js 12 and 13 support, including middleware, on-demand incremental static regeneration (ISR), and image optimization. With this release, AWS Amplify Hosting offers fully managed CI/CD deployments and hosting for server-side rendered (SSR) apps built using Next.js and static web apps.
In addition to supporting more Next.js features, AWS Amplify Hosting improves the experience of running Next.js apps on AWS:
- Next.js apps deploy at least 3x faster, helping developers deliver changes to production faster.
- Server-side logs are delivered to Amazon CloudWatch, allowing teams to observe, monitor and troubleshoot their apps.
- Fully managed infrastructure reduces operational overhead for development teams, with fewer resources to manage in their AWS account.
AWS Amplify Hosting and its improved support for Next.js hosting is generally available in the following 19 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Milan), Europe (Ireland), Europe (London), Europe (Paris), Middle East (Bahrain) and South America (São Paulo). For details on pricing, please visit the AWS Amplify pricing page.
Starting this week, you can choose the private IP address that your NAT Gateway uses for network address translation.
A NAT Gateway enables instances in a private subnet to connect to services outside the subnet using the NAT Gateway’s IP address. NAT Gateway uses its private IP address to perform network address translation when used for private communication with other VPCs or on-premises environments via Transit Gateway or Virtual Private Gateway.
Before this launch, your NAT Gateway would select a random private IP address from the subnet it is created in. As a result, customers who use NAT Gateway to access their partner networks were required to allowlist the entire subnet CIDR of the NAT Gateway. This enhancement allows you to select a specific private IP address for your NAT Gateway from the subnet and allowlist that specific IP address with the partner network.
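A sketch of the request this enhancement enables, with a validity check that the chosen address actually belongs to the subnet. The subnet and allocation IDs are placeholders, and `PrivateIpAddress` is the field this launch adds.

```python
# Sketch: request parameters for creating a NAT Gateway with a chosen private
# IP address. The subnet CIDR, subnet ID, and allocation ID are placeholders.
import ipaddress

subnet_cidr = ipaddress.ip_network("10.0.1.0/24")
chosen_ip = "10.0.1.5"
# The address you pick must come from the NAT Gateway's subnet.
assert ipaddress.ip_address(chosen_ip) in subnet_cidr

create_nat_gateway_params = {
    "SubnetId": "subnet-0123456789abcdef0",        # placeholder
    "AllocationId": "eipalloc-0123456789abcdef0",  # placeholder (public NAT)
    "ConnectivityType": "public",
    "PrivateIpAddress": chosen_ip,                 # the new capability
}
```

With a fixed private IP in place, the partner network only needs to allowlist `10.0.1.5/32` rather than the whole `10.0.1.0/24`.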
This week, AWS were excited to announce the launch of the new Applications widget on Console Home, providing one-click access to applications in AWS Systems Manager Application Manager and their associated collections of AWS resources, code, and related data.
Using the new Applications widget, you can quickly access, visualize, and operate application resources in AWS Systems Manager Application Manager. Start by searching the list in the Applications widget and click any name to open Application Manager.
From Application Manager, you can view the resources that power your application, as well as application cost using AWS Cost Explorer, CloudWatch alarms, AWS Config rule compliance, and Systems Manager OpsItems. AWS Systems Manager Automation runbooks are also available to take action on your application’s resources.
You can access the Applications widget on Console Home by signing into AWS Management Console. This widget is available in all public AWS Regions.
Amazon Simple Notification Service (Amazon SNS) has enhanced its integration with AWS Service Quotas. Previously, you could use AWS Service Quotas to view the default quotas of Amazon SNS. Now, with the enhanced integration, you can view your account-specific quota overrides as well as submit your quota increase requests for Amazon SNS.
The quota increase requests for 'Messages Published per Second', 'Number of Topics per Account', 'Filter Policies per Account', and 'Filter Policies per Topic' are processed automatically, speeding up approval times.
Moreover, you can now view the applied Amazon SNS quotas for your AWS account per AWS Region, view your utilization metrics per quota, and create Amazon CloudWatch alarms to notify you when your utilization of a given quota exceeds your configurable threshold. This enables you to more precisely adapt your utilization of Amazon SNS based on your applied quotas.
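The utilization alarm described above can be sketched as a CloudWatch `PutMetricAlarm` payload using the `SERVICE_QUOTA()` metric-math function against the AWS/Usage namespace. The `Resource` dimension value here is an assumption; check the actual usage metric emitted for your quota in the CloudWatch console before relying on it.

```python
# Sketch of a CloudWatch alarm that fires when SNS publish usage exceeds 80%
# of the applied quota. The Resource dimension value is an assumption.
alarm = {
    "AlarmName": "sns-publish-quota-above-80pct",
    "ComparisonOperator": "GreaterThanThreshold",
    "Threshold": 80.0,
    "EvaluationPeriods": 1,
    "Metrics": [
        {
            "Id": "usage",
            "ReturnData": False,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "CallCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "SNS"},
                        {"Name": "Resource", "Value": "Publish"},  # assumption
                        {"Name": "Type", "Value": "API"},
                        {"Name": "Class", "Value": "None"},
                    ],
                },
                "Period": 60,
                "Stat": "Sum",
            },
        },
        {
            "Id": "pct",
            "ReturnData": True,
            # SERVICE_QUOTA() resolves to the applied quota for the metric.
            "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
            "Label": "Utilization (%)",
        },
    ],
}
```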
Amazon SNS is a messaging service for Application-to-Application (A2A) and Application-to-Person (A2P) communication. The A2A functionality provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications. These applications include Amazon Simple Queue Service, Amazon Kinesis Data Firehose, AWS Lambda, and HTTP/S endpoints. The A2P functionality enables you to communicate with your customers via mobile text messages (SMS), mobile push notifications, and email notifications.
This week, AWS CloudFormation StackSets launched event notifications via Amazon EventBridge. You can trigger event-driven actions after creating, updating, or deleting your CloudFormation stack sets. You can achieve this without developing or maintaining custom solutions that periodically poll for changes in your CloudFormation stack sets deployments via CloudFormation APIs.
With this launch, you can build your event-driven applications across multiple AWS accounts, Organizational Units (OUs), and Regions with AWS CloudFormation StackSets and Amazon EventBridge.
Customers use AWS CloudFormation StackSets to model, provision, and manage their cloud applications and infrastructure in a safe, predictable, and repeatable way across multiple AWS accounts and Regions.
Customers use EventBridge to create event-driven applications by routing events between their applications, third-party SaaS applications, and other AWS services.
You can now ingest event notifications, and trigger event-driven actions when deploying stack sets. For example, you can use stack set operation events to sequence your stack set deployment Region by Region.
You can consume a stack set's create and update operation events using a Lambda function to trigger such a Region-by-Region deployment workflow. Furthermore, you can initiate post-provisioning operations for stack instances even before the stack set is completely deployed.
For example, you can immediately set up Amazon CloudWatch for successfully deployed stack instances to collect, access, and analyze your stack instance data with visualization tools. AWS look forward to the cross-account and cross-Region applications you will build using this feature.
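An EventBridge rule for these StackSets events might look like the sketch below. The `detail-type` string and `detail` filter shape are assumptions; confirm the exact values against the CloudFormation event reference before deploying.

```python
import json

# Sketch of an EventBridge PutRule payload matching StackSets operation events.
# The detail-type and detail filter are assumptions, not confirmed values.
event_pattern = {
    "source": ["aws.cloudformation"],
    "detail-type": ["CloudFormation StackSet Operation Status Change"],  # assumption
    "detail": {"status-details": {"status": ["SUCCEEDED"]}},             # assumption
}
put_rule_params = {
    "Name": "stackset-operation-succeeded",
    "EventPattern": json.dumps(event_pattern),
}
```

A rule like this could target the Lambda function mentioned above, so that each successful operation kicks off the next Region's deployment.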
AWS IoT TwinMaker makes it easier to create digital twins of real-world systems such as buildings, factories, industrial equipment, and production lines. Now with the feature launch of TwinMaker Knowledge Graph, customers can query their digital twins, contextualize data from disparate data sources, and gain deeper insights into their real-world systems.
As a result, customers can save time performing functions like root cause analysis and drive more informed business decisions.
To build a TwinMaker Knowledge Graph, customers create entities, which are virtual representations of real-world systems, and then define the physical or logical relationships between those entities. Customers can then query the TwinMaker Knowledge Graph with the open-source query language PartiQL.
For example, customers can query all entities with name containing “pump”, or find all entities connected to an entity of interest. The query capability enables customers to perform functions like root cause analysis and predict the impacted entities and systems when changes are introduced into their physical systems.
This can improve operational efficiency and reduces time to resolve issues. With data contextualized from disparate data sources, customers can drive informed decisions and anticipate where issues are likely to occur in the future. To learn more, visit the developer guide and API reference.
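The two example queries above might be written roughly as follows. The schema names (`EntityGraph`, `entityName`) and graph-match syntax are assumptions drawn from PartiQL-style graph queries; the entity name is hypothetical. Check the TwinMaker API reference for the exact query grammar.

```python
# Illustrative PartiQL-style queries against a TwinMaker knowledge graph.
# Schema names and match syntax are assumptions; "Mixer_1" is hypothetical.

# Find all entities whose name contains "pump".
name_query = (
    "SELECT e "
    "FROM EntityGraph MATCH (e) "
    "WHERE e.entityName LIKE '%pump%'"
)

# Find all entities connected to an entity of interest.
related_query = (
    "SELECT e2 "
    "FROM EntityGraph MATCH (e1)-[r]->(e2) "
    "WHERE e1.entityName = 'Mixer_1'"
)
```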
Amazon GameSparks, a managed AWS service introduced in preview in March 2022, provides game developers with features for building, running, and scaling the backend for their games, and is now available in preview in the AWS Asia Pacific (Tokyo) Region.
Amazon GameSparks makes building a game backend easy for game developers who have little to no cloud experience. Amazon GameSparks comes with Unity game engine support and out-of-the-box backend features that require minimal setup for game developers.
AWS will not charge for the use of Amazon GameSparks during the preview period. With this launch, Amazon GameSparks is now available in the US East (N. Virginia) and Asia Pacific (Tokyo) Regions. Amazon GameSparks provides a downloadable AWS SDK for Unity and other resources that includes a sample game, Cloud Code API Reference material, and a developer guide.
Additional features, regions, platform support, and game engine integrations will be made available in future updates.
Amazon FinSpace is an analytic data hub for capital markets customers that enables analysts and data engineers to access data from multiple sources and transform it using Amazon FinSpace’s managed Apache Spark Engine with Capital Markets Time Series Analytics Library.
Starting this week, datasets developed in FinSpace can also be used by AWS Analytics and Machine Learning services, such as Amazon Redshift, Amazon Athena, Amazon QuickSight, Amazon EMR, and Amazon SageMaker. This allows customers to integrate data from FinSpace into their analytics and ML workflows.
Access is enabled using Amazon FinSpace’s new data view sharing capability. Amazon FinSpace data views provide access to query the data stored in Amazon FinSpace. Once a customer enables data view sharing, all data views are available as tables in a customer's Lake Formation data lake as a Lake Formation table share.
Then, Lake Formation data lake administrators can grant permissions to data lake users, who can access the shared data in the data lake through AWS analytics services such as Amazon Redshift, Amazon Athena, Amazon QuickSight, Amazon EMR, and Amazon SageMaker.
In an example scenario, a risk analyst using FinSpace could perform Value at Risk (VaR) calculations on a portfolio using the Apache Spark Engine with Capital Markets Time Series Analytics Library. The output of this process would be a set of risk scores for the portfolio that could then be shared using data view sharing. Then, the customer could run an automated Athena query to combine it with other risk data and display it in a consolidated risk dashboard in QuickSight.
With the general availability of Amazon Elastic Block Store (EBS) Snapshots Archive in AWS Asia Pacific (Jakarta) Region, customers in that region can save up to 75% on storage costs for Amazon EBS Snapshots that they rarely access and intend to retain for more than 90 days.
Amazon EBS Snapshots are incremental in nature, storing only the changes since the last snapshot. This makes them cost-effective for daily and weekly backups that need to be accessed frequently.
If you have snapshots that you access only every few months or years, and would like to retain them long term for legal or compliance reasons, you can use Amazon EBS Snapshots Archive to store full, point-in-time snapshots at a lower cost than storing them in the standard tier.
You can also use Amazon Data Lifecycle Manager to create snapshots and automatically move them to EBS Snapshots Archive based on your specific policies, further reducing the need to manage complex custom scripts and the risk of having unattended storage costs.
Amazon SageMaker Autopilot now supports batch/offline inference within Amazon SageMaker Studio so you can run batch predictions on machine learning (ML) models. SageMaker Autopilot automatically builds, trains and tunes the best ML models based on your data, while allowing you to maintain full control and visibility.
Previously, performing offline inference on the ML models created by Amazon SageMaker Autopilot took several steps: first obtain SageMaker Autopilot's candidate definitions using the DescribeAutoMLJob API, then use those container definitions to create a SageMaker model with the CreateModel API, and finally create a SageMaker transform job with the CreateTransformJob API, which could then be invoked programmatically to obtain batch inferences.
Starting this week, you can select any of the SageMaker Autopilot models and proceed with batch inference within SageMaker Studio. To perform batch predictions, you can provide input and output data configurations and create a batch transform job. The transform job upon completion will output the Amazon S3 location of the predictions. Now you can seamlessly perform offline inferencing from Amazon SageMaker Studio without having to switch to a programmatic mode.
To get started, update Amazon SageMaker Studio to the latest release and launch SageMaker Autopilot either from SageMaker Studio Launcher or APIs.
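For reference, the pre-existing programmatic path described above looks roughly like this: take the best candidate's container definitions and feed them into a model plus a batch transform job. All names, image URIs, ARNs, and S3 paths below are placeholders.

```python
# Sketch of the programmatic CreateModel + CreateTransformJob flow that the
# new Studio experience replaces. All identifiers are placeholders.
best_candidate_containers = [
    {
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/autopilot-model:latest",
        "ModelDataUrl": "s3://my-bucket/autopilot/model.tar.gz",
    },
]
create_model_params = {
    "ModelName": "autopilot-best-candidate",
    "Containers": best_candidate_containers,
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",
}
create_transform_job_params = {
    "TransformJobName": "autopilot-batch-predictions",
    "ModelName": create_model_params["ModelName"],
    "TransformInput": {
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/batch-input/",
        }},
        "ContentType": "text/csv",
    },
    "TransformOutput": {"S3OutputPath": "s3://my-bucket/batch-output/"},
    "TransformResources": {"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
}
```

With the Studio integration, this whole sequence collapses into the point-and-click input/output configuration described above.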
To simplify local development of resolvers, AppSync is releasing two new NPM libraries: @aws-appsync/eslint-plugin, to catch and fix problems quickly during development; and @aws-appsync/utils, to provide type validation and autocompletion in code editors. Finally, to make it easier to test and debug code, AppSync is releasing a new API command, evaluate-code (callable from the AWS CLI or AWS SDK), that helps developers unit test their resolver and function code from their local environment.
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. This week, AWS were excited to announce Thai and Hindi language support for streaming audio transcriptions. These new languages expand the coverage of Amazon Transcribe streaming and enable customers to reach a broader global audience.
Live streaming transcription is used across industries in contact center applications, broadcast events, meeting captions, and e-learning. For example, contact centers use transcription to remove the need for note taking and improve agent productivity by providing recommendations for the next best action.
Companies also make their live sports events or real-time meetings more accessible with automatic subtitles. In addition, customers who have a large social media presence use Amazon Transcribe to help moderate content and detect inappropriate speech in user-generated content.
Amazon IVS Chat now lets you send a copy of your chat messages to Amazon Simple Storage Service (Amazon S3), Amazon CloudWatch Logs, or Amazon Kinesis Data Firehose for real-time chat log streaming. You can use chat logging to play back chat messages from previously recorded live streams, or to audit past conversations as part of moderation workflows. To get started, visit the documentation Getting Started with Amazon IVS Chat.
Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Send your live streams to Amazon IVS using the broadcast SDKs or standard streaming software such as Open Broadcaster Software (OBS) and the service is designed to provide everything you need to make low-latency live video available to any viewer around the world, letting you focus on building interactive experiences alongside the live video.
Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
AWS are excited to announce that AWS IoT Device Defender is now integrated with AWS Security Hub. This integration allows customers to ingest alarms and their attributes from AWS IoT Device Defender Audit and Detect features in one central location, without custom coding.
This update can also help offload or reduce the complexity of managing disparate workflows from multiple security consoles when they review devices monitored by AWS IoT Device Defender.
Customers can access these benefits by accepting the findings from AWS IoT Device Defender in the Integrations Page of their AWS Security Hub console. Once accepted, users will see AWS IoT Device Defender alarms alongside the security posture status of other AWS services in one unified tool. This feature is available in all regions where AWS IoT Device Defender is available.
You can now efficiently manage and rotate passwords for Amazon ElastiCache for Redis clusters using AWS Secrets Manager.
To get started, first create a new Lambda function from the AWS Secrets Manager rotation function library and then select it as your rotation function in Secrets Manager. You can use ElastiCache and Secrets Manager integration in all regions at no additional cost.
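As a rough sketch of the pieces involved, the snippet below shows a plausible secret value for an ElastiCache for Redis user alongside a `RotateSecret`-style configuration. The field names inside the secret are an assumption and depend on the rotation function template you pick from the Secrets Manager library; all ARNs and names are placeholders.

```python
import json

# Sketch: secret value and rotation configuration for an ElastiCache for
# Redis user. Secret field names are assumptions tied to the chosen rotation
# function template; every identifier below is a placeholder.
secret_value = json.dumps({
    "username": "app-user",       # ElastiCache RBAC user (placeholder)
    "password": "REPLACE_ME",     # rotated automatically once set up
    "user_arn": "arn:aws:elasticache:us-east-1:123456789012:user:app-user",
})
rotate_secret_params = {
    "SecretId": "prod/redis/app-user",  # placeholder secret name
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:123456789012:"
                         "function:SecretsManagerElastiCacheRotation",
    "RotationRules": {"AutomaticallyAfterDays": 30},
}
```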
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector are now available in the AWS Middle East (UAE) Region.
Built on actual Microsoft Active Directory (AD), AWS Managed Microsoft AD enables you to migrate AD-aware applications while reducing the work of managing AD infrastructure in the AWS Cloud.
You can use your Microsoft AD credentials to connect to AWS applications such as Amazon Relational Database Service (RDS) for SQL Server, RDS for PostgreSQL, and RDS for Oracle databases. You can keep your identities in your existing Microsoft AD or create and manage identities in your AWS managed directory.
AD Connector is a proxy that enables AWS applications to use your existing on-premises AD identities without requiring AD infrastructure in the AWS Cloud. You can also use AD Connector to join Amazon EC2 instances to your on-premises AD domain and manage these instances using your existing group policies.
Starting this week, AWS customers can search, request and provision products from AWS Service Catalog via AWS Service Management Connector. This integration is built using Atlassian Forge for Atlassian's Jira Service Management (JSM) Cloud.
With this connector, administrators can use existing AWS Service Catalog configurations, including curated products, portfolios, constraints, and tagging, and expose them to JSM Cloud administrators and users. Administrators can view AWS Service Catalog portfolios and products, align them to organizational structures, grant access to JSM Cloud users, and connect JSM workflows to provisioning requests.
JSM Cloud end users can browse and request provisioning of AWS Service Catalog products, including AWS Marketplace software products that have been copied to AWS Service Catalog.
In addition to the AWS Service Catalog integration, this release also introduces integrations of AWS Systems Manager Incident Manager incident creation and management, and AWS Security Hub for bidirectional synchronization of AWS Security Hub findings.
These AWS CloudOps integrations help simplify cloud provisioning, operations and resource management as well as streamline Service Management governance and oversight over AWS services.
The Zoom Meeting Media plugin allows audio/video optimization for smooth video conferencing on Amazon WorkSpaces. This feature greatly enhances Zoom Meetings audio and video (webcam) performance on WorkSpaces by offloading the audio/video traffic to the local device for processing. The plugin is intended for users who want native AV performance when using Zoom on their PCoIP WorkSpaces.
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk, and build entirely new categories of speech-enabled products. Today, we are excited to announce the general availability of two voices: Ola, a new female Polish neural Text-to-speech (NTTS) voice and Hala, a new female NTTS voice in Arabic (Gulf).
Ola is Amazon Polly's first Polish neural TTS voice. We have designed it to sound friendly, welcoming and professional, which makes it applicable to a wide range of TTS use cases such as Interactive Voice Response systems, articles and educational content. Amazon Polly added support for Arabic in 2019 by launching Zeina, a standard TTS voice following the Modern Standard Arabic (MSA) pronunciation. The voice we launched today, Hala, is a dialectal Gulf Arabic (ar-AE) voice that also supports MSA when invoked with the "arb" language code in an SSML <lang> tag.
Amazon S3 Storage Lens is a cloud storage analytics feature that delivers organization-wide visibility into object storage usage and activity. This week, 34 additional metrics have been added to uncover deeper cost optimization opportunities, identify data protection best practices, and improve the performance of application workflows.
With these new metrics, you can now achieve greater cost optimization for buckets that are not following your organization-wide policies.
For instance, with S3 Lifecycle rule counts, you can identify buckets without Lifecycle rules for deleting incomplete multipart uploads older than 7 days.
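The lifecycle rule that metric helps you verify can be sketched as a `PutBucketLifecycleConfiguration` payload. The bucket name is a placeholder; the rule shape follows the documented S3 lifecycle configuration structure.

```python
# Sketch: an S3 lifecycle rule that aborts incomplete multipart uploads after
# 7 days, applied bucket-wide. The bucket name is a placeholder.
lifecycle_configuration = {
    "Rules": [{
        "ID": "abort-incomplete-multipart-uploads",
        "Status": "Enabled",
        "Filter": {},  # empty filter: apply to the whole bucket
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }]
}
put_params = {
    "Bucket": "my-example-bucket",  # placeholder
    "LifecycleConfiguration": lifecycle_configuration,
}
```

Buckets flagged by the new rule-count metric are exactly those missing a rule like this one.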
Similarly, you can now identify and track resolution of buckets that do not follow data protection best practices, including use of encryption and data replication. With detailed status codes, it is now easier than ever to spot buckets with unusual access patterns or where errors are consistently occurring. For example, with 403 authorization error counts, you can identify workloads that are attempting to access buckets without proper permissions applied.
S3 Storage Lens is pre-configured to include 28 free metrics by default for all customers with 14 days of historical data. By upgrading to Storage Lens advanced metrics, you receive 35 additional metrics with 15 months of historical data.
Amazon CloudWatch RUM (Real User Monitoring) adds the ability for customers to send customer-defined events to RUM (in addition to predefined events) by instrumenting their web applications.
Customer-defined events give customers the flexibility to monitor specific functions of their application and troubleshoot end-user-impacting issues unique to the application's components. Customers can view, slice, and dice these events using filters in the RUM console.
CloudWatch RUM gives customers visibility into their web application’s client side performance by helping them to collect performance and error data in real time. RUM can reduce MTTR by providing visualizations that allow customers to troubleshoot and debug issues such as high page load times, error messages and stack traces.
Customers can add customer-defined RUM events to the events payload using a new API in the CloudWatch RUM web client (version 1.12.0). To learn more, see the user guide.
Starting this week, AWS customers can detect and resolve AWS security findings from AWS Security Hub via AWS Service Management Connector. This integration is built using Atlassian Forge for Atlassian's Jira Service Management (JSM) Cloud.
AWS Security Hub helps customers to perform security best practice checks, aggregate alerts, and enable automated remediation. It enables users to view security findings from AWS services, such as Amazon GuardDuty and Amazon Inspector, as well as AWS Partner solutions. This bidirectional integration between AWS Security Hub and Jira Service Management incidents enables JSM users and developers to manage AWS Security Hub findings while leveraging their existing workflows in Jira Service Management (incidents).
In addition to the AWS Security Hub integration, this release also introduces integrations with AWS Systems Manager Incident Manager incident creation and management, and provisioning of AWS resources via AWS Service Catalog. These AWS CloudOps integrations help simplify cloud provisioning, operations and resource management as well as streamline Service Management governance and oversight over AWS services.
Amazon WorkSpaces announces general availability of version 2.0 of the WorkSpaces Streaming Protocol (WSP) host agent. WSP is a high-performance, cloud-native streaming protocol designed to give your users a highly responsive remote desktop experience, with features such as two-way audio/video and smart card support.
Powered by DCV technology, the WSP host agent version 2.0 offers significant streaming quality and performance improvements: a better two-way audio/video experience with distortion-free, high-resolution video and clear audio for web conferencing, end-to-end UDP connectivity to improve responsiveness during poor network conditions, and reduced network bandwidth usage without compromising responsiveness or streaming quality.
The new WSP host agent is available on WorkSpaces for Windows and Ubuntu operating systems, and requires an updated Windows native client (version 5.4.0 or above), macOS native client (version 5.5.0 or above), or Web Access. You need to reboot your existing Amazon WorkSpaces to update the WSP host agent to the latest available version.
Amazon Connect now provides an API to programmatically initiate supervisor monitoring of an ongoing contact. Contact center supervisors can already monitor live conversations between agents and customers by selecting which conversations to monitor via the real-time metrics page in Amazon Connect. Using the new MonitorContact API, businesses can now build custom supervisor dashboards that include the ability to initiate monitoring of a specific contact.
The MonitorContact API is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London).
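A minimal sketch of what a custom dashboard backend might send to the MonitorContact API via boto3 (the IDs below are placeholders; SILENT_MONITOR is one of the capability values the API accepts):

```python
def build_monitor_request(instance_id, contact_id, supervisor_user_id):
    # Assemble parameters for Connect's MonitorContact API.
    # SILENT_MONITOR lets the supervisor listen without the agent
    # or customer hearing them.
    return {
        "InstanceId": instance_id,
        "ContactId": contact_id,
        "UserId": supervisor_user_id,  # the supervisor who will monitor
        "AllowedMonitorCapabilities": ["SILENT_MONITOR"],
    }

# A dashboard backend would then call, for example:
#   boto3.client("connect").monitor_contact(**build_monitor_request(...))
params = build_monitor_request("instance-id", "contact-id", "supervisor-id")
print(sorted(params))
```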
Amazon OpenSearch Service now supports OpenSearch and OpenSearch Dashboards version 2.3. With this version, Amazon OpenSearch Service adds several features such as new algorithms to the machine learning (ML) commons library, improvements to aggregations, improvements to map visualizations, alerting, anomaly detection, and more.
OpenSearch 2.3 uses Lucene 9.1; the move to the latest version of Lucene brings a number of exciting advancements and will continue to deliver more value in future releases.
Some of the improvements in OpenSearch Service that come with OpenSearch 2.3 (including features launched as part of OpenSearch versions 2.0, 2.1, and 2.2) include new ML algorithms added to ML Commons, such as linear regression, logistic regression, Localization, and RCFSummarize, support for Lucene-based k-NN search, and an improved Dashboards user experience for anomaly detection.
In addition, this release adds support for document-level alerting, support for multi-terms aggregation as part of OpenSearch core, the ability to upload custom GeoJSONs for region map visualizations and better zoom levels (14x), support for searching by relevance using SQL and PPL, centralized management of notifications on Dashboards, and more. For a complete list of new features and improvements, please see the release notes for OpenSearch versions 2.0, 2.1, 2.2, and 2.3.
This release also introduces inclusive terminology (such as cluster manager instead of master, allowlist instead of blacklist) throughout OpenSearch. For information on upgrading to OpenSearch 2.3, please see documentation. For a list of breaking changes with the new major OpenSearch version, please see documentation.
Amazon Timestream is now available in the AWS GovCloud (US-West) Region. Support in the GovCloud Region allows U.S. government agencies and contractors to run sensitive time-series analysis in Amazon Timestream by addressing their specific regulatory and compliance requirements.
Amazon Timestream is a fast, serverless, secure, and purpose-built time series database for analytics, DevOps, and IoT applications which can scale to process trillions of time series events per day with minimal DevOps overhead.
Amazon Timestream simplifies data lifecycle management through the use of data tiers and user-defined data retention policies. The purpose-built query engine lets you access and analyze recent and historical data across these tiers. Additionally, visualizing your data is simple with integration and connector support through Amazon QuickSight, Grafana, and JDBC.
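As a sketch, those data tiers map to per-table retention settings when creating a Timestream table (the database and table names in the usage note are placeholders):

```python
def retention_properties(memory_hours, magnetic_days):
    # Recent data lives in the memory store for fast queries, then moves
    # to the cheaper magnetic store until its retention window lapses.
    return {
        "MemoryStoreRetentionPeriodInHours": memory_hours,
        "MagneticStoreRetentionPeriodInDays": magnetic_days,
    }

# For example:
#   boto3.client("timestream-write").create_table(
#       DatabaseName="metrics", TableName="cpu",
#       RetentionProperties=retention_properties(24, 365))
```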
Amazon Timestream also automatically scales up or down to adjust capacity and performance, so you don’t need to manage the underlying infrastructure, freeing you to focus on building your applications.
AWS Amplify allows developers to set up In-app messaging notification campaigns that are triggered and shown to their users when specific events occur. This feature empowers developers to deliver contextual messages to users in web apps built with React and in cross-platform mobile apps built with React Native.
Using In-app messaging, developers can set up personalized campaigns in the Amazon Pinpoint console, allowing them to segment and target specific users of their apps.
Developers can use the pre-built UI components supported by Amplify, such as banners, full-screen overlays, and modals, to display additional information to users and prompt them toward the next best actions to take. The UI components are customizable, and developers also control how often users are shown In-app messages.
Starting this week, Amazon EC2 C5n instances are available in AWS Asia Pacific (Jakarta) Region.
C5n instances, based on the AWS Nitro System and powered by 3.0 GHz Intel® Xeon® Scalable processors (Skylake), offer customers 100Gbps networking for network-bound workloads, while continuing to take advantage of the security, scalability and reliability of Amazon’s Virtual Private Cloud (VPC).
Customers can also take advantage of C5n instances' network performance to accelerate data transfer to and from S3, reducing the data ingestion wait time for applications and speeding up delivery of results. A wide range of applications such as High Performance Computing (HPC), analytics, machine learning, Big Data and data lake applications can benefit from these instances.
Amazon ElastiCache now supports AWS Identity and Access Management (IAM) authentication access to Redis clusters. By using IAM, you can associate IAM users and roles with ElastiCache for Redis users and manage their cluster access.
You can configure IAM authentication by creating an IAM-enabled ElastiCache user and then assigning this user to an appropriate ElastiCache user group via the AWS Management Console, AWS CLI, or the AWS SDK. Using IAM policies, you can grant or revoke cluster access to different IAM identities. Redis applications can now use IAM credentials to authenticate to your ElastiCache clusters while connecting to them.
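As an illustration, the kind of identity policy involved might look like this sketch (IAM-auth connections are authorized against the elasticache:Connect action; the region, account, and resource names below are placeholders):

```python
import json

def connect_policy(region, account_id, replication_group, cache_user):
    # IAM policy granting an identity permission to connect as a given
    # ElastiCache (Redis) user on a given replication group.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "elasticache:Connect",
            "Resource": [
                f"arn:aws:elasticache:{region}:{account_id}:replicationgroup:{replication_group}",
                f"arn:aws:elasticache:{region}:{account_id}:user:{cache_user}",
            ],
        }],
    })

doc = json.loads(connect_policy("us-east-1", "123456789012", "my-redis", "app-user"))
print(doc["Statement"][0]["Action"])
```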
AWS are excited to announce that Incident Manager, a capability of AWS Systems Manager, now integrates with PagerDuty, a popular tool for operational incident response. This extends AWS capabilities for operational incident response, helping operations teams more quickly engage, respond, and resolve critical application availability and performance issues when they occur.
Incident Manager helps you bring the right people and information together when a critical issue is detected, activating pre-configured response plans to engage responders using SMS, phone calls, and chat channels, as well as to run AWS Systems Manager Automation runbooks.
You can now incorporate PagerDuty directly in an Incident Manager response plan to automatically engage responders when an issue is detected by an Amazon CloudWatch alarm or Amazon EventBridge event. Using Incident Manager, you can also add responders on-the-fly while the incident response is ongoing.
This helps you engage the right people as soon as you uncover more information on root cause, speeding up response times. You can now view, select, and engage specific responders and teams as designated in PagerDuty at any time from the Incident Manager console.
This new integration with PagerDuty adds to existing Incident Manager integrations with ServiceNow and Atlassian Jira Service Management.
Amazon MQ now provides support for RabbitMQ version 3.10, a new release which includes several fixes and improvements to the previous versions of RabbitMQ supported by Amazon MQ, 3.8 and 3.9.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. You can reduce your operational burden by using Amazon MQ to manage the provisioning, setup, and maintenance of message brokers.
Amazon MQ connects to your current applications with industry-standard APIs and protocols to help you more easily migrate to AWS without having to rewrite code.
With RabbitMQ 3.10, AWS have enabled support for the latest version of classic queues (CQv2) as the default, which comes with memory and performance improvements. If you are running earlier versions of RabbitMQ such as 3.9 or 3.8, we strongly encourage you to upgrade to RabbitMQ 3.10.
This can be accomplished with just a few clicks in the AWS Management Console. We also encourage you to enable automatic minor version upgrades on RabbitMQ 3.10 to help ensure your brokers take advantage of future fixes and improvements in 3.10.
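Programmatically, the same upgrade might be sketched with the UpdateBroker API (the broker ID is a placeholder, and the exact 3.10.x patch version available varies by region):

```python
def upgrade_request(broker_id, target_version="3.10.10"):
    # Parameters for Amazon MQ's UpdateBroker call; the patch version
    # here is illustrative, so pick the 3.10.x version offered to you.
    return {
        "BrokerId": broker_id,
        "EngineVersion": target_version,
        "AutoMinorVersionUpgrade": True,  # pick up future 3.10.x fixes
    }

# e.g. boto3.client("mq").update_broker(**upgrade_request("b-1234"))
```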
This week, AWS launched automatic handling of IP address changes for Refactor Spaces Services. Customers use automatic DNS updates to avoid building infrastructure to manage service IP address changes and gain improved operational safety.
This new feature lets customers create services using DNS names in the URL; Refactor Spaces automatically re-resolves the DNS name when the DNS time-to-live (TTL) expires (or every 60 seconds for TTLs of less than 60 seconds).
Refactor Spaces now handles more of the infrastructure needed for multi-account HTTP routing, so you can focus on safely and incrementally refactoring your application. Prior to this launch, customers who wanted to route traffic to services with IPs that might change (e.g. Application Load Balancer) had to set up custom mechanisms, like AWS Lambda functions, to manage IP address changes.
Starting this week, you can use a DNS name in the Refactor Spaces Service URL without needing to actively manage IP address changes. There is no additional cost for automatically re-resolving service URLs with DNS names.
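The re-resolution rule above can be sketched as a small cache (a toy model, not Refactor Spaces code; the hostname and resolver are stand-ins):

```python
import time

class DnsBackedTarget:
    """Toy model of the behavior described above: reuse the resolved
    address until the record's TTL lapses, with a 60-second floor for
    very short TTLs."""

    REFRESH_FLOOR = 60  # seconds

    def __init__(self, hostname, resolver, ttl):
        self.hostname = hostname
        self.resolver = resolver            # e.g. socket.gethostbyname
        self.interval = max(ttl, self.REFRESH_FLOOR)
        self.ip, self.expires_at = None, 0.0

    def address(self, now=None):
        now = time.monotonic() if now is None else now
        if self.ip is None or now >= self.expires_at:
            self.ip = self.resolver(self.hostname)
            self.expires_at = now + self.interval
        return self.ip

# With a stub resolver, the cached address survives until the window ends:
ips = iter(["10.0.0.1", "10.0.0.2"])
target = DnsBackedTarget("alb.example.internal", lambda _: next(ips), ttl=30)
assert target.address(now=0) == "10.0.0.1"
assert target.address(now=59) == "10.0.0.1"   # 30s TTL floored to 60s
assert target.address(now=61) == "10.0.0.2"   # window expired, re-resolve
```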
AWS Migration Hub Refactor Spaces is the starting point for incremental application refactoring to microservices in AWS. Refactor Spaces automates the creation of application refactor environments including all of the infrastructure, multi-account networking, and routing to incrementally modernize.
Use Refactor Spaces to help reduce risk when evolving applications into microservices or extending existing applications with new features written in microservices.
Starting this week, you can store and restore Amazon Machine Images (AMIs) of up to 5 TB to and from an Amazon S3 bucket, enabling the storage and transfer of larger AMIs between partitions. The previous limit was 1 TB.
By storing and restoring an AMI using S3 buckets, you can copy AMIs from one AWS partition to another, for example, from the main commercial partition to the AWS GovCloud (US) partition. You can also make archival copies of AMIs by storing them in an S3 bucket.
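With boto3, the pair of tasks involved might be sketched as follows (the bucket and AMI names are placeholders, and the "&lt;ami-id&gt;.bin" object-key convention should be verified against the EC2 documentation):

```python
def store_image_params(ami_id, bucket):
    # CreateStoreImageTask serializes the AMI into a single S3 object.
    return {"ImageId": ami_id, "Bucket": bucket}

def restore_image_params(ami_id, bucket, new_name):
    # CreateRestoreImageTask reads the stored object back into a new AMI.
    # The "<ami-id>.bin" key mirrors the name the store task gives the
    # object it writes; double-check this against the docs.
    return {"ObjectKey": f"{ami_id}.bin", "Bucket": bucket, "Name": new_name}

# e.g. ec2 = boto3.client("ec2")
#      ec2.create_store_image_task(**store_image_params("ami-0abc", "my-bucket"))
#      ...copy the object to a bucket in the destination partition...
#      ec2.create_restore_image_task(
#          **restore_image_params("ami-0abc", "dest-bucket", "restored-ami"))
```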
AWS Microservice Extractor for .NET is an assistive tool that simplifies refactoring monolithic .NET applications into independent microservices. With this new feature, Microservice Extractor helps extract source code segments as microservices or shared libraries from legacy ASP.NET Web Forms and Windows Communication Foundation (WCF)-based applications, and ports those directly to modern cross-platform .NET.
The new capability enables developers to refactor older, Windows OS-dependent applications with minimal rewrite to newer .NET running on Linux containers, thereby reducing costs and improving performance.
Since the early days of .NET Framework, ASP.NET Web Forms and WCF-based applications have powered enterprise business applications. With the arrival of cross-platform .NET, no new features have been added to the Windows OS-dependent Web Forms and WCF.
Developers cannot take advantage of simpler development models and better syntax available in newer C# versions. All the performance and security-related improvements on cross-platform .NET are off limits to such legacy applications.
With Microservice Extractor adding the capability to extract and port business logic to newer cross-platform .NET, developers can now perform continuous, iterative modernization to gradually migrate off legacy ASP.NET Web Forms and WCF stacks.
Microservice Extractor follows a strangler fig pattern, where developers progressively create a new application around the edges of the old, letting it grow until all application capabilities are ported, and the old application becomes obsolete.
AWS App2Container (A2C) now supports EKS Blueprints for setting up a managed Kubernetes cluster on AWS. With this release, we are making it easier and faster to use A2C to deploy to clusters created using EKS Blueprints. Customers can leverage App2Container provided Kubernetes manifests with existing EKS infrastructure.
With this feature, customers can configure CPU and Memory limits for A2C created application pods. Customers can also create ingress configurations using AWS Application Load Balancer (ALB) or NGINX with AWS Network Load Balancer (NLB).
AWS App2Container (A2C) is a command-line tool for modernizing .NET and Java applications into containerized applications. A2C analyzes and builds an inventory of all applications running in virtual machines, on premises, or in the cloud.
You simply select the application you want to containerize, and A2C packages the application artifact and identified dependencies into container images, configures the network ports, and generates the ECS task and Kubernetes pod definitions.
EKS Blueprints is a collection of Infrastructure as Code (IaC) modules that will help you configure and deploy consistent, batteries-included EKS clusters across accounts and regions. You can use EKS Blueprints to easily bootstrap an EKS cluster with Amazon EKS add-ons as well as a wide range of popular open-source add-ons, including Prometheus, Karpenter, NGINX, Traefik, AWS Load Balancer Controller, Fluent Bit, Keda, Argo CD, and more.
AWS Proton now allows customers to specify custom commands used to provision infrastructure from their templates, enabling them to manage templates defined using the AWS Cloud Development Kit (CDK) and other templating and provisioning tools through Proton.
Platform engineers use Proton to define and keep infrastructure updated that developers can provision using a self-service interface. Now, platform engineers can define standardized infrastructure using CDK, in addition to the already supported AWS CloudFormation and Terraform.
AWS Proton is a managed service for platform engineers to increase the pace of innovation in their organization by defining, vending, and maintaining infrastructure templates for self-service deployments. With Proton, customers can standardize centralized templates to help them meet security, cost, and compliance goals.
Proton helps platform engineers scale up their impact with a self-service model, resulting in higher velocity for the development and deployment process throughout an application lifecycle.
AWS Proton supports CDK through a new feature called CodeBuild provisioning. Customers can use CodeBuild provisioning to execute commands to provision infrastructure, including, but not exclusive to, CDK commands.
With CodeBuild provisioning, platform engineers provide Proton with the commands that define their custom logic for provisioning the infrastructure in a particular template. This allows platform engineers to specify how CDK or their tool of choice is going to run.
For example, one team might use CodeBuild provisioning to provision the infrastructure by running cdk deploy, while another might choose to synthesize a CloudFormation template using cdk synth and then deploy it with CloudFormation using aws cloudformation create-stack.
Another customer might use CodeBuild provisioning to run Terraform, by installing Terraform and then executing terraform apply. When using CodeBuild provisioning, Proton uses AWS CodeBuild to execute the commands provided by the customer in the order they were given.
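For illustration, the CDK flavor of such custom logic could be captured in a manifest along these lines, shown here as a Python dict (the real manifest is YAML, and the key names are approximations from memory, not a reference):

```python
# Illustrative shape of a CodeBuild-provisioning template manifest.
# Key names are approximations; consult the Proton docs for the
# authoritative schema.
manifest = {
    "infrastructure": {
        "templates": [{
            "rendering_engine": "codebuild",
            "settings": {
                # commands Proton hands to CodeBuild, run in order
                "provision": [
                    "npm install",
                    "npx cdk deploy --require-approval never",
                ],
                "deprovision": [
                    "npx cdk destroy --force",
                ],
            },
        }],
    },
}

template = manifest["infrastructure"]["templates"][0]
print(template["rendering_engine"], template["settings"]["provision"][0])
```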
AWS are excited to announce that Incident Manager, a capability of AWS Systems Manager, now simplifies the way you coordinate response when an incident is detected by an Amazon CloudWatch alarm or Amazon EventBridge event. The new coordination capabilities allow you to centrally provide updates, monitor actions, and view status from within the Incident Manager console.
With the new Incident Manager coordination capabilities, you can add notes to organize and track progress of mitigation activities, post updates and other relevant information, helping operational teams to more quickly resolve critical application issues.
The new additions to the status banner consolidate critical incident information allowing incident responders to view runbook progress and open engagements alongside overall status, duration and designated communication channel.
AWS Identity and Access Management (IAM) now supports multiple multi-factor authentication (MFA) devices for root account users and IAM users in your AWS accounts. This provides additional flexibility and resiliency in your security strategy by enabling more than one authentication device per user. You can choose from one or more types of hardware and virtual devices supported by IAM.
MFA is one of IAM’s leading security best practices to provide an additional layer of security to your account, and we recommend that you enable MFA for all accounts and users in your environments.
Now it is possible to add up to eight MFA devices per user, including FIDO security keys, software time-based one-time password (TOTP) with virtual authenticator applications, or hardware TOTP tokens. Configuring more than one device provides flexibility if a device is lost or broken, or when managing access for geographically diverse teams.
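A hedged boto3-style sketch of registering an additional device (the user name, serial-number ARN, and codes are placeholders; the EnableMFADevice API itself predates this launch, only the per-user limit changed):

```python
def enable_device_params(user_name, serial_number, code1, code2):
    # EnableMFADevice attaches one device; with the new limit it can be
    # repeated (with different serial numbers) up to eight times per user.
    return {
        "UserName": user_name,
        "SerialNumber": serial_number,   # from CreateVirtualMFADevice, or a hardware token
        "AuthenticationCode1": code1,    # two consecutive TOTP codes
        "AuthenticationCode2": code2,
    }

# e.g. boto3.client("iam").enable_mfa_device(**enable_device_params(
#          "alice", "arn:aws:iam::123456789012:mfa/alice-backup",
#          "123456", "654321"))
```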
AWS Proton introduces the Proton dashboard, a centralized view of all Proton resources deployed and managed by AWS Proton. Platform engineers use Proton to define their environment and service infrastructure in AWS - now through the Proton dashboard they can monitor which of their templates are in use and the up-to-date status of their environments and services.
AWS Proton is a managed service for platform engineers to increase the pace of innovation at their organization by defining, vending, and maintaining infrastructure templates for self-service deployments. With Proton, customers can standardize centralized templates to meet security, cost, and compliance goals. Proton helps platform engineers scale up their impact with a self-service model, resulting in higher velocity for the development and deployment process throughout an application lifecycle.
Now with the Proton dashboard, it is easier for users to see the number of Infrastructure as Code (IaC) templates registered with Proton, the Proton resources that have been created from those templates, and the version of templates used for deployments. Platform engineers can quickly see what mix of their resources are using the latest recommended template versions, and use sorting and filtering to identify the most recent deployment events and deployment failures.
Proton users can see their dashboard by navigating to the Proton service page in the AWS Management Console, where the dashboard is visible for users in all Regions where Proton is available.
This week, AWS were excited to announce general availability of tooling support to build and deploy native AOT compiled .NET 7 applications to AWS Lambda. .NET 7 is the latest version of .NET and brings several performance improvements and optimizations, including support for the native AOT deployment model.
Native AOT compiles .NET applications to native code. By using native AOT with AWS Lambda, you can enable faster application starts, resulting in improved end-user experience. You can also benefit from reduced costs through faster initialization times and lower memory consumption of native AOT applications on AWS Lambda.
Native AOT allows .NET applications to be pre-compiled to a single binary removing the need for just-in-time (JIT) compilation, enabling native AOT enabled apps to start up faster. In our benchmarks, native AOT enabled applications demonstrated an average 44% (and up to 86%) improvement in cold start times. See the results here.
You can use .NET 7 native AOT with AWS Lambda in all regions where AWS Lambda is available.
Amazon Managed Service for Prometheus now supports 200M active metrics per workspace. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale.
Prometheus is a Cloud Native Computing Foundation open source project for monitoring and alerting that is optimized for container environments such as Amazon EKS and Amazon ECS.
With this release, customers can send up to 200M active metrics to a single workspace after filing a limit increase, and can create many workspaces per account, enabling the storage and analysis of billions of Prometheus metrics.
To get started, customers can create an Amazon Managed Service for Prometheus workspace and increase their workspace active series limits by filing a limit increase in AWS Support Center or AWS Service Quotas.
The AWS Serverless Application Model (SAM) Command Line Interface (CLI) announces the preview of AWS Lambda local testing and debugging on Terraform. The AWS SAM CLI is a developer tool that makes it easier to build, test, package, and deploy serverless applications. Terraform is an infrastructure as code tool that lets you build, change, and version cloud and on-premises resources safely and efficiently.
Customers can now use the SAM CLI to locally test and debug a Lambda function defined in their Terraform application. The SAM CLI can read infrastructure resource information from the Terraform project and start Lambda functions locally in a Docker container to invoke with an event payload, or attach a debugger using AWS Toolkits in the IDE to step through the Lambda function code.
This feature is supported with Terraform version 1.1+. For the most seamless experience, use it with terraform-aws-modules/lambda version 4.6.1+. To learn more about this feature, please see the compute blog and documentation. You can install the latest version of the SAM CLI by following the instructions in the documentation.
AWS announced the availability of AWS Fluent Bit container images for Windows Server on Amazon ECS and Amazon EKS, to help customers easily process and forward their container logs to various AWS and third-party destinations, such as Amazon CloudWatch, Amazon S3, Amazon Kinesis Data Firehose, Datadog, and Splunk.
This capability helps customers centrally view, query, and manage their logs without needing to implement or manage a custom logging solution or agents to extract logs from their Windows containers. With this launch, customers have a common mechanism to process and route their logs across ECS and EKS for both Linux and Windows workloads. For more details about the supported Windows versions and the image tags for Fluent Bit, visit the public GitHub repository.
Fluent Bit is a high-performance logging and metrics processor and forwarder across various operating system families. Customers can run an AWS Fluent Bit image as a standalone daemon task on Amazon ECS or as a Kubernetes DaemonSet on Amazon EKS to collect and route logs from their Windows containers to a centralized location.
Customers can follow steps in the container blogs to stream Microsoft IIS logs generated by Windows nodes running on Amazon ECS and Amazon EKS. To learn more about running Fluent Bit image on ECS, visit this ECS tutorial.
Customers can currently run Windows containers on a number of AWS-managed container orchestration services (Amazon ECS (on EC2 and AWS Fargate), Amazon EKS, and Amazon ECS Anywhere), in addition to using self-managed options.
Amazon EKS now supports Kubernetes version 1.24. Notable changes in Kubernetes 1.24 include containerd replacing Dockershim as the container runtime, a change to beta API behavior, and topology aware hints for efficient traffic routing being enabled by default.
Additionally, you should note that PodSecurityPolicy (PSP) is scheduled for removal in Kubernetes 1.25. For detailed information on these changes, see the EKS blog post and the Kubernetes project release notes.
You can learn more about the Kubernetes versions available on Amazon EKS and instructions to update your cluster to version 1.24 by visiting EKS documentation. Amazon EKS Distro builds of Kubernetes 1.24 are available through ECR Public Gallery and GitHub. Learn more about the EKS version lifecycle policies here.
AWS Elemental MediaConnect now supports RGB 10- and 12-bit 4:4:4 color spaces via AWS Cloud Digital Interface (AWS CDI), enabling workloads such as color grading that require high-fidelity color at low latencies. The RGB 10- and 12-bit 4:4:4 color spaces are in addition to the currently supported option of YCbCr 10-bit 4:2:2.
Using lossless JPEG XS compression technology, AWS CDI automatically detects the incoming color space and passes it through to JPEG XS decoders with as little as 8ms of end-to-end latency. When coupled with color grading applications and a JPEG XS decoder, colorists can grade content with limited on-premises equipment.
With the additional color space options, AWS CDI now supports workloads where visually lossless fidelity, accurate color, and low latency are required.
Amazon Relational Database Service (Amazon RDS) now supports the delivery of message attributes, which provide structured metadata about a message. RDS event attributes are separate from the message, but are sent with the message body. The message receiver can use this information to decide how to handle the message, enabling routing and filtering without having to process the message body first.
With Amazon Simple Notification Service (SNS) or Amazon Simple Queue Service (SQS), you can now consolidate multiple filters for each condition into a single topic subscription. SNS and RDS events with attributes allow you to offload message filtering logic from subscribers and message routing logic from publishers. Each subscriber receives and processes only the messages accepted by its filter policy.
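To make the filtering model concrete, here is a toy matcher for flat, exact-match filter policies; the real SNS policy language also supports numeric ranges, prefixes, and anything-but clauses, and the attribute names below are invented for illustration:

```python
def matches(filter_policy, attributes):
    # A message passes when every policy key is present among the
    # message attributes AND the attribute's value appears in that
    # key's list of accepted values.
    return all(
        key in attributes and attributes[key] in accepted
        for key, accepted in filter_policy.items()
    )

policy = {"event-type": ["failover", "failure"]}
assert matches(policy, {"event-type": "failover", "db": "prod-1"})
assert not matches(policy, {"event-type": "backup"})
```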
Amazon RDS for SQL Server now supports a linked server to Oracle database. From your RDS for SQL Server instance, you can use linked servers to access external Oracle databases to read data and execute SQL commands. If you have existing solutions that use linked servers to integrate Oracle databases, you can now migrate your SQL Server workloads directly to Amazon RDS.
Now, you can enable a linked server to Oracle by adding the Linked Heterogeneous Server (LHS) option to the option group associated with your instance. After the option is enabled, the status of your instance becomes 'pending-reboot'.
AWS do not force the instance to reboot after you modify the option group; this is a new behavior for Option Groups, though you may be familiar with a similar experience from modifying a parameter in Parameter Groups. During the reboot, RDS will install an OLEDB driver on the host. After the reboot, you can configure a linked server using T-SQL statements.
Amazon Redshift concurrency scaling is leveraged by thousands of customers to support virtually unlimited concurrent users and queries, and to meet their SLAs for BI reports, dashboards and other analytics workloads. In addition to read queries, Amazon Redshift concurrency scaling is now extended to support scaling of the most common write operations performed as part of workloads such as data ingestion and processing. Support for write workloads with concurrency scaling is available on Amazon Redshift RA3 instance types.
With the new capability, customers who currently use concurrency scaling can automatically scale common write operations such as Redshift COPY, INSERT, UPDATE, and DELETE onto the concurrency scaling clusters. Write workload support works seamlessly with any configured usage controls and workload management queue configurations.
When concurrency scaling is enabled for a queue, eligible write queries are sent to concurrency scaling clusters without having to wait for resources to free up on the main Amazon Redshift cluster. The hourly credits that customers accrue with concurrency scaling for every 24 hours of their usage of the main Amazon Redshift cluster can be leveraged to support scaling write queries as well.
For any usage that exceeds the accrued free usage credits, you will be billed on a per-second basis at the on-demand rate of your Amazon Redshift cluster and according to the cost controls you have configured.
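As a back-of-envelope illustration of the credit model described above (the on-demand rate here is hypothetical; actual rates vary by region and node type):

```python
def concurrency_scaling_charge(main_cluster_hours, scaling_seconds,
                               on_demand_rate_per_hour):
    # One free credit-hour accrues for every 24 hours the main cluster
    # runs; usage beyond the accrued credits is billed per second at the
    # cluster's on-demand rate. Simplified from the description above.
    free_seconds = (main_cluster_hours // 24) * 3600
    billable = max(0, scaling_seconds - free_seconds)
    return billable * on_demand_rate_per_hour / 3600

# A cluster up for 30 days (720h) accrues 30 credit-hours, so 40 hours
# of concurrency scaling leaves 10 hours billable at the assumed rate.
print(round(concurrency_scaling_charge(720, 40 * 3600, 4.80), 2))
```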
Concurrency scaling support for write workloads is generally available in all Amazon Redshift regions where RA3 instance types and concurrency scaling are supported. For more information on concurrency scaling, refer to the documentation in the Amazon Redshift Cluster Management Guide.
AWS re:Post is a cloud knowledge service designed to help AWS customers remove technical roadblocks, accelerate innovation, and operate efficiently. re:Post has only supported English since the launch at re:Invent 2021.
This week, re:Post has expanded the user experience to support five additional languages. Customers can now learn, design, build, and troubleshoot AWS technology by posting questions and consuming content in the following languages: Traditional Chinese, Simplified Chinese, French, Japanese, and Korean. Multi-lingual support makes the re:Post community more accessible to AWS enthusiasts globally, allowing them to collaborate and build connections with community members in their preferred or chosen language(s) and to locate the content they need faster.
Customers can search and browse available content in the five additional languages without logging in. re:Post members who are logged in can set their website, notifications, and content language(s) of choice, and preferences will be saved for the next time they log in.
Members can browse and consume content; post questions, answers, or articles; and, leverage existing features such as auto-tagging or content recommendations in their preferred language(s). Additionally, Premium Support customers who do not receive community responses to questions will receive an answer from a Support Engineer in the same language as the published question to help speed up their troubleshooting time.
The multi-lingual experience will help re:Post members feel more connected to the community as they can engage with one another in their preferred language(s) to promote self-learning, remove technical roadblocks, and accelerate their adoption of AWS technology.
Amazon Relational Database Service (Amazon RDS) for Oracle now supports integration with Amazon Elastic File System (EFS). You can now transfer files between an RDS for Oracle DB instance and an Amazon EFS file system. Amazon EFS is designed for 99.999999999% (11 9s) of durability and up to 99.99% (4 9s) of availability, and you can scale to petabytes on a single NFS file system.
Using Amazon EFS integration, you can stage transitory files such as Oracle Data Pump export files on an EFS file system and import directly from it. You can also use the integration to share a file system between an RDS for Oracle DB instance and an application instance, or across multiple RDS for Oracle DB instances, to suit your application architecture.
To use RDS for Oracle integration with Amazon EFS, enable the EFS_INTEGRATION option; see the documentation for more information. The EFS integration feature is supported for DB instances in the same AWS Region and VPC.
Amazon HealthLake Imaging is a new HIPAA-eligible capability now in preview that enables healthcare providers and their software partners to easily store, access, and analyze medical images at petabyte scale. With HealthLake Imaging, healthcare providers and their software partners can run their medical imaging applications in the cloud to increase scale while also reducing infrastructure costs.
HealthLake Imaging helps providers reduce the total cost of medical imaging storage up to 40% by running their medical imaging applications from a single, authoritative copy of patient imaging data in the cloud. HealthLake Imaging enables access to medical imaging data with sub-second image retrieval latencies at scale powered by cloud-native APIs and applications from AWS partners.
Providers can realize the cost savings of transitioning to the cloud while preserving low latency performance, enabling them to focus their time and resources to deliver high quality patient care.
Starting this week, Amazon EC2 High Memory instances with 3TiB of memory (u-3tb1.56xlarge) are available in US East (Ohio). Instances with 6TiB of memory (u-6tb1.56xlarge, u-6tb1.112xlarge) are now available in Europe (Milan), instances with 18TiB of memory (u-18tb1.112xlarge) are now available in Europe (Ireland), and instances with 24TiB of memory (u-24tb1.112xlarge) are now available in US East (N. Virginia) and AWS GovCloud (US-West).
This launch also includes support for On-Demand, Savings Plans, Reserved Instance, and 1-year Dedicated Host Reservation purchase options for instances with 18TiB and 24TiB of memory, which were previously only available with 3-year Dedicated Host Reservations.
Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.
Starting today, you can use AWS Nitro Enclaves in Asia Pacific (Osaka) and Asia Pacific (Jakarta) regions.
AWS Nitro Enclaves is an Amazon EC2 capability that enables customers to create isolated compute environments to further protect and securely process highly sensitive data within their EC2 instances. Nitro Enclaves helps customers reduce the attack surface area for their most sensitive data processing applications.
There is no additional cost other than the cost of the Amazon EC2 instances and any other AWS services used with Nitro Enclaves.
With this expansion, Nitro Enclaves is now available in 24 AWS regions globally, including US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Middle East (Bahrain), Asia Pacific (Jakarta, Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), South America (Sao Paulo), Canada (Central), Africa (Cape Town), and AWS GovCloud (US-East, US-West).
Companies using the Amazon Connect Customer Profiles APIs for custom agent applications and automated interactions (e.g., IVR) can now search for profiles using multiple search terms, making it easier to find the right profile. Using the enhanced SearchProfiles API, customers can search for profiles using up to 5 terms to narrow down or expand search results.
For example, when dealing with common names, you can narrow your search results to a single profile by searching for profiles that match more than one term, such as phone number and name. Conversely, when you are unsure which search term matches a specific profile, you can expand search results to all profiles matching any of the terms provided, such as phone number, name, or social security number.
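A request to the enhanced SearchProfiles API combining a phone number and a name might look like the following. The domain is implied by the request path, and the key names and values here are illustrative; the AdditionalSearchKeys and LogicalOperator fields follow the announced parameters, so treat this as a sketch rather than a definitive request body:

```json
{
  "KeyName": "_phone",
  "Values": ["+12065550100"],
  "AdditionalSearchKeys": [
    { "KeyName": "_fullName", "Values": ["Jane Doe"] }
  ],
  "LogicalOperator": "AND",
  "MaxResults": 20
}
```

Switching LogicalOperator to "OR" would expand the results to any profile matching either term.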
With Amazon Connect Customer Profiles companies can deliver faster and more personalized customer service by providing access to relevant customer information for agents and automated experiences. Companies can bring customer data from multiple SaaS applications and databases into a single customer profile, and pay only for what they use based on the number of customer profiles. Amazon Connect Customer Profiles is available in US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), and Europe (London).
Amazon HealthLake announces new analytics capabilities, making it easier for customers to query, visualize, and build machine learning models on their HealthLake data. With this launch, HealthLake transforms customer data into an analytics-ready format in AWS Lake Formation in near real-time.
This removes the need for customers to execute complex data exports and data transformations. Now customers can simply focus on querying the data with SQL using Amazon Athena, building visualizations using Amazon QuickSight or other third party tools, and using this data to build ML models with Amazon SageMaker.
Healthcare analytics use cases such as population health analysis and claims analytics require customers to use data from multiple, disparate sources such as EHR (Electronic Health Records), claims, and devices.
This entails building complex data pipelines and executing extractions and transformations that often take months of undifferentiated heavy lifting. HealthLake reduces the time from months to days by normalizing this data from multiple disparate sources into an interoperable format and further activating it for analytics within AWS Lake Formation.
Customers can then apply granular controls, share this data within the organization, and rapidly build applications such as longitudinal patient medical records. With a few clicks, customers can use AWS services such as Amazon Athena, Amazon QuickSight, and Amazon SageMaker to build population health dashboards, execute claims analytics, and build care-gap prediction models.
The AWS Controllers for Kubernetes (ACK) controller for the Amazon EC2 service is now generally available. ACK lets you provision and manage EC2 networking resources, such as VPCs, security groups, and internet gateways, using the Kubernetes API.
ACK lets you define and use AWS service resources directly from Kubernetes clusters. With ACK, you can take advantage of AWS managed services for your Kubernetes applications without needing to define resources outside of the cluster or run services that provide supporting capabilities like databases, message queues or instances within the cluster.
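For example, a VPC could be declared directly from the cluster with a manifest along these lines. The resource name and CIDR are placeholders, and the spec fields are taken from the ACK EC2 controller's CRDs as we understand them; check the ec2.services.k8s.aws CRD reference before relying on the exact field names:

```yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: app-vpc
spec:
  # CIDR range for the new VPC (placeholder value)
  cidrBlocks:
    - "10.0.0.0/16"
  enableDNSSupport: true
  enableDNSHostnames: true
```

Applying this with kubectl would have the controller create the VPC in your AWS account and track its state as a Kubernetes resource.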
ACK now supports 14 AWS service controllers as generally available with an additional 12 in preview.
Amazon S3 server access logs and AWS CloudTrail logs will soon contain information to identify S3 requests that rely upon an access control list (ACL) for authorization to succeed. This feature, which will be activated over the next few weeks, will provide you with information that will simplify the process of adopting the S3 security best practice of disabling ACLs.
Amazon S3 launched in 2006 with access control lists as the way to grant access to S3 buckets and objects. Since 2011, Amazon S3 has also supported AWS Identity and Access Management (IAM) policies. Today, the majority of use cases in Amazon S3 no longer require ACLs, and instead are more securely and scalably achieved with IAM policies.
AWS therefore recommends disabling ACLs as a security best practice. The new information AWS is adding to Amazon S3 server access logs and AWS CloudTrail will allow you to discover any existing applications or access patterns that rely on ACLs for access to your data, so that you can migrate those permissions to IAM policies before you disable ACLs on your S3 bucket.
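Once the new information rolls out, you could scan CloudTrail events for requests that still depend on ACLs. A minimal sketch in Python, assuming the field surfaces as aclRequired inside additionalEventData as AWS has described; the sample records and field placement are illustrative, so verify against the CloudTrail documentation:

```python
def acl_dependent_requests(events):
    """Return (eventName, bucket) pairs for S3 requests that
    relied on an ACL to succeed, based on the aclRequired field."""
    hits = []
    for e in events:
        extra = e.get("additionalEventData", {})
        if extra.get("aclRequired") == "Yes":
            bucket = e.get("requestParameters", {}).get("bucketName", "?")
            hits.append((e.get("eventName"), bucket))
    return hits

# Hypothetical CloudTrail-style records for illustration only.
sample = [
    {"eventName": "GetObject",
     "requestParameters": {"bucketName": "legacy-bucket"},
     "additionalEventData": {"aclRequired": "Yes"}},
    {"eventName": "PutObject",
     "requestParameters": {"bucketName": "modern-bucket"},
     "additionalEventData": {}},
]

print(acl_dependent_requests(sample))  # [('GetObject', 'legacy-bucket')]
```

Any bucket that never appears in the output over a representative period is a candidate for safely disabling ACLs.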
Amazon File Cache is now available in four additional AWS regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Hong Kong), and Europe (Stockholm).
Amazon File Cache is a fully managed, scalable, and high-speed cache on AWS for processing file data stored in disparate locations—including on premises. You can create a cache on AWS in just a few minutes and link it to multiple on-premises NFS file systems, in cloud file systems (Amazon FSx for OpenZFS, Amazon FSx for NetApp ONTAP), and Amazon Simple Storage Service (Amazon S3) buckets.
Amazon File Cache is designed to deliver consistent sub-millisecond latency, up to hundreds of GB/s of throughput, and up to millions of operations per second, helping you speed up workload completion times and optimize compute utilization. Using Amazon File Cache, you can accelerate and simplify cloud bursting and hybrid workflows including media and entertainment, financial services, health and life sciences, microprocessor design, manufacturing, weather forecasting, and energy.
Amazon EventBridge now supports additional filtering capabilities including the ability to match against characters at the end of a value (suffix filtering), to ignore case sensitivity (equals-ignore-case), and to have a single EventBridge rule match if any conditions across multiple separate fields are true (OR matching).
AWS is also expanding the supported range for numeric values from -1e9 through 1e9 to -5e9 through 5e9. With these enhanced capabilities, you can now write complex rules that provide additional filtering options when building event-driven applications.
Amazon EventBridge is a serverless event bus that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up routing rules to determine where to send your data, allowing for application architectures to react to changes in your systems as they occur.
Amazon EventBridge makes it easier to build event-driven applications by facilitating event ingestion, delivery, security, authorization, and error handling.
The new filtering capabilities virtually eliminate the need to write and manage custom filtering code in downstream services. For example, if you consume S3 events using EventBridge but only need to process certain file types such as PDFs, you can use a suffix filter to ensure your EventBridge rule only matches against S3 objects that end in “.pdf”.
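Putting the new suffix operator into an event pattern, the PDF example could look like the following. The bucket name is a placeholder, and while the operator names match what EventBridge documents, treat the pattern as a sketch:

```json
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": { "name": ["my-bucket"] },
    "object": {
      "key": [{ "suffix": ".pdf" }]
    }
  }
}
```

The equals-ignore-case operator follows the same bracketed form (for example, [{ "equals-ignore-case": "readme.pdf" }]), and a "$or" array lets a single rule match when any one of several field conditions is true.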
To get started, navigate to the Rules page in the EventBridge console to create a new rule or edit an existing one. To learn more, please visit the documentation.
Amazon S3 Object Lambda is now available in the Asia Pacific (Osaka) AWS Region. With S3 Object Lambda, you can add your own code to S3 GET, HEAD, and LIST requests to modify and process data as it is returned to an application.
You can use custom code to modify the data returned by S3 GET requests to filter rows, dynamically resize images, redact confidential data, and much more. You can also use S3 Object Lambda to modify the output of S3 LIST requests to create a custom view of objects in a bucket and S3 HEAD requests to modify object metadata like object name and size.
With just a few clicks in the AWS Management Console, you can configure an AWS Lambda function and attach it to an S3 Object Lambda Access Point. From that point forward, S3 will automatically call your AWS Lambda function to process any data retrieved through the S3 Object Lambda Access Point, returning a transformed result back to the application.
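As an illustration of the kind of transform an Object Lambda function might apply, here is a sketch of a redaction step in Python. Only the pure transform is shown; in a real function you would fetch the object from the presigned URL in the event's getObjectContext and return the result via the WriteGetObjectResponse API, wiring that is omitted here:

```python
import re

# Pattern for US SSN-style strings; purely illustrative.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(body: str) -> str:
    """Mask anything that looks like an SSN before the object
    body is returned to the requesting application."""
    return SSN_RE.sub("***-**-****", body)

# In a real handler: download the object from
# event["getObjectContext"]["inputS3Url"], transform it with
# redact(), and return it with s3.write_get_object_response(...).
print(redact("name=Jane ssn=123-45-6789"))  # name=Jane ssn=***-**-****
```

The same shape applies to other transforms, such as filtering rows from CSVs or resizing images, with the redact step swapped out.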
You can get started with S3 Object Lambda through the AWS Management Console, AWS Command Line Interface (CLI), Application Programming Interface (API), or AWS Software Development Kit (SDK) client.
S3 Object Lambda is available in all AWS Regions, including the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD.
The next generation of Amazon FSx for Lustre file systems is now available in four additional AWS regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Hong Kong), and Europe (Stockholm).
The next generation of Amazon FSx for Lustre file systems is built on AWS Graviton processors and provides up to 5x higher throughput per terabyte (up to 1 GB/s per terabyte) and up to 60% lower cost of throughput compared to previous generation file systems.
Using the next generation of FSx for Lustre file systems, you can accelerate execution of machine learning, high-performance computing, media & entertainment, and financial simulations workloads while reducing your cost of storage.
The AWS Marketplace Catalog API now supports tag-based authorization of resources. As a seller or a private marketplace administrator, you can now exercise IAM policy-based control over resources such as Entities and ChangeSets by tagging them and allowing or disallowing actions based on those tags.
You can add tags to resources when you create them using the StartChangeSet API action, or add tags to existing resources using the new TagResource API action. You can also list all the tags on a resource using the ListTagsForResource API action, and remove tags using the UntagResource API action.
Only owners of the target resource can perform tag-related actions, such as adding a tag, removing a tag, and listing all the tags on the resource. As an owner, you can also grant an IAM user/role permission to perform actions on resources based on the tags associated with them.
Previously, if you wanted to grant an IAM user/role access to update a group of product listings, you had to define an IAM policy with Amazon Resource Name (ARN) of each product in the group. Now, with tag-based authorization, you can accomplish this efficiently by tagging the resources and defining tag-based permissions on them.
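A tag-based policy might look like the following sketch, which would allow a role to start change sets only against resources tagged for its team. The tag key, value, and exact action are assumptions for illustration; check the Marketplace Catalog condition key documentation before using them:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "aws-marketplace:StartChangeSet",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/team": "analytics" }
      }
    }
  ]
}
```

Compared with enumerating product ARNs, adding a product to the group becomes a matter of tagging it, with no policy change required.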
This week, AWS announced the general availability of an IoT Device Management feature: browser-based SSH (Secure Shell) using Secure Tunneling.
Secure Tunneling provides customers a secure connection between source and destination devices that are brokered through a cloud proxy service on AWS. To provide secure bi-directional communication between devices, tunnels are authenticated with the cloud proxy service and data transmitted through the tunnel is encrypted using Transport Layer Security (TLS).
With browser-based SSH, customers can open an SSH tunnel to a target device directly from the AWS console and use an embedded terminal without the need for a local proxy. This significantly simplifies onboarding, because customers no longer need to compile and install a local proxy on the operator's device.
This streamlined experience allows customers to easily scale their use of Secure Tunneling for remote tasks such as troubleshooting or conducting routine operational maintenance.
You can now apply AWS Lake Formation fine-grained access control policies with all table and file formats supported by Amazon Athena. Lake Formation allows you to centrally manage permissions and access control for data catalog resources in your S3 data lake. You can use fine-grained access control in Lake Formation to restrict access to data in query results using data filters, achieving column-level, row-level, and cell-level security.
With this week’s launch, you can enforce fine-grained access control policies in Athena queries for data stored in any supported file format using table formats such as Apache Iceberg, Apache Hudi and Apache Hive. You get the flexibility to choose the table and file format best suited for your use case and get the benefit of centralized data governance to secure data access when using Athena.
For example, you can use the Iceberg table format to store data in your S3 data lake for reliable write transactions at scale, together with row-level security filters in Lake Formation, so that data analysts in different countries get access only to data for customers located in their own country, meeting regulatory requirements.
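The country-level row filter described above could be defined as a Lake Formation data cells filter along these lines. The account ID, database, table, and filter names are placeholders; the shape follows the CreateDataCellsFilter API, so verify the field names against the Lake Formation API reference:

```json
{
  "TableData": {
    "TableCatalogId": "111122223333",
    "DatabaseName": "sales",
    "TableName": "customers",
    "Name": "customers_de_only",
    "RowFilter": { "FilterExpression": "country = 'DE'" },
    "ColumnWildcard": {}
  }
}
```

Granting a principal SELECT on the table through this filter means their Athena queries only ever return rows where the expression holds, with no change to the queries themselves.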
The expanded support for table and file formats does not require any change to how you set up fine-grained access control policies in Lake Formation, and requires Athena engine version 3, which offers new features and improved query performance.
Amazon WorkDocs now offers a Delete Previous Versions feature in the WorkDocs web client and APIs. This new web client feature enables end users to delete previous versions of files they own, providing greater user control. The new WorkDocs APIs and SDK also enable site administrators to support the deletion and restoration of deleted versions programmatically. Both end users and site administrators can now manage document versions and individual user data limits proactively and effectively.
Deleting document versions from the WorkDocs web client incurs no additional fees. Using the API to delete a version incurs no additional fees, and restoring a deleted version will be billed at the rate for a WRITE request. Please refer to the Amazon WorkDocs API Pricing page for the current pricing information. The ability to delete a previous version is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland).
Agent Assist has launched backend modules as a GA feature. Backend modules is an out-of-the-box solution that provides an effective backend infrastructure, making integrating Agent Assist with your agent system faster and easier. See the backend modules basics and integration guide for details.
The Agent Assist Console is now GA. The Console now also includes built-in workflow tutorials that walk you through creating a dataset, training and testing a model, and creating a conversation profile. Sample datasets and demo models are now provided as well. To see the new Console tutorials, navigate to the Console and click the Get started button under the feature you'd like to test.
Agent Assist now supports sentiment analysis of voice data as a private Preview feature. For more information, see the Agent Assist private features documentation. To gain access to the private documentation, please contact your Google representative.
Agent Assist now supports CCAI Transcription as a GA feature. CCAI Transcription allows you to convert streaming audio data into text transcripts in real time, allowing you to implement Agent Assist features for use with voice data. See the documentation for details.
Anthos Clusters on VMware
Anthos clusters on VMware 1.13.2-gke.26 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.13.2-gke.26 runs on Kubernetes 1.24.7-gke.1400.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.13, 1.12, and 1.11.
- Fixed a validation error where the GKE Hub membership is not found when using a gcloud version that is not bundled with the admin workstation.
- Fixed the issue where the admin cluster might fail to register due to naming conflicts.
- Fixed the issue where the Connect Agent in the admin cluster does not upgrade after a failure to upgrade nodes in the user cluster control plane.
- Fixed a bug where running gkectl diagnose snapshot with the system scenario did not capture Cluster API resources.
- Fixed the issue during admin cluster creation where gkectl check-config fails due to missing OS images if gkectl prepare is not run first.
- Fixed the unspecified Internal Server error in ClientConfig when using the Anthos Identity Service (AIS) hub feature to manage the OpenID Connect (OIDC) configuration.
- Fixed the issue of /var/log/audit/ filling up disk space on the admin workstation.
- Fixed an issue where cluster deletion may be stuck at node draining when the user cluster control plane and node pools are on different datastores.
- Fixed the issue where nodes fail to register if the configured hostname in the IP block file contains one or more periods.
Object tables are now in preview. Object tables are read-only tables containing metadata for unstructured data stored in Cloud Storage. These tables enable you to analyze and perform inference on images, audio files, documents, and other file types by using BigQuery ML and BigQuery remote functions. Object tables extend structured data features such as data security and governance best practices to unstructured data.
Metadata caching is now in preview. Using cached metadata might improve query performance for BigLake tables and object tables that reference large numbers of objects, by allowing the query to avoid listing objects from Cloud Storage.
You can collect Splunk CIM logs by using the Chronicle forwarder and Splunk default parser. For more information, see Collect Splunk CIM logs.
UDM Search is a new Chronicle search feature which enables you to find UDM events within your Chronicle instance. You can search both for individual UDM events and groups of UDM events tied to shared search terms. UDM search includes a number of search features, enabling you to navigate through your UDM data:
- Quick Filters—Fast access to saved searches and search history.
- Event Viewer—View the raw log and UDM for the event.
- Search Manager—Comprehensive view of your saved searches and search history.
Be sure to review Google's recommended best practices for conducting searches using UDM Search. UDM searches can require substantial computational resources to complete if they are not constructed carefully. Performance also varies depending on the size and complexity of the data in your Chronicle instance.
All Composer environment GKE clusters are set up with maintenance exclusions for the period between November 18, 2022 and November 30, 2022. For more information, see Maintenance exclusions.
Cloud Functions container runtimes have been patched against CVE-2022-3786 and CVE-2022-3602. Affected runtime languages are:
- Java 17
- Python 3.10
- Go 1.18/1.19
- .NET 6
You should redeploy functions using the affected runtime languages as soon as possible. Google does not automatically update the base image for already-deployed functions, but will automatically apply the latest runtime version when you redeploy.
Logs from Cloud Run services can now be tailed or viewed in a command-line friendly format using gcloud beta run services logs tail and gcloud beta run services logs read.
Time to live (TTL) is now supported in PostgreSQL-dialect databases. With TTL, you can reduce storage costs, improve query performance, and simplify data retention by automatically removing unneeded data based on user-defined policies.
Added support for the JSONB data type in the Cloud Spanner PostgreSQL dialect. For more information, see Work with JSONB data.
- Mumbai
- Delhi
- Columbus
- Dallas
- Las Vegas
Support for internal ingress from Cloud Tasks to Cloud Run and Cloud Functions is now at General Availability.
For online document translations, you can increase the page limit for native PDF documents to 300 pages.
Preview: You can limit the runtime of a VM to automatically stop or delete it when a time limit is reached. Limiting VM runtimes can help you optimize temporary workloads by minimizing costs and releasing quota. For more information, see Limit the runtime of a VM.
Generally available: You can double the default size limit for a managed instance group (MIG): Zonal MIGs support up to 2,000 VMs and regional MIGs support up to 4,000 VMs. For more information, see Increase the group's size limit.
Generally available: Use the new distribution shape ANY SINGLE ZONE in a regional managed instance group (MIG) to automatically select a single zone that has available resources within your quota. Recommended for workloads that require low latency, high-bandwidth connections between VMs or when you want to avoid inter-zone network traffic costs.
Balanced persistent disks and SSD persistent disks now offer baseline IOPS and throughput performance. To learn more, see Baseline performance.
VPC Service Controls now support Config Controller. The support is in Preview status.
Config Controller now uses the following versions of its included products:
New sub-minor versions of Dataproc images: 1.5.77-debian10, 1.5.77-rocky8, 1.5.77-ubuntu18; 2.0.51-debian10, 2.0.51-rocky8, 2.0.51-ubuntu18; and preview 2.1.0-RC4-debian11, 2.1.0-RC4-rocky8, 2.1.0-RC4-ubuntu20.
- Updated the google-auth-oauthlib Python package.
- Updated the gcsfs Python package for 2.0 and 2.1 images.
- Backported HIVE-17317 in the latest 2.0 and 2.1 images.
Dialogflow CX agents can now be exported to JSON.
The Identity Document Proofing Processor is now available in Public Preview.
The Identity Document Proofing Processor is designed to help predict the validity of ID documents using four signals:
- is_identity_document detection: predicts whether an image contains a recognized identity document.
- suspicious_words detection: predicts whether words are present that aren't typical on IDs.
- image_manipulation detection: predicts whether the image was altered or tampered with via an image editing tool.
- online_duplicate detection: predicts whether the image can be found online.
In version 1.24 and later, GKE Autopilot clusters support signaling to GKE that a particular node is problematic.
Google Cloud VMware Engine
Starting November 17, 2022, newly created private clouds will utilize IP address layout (IP Plan) version 2.0 subnet allocations. HCX addressing is now included in the management CIDR allocation, simplifying the process of starting data center VM migrations. IP Plan version 2.0 also enables additional scale and features delivered to your public cloud in upcoming releases.
Stretched private clouds are now available in the europe-west3 (Frankfurt) region. You can use stretched private clouds to stretch vSphere/vSAN clusters across zones and protect against zone-level failures. This functionality enables high levels of availability for business-critical applications.
BigQuery subscriptions now support the JSON type for all string fields, including attributes. For more information about JSON type compatibility, see Properties of a BigQuery subscription.
Security Command Center
A files attribute was added to the Finding object of the Security Command Center API. The files attribute contains information about each file that triggered a finding, including the name of the file, the full path to the file, and the size of the file. For more information, see the Security Command Center API documentation for the Finding object.
Preview: Connectivity to Private Service Connect endpoints used to access a managed service is now supported over VLAN attachments for Cloud Interconnect.
Preview: Private Service Connect endpoints with consumer HTTP(S) controls now support accessing regional Google APIs and managed services using the following load balancers:
- Regional internal HTTP(S) load balancer
- Regional external HTTP(S) load balancer
Microsoft Azure Releases And Updates
You can invoke an Azure Function when a row in a SQL database is created, updated, or deleted through the Azure SQL trigger for Azure Functions, now available in public preview.
Azure Arm-based VMs are now available in four more regions.
Optimize business costs by licensing your Disaster Recovery secondary for free with SQL Server on Azure Virtual Machines.
Save on business continuity costs and license geographically redundant Disaster Recovery with Azure SQL Managed Instance.
Backup of your Azure SQL Managed Instance database can now be restored to a SQL Server 2022 instance.
Azure Synapse Link for SQL enables seamless near-real-time data movement from relational sources in Azure SQL Database and SQL Server 2022 to analytical stores without needing to build ETL pipelines.
The link feature in Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing hybrid flexibility and database mobility.
New feature wave enhancements for Azure SQL Managed Instance make it even more performant, reliable, and secure with on-premises SQL Server and the broader Azure platform.
Public preview enhancements and updates released for Azure SQL in mid-November 2022.
General availability enhancements and updates released for Azure SQL in mid-November 2022.
Participate in the retail evaluation now to ensure compatibility. The Azure Sphere team has also updated the trusted keystore of Azure Sphere devices, resulting in an additional reboot for production devices.
Start using the new policies with TLS 1.3 for your Azure Application Gateway to improve security and performance.
Azure's regional Web Application Firewall on Application Gateway now supports customizing actions per rule. This new functionality allows you greater control over how the Web Application Firewall handles incoming traffic.
This rule set provides enhanced protection against bots and granular control over bot traffic detected by WAF, categorizing bots as good, bad, or unknown.
Use Azure REST Quota APIs to manage the service limits (quotas) for Azure VMs, HPC Cache, Purview services, and networking.
Protect your data by encrypting Premium SSD, Standard SSD, and Standard HDD managed disks with cross tenant customer-managed keys.
You can use this feature to back up confidential VMs using Platform Managed Keys.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes. Get back your precious time and sanity and rid yourself of manual drag and drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts, or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here (includes forever free tier):