
In Cloud Computing This Week [Dec 2nd 2022]

December 2, 2022

 


Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday December 2nd 2022.

This week we saw continued take-up of the new Hava Terraform Provider, which lets you include diagram source creation directly inside your deployment code, so your network documentation is generated alongside the environments you build.

https://www.hava.io/blog/hava-releases-hashicorp-terraform-provider

To stay in the loop, make sure you subscribe using the box on the right of this page.

Of course we'd love to keep in touch at the usual places. Come and say hello on:

Facebook, LinkedIn, or Twitter.



AWS Updates and Releases

Source: aws.amazon.com

Announcing Amazon CodeCatalyst (Preview)

AWS is launching a preview of Amazon CodeCatalyst, a unified software development service that makes it faster to build and deliver software on AWS. CodeCatalyst provides software development teams with an integrated project experience that brings together the tools needed to plan, code, build, test, and deploy applications on AWS.

Software teams spend significant time and resources on collaborating effectively and setting up tools, development and deployment environments, and continuous integration and delivery (CI/CD) automation. These activities can detract from their ability to quickly deliver new features or software updates to customers.

With CodeCatalyst, teams can automate many of these complex activities, so they can focus on quickly enhancing their applications and deploying them to AWS.

You can use CodeCatalyst to browse a collection of project blueprints. Choose one to quickly set up your project using the right combination of technologies, architecture, infrastructure, and services. A project blueprint automatically provisions and configures your project tools—like an issue tracker or source repository—sets up CI/CD workflows that run on managed build infrastructure, and defines the architecture and configuration of AWS services used by the project.

You can also replace the built-in source repository and issue tracker with GitHub or Jira Software while maintaining an integrated experience. Teams can automate consistent creation and configuration of shared deployment environments and individual cloud-based development environments. Team members can quickly access the same tools, resources, environments, and project state to collaborate more efficiently.

Amazon CodeCatalyst is available in preview today in the AWS Region of US West (Oregon), and you can use it to automate deployments to any AWS account and Region.

Amazon EventBridge Pipes is now generally available

EventBridge Pipes provides a simpler, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers, expanding the EventBridge offering beyond event buses and scheduling.

EventBridge Pipes makes it easy to connect your applications with data from sources including Amazon SQS, Amazon Kinesis, Amazon DynamoDB, Amazon Managed Streaming for Apache Kafka (Amazon MSK), self-managed Kafka, and Amazon MQ. EventBridge Pipes supports the same target services as event buses, such as Amazon SQS, AWS Step Functions, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon SNS, Amazon ECS, and event buses themselves.

Creating a pipe is as simple as selecting a source and a target, and you can customize batching, starting position, concurrency, and more, if desired. An optional filtering step allows only specific source events to flow into the pipe and an optional enrichment step using AWS Lambda, AWS Step Functions, API Destinations, or Amazon API Gateway can be used to enrich or transform events before they reach the target.

By removing the need to write, manage, and scale undifferentiated integration code, EventBridge Pipes allows you to spend your time building your services rather than connecting them.
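If you'd rather script this than click through the console, here's a minimal boto3 sketch of creating a pipe. The queue and state machine ARNs, the IAM role, and the filter pattern are placeholders for illustration, not values from the announcement.

```python
import boto3

pipes = boto3.client("pipes")

# Hypothetical ARNs and role; replace with resources in your own account.
response = pipes.create_pipe(
    Name="orders-to-stepfunctions",
    RoleArn="arn:aws:iam::123456789012:role/pipes-execution-role",
    Source="arn:aws:sqs:us-east-1:123456789012:orders-queue",
    # Optional filtering step: only events matching this pattern flow through.
    SourceParameters={
        "FilterCriteria": {
            "Filters": [{"Pattern": '{"body": {"status": ["NEW"]}}'}]
        }
    },
    Target="arn:aws:states:us-east-1:123456789012:stateMachine:process-order",
    TargetParameters={
        "StepFunctionStateMachineParameters": {"InvocationType": "FIRE_AND_FORGET"}
    },
)
print(response["Arn"])
```

An optional enrichment step (Lambda, Step Functions, API Destinations, or API Gateway) can be added with the Enrichment parameter before events reach the target.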

Amazon Comprehend announces support for classification and entity extraction directly from a variety of document formats

Amazon Comprehend announced single-step APIs that customers can now use to classify and extract entities of interest from PDF documents, Microsoft Word files, and images.

Amazon Comprehend is a Natural Language Processing (NLP) service that provides pre-trained and custom APIs to derive insights from textual data. The new capability simplifies document processing by adding support for common document types such as PDFs, Microsoft Word documents, and images to the Amazon Comprehend Custom Classification and Custom Entity Recognition APIs.

Previously, to process such documents, AWS customers were required to pre-process and flatten them into machine-readable text, which can reduce the quality of the document context. Now, with a single API call, customers can process both scanned and digital semi-structured documents (such as PDFs, Microsoft Word documents, and images in their native format) as well as plain-text documents, eliminating the pre-processing overhead. Customers can use the new capability to simplify document processing for batch or real-time use cases.
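As a rough sketch of the new single-step flow, the boto3 call below sends a native PDF to a custom classification endpoint. The file name and endpoint ARN are placeholders, and the reader configuration shown is an assumption based on the API's Textract-backed document reading options.

```python
import boto3

comprehend = boto3.client("comprehend")

# Hypothetical custom-classifier endpoint and local PDF.
with open("claim-form.pdf", "rb") as f:
    document_bytes = f.read()

response = comprehend.classify_document(
    EndpointArn="arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/claims",
    Bytes=document_bytes,  # native PDF/Word/image, no manual flattening to text
    DocumentReaderConfig={
        "DocumentReadAction": "TEXTRACT_DETECT_DOCUMENT_TEXT",
        "DocumentReadMode": "SERVICE_DEFAULT",
    },
)
for cls in response.get("Classes", []):
    print(cls["Name"], cls["Score"])
```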

AWS re:Post streamlines customers’ community engagement with AWS Builder ID and re:Post Linked logins

AWS re:Post is a cloud knowledge service that provides customers the technical guidance they need to innovate faster and improve operational efficiency with AWS services. re:Post has integrated with AWS Builder ID to provide re:Post users an additional sign-in method to join the community without an AWS account.

Additionally, users can link multiple logins with AWS Management Console or AWS Builder ID so that community contributions live in a single re:Post profile. re:Post integration with AWS Builder ID and the ability to link multiple logins to a single re:Post profile allow users to keep their contributions, points earned, and reputation status on re:Post, even if they transfer to a new role or organization.

AWS Customers can now simultaneously build applications in the AWS Management Console and troubleshoot on re:Post with the community, without managing multiple login credentials. AWS enthusiasts that do not have an AWS account can now publish technical guidance and engage with community experts.

Users that are signed on to re:Post using AWS Builder ID will have a seamless experience navigating between other AWS applications such as CodeCatalyst and CodeWhisperer without additional sign-in. Join the re:Post community and consolidate your contributions in one re:Post profile today. 

AWS Step Functions launches large-scale parallel workflows for data processing and serverless applications

AWS Step Functions expands support for iterating and processing large sets of data such as images, logs and financial data in Amazon Simple Storage Service (Amazon S3), a cloud object storage service. 

AWS Step Functions is a visual workflow service capable of orchestrating over 10,000 API actions from over 220 AWS services to automate business processes and data processing workloads.

Now, AWS Step Functions can iterate over objects such as images or logs stored in Amazon S3, then launch and coordinate thousands of parallel workflows to process the data, and save the results of executions to Amazon S3. You can scale thousands of parallel workflow executions, running concurrently, with the new distributed mode of the Map state.

You can use the distributed Map mode to analyze millions of log files for security risks, iterate over terabytes of data for business insights, or scan image and video files for specific objects. To process your data, use compute services such as AWS Lambda and write code in any supported language, or choose from over 220 purpose-built AWS services to accelerate your development.

AWS Step Functions manages the concurrency and error thresholds, provides visibility into the progress of each individual workflow execution in a visual operator console, and gives you control over how faults and errors are handled.

Start using the new distributed Map mode today in the console with Workflow Studio, Command Line Interface (CLI) or SDK. To learn more, please see the AWS Step Functions developer guide and the launch blog.
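To make distributed Map concrete, here's a hedged sketch that defines a state machine whose Map state lists objects under an S3 prefix and fans out child workflows to a Lambda function. The bucket, prefix, function ARN, and role are placeholders, and the Amazon States Language fields follow the distributed-mode structure as we understand it.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Distributed Map state: list objects under an S3 prefix, process each in parallel.
definition = {
    "StartAt": "ProcessLogs",
    "States": {
        "ProcessLogs": {
            "Type": "Map",
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "my-log-bucket", "Prefix": "2022/12/"},
            },
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "ScanFile",
                "States": {
                    "ScanFile": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:scan-log-file",
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 1000,
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="distributed-log-scan",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-distributed-map-role",
)
```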

Introducing AWS Application Composer (Preview)

AWS Application Composer helps developers simplify and accelerate architecting, configuring, and building serverless applications. You can drag, drop, and connect AWS services into an application architecture by using AWS Application Composer’s browser-based visual canvas. AWS Application Composer helps you focus on building by maintaining deployment-ready infrastructure as code (IaC) definitions, complete with integration configuration for each service.

Developers new to building serverless applications can face a steep learning curve when composing applications from multiple AWS services. They need to understand how to configure each service, and then learn and write IaC to deploy their application. When making changes to an existing application, developers can find it difficult to communicate architectural changes with their teams by reading updates to large IaC definition files.

With AWS Application Composer, you can start a new architecture from scratch, or you can import an existing AWS CloudFormation or AWS Serverless Application Model (SAM) template. You can add and connect AWS services, and AWS Application Composer helps generate deployment-ready projects, such as IaC definitions and AWS Lambda function code scaffolding. AWS Application Composer then maintains the visual representation of your application architecture in sync with your IaC, in real time. 

Amazon GameLift now supports customer-managed compute with GameLift Anywhere

Amazon GameLift Anywhere, now generally available (GA), decouples game session management from the underlying compute resources. During the game development phase, developers need instant compute resources to deploy, test, and iterate on their game builds continuously. In addition, customers often have ongoing bare-metal contracts or on-premises game servers and need the flexibility to use their existing infrastructure alongside cloud servers.

Amazon GameLift is a fully managed solution that allows you to manage and scale dedicated game servers for session-based multiplayer games. With this new release, customers can now register and deploy any hardware, including their own local workstations, under a logical construct called an Anywhere Fleet.

Customers can use their Anywhere Fleet the same way as a managed Amazon Elastic Compute Cloud (Amazon EC2) Fleet by connecting to GameLift FlexMatch and Queue services in the cloud via API calls. Amazon GameLift Anywhere enables developers to iterate game builds faster and manage game sessions across any server hosting infrastructure under a single managed solution.   
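For a feel of the workflow, here's a hedged boto3 sketch of registering your own hardware with an Anywhere Fleet. The location name, fleet name, host details, and the exact parameter names are our best understanding of the new APIs and should be treated as assumptions.

```python
import boto3

gamelift = boto3.client("gamelift")

# Hypothetical custom location and on-premises host details.
gamelift.create_location(LocationName="custom-dev-office")

fleet = gamelift.create_fleet(
    Name="anywhere-dev-fleet",
    ComputeType="ANYWHERE",                        # use your own hardware instead of EC2
    Locations=[{"Location": "custom-dev-office"}],
)
fleet_id = fleet["FleetAttributes"]["FleetId"]

# Register a local workstation (or bare-metal server) as compute in the fleet.
gamelift.register_compute(
    FleetId=fleet_id,
    ComputeName="dev-workstation-01",
    IpAddress="10.0.0.25",
    Location="custom-dev-office",
)
```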

Amazon SageMaker Data Wrangler now supports over 40 third-party applications as data sources

This week, AWS announced the general availability of Amazon SageMaker Data Wrangler support for over 40 third-party applications as data sources for machine learning (ML) through the integration with Amazon AppFlow.

Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. Preparing high quality data for ML is often complex and time consuming as it requires aggregating data across various sources and formats using different tools.

With SageMaker Data Wrangler, you can explore and import data from a variety of popular sources, such as Amazon S3, Amazon Athena, Amazon Redshift, Snowflake, Databricks and Salesforce Customer Data Platform.

Starting this week, AWS are making it easier for customers to aggregate data for ML from over 40 third-party application data sources, including Salesforce Marketing, SAP, Google Analytics, LinkedIn and more via Amazon AppFlow. 

Amazon AppFlow is a fully managed service that enables customers to securely transfer data from third-party applications to AWS services such as Amazon S3, and catalog the data in the AWS Glue Data Catalog in just a few clicks. Once the data sources are set up in AppFlow, you can browse tables and schemas from these data sources using Data Wrangler SQL explorer.

You can write Athena queries to preview data to ensure that it is relevant for your use cases, and import data to prepare for ML model training. You can also join data from multiple sources after import to create the right data set for ML.

Once the data is imported, you can quickly assess data quality, clean the data, and create features with 300+ built-in analyses and data transformations. You can also train and deploy models with SageMaker Autopilot, and operationalize the data preparation process in a feature engineering, training, or deployment pipeline using the integration with SageMaker Pipelines from Data Wrangler.

Data Wrangler supports 40+ third-party data sources in all the regions currently supported by AppFlow. This feature is available at no additional charge beyond Data Wrangler and AppFlow costs.

Amazon Athena now supports Apache Spark

Amazon Athena now supports Apache Spark, a popular open-source distributed processing system that is optimized for fast analytics workloads against data of any size. Athena is an interactive query service that helps you query petabytes of data wherever it lives, such as in data lakes, databases, or other data stores. With Amazon Athena for Apache Spark, you get the streamlined, interactive, serverless experience of Athena with Spark, in addition to SQL.

You can build interactive Apache PySpark applications using a simplified notebook experience in the Athena console or through Athena APIs. With Athena, interactive Spark applications start in under a second and run faster with the optimized Spark runtime, so you spend more time on insights, not waiting for results. As Athena takes care of managing the infrastructure and configuring Spark settings, you can focus on your business applications.
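The cell below is a minimal example of what a PySpark cell in an Athena notebook might look like. It assumes the pre-initialized `spark` session that Athena notebooks provide, and a hypothetical Glue table named sales.orders plus an output bucket of our own invention.

```python
# Runs inside an Athena for Apache Spark notebook, where `spark` is pre-created.
df = spark.sql("SELECT region, SUM(amount) AS revenue FROM sales.orders GROUP BY region")
df.show()

# Write the aggregate back to S3 as Parquet for downstream use (hypothetical bucket).
df.write.mode("overwrite").parquet("s3://my-analytics-bucket/aggregates/revenue_by_region/")
```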

Amazon SageMaker Data Wrangler now provides built-in data preparation in notebooks

Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for ML from weeks to minutes. With Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, visualization, cleansing, and preparation, from a low-code visual interface.

Many ML practitioners want to explore datasets directly in notebooks to spot potential data-quality issues, like missing information, extreme values, skewed datasets, or biases, so they can correct those issues and prepare data for training ML models faster. ML practitioners can spend weeks writing boilerplate code to visualize and examine different parts of their dataset to identify and fix potential issues.

Starting this week, Data Wrangler offers a built-in data preparation capability in Amazon SageMaker Studio notebooks that allows ML practitioners to visually review data characteristics, identify issues, and remediate data-quality problems—in just a few clicks directly within the notebooks.

When users display a data frame (a tabular representation of data) in their notebooks, SageMaker Studio notebooks automatically generate charts to help users understand their data distribution patterns, identify potential issues such as incorrect data, missing data, or outliers, and suggest data transformations to fix these issues.

The new capability also enables users to identify target-column data quality issues that will affect ML model performance, such as imbalanced data or mixed data types, and suggests data transformations to fix these issues. Once the ML practitioner selects a data transformation, SageMaker Studio notebooks generate the corresponding code within the notebook so the transformation can be reapplied every time the notebook is run.

This feature is generally available in all the regions currently supported by SageMaker Studio notebooks at no additional charge.

Introducing the Amazon EC2 Spot Ready Software Products

The new Amazon EC2 Spot Ready specialization helps customers identify validated AWS Partner software products that support Amazon EC2 Spot Instances, a compute purchase option that allows customers to utilize spare EC2 capacity at a discount of up to 90% compared to On-Demand prices. Amazon EC2 Spot Ready ensures that customers have a well-architected and cost-optimized solution to help them benefit from EC2 Spot savings for their workloads.

Amazon EC2 Spot Ready software products are vetted by AWS Partner Solutions Architects through use cases so that customers can deploy on Spot with confidence. Customers can easily find top-tier software products knowing that each partner offering adheres to Spot best practices and has demonstrated customer success.

Amazon EC2 Spot Ready Partners make it easy for customers to take advantage of Spot optimizations with a catalog of software products that support Spot.

Introducing AWS Glue Delivery

AWS are excited to announce the new AWS Glue Delivery specialization, which validates AWS Partners with deep expertise and proven success delivering AWS Glue for data integration, data pipeline, and data catalogue use cases. AWS Glue is a scalable, serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. With the ability to scale on demand, AWS Glue helps customers focus on high-value activities that maximize the value of their data.

The AWS Glue Delivery specialization recognizes AWS Partners that pass an exceptionally high technical bar and have a track record of success in serverless data integration with AWS Glue. AWS Glue Delivery Partners are vetted by AWS Partner Solutions Architects and demonstrate customer success through case studies. Learn more about AWS Glue Delivery Partners.

Introducing AWS Graviton Delivery Partners

AWS are thrilled to announce the new AWS Graviton Delivery specialization for AWS partners that excel in enabling the best price performance for workloads in Amazon Elastic Compute Cloud (Amazon EC2). This specialization validates AWS Partners that help customers accelerate and scale their adoption of AWS Graviton to achieve better workload performance and cost savings.

AWS Graviton Delivery Partners offer customers world-class services to implement workloads on AWS Graviton-based EC2 instances. From pilots to production deployment, our verified partners are able to recommend a transition strategy to Graviton and guide the customer throughout the entire process. This partnership enables a seamless transition to working with Graviton-based instances for optimal performance and cost.

AWS Graviton Delivery Partners make the adoption of Graviton simple and effective through their tested and verified experience. Learn more about AWS Graviton Delivery Partners.

Announcing Trusted Language Extensions for PostgreSQL on Amazon Aurora and Amazon RDS

Trusted Language Extensions for PostgreSQL is a new open source development kit to help you build high performance extensions that run safely on PostgreSQL. With Trusted Language Extensions, developers can install extensions written in a trusted language on Amazon Aurora PostgreSQL-Compatible Edition and Amazon Relational Database Service (RDS) for PostgreSQL.

Trusted Language Extensions for PostgreSQL allows developers to more productively create high performance database extensions using popular trusted languages, like JavaScript, Perl, and PL/pgSQL. Trusted Language Extensions provides database administrators control over who can install extensions and a permissions model for running them, letting application developers deliver new functionality as soon as an extension meets their needs.

Trusted Language Extensions for PostgreSQL is an open source project, available on GitHub under the Apache 2.0 license. You can learn more about Trusted Language Extensions in the AWS News blog.

You can get started by launching a new Amazon RDS DB instance directly from the AWS Console. Trusted Language Extensions for PostgreSQL is available in all AWS Regions, except the AWS China Regions. Trusted Language Extensions are available at no additional charge.
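For a sense of the workflow, the sketch below uses psycopg2 to enable pg_tle and register a tiny trusted-language extension on an RDS for PostgreSQL instance. The connection details and extension name are made up, and the pgtle.install_extension argument order is an assumption based on the project's published examples, so treat this as illustrative rather than definitive.

```python
import psycopg2

# Hypothetical connection details for an RDS/Aurora PostgreSQL instance
# that has pg_tle added to shared_preload_libraries.
conn = psycopg2.connect(
    host="mydb.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="appdb", user="admin", password="********",
)
conn.autocommit = True
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS pg_tle;")

# Register a small extension written in a trusted language (plain SQL here).
cur.execute("""
SELECT pgtle.install_extension(
  'greet',
  '1.0',
  'Example extension that greets a user',
  $_tle_$
    CREATE FUNCTION greet(name text) RETURNS text AS $$
      SELECT 'Hello, ' || name;
    $$ LANGUAGE sql;
  $_tle_$
);
""")

cur.execute("CREATE EXTENSION greet;")
cur.execute("SELECT greet('Hava');")
print(cur.fetchone()[0])
```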

Amazon AppFlow now supports over 50 Connectors

Amazon AppFlow announces the release of 22 new data connectors. With this launch, Amazon AppFlow now supports data connectivity to over 50 applications. Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications and AWS services like Amazon S3 and Amazon Redshift.

As enterprises increasingly rely on SaaS services for mission-critical workflows, they face the challenge of collecting data from a growing ecosystem of services into a centralized location to derive business insights using analytics and machine learning. With Amazon AppFlow, you can easily set up data flows in minutes without writing code.

Highlights include new marketing connectors such as Facebook Ads, Google Ads, Instagram Ads, and LinkedIn Ads; customer service and engagement connectors such as MailChimp, SendGrid, Zendesk Sell, Freshdesk, Okta and Typeform; as well as business operations solutions such as Microsoft Teams, Zoom Meetings, Stripe, QuickBooks Online, Jira Cloud and GitHub.

Using the updated Amazon Redshift connector, you can connect to Redshift clusters in a private subnet or Redshift Serverless. Additionally, you can now view run metrics for your flows using Amazon CloudWatch Metrics.

Amazon SageMaker Studio now supports real time collaboration

Amazon SageMaker Studio is a fully integrated development environment (IDE) for machine learning (ML) that enables ML practitioners to perform every step of the machine learning workflow, from preparing data to building, training, tuning, and deploying models.

This week, AWS announced new capabilities in SageMaker Studio to accelerate real time collaboration across ML teams.

By creating shared spaces in SageMaker Studio, users can now access, read, edit, and share the same notebooks in real time. All resources in a shared space are filtered and tagged, making it easier to focus on ML projects and manage costs.

Further, administrators now can provision multiple SageMaker domains in a region in order to separate different lines of business within a single AWS account. Finally, users can now configure a list of suggested Git repository URLs at the SageMaker domain or user profile level to aid collaboration using version control.

Introducing Amazon Managed Streaming for Apache Kafka (MSK) Delivery Partners

Amazon Web Services (AWS) is incredibly excited to announce the new Amazon MSK Service Delivery specialization for AWS partners that help customers migrate and build real-time streaming analytics solutions with fully managed Apache Kafka. 

Amazon MSK provisions your servers, configures your Apache Kafka clusters, replaces servers when they fail, orchestrates server patches and upgrades, architects clusters for high availability, ensures data is durably stored and secured, sets up monitoring and alarms, and runs scaling to support load changes.

With MSK Serverless, getting started with Apache Kafka is even easier. It automatically provisions and scales compute and storage resources and offers throughput-based pricing, so you can use Apache Kafka on demand and pay for the data you stream and retain.

The Amazon MSK Service Delivery specialization provides customers with a vetted list of AWS Partners with proven success in delivering Amazon MSK Service solutions. With Amazon MSK Delivery Partners, customers can migrate and build data streaming solutions on Amazon MSK with confidence and realize cost benefits. 

AWS Machine Learning University announces educator enablement program for higher education

AWS Machine Learning University is now providing a free educator enablement program that prioritizes U.S. community colleges, Minority Serving Institutions (MSIs), and Historically Black Colleges and Universities (HBCUs). Educators can leverage these tools to launch stand-alone courses, certificates, or full degrees in data management (DM), artificial intelligence (AI), and machine learning (ML).

The goal is to make early-career DM/AI/ML jobs more accessible to a broader and more diverse student population. The program offers a suite of ready-to-use tools to faculty, including a library of ready-to-teach DM/AI/ML educational materials, free computing capacity, and comprehensive faculty professional development built around MLU, Amazon's own internal training program for ML practitioners.

AWS Machine Learning University strives to diversify DM/AI/ML graduate pipelines by providing free curriculum, computing development environment, and year-round educator enablement to faculty at all colleges and universities. The new educator enablement program includes comprehensive courses supported by a suite of professional development engagements, giving educators pre-packaged options to launch their own stand-alone courses, certificates, or full degrees.

Faculty, students, and life-long learners may access instructional material through AWS Academy as well as Amazon SageMaker Studio Lab, AWS' free AI/ML development environment that provides computing, storage, and security for anybody to learn and experiment with DM/AI/ML. Faculty at participating institutions will have access to instructor guides, datasets, and MLU support to implement their programs.

Educators get classroom-ready through a community of practice including MLU instructors, Amazon Scholars, and peer faculty. First, they onboard in cohorts via virtual classes to learn the material and deep dive into how to teach it. Next, they select from a menu of year-round professional development opportunities such as a dedicated channel to promote sharing of teaching best practices, a DM/AI/ML education topic series with Amazon Scholars to help stay current on the state of the art in AI/ML education, virtual study sessions moderated by MLU instructors to promote on-going faculty success, and regional events to continue growing and connecting with the community.

For institutions wanting to offer full degree programs, AWS Machine Learning University also provides small group consultations with Amazon’s AI/ML education experts. To learn more about the program and express interest in attending an upcoming bootcamp, visit here.

Amazon GuardDuty RDS Protection now in preview

Amazon GuardDuty now offers threat detection for Amazon Aurora to identify potential threats to data stored in Aurora databases. Amazon GuardDuty RDS Protection profiles and monitors access activity to existing and new databases in your account, and uses tailored machine learning models to accurately detect suspicious logins to Aurora databases.

Once a potential threat is detected, GuardDuty generates a security finding that includes database details and rich contextual information on the suspicious activity, is integrated with Aurora for direct access to database events without requiring you to modify your databases, and is designed to not affect database performance.

Amazon GuardDuty RDS Protection can be enabled with a single click in the GuardDuty console. Utilizing AWS Organizations for multi-account management, Amazon GuardDuty makes it easy for security teams to turn on and manage GuardDuty RDS Protection across all accounts in an organization.

Once enabled, GuardDuty RDS Protection begins analyzing and profiling access to Aurora databases, and when suspicious behaviors or attempts by known malicious actors are identified, GuardDuty issues actionable security findings to the GuardDuty console, AWS Security Hub, Amazon Detective, and Amazon EventBridge, allowing for integration with existing security event management or workflow systems.

During the preview period, Amazon GuardDuty RDS Protection is available to customers in five AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland). Also, in the preview period, Amazon GuardDuty RDS Protection is available to customers at no additional cost.

If you are not using Amazon GuardDuty already, you can enable your 30-day GuardDuty free trial with a single click in the AWS Management Console. To learn more, see Amazon GuardDuty Findings, and to receive programmatic updates on new Amazon GuardDuty features and threat detections, subscribe to the Amazon GuardDuty SNS topic.

Amazon SageMaker JumpStart now enables you to more easily share ML artifacts within your organization

Amazon SageMaker JumpStart now enables you to more easily share machine learning (ML) artifacts, including notebooks and models, across your organization to accelerate model building and deployment. Amazon SageMaker JumpStart is an ML hub that accelerates your ML journey with built-in algorithms and pretrained models from popular model hubs, such as Hugging Face, and end-to-end solutions that solve common use cases.

Many enterprises have multiple data science teams who build ML models and Jupyter notebooks, and many of these artifacts could be leveraged by other science and operations teams to increase productivity. However, it is often challenging to share ML artifacts internally or to set up the execution environment needed to take the models and notebooks into production.

Starting this week, Amazon SageMaker JumpStart helps you to more easily share ML artifacts, including notebooks and models, within your enterprise. You can add ML artifacts developed in SageMaker as well as those developed outside of SageMaker. Users within your organization can browse and select shared models to fine-tune, deploy endpoints, or execute notebooks directly in SageMaker JumpStart.

Announcing AWS Data Exchange for AWS Lake Formation (Preview)

AWS are announcing the preview of AWS Data Exchange for AWS Lake Formation, a new feature that enables data subscribers to find and subscribe to third-party data sets that are managed directly through AWS Lake Formation.

This feature is intended for data subscribers who want to easily integrate third-party data directly into their data lake as well as data providers who want to use AWS Lake Formation and AWS Data Exchange to help streamline their data licensing operations. 

Once data subscribers are entitled to an AWS Data Exchange for AWS Lake Formation data set, they can quickly query, transform, and share access to the data within their AWS account using AWS Lake Formation or across their AWS Organization using AWS License Manager.

Because the data appears within the Lake Formation catalog, subscribers can handle the discovery, access, and usage of third-party data in the same way they handle first-party data. By using AWS Data Exchange for AWS Lake Formation, data engineering teams can spend less time building and managing data pipelines, allowing them to get data to end users faster, helping to speed up time to insight for all data-dependent teams. 

 

Amazon SageMaker Studio announces a redesigned user experience

Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single, web-based visual interface where users can access purpose-built tools to perform all machine learning (ML) development steps, from preparing data to building, training, and deploying ML models.

This week, AWS were excited to announce a redesign that enhances the user experience by improving navigation, discoverability, and overall look and feel for SageMaker Studio.

This redesign includes a new navigation menu that highlights clear points of entry into each of SageMaker’s capabilities. The navigation experience is backed by an information architecture that mirrors the typical machine learning workflow to help users identify the right tools for the job.

The navigation bar is collapsible at any time for users who want to take advantage of the full screen. The navigation menu items lead to new dynamic landing pages that provide handy links to tutorials and help content to get users started and a more expansive view of resources.

Finally, the user interface also introduces a newly designed home page experience that includes one click access to common tasks and workflows, along with a redesigned launcher.

Introducing Amazon SageMaker Ready Software Products

AWS are thrilled to announce the new Amazon SageMaker Ready specialization, which validates world-class AWS Partner software products that integrate with Amazon SageMaker and help customers build machine learning solutions. AWS Partner offerings in the specialization include Data Platforms, Data Pre-Processing & Feature Stores, ML Frameworks, MLOps tools, and Business Decisions & Applications.

Amazon SageMaker is a fully managed machine learning (ML) service that enables data scientists and developers to quickly build, train, and deploy ML models for any use case into a production-ready hosted environment.

Through the Amazon SageMaker Ready specialization, customers can confidently identify partner products that integrate with Amazon SageMaker and allow them to seamlessly execute use cases across the machine learning development lifecycle.

The Amazon SageMaker Ready specialization recognizes AWS Partners that pass a high technical bar for their product integration with Amazon SageMaker with proven customer success.

Announcing AWS Data Exchange for Amazon S3 (Preview)

AWS are announcing the preview of AWS Data Exchange for Amazon S3, a new feature that enables data subscribers to access third-party data files directly from data providers’ Amazon Simple Storage Service (Amazon S3) buckets.

This feature is intended for subscribers who want to easily use third-party data files for their data analysis with AWS services without needing to create or manage data copies, as well as data providers who want to offer in-place access to data hosted in their Amazon S3 buckets.

Once data subscribers are entitled to an AWS Data Exchange for Amazon S3 data set, they can start their data analysis without having to set up their own S3 buckets, copy data files into those S3 buckets, or pay associated storage fees.

Data analysis can be done with AWS services such as Amazon Athena, Amazon SageMaker Feature Store, or Amazon EMR. Subscribers access the same S3 objects that the data provider maintains and are therefore always using the most up-to-date data available, without additional engineering or operational work.

Data providers can easily set up AWS Data Exchange for Amazon S3 (Preview) on top of their existing S3 buckets to share direct access to an entire S3 bucket or specific prefixes and S3 objects. After setup, AWS Data Exchange automatically manages subscriptions, entitlements, billing, and payment.

AWS Glue announces AWS Glue Data Quality (Preview)

AWS Glue announces the preview of AWS Glue Data Quality, a new capability that automatically measures and monitors data lake and data pipeline quality. AWS Glue is a serverless, scalable data integration service that makes it more efficient to discover, prepare, move, and integrate data from multiple sources.

Managing data quality is manual and time-consuming. You must set up data quality rules, validate your data against those rules on a recurring basis, and write code to set up alerts when quality deteriorates. Analysts must manually analyze data, write rules, and then write code to implement those rules.

AWS Glue Data Quality automatically analyzes your data to gather data statistics. It then recommends data quality rules to get started. You can update recommended rules or add new rules using provided data quality rules. If data quality deteriorates, you can then configure actions to alert users.

Data quality rules and actions can also be configured on AWS Glue extract, transform, and load (ETL) jobs on data pipelines. These guidelines can prevent “bad” data from entering data lakes and data warehouses. AWS Glue is serverless, so there is no infrastructure to manage, and AWS Glue Data Quality uses open-source Deequ to evaluate rules. AWS uses Deequ to measure and monitor data quality of petabyte-scale data lakes.  

Deploy SageMaker Data Wrangler for real-time and batch inference and additional configurations to processing jobs

This week, AWS are excited to announce support for deploying data preparation flows created in Data Wrangler to real-time and batch serial inference pipelines, along with additional configurations for Data Wrangler processing jobs in Amazon SageMaker Data Wrangler.

Amazon SageMaker Data Wrangler reduces the time to rapidly prototype and deploy data processing workloads to production, and easily integrates with CI/CD pipelines and MLOps production environments through the SageMaker Processing APIs. When running and scheduling data processing workloads with Data Wrangler to prepare data for training ML models, customers asked for the ability to customize Spark memory and output partition settings for their data preparation workloads at scale.

Next, once customers process their data and train an ML model, they need to deploy both the data transformation pipeline and the ML model behind a SageMaker endpoint for real-time and batch inference use cases. Customers then need to create data processing scripts from scratch to run the same data processing steps at inference time that were applied when training the model, and once their model is deployed they need to ensure their training and deployment scripts are kept in sync.

With this release, you can now easily configure Spark memory configurations and output partition format when running a Data Wrangler processing job to process data at scale. After preparing your data and training an ML model, you can now easily deploy your data transformation pipeline (also called a “data flow”) together with an ML model as part of a serial inference pipeline for both batch and real-time inference applications.

You can also now register your Data Wrangler data flows with SageMaker Model Registry. You can begin deploying your Data Wrangler flow for real-time inference by clicking "Export to > Inference Pipeline (via Jupyter Notebook)" from the Data Flow view in Data Wrangler. Spark memory settings can now be configured as part of the Create job workflow, and partitions can be configured as part of the destination node settings.

Introducing Amazon SageMaker support for shadow testing

Amazon SageMaker supports shadow testing to help you validate performance of new machine learning (ML) models by comparing them to production models. With shadow testing, you can spot potential configuration errors and performance issues before they impact end users. SageMaker eliminates weeks of time spent building infrastructure for shadow testing, so you can release models to production faster.

Testing model updates involves sending a copy of the inference requests received by the production model to the new model and tracking how it performs. However, it can take several weeks of your time to build your own testing infrastructure, mirror inference requests, and compare how models perform.

Amazon SageMaker enables you to evaluate a new ML model by testing its performance against the current deployed production model. Simply select the production model you want to test against, and SageMaker automatically deploys the new model for inference. SageMaker then routes a copy of the inference requests received by the production model to the new model and creates a live dashboard that shows performance differences across key metrics including latency and error rate in real time.

Once you have reviewed the performance metrics and validated the model performance, you can quickly deploy the model in production. 

Launch Amazon SageMaker Autopilot experiments from Amazon SageMaker Pipelines to easily automate MLOps workflows

Amazon SageMaker Autopilot, a low-code machine learning (ML) service which automatically builds, trains, and tunes the best ML models based on your data, is now integrated with Amazon SageMaker Pipelines, the first purpose-built continuous integration and continuous delivery (CI/CD) service for ML. This enables the automation of an end-to-end flow of building ML models using SageMaker Autopilot and integrating models into subsequent CI/CD steps.

Starting this week, you can add an automated training step (AutoMLStep) in SageMaker Pipelines and invoke a SageMaker Autopilot experiment with Ensemble training mode. As an example, let’s consider building a training and evaluation ML workflow for a fraud detection use case with SageMaker Pipelines.

You can now launch a SageMaker Autopilot experiment using the AutoML Step which will automatically run multiple trials to find the best model on a given input dataset. After the model package for the best model is created using the CreateModel step, its performance can be evaluated on test data using the Transform step within SageMaker Pipelines. Eventually, the model can be registered into the SageMaker Model Registry using the RegisterModel step. 

Amazon SageMaker Studio now supports automatic conversion of notebook code to production-ready jobs

Amazon SageMaker Studio is a fully integrated development environment (IDE) for machine learning (ML) that enables ML practitioners to perform every step of the machine learning workflow, from preparing data to building, training, tuning, and deploying models.

This week, AWS were excited to announce a new capability in SageMaker Studio notebooks that enables automatic conversion of notebook code to production-ready jobs.

When data scientists and developers move their notebooks into production, they manually copy the snippets of code from the notebook into a script, package the script with all its dependencies into a container, and then schedule the container to run as a job.

In addition, if the job needs to be run on a schedule, they must set up, configure, and manage a continuous integration and continuous delivery (CI/CD) pipeline to automate their deployments. It can take weeks to get all the necessary infrastructure set up, which takes time away from core ML development activities.

SageMaker Studio now lets ML practitioners select a notebook and automate it to run as a job in production with just a few simple clicks, right from the Studio visual interface.

Once a job is scheduled, SageMaker Studio automatically takes a snapshot of the entire notebook, packages it along with its dependencies in a container, builds the infrastructure, runs the notebook as an automated job, and de-provisions the infrastructure upon job completion, reducing the time it takes to move a notebook to production from weeks to hours.

Amazon S3 Access Points can now be used to securely delegate access permissions for shared datasets to other AWS accounts

Amazon S3 Access Points simplify data access for any AWS service or customer application that stores data in S3 buckets. With S3 Access Points, you create unique access control policies for each access point to more easily control access to shared datasets.

Now, bucket owners are able to authorize access via access points created in other accounts. In doing so, bucket owners always retain ultimate control over data access, but can delegate responsibility for more specific IAM-based access control decisions to the access point owner.

This allows you to securely and easily share datasets with thousands of applications and users, and at no additional cost.

S3 Access Points help you more easily configure the right access controls for shared datasets, simplifying access management for multiple applications. Each access point has its own policy that defines which requests and VPCs are allowed to use the access point, customized for each application or use case.

With cross-account access points, you can allow trusted accounts, such as the account administrator of a different team or a partner organization, to self-serve permissions for datasets. Additionally, you don't have to make continuous changes to a bucket policy for every permission change for applications or roles within these trusted accounts.
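The snippet below sketches the bucket-owner side of this delegation: a bucket policy that defers object-level decisions to access points owned by a trusted account, using the s3:DataAccessPointAccount condition key. The bucket name and account IDs are placeholders, so adapt before using.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "shared-dataset-bucket"          # hypothetical bucket
TRUSTED_ACCOUNT = "222233334444"          # account allowed to manage access via its access points

# Delegate access decisions to access points owned by the trusted account.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT}:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {
                "StringEquals": {"s3:DataAccessPointAccount": TRUSTED_ACCOUNT}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

The trusted account then creates and manages its own access point policies for individual applications or roles, without further changes to the bucket policy.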

Amazon Redshift data sharing now supports centralized access control with AWS Lake Formation (Preview)

Amazon Redshift data sharing enables you to efficiently share live data across Amazon Redshift data warehouses. Amazon Redshift now supports simplified governance of Amazon Redshift data sharing by enabling you to use AWS Lake Formation to centrally manage permissions on data being shared across your organization.

With the new Amazon Redshift data sharing managed by AWS Lake Formation, you can view, modify, and audit permissions on the tables and views in the Redshift datashares using Lake Formation APIs and the AWS Console, and allow the Redshift datashares to be discovered and consumed by other Redshift data warehouses.

With Lake Formation managed data sharing, you now have better visibility and control of data shared within and across accounts in your organization. Help improve the security of your data by enabling security administrators to use Lake Formation to manage granular entitlements such as table-level, column-level, or row-level access to tables being shared in Redshift data sharing and Redshift external tables.

With AWS Lake Formation managed data sharing, you can define policies once and enforce those consistently for multiple consumers.

Amazon SageMaker now supports geospatial ML (preview)

Amazon SageMaker now supports geospatial machine learning (ML), making it easier for data scientists and ML engineers to build, train, and deploy models using geospatial data. Today, the majority of all data generated contains geospatial information, but only a small fraction of it is used for ML because accessing, processing, and visualizing the data is complex, time consuming, and expensive.

SageMaker’s new geospatial capabilities simplify the process of building, training, and deploying models with geospatial data. You can now access readily available geospatial data sources, efficiently process or enrich large-scale geospatial datasets with purpose-built operations, and accelerate model building by selecting pretrained ML models.

You can then analyze and explore the generated predictions on an interactive map within SageMaker and share and collaborate on results. You can use SageMaker geospatial capabilities for a wide range of use cases, such as supporting sustainable urban development, maximizing harvest yield and food security, assessing risk and insurance claims, and predicting retail demand.

Amazon Redshift now supports auto-copy from Amazon S3

Amazon Redshift launches the preview of auto-copy support to simplify data loading from Amazon S3 into Amazon Redshift. You can now set up continuous file ingestion rules to track your Amazon S3 paths and automatically load new files without the need for additional tools or custom solutions. 

Amazon Redshift customers run COPY statements to load data into their local tables from various data sources including Amazon S3. You can now store a COPY statement into a Copy Job, which automatically loads the new files detected in the specified Amazon S3 path.

Copy Jobs track previously loaded files and exclude them from the ingestion process. Their activity can be monitored using the system tables. Copy Jobs can also be executed manually to reuse copy statements and prevent data duplication when automated loading is not needed.
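As an illustration of the idea, the sketch below submits a COPY statement with a copy-job clause through the Redshift Data API. The JOB CREATE syntax shown is a preview-time assumption, and the workgroup, table, S3 path, and IAM role are placeholders; check the Redshift documentation before relying on it.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Hypothetical workgroup, table, S3 path, and IAM role. The COPY ... JOB CREATE
# clause below is illustrative of the preview auto-copy syntax.
sql = """
COPY sales.orders
FROM 's3://my-ingest-bucket/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS CSV
JOB CREATE orders_auto_copy AUTO ON;
"""

response = redshift_data.execute_statement(
    WorkgroupName="analytics-workgroup",   # or ClusterIdentifier=... for a provisioned cluster
    Database="dev",
    Sql=sql,
)
print(response["Id"])
```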

Amazon DocumentDB (with MongoDB compatibility) Elastic Clusters is now generally available

Amazon DocumentDB (with MongoDB compatibility) is announcing the general availability of Amazon DocumentDB Elastic Clusters, a new type of Amazon DocumentDB cluster that lets you elastically scale your document database to handle millions of reads and writes per second with petabytes of storage.

With Amazon DocumentDB Elastic Clusters, you can leverage the MongoDB Sharding API to create scalable collections that can be petabytes in size. You can start with Amazon DocumentDB Elastic Clusters for small applications and scale your clusters to handle millions of reads and writes per second, and petabytes of storage capacity, as your applications grow.

Scaling Amazon DocumentDB Elastic Clusters is as simple as changing the number of cluster shards in the console; the rest is handled by the Amazon DocumentDB service, and scaling can complete in minutes rather than the hours it would take manually. You can also scale down at any time to save on cost. 

Amazon DocumentDB Elastic Clusters provides many of the same management capabilities as Amazon DocumentDB instance-based clusters, including Multi-AZ support, Amazon CloudWatch integration, automated patching, and snapshot backups. 

You can continue to use existing MongoDB tools and application drivers to work with Amazon DocumentDB Elastic Clusters. In addition to simple queries, you can also leverage powerful aggregation pipelines in Amazon DocumentDB Elastic Clusters to filter, group, process, and sort data across shards.
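Since Elastic Clusters expose the MongoDB sharding API, a pymongo sketch like the one below shows how a collection might be sharded. The cluster endpoint, credentials, shard key, and connection options are placeholder assumptions for your own cluster.

```python
from pymongo import MongoClient

# Hypothetical Elastic Cluster endpoint and credentials.
client = MongoClient(
    "mongodb://user:password@my-elastic-cluster.us-east-1.docdb-elastic.amazonaws.com:27017",
    tls=True,
    retryWrites=False,
)

# Shard the orders collection on a hashed customer id so writes spread across shards.
client.admin.command(
    "shardCollection",
    "store.orders",
    key={"customerId": "hashed"},
)

# Normal CRUD and aggregation pipelines then work across shards.
client["store"]["orders"].insert_one({"customerId": "c-123", "total": 42.50})
```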

AWS announces Amazon EC2 Inf2 instances (Preview)

This week, AWS announced the preview of Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instances, which are designed to deliver high performance at the lowest cost in Amazon EC2 for the most demanding deep learning (DL) inference applications.

Inf2 instances are powered by up to 12 AWS Inferentia2 chips, the third DL accelerator designed by AWS. Inf2 instances offer 3x higher compute performance, up to 4x higher throughput, and up to 10x lower latency compared to Inf1 instances.

You can use Inf2 instances to run DL applications for natural language understanding, translation, video and image generation, speech recognition, personalization, and more.

They are optimized to deploy complex models, such as large language models (LLM) and vision transformers, at scale while also improving the Inf1 instances’ price-performance benefits for smaller models. To support ultra-large 100B+ parameter models, Inf2 instances are the first inference-optimized instances in Amazon EC2 to support scale-out distributed inference with ultra-high-speed connectivity between accelerators.

Inf2 instances offer up to 2.3 petaflops of DL performance, up to 384 GB of accelerator memory with 9.8 TB/s bandwidth, and NeuronLink, an intra-instance ultra-high-speed, nonblocking interconnect. Inf2 instances also offer up to 50% better performance per watt compared to GPU-based instances in Amazon EC2 and help you meet your sustainability goals.

The AWS Neuron SDK is natively integrated with popular ML frameworks, such as PyTorch and TensorFlow, so you can deploy your DL applications on Inf2 with a few lines of code.  
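As a rough sketch of that developer experience, the example below compiles a small PyTorch model with the Neuron SDK's torch_neuronx tracer, which is the documented path on Trn1 and is expected to carry over to Inf2. The model and input shapes are placeholders, and availability of this exact flow on Inf2 during the preview is an assumption.

```python
import torch
import torch_neuronx  # AWS Neuron SDK PyTorch integration

# Placeholder model: a tiny feed-forward network standing in for a real DL workload.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# Compile the model for Neuron accelerators (Inferentia2 on Inf2 instances).
neuron_model = torch_neuronx.trace(model, example_input)

# Inference runs on the accelerator; the call signature is unchanged.
print(neuron_model(example_input).shape)
```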

Announcing Amazon Redshift integration for Apache Spark with Amazon EMR

Amazon EMR announces Amazon Redshift integration with Apache Spark. This integration helps data engineers build and run Spark applications that can consume and write data from an Amazon Redshift cluster. Starting with Amazon EMR 6.9, this integration is available across all three deployment models for EMR: EC2, EKS, and Serverless.

You can use this integration to build applications that write directly to Redshift tables as part of your ETL workflows, or to combine data in Redshift with data from other sources. Developers can load data from Redshift tables into Spark data frames or write data to Redshift tables, without having to worry about downloading open source connectors to connect to Redshift.

Amazon Redshift integration for Apache Spark enables applications on Amazon EMR that access Redshift data to run up to 10x faster compared to existing Redshift-Spark connectors. It supports pushing down relational operations such as joins, aggregations, sort and scalar functions from Spark to Redshift to improve your query performance.

It supports IAM-based roles to enable single sign-on capabilities and integrates with AWS Secrets Manager for securely managing keys.
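To illustrate, here's a hedged PySpark sketch of reading a Redshift table from an EMR 6.9 Spark job. The data source name and option keys follow the community spark-redshift connector that this integration builds on, and the JDBC URL, temp directory, and IAM role are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-read-example").getOrCreate()

# Placeholder connection details; option names follow the spark-redshift
# community connector that the Amazon Redshift integration builds on.
df = (
    spark.read.format("io.github.spark_redshift_community.spark.redshift")
    .option("url", "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev")
    .option("dbtable", "public.sales")
    .option("tempdir", "s3://my-emr-temp-bucket/redshift/")
    .option("aws_iam_role", "arn:aws:iam::123456789012:role/redshift-spark-role")
    .load()
)

# Joins, aggregations, and filters on df can be pushed down to Redshift.
df.groupBy("region").count().show()
```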

AWS announces Amazon VPC Lattice (Preview)

This week, AWS announced the preview of Amazon VPC Lattice, an application layer networking service that makes it simple to connect, secure, and monitor service-to-service communication. You can use VPC Lattice to enable cross-account, cross-VPC connectivity, and application layer load balancing for your workloads in a consistent way regardless of the underlying compute type – instances, containers, and serverless. 

VPC Lattice handles common tasks required for service-to-service communication such as service discovery, request level routing and load balancing, authentication, authorization, and generates detailed metrics and logs to give you visibility into how your service is performing. In addition, Amazon VPC Lattice is a fully managed service, removing the need to install and manage additional infrastructure, such as host-based agents or sidecar proxies, making it easier to connect your new and existing applications.

During the preview, Amazon VPC Lattice is available in the US West (Oregon) Region. To preview Amazon VPC Lattice, see the sign-up page.

Announcing Amazon OpenSearch Serverless (Preview)

Amazon OpenSearch Service now offers a new serverless option, Amazon OpenSearch Serverless. This option simplifies the process of running petabyte-scale search and analytics workloads without having to configure, manage, or scale OpenSearch clusters.

OpenSearch Serverless automatically provisions and scales the underlying resources to deliver fast data ingestion and query responses for even the most demanding and unpredictable workloads. With OpenSearch Serverless, you pay only for the resources consumed.

OpenSearch Serverless decouples compute and storage and separates the indexing (ingest) components from the search (query) components, with Amazon Simple Storage Service (Amazon S3) as the primary data storage for indexes.

With this decoupled architecture, OpenSearch Serverless can scale search and indexing functions independently of each other, and independently of the indexed data in Amazon S3. For example, when an application-monitoring workload receives a sudden burst of logging activities during an availability event, OpenSearch Serverless instantly scales the resources to ingest and store the data without impacting query response times.

To get started with OpenSearch Serverless, developers can create new collections, a logical grouping of indexed data that works together to support a workload. OpenSearch Serverless supports the same ingest and query APIs as OpenSearch, so you can get started in seconds with your existing clients and applications—all while building data visualizations with serverless OpenSearch Dashboards.
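Here's a minimal boto3 sketch of creating a collection. The collection name is a placeholder, and in practice you would also create encryption, network, and data access policies before the collection becomes usable.

```python
import boto3

aoss = boto3.client("opensearchserverless")

# Hypothetical time-series collection for application logs.
response = aoss.create_collection(
    name="app-logs",
    type="TIMESERIES",            # or "SEARCH" for search workloads
    description="Log analytics collection for the demo application",
)
print(response["createCollectionDetail"]["status"])
```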

AWS Announces Torn Write Prevention for EC2 I4i instances, EBS, and Amazon RDS

Torn Write Prevention (TWP) is a feature that ensures 16KiB write operations are not torn in the event of operating system crashes or power loss during write transactions. This feature is available for AWS customers using instance store on AWS Nitro SSD based EC2 I4i storage optimized instances, Amazon Elastic Block Store (EBS), a block storage service, when attached to all EC2 Nitro-based instances, and Amazon Relational Database Services (RDS), a fully managed, open-source cloud database.

TWP enables customers running databases such as MySQL or MariaDB on EC2, EBS, and managed services like Amazon RDS to turn off the double-write operation, thereby improving database transactions per second (TPS) by up to 30% without compromising the resiliency of their workloads.

With additional performance unlocked, customers can now support their business growth without having to over provision or scale up their clusters, saving cost. For Amazon RDS users, Amazon RDS Optimized Writes uses TWP’s technology to improve write transaction throughput by up to 2x for RDS for MySQL customers at no additional cost. 

Customers using AWS EC2 I4i instances can use TWP in the Regions where I4i instances are available. For customers with EBS volumes attached to Nitro-based instances, TWP is available in US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney, Tokyo), with support for more Regions coming soon.

Contact Lens for Amazon Connect now provides conversational analytics for chat

Contact Lens for Amazon Connect now provides conversational analytics capabilities for Amazon Connect Chat, extending the machine learning powered analytics to better assess chat contacts. These capabilities enable businesses to understand customer sentiment in both agent and chatbot conversations, redact sensitive customer information, and monitor agent compliance with company guidelines to help improve agent performance and customer experience. 

With Contact Lens’ conversational analytics for chat, businesses can identify contacts where customers had issues based on specific keywords, sentiment score, contact categories, and agent response time. Contact Lens also provides chat summarization, a capability that uses machine learning to classify key parts of the customer’s conversation (e.g. issue, outcome, or action item) and enables businesses to dive deep into specific sections of the chat transcript.

Contact Lens’ conversational analytics for chat can also detect and redact sensitive customer information (e.g., name, credit card details, social security number, etc.) from chat transcripts and provide access to both redacted and unredacted chat transcripts. 

Amazon Kinesis Data Firehose adds support for data stream delivery to Amazon OpenSearch Serverless

Amazon Kinesis Data Firehose can now deliver streaming data to Amazon OpenSearch Serverless. With a few clicks, you can ingest, transform, and reliably deliver streaming data into an Amazon OpenSearch Serverless collection without building and managing your own data ingestion and delivery infrastructure. Kinesis Data Firehose is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
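
For example, a hedged boto3 sketch of creating such a delivery stream is shown below; the destination parameter name and its fields reflect my reading of the Firehose CreateDeliveryStream API and should be verified against the current API reference, and the role, endpoint, index, and bucket values are placeholders.

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Minimal sketch: a DirectPut stream that delivers records into an
# OpenSearch Serverless collection, with S3 backup for failed documents.
firehose.create_delivery_stream(
    DeliveryStreamName="logs-to-aoss",
    DeliveryStreamType="DirectPut",
    AmazonOpenSearchServerlessDestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-aoss-role",
        "CollectionEndpoint": "https://abc123.us-east-1.aoss.amazonaws.com",
        "IndexName": "app-logs",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-aoss-role",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```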

OpenSearch Serverless is a new serverless option offered by Amazon OpenSearch Service. OpenSearch Serverless makes it simple to run petabyte scale search and analytics workloads without having to configure, manage, or scale OpenSearch clusters. OpenSearch Serverless automatically provisions and scales the underlying resources to deliver fast data ingestion and query responses for even the most demanding and unpredictable workloads.

AWS announces Amazon Redshift integration for Apache Spark

Amazon Redshift integration for Apache Spark helps developers seamlessly build and run Apache Spark applications on Amazon Redshift data. If you are using AWS analytics and machine learning (ML) services—such as Amazon EMR, AWS Glue, and Amazon SageMaker—you can now build Apache Spark applications that read from and write to your Amazon Redshift data warehouse without compromising on the performance of your applications or transactional consistency of your data. 

Amazon Redshift integration for Apache Spark builds on an existing open source connector project and enhances it for performance and security, helping customers gain up to 10x faster application performance. We thank the original contributors on the project who collaborated with us to make this happen. As we make further enhancements we will continue to contribute back into the open source project.

Amazon Redshift integration for Apache Spark minimizes the cumbersome and often manual process of setting up a spark-redshift open-source connector and reduces the time needed to prepare for analytics and ML tasks. You only need to specify the connection to your data warehouse and can start working with Amazon Redshift data from your Apache Spark-based applications in seconds.

You can use several pushdown capabilities for operations such as sort, aggregate, limit, join, and scalar functions so that only the relevant data is moved from your Amazon Redshift data warehouse to the consuming Spark application. This allows you to improve the performance of your applications. You can also help make your applications more secure by using AWS Identity and Access Management (IAM) credentials to connect to Amazon Redshift.
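
The PySpark sketch below shows what reading a Redshift table can look like with the community spark-redshift style connector this integration builds on; the format string, JDBC URL, temporary S3 directory, IAM role, and table name are assumptions to adapt to your environment (on Amazon EMR and AWS Glue the connector comes pre-installed).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-spark-example").getOrCreate()

# Hypothetical connection details; adjust to your cluster or workgroup.
df = (
    spark.read.format("io.github.spark_redshift_community.spark.redshift")
    .option("url", "jdbc:redshift://example.abc123.us-east-1.redshift.amazonaws.com:5439/dev")
    .option("dbtable", "public.sales")
    .option("tempdir", "s3://my-temp-bucket/redshift-spark/")
    .option("aws_iam_role", "arn:aws:iam::111122223333:role/redshift-spark-role")
    .load()
)

# Filters and aggregations like these can be pushed down to Redshift,
# so only the relevant rows leave the warehouse.
df.filter("region = 'EMEA'").groupBy("product").count().show()
```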

Amazon Connect forecasting, capacity planning, and scheduling is now generally available

Amazon Connect forecasting, capacity planning, and scheduling, now generally available, provides new machine learning (ML)–powered capabilities for contact centers. Forecasting, capacity planning, and scheduling help your contact center managers forecast contact demand, determine optimal staffing levels, and ensure the right agents are available at the right time to meet your operational and business goals.

With forecasting, capacity planning, and scheduling, agents have the flexibility to choose when they want to work overtime or take time off, within predetermined, manager-defined limits, without the need for manual approvals.

When agents accept overtime or time-off slots, Amazon Connect uses ML to make real-time schedule updates, such as moving or creating additional rest breaks. Automation frees managers to focus on reviewing performance metrics and coaching agents.

Additionally, contact center managers can track agents' adherence to planned schedules in real time. You can try out these capabilities by checking a single box, without any extra cost, effort, or time.

Announcing Amazon EC2 Hpc6id instances

AWS announced the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Hpc6id instances. These instances are optimized to efficiently run memory bandwidth-bound, data-intensive high performance computing (HPC) workloads, such as finite element analysis and seismic reservoir simulations. With EC2 Hpc6id instances, you can lower the cost of your HPC workloads while taking advantage of the elasticity and scalability of AWS.

EC2 Hpc6id instances are powered by 64 cores of 3rd Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.5 GHz, 1,024 GB of memory, and up to 15.2 TB of local NVMe solid state drive (SSD) storage. EC2 Hpc6id instances, built on the AWS Nitro System, offer 200 Gbps Elastic Fabric Adapter (EFA) networking for high-throughput inter-node communications that enable your HPC workloads to run at scale.

The AWS Nitro System is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software. It delivers high performance, high availability, and high security while reducing virtualization overhead.

EC2 Hpc6id instances are available in the following AWS Regions: US East (Ohio) and AWS GovCloud (US-West). To optimize networking for tightly coupled workloads, EC2 Hpc6id instances are accessible in a single Availability Zone in each Region.

AWS announces Amazon Aurora zero-ETL integration with Amazon Redshift

Amazon Aurora now supports zero-ETL integration with Amazon Redshift, to enable near real-time analytics and machine learning (ML) using Amazon Redshift on petabytes of transactional data from Aurora. Within seconds of transactional data being written into Aurora, the data is available in Amazon Redshift, so you don’t have to build and maintain complex data pipelines to perform extract, transform, and load (ETL) operations.

This zero-ETL integration also enables you to analyze data from multiple Aurora database clusters in the same new or existing Amazon Redshift instance to derive holistic insights across many applications or partitions.

With near real-time access to transactional data, you can leverage Amazon Redshift's analytics capabilities such as built-in ML, materialized views, data sharing, and federated access to multiple data stores and data lakes to derive insights from transactional and other data.

Amazon Redshift announces general availability of real-time streaming ingestion for Amazon KDS and Amazon MSK

Amazon Redshift now supports real-time streaming ingestion for Amazon Kinesis Data Streams (KDS) and Amazon Managed Streaming for Apache Kafka (MSK). Amazon Redshift streaming ingestion eliminates the need to stage streaming data in Amazon S3 before ingesting it into Amazon Redshift, enabling customers to achieve low latency, measured in seconds, while ingesting hundreds of megabytes of streaming data per second into their data warehouse. 

Data engineers, data analysts, and big data developers are evolving their analytics from batch to real-time, adopting streaming engines like Amazon KDS and Amazon MSK, to implement near real-time responsive logic and analytics on streaming application data.

Currently, customers who want to ingest real-time data from services like Amazon KDS and Amazon MSK into Amazon Redshift must first stage the data in Amazon S3 and use the COPY command, which results in latencies measured in minutes.

With the new streaming ingestion capability in Amazon Redshift, you can use SQL (Structured Query Language) within Redshift to connect to and directly ingest data from multiple Amazon KDS streams or multiple Amazon MSK topics simultaneously. Amazon Redshift streaming ingestion simplifies data pipelines by letting you create materialized views directly on top of streams.

The materialized views can also include SQL transforms as part of your ELT (Extract Load Transform) pipeline.

Once the materialized views are defined, streaming data is automatically and continuously ingested from the KDS stream or MSK topic into the Amazon Redshift streaming materialized view when the Auto Refresh feature is enabled.

You can also choose to manually refresh the streaming materialized view when direct control over ingest scheduling is desired. This approach allows you to perform downstream processing and transformations of streaming data using existing Amazon Redshift tools and SQL that you are familiar with, at no additional cost.
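
To make that flow concrete, the sketch below issues the streaming-ingestion DDL through the Redshift Data API; the cluster identifier, database, IAM role, stream name, and payload handling are placeholders, and the exact stream column names should be confirmed against the Redshift streaming ingestion documentation.

```python
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

def run_sql(sql: str) -> None:
    """Submit a statement to a (placeholder) provisioned Redshift cluster."""
    rsd.execute_statement(
        ClusterIdentifier="example-cluster", Database="dev", DbUser="awsuser", Sql=sql
    )

# 1. Map a Kinesis Data Stream into Redshift as an external schema.
run_sql("""
CREATE EXTERNAL SCHEMA kds
FROM KINESIS
IAM_ROLE 'arn:aws:iam::111122223333:role/redshift-streaming-role';
""")

# 2. Create a streaming materialized view; AUTO REFRESH keeps it ingesting continuously.
run_sql("""
CREATE MATERIALIZED VIEW clickstream_mv AUTO REFRESH YES AS
SELECT approximate_arrival_timestamp,
       JSON_PARSE(FROM_VARBYTE(kinesis_data, 'utf-8')) AS payload
FROM kds."clickstream";
""")
```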

Announcing AWS SimSpace Weaver

This week, AWS announced AWS SimSpace Weaver, a new fully managed compute service that helps you deploy large-scale spatial simulations in the cloud. With SimSpace Weaver, you can create seamless virtual worlds with millions of objects that can interact with one another in real time without managing the backend infrastructure.

Historically, spatial simulations were generally confined to running on a single server. This limited the number and complexity of dynamic entities that developers could build in their simulations.

With SimSpace Weaver, you can break down the simulation world into smaller, discrete spatial areas and distribute the task of running the simulation code across multiple Amazon Elastic Compute Cloud (Amazon EC2) instances.

SimSpace Weaver automatically provisions the requested number of EC2 resources, networks them together, and maintains synchronized simulation time across the distributed cluster. It manages the complexities of data replication and object transfer across Amazon EC2 instances so that you can spend more time developing simulation code and content. You can use your own custom simulation engine or popular third-party tools such as Unity and Unreal Engine 5 with SimSpace Weaver.

To quickly iterate on simulation code, use the SimSpace Weaver Local Development environment to test on your hardware. The local environment uses the same SimSpace Weaver APIs as the cloud service, so you can transition to AWS without needing to modify any code when it’s time to scale. 

AWS KMS launches External Key Store

This week, AWS Key Management Service (AWS KMS) introduced the External Key Store (XKS), a new feature for customers who want to protect their data with encryption keys stored in an external key management system under their control.

This capability gives customers new flexibility to encrypt or decrypt data with cryptographic keys, with independent authorization and auditing, in an external key management system outside of AWS.

XKS may help you address your compliance needs where encryption keys for regulated workloads must be outside AWS and solely under your control. 

To provide customers with a broad range of external key manager options, AWS KMS developed the XKS specification with feedback from several HSM, key management and integration service providers, including Thales, Entrust, Salesforce, T-Systems, Atos, Fortanix, and HashiCorp.

For information about availability, pricing, and how to use XKS with solutions from these vendors, consult the vendor’s documentation.

Introducing Amazon Security Lake (Preview)

Amazon Security Lake automatically centralizes security data from cloud, on-premises, and custom sources into a purpose-built data lake stored in your account. Security Lake makes it easier to analyze security data so that you can get a more complete understanding of your security across the entire organization.

You can also improve the protection of your workloads, applications, and data. Security Lake automatically gathers and manages all your security data across accounts and Regions. You can use your preferred analytics tools while retaining control and ownership of your security data. 

Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard. It helps normalize and combine security data from AWS and a broad range of enterprise security data sources. Now, your analysts and engineers can get broad visibility to investigate and respond to security events and improve your security across the cloud and on premises.

Once enabled, Security Lake automatically creates a security data lake in a Region that you select for rolling up your global data. AWS log and security data sources are automatically collected in your selected Amazon Simple Storage Service (Amazon S3) bucket for existing and new accounts.

They are normalized into the OCSF format, including AWS CloudTrail management events, Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, Amazon Route 53 Resolver query logs, and security findings from over 50 solutions integrated through AWS Security Hub. You can also bring data into your security data lake from third-party security solutions and your custom data that you have converted into OCSF.

This data can include logs from internal applications or network infrastructure. Security Lake manages the lifecycle of your data with customizable retention settings and manages storage costs with automated storage tiering.

Introducing Amazon Omics

Amazon Omics is a new purpose-built service that helps healthcare and life science organizations store, query, and analyze genomic, transcriptomic, and other omics data and then generate insights from that data to improve health and advance scientific discoveries.

Omics is designed to support large scale analysis and collaborative research so you can store and, together with other AWS services, analyze genome data for entire populations. Amazon Omics also automates provisioning and scaling of bioinformatics workflows, so you can run analysis pipelines at production scale and spend more time on research and innovation.

With Amazon Omics, you can bring genomic, biological, and population health data together to generate insights and offer more personalized care with multimodal analysis. For example, you can train ML models with Amazon SageMaker to help researchers predict whether individuals are predisposed to certain diseases.

You can also combine an individual’s genome data with their medical history from Amazon HealthLake to deliver better diagnosis and personalized treatment plans. Additionally, Amazon Omics is HIPAA eligible.  

Announcing AWS Supply Chain (Preview)

AWS Supply Chain is a cloud-based application that helps supply chain leaders mitigate risks and lower costs to increase supply chain resilience. AWS Supply Chain unifies supply chain data, provides machine learning (ML)–powered actionable insights, and offers built-in contextual collaboration, all of which help you increase customer service levels by reducing stockouts and help you lower costs from overstock.

AWS Supply Chain provides a real-time visual map feature showing the level and health of inventory in each location, ML-powered insights, and targeted watchlists to alert you to potential risks.

When a risk is uncovered, AWS Supply Chain provides inventory rebalancing recommendations and built-in, contextual collaboration tools that make it easier to coordinate across teams to implement solutions. AWS Supply Chain connects to your existing enterprise resource planning (ERP) and supply chain management systems, without replatforming, upfront licensing fees, or long-term contracts. 

Amazon Connect now provides step-by-step guides in agent workspace (preview)

Amazon Connect agent workspace now provides a step-by-step experience (preview) that guides agents by identifying customer issues and recommending subsequent actions. With Amazon Connect, you can create workflows that walk agents through custom UI pages that suggest what to do at a given moment during a customer interaction. Detailed step-by-step guides increase agent productivity and decrease training time.

Step-by-step guides within agent workspace programmatically infer customer intent and recommend discrete next actions for agents within a guided experience. For example, when a customer calls, agent workspace presents the agent with the likely issue based on the customer’s history or current context (such as a lost order).

Then agent workspace guides the agent through the actions needed to help quickly resolve the issue (such as initiating a replacement order). You can design workflows for various types of customer interactions and present agents with different step-by-step guides based on customer context, such as call queues, customer information, and IVR responses.

New question types for Amazon QuickSight Q

This week, Amazon QuickSight announces support for two new question types that simplify and scale complex analytical tasks using natural language. Business users type “forecast” to see future trajectories for up to 3 measures simultaneously. “Why” performs contribution analysis to automatically identify key drivers.

These new capabilities enable business users to instantly get insights previously only accessible by enlisting the help of trained analysts. 'Forecast' questions make it fast and easy to predict trends and help business users understand what they should expect so they can act quickly and plan accordingly.

'Why' questions are easy to ask and natural to think of, so business users can quickly pinpoint insights they need to know, rather than manually analyzing a body of data to discover contributing changes.

Amazon Connect announces Contact Lens agent performance evaluation forms (Preview)

Contact Lens for Amazon Connect now provides a set of agent performance evaluation capabilities (preview) that enable contact center managers to create evaluation forms with criteria (e.g., adherence to talk scripts or compliance with sensitive data collection practices) that can be scored using Contact Lens' machine learning-powered conversational analytics.

Managers can assess agent performance alongside contact details, recordings, transcripts, and summaries, without the need to switch applications. These capabilities enable managers to assess more agent/customer interactions while reducing the amount of time they spend identifying performance issues and coaching agents to perform their best. 

Amazon Redshift announces support for Dynamic Data Masking (Preview)

Amazon Redshift already supports role-based access control, row-level security, and column-level security to enable organizations to enforce fine-grained security on Redshift data. Amazon Redshift now extends these security features by supporting Dynamic Data Masking (DDM) that allows you to simplify the process of protecting sensitive data in your Amazon Redshift data warehouse.

With dynamic data masking, you control access to your data through simple SQL-based masking policies that determine how Redshift returns sensitive data to the user at query time. Dynamic data masking makes it simple for you to adapt to changing privacy requirements without altering underlying data or updating SQL queries.

With this capability, as a security administrator, you can create masking policies to define consistent, format-preserving, and irreversible masked data values. You can apply masking to a specific column or a list of columns in a table.

Also, you have the flexibility of choosing how to show the masked data. For example, you can completely hide all the information about the data, you can replace partial real values with wildcard characters, or you can define your own way to mask the data using SQL Expressions, Python, or Lambda User Defined Functions.

Additionally, you can apply a conditional masking based on other columns, which selectively protects the column data in a table based on the values in one or more different columns. When you attach a policy to a table, the masking expression can be applied to one or more of its columns. 
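
As a rough sketch of the masking-policy workflow described above, the example below creates and attaches a policy through the Redshift Data API; the cluster, table, column, role, and policy names are placeholders, and the preview syntax should be checked against the Redshift dynamic data masking documentation.

```python
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

def run_sql(sql: str) -> None:
    rsd.execute_statement(
        ClusterIdentifier="example-cluster", Database="dev", DbUser="awsuser", Sql=sql
    )

# Define a format-preserving policy that keeps only the last four digits.
run_sql("""
CREATE MASKING POLICY mask_credit_card
WITH (credit_card VARCHAR(256))
USING ('XXXX-XXXX-XXXX-' || SUBSTRING(credit_card, 16, 4));
""")

# Attach the policy to a column, scoped to a particular role.
run_sql("""
ATTACH MASKING POLICY mask_credit_card
ON customers(credit_card)
TO ROLE analyst;
""")
```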

Announcing the preview of AWS Verified Access

This week AWS announced the preview release of AWS Verified Access, a new service that allows you to deliver secure access to corporate applications without a VPN. Built using AWS Zero Trust guiding principles, Verified Access helps you implement a work-from-anywhere model in a secure and scalable manner.

With Verified Access, you can quickly enable applications for secure remote access by creating a set of fine-grained policies that define the conditions under which a user can access an application.

Verified Access evaluates each access request in real time and only connects users to the application if these conditions are met. Using Verified Access, you can define a unique access policy for each application, with conditions based on identity data and device posture.

For example, you can create policies allowing only members of the finance team to access their financial reporting application, and only from compliant devices. Verified Access supports your workforce identities through direct integration with AWS IAM Identity Center and direct federation with third-party identity providers through OpenID Connect (OIDC).

Verified Access also integrates with third-party device posture providers for additional security context. You can find the list of partners integrated with Verified Access in the AWS News blog.

Amazon FSx for NetApp ONTAP simplifies access to Multi-AZ file systems from on-premises and peered networks

Amazon FSx for NetApp ONTAP is announcing a new capability that makes it even easier to access Multi-AZ file systems from other networks (on-premises networks and peered networks in AWS).

Starting this week, you can create Multi-AZ file systems that you can access from other networks over AWS Transit Gateway without needing to perform any additional routing configuration—making it even quicker and easier to get started.

When you create a Multi-AZ FSx for ONTAP file system, you specify an IP address range for the endpoints that you use to access and manage your data. Before today, these endpoints could only be created outside of your VPC’s IP address range.

To route traffic to these endpoints from on-premises or any other peered network, you would therefore need to create custom routes in your Transit Gateway.

Starting this week, you have the option to create Multi-AZ file systems with an endpoint IP address range that sits within your VPC's IP address range. This allows you to access them from on premises without needing to create any additional routes in your Transit Gateway.
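
A hedged boto3 sketch of this option follows; the subnet IDs, capacity values, and CIDR range are placeholders, and it assumes the OntapConfiguration EndpointIpAddressRange field is how you place the endpoints inside your VPC's CIDR as described above.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,                          # GiB, placeholder
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 512,                 # MB/s, placeholder
        # Place the management and data endpoints inside the VPC CIDR so
        # on-premises and peered networks can reach them without extra
        # Transit Gateway routes.
        "EndpointIpAddressRange": "10.0.200.0/24",
    },
)
```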

AWS CloudTrail Lake now supports configuration items from AWS Config

AWS CloudTrail Lake now integrates with AWS Config to support ingestion and query of configuration items. Now you can query and analyze both configuration items and CloudTrail activity logs in CloudTrail Lake, thereby simplifying and streamlining your security and compliance investigations.

CloudTrail Lake enables security teams to perform retrospective investigations by helping answer who made what configuration changes to resources associated with security incidents such as data exfiltration or unauthorized access.

CloudTrail Lake helps compliance engineers investigate noncompliant changes to their production environments by relating AWS Config rules with noncompliant status to who and what resource changes triggered them. IT teams can perform historical asset inventory analysis on configuration items using CloudTrail Lake’s default seven-year data retention period.

It's easy to get started with ingesting and analyzing configuration items in CloudTrail Lake. First, enable recording in AWS Config. Next, create a CloudTrail Lake event data store that collects configuration items, using the CloudTrail Lake console, the AWS API, or the CLI.

This will allow newly-recorded configuration items from AWS Config, at an account or organization level, to be delivered to the specified CloudTrail Lake event data store. You can join queries in CloudTrail Lake across diverse event data sources, such as CloudTrail events or configuration items, for granular analysis. Sample queries are available in the CloudTrail Lake console to help you get started.
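
For example, once configuration items are flowing in, a query along the lines of the sketch below can be submitted with the CloudTrail StartQuery API; the event data store ID is a placeholder and the field names are illustrative only, so use the sample queries in the CloudTrail Lake console as the authoritative reference.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Placeholder event data store ID; column names are assumptions.
query = """
SELECT eventData.resourceId, eventData.resourceType, eventTime
FROM a1b2c3d4-5678-90ab-cdef-EXAMPLE11111
WHERE eventData.resourceType = 'AWS::S3::Bucket'
ORDER BY eventTime DESC
LIMIT 10
"""

response = cloudtrail.start_query(QueryStatement=query)
print(response["QueryId"])  # poll get_query_results with this ID for the rows
```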

AWS announces lower latencies for Amazon Elastic File System

Amazon Elastic File System (Amazon EFS) now delivers lower latencies enabling you to power an even broader set of applications with simple, scalable storage on AWS.

Amazon EFS now delivers up to 60% lower read operation latencies when working with frequently-accessed data and metadata. In addition, EFS now delivers up to 40% lower write operation latencies when working with small files (<64 KB) and metadata.

Put together, for a typical workload these improvements can benefit the majority of all file operations. With these reductions, in the US East (N. Virginia) Region, for example, read latencies are as low as 0.25 milliseconds for frequently-accessed data, and write latencies are as low as 1.6 milliseconds for EFS One Zone (and 2.7 milliseconds for EFS Standard). Applications that may benefit from these improvements include content management systems, analytics, and CI/CD tools.

AWS Network Manager introduces real-time performance monitoring for the AWS Global Network

Using AWS Network Manager, you can now monitor the real-time and historical performance of the AWS Global Network for operational and planning purposes. AWS Network Manager provides aggregate network latency between AWS Regions, between Availability Zones, and within each Availability Zone, allowing you to better understand how your application performance relates to the performance of the underlying AWS network.

You can monitor the network latency for the AWS Global Network in up to 5-minute intervals, as well as view the 45-day historical trend, from AWS Network Manager. You can also publish these latency metrics to Amazon CloudWatch to further monitor, analyze, and alert on them.

To get started, navigate to AWS Network Manager from the AWS Management Console, and select AWS Regions or Availability Zones to view the network latencies within or between them.

There is no cost to view these metrics in AWS Network Manager. You only pay regular Amazon CloudWatch Metrics costs if you publish the metrics to Amazon CloudWatch. To learn more, visit the documentation.

Introducing Amazon EC2 R7iz instances

Starting this week, memory-optimized, high-frequency Amazon EC2 R7iz instances are available in preview. R7iz instances are the first EC2 instances powered by 4th generation Intel Xeon Scalable processors (code named Sapphire Rapids) with an all core turbo frequency up to 3.9 GHz.

These instances have the highest performance per vCPU among x86-based EC2 instances, and they deliver up to 20% higher performance than z1d instances. The instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

R7iz instances are ideal for front-end Electronic Design Automation (EDA), relational databases with high per-core licensing fees, financial, actuarial, and data analytics simulations, and other workloads requiring a combination of high compute performance and a large memory footprint.

For increased memory and scalability, R7iz instances are available in various sizes, including two bare metal sizes, with up to 128 vCPUs and up to 1,024 GiB of memory. R7iz instances are the first x86-based EC2 instances to use DDR5 memory and deliver up to 2.4x higher memory bandwidth than comparable high-frequency instances. They also deliver up to 50 Gbps of networking speed and 40 Gbps of Amazon Elastic Block Store (EBS) bandwidth.

AWS Config rules now support proactive compliance

AWS Config announces the ability to proactively check for compliance with AWS Config rules prior to resource provisioning. Customers use AWS Config to track the configuration changes made to their cloud resources and check if those resources match their desired configurations through a feature known as AWS Config rules. Proactive compliance allows customers to evaluate the configurations of their cloud resources before they are created or updated.

Typically, customers run compliance checks against the resources after they have been created or updated. This launch extends AWS Config functionality so that, in addition to being run after resources have been provisioned, AWS Config rules can now be run at any time before provisioning, saving customers time spent remediating non-compliant resources.

Administrators can use the feature to create standard resource templates which they know to be compliant with AWS Config rules before sharing these templates across their organization. Developers can incorporate AWS Config rules into their infrastructure-as-code CI/CD pipelines to identify non-compliant resources before provisioning.

To get started, you can use the AWS Config console or APIs to enable AWS Config rules to run proactively. Then, you can invoke these AWS Config rules at any time before provisioning to learn whether the configurations of your resource are compliant or non-compliant with your policies. Through a custom hook, you can also trigger AWS Config rules to run proactively as part of resource deployments through AWS CloudFormation.
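
To sketch what a proactive, pre-provisioning check can look like with the StartResourceEvaluation API (the resource type, proposed configuration, and rule coverage here are placeholder assumptions):

```python
import boto3
import json
import time

config = boto3.client("config", region_name="us-east-1")

# Evaluate a hypothetical S3 bucket definition before it is ever created.
resp = config.start_resource_evaluation(
    ResourceDetails={
        "ResourceId": "my-proposed-bucket",
        "ResourceType": "AWS::S3::Bucket",
        "ResourceConfiguration": json.dumps({"BucketName": "my-proposed-bucket"}),
        "ResourceConfigurationSchemaType": "CFN_RESOURCE_SCHEMA",
    },
    EvaluationMode="PROACTIVE",
)

time.sleep(5)  # give the evaluation a moment to complete
summary = config.get_resource_evaluation_summary(
    ResourceEvaluationId=resp["ResourceEvaluationId"]
)
print(summary["Compliance"])  # e.g. COMPLIANT or NON_COMPLIANT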

AWS Glue for Apache Spark Native support for Data Lake Frameworks (Apache Hudi, Apache Iceberg, Delta Lake)

AWS Glue for Apache Spark now supports three open source data lake storage frameworks: Apache Hudi, Apache Iceberg, and Linux Foundation Delta Lake. These frameworks allow you to read and write data in Amazon Simple Storage Service (Amazon S3) in a transactionally consistent manner.

AWS Glue is a serverless, scalable data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources. This feature removes the need to install a separate connector and reduces the configuration steps required to use these frameworks in AWS Glue for Apache Spark jobs.

These open source data lake frameworks simplify incremental data processing in data lakes built on Amazon S3. They enable capabilities including time travel queries, ACID (Atomicity, Consistency, Isolation, Durability) transactions, streaming ingestion, change data capture (CDC), upserts, and deletes.
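
As a minimal sketch of using one of these frameworks from a Spark job, the example below writes and reads a Delta Lake table on S3; the bucket path is a placeholder, and it assumes the Glue job was created with the --datalake-formats job parameter set to delta so the libraries are loaded (a plain SparkSession is shown for brevity, whereas Glue jobs typically go through a GlueContext).

```python
from pyspark.sql import SparkSession

# Standard Delta Lake Spark configuration; Glue 4.0 ships the libraries.
spark = (
    SparkSession.builder.appName("glue-delta-example")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"])

# Write a transactionally consistent Delta table to S3 (placeholder path), then read it back.
df.write.format("delta").mode("overwrite").save("s3://my-datalake-bucket/delta/example/")
spark.read.format("delta").load("s3://my-datalake-bucket/delta/example/").show()
```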

AWS Nitro Enclaves now supports Amazon EKS and Kubernetes

AWS Nitro Enclaves now supports Amazon EKS and Kubernetes for orchestrating Nitro enclaves. You can now use familiar Kubernetes tools to orchestrate, scale, and deploy enclaves from a Kubernetes pod. 

AWS Nitro Enclaves is an Amazon EC2 capability that enables customers to create isolated compute environments to further protect and securely process highly sensitive data within their EC2 instances. Nitro Enclaves helps customers reduce the attack surface area for their most sensitive data processing applications.

Amazon EKS is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or worker nodes.

Previously, you would need to write custom code to leverage Kubernetes to deploy and scale your Nitro enclaves. With this launch, you can use the open-source tool called the Nitro Enclaves Kubernetes Device Plug-in, which provides Kubernetes pods with the ability to manage the lifecycle of an enclave.

Kubernetes support for Nitro Enclaves is available in all AWS Regions where Nitro Enclaves is available.

Introducing Elastic Network Adapter (ENA) Express for Amazon EC2 instances

AWS announced the general availability of Elastic Network Adapter (ENA) Express for Amazon Elastic Compute Cloud (EC2) instances. All current generation EC2 instances use ENA, a purpose-built network interface, to deliver an enhanced networking experience.

ENA Express is a new ENA feature that uses the AWS Scalable Reliable Datagram (SRD) protocol to improve network performance in two key ways: higher single flow bandwidth and lower tail latency for network traffic between EC2 instances.

Workloads such as distributed storage systems and live media encoding require large flows and are sensitive to variance in latency. Previously, AWS customers could use multipath TCP to increase bandwidth, but this adds complexity and, at times, may be incompatible with the application layer.

TCP is also not equipped to handle congestion when your server is overloaded with requests. SRD is a proprietary protocol that delivers these improvements through advanced congestion control, multi-pathing, and packet reordering directly from the Nitro card. Enabling ENA Express, and with it SRD, is as easy as a single command or console toggle for your EC2 instances.
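
A hedged boto3 sketch of that single-command enablement is shown below; the network interface ID is a placeholder, the UDP setting is optional, and the EnaSrdSpecification field names reflect my reading of the EC2 API.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable ENA Express (SRD) on an existing, attached network interface.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",    # placeholder ENI
    EnaSrdSpecification={
        "EnaSrdEnabled": True,
        # Optionally extend SRD to UDP traffic as well.
        "EnaSrdUdpSpecification": {"EnaSrdUdpEnabled": True},
    },
)
```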

Using the SRD protocol, ENA Express increases the maximum single flow bandwidth of EC2 instances from 5 Gbps up to 25 Gbps, and it can provide up to 85% improvement in P99.9 latency for high throughput workloads. ENA Express works transparently to your applications with TCP and UDP protocols.

When configured, ENA Express works between any two supported instances in an Availability Zone. ENA Express detects compatibility between your EC2 instances and establishes an SRD connection when both communicating instances have ENA Express enabled.

Once a connection is established, your traffic takes advantage of SRD and its performance benefits. Detailed monitoring for these SRD connections is also available through new ethtool metrics in the latest Amazon Linux AMI.

Announcing a new generation of Amazon FSx for OpenZFS file systems

Amazon FSx for OpenZFS now offers a new generation of file systems that doubles the maximum throughput and IOPS performance of the existing generation and includes a high-speed NVMe cache.

Amazon FSx for OpenZFS provides fully managed, cost-effective, shared file storage powered by the popular OpenZFS file system. The new generation of FSx for OpenZFS file systems provides two performance improvements over the existing generation. First, new-generation file systems deliver up to 350,000 IOPS and 10 GB/s throughput for both reads and writes to persistent SSD storage.

Second, they include up to 2.5 TB of high-speed NVMe storage that automatically caches your most recently-accessed data, making that data accessible at over one million IOPS and with latencies of a few hundred microseconds. With these new-generation file systems, you can power an even broader range of high-performance workloads like media processing/rendering, financial analytics, and machine learning with simple, highly-performant NFS-accessible storage.

Starting this week, you can create these new-generation FSx for OpenZFS file systems using the AWS Console, AWS CLI, or FSx API. These new file systems provide the same per-unit storage and throughput cost as existing generation file systems, and are available in four AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Europe (Ireland). 

Introducing AWS Glue 4.0

AWS were pleased to announce the launch of AWS Glue version 4.0, a new version of AWS Glue that accelerates data integration workloads in AWS. AWS Glue 4.0 upgrades the underlying engines to Apache Spark 3.3.0 and Python 3.10, giving customers the latest Spark and Python releases so they can develop, run, and scale their data integration workloads and get insights faster.

AWS Glue is a serverless, scalable data integration service that makes it simple to discover, prepare, move, and integrate data from multiple sources. AWS Glue 4.0 adds support for built-in Pandas APIs as well as support for Apache Hudi, Apache Iceberg, and Delta Lake formats, giving you more options for analyzing and storing your data. It upgrades connectors for native AWS Glue database sources such as RDS, MySQL, and SQL Server, which simplifies connections to common database sources.

AWS Glue 4.0 also adds native support for the new Cloud Shuffle Storage Plugin for Apache Spark, which helps customers scale their disk usage at runtime. It enables Adaptive Query Execution, which dynamically optimizes your queries as they run. Finally, AWS Glue 4.0 improves the developer experience by adding more context to error messages. As with AWS Glue 3.0, customers only pay for the resources they use.

Announcing the general availability of AWS Wickr

AWS Wickr is an end-to-end encrypted, enterprise communications service that offers advanced security features and facilitates one-to-one chats, group messaging, calling, file sharing, screen sharing, and more. The service is now generally available.

With AWS Wickr, organizations can collaborate more safely than with consumer-grade messaging applications. Advanced security and administrative controls help organizations meet data retention requirements and build custom solutions for data security challenges.

With AWS Wickr, you can retain information in a security-focused, customer-controlled data store that helps you comply with industry-specific regulations and meet audit needs. AWS Wickr also provides flexible administrative controls that allow management of data, including setting permissions, configuring ephemeral messaging options, and defining security groups.

AWS Wickr integrates with additional services such as Active Directory (AD), single sign-on (SSO) with OpenID Connect (OIDC), and more. Quickly create and manage AWS Wickr networks through the AWS Management Console. AWS Wickr also allows you to automate your workflows using AWS Wickr bots.

Amazon QuickSight announces Paginated Reports

Amazon QuickSight now supports Paginated Reports, which allows the capture of detailed operational data in custom formats to facilitate critical and day-to-day business processes. For example, you can get a weekly snapshot of operational metrics in a tabular format for critical review by business teams.

Paginated Reports allows you to create, schedule, and share highly formatted multipage reports and schedule data exports at scale using the QuickSight serverless architecture and straightforward interface. Additionally, you can now use the QuickSight authoring web interface to build dashboards, create paginated reports, and share the dashboards and reports with end users.

Amazon FSx for NetApp ONTAP doubles the maximum throughput capacity and SSD IOPS per file system

Amazon FSx for NetApp ONTAP is doubling the maximum throughput capacity per file system from 2 GB/s to 4 GB/s and the maximum SSD IOPS from 80,000 to 160,000, enabling you to accelerate your performance-intensive workloads such as video rendering and database applications.

The new 4 GB/s throughput capacity option delivers up to 2x higher throughput and IOPS performance and offers 2x more in-memory and NVMe read caching, compared to the 2 GB/s throughput capacity option. With the 4 GB/s option, you can drive up to 4 GB/s of read throughput and 160,000 IOPS from SSD storage (6 GB/s of read throughput and 650,000 IOPS from the NVMe read cache) and up to 1.8 GB/s of write throughput.

Amazon Macie introduces automated sensitive data discovery

AWS are pleased to announce automated sensitive data discovery, a new capability in Amazon Macie that provides continual, cost-efficient, organization-wide visibility into where sensitive data resides across your Amazon Simple Storage Service (Amazon S3) estate.

With this new capability, Macie automatically and intelligently samples and analyzes objects across your S3 buckets, inspecting them for sensitive data such as personally identifiable information (PII), financial data, and AWS credentials.

Macie then builds and continuously maintains an interactive data map of where your sensitive data in S3 resides across all accounts and Regions where you’ve enabled Macie, and provides a sensitivity score for each bucket.

Amazon Macie uses multiple automated techniques including resource clustering by attributes such as bucket name, file types, and prefixes to minimize the data scanning needed to uncover sensitive data in your S3 buckets.

This helps you continuously identify and remediate data security risks without manual configuration and lowers the cost to monitor for and respond to data security risks. 

Getting started with Amazon Macie is fast and easy, with one click in the AWS Management Console or a single API call. Macie has multi-account support using AWS Organizations, which makes it easier for you to enable Macie across all of your AWS accounts.
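
As a hedged illustration of that single API call, the sketch below uses the Macie configuration API as I understand it; confirm the operation name against the current Amazon Macie API reference before relying on it.

```python
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# Turn on automated sensitive data discovery for the account
# (or for the organization when called from the Macie administrator account).
macie.update_automated_discovery_configuration(status="ENABLED")
```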

Macie applies machine learning and pattern matching techniques to automatically identify and alert you to sensitive data, such as names, addresses, credit card numbers, or credential materials.

The first 30 days of automated sensitive data discovery are available at no additional charge for existing Macie accounts. For new accounts, automated sensitive data discovery is part of the 30-day Amazon Macie free trial.

During the trial period, the Macie Management Console shows the estimated cost of running automated sensitive data discovery once the trial period ends.

New Amazon S3 Multi-Region Access Points failover controls enable active-passive configurations and customer-initiated failovers

Amazon S3 Multi-Region Access Points failover controls let you shift S3 data access request traffic routed through an Amazon S3 Multi-Region Access Point to an alternate AWS Region within minutes to test and build highly available applications.

S3 Multi-Region Access Points provide a single global endpoint to access a data set that spans multiple S3 buckets in different AWS Regions. With S3 Multi-Region Access Points failover controls, you can operate S3 Multi-Region Access Points in an active-passive configuration where you can designate an active AWS Region to service all S3 requests and a passive AWS Region that will only be routed to when it is made active during a planned or unplanned failover.

You can shift S3 data access request traffic from an active AWS Region to a passive AWS Region, typically within 2 minutes, to test application resiliency and perform disaster recovery simulations.
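
A hedged boto3 sketch of initiating such a failover by updating the Multi-Region Access Point routing configuration follows; the account ID, MRAP alias, bucket names, and Regions are placeholders.

```python
import boto3

# Multi-Region Access Point route updates are submitted through S3 Control.
s3control = boto3.client("s3control", region_name="us-west-2")

s3control.submit_multi_region_access_point_routes(
    AccountId="111122223333",
    Mrap="mfzwi23gnjvgw.mrap",    # placeholder Multi-Region Access Point alias
    RouteUpdates=[
        # Fail over: stop routing to the now-passive us-east-1 bucket...
        {"Bucket": "my-bucket-use1", "Region": "us-east-1", "TrafficDialPercentage": 0},
        # ...and send 100% of traffic to the us-west-2 bucket.
        {"Bucket": "my-bucket-usw2", "Region": "us-west-2", "TrafficDialPercentage": 100},
    ],
)
```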

Amazon S3 Multi-Region Access Points failover controls give your applications a multi-region architecture that can help you achieve resiliency and compliance needs, while helping you maintain business continuity during regional traffic disruptions.

If you operate S3 Multi-Region Access Points, consider pairing with S3 Intelligent-Tiering, a storage class that delivers automatic storage cost savings when data access patterns change before and after a failover. Amazon S3 Intelligent-Tiering delivers automatic cost savings without data retrieval costs, performance impacts, or operational overhead. You can monitor the shift of your S3 traffic between AWS Regions after a failover in Amazon CloudWatch. 

S3 Multi-Region Access Points are supported in 17 AWS Regions and are available at a per-GB request routing charge, plus an internet acceleration charge for requests that are made to S3 from outside of AWS. You will be charged standard S3 API request costs to view and change the current routing control status of each Region for initiating a failover.

See the Amazon S3 pricing page for pricing details on Amazon S3 Multi-Region Access Points, S3 Replication, S3 API usage, data transfer, and the S3 Intelligent-Tiering storage class. You can get started with failover controls using the Amazon S3 CLI, SDK, or with a few clicks in the S3 console; for more information on getting started visit the S3 User Guide. To learn more, visit the S3 Multi-Region Access Point feature page and the AWS News Blog post for this capability.

Introducing account customization within AWS Control Tower

AWS Control Tower now offers support for account factory customization, enabling you to customize your new and existing AWS accounts prior to provisioning them from within the AWS Control Tower console. With this release, you can now use AWS Control Tower to define account blueprints that scale your multi-account provisioning without starting from scratch with every account.

An account blueprint describes the specific resources and configurations that are used when an account is provisioned. You may also use pre-defined blueprints, built and managed by AWS partners, to customize accounts for specific use cases.  

Customers often need to customize their accounts to meet their business needs. Previously, customers used AWS Control Tower’s pre-defined account blueprint, with default resources, configurations, and VPC, or developed alternative solutions to add customizations to their accounts.

In AWS Control Tower, you can now define and implement custom account requirements as part of a well-defined account factory workflow, and immediately start using the account after it is provisioned. AWS Control Tower automates the entire process on your behalf, freeing you from the need to build and maintain costly deployment pipelines. 

AWS Control Tower offers a streamlined way to set up and govern a new, secure, multi-account AWS environment based on AWS best practices. You can start customizing accounts now in AWS Control Tower.

Announcing AWS Lambda SnapStart for Java functions

AWS Lambda SnapStart for Java delivers up to 10x faster function startup performance at no extra cost. Lambda SnapStart is a performance optimization that makes it easier for you to build highly responsive and scalable Java applications using AWS Lambda, without having to provision resources or spend time and effort implementing complex performance optimizations. 

For latency-sensitive applications where you want to support unpredictable bursts of traffic, high and outlier startup latencies—known as cold starts—can cause delays in your users’ experience.

Lambda SnapStart offers improved startup times by initializing the function’s code ahead of time, taking a snapshot of the initialized execution environment, and caching it. When the function is invoked and subsequently scales up, Lambda SnapStart resumes new execution environments from the cached snapshot instead of initializing them from scratch, significantly improving startup latency.
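
Enabling SnapStart is a configuration change on the function that applies to published versions; a minimal boto3 sketch, with a placeholder function name, might look like this.

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Turn on SnapStart for published versions of a Java function.
lam.update_function_configuration(
    FunctionName="my-java-function",                # placeholder
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# SnapStart takes effect on versions published after the setting is applied.
version = lam.publish_version(FunctionName="my-java-function")
print(version["Version"])
```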

Lambda SnapStart is ideal for applications such as synchronous APIs, interactive microservices, or data processing.

Amazon VPC Reachability Analyzer now supports network reachability analysis across accounts in an AWS Organization

Amazon VPC Reachability Analyzer now supports network reachability analysis between AWS resources across different AWS accounts in your AWS Organization, allowing you to trace and troubleshoot the network reachability across your AWS Organization.

VPC Reachability Analyzer allows you to diagnose network reachability between a source resource and a destination resource in your virtual private clouds (VPCs) by analyzing your network configurations. Previously, you could only use Reachability Analyzer to analyze network reachability between AWS resources that were within the same AWS account.

With the AWS Organizations support for Reachability Analyzer, you can now view the hop-by-hop details of the virtual network path between your specified source and destination across multiple AWS accounts in your AWS Organization, and also isolate network configuration issues that could be blocking network reachability between them.

For example, Reachability Analyzer can help you identify a missing entry in your VPC route table that is blocking an EC2 instance in Account A from connecting to another EC2 instance in Account B in your AWS Organization.
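
A hedged boto3 sketch of such a cross-account path analysis is shown below; the instance ARNs, protocol, and port are placeholders, and cross-account analysis assumes the accounts belong to the same AWS Organization with trusted access enabled.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Define a path from an instance in Account A to an instance in Account B,
# referencing both resources by ARN (placeholders below).
path = ec2.create_network_insights_path(
    Source="arn:aws:ec2:us-east-1:111111111111:instance/i-0aaaa1111bbbb2222c",
    Destination="arn:aws:ec2:us-east-1:222222222222:instance/i-0cccc3333dddd4444e",
    Protocol="tcp",
    DestinationPort=443,
)

# Run the analysis; poll describe_network_insights_analyses afterwards for the
# hop-by-hop path and any blocking component.
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"]
)
print(analysis["NetworkInsightsAnalysis"]["NetworkInsightsAnalysisId"])
```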

Announcing the general availability of AWS Local Zones in Buenos Aires, Copenhagen, Helsinki, and Muscat

AWS Local Zones are now available in four new metro areas—Buenos Aires, Copenhagen, Helsinki, and Muscat. You can now use these Local Zones to deliver applications that require single-digit millisecond latency or local data processing. 

This launch includes the first Local Zones launch in Latin America (Buenos Aires, Argentina), and also expands Local Zones in EMEA to three new countries (Denmark, Finland and Oman).

At the beginning of this year, AWS announced plans to launch AWS Local Zones in over 30 metro areas across 27 countries outside of the US. AWS Local Zones are also generally available in 16 metro areas in the US (Atlanta, Boston, Chicago, Dallas, Denver, Houston, Kansas City, Las Vegas, Los Angeles, Miami, Minneapolis, New York City, Philadelphia, Phoenix, Portland, and Seattle) and four metro areas (Delhi, Hamburg, Taipei, and Warsaw) outside of the US.

AWS Glue introduces custom visual transforms

AWS Glue now offers custom visual transforms which let customers define, reuse, and share business-specific ETL logic among their teams. AWS Glue is a serverless, scalable data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources.

With this new feature, data engineers can write reusable transforms for the AWS Glue visual job editor. Reusable transforms increase consistency between teams and help keep jobs up to date by minimizing duplicate effort and code.

You can define AWS Glue custom visual transforms using Apache Spark code as well as the user input form. You can also specify validations for the input form to help protect users from making mistakes.

Once you save the files defining the transform to your AWS account, it automatically appears in the dropdown list of available transforms in the visual job editor. You can call custom visual transforms from both visual and code-based jobs, and sharing transforms between AWS accounts is straightforward.

Announcing preview for Amazon Route 53 Application Recovery Controller zonal shift

Amazon Route 53 Application Recovery Controller now supports zonal shift to help you quickly recover from application failures in an AWS Availability Zone (AZ). You can now shift application traffic away from an AZ with a single action for supported multi-AZ resources, starting with Application Load Balancer and Network Load Balancer.

This will help you quickly recover an unhealthy application in an AZ, and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures.

To initiate a zonal shift, go to the Amazon Route 53 Application Recovery Controller console and start a zonal shift for a load balancer in your AWS account, in an AWS Region.

You can also use the AWS SDK to start a zonal shift and programmatically move application traffic out of an AZ, and move it back once the affected AZ is healthy. Zonal shift is available for Application Load Balancers and Network Load Balancers with cross-zone configuration disabled. 
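
A hedged SDK sketch of starting and then cancelling a zonal shift follows; the load balancer ARN, Availability Zone ID, and expiry window are placeholders.

```python
import boto3

zs = boto3.client("arc-zonal-shift", region_name="us-east-1")

# Shift traffic away from one AZ for a supported load balancer.
shift = zs.start_zonal_shift(
    resourceIdentifier=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/net/my-nlb/abc1234567890"
    ),
    awayFrom="use1-az1",        # placeholder AZ ID
    expiresIn="30m",            # the shift expires automatically if not extended
    comment="Shifting traffic away from impaired AZ",
)

# Move traffic back once the AZ is healthy again.
zs.cancel_zonal_shift(zonalShiftId=shift["zonalShiftId"])
```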

AWS Compute Optimizer now supports external metrics from observability partners

AWS Compute Optimizer now supports external performance metrics from four observability partners: Datadog, Dynatrace, Instana, and New Relic. By using externally provided utilization metrics for Amazon EC2 memory, Compute Optimizer can now identify additional savings opportunities and deliver more performance-aware recommendations for customers that use these products.

With this launch, you can now configure AWS Compute Optimizer to ingest memory metrics for EC2 instances from any of the four partner platforms. After 30 hours, Compute Optimizer will begin to provide rightsizing recommendations that size memory capacity in addition to CPU, disk, network, I/O, and throughput, unlocking additional savings and performance awareness.

Memory metrics from CloudWatch will continue to be supported when external metrics are not present, and for customers without memory utilization metrics enabled, Compute Optimizer will continue to give rightsizing recommendations that do not reduce memory capacity.

AWS announces Amazon Inspector support for AWS Lambda functions

Amazon Inspector now supports AWS Lambda functions, adding continual, automated vulnerability assessments for Serverless compute workloads. With this expanded capability, Amazon Inspector now automatically discovers all eligible Lambda functions and identifies software vulnerabilities in application package dependencies used in the Lambda function code.

All functions are initially assessed upon deployment to the Lambda service and continually monitored and reassessed, informed by updates to the function and newly published vulnerabilities. When vulnerabilities are identified in the Lambda function or layer, actionable security findings are generated, aggregated in the Amazon Inspector console, and pushed to AWS Security Hub and Amazon EventBridge to automate workflows.
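
Activating the new scan type is a single call per account; a hedged boto3 sketch, with a placeholder account ID and an illustrative findings filter, is shown below.

```python
import boto3

inspector = boto3.client("inspector2", region_name="us-east-1")

# Activate Lambda function scanning (EC2 and ECR can be listed here as well).
inspector.enable(
    accountIds=["111122223333"],      # placeholder account
    resourceTypes=["LAMBDA"],
)

# Later, list findings that relate to Lambda functions.
findings = inspector.list_findings(
    filterCriteria={
        "resourceType": [{"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"}]
    }
)
print(len(findings.get("findings", [])))
```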

Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure across your entire AWS Organization.

Once activated, Amazon Inspector automatically discovers all of your Amazon Elastic Compute Cloud (EC2) instances, container images in Amazon Elastic Container Registry (ECR), and AWS Lambda functions, at scale, and continuously monitors them for known vulnerabilities, giving you a consolidated view of vulnerabilities across your compute environments.

Amazon Inspector also provides a highly contextualized vulnerability risk score by correlating vulnerability information with environmental factors such as external network accessibility to help you prioritize the highest risks to address. 


AWS announces Amazon Verified Permissions (Preview)

This week, AWS announced the preview of Amazon Verified Permissions, a scalable, fine-grained permissions management and authorization service for custom applications. With Amazon Verified Permissions, application developers can let their end users manage permissions and share access to data.

For example, application developers can use Amazon Verified Permissions to define and manage fine grained permissions to determine which Amazon Cognito users have access to which application resources. 

This central fine-grained permissions management system simplifies changing and updating permission rules in a single place without needing to change the code. Teams can use the permissions system to shorten their development timelines and implement more dynamic permissions across application resources.

Amazon Verified Permissions also gives IT administrators a comprehensive audit capability that scales to millions of policies using automated reasoning.

Amazon Verified Permissions uses a custom policy language called Cedar to define fine-grained permissions for application users. The service manages access within the application by storing and evaluating these fine-grained permissions to determine what each user is allowed to do.

Access requests are evaluated in a few milliseconds, which allows the continual verification required by Zero Trust. Amazon Verified Permissions can be used with any identity provider, such as Amazon Cognito.
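
A hedged sketch of such an authorization check against a policy store follows; the policy store ID, entity identifiers, and action name are placeholders, and since the service is in preview the SDK surface may change.

```python
import boto3

avp = boto3.client("verifiedpermissions", region_name="us-east-1")

# Ask whether a given user may view a given photo, according to the Cedar
# policies stored in the (placeholder) policy store.
decision = avp.is_authorized(
    policyStoreId="PSEXAMPLEabcdefg111111",
    principal={"entityType": "User", "entityId": "alice"},
    action={"actionType": "Action", "actionId": "ViewPhoto"},
    resource={"entityType": "Photo", "entityId": "vacation.jpg"},
)
print(decision["decision"])  # ALLOW or DENY
```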

Elastic Load Balancing capabilities for application availability

AWS are excited to announce four new Elastic Load Balancing capabilities to further improve the availability of your applications. AWS provides multiple building blocks, like Regions and Availability Zones, so that you can design your applications to be isolated from different types of failures.

Starting this week, AWS are providing additional features that allow you to define how you want your applications to behave when failures occur as well as a feature to help you recover faster. These new capabilities include:

Application Load Balancer (ALB) Cross Zone Off: ALB now has the ability to turn off cross-zone load balancing, similar to the existing capability on NLB. With cross-zone load balancing turned off, ALB routes traffic to targets in the same Availability Zone (AZ) as the load balancer node (a configuration sketch follows this list). This capability enables customers to maintain zonal isolation of their application stacks, while still providing redundancy across multiple AZs. For details, see the ALB cross zone off feature documentation here.

Network Load Balancer (NLB) Health Check Improvements: NLB now allows customers to define health check intervals, specify HTTP response codes that determine target health, and configure the number of consecutive health check responses before a target is considered healthy or unhealthy. For details, see the NLB health check documentation here.

ALB and NLB Minimum Healthy Targets: Customers can now configure a threshold for the minimum number or percentage of healthy targets for ALB and NLB in an AZ. When the healthy target capacity drops below the specified threshold, the load balancer automatically stops routing to targets in the impaired AZ. For details, see the documentation here for ALB and here for NLB.

Zonal Shift for ALB and NLB [Preview]: Using Amazon Route 53 Application Recovery Controller’s zonal shift feature, you can recover from gray failures, like bad application deployments, by routing traffic away from a single impaired AZ. This feature is ideal for zonally architected applications using ALBs and NLBs that have cross-zone load balancing turned off. For details, read the launch blog, or see the Zonal Shift section of the documentation here

There is no additional charge for using these features. The ALB Cross Zone Off, NLB health check improvements, and ALB/NLB Minimum Healthy Targets features are now available in all commercial AWS Regions and the AWS GovCloud (US) Regions.

The ALB and NLB zonal shift feature is available in preview in the AWS US West (Oregon), US East (Northern Virginia), US East (Ohio), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific (Jakarta) Regions. 
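As a rough idea of how two of these features could be exercised programmatically, the boto3 sketch below turns cross-zone load balancing off on an ALB target group and tightens the health checks on an NLB target group. The target group ARNs and values are placeholders, so check the feature documentation for the exact attribute names and limits that apply to your setup.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn cross-zone load balancing off for an ALB target group (placeholder ARN).
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-alb-tg/abc123",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "false"}],
)

# Tune health checks on an NLB target group: interval, thresholds and HTTP success codes.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-nlb-tg/def456",
    HealthCheckIntervalSeconds=10,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
    Matcher={"HttpCode": "200-299"},
)
```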

Announcing AWS Glue for Ray (Preview)

AWS Glue for Ray is a new engine option on AWS Glue. Data engineers can use AWS Glue for Ray to process large datasets with Python and popular Python libraries. AWS Glue is a serverless, scalable data integration service used to discover, prepare, move, and integrate data from multiple sources. AWS Glue for Ray combines that serverless option for data integration with Ray (ray.io), a popular new open-source compute framework that helps you scale Python workloads.

You pay only for the resources that you use while running code, and you don’t need to configure or tune any resources. AWS Glue for Ray facilitates the distributed processing of your Python code over multi-node clusters. 

You can create and run Ray jobs anywhere that you run AWS Glue ETL (extract, transform, and load) jobs. This includes existing AWS Glue jobs, command line interfaces (CLIs), and APIs. You can select the Ray engine through notebooks on AWS Glue Studio, Amazon SageMaker Studio Notebook, or locally. When the Ray job is ready, you can run it on demand or on a schedule.
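To give a flavour of job creation from code, here's a hedged boto3 sketch. Since the feature is in preview, the command name, Glue version and worker type shown are illustrative, and the role, script location and sizing are placeholders; check the Glue for Ray documentation for the exact settings the preview expects.

```python
import boto3

glue = boto3.client("glue")

# Create a Ray job; the job name, role and script location are placeholders.
glue.create_job(
    Name="my-ray-job",
    Role="arn:aws:iam::111122223333:role/GlueRayJobRole",
    Command={
        "Name": "glueray",              # Ray engine job type (assumed preview value)
        "PythonVersion": "3.9",
        "ScriptLocation": "s3://my-bucket/scripts/process_with_ray.py",
    },
    GlueVersion="4.0",
    WorkerType="Z.2X",                  # illustrative worker type for Ray jobs
    NumberOfWorkers=5,
)

# Run it on demand; you could also attach a schedule using a Glue trigger.
glue.start_job_run(JobName="my-ray-job")
```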

Announcing comprehensive controls management with AWS Control Tower (Preview)

This week AWS were excited to announce the preview launch of comprehensive controls management in AWS Control Tower, a set of new features that enhances AWS Control Tower’s governance capabilities.

You can now programmatically implement controls at scale across your multi-account AWS environments within minutes, so you can more quickly vet, allow-list, and begin using AWS services.

With comprehensive controls management in AWS Control Tower, you can reduce the time it takes to define, map, and manage the controls required to meet your most common control objectives such as enforcing least privilege, restricting network access, and enforcing data encryption.

As customers begin to use AWS services, many take an allow-list approach — only allowing use of AWS services that have been vetted and approved — to balance their security and compliance requirements with the need to be agile. This restricts developer access to AWS services until risks are defined and controls implemented.

AWS Control Tower’s new proactive control capabilities leverage AWS CloudFormation Hooks to proactively identify and block noncompliant resources before they are provisioned by CloudFormation. These proactive controls complement AWS Control Tower’s existing control capabilities, enabling you to disallow actions that lead to policy violations and detect noncompliance of resources at scale.

AWS Control Tower provides updated configuration and technical documentation so you can more quickly benefit from AWS services and features. AWS Control Tower provides you a consolidated view of compliance status across your multi-account environment.
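Programmatic enablement goes through the AWS Control Tower control APIs. A minimal boto3 sketch might look like the following, where the control and organizational unit ARNs are placeholders you would replace with identifiers from your own landing zone.

```python
import boto3

ct = boto3.client("controltower")

# Enable a control on an organizational unit. Both ARNs are placeholders.
response = ct.enable_control(
    controlIdentifier="arn:aws:controltower:us-east-1::control/EXAMPLE_CONTROL_ID",
    targetIdentifier="arn:aws:organizations::111122223333:ou/o-exampleorg/ou-examp-12345678",
)

# enable_control is asynchronous; the returned operation identifier can be polled.
print(response["operationIdentifier"])
```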

AWS Marketplace for containers now supports direct deployment to EKS clusters

Amazon EKS customers can now find and deploy third-party operational software to their EKS clusters through the EKS console or using CLI, eksctl, AWS APIs, or infrastructure as code tools such as AWS CloudFormation and Terraform.

Customers can choose from commercial, free, or packaged open source software that addresses use cases like monitoring, security, and storage, and deploy it with the same simple commands they use today for EKS add-ons. This helps EKS customers reduce the time required to find, subscribe to, and deploy third-party software, helping them set up production-ready EKS clusters in minutes.

Third-party container software is sourced from AWS Marketplace which continually scans software for common vulnerabilities and exposures (CVEs), and validates the software to work on EKS clusters. Customers are presented with software versions that are compatible with their Kubernetes versions.

Moreover, selecting products from the EKS console provides customers with the same benefits as any other product in AWS Marketplace, including consolidated billing, flexible payment options, and lower pricing for long-term contracts. Through this feature, customers can also automate deployments that create EKS clusters and include third-party software from AWS Marketplace.

Post deployment, customers will receive notifications when new versions of software are available to upgrade, helping ensure that customers are running the latest patches.
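The deployment path is the familiar EKS add-ons API. As a rough boto3 sketch (the add-on name below is hypothetical, since Marketplace add-ons follow a vendor_product naming convention, and an active Marketplace subscription is required first):

```python
import boto3

eks = boto3.client("eks")

# List add-on versions compatible with your cluster's Kubernetes version.
versions = eks.describe_addon_versions(
    kubernetesVersion="1.23",
    addonName="examplevendor_exampleproduct",  # hypothetical Marketplace add-on name
)

# Install the add-on on an existing cluster (cluster name and role ARN are placeholders).
eks.create_addon(
    clusterName="my-cluster",
    addonName="examplevendor_exampleproduct",
    serviceAccountRoleArn="arn:aws:iam::111122223333:role/ExampleAddonRole",  # optional IRSA role
)
```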

Announcing the availability of Microsoft Office Amazon Machine Images (AMIs) on Amazon EC2 with AWS provided licenses

AWS now offers fully-compliant, Amazon-provided licenses for Microsoft Office LTSC Professional Plus 2021 Amazon Machine Images (AMIs) on Amazon EC2. These AMIs are now available on the Amazon EC2 console and on AWS Marketplace to launch instances on-demand without any long-term licensing commitments.

With this offering, customers have the flexibility to run Microsoft Office on EC2. Amazon EC2 provides a broad choice of instances with the flexibility of paying only for the optimal capacity and hardware configuration Microsoft Office users need.

IT administrators or license administrators can easily manage the access to Microsoft Office for their end users via AWS License Manager. Administrators have the flexibility to modify the end user access on a monthly basis.

Customers are billed per vCPU for the Amazon EC2 License Included Windows Server instance, and per-user per-month (non-prorated) for Microsoft Office and Remote Desktop Services (RDS) Subscriber Access License (SAL) licenses.

Expanded API capabilities now generally available for Amazon QuickSight

Amazon QuickSight now offers expanded API capabilities, allowing programmatic access to the underlying structure of QuickSight dashboards and analyses with the AWS Software Development Kit. The new and expanded APIs let customers and developers treat QuickSight assets like software code and integrate with DevOps processes, such as code reviews, audits, and promotion across development and production environments. 

With the new APIs, you can also create programmatic migration accelerators that expedite business intelligence (BI) migrations to the cloud. By facilitating DevOps automation and accelerating migration, these new expanded API capabilities allow you to be agile and innovative in your BI journey and bring insights to all users who need them.
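Here's a hedged boto3 sketch of the export-and-recreate flow these APIs enable. Account and asset IDs are placeholders, and the returned definition is the JSON-like structure you would check into source control or promote between environments.

```python
import boto3

qs = boto3.client("quicksight")
ACCOUNT_ID = "111122223333"  # placeholder AWS account ID

# Export the full definition of an analysis.
definition = qs.describe_analysis_definition(
    AwsAccountId=ACCOUNT_ID,
    AnalysisId="my-analysis-id",
)["Definition"]

# Recreate (promote) the analysis from that definition; in practice the target
# would typically be a different account or stage.
qs.create_analysis(
    AwsAccountId=ACCOUNT_ID,
    AnalysisId="my-analysis-id-promoted",
    Name="Sales Analysis (promoted)",
    Definition=definition,
)
```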

AWS Backup launches delegation of organization-wide backup administration

AWS Backup now supports organization-wide delegation of backup administration to member accounts within AWS Organizations. This enables delegated backup administrators to create and manage backup policies and monitor backup activity across accounts within the organization.

You can get started with delegation of AWS Backup administration by using the AWS Backup and AWS Organizations console, API, or CLI. You can delegate backup management for the organization, previously afforded to your management account only, to dedicated backup administration accounts.

This removes the need for member accounts to access the management account for backup administration on behalf of their organization. Organization-wide backup administration delegation enables organizations to securely centralize their AWS Backup management at scale.
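Registration itself is done from the management account through AWS Organizations. A minimal boto3 sketch, with a placeholder member account ID:

```python
import boto3

# Run these calls from the AWS Organizations management account.
org = boto3.client("organizations")

# Make sure AWS Backup has trusted access to the organization.
org.enable_aws_service_access(ServicePrincipal="backup.amazonaws.com")

# Register a member account as a delegated backup administrator (placeholder account ID).
org.register_delegated_administrator(
    AccountId="444455556666",
    ServicePrincipal="backup.amazonaws.com",
)
```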

AWS Backup introduces support for Amazon Redshift

AWS announces support for Amazon Redshift in AWS Backup, making it easier for you to centrally manage data protection of your Amazon Redshift data warehouse. You can now use AWS Backup to schedule and restore Redshift manual snapshots. Further, your backups can provide enhanced data protection with immutability, improved security with separate backup access policies, and better governance by centralizing backup and recovery.

To get started with AWS Backup for Amazon Redshift, you can use the AWS Management console, API, or CLI by creating backup policies. You assign Amazon Redshift resources to the policies, and AWS Backup automates the creation of backups of Amazon Redshift, storing the backups in an encrypted backup vault.

Using AWS Backup for Amazon Redshift gives you the option to schedule cluster level manual snapshots, which are full backups, and restore a table to the existing cluster or a cluster to a new cluster with a few clicks, simplifying data recovery.

Announcing data protection in Amazon CloudWatch Logs, helping you detect, and protect sensitive data-in-transit

AWS are excited to announce data protection in Amazon CloudWatch Logs, a new set of capabilities that leverage pattern matching and machine learning to detect and protect sensitive log data-in-transit. Amazon CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services in a single, highly scalable service.

With log data protection in Amazon CloudWatch Logs, you can now detect and protect sensitive log data-in-transit logged by your systems and applications.

Data protection in CloudWatch Logs enables customers to define and apply data protection policies that scan log data-in-transit for sensitive data and mask sensitive data that is detected. Customers select the data identifiers that are relevant to their use cases.

For example, log data protection can help with regulations such as the Health Insurance Portability and Accountability Act (HIPAA), General Data Protection Regulation (GDPR), Payment Card Industry Data Security Standard (PCI-DSS), and Federal Risk and Authorization Management Program (FedRAMP). Customers can also view data unmasked for validation via elevated AWS Identity and Access Management privileges.

Amazon CloudWatch Logs data protection is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo).

Start discovering and masking sensitive data in Amazon CloudWatch Logs using the AWS Software Development Kit (SDK), AWS Command Line Interface (CLI), AWS CloudFormation templates, or CloudWatch in the AWS Management Console.

To learn more about Amazon CloudWatch Logs data protection, you can read the blog post, developer guide, and API reference documentation. Data protection costs $0.12 per GB of data scanned. Check CloudWatch Pricing - Detecting and masking sensitive log data with data protection for an example of pricing. 
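To give a flavour of the API, here's a hedged boto3 sketch that attaches a data protection policy masking email addresses on a single log group. The log group name is a placeholder, and the data identifier list would be tailored to your own use case.

```python
import json
import boto3

logs = boto3.client("logs")

# A data protection policy pairs an Audit statement with a Deidentify (mask) statement.
policy = {
    "Name": "mask-email-addresses",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "audit",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {
            "Sid": "mask",
            "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
}

# Attach the policy to a log group (placeholder name).
logs.put_data_protection_policy(
    logGroupIdentifier="/my-app/application-logs",
    policyDocument=json.dumps(policy),
)
```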

Announcing Amazon RDS Blue/Green Deployments for safer, simpler, and faster updates

Amazon Relational Database Service (Amazon RDS) now supports Amazon RDS Blue/Green Deployments to help you with safer, simpler, and faster updates to your Amazon Aurora and Amazon RDS databases. Blue/Green Deployments create a fully managed staging environment that allows you to deploy and test production changes, keeping your current production database safe.

With a single click, you can promote the staging environment to be the new production system in as fast as a minute, with no changes to your application and no data loss. 

Use Amazon RDS Blue/Green Deployments for deploying changes to production, such as major and minor version database engine upgrades, schema updates, maintenance updates, database parameter setting changes, and scaling instances.

Blue/Green Deployments use built-in switchover guardrails that time out the promotion of the staging environment if it exceeds your maximum tolerable downtime, or if replication errors or instance health check failures are detected. 

Amazon RDS Blue/Green Deployments is now available for Amazon Aurora with MySQL compatibility 5.6 and higher, Amazon RDS for MySQL 5.7 and higher, and Amazon RDS for MariaDB 10.2 and higher in all AWS Regions (excluding AWS China Regions) and AWS GovCloud (US) Regions.
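A rough boto3 sketch of the lifecycle, where the source ARN, deployment name, identifier and target version are all placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create the fully managed staging ("green") environment from the production database.
rds.create_blue_green_deployment(
    BlueGreenDeploymentName="mysql-upgrade-test",
    Source="arn:aws:rds:us-east-1:111122223333:db:my-production-db",  # placeholder ARN
    TargetEngineVersion="8.0.31",                                      # illustrative target version
)

# ...test the green environment, then promote it. The timeout (seconds) acts as a guardrail.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier="bgd-EXAMPLE1234567890",  # identifier returned by the create call
    SwitchoverTimeout=300,
)
```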

AWS Backup launches application-aware data protection for applications defined using AWS CloudFormation

This week, AWS Backup is announcing application-aware data protection that enables you to add AWS CloudFormation stacks to backup policies, making it easier for you to back up and restore your entire applications. AWS Backup automates the data protection of applications that are defined using AWS CloudFormation stacks to create immutable application-level backups using AWS Backup Vault Lock.

Get started with application-aware data protection for applications defined using AWS CloudFormation by using the AWS Backup console, API, or CLI. Automate the data protection of applications by creating backup policies and assigning AWS CloudFormation stacks using tags or resource IDs.

Now, all supported AWS services belonging to an application stack can be protected by AWS Backup so that an entire application has a single recovery point objective (RPO), simplifying and expediting application recovery during disasters and malicious incidents.

Amazon Elastic File System introduces 1-Day Lifecycle Management Policy to help customers reduce costs for cold data sets

Amazon Elastic File System (Amazon EFS) now supports a 1-day Lifecycle Management Policy that allows you to automatically move files that haven’t been accessed in 1 day to the Amazon EFS Infrequent Access (EFS IA) storage class. You can configure this new policy option for your file system, or you can use one of the existing policy options: 7, 14, 30, 60, or 90 days.

Amazon EFS Lifecycle Management enables you to automatically move data stored on the Standard storage class that hasn’t been accessed recently to the cheaper, colder IA storage class, helping you save on storage costs. Applications that ingest and analyze large amounts of data on a daily basis, and then retain the data long-term, can benefit from a 1-day Lifecycle Management policy.
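Setting the new policy is a single call. A boto3 sketch, with a placeholder file system ID (the transition-back rule is optional):

```python
import boto3

efs = boto3.client("efs")

# Move files untouched for 1 day to EFS Infrequent Access, and pull them back on access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_1_DAY"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},  # optional
    ],
)
```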

AWS Elastic Disaster Recovery now supports cross-Region and cross-Availability Zone failback

AWS Elastic Disaster Recovery (AWS DRS) now allows you to initiate a scalable failback process for your applications running on AWS. This process helps simplify failing back recovered Amazon Elastic Compute Cloud (Amazon EC2) instances to your primary AWS Region or Availability Zone. It also allows you to perform frequent, non-disruptive recovery and failback drills for the AWS-based applications that you replicate using Elastic Disaster Recovery.

Elastic Disaster Recovery helps minimize downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. You can use Elastic Disaster Recovery to recover a wide range of applications and databases that run on supported Windows and Linux operating system versions.

For your applications running on AWS, Elastic Disaster Recovery helps increase application resilience and meet availability goals by continuously replicating your Amazon EC2 instances to a recovery site in a different AWS Region or Availability Zone.

If an unexpected event occurs, you can launch recovery instances in your recovery Region within minutes. For cross-Availability Zone failback, Elastic Disaster Recovery can replicate only the changed data back to your primary Availability Zone. You can use the new scalable failback process to launch failed back instances in your primary Region whenever you’re ready.

 

Announcing Amazon CloudWatch Internet Monitor Preview

Amazon CloudWatch Internet Monitor is a new preview feature of Amazon CloudWatch that helps application developers and network engineers continually monitor internet availability and performance metrics between your AWS-hosted applications and your application end users. Internet Monitor monitors your application through Amazon Virtual Private Clouds (VPCs), Amazon CloudFront distributions, and Amazon WorkSpaces directories.

Internet Monitor enables you to quickly visualize the impact of issues, pinpoint locations and providers that are affected, and then helps you take action to improve your end users' network experience.

You can see a global view of traffic patterns and health events, and easily drill down into information about events at different geographic granularities. If an issue is caused by the AWS network, you’ll receive an AWS Health Dashboard notification that tells you the steps that AWS is taking to mitigate the problem. Internet Monitor also provides insights and recommendations that can help you improve your users' experience by using other AWS services or by rerouting traffic to your workload through different Regions.

Internet Monitor publishes measurements to CloudWatch Metrics and CloudWatch Logs that include the geographies and networks specific to your application. It also sends health event notifications through Amazon EventBridge. AWS CloudFormation is not supported at this time but is coming soon. For more information about the AWS Regions where Internet Monitor is available, see the AWS Region table.
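Assuming the preview SDK exposes an internetmonitor client (which may change before general availability), a monitor could be created along these lines; the monitor name and VPC ARN are placeholders.

```python
import boto3

# Preview feature: the SDK surface may change; names and ARNs below are placeholders.
im = boto3.client("internetmonitor")

im.create_monitor(
    MonitorName="my-app-monitor",
    Resources=["arn:aws:ec2:us-east-1:111122223333:vpc/vpc-0123456789abcdef0"],
)

# Query recent internet health events affecting your monitored traffic.
events = im.list_health_events(MonitorName="my-app-monitor")
print(events)
```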

Amazon RDS Optimized Writes enables up to 2x higher write throughput at no additional cost

Amazon Relational Database Service (Amazon RDS) for MySQL now supports Amazon RDS Optimized Writes. With Optimized Writes you can improve write throughput by up to 2x at no additional cost. This is especially useful for RDS for MySQL customers with write-intensive database workloads, commonly found in applications such as digital payments, financial trading, and online gaming.

In MySQL, you are protected from data loss due to unexpected events, such as a power failure, using a built-in feature called the “doublewrite buffer”. But this method of writing takes up to twice as long, consumes twice as much I/O bandwidth, and reduces the throughput and performance of your database.

Starting this week, Amazon RDS Optimized Writes provides you with up to a 2x improvement in write transaction throughput on RDS for MySQL by writing only once, while still protecting you from data loss and at no additional cost. Optimized Writes uses the AWS Nitro System to reliably and durably write to table storage in one step. 

AWS IoT Core announces new Device Location feature

AWS IoT Core, a managed cloud service that lets customers connect billions of IoT devices and route trillions of messages to AWS services, announces AWS IoT Core Device Location, a new feature that makes it possible for customers to track and manage IoT devices using their location data, such as latitude and longitude coordinates.

Using AWS IoT Core Device Location, customers can optimize business processes, simplify and automate maintenance efforts, and unlock new business use cases. For example, customers’ field service team can stay informed and quickly identify the location of devices that require maintenance action.

In an IoT application, the Global Positioning System (GPS) is a commonly applied standard for locating an IoT device. But not all IoT things, especially battery-powered IoT devices, can be equipped with GPS hardware because of its high power consumption. Therefore, newer technologies such as cloud-assisted Global Navigation Satellite System (GNSS), WiFi, and cellular networks have become popular alternatives for obtaining location data for IoT devices. 

With the new Device Location feature, customers can choose the location technology that works within their business and engineering constraints, without relying on power-hungry GPS hardware. AWS IoT Core Device Location is integrated with solutions offered by AWS Partners such as Semtech, HERE, and MaxMind, enabling customers to use cloud-assisted GNSS, WiFi scan, cellular triangulation, and reverse IP lookup techniques to determine geo-coordinates for a device.

Customers can subsequently publish these geo-coordinates to AWS IoT Device Shadow or any topic of their choice to store the calculated location data. Customers can also use the new Location Action feature from AWS IoT Core Rules Engine, launched on 10/27/2022, to route the geo-coordinates to Amazon Location Service, where customers can add maps and points of interest, track resources, define geo-fencing models, and visualize device location information. 

 

Announcing Elastic Throughput for Amazon Elastic File System

Elastic Throughput is a new throughput mode for Amazon Elastic File System (Amazon EFS) that is designed to provide your applications with as much throughput as they need with pay-as-you-use pricing. Elastic Throughput is designed to further simplify running workloads and applications on AWS by providing file storage that doesn’t require any performance provisioning.

Elastic Throughput is ideal for spiky and unpredictable workloads with performance requirements that are difficult to forecast. When you enable Elastic Throughput on an EFS file system, you don’t specify or provision throughput capacity to meet your application needs; instead, with Elastic Throughput, EFS is designed to automatically deliver the throughput performance your application needs while you pay only for the amount of data read or written.

Amazon EFS already provides fully elastic, pay-as-you-use storage. With Elastic Throughput, Amazon EFS now extends its simplicity and elasticity to performance, so you don’t have to think about provisioning or planning any resources.
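Switching an existing file system over is a one-line change. A boto3 sketch with a placeholder file system ID:

```python
import boto3

efs = boto3.client("efs")

# Switch an existing file system to Elastic Throughput (placeholder file system ID).
efs.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    ThroughputMode="elastic",
)
```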

AWS Backup Audit Manager adds centralized reporting for AWS Organizations

This week, AWS Backup added centralized, multi-account reporting for AWS Organizations, making it easier for you to demonstrate compliance and meet regulatory auditing needs across your accounts and Regions.

Now, you can use your organization’s management account to generate aggregated reports on your data protection policies and retrieve operational data about your backup and recovery activities from multiple accounts and AWS Regions using AWS Backup Audit Manager.

AWS Backup enables you to centralize and automate data protection across AWS services based on organizational best practices and regulatory standards. With AWS Backup Audit Manager, you can generate auditor-ready reports to help prove compliance of your backup policies with your defined industry-specific regulatory requirements. 

To get started with centralized reporting, you can create a report plan using AWS Backup Audit Manager from your AWS Organization’s management account. Report plans allow you to generate periodic reports for your backup, copy, and restore activities as well as resource and controls compliance. You can specify the AWS accounts, organizational units (OUs), and Regions from which you want to aggregate data and customize your report delivery preferences in the report plan.
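As a hedged boto3 sketch of an organization-wide backup job report (the bucket name, account IDs, OU and Regions are placeholders, and the field names should be checked against the current AWS Backup API reference):

```python
import boto3

backup = boto3.client("backup")

# From the organization's management (or delegated administrator) account, create an
# aggregated backup job report across accounts, OUs and Regions (all placeholders).
backup.create_report_plan(
    ReportPlanName="org_backup_jobs",
    ReportDeliveryChannel={
        "S3BucketName": "my-backup-reports-bucket",
        "Formats": ["CSV"],
    },
    ReportSetting={
        "ReportTemplate": "BACKUP_JOB_REPORT",
        "Accounts": ["111122223333", "444455556666"],
        "OrganizationUnits": ["ou-examp-12345678"],
        "Regions": ["us-east-1", "us-west-2"],
    },
)
```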

Announcing Schema Conversion feature in AWS DMS

AWS Database Migration Service (AWS DMS), which helps enterprise customers migrate their databases quickly and securely to AWS, just launched a new feature called Schema Conversion. DMS Schema Conversion is a fully managed feature of AWS DMS that automatically assesses and converts the database schema to a format compatible with the target database service in AWS, enabling you to modernize your database and analytics workloads.

DMS Schema Conversion is intended for customers who plan to migrate their database and analytics workloads to AWS to help reduce licensing costs and improve performance, agility, and resilience by embracing cloud and database modernization. 

With the Schema Conversion feature built into DMS, customers can avoid the hassle of implementing piecemeal solutions, especially for heterogeneous migrations. This feature allows you to convert the schema, views, stored procedures, and functions from a source database into the schema for the target database service.

With a few clicks, you can generate an assessment report that shows the schema conversion complexity. This report provides prescriptive guidance on how to resolve any incompatibilities between the source and target database engines.

Once the schema code has been converted, migrating your database and analytics portfolio takes hours instead of weeks or months, making it possible for customers to perform the complete database migration process, from discovery and analysis to schema code conversion to data migration, using a single AWS console. 

Announcing delegated administrator for AWS Organizations

We are excited to launch delegated administrator for AWS Organizations to help you delegate the management of your Organizations policies, enabling you to govern your AWS organization and member accounts with increased agility and decentralization.

You can now allow individual lines of business, operating in member accounts, to manage policies specific to their needs. By specifying fine-grained permissions, you can balance flexibility with limiting access to your highly privileged management accounts.

You can use AWS Organizations to centrally manage and govern multiple accounts with AWS. As you scale operations and need to manage more accounts within AWS Organizations, implementing and scaling policy administration requires coordination between multiple teams, and can take more time.

You can now delegate the management of policies to designated member accounts that are known as delegated administrators for AWS Organizations. You can select any policy type — backup policies, service control policies (SCPs), tag policies, and AI services opt-out policies — and specify permissible actions.

Once access has been delegated, users with the right permissions can go to the AWS Organizations console, see and manage the policies they have permissions for, and create their own policies.
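Under the hood this uses a resource-based delegation policy attached to the organization. The boto3 sketch below is an illustrative, simplified policy that delegates backup policy management to a placeholder member account; consult the AWS Organizations documentation for the exact actions and conditions your use case needs.

```python
import json
import boto3

org = boto3.client("organizations")

# Illustrative, simplified delegation policy: allow a member account (placeholder ID)
# to manage backup policies in the organization.
delegation_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateBackupPolicyManagement",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": [
                "organizations:CreatePolicy",
                "organizations:UpdatePolicy",
                "organizations:DeletePolicy",
                "organizations:AttachPolicy",
                "organizations:DetachPolicy",
            ],
            "Resource": "*",
            "Condition": {
                "StringLikeIfExists": {"organizations:PolicyType": "BACKUP_POLICY"}
            },
        }
    ],
}

# Attach the resource-based delegation policy from the management account.
org.put_resource_policy(Content=json.dumps(delegation_policy))
```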

Amazon CodeWhisperer adds Enterprise administrative controls, simple sign-up, and support for new languages

Amazon CodeWhisperer now provides AWS administrators the ability to enable CodeWhisperer for their organization with Single Sign-On authentication. Administrators can easily integrate CodeWhisperer with their existing workforce identity solutions, provide access to users and groups, and configure organization-wide settings.

Additionally, individual users who do not have AWS accounts can now use CodeWhisperer with their personal email using AWS Builder ID. The sign-up process takes only a few minutes and enables developers to start using CodeWhisperer immediately without any wait-list. 

AWS are also expanding programming language support for CodeWhisperer. In addition to Python, Java and JavaScript, developers can now use CodeWhisperer to accelerate development on their C# and TypeScript projects.

Available in popular integrated development environments (IDEs) such as Visual Studio Code, JetBrains, AWS Cloud9, and the AWS Lambda console as part of the AWS Toolkit, CodeWhisperer seamlessly generates machine learning (ML) powered code recommendations to boost developer productivity. 

Amazon CodeWhisperer is currently in preview and you can get started by downloading the latest AWS Toolkit extension for your preferred IDE, or by enabling it for your organization from the AWS Console.

AWS Backup adds legal hold capability for extended data retention beyond lifecycle policies

AWS Backup now offers you the ability to create legal holds on your protected data beyond your defined retention policies, for legal and auditing purposes. Legal holds prevent your backups from being deleted after the expiration of their retention period, until your backups are explicitly released from legal hold.

With this feature, AWS Backup allows you to manage legal hold requests at scale and helps you prove compliance to outside counsel, auditors, and related third parties. Together with AWS Backup Vault Lock, this new capability has been assessed by Cohasset Associates for use in environments that are subject to SEC Rule 17a-4(f), FINRA Rule 4511, and CFTC Regulation 1.31. A copy of the Cohasset Associates assessment report can be downloaded from the Backup Vault Lock technical documentation.

You can get started with legal holds by using the AWS Backup console, API, or CLI. To create a new legal hold, you can select one or more resources or backup vaults to retain under this hold. You can further specify a creation date range for your backups.

After adding tags and descriptions to categorize your legal holds, you can activate the legal hold on all selected backups.
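A hedged boto3 sketch of creating a hold, where the vault name and date range are placeholders:

```python
from datetime import datetime, timezone

import boto3

backup = boto3.client("backup")

# Place a legal hold on recovery points in a vault within a date range.
backup.create_legal_hold(
    Title="case-2022-001",
    Description="Hold backups relevant to legal case 2022-001",
    RecoveryPointSelection={
        "VaultNames": ["Default"],               # placeholder vault name
        "DateRange": {
            "FromDate": datetime(2022, 1, 1, tzinfo=timezone.utc),
            "ToDate": datetime(2022, 11, 30, tzinfo=timezone.utc),
        },
    },
)
```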

 


Getting_Started_gcp_logo
Google Cloud Releases and Updates
Source: cloud.google.com

 

AlloyDB for PostgreSQL

The AlloyDB Clusters page of the Google Cloud console displays summary cards and a resource table that provide an overview on the overall health of your databases. This helps you monitor the real-time performance of your database fleet.

Anthos Clusters on bare metal

Anthos clusters on bare metal 1.13.2 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.13.2 runs on Kubernetes 1.24.

Apigee Integration

On November 29, 2022, GCP released an updated version of the Apigee Integrations software.

Integration variable color code

The per-data-type color coding of integration variables has been removed and replaced with a single uniform color. Integration variables are no longer color coded (green, blue, orange) based on their data type.

See Format of an integration variable.

Data Mapping editor

  • The background color of the Input and Output rows has been changed to a single uniform color.
  • Input rows now include line breaks and indentation based on the mapping functions used, to improve readability and structure recognition.
  • A confirmation dialog is displayed before proceeding to delete an entire Input row.

See Data Mapping editor.

 

BigQuery

BigQuery now supports querying Apache Iceberg tables that are created by open source engines. This feature is in preview.

BigQuery now supports the following features when you load data:

These features are generally available (GA).

Cloud Billing

View granular cost data from Cloud Run instances in Cloud Billing exports to BigQuery

You can now view granular Cloud Run cost data in the Google Cloud Billing detailed export. Use the resource.global_name field in the export to view and filter your Cloud Run instances.

Review the schema of the Detailed cost data export.

View granular cost data from Cloud Function instances in Cloud Billing exports to BigQuery

You can now view granular Cloud Function cost data in the Google Cloud Billing detailed export. Use the resource.global_name field in the export to view and filter your Cloud Function instances.

Review the schema of the Detailed cost data export.
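As a quick sketch of how you might use that field, the Python snippet below runs a BigQuery query against a detailed billing export table (the project, dataset and table names are placeholders) to total Cloud Run cost per resource.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder project, dataset and table for the detailed billing export.
query = """
    SELECT
      resource.global_name AS resource_name,
      SUM(cost) AS total_cost
    FROM `my-project.billing_export.gcp_billing_export_resource_v1_XXXXXX_XXXXXX_XXXXXX`
    WHERE service.description = 'Cloud Run'
    GROUP BY resource_name
    ORDER BY total_cost DESC
    LIMIT 20
"""

for row in client.query(query).result():
    print(row["resource_name"], row["total_cost"])
```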

Cloud Build

Users can generate Supply chain Levels for Software Artifacts (SLSA) build provenance information for standalone Java and Python packages when they upload artifacts to Artifact Registry using new fields available in the Cloud Build config file. This feature is in public preview. For more information, see Build and test Java applications and Build and test Python applications.

Cloud Composer

GCP are currently experiencing an issue with gcloud CLI version 410.0.0. Some Composer commands return non-zero error codes along with an additional "gcloud crashed (TypeError): 'NoneType' object is not callable" output message.

This issue doesn't impact the functionality provided by the commands when used in interactive mode. It may contribute to misleading error stack traces and cause failures when using the commands programmatically since it returns non-zero error codes.

The following issue affects only CMEK-encrypted Composer environments for which a label update operation was performed in Composer 1 versions 1.18.3 and higher, and Composer 2 versions between 2.0.7 and 2.0.28.

Updating labels in CMEK-encrypted Composer environments leads to reconfiguring the bucket to use a Google-managed key instead of the CMEK key for newly added or modified objects in the bucket. This issue doesn't cause changes to the bucket's access settings.

  • Please refrain from updating labels in your CMEK-encrypted Composer environments until the issue is fixed.
  • If you already performed the update, reconfigure the environment Cloud Storage bucket to use the original CMEK key. See Use customer-managed encryption keys.

Cloud Data Loss Prevention

The NEW_ZEALAND_IRD_NUMBER infoType detector is available in all regions.

The VAT_NUMBER infoType detector is available in all regions. Currently, this detector identifies VAT numbers from France, Germany, Hungary, Indonesia, Italy, and the Netherlands.

For more information about all built-in infoTypes, see InfoType detector reference.

Cloud Functions

Cloud Functions has added support for a new runtime, Node.js 18, at the Preview release level.

Cloud SQL for PostgreSQL

The changes listed in the October 19th release notes entry for PostgreSQL minor versions, extension versions, and plugin versions have been postponed.

Dataplex

Dataplex Source and Sink plugins are generally available (GA) in Cloud Data Fusion for ingesting and processing data.

Error Reporting

Error Reporting is a Virtual Private Cloud (VPC) supported service.

GKE

Kubernetes control plane logs are now Generally Available. You can now configure GKE clusters with control plane version 1.22.0 or later to export to Cloud Logging logs emitted by the Kubernetes API server, Scheduler, and Controller Manager.

These logs are stored in Cloud Logging and can be queried in the Cloud Logging Log Explorer or Cloud Logging API. These logs can also be sent to Google Cloud Storage, BigQuery, or Pub/Sub using the Log Router.

You can now use deprecation insights to identify clusters on versions 1.23 and earlier that use Docker-based node images, which are unsupported on GKE version 1.24 and later.

Google Cloud Armor

Three new rate limiting keys are now Generally Available:

  • HTTP-PATH
  • SNI
  • REGION-CODE

For more information about using rate limiting keys, see the Rate limiting overview.

Google Cloud VMware Engine

Zerto Solution version 9.5u1 is now supported as a disaster recovery solution with VMware Engine. Learn more about setting up Zerto Solution.

Preview: VMware Engine private clouds support the addition of a Trusted Platform Module (TPM) 2.0 virtual cryptoprocessor to a virtual machine.

For details about this feature, see About Virtual Trusted Platform Module.

Pub/Sub

Exactly once delivery is now GA.

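If you consume messages with the Python client library, exactly-once delivery pairs with acknowledgements you can confirm. A minimal sketch, assuming a subscription that already has exactly-once delivery enabled (project and subscription names are placeholders):

```python
from concurrent import futures

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
# Placeholder project and subscription; the subscription must have
# exactly-once delivery enabled.
subscription_path = subscriber.subscription_path("my-project", "my-subscription")


def callback(message):
    print(f"Processing {message.message_id}")
    # With exactly-once delivery, ack_with_response() lets you confirm the ack succeeded.
    ack_future = message.ack_with_response()
    try:
        ack_future.result(timeout=30)
    except Exception as exc:  # ack failed; the message may be redelivered
        print(f"Ack failed: {exc}")


streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull_future.result(timeout=60)
except futures.TimeoutError:
    streaming_pull_future.cancel()
```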

Text-to-Speech

Text-to-Speech now offers additional Neural2 voices across 9 locales with 40+ speakers. Voices are available in the us-central1, us, and eu endpoints. See the supported voices page for a complete list of voices and audio samples.

Traffic Director

Traffic Director deployment with automatic Envoy injection for Google Kubernetes Engine Pods currently installs Envoy version v1.24.0.

Vertex AI

AutoML image model updates

AutoML image classification and object detection now support a higher-accuracy model type. This model is available in Preview.

For information about how to train a model using the higher accuracy model type, see Begin AutoML model training.

Batch prediction is currently not supported for this model type.

Cloud Logging for Vertex AI Pipelines is now generally available (GA). For more information, see View pipeline job logs.

Workflows

Support for an Application Integration connector is available in Preview.

 


 

Getting_Started_Azure_Logo
Microsoft Azure Releases And Updates
Source: azure.microsoft.com

 
 

Azure SQL—Public preview updates for late November 2022

Public preview enhancements and updates released for Azure SQL in late November 2022.

Azure SQL—General availability updates for late November 2022

General availability enhancements and updates released for Azure SQL in late November 2022.

Public preview: Enhanced metrics for Azure Database for PostgreSQL – Flexible Server

Have more visibility and fine-grained control of your Azure Database for PostgreSQL – Flexible Server instances with new enhanced metrics.

General availability: Cross-region read replicas for Azure Cosmos DB for PostgreSQL

 

Build applications with cross-region read replicas for read scalability and disaster recovery with Azure Cosmos DB for PostgreSQL.

Generally available: Azure Blob Storage integration with Azure Cosmos DB for PostgreSQL

Now you can interact with your Azure Blob Storage directly from Azure Cosmos DB for PostgreSQL, making it a perfect staging environment for getting your data into the cloud.

General availability: Azure Cosmos DB for PostgreSQL Citus 11.1 support

Run the latest Citus 11.1 version on your clusters in Azure Cosmos DB for PostgreSQL, a managed service for distributed Postgres databases.

Generally available: PostgreSQL 15 support in Azure Cosmos DB for PostgreSQL

Now you can select PostgreSQL 15 when creating PostgreSQL clusters in Azure Cosmos DB for PostgreSQL, a managed service for distributed Postgres databases.

Generally available: Additional Always Free Services for Azure Free Account and PAYG 

With an Azure free account, you can explore with free amounts of 55+ always free services.

General availability: 12 months free services for new Azure PAYG customers

New Azure pay-as-you-go (PAYG) customers can now explore free amounts of popular services for their first 12 months.

Generally available: Azure Blob CSI driver support in AKS

You no longer need to handle manual installation and lifecycle management of the open-source Azure Blob CSI driver when using AKS.

Public preview: Azure SQL Trigger for Azure Functions

You can now build application logic in Azure Functions apps that is driven by data changes in an Azure SQL database.

Public preview: Durable Functions support for .NET 7.0 isolated model

You can use .NET 7.0 to write Durable Functions in the isolated worker model.

Public preview: Inbound IP restrictions support in Azure Container Apps

You can now restrict inbound traffic to your Azure Container Apps by IP without using a custom solution.

Public preview: GitHub action to build and deploy to Azure Container Apps

Azure Container Apps now supports, in public preview, a new GitHub action that builds and deploys container apps from GitHub Actions workflows.

Public preview: Azure Pipelines task to build and deploy to Azure Container Apps

 

You can now use a new Azure Pipelines task to build and deploy container apps from Azure DevOps.

Public preview: Build and deploy to Azure Container Apps without a Dockerfile from the Azure CLI

Azure Container Apps now supports building container images from source code without a Dockerfile.

Public preview: Azure HX series and HBv4 series virtual machines

Perfect for a range of HPC workloads, these new virtual machine series can give you a significant performance boost over the previous HB generation series.

 

Public preview: Go language support on Azure App Service

Go language support is available as an experimental language release on Linux App Service.

General availability: Azure Monitor Logs, custom log API and ingestion-time transformations

New Custom Log API and Ingestion-time Transformations announced in Azure Monitor Logs.

Generally available: Day 0 support for .NET 7.0 on App Service

App Service now offers Day 0 support for .NET 7.0.

General availability: Azure Monitor agent custom and IIS logs

The Azure Monitor agent team has shipped custom log collection and IIS log collection.

Public preview: Premium series hardware for Azure SQL Database Hyperscale

Leverage new premium-series hardware based on the latest Intel CPUs that offers significantly improved performance and scalability on Azure SQL Database Hyperscale.

Private preview: Azure Kubernetes Service (AKS) Backup

Azure Backup is announcing the private preview of AKS Backup, providing the ability to back up and restore AKS clusters.

 

 
 

All_Hava_Diagrams
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
 
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free. 

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
 
Check out the 14 day free trial here (includes forever free tier):


Learn More!

 

Topics: aws azure gcp news
Team Hava

Written by Team Hava

The Hava content team
