This week's roundup of all the cloud news.
Here's a cloud roundup of all things Hava, GCP, Azure and AWS for the week ending Friday 10th June 2022.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
AWS Updates and Releases
Source: aws.amazon.com
Amazon Transcribe real-time streaming is now available in AWS GovCloud (US) Regions
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. Today, we are excited to announce availability of Amazon Transcribe streaming APIs in AWS GovCloud (US) Regions.
Live streaming transcription is used across industries in contact center applications, broadcast events, meeting captions, and e-learning. For example, contact centers can use transcription to remove note-taking requirements and improve agent productivity by providing recommendations for the next best action. Companies can also make their live sports events or real-time meetings more accessible with automatic subtitles. In addition, customers who have a large social media presence can use Amazon Transcribe to help moderate content and detect inappropriate behavior or speech in user-generated content.
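As a rough illustration of what a streaming integration looks like in code, here is a minimal Python sketch using the open-source amazon-transcribe streaming SDK. The GovCloud region name, sample rate, and the `chunks` audio source are assumptions for the example, not part of the announcement.

```python
# Minimal sketch: stream PCM audio chunks to Amazon Transcribe and print partial results.
# Assumes the open-source "amazon-transcribe" streaming SDK (pip install amazon-transcribe)
# and that the AWS GovCloud (US-West) region shown is the one you are targeting.
import asyncio
from amazon_transcribe.client import TranscribeStreamingClient
from amazon_transcribe.handlers import TranscriptResultStreamHandler


class PrintHandler(TranscriptResultStreamHandler):
    async def handle_transcript_event(self, transcript_event):
        # Print every (partial or final) transcript alternative as it arrives.
        for result in transcript_event.transcript.results:
            for alt in result.alternatives:
                print(alt.transcript)


async def transcribe(chunks):
    client = TranscribeStreamingClient(region="us-gov-west-1")  # assumed region
    stream = await client.start_stream_transcription(
        language_code="en-US",
        media_sample_rate_hz=16000,
        media_encoding="pcm",
    )
    handler = PrintHandler(stream.output_stream)

    async def send_audio():
        async for chunk in chunks:  # `chunks` is an async iterator of raw PCM bytes (assumed)
            await stream.input_stream.send_audio_event(audio_chunk=chunk)
        await stream.input_stream.end_stream()

    await asyncio.gather(send_audio(), handler.handle_events())
```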
Starting this week, AWS customers can view CloudTrail event logs corresponding to a change request using AWS Systems Manager Change Manager ServiceNow Connector. The integration helps customers understand which resources were impacted by the change request, thereby providing customers with more visibility into the change request execution. AWS Systems Manager Change Manager helps customers request, approve, implement, and report on operational changes to their application configuration and infrastructure on AWS and on-premises. Using AWS Service Management Connector, customers can create and approve change requests, and get the CloudTrail events associated with these change requests in the ServiceNow console, making the integration with AWS Change Manager even deeper.
This AWS Service Management Connector release also introduces integrations of AWS Health service status and availability, AWS Systems Manager Incident Manager incident creation and management, and AWS Support dual sync using Amazon EventBridge, along with enhancements to existing integrations with AWS Service Catalog and AWS Config within ServiceNow. The Connector also provides existing integrations with AWS Systems Manager OpsCenter, AWS Systems Manager Automation and AWS Security Hub, which help simplify cloud provisioning, operations and resource management, and enable streamlined Service Management governance and oversight of AWS services.
This week, you can export features into Amazon SageMaker Feature Store faster than ever with export functionality now available in Amazon SageMaker Data Wrangler. Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface. With SageMaker Data Wrangler’s data selection tool, you can quickly select data from multiple data sources, such as Amazon S3, Amazon Athena, Amazon Redshift, AWS Lake Formation, Amazon SageMaker Feature Store, and Snowflake. Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, update, retrieve, and share machine learning (ML) features.
Starting this week you can create and export features to Amazon SageMaker Feature Store in just a few clicks with Amazon SageMaker Data Wrangler. Previously, engineering features and exporting them into a feature store when preparing data for machine learning would require writing a substantial amount of code. You can now engineer your features using SageMaker Data Wrangler’s visual point-and-click interface and export features to SageMaker Feature Store in just a few clicks. You can also now easily browse feature groups, create new feature groups, and validate feature group schemas all from within SageMaker Data Wrangler.
Amazon RDS for SQL Server now supports SQL Server 2014 SP3 CU4 SU
A new minor version of Microsoft SQL Server is now available on Amazon Relational Database Service (Amazon RDS) for SQL Server, offering performance and security fixes. Amazon RDS for SQL Server supports the new minor version for Microsoft SQL Server 2014 on the Express, Web, Standard, and Enterprise Editions.
AWS encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or using the AWS CLI.
Learn more about upgrading your database instances from the Amazon RDS User Guide. You can find the release note for the new minor version here: SQL Server 2014 SP3 CU4 SU - 12.00.6433.1.
Amazon EC2 R5n instances now available in additional regions
Starting this week, Amazon EC2 R5n instances are available in the Africa (Cape Town) and Europe (Milan) Regions.
Based on the AWS Nitro System, these instances make 100 Gbps networking available to network-bound workloads, and Amazon Elastic Fabric Adapter (EFA) for low latency networking workloads. Customers can take advantage of this improved network performance to run a variety of network-bound workloads such as High Performance Computing (HPC), data analytics, machine learning (ML), Big Data, and data lake applications, as well as accelerate data transfer to and from Amazon S3, reducing the data ingestion time for applications and speeding up delivery of results. Workloads on these instances will continue to take advantage of the security, scalability and reliability of Amazon’s Virtual Private Cloud (VPC).
Amazon SageMaker Data Wrangler now enables model training with Amazon SageMaker Autopilot
Starting this week, you can invoke SageMaker Autopilot from SageMaker Data Wrangler to automatically train, tune and build machine learning models. SageMaker Data Wrangler reduces the time to aggregate and prepare data for machine learning (ML) from weeks to minutes. SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. Previously, customers used Data Wrangler to prepare their data for machine learning and Autopilot for training machine learning models independently. With this unified experience, you can now prepare your data in SageMaker Data Wrangler and easily export to SageMaker Autopilot for model training. With just a few clicks, you can automatically build, train, and tune machine learning models, making it easier to automatically employ state-of-the-art feature engineering techniques, train high quality machine learning models, and gain insights from your data faster.
To prepare data for machine learning, select File > New Flow from SageMaker Studio. After preparing your data in SageMaker Data Wrangler, you can either preview your prepared data and choose “Export and Train”, or click + next to the node in the data flow and select “Train model”. This lets you initiate training and model building in SageMaker Autopilot.
Split data into train and test sets in a few clicks with Amazon SageMaker Data Wrangler
This week AWS are announcing the general availability of splitting data into train and test splits with Amazon SageMaker Data Wrangler. Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface. With SageMaker Data Wrangler’s data selection tool, you can quickly select data from multiple data sources, such as Amazon S3, Amazon Athena, Amazon Redshift, AWS Lake Formation, Snowflake, and Databricks Delta Lake.
Starting this week, you can now split your data into train and test sets in just a few clicks with Data Wrangler. Previously, data scientists had to write code to split their data into train and test sets before training ML models. With SageMaker Data Wrangler’s new train-test split transform, you can now split your data into train, test, and validation sets for use in downstream model training and validation. SageMaker Data Wrangler also provides various types of splits, including randomized, ordered, stratified, and key-based splits, along with the option to specify how much data should go in each split. For example, if you create a random split of your data into a training set and test set, you can train a machine learning model on the training set and then evaluate your machine learning model on the test set. Evaluating the model on data seen during training can be biased, so setting test data aside prior to training is crucial. As a result, evaluating model accuracy on the test set data provides a real-world estimate of model performance.
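Outside of Data Wrangler's visual transform, the same idea can be sketched in a few lines of Python with scikit-learn. This is purely an illustration of randomized and stratified splits, not the Data Wrangler implementation, and the dataset and column names are hypothetical.

```python
# Illustration only: randomized and stratified train/test splits with scikit-learn.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")  # hypothetical dataset
X, y = df.drop(columns=["churned"]), df["churned"]

# Randomized split: 80% train / 20% test, fixed seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Stratified split: preserve the class balance of `churned` in both sets.
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```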
Incident Manager from AWS Systems Manager now streamlines responses to ServiceNow Incidents
Starting this week, customers who use ServiceNow can respond, investigate and resolve incidents affecting their AWS-hosted applications using AWS Systems Manager Incident Manager and the AWS Service Management Connector. AWS Systems Manager is the operations hub for AWS applications and resources that helps automate reactive processes to quickly diagnose and remediate operational issues. With the Incident Manager integration with ServiceNow, customers can now automate their incident response plans in AWS Systems Manager and automatically synchronize their incidents into ServiceNow. This feature enables faster resolution of critical application availability and performance issues without disrupting existing workflows in ServiceNow. The AWS Service Management Connector also integrates with AWS Systems Manager OpsCenter to view, investigate, and resolve operational issues related to your AWS resources.
This AWS Service Management Connector release also introduces integrations of AWS Health service status and availability, AWS CloudTrail event log visibility with AWS Systems Manager Change Manager, and AWS Support dual sync using Amazon EventBridge, along with enhancements to existing integrations with AWS Service Catalog and AWS Config within ServiceNow. The Connector also provides existing integrations with AWS Systems Manager OpsCenter, AWS Systems Manager Automation and AWS Security Hub, which help simplify cloud provisioning, operations and resource management, and enable streamlined Service Management governance and oversight of AWS services.
AWS Health Dashboard streamlines service transparency via Connector for ServiceNow
Starting this week, AWS customers can now view AWS Health and service status availability powered by AWS Health Dashboard in the Connector for ServiceNow. AWS Health provides ongoing visibility into resource performance and the availability of AWS services. Customers can view the AWS Health Dashboard to get relevant and timely information to help manage events in progress, prepare for planned activities and provide information on accounts and services. AWS Health Dashboard delivers alerts and notifications initiated by changes in the health of AWS resources for near-instant event visibility and guidance to help accelerate troubleshooting.
This AWS Service Management Connector release also introduces integrations of AWS Systems Manager Incident Manager incident creation and management, AWS CloudTrail event log visibility with AWS Systems Manager Change Manager, and AWS Support dual sync using Amazon EventBridge, along with enhancements to existing integrations with AWS Service Catalog and AWS Config within ServiceNow. The Connector also provides existing integrations with AWS Systems Manager OpsCenter, AWS Systems Manager Automation and AWS Security Hub, which help simplify cloud provisioning, operations and resource management, and enable streamlined Service Management governance and oversight of AWS services.
Amazon Chime SDK now supports live transcription in AWS GovCloud (US)
Developers can now use live transcription with Amazon Chime SDK in AWS GovCloud (US) Regions to generate live audio transcriptions. The Amazon Chime SDK lets developers add intelligent real-time audio, video, and screen share to their web and mobile applications. AWS GovCloud (US) allows U.S. government agencies and contractors to move communication workloads into the cloud while addressing their specific regulatory and compliance requirements.
Amazon Chime SDK lets developers host WebRTC media sessions in both AWS GovCloud (US) Regions. Amazon Chime SDK integrates with Amazon Transcribe to deliver “who said what” transcription information directly to each session participant. Each user’s audio is processed individually to help improve accuracy when multiple people are talking. The audio from the top two active talkers is sent to Amazon Transcribe, in separate channels, via a single stream. Developers can use the transcription information to render real-time machine-generated captions or dynamically build a session transcript.
Live transcription uses Amazon Transcribe in the AWS GovCloud (US-West) Region, and provides access to all the streaming languages supported by Amazon Transcribe, as well as features such as automatic language identification, vocabulary filters, content identification, custom vocabularies, and custom language models. Standard Amazon Transcribe costs apply.
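For developers working at the API level, a minimal boto3 sketch of turning on live transcription for an existing meeting might look like the following. The meeting ID is a placeholder, and the region and language settings are assumptions; check the exact configuration fields against the Chime SDK meetings API reference.

```python
# Sketch: start Amazon Transcribe-powered live transcription for a Chime SDK meeting.
# MeetingId is a placeholder; region and language settings are assumptions.
import boto3

meetings = boto3.client("chime-sdk-meetings", region_name="us-gov-west-1")

meetings.start_meeting_transcription(
    MeetingId="<meeting-id>",  # placeholder
    TranscriptionConfiguration={
        "EngineTranscribeSettings": {
            "LanguageCode": "en-US",
            "Region": "us-gov-west-1",  # assumed Transcribe region for GovCloud
        }
    },
)
```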
Introducing Amazon EC2 R6id instances
AWS announces the general availability of new memory-optimized Amazon EC2 R6id instances. R6id instances are powered by 3rd generation Intel Xeon Scalable (Ice Lake) processors with an all-core turbo frequency of 3.5 GHz, up to 7.6 TB of local NVMe-based SSD block-level storage, and up to 15% better price performance than R5d instances. Furthermore, R6id instances also offer up to 58% higher TB of storage per vCPU and 34% lower cost per TB, and come with always-on memory encryption using Intel Total Memory Encryption (TME).
R6id instances are ideal for memory-intensive workloads, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics. They will also benefit applications that need temporary data storage, such as caches and scratch files. To provide increased scalability, R6id instances offer a new 32xlarge size that has 33% more vCPU and memory than previous generation instances and up to 20% higher memory bandwidth per vCPU; the 32xlarge size also integrates with the Elastic Fabric Adapter, which enables low latency and highly scalable inter-node communication. R6id instances also provide up to 50 Gbps of networking speed and 40 Gbps of bandwidth to the Amazon Elastic Block Store (EBS).
AWS Service Catalog is now available in Asia Pacific (Jakarta)
AWS Service Catalog is now available to customers in the AWS Asia Pacific (Jakarta) Region in Indonesia.
AWS Service Catalog enables organizations on AWS to create, govern, and manage a catalog of Infrastructure as Code (IaC) templates that are approved for use on AWS. These IaC templates can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog administrators can centrally curate and share commonly deployed templates with their teams to achieve consistent governance and meet compliance requirements. End-users such as DevOps engineers and data scientists can then quickly discover and self-service approved AWS resources that they need to use to perform their daily job functions.
AWS Application Migration Service now supports automated application modernizations
AWS Application Migration Service is announcing support for new automated application modernizations. AWS Application Migration Service allows you to quickly rehost applications on AWS. It automatically converts your source servers from physical, virtual, or cloud infrastructure to run natively on AWS.
You can now use AWS Application Migration Service to configure application modernizations in the AWS Management Console before you migrate your servers. The modernizations will then be automatically applied to your migrated servers. The following post-migration modernization features are now supported:
- Convert CentOS to Rocky Linux distribution. When you migrate using this feature, AWS Application Migration Service replicates your source server to an Amazon EC2 instance and converts the operating system from CentOS to Rocky Linux.
- Change SUSE subscription to AWS-provided SUSE subscription. When you choose this feature, you can simplify billing and subscription maintenance for your migrated SUSE servers, by allowing AWS to manage these activities.
- Configure disaster recovery on migrated servers. This simplifies the setup of AWS Elastic Disaster Recovery (AWS DRS) on your migrated servers. AWS DRS helps increase resilience by allowing you to quickly recover applications in a different AWS Region after unexpected events.
In addition to these modernization features, AWS Application Migration Service now supports SUSE Linux Enterprise Server 12 Service Packs 1, 2, and later.
Starting this week, these new modernization features are available in all of the AWS Regions where AWS Application Migration Service is supported. Access the AWS Regional Services List for the most up-to-date availability information.
Announcing AWS Cost Allocation Tag API
AWS Cost Allocation Tags now offers APIs that you can use to activate and deactivate your Cost Allocation Tags. After you activate a Cost Allocation Tag, it will appear in your cost management products, such as AWS Cost Explorer and AWS Cost and Usage Report. You can use Cost Allocation Tags to filter, categorize, and track your AWS cost and usage information. Previously, you had to activate and deactivate Cost Allocation Tags on the Cost Allocation Tags page in the AWS Billing Console. With this launch, you can use the ListCostAllocationTags API to list all tags and use the UpdateCostAllocationTagsStatus API to activate and deactivate Cost Allocation Tags.
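For example, a small boto3 sketch against the Cost Explorer client could list inactive cost allocation tags and activate one of them. The tag key shown is a placeholder for illustration.

```python
# Sketch: list inactive cost allocation tags and activate one with the new APIs.
import boto3

ce = boto3.client("ce")  # Cost Explorer client exposes the cost allocation tag APIs

# List tag keys that are not yet activated for cost allocation.
inactive = ce.list_cost_allocation_tags(Status="Inactive")
for tag in inactive.get("CostAllocationTags", []):
    print(tag["TagKey"], tag["Status"])

# Activate a tag key (placeholder) so it appears in Cost Explorer and the CUR.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "team", "Status": "Active"}]
)
```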
AWS Mainframe Modernization is now generally available
Introduced at re:Invent in November 2021, AWS Mainframe Modernization is now generally available for AWS customer and partner use. Mainframe Modernization is a unique platform that allows you to migrate and modernize your on-premises mainframe workloads to a managed and highly available runtime environment on AWS. The service currently supports two main migration patterns—replatforming and automated refactoring—so you can select your best-fit migration path and associated toolchains based on your migration assessment results.
Use Mainframe Modernization to easily migrate and modernize your mainframe applications, increasing agility and reducing costs. You can break up and manage your complete migration with infrastructure, software, and tools to refactor or replatform legacy applications. Deploy, test, run, maintain, operate, and evolve migrated applications in the runtime environment with no upfront costs.
Amazon Connect forecasting, capacity planning, and scheduling (preview) are now available in the Europe (London) AWS Region. Machine-learning powered capabilities make it easier for contact center managers to help predict contact volumes and average handle time with high accuracy, determine ideal staffing levels, and optimize agent schedules to ensure they have the right agents at the right time. This helps businesses optimize their operations, meet service level goals, and improve agent and customer satisfaction. Getting started takes just a click, eliminating the need to build custom applications or integrate with third-party products.
New Amplify Flutter supports customizable authentication flows
AWS Amplify Flutter introduces support for creating customizable authentication flows using Amazon Cognito Lambda triggers. Using this functionality, developers are able to set up customizations for the login experience in their Flutter apps, such as creating OTP login flows or adding CAPTCHA to their Flutter app.
Developers using Amplify Flutter can set up the Auth category to use custom challenges, such as one-time passwords or CAPTCHA, using the Amplify CLI. The Amplify CLI provisions Lambda functions in the backend, which then interact with Amazon Cognito to verify the challenges. Developers can build the custom challenges in steps, such as providing a one-time password after using an email for login, and a user of the app will have to clear each step in the challenges to get authenticated.
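Since the code examples in this post are sketched in Python rather than Dart, here is a rough outline of the underlying Amazon Cognito custom-challenge round trip that such a flow drives; the Amplify Flutter Auth category wraps this for you. The user pool client ID, username, and challenge answer are placeholders.

```python
# Sketch: the Cognito CUSTOM_AUTH round trip behind a custom challenge (e.g. OTP or CAPTCHA).
# ClientId, username, and the challenge answer are placeholders for illustration.
import boto3

idp = boto3.client("cognito-idp")

# Step 1: start the custom auth flow; Cognito invokes the DefineAuthChallenge /
# CreateAuthChallenge Lambda triggers to issue the first challenge.
start = idp.initiate_auth(
    ClientId="<app-client-id>",
    AuthFlow="CUSTOM_AUTH",
    AuthParameters={"USERNAME": "jane@example.com"},
)

# Step 2: answer the challenge (e.g. the one-time password the user received);
# the VerifyAuthChallengeResponse trigger decides whether it passes.
answer = idp.respond_to_auth_challenge(
    ClientId="<app-client-id>",
    ChallengeName="CUSTOM_CHALLENGE",
    Session=start["Session"],
    ChallengeResponses={"USERNAME": "jane@example.com", "ANSWER": "123456"},
)

print(answer.get("AuthenticationResult", {}).get("AccessToken"))
```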
AWS Security Hub now receives AWS Config managed and custom rule evaluation results
AWS Security Hub now automatically receives AWS Config managed and custom rule evaluation results as security findings. AWS Config allows security and compliance professionals to assess, audit, and evaluate the configurations of their AWS resources via Config rules, which evaluate the compliance of AWS resources against specified policies. Examples of resource misconfigurations detected by Config rules include publicly-accessible Amazon S3 buckets, unencrypted EBS volumes, and overly-permissive IAM policies. When a Config rule evaluation passes or fails, you will now see a ‘passed’ or ‘failed’ finding for that evaluation in Security Hub. Any updates to the status of the Config rule evaluation will be automatically updated in the Security Hub finding. This new integration between Security Hub and AWS Config expands the centralization and single pane of glass experience by consolidating your Config evaluation results alongside your other security findings, allowing you to more easily search, triage, investigate, and take action on your security findings.
Available globally, AWS Security Hub gives you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, and AWS IAM Access Analyzer, as well as from over 65 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard. You can also take action on these findings by investigating findings in Amazon Detective and by using Amazon CloudWatch Event rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or to custom remediation playbooks.
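To pull just the Config-generated findings programmatically, a boto3 sketch along these lines could be used. The "Config" product-name filter value is an assumption worth verifying against one of your findings.

```python
# Sketch: list Security Hub findings that originate from AWS Config rule evaluations.
# The "Config" ProductName value is an assumption; inspect one finding to confirm it.
import boto3

securityhub = boto3.client("securityhub")

paginator = securityhub.get_paginator("get_findings")
pages = paginator.paginate(
    Filters={
        "ProductName": [{"Value": "Config", "Comparison": "EQUALS"}],
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
    }
)

for page in pages:
    for finding in page["Findings"]:
        print(finding["Title"], finding["Compliance"]["Status"])
```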
Amazon Neptune simplifies graph analytics and machine learning workflows with Python integration
You can now run graph analytics and machine learning tasks on graph data stored in Amazon Neptune using an open-source Python integration that simplifies data science and ML workflows. With this integration, you can read and write graph data stored in Neptune using Pandas DataFrames in any Python environment, such as a local Jupyter notebook instance, Amazon SageMaker Studio, AWS Lambda, or other compute resources. From there, you can run graph algorithms, such as PageRank and Connected Components, using open-source libraries like iGraph, NetworkX, and cuGraph.
Today’s launch helps customers to build and innovate faster by simplifying workflows to extract analytical insights for use cases such as knowledge graphs, fraud detection, entity resolution, and security posture management. For example, you can run a Connected Components algorithm on your Neptune data using NetworkX to identify strongly linked communities of users. You can then run PageRank to find the most influential users in each community and update these users with a “Most Influential” label in Neptune. You can also use Python libraries such as XGBoost to compute embeddings or make predictions on graph data, the SageMaker Python SDK to train and deploy machine learning models, or the Deep Graph Library, which is available today with Neptune ML.
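As a simplified sketch of that workflow, suppose you have already pulled an edge list out of Neptune into a Pandas DataFrame via the new Python integration; the graph analytics step with NetworkX could then look like the following. The column names, sample data, and write-back step are assumptions for illustration.

```python
# Sketch: run Connected Components and PageRank with NetworkX on edges read from Neptune.
# Assumes `edges` was produced by the Neptune/Pandas integration with
# "source" and "target" columns; the values here are illustrative.
import pandas as pd
import networkx as nx

edges = pd.DataFrame(
    {"source": ["u1", "u2", "u3", "u4"], "target": ["u2", "u3", "u1", "u5"]}
)

G = nx.from_pandas_edgelist(edges, source="source", target="target")

# Identify communities of linked users.
communities = list(nx.connected_components(G))

# Rank users by influence and pick the most influential one per community.
scores = nx.pagerank(G)
most_influential = {frozenset(c): max(c, key=scores.get) for c in communities}
print(most_influential)  # these labels could then be written back to Neptune
```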
Amazon DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class is now available in the AWS Asia Pacific (Jakarta) Region. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of data that is infrequently accessed, such as application logs, social media posts, e-commerce order history, and past gaming achievements.
Now, you can optimize the costs of your DynamoDB workloads based on your tables’ storage requirements and data access patterns. The new DynamoDB Standard-IA table class offers 60 percent lower storage cost than the existing DynamoDB Standard tables, making it the most cost-effective option for tables with storage as the dominant table cost. The existing DynamoDB Standard table class offers 20 percent lower throughput costs than the DynamoDB Standard-IA table class. DynamoDB Standard remains your default table class and the most cost-effective option for the wider variety of tables that store frequently accessed data with throughput as the dominant table cost. You can switch between DynamoDB Standard and DynamoDB Standard-IA table classes with no impact on table performance, durability, or availability and without changing your application code. For more information about using DynamoDB Standard-IA check the DynamoDB Developer Guide.
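Switching an existing table between classes is a single UpdateTable call; for instance, a boto3 sketch (the table name is a placeholder) might be:

```python
# Sketch: move an existing table to the Standard-IA table class (and back again later).
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="order-history",  # placeholder table name
    TableClass="STANDARD_INFREQUENT_ACCESS",
)

# To switch back to the default class:
# dynamodb.update_table(TableName="order-history", TableClass="STANDARD")
```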
Amazon CloudFront now supports TLS 1.3 session resumption for viewer connections
Amazon CloudFront now supports Transport Layer Security (TLS) 1.3 session resumption to further improve viewer connection performance. Amazon CloudFront has supported version 1.3 of the TLS protocol since 2020 to encrypt HTTPS communications between viewers and CloudFront. Customers that adopted the protocol have seen their connection performance improve by up to 30% compared with previous TLS versions. Starting today, customers that use TLS 1.3 will see up to 50% additional performance improvement thanks to TLS 1.3 session resumption. With session resumption, when a client reconnects to a server with which the client had an earlier TLS connection, the server decrypts the session ticket using a pre-shared key sent by the client and resumes the session. TLS 1.3 session resumption speeds up session establishment as it reduces computational overhead for both the server and the client. It also requires fewer packets to be transferred compared to a full TLS handshake.
Amazon EMR release 6.6 now supports Apache Spark 3.2, Apache Spark RAPIDS 22.02, CUDA 11, Apache Hudi 0.10.1, Apache Iceberg 0.13, Trino 367, and PrestoDB 0.267. You can use the performance-optimized version of Apache Spark 3.2 on EMR on EC2, EKS, and the recently released EMR Serverless. In addition, Apache Hudi 0.10.1 and Apache Iceberg 0.13 are available on EC2, EKS, and Serverless. Apache Hive 3.1.2 is available on EMR on EC2 and EMR Serverless. Trino 367 and PrestoDB 0.267 are only available on EMR on EC2.
Each Amazon EMR release version uses a default Amazon Linux 2 (AL2) Amazon Machine Image (AMI) for Amazon EMR. Prior to Amazon EMR 6.6, the default AMI was based on the latest Amazon Linux AMI available at the time of the Amazon EMR release. Therefore, the Amazon EMR release version was "locked" to its respective assigned AL2 AMI. This meant that any new updates to AL2 were not automatically applied unless you moved to the next Amazon EMR release or manually installed them. With Amazon EMR 6.6 and subsequent releases, every time you launch an EMR on EC2 cluster, Amazon EMR automatically uses the latest AL2 release. See the documentation to learn more.
AWS IoT Device Management announces an 80% price reduction for Secure Tunnelling
AWS are excited to announce that this week they are reducing the price of the AWS IoT Device Management Secure Tunnelling feature by 80%. With the improved cost efficiencies, customers can now scale Secure Tunnelling to access remote devices deployed behind restricted firewalls for troubleshooting, configuration updates, training, and other operational tasks for their growing IoT workloads on AWS.
Secure Tunnelling is metered per tunnel opened. We are reducing the current price per tunnel by 80% while keeping the maximum tunnel duration at 12 hours. You can define the tunnel duration upon opening the tunnel and will not be charged for making multiple client connections over a single secure tunnel. If the connection drops, you can resume your connection to the original device using the RotateTunnelAccessToken API.
AWS Security Hub is now available in the Asia Pacific (Jakarta) Region
AWS Security Hub is now available in the Asia Pacific (Jakarta) Region. You can now centrally view and manage the security posture of your AWS accounts in AWS Asia Pacific (Jakarta) Region.
Available globally, AWS Security Hub gives you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, and AWS IAM Access Analyzer, as well as from over 65 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard. You can also take action on these findings by investigating findings in Amazon Detective and by using Amazon CloudWatch Event rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or to custom remediation playbooks.
Amazon AppStream 2.0 adds larger instance sizes to the General Purpose instance family
Amazon AppStream 2.0 adds new instance sizes stream.standard.xlarge and stream.standard.2xlarge to the General Purpose instance family. stream.standard.xlarge offers 4 vCPUs and 16 GiB of memory, and stream.standard.2xlarge offers 8 vCPUs and 32 GiB of memory. These new instances provide higher-performance options for compute, memory and networking resources for a diverse set of workloads that require more system resources to run effectively. A few examples include integrated development environments, web servers and code repositories. The new instance sizes are available across all AppStream 2.0 fleet types: Always-On, On-Demand, and Elastic.
AWS DeepRacer introduces quota management
AWS DeepRacer Multi-user mode provides an exciting way for organizations to sponsor multiple AWS DeepRacer participants under one AWS account. Until now, AWS DeepRacer event organizers lacked ways to preemptively set budgets and controls for participants, or to maintain and monitor those budgets by tracking usage.
Now, AWS DeepRacer event organizers can set quotas on participants' training hours and model count, monitor spending on training and storage, enable and disable training, and view and manage models for every user in their account from the AWS DeepRacer console. Event organizers can update the default 5-hour training limit individually, for a group, or across all participants by selecting and updating quotas in the AWS DeepRacer console to match their budget.
Amazon Chime SDK announces messaging conversation APIs
Amazon Chime SDK messaging enables developers to connect business users and their customers with secure, scalable messaging in their web and mobile applications. Starting today, developers have access to new APIs that provide the ability to search for specific channels, as well as to automatically pre-fetch information when clients connect, displaying the messaging channels that require users’ attention when they open their application.
With channel search, developers can retrieve existing conversations based on the conversation members or a unique identifier, which can be useful when a user needs to recall a specific individual, or to find a channel based on an ID such as an appointment ID. With pre-fetch, developers can implement a rich chat UI for users to get an understanding of their recent updates and prioritize their attention accordingly when they first open their app. When used, the client can automatically receive a single message per channel with information such as channel members, last message, and time the last message was sent.
Amazon Personalize adds support for unstructured text in six new languages
Amazon Personalize has extended support for unstructured text in six new languages - Spanish, German, French, Portuguese, Chinese (Simplified and Traditional) and Japanese. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required. Last year, Amazon Personalize launched support for unstructured text in English which enabled customers to unlock the information trapped in their product descriptions, reviews, movie synopses or other unstructured text to generate highly relevant recommendations for users. Amazon Personalize is now extending this support to unstructured text in six new languages allowing customers with global catalogues to use this feature. Customers provide unstructured text as a part of their catalogue and, using state-of-the-art natural language processing (NLP) techniques, Amazon Personalize automatically extracts key information about the items and uses it when generating recommendations for your users.
Amazon RDS for MySQL supports new minor versions 5.7.38 and 8.0.29
Following the announcement of updates in MySQL database versions 5.7 and 8.0, AWS have updated Amazon Relational Database Service (Amazon RDS) for MySQL to support MySQL minor versions 5.7.38 and 8.0.29.
AWS recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about upgrading your database instances in the Amazon RDS User Guide, and create or update a fully managed Amazon RDS database using the latest available minor versions in the Amazon RDS Management Console.
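For example, upgrading an instance to one of the new minor versions can be sketched with boto3 as follows; the instance identifier is a placeholder, and ApplyImmediately controls whether the upgrade waits for the next maintenance window.

```python
# Sketch: upgrade an RDS for MySQL instance to minor version 8.0.29.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-db",  # placeholder identifier
    EngineVersion="8.0.29",
    ApplyImmediately=False,  # apply during the next maintenance window
)
```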
Amazon SageMaker Studio and SageMaker Notebook Instance now come with JupyterLab 3 notebooks
Amazon SageMaker comes with two options to spin up fully managed notebooks for exploring data and building machine learning (ML) models. The first option is fast-start, collaborative notebooks accessible within Amazon SageMaker Studio - a fully integrated development environment (IDE) for machine learning. You can quickly launch notebooks in Studio, easily dial up or down the underlying compute resources without interrupting your work, and even share your notebook as a link in a few simple clicks. In addition to creating notebooks, you can perform all the ML development steps to build, train, debug, track, deploy, and monitor your models in a single pane of glass in Studio. The second option is Amazon SageMaker Notebook Instance - a single, fully managed ML compute instance running notebooks in the cloud, offering customers more control over their notebook configurations. Today, we are excited to announce that both SageMaker Studio and SageMaker Notebook Instance now come with JupyterLab 3 notebooks to boost productivity of data scientists and developers building ML models on SageMaker.
With this update, you now have access to a modern interactive development environment (IDE) complete with developer tools for code authoring, refactoring and debugging, and support for the latest open-source JupyterLab extensions. With the integrated debugger, you can inspect variables and step through breakpoints while you interactively build your data science and machine learning (ML) code. In addition, using the Language Server extension, you can enable modern IDE functionality such as tab-completion, syntax highlighting, jump to reference, and variable renaming across notebooks and modules, making you much more productive.
Google Cloud Releases and Updates
Source: cloud.google.com
GCP released Dallas (us-south1): the following services became available
- Cloud Run
- Cloud SQL for MySQL / PostgreSQL / SQL Server
- Cloud Spanner
- Cloud VPN
- Dataflow
- GKE
- Pub/Sub
- VPC
- Cloud Storage
Anthos clusters on AWS / Azure
You can now launch clusters with the following Kubernetes versions:
- 1.21.11-gke.1800
- 1.22.8-gke.2100
App Engine standard environment Java
The Java 17 runtime (preview) now uses Ubuntu 22.
BigQuery
The following Storage Read API quotas and limits have changed:
- There is now a limit of 2,000 concurrent ReadRows calls per project in the US and EU multi-regions, and 400 concurrent ReadRows calls in other regions.
- The number of data plane requests per user per project per minute has increased from 5,000 to 25,000.
For more information, see Storage Read API quotas and limits.
Cloud Functions
The Java 17 runtime (preview) now uses Ubuntu 22.
Cloud KMS
Cloud KMS is available in the following region: us-south1.
For more information, see Cloud KMS locations.
Cloud Load Balancing
External TCP/UDP Network Load Balancing now supports load-balancing GRE traffic. To handle GRE protocol traffic, you set the load balancer's forwarding rule protocol to L3_DEFAULT and set the backend service protocol to UNSPECIFIED.
This feature is available in General Availability.
Generally available: Dallas, Texas us-south1-a,b,c has launched with E2 and N2 VMs available in all three zones.
Generally available: NVIDIA A100 GPUs are now available in the following additional regions and zones:
- Las Vegas, Nevada, North America: us-west4-b
For more information about using GPUs on Compute Engine, see GPU platforms.
Config Controller
Config Controller now uses the following versions of its included products:
- Anthos Config Management v1.11.2, release notes
- Config Connector v1.86.0, release notes
Dataproc
New sub-minor versions of Dataproc images:
- 1.5.68-debian10, 1.5.68-rocky8, 1.5.68-ubuntu18
- 2.0.42-debian10, 2.0.42-rocky8, 2.0.42-ubuntu18
Dataproc Serverless for Spark now uses runtime version 1.0.13.
Dataproc Serverless for Spark runtime versions 1.0.2, 1.0.3 and 1.0.4 are unavailable for new batch submissions.
Dataproc on GKE Spark 3.1 images upgraded to Spark version 3.1.3.
Dataproc Metastore
Updated Dataproc Metastore auxiliary versions to support the Spanner database type.
Google Cloud Armor
Google Cloud Armor Threat Intelligence (Threat Intel) is available in public preview. Threat Intel lets you secure your traffic by allowing or blocking traffic to your HTTP(S) load balancers based on threat intelligence data. For more information, see Configuring Threat Intelligence.
Google Cloud Deploy
The Google Cloud Terraform provider now supports creating Google Cloud Deploy delivery pipelines and targets.
Storage Transfer Service
Storage Transfer Service now offers a merged, unified console experience for cloud and file system transfers. All transfer jobs, irrespective of source, can be tracked through a single interface. This launch simplifies job creation, monitoring, and troubleshooting.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
Public preview: Expanded feature regional availability for standard network features
Regional coverage has expanded for Azure NetApp Files for standard network features.
General availability: Azure NetApp Files feature updates and expanded regional availability
Various Azure NetApp Files features have reached general availability and regional coverage has been expanded for Azure NetApp Files cross-region replication.
General availability: Azure Databricks available in new regions
Sweden Central and West Central US regions added to Azure Databricks.
General availability: Azure SDK for Go
The Azure SDK for Go has received a major update, including data plane support for new services such as Identity, Key Vault, Service Bus, and Tables.
Public preview: Azure Load Testing supports splitting input data across test engines
You can now configure your test to automatically split large input data evenly across the test engine instances.
Public preview: Azure Load Testing support for user specified JMeter properties
Azure Load Testing preview now supports user specified JMeter properties, making load tests more configurable.
Public preview: Azure Load Testing support for customer-managed keys
You can now add another layer of security by bringing your own customer-managed keys.
Public preview: Azure Load Testing supports quick start tests with web URL
You can now create load tests quickly without prior knowledge of testing tools by entering your URL in the Azure Load Testing resource from the Azure portal.
Public preview: Azure Load Testing support for user-assigned managed identities
Use Azure AD managed identities with Azure Load Testing Preview to easily access other AAD-protected resources, like Azure Key Vault. Both system-assigned and user-assigned managed identities are now supported.
Generally available: Azure Container Apps support for custom domains and TLS certificates
You can now use custom domains and secure them with TLS certificates.
Public preview: Linux portal editing for Azure Functions
You can now edit Azure Functions running on Linux from within the Azure portal.
Public preview: Mount Azure Files and ephemeral storage in Azure Container Apps
You can now mount a file share as well as share data between multiple containers in Azure Container Apps.
Public preview: Microsoft Graph API integration with Azure Event Grid
Use the Event Grid Microsoft Graph API integration to subscribe to events from Azure AD, Outlook, and more.
Static Web Apps CLI now available
Deploy static web apps to the cloud using a simple CLI command.
Generally available: API Management reusable policy fragments
Organize policy components into reusable modules that can be inserted into any policy document.
Public preview: API Management authorizations
Simplify the process of managing authorization using APIs against backend services using OAuth2.
Public preview: Azure Arc-enabled System Center Virtual Machine Manager
Azure Arc-enabled System Center Virtual Machine Manager enables on-premises System Center Virtual Machine Manager environments to be connected to Azure, unlocking Azure-based self-service for end users and developers.
Generally available: Azure Monitor Agent available on latest Linux distros
Ubuntu 22.04, AlmaLinux, and Rocky Linux now supported via the Azure Monitor Agent.
Public preview: Connection Monitor Support for virtual machine scale sets
Azure Network Watcher Connection Monitor announces support for virtual machine scale sets which enables faster performance monitoring and network troubleshooting through connectivity checks.
Public preview: Network Watcher packet capture support for virtual machine scale sets
Azure Network Watcher packet capture announces support for virtual machine scale sets. This is an out-of-the-box, on-demand capability, enabling faster diagnostics and troubleshooting of networking issues.
Azure SQL—Generally available updates for early June 2022
Generally available updates made in early June 2022 for Azure SQL.
Public preview: Microsoft Purview access policies for Azure SQL Database
Use Microsoft Purview to manage access to your SQL sources using access policies.
Public preview: Azure Cosmos DB serverless container storage limit increase to 1TB
Store up to 1TB of data in Azure Cosmos DB serverless containers.
Public preview: Linux emulator with Azure Cosmos DB API for MongoDB
Develop and test locally at no cost on Linux and macOS with Azure Cosmos DB Linux emulator with API for MongoDB support.
Public preview: 16MB limit per document in API for MongoDB
Develop your applications with more flexibility with a new, higher 16MB document limit in the API for MongoDB.
Public preview: Azure NetApp Files datastores for Azure VMware Solution
Optimize and scale storage for Azure VMware Solution environments.
General availability: Trusted launch support for virtual machines using Ephemeral OS disks
Trusted launch virtual machine (VM) support for VMs using Ephemeral OS disks improves the security of generation 2 VMs in Azure.
Public preview: ExpressRoute IPv6 Support for Global Reach
IPv6 support for Global Reach expands the possibilities for customers building dual-stack, hybrid networks with Azure.
General availability: Updates for resource configuration changes
Resource configuration changes enable you to query across your subscriptions and tenants to discover changes to your resources with Azure Resource Graph.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag and drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here: