Hello,
Here's the weekly cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday April 21st, 2023.
This week we released Architectural Monitoring Alerts (Private Beta). To get access, please get in touch.
All the latest Hava news can be found on our LinkedIn Newsletter.
Of course we'd love to keep in touch at the other usual places. Come and say hello on:
AWS Updates and Releases
Source: aws.amazon.com
AWS Snowball Edge Compute Optimized now supports Amazon S3 compatible storage
Amazon S3 compatible storage is now available on AWS Snowball Edge Compute Optimized devices. It enables customers to store data and run highly available applications on these devices by delivering secure object storage with high resiliency, increased scale, and an expanded S3 API feature set to rugged, mobile edge, and disconnected environments.
Amazon S3 compatible storage is ideal for running applications that require real-time processing or are bound by data residency requirements in Denied, Disrupted, Intermittent, and Limited (DDIL) network connectivity environments.
Previously, customers used PUT and GET object operations to manage data on Snow devices. Now, customers have an expanded set of Amazon S3 APIs and features such as CreateBucket, BucketLifecycle, and an increased cluster range of 3 to 16 devices, making it easier to develop, deploy, and manage applications requiring object storage on premises.
Customers can streamline operations and reduce operational overhead, cost and complexity by using the same tools, such as AWS CLI/SDK, across AWS Regions and AWS Snow Family devices.
To start using Amazon S3 compatible storage, log in to the AWS Snow Family console and select ‘Amazon S3 compatible storage’ when ordering your AWS Snowball Edge Compute Optimized device. Customers can use AWS OpsHub or the Amazon S3 SDK/CLI to manage object storage on the device(s) locally or remotely from a central location.
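As a rough sketch of what "using the same S3 tools against the device" looks like, the snippet below builds the client settings you might hand to an S3 SDK client pointed at the device's local endpoint. The device IP, credentials, and the "snow" region name are hypothetical placeholders; real values come from unlocking the device (e.g. via AWS OpsHub).

```python
# Sketch only: client settings for talking to a Snowball Edge device's local
# S3-compatible endpoint. All values below are hypothetical placeholders.

def device_s3_client_kwargs(device_ip: str, access_key: str, secret_key: str) -> dict:
    """Build keyword arguments you might pass to an S3 client (e.g. boto3.client('s3', **kwargs))."""
    return {
        "endpoint_url": f"https://{device_ip}",   # the device's local S3 API endpoint
        "aws_access_key_id": access_key,          # local credentials from the unlocked device
        "aws_secret_access_key": secret_key,
        "region_name": "snow",                    # placeholder region label for Snow devices
    }

kwargs = device_s3_client_kwargs("192.0.2.10", "EXAMPLE_KEY", "EXAMPLE_SECRET")
print(kwargs["endpoint_url"])
```

The point of the sketch is that only the endpoint and credentials change; the rest of your S3 tooling stays the same as in an AWS Region.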
Amazon ECS Service Connect now available in the Middle East (UAE) and Europe (Zurich) Regions
Amazon Elastic Container Service (Amazon ECS) launches a networking capability called Service Connect in the Middle East (UAE) and Europe (Zurich) Regions.
Amazon ECS is a fully managed container orchestration service that makes it easier for you to deploy, manage, and scale containerized applications. Customers can use Service Connect to easily configure service discovery, connectivity and traffic observability for services running in Amazon ECS.
This helps build applications faster by letting you focus on the application code and not on your networking infrastructure.
Announcing Amazon GuardDuty support for AWS Lambda
Amazon GuardDuty expands threat detection coverage to continuously monitor network activity logs, starting with VPC Flow Logs, generated from the execution of AWS Lambda functions. This helps detect threats to Lambda such as functions maliciously repurposed for unauthorized cryptocurrency mining, or compromised Lambda functions communicating with known threat actor servers.
GuardDuty Lambda Protection can be enabled with a few steps in the GuardDuty console, and using AWS Organizations, can be centrally enabled for all existing and new accounts in an organization.
AWS Customers across many industries and geographies use Amazon GuardDuty, including more than 90% of AWS’s 2,000 largest customers. GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
With GuardDuty Lambda Protection, you can now continuously monitor your Lambda execution environment without any configuration changes to the existing Lambda functions or new Lambda functions that are added. Current and new GuardDuty users can try GuardDuty Lambda Protection at no cost with a 30-day free trial.
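For teams enabling this programmatically rather than through the console, a hedged sketch of the request you would pass to GuardDuty's UpdateDetector API (for example via an AWS SDK) is shown below. The detector ID is a hypothetical placeholder; the feature name follows GuardDuty's feature-flag convention and should be verified against the current API reference.

```python
# Sketch: request body for GuardDuty's UpdateDetector API to enable Lambda
# Protection. The detector ID is a hypothetical placeholder value.

def lambda_protection_request(detector_id: str, enable: bool = True) -> dict:
    return {
        "DetectorId": detector_id,
        "Features": [
            {
                "Name": "LAMBDA_NETWORK_LOGS",                 # Lambda network activity monitoring
                "Status": "ENABLED" if enable else "DISABLED",
            }
        ],
    }

req = lambda_protection_request("12abc34d567e8fa901bc2d34e56789f0")
print(req["Features"][0]["Status"])  # ENABLED
```

With AWS Organizations, the same feature can instead be toggled once at the delegated-administrator level to cover all member accounts.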
Amazon Redshift announces general availability of MERGE SQL command
Amazon Redshift announces support for the MERGE command, which enables you to apply source data changes to Redshift warehouse tables with a single SQL statement. MERGE combines a series of DML (Data Manipulation Language) statements into one.
When using multiple statements to update or insert data, there is a risk of inconsistencies between the different operations. The MERGE operation reduces this risk by ensuring that all operations are performed together in a single transaction.
For Amazon Redshift customers who are migrating from other data warehouse systems or who regularly need to ingest fast changing data into their Redshift warehouse, MERGE SQL command is an easier way to conditionally insert, update, and delete from target tables based on existing and new source data.
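To make the shape of the command concrete, here is a minimal upsert sketch held as a SQL string you might run through your usual database driver. The table and column names are hypothetical.

```python
# A minimal MERGE sketch (hypothetical table/column names): conditionally
# update existing rows and insert new ones from a staging table, atomically.
merge_sql = """
MERGE INTO target_customers
USING staging_customers s
ON target_customers.customer_id = s.customer_id
WHEN MATCHED THEN
    UPDATE SET name = s.name, city = s.city
WHEN NOT MATCHED THEN
    INSERT (customer_id, name, city) VALUES (s.customer_id, s.name, s.city);
""".strip()

print(merge_sql.splitlines()[0])  # MERGE INTO target_customers
```

The WHEN MATCHED branch handles updates and the WHEN NOT MATCHED branch handles inserts, replacing the separate UPDATE/INSERT statements you would otherwise coordinate yourself.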
Introducing AWS WAF Ready Partner Offerings
We are excited to announce the new AWS WAF Ready specialization for AWS Partners with software products that integrate with AWS Web Application Firewall (WAF). Ensuring websites and applications are protected from external threats that can lead to a loss of revenue, customer trust, and brand reputation has become a top concern for businesses of all shapes and sizes.
The AWS WAF Ready specialization offers customers a simple solution for deploying and maintaining their application layer security solution with AWS WAF Ready Partner offerings.
AWS WAF Ready partner offerings provide robust WAF rule sets and mitigation tools that customers can choose depending on their specific application use case. In addition, AWS WAF Ready partners stay ahead of the attack curve and can help mitigate zero-day vulnerabilities so that customers do not have to worry about continually updating their rule sets based on novel or new attack vectors.
Beyond detection and mitigation, AWS WAF Ready partners provide customers with pre-built integrations to help ingest and analyze WAF event data.
With the new AWS WAF Ready specialization, customers can quickly and confidently identify validated AWS Partner software products vetted by AWS Partner Solutions Architects for their sound architecture, adherence to AWS best practices, and demonstrated customer success. We invite you to learn more about AWS WAF Ready Partner offerings.
AWS SAM CLI announces local testing support for API Gateway Lambda authorizers
The AWS Serverless Application Model (SAM) Command Line Interface (CLI) announces the launch of local testing support for Amazon API Gateway Lambda authorizers, making it easier for developers to locally test applications that use API Gateway. The AWS SAM CLI is a developer tool that makes it easier to build, test, package, and deploy serverless applications.
Customers can now use the SAM CLI to locally emulate API Gateway Lambda authorizers on their machine using the sam local start-api command. Previously, customers had to use AWS Console or AWS CLI to test their API Gateway Lambda authorizers.
With this launch, SAM CLI users can speed up their iteration cycles by testing their authorizer code locally. The sam local start-api command supports testing of both Lambda authorizers and Lambda V2 authorizers.
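To exercise this locally you need an authorizer function for sam local start-api to invoke. Below is a hedged, minimal sketch of a TOKEN-style Lambda authorizer; the principal, token value, and routing details are hypothetical and would be declared in your SAM template.

```python
# Minimal TOKEN-style Lambda authorizer sketch (hypothetical token value).
# When running `sam local start-api`, this handler is invoked before requests
# are routed to your API's backing functions.

def handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "allow" else "Deny"
    return {
        "principalId": "local-test-user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event.get("methodArn", "*"),
                }
            ],
        },
    }

resp = handler({"authorizationToken": "allow", "methodArn": "arn:aws:execute-api:*"}, None)
print(resp["policyDocument"]["Statement"][0]["Effect"])  # Allow
```

Once the API is running locally, you can hit it with curl using an Authorization header and confirm both the Allow and Deny paths without deploying anything.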
Announcing the general availability of Amazon CodeCatalyst
This week, AWS announced the general availability of Amazon CodeCatalyst, a unified software development service that makes it easier for you to quickly build and deliver applications on AWS. CodeCatalyst provides everything you need to start planning, coding, building, testing, and deploying applications on AWS with a streamlined, integrated experience.
With CodeCatalyst, you can spend more time developing application features and less time setting up project tools, creating and managing continuous integration and continuous delivery (CI/CD) pipelines, provisioning and configuring development and deployment environments, and onboarding team members to their projects.
CodeCatalyst has additional new features, such as the ability to use AWS Graviton for CI/CD workflows and deployment environments, which can result in significant cost savings. If you have already established code and tooling in GitHub, you can create a project from a linked GitHub repository, which developers can then work in using Dev Environments.
Work can be tracked by linking issues with pull requests, making it easier to track and understand changes to code. You can choose from two new blueprints when creating projects: Intelligent Document Processing and Static Website frameworks.
CodeCatalyst is available in the US West (Oregon) AWS Region, but can deploy workloads to any public Region worldwide. Multiple AWS accounts can be connected to CodeCatalyst spaces in order to better manage and control access to deployments in different environments such as dev, test, staging, and production.
Amazon Redshift announces general availability of Dynamic Data Masking
Amazon Redshift already supports role-based access control, row-level security, and column-level security to enable organizations to enforce fine-grained security on Redshift data. Amazon Redshift now extends these security features by supporting Dynamic Data Masking (DDM) that allows you to simplify the process of protecting sensitive data in your Amazon Redshift data warehouse.
With Dynamic Data Masking, you control access to your data through SQL-based masking policies that determine how Redshift returns sensitive data to the user at query time.
With this capability, as a security administrator, you can create masking policies to define consistent, format-preserving, and irreversible masked data values. You can apply masking to a specific column or a list of columns in a table.
Also, you have the flexibility of choosing how to show the masked data. For example, you can completely hide all the information about the data, you can replace partial real values with wildcard characters, or you can define your own way to mask the data using SQL expressions, or Python or Lambda user-defined functions.
Additionally, you can apply conditional masking based on other columns, which selectively protects the column data in a table based on the values in other column(s). When you attach a policy to a table, the masking expression can be applied to one or more of its columns.
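As a rough sketch of what the DDL looks like, here is a partial-masking policy held as a SQL string; the policy, table, column, and role names are hypothetical, and the exact syntax should be checked against the Redshift documentation.

```python
# Sketch of Dynamic Data Masking DDL (hypothetical names): a policy that
# exposes only the last four digits of an SSN-like column, attached for a role.
ddm_sql = """
CREATE MASKING POLICY mask_ssn_partial
WITH (ssn VARCHAR(11))
USING ('***-**-' || SUBSTRING(ssn, 8, 4));

ATTACH MASKING POLICY mask_ssn_partial
ON customers(ssn)
TO ROLE analyst
PRIORITY 10;
""".strip()

print("ATTACH MASKING POLICY" in ddm_sql)
```

At query time, users holding the analyst role would see the masked expression instead of the raw column value; other roles can be given different policies at different priorities.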
Amazon EC2 supports Ubuntu Pro operating system in a subscription-included model
Amazon Web Services announces the general availability of Ubuntu Pro on Amazon EC2 in a subscription-included model. You can now easily deploy Ubuntu Pro on-demand instances and purchase Ubuntu Pro Compute Savings Plans from the AWS EC2 console, and get five additional years of Ubuntu security updates from Canonical.
You will be charged on a per-second basis for Ubuntu Pro EC2 AMI instances. For any new Ubuntu Pro EC2 AMI deployments, you will now see Ubuntu Pro charges in the Elastic Compute Cloud section of your AWS bill.
In addition to five more years (ten years total) of security maintenance, Ubuntu Pro provides security coverage for approximately 23,000 packages in the Ubuntu Universe repository, live kernel patching, and more. For example, Ubuntu 18.04 LTS will reach end of standard support on May 31, 2023; however, with Ubuntu Pro 18.04 LTS you can continue to receive security coverage until April 2028.
Redesigned opportunity management experience in AWS Partner Central
This week, AWS launched a redesigned opportunity management experience within the AWS Partner Central portal. This launch provides AWS Partners using APN Customer Engagements (ACE) with a faster, simpler, and more intuitive experience to securely collaborate and co-sell with AWS. The improved experience will enable partners to drive successful customer engagements for their continued growth and profitability.
The enhanced opportunity management experience will improve the AWS Partner digital experience through a modern look and feel of the ACE portal, consistent with other AWS tools. Partners will now experience faster load times on commonly used pages, including the pipeline manager, opportunity creation, and opportunity view/edit pages.
The new opportunity management experience introduces an opportunity overview dashboard to provide partners with key insights into their opportunity pipeline and enables them to apply specific criteria or conditions through custom filters so they can easily retrieve and focus on the data that is most relevant to their needs.
AWS has simplified the opportunity creation process by breaking it down into smaller, more manageable steps and has enhanced the opportunity view and management flow so partners can interact with it seamlessly. Additionally, AWS has structured the deal information into easy-to-understand sections, introduced intuitive notifications, and made it frictionless for partners to launch their engagements.
The new experience also embeds in-app guidance to make it easier for partners to navigate the site and accomplish their goals.
AWS Amplify Flutter announces general availability for web and desktop support
AWS Amplify Flutter is announcing the launch of version 1.0.0, expanding support to web and desktop as target platforms. Amplify is a set of tools and services that help frontend web and mobile developers build secure, scalable, fullstack applications.
With this release, Flutter developers can target six platforms: iOS, Android, web, macOS, Linux, and Windows. Developers can build Authentication, API (GraphQL + REST), Storage, and Analytics experiences with Amplify, targeting the platforms that matter most to their users.
Developers using Amplify Flutter can now enjoy a consistent experience across all the platforms they target, thanks to a complete rewrite of the Amplify Flutter libraries in Dart. Flutter developers can use the same codebase and libraries for every platform they target, which significantly reduces development time and allows them to deliver value faster to their customers.
Amazon Redshift announces centralized access control for data sharing with AWS Lake Formation
Amazon Redshift data sharing enables you to share live data across Amazon Redshift data warehouses. Amazon Redshift now supports simplified governance of Amazon Redshift data sharing by using AWS Lake Formation to centrally manage permissions on data being shared across your organization.
With the new Amazon Redshift data sharing managed by AWS Lake Formation, you can manage permission grants, view access controls, and audit permissions on the tables and views in the Redshift datashares using Lake Formation APIs and the AWS Console.
Lake Formation managed data sharing improves the security of your data by enabling security administrators to use Lake Formation to manage granular entitlements such as table-level, column-level, or row-level access to tables and views being shared in Redshift data sharing.
Data is shared live from Redshift Managed Storage (RMS) and is not copied or moved to Amazon S3; data consumers can discover the data directly in AWS Lake Formation and start querying within minutes. You now have better visibility and control of data shared within and across accounts in your organization. AWS Lake Formation managed data sharing also enables you to define policies once and enforce them consistently for multiple data consumers.
Amazon Chime SDK now supports Hindi and Thai languages for live transcription
The Amazon Chime SDK now supports Hindi and Thai languages for live transcription of WebRTC media sessions. Amazon Chime SDK lets developers add real-time audio, video, and screen share to their web and mobile applications. Live transcription integrates with Amazon Transcribe to generate live audio transcription for use as subtitles or transcripts.
The expanded language coverage of live transcription enables customers to reach a broader global audience with a consistent experience in their communication-enabled applications. The Hindi and Thai languages can be selected explicitly or automatically through automatic language detection.
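A hedged sketch of both selection modes is below, shown as the transcription configuration payload you might pass to the Chime SDK meetings StartMeetingTranscription API. Treat the exact field names as assumptions to verify against the API reference; the language codes follow Amazon Transcribe conventions.

```python
# Sketch: transcription configuration payloads for a Chime SDK meeting.
# Field names are assumptions modeled on the StartMeetingTranscription API.

# Explicit selection: transcribe the session in Hindi.
explicit_hindi = {
    "EngineTranscribeSettings": {"LanguageCode": "hi-IN"}
}

# Automatic detection: let the service pick among candidate languages.
auto_detect = {
    "EngineTranscribeSettings": {
        "IdentifyLanguage": True,
        "LanguageOptions": "hi-IN,th-TH",  # candidate languages for detection
    }
}

print(explicit_hindi["EngineTranscribeSettings"]["LanguageCode"])  # hi-IN
```

Either payload drives the same live-transcription pipeline; only the language-selection strategy differs.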
Hindi and Thai languages are available for live transcription with any WebRTC media session in any Amazon Chime SDK meeting media Region, including the AWS GovCloud (US) Regions.
Amazon Corretto April, 2023 Quarterly Updates
On April 18, 2023, Amazon announced quarterly security and critical updates for Amazon Corretto distributions of OpenJDK, including the Long-Term Supported (LTS) versions. Corretto 20.0.1, 17.0.7, 11.0.19, and 8u372 are now available for download.
Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. With this release, we are declaring aarch64 Alpine Linux binaries of Corretto 17, 11, and 8 GA.
Amazon VPC Prefix Lists now available in three additional regions
Starting this week, Amazon Virtual Private Cloud (VPC) customers can create their own Prefix Lists in three additional AWS Regions: Asia Pacific (Hyderabad), Asia Pacific (Melbourne) and Europe (Spain).
A prefix list allows you to group multiple CIDR blocks into a single object, and use it as a reference to simplify network configuration. You can share your prefix list with other AWS accounts using AWS Resource Access Manager (RAM) and use it to configure VPC route tables and security groups.
A prefix list makes it easier for you to roll out changes and maintain consistency in security groups and route tables across multiple VPCs and accounts. For example, you can create a prefix list to represent all your branch office CIDR blocks and use it to configure your security groups and route tables. When you add a new branch office, you simply add its CIDR block to the prefix list, and this automatically establishes connectivity from all corresponding VPCs and accounts.
Amazon GuardDuty now available in AWS Asia Pacific (Melbourne) Region
Amazon GuardDuty is now available in the Asia Pacific (Melbourne) Region. You can now continuously monitor and detect security threats in this additional region to help protect your AWS accounts, workloads, and data.
Customers across many industries and geographies use Amazon GuardDuty, including more than 90% of AWS’s 2,000 largest customers. GuardDuty continuously monitors for malicious or unauthorized behavior to help protect your AWS resources, including your AWS accounts, EC2 workloads, access keys, EKS clusters, and data stored in Amazon S3 and Amazon Aurora. GuardDuty can identify unusual or unauthorized activity like crypto-currency mining, access to data stored in S3 from unusual locations, or unauthorized access to Amazon Elastic Kubernetes Service (EKS) clusters.
GuardDuty Malware Protection adds file scanning for workloads utilizing Amazon Elastic Block Store (EBS) volumes to detect the presence of malware. GuardDuty continually evolves its techniques to identify indicators of compromise, such as updating machine learning (ML) models, adding new anomaly detections, and growing integrated threat intelligence to identify and prioritize potential threats.
AWS Control Tower is now available in 7 additional Regions
This week, AWS Control Tower became available in 7 additional AWS Regions: US West (N. California), Asia Pacific (Hong Kong, Jakarta, and Osaka), Europe (Milan), Middle East (Bahrain), and Africa (Cape Town).
With this launch, AWS Control Tower is now available in 22 AWS Regions and 2 AWS GovCloud (US) Regions. AWS Control Tower offers the easiest way to set up and govern a secure, multi-account AWS environment. It simplifies AWS experiences by orchestrating multiple AWS services on your behalf while maintaining the security and compliance needs of your organization. You can set up a multi-account AWS environment within 30 minutes or less.
If you are new to AWS Control Tower, you can launch it today in any of the supported regions, and you can use AWS Control Tower to build and govern your multi-account environment in all supported Regions.
If you are already using AWS Control Tower and you want to extend its governance features to the newly supported regions in your accounts, you can go to the settings page in your AWS Control Tower dashboard, select your regions, and then update your landing zone. You must then update all accounts that are governed by AWS Control Tower. Then your entire landing zone, all accounts, and OUs will be under governance in the new region(s).
Amazon WorkSpaces Web Access with WSP is now available in AWS GovCloud (US-West) Region
Amazon WorkSpaces Web Access is now available in the AWS GovCloud (US-West) Region. You can deploy Amazon WorkSpaces using the WorkSpaces Streaming Protocol (WSP) without having to install a native client application, and your users can now access Amazon WorkSpaces from supported web browsers on devices running Windows, macOS, and Linux.
The new Amazon WorkSpaces Web Access experience is powered by the WorkSpaces Streaming Protocol (WSP), a cloud-native streaming protocol that enables a consistent user experience when your users access WorkSpaces across global distances and unreliable networks.
Web Access with WSP helps users to remain productive when connecting from computers where a web browser experience may be optimal, for example on personally-owned or locked-down devices where installing and maintaining a client application can be challenging.
You can get started with WSP by selecting WSP from the protocol drop-down in the AWS Management Console and choosing a Windows 10 Value, Standard, Performance, Power, or PowerPro bundle when creating a WorkSpace. To learn more, visit the Amazon WorkSpaces Administration Guide to enable and configure Amazon WorkSpaces Web Access.
Amazon SageMaker Studio Lab combats bots with CAPTCHA
Amazon SageMaker Studio Lab deployed CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) to deter automated bots and scripts from absorbing compute capacity that is meant for customers experimenting with machine learning.
Customers are now enjoying more CPU and GPU capacity with Amazon SageMaker Studio Lab, a free service that enables customers to experiment with machine learning code in a Jupyter IDE hosted on the AWS cloud.
SageMaker Studio Lab leverages the AWS WAF CAPTCHA JS API, which challenges customers with simple puzzle images that are easy for humans but hard for bots to solve. This has enabled Studio Lab to reclaim compute that bots were using and redistribute it to the ML learners who need it.
Amazon Inspector now supports deep inspection of EC2 instances
Amazon Inspector now supports deep inspection of EC2 instances when the continual EC2 scanning feature is activated. With this expanded capability, Inspector now identifies software vulnerabilities in programming language packages, including Python, Java, and Node.js packages, in addition to operating system packages.
Inspector discovers these language packages installed in default directory paths, and also allows customers to provide additional custom directory paths for Inspector discovery. This feature is activated by default for all new customers, and existing customers can activate it across their organization with a single click in the console. Deep inspection of EC2 instances is offered at no additional cost to Inspector customers.
Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure across your entire AWS Organization.
Once activated, Amazon Inspector automatically discovers all of your Amazon Elastic Compute Cloud (EC2) instances, container images in Amazon Elastic Container Registry (ECR), and AWS Lambda functions, at scale, and continuously monitors them for known vulnerabilities, giving you a consolidated view of vulnerabilities across your compute environments.
Amazon Inspector also provides a highly-contextualized vulnerability risk score by correlating vulnerability information with environmental factors such as external network accessibility to help you prioritize the highest risks to address.
AWS Backup for Amazon S3 is now available in AWS GovCloud (US) Regions
This week, we are announcing the availability of AWS Backup for Amazon S3 in AWS GovCloud (US-East, US-West) Regions. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon S3 along with other AWS services (spanning compute, storage, and databases) and third-party applications.
Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.
With this launch Amazon S3 backups are now available in the following Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain, UAE), Africa (Cape Town) and AWS GovCloud (US-East, US-West). For more information on regional availability and pricing, see AWS Backup pricing page.
AWS Amplify supports Push Notifications for mobile and cross platform apps
AWS Amplify is announcing support for Push Notifications for Swift, Android, Flutter, and React Native applications. This means that developers can now use Amplify to set up push notifications when targeting iOS and Android platforms, providing developers and businesses with a way to engage with users and send them timely and relevant information.
Developers can now set up campaigns to segment and target specific users. Personalizing push notifications with AWS Amplify allows mobile and cross-platform developers to target specific users with important updates, promotions, and new features, resulting in higher engagement and increased user satisfaction.
AWS Lake Formation and Glue Data Catalog now manage Apache Hive Metastore resources
AWS Lake Formation and the Glue Data Catalog now extend data cataloging, data sharing and fine-grained access control support for customers using a self-managed Apache Hive Metastore (HMS) as their data catalog.
Previously, customers had to replicate their metadata into the AWS Glue Data Catalog in order to use Lake Formation permissions and data sharing capabilities. Now, customers can integrate their HMS metadata within AWS, allowing them to discover data alongside native tables in the Glue Data Catalog, manage permissions and sharing from Lake Formation, and query data using AWS analytics services.
To get started, customers using this feature will need to connect their HMS databases and tables as federation objects into their AWS Glue Data Catalog. Customers can then grant Lake Formation column, tag, and data filter permissions on tables as if they were native AWS Glue Data Catalog tables.
These permissions are then applied whenever those tables are queried by Lake Formation supported AWS services, simplifying the management of unified data access controls. Finally, customers can audit access and permissions on their HMS resources using AWS CloudTrail logs generated on all data and metadata access events.
Amazon Comprehend improves accuracy of document classification using layout data
Amazon Comprehend announced that its Document Classification APIs will now use the layout of a document, in addition to its text, to provide higher accuracy.
Amazon Comprehend is a Natural Language Processing (NLP) service that provides pre-trained and custom APIs to derive insights from textual data. At re:Invent 2022, Comprehend simplified document classification by adding support for inference on common document types.
At that time, customers did not have the ability to train custom document classification models for PDF/Word/Image files with layout data for higher accuracy. Now, using the same document classification APIs, customers will be able to train custom classification models with PDF documents, Microsoft Word files, and images, to support using layout information and get higher accuracy for classification.
This higher accuracy is beneficial for various scenarios such as insurance claims and mortgage document classification. Customers can use the new capability for asynchronous processing or real-time use cases.
Announcing Dev Environment dashboard for Amazon CodeCatalyst (Preview)
Amazon CodeCatalyst (Preview) has added an administrative dashboard for Dev Environments. This dashboard enables users with the Space administrator role to centrally view and manage Dev Environments across projects.
CodeCatalyst is a unified software development service that makes it faster to build and deliver software on AWS. Dev Environments, a feature of CodeCatalyst, are preconfigured, scalable cloud development environments accessible from popular IDEs.
A space represents your company, or department, which contains projects, team members, and the associated cloud resources you create in CodeCatalyst. Using the new dashboard, you can view, stop, or delete Dev Environments belonging to your space. You can also modify configuration settings such as inactivity timeout and compute configurations of Dev Environments belonging to your team members.
The owners of the Dev Environments receive email notifications whenever a space administrator performs update or delete operations on these configurations. The new dashboard also supports filtering of Dev Environments using properties such as projects, users, alias, IDE, resource, and status. To use the new UI, navigate to your space, choose Settings, and then choose Dev Environments.
With preconfigured Dev Environments, you can onboard developers faster by avoiding a lengthy setup process, and work with supported IDEs such as AWS Cloud9, Visual Studio Code, and JetBrains IDEs including IntelliJ IDEA Ultimate, PyCharm Professional, and GoLand.
Amazon EMR now makes troubleshooting easier with enhanced error details
Amazon EMR on EC2 now provides contextual error details that make it easier to troubleshoot cluster provisioning failures. EMR lets you provision your EMR on EC2 cluster without worrying about managing compute infrastructure or open-source application setup. However, there can be circumstances when cluster provisioning fails, such as an insufficient EC2 instance capacity error, a bootstrap action failure, or a VPC subnet misconfiguration error.
With today’s launch, you will now find additional error details in the new EMR console, AWS Command Line Interface (AWS CLI), and the AWS SDK. These additional error details are automatically enabled for all EMR versions and no further action is needed.
Previously when users encountered a provisioning error, such as a bootstrap failure, they would receive an error message that the cluster failed but with no further context. Identifying whether the root cause was an invalid or conflicting bootstrap action or a misconfigured bootstrap source file location took multiple steps.
With today’s launch, you will get specific error details for cluster provisioning failures along with detailed root cause, and recommendations to resolve the failure. You can find these details in the Cluster list view in the new console and via APIs.
AWS Lambda adds support for Python 3.10
AWS Lambda now supports Python 3.10 as both a managed runtime and a container base image. Developers creating serverless applications in Lambda with Python 3.10 can take advantage of numerous Python language enhancements to make code more readable and maintainable.
These include pattern matching for data structures, parenthesized context managers to simplify managing resources such as file handles or database connections, and better error handling. For more information on Lambda’s support for Python 3.10, see our blog post at Python 3.10 runtime now available in AWS Lambda.
To deploy Lambda functions using Python 3.10, upload the code through the Lambda console and select the Python 3.10 runtime. You can also use the AWS CLI, AWS Serverless Application Model (AWS SAM) and AWS CloudFormation to deploy and manage serverless applications written in Python 3.10.
Additionally, you can use the AWS-provided Python 3.10 base image to build and deploy Python 3.10 functions using a container image. To migrate existing Lambda functions running earlier Python versions, review your code for compatibility with Python 3.10 and then update the function runtime to Python 3.10.
AWS will automatically apply updates to the Python 3.10 managed runtime and to the AWS-provided Python 3.10 base image, as they become available.
AWS IoT Core for LoRaWAN supports public LoRaWAN network and roaming with Everynet (Preview)
AWS IoT Core for LoRaWAN announces public network support (preview) for LoRaWAN-based Internet of Things (IoT) systems. With this update, customers can now easily connect their IoT devices to the cloud using publicly available LoRaWAN networks provided by Everynet, a global LoRaWAN network operator offering carrier grade networks.
Using AWS IoT Core for LoRaWAN, customers can simply register their devices to the cloud and opt for public network support in the AWS IoT console. Within minutes, they will be able to receive data from registered LoRaWAN devices in their AWS account.
AWS IoT Core for LoRaWAN is a fully managed LoRaWAN Network Server (LNS) that enables customers to connect their LoRaWAN-enabled wireless devices, typically used for low-power, long-range wide area network connectivity, to AWS. IoT system operators no longer need to deploy and maintain their own private LoRaWAN network, resulting in development, management, and operational cost savings. It also streamlines billing, as system operators do not need to interface with individual network providers to manage network subscription costs.
AWS Backup announces support for SAP HANA databases on Amazon EC2
AWS Backup now offers a simple, cost-effective, and application-consistent backup and restore solution for SAP HANA databases running on Amazon EC2. With this launch, you can centrally automate backup and restore of your SAP HANA application data, in addition to the currently supported AWS services.
Using AWS Backup’s seamless integration with AWS Organizations, you can create and manage immutable backups of SAP HANA databases across all your accounts, help protect your data from inadvertent or malicious actions, and restore the data.
To get started with AWS Backup for SAP HANA on Amazon EC2, you can use the AWS Management console, CLI, or SDK to create backup policies to start protecting your SAP HANA databases. AWS Backup leverages AWS Systems Manager (SSM) for SAP to register these SAP HANA systems and AWS Backint Agent to take backups.
Once you define your backup policies and assign SAP HANA resources to the policies, AWS Backup automates the creation of SAP HANA backups that are application-consistent and stores those backups in an encrypted backup vault that you designate.
You can now take continuous backups with PITR (Point-In-Time Recovery) support, automatically set lifecycle policies to cold storage, and restore your data with a few clicks.
Amazon EMR Serverless adds job-level billed resources for efficient cost management
Amazon EMR Serverless is a serverless option that makes it simple for data analysts and engineers to run open-source big data analytics frameworks like Apache Spark and Apache Hive without configuring, managing, and scaling clusters or servers.
Starting this week, you can view the aggregated Billed resource utilization for each job within an EMR Serverless application, simplifying the cost calculation per job run.
When running Spark or Hive workloads, it is useful to see the resources used by individual job runs to help understand and manage your total costs. With this feature, you can get a detailed view of the vCPU-hours, memoryGB-hours, and storageGB-hours consumed by an EMR Serverless job on completion.
Using this data and the pricing in each Region, you can accurately calculate the cost per job run. You can view these values both in the EMR Studio UI and the GetJobRun API.
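As a sketch of the calculation this enables, the billed values returned for a job run can be multiplied by your Region's per-unit rates. The rates below are placeholders, not actual EMR Serverless prices; substitute the published pricing for your Region:

```python
# Estimate the cost of an EMR Serverless job run from its billed
# resource utilization (as surfaced via GetJobRun). All rates are
# placeholder values for illustration only.
RATES = {
    "vcpu_hours": 0.052624,        # placeholder $/vCPU-hour
    "memory_gb_hours": 0.0057785,  # placeholder $/GB-hour of memory
    "storage_gb_hours": 0.000111,  # placeholder $/GB-hour of storage
}

def job_cost(vcpu_hours, memory_gb_hours, storage_gb_hours, rates=RATES):
    """Sum each billed resource dimension times its regional rate."""
    return (vcpu_hours * rates["vcpu_hours"]
            + memory_gb_hours * rates["memory_gb_hours"]
            + storage_gb_hours * rates["storage_gb_hours"])
```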
Amazon RDS events now include tags for filtering and routing
Amazon Relational Database Service (Amazon RDS) events now include tags in the message body, which provide metadata about the resource affected by the service event. The message receiver can use these tags for payload-based parsing with rule matching, enabling workflows such as routing, filtering, and downstream automation.
Now that Amazon RDS events include tags, you can simplify your application architecture, remove the need for describe API calls, and enable downstream automation via services like AWS Lambda and AWS Systems Manager.
By combining tagged Amazon RDS events with message body filtering and routing logic in subscribing services such as Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), or Amazon EventBridge, you can also deliver events to your preferred communication channel, such as email, chat, or paging systems.
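A minimal sketch of this kind of payload-based routing on event tags is shown below. The event and field names (`Tags`, the route targets) are illustrative only, not the exact RDS event schema:

```python
# Route an event to a target based on its resource tags -- a toy
# version of the filtering/routing that tagged RDS events enable.
# The "Tags" key and route names are hypothetical.
def route(event, routes):
    """Return the first target whose required tags all match the event."""
    tags = event.get("Tags", {})
    for target, required in routes:
        if all(tags.get(key) == value for key, value in required.items()):
            return target
    return "default-queue"
```

For example, with `routes = [("pager", {"env": "prod"}), ("email", {"env": "dev"})]`, a production event goes to the pager target and a dev event to email, without any describe API calls.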
Prepare data easily with Amazon Personalize and Amazon SageMaker Data Wrangler integration
Amazon Personalize is integrating with Amazon SageMaker Data Wrangler to make it easier for customers to import and prepare their data. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required.
The quality of data used for model training affects the quality of the recommendations, which makes data aggregation and preparation a critical step to get high-quality recommendations using Amazon Personalize. With this launch, Amazon Personalize gives you the ability to prepare your data through Amazon SageMaker Data Wrangler before using it in Amazon Personalize.
Customers can use Amazon SageMaker Data Wrangler to import data from 40+ supported data sources and perform end-to-end data preparation (including data selection, cleansing, exploration, visualization, and processing at scale) in a single user interface using little to no code.
This allows customers to rapidly prepare their users, items or interactions dataset using Amazon SageMaker Data Wrangler by leveraging over 300 built-in data transformations, retrieving data insights, and quickly iterating by fixing data issues.
Getting started with the Amazon SageMaker Data Wrangler integration is easy. Simply visit the Amazon Personalize console, open a Dataset from within your Dataset groups, select “Import and Prepare Your Data,” and then choose “Prepare Data with Data Wrangler”.
Please note that customers using Amazon SageMaker Data Wrangler will incur additional charges based on their usage; see the Amazon SageMaker pricing page for details.
Amazon DynamoDB now supports up to 50 concurrent table restores
Amazon DynamoDB now supports up to 50 concurrent table restores per AWS account. The default service quota for table restores increased from 4 to 50, and is applicable to restores performed using point-in-time recovery and on-demand backups managed by DynamoDB and AWS Backup.
With the increased default service quota, customers who have a large number of tables can now run up to 50 concurrent table restore operations, reducing the delays associated with restoring four tables at a time.
The new default quotas are now effective in all AWS Regions, except AWS GovCloud (US) Regions. Start taking advantage of these quotas by using the DynamoDB console, the AWS Command Line Interface (AWS CLI), or AWS APIs.
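For accounts with more tables than the quota allows at once, restores can be grouped into waves that respect the concurrent-restore limit. A minimal sketch (the actual restore submission is omitted; only the batching logic is shown):

```python
# Group table names into waves of at most `quota` concurrent restores
# (the DynamoDB default is now 50). Each wave would be submitted and
# awaited before starting the next.
def restore_waves(tables, quota=50):
    return [tables[i:i + quota] for i in range(0, len(tables), quota)]
```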
AWS announces the Manufacturing and Industrial Competency
We are excited to announce the launch of our AWS Manufacturing and Industrial Competency. Previously known as the AWS Industrial Software Competency, the updated AWS Manufacturing and Industrial Competency has expanded to include new categories to further differentiate partners and help customers find the right solution for their unique business needs.
As the manufacturing industry experiences an increase in cloud adoption, the Manufacturing and Industrial Competency helps align validated and differentiated AWS Partner solutions to customers seeking to modernize their business through Engineering and Design, Smart Manufacturing (Robotics, Worker Safety, and Productivity), Smart Product and Services, Enterprise Solutions, Operational Technology Security, Supply Chain Management, Sustainability, and Operational Technology for an end-to-end value chain.
Aligning to these categories helps customers accelerate their time to results, optimize operations, seek new revenue streams, and generate sustainable business operations while reducing their carbon footprint.
Amazon EFS now supports up to 10 GiB/s of throughput
Amazon Elastic File System (Amazon EFS) has increased the maximum throughput per file system by 3x to 10 GiB/s of read throughput and 3 GiB/s of write throughput.
Amazon EFS provides serverless, fully elastic file storage that makes it simple to set up and run file workloads in the cloud. With these new throughput limits, you can easily run more performance-intensive workloads on AWS using Amazon EFS, such as machine learning, genomics, and data analytics applications.
Amazon Connect Customer Profiles now shows cases information in the agent workspace
Using Amazon Connect Customer Profiles inside the agent workspace, agents can now see cases from 3P case management solutions and Connect Cases inside a particular customer profile. Having both customer profile information together with case details in the same window makes it easier for agents to understand the customer context and reduces wasted time switching between applications.
Using the cases table inside the Customer Profile, agents can see case information such as status, date last updated, title, and source system. If agents need more detail, they can click on the case to access additional information from the source system. For example, agents handling transfers, can now see the caller’s open case regarding license renewal, quickly assist them without needing to gather more information, and help provide faster issue resolution.
As another example, agents can now see repeat callers’ open cases regarding late deliveries and quickly give them an update on the delivery status.
With Amazon Connect Customer Profiles, companies can help deliver faster and more personalized customer service by providing access to relevant customer information for agents and automated experiences.
Companies can bring customer data from multiple SaaS applications and databases into a single customer profile, and pay only for what they use based on the number of customer profiles.
Amazon Connect Customer Profiles is available in US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), and Europe (London).
Amazon SageMaker Collections is a new capability to organize models in the Model Registry
Amazon SageMaker announced Collections, a new capability to organize your machine learning models in the Amazon SageMaker Model Registry. You can use Collections to group registered models that are related to each other and organize them in hierarchies to improve model discoverability at scale.
Amazon SageMaker Model Registry is a purpose-built tool for machine learning operations (MLOps) to help you centrally manage your ML models. You can track models and metadata, compare model versions, and review and approve them for deployment through the Amazon SageMaker Model Registry.
When you register a model, Amazon SageMaker Model Registry creates a Model Package and stores all successive versions of the model under one Model Package Group.
With Collections you can organize registered models that are associated with one another. For example, you could categorize your models based on the domain of the problem they solve under Collections titled ‘NLP-models’, ‘CV-models’, and ‘Speech-recognition-models’.
To organize your registered models in a tree structure you can nest Collections within each other. Any operations you perform on a Collection (create/read/update/delete) will not alter your registered models. You can use the Amazon SageMaker Studio UI or the Python SDK to manage Collections.
AWS Glue launches new capability to monitor usage of Glue resources
AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning (ML), and application development. Similar to other AWS services, AWS Glue has service limits to protect customers from an unexpected increase in their bill caused by excessive provisioning.
Customers can view their current resource limits and request an increase (where appropriate) by logging in to the AWS Service Quotas console. Today, we are pleased to announce a new capability in Glue that allows customers to monitor the utilization of certain Glue resources in CloudWatch and configure the appropriate CloudWatch alarms.
With the new feature, customers can monitor account-level limits such as the number of Glue workflows, triggers, jobs, concurrent job runs, blueprints, and interactive sessions. For more details, please refer to the AWS Glue documentation. The feature is available in all regions where Glue is available except the AWS GovCloud (US-East) and China Regions.
AWS Elastic Disaster Recovery now simplifies launch settings management
AWS Elastic Disaster Recovery (AWS DRS) now supports additional capabilities to help simplify managing the launch settings for your source servers. The launch settings you define using Elastic Disaster Recovery determine how to launch your source servers on AWS as drill and recovery instances.
Elastic Disaster Recovery minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery.
With this launch, you can now use Elastic Disaster Recovery to define default launch settings to apply to new source servers, and you can modify launch settings in bulk for multiple source servers. These additional management capabilities allow a simplified process to modify your launch settings at scale.
For applications running on AWS, you can also modify the recovery Availability Zone for multiple source servers, which helps simplify cross-Availability Zone recovery.
The additional launch settings management capabilities are available in all of the AWS Regions where Elastic Disaster Recovery is supported. See the AWS Regional Services List for the most up-to-date availability information.
Amazon DocumentDB (with MongoDB compatibility) provides ODBC driver to connect from BI tools
Amazon DocumentDB (with MongoDB compatibility) is a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads.
Today, Amazon DocumentDB announced an ODBC connector that enables connectivity from BI applications like Microsoft Excel and PowerBI to Amazon DocumentDB clusters. Using the ODBC connector, you can also now query and visualize data stored in DocumentDB from applications that support ODBC connectivity. You can access the source code and download the driver here.
The ODBC driver is open source and available for the user community under the Apache-2.0 license. You can use the GitHub repository to gain enhanced visibility into the driver implementation and contribute to its development.
Introducing the AWS CloudFormation Template Sync Controller for Flux
This week, AWS announced the preview release of the AWS CloudFormation Template Sync Controller for Flux, a new open source project that automates the process of syncing changes from CloudFormation templates to CloudFormation stacks.
Flux CD is an open source, Cloud Native Computing Foundation (CNCF) graduated project that keeps Kubernetes clusters in sync with sources of configuration including Git repositories, S3 buckets, and Open Container Initiative (OCI) compatible repositories (such as Amazon Elastic Container Registry). AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
You create a template that describes all the AWS resources that you want, and CloudFormation takes care of provisioning and configuring those resources for you.
The AWS CloudFormation Template Sync Controller for Flux is an extension to Flux that lets you store your CloudFormation templates in a Git repository and automatically deploy them as CloudFormation stacks in your AWS account.
After installing the CloudFormation Template Sync controller into your Kubernetes cluster, you can configure Flux to monitor your Git repository for changes to CloudFormation template files. When a CloudFormation template file is updated in a Git commit, the CloudFormation controller is designed to automatically deploy the latest template changes to your CloudFormation stack.
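A resource for the controller might be sketched roughly as follows. The API group, kind, and field names here are illustrative assumptions; check the project's documentation for the real schema:

```yaml
# Hypothetical CloudFormationStack resource for the Template Sync
# Controller -- field names are illustrative only.
apiVersion: cloudformation.contrib.fluxcd.io/v1alpha1
kind: CloudFormationStack
metadata:
  name: my-app-stack
spec:
  stackName: my-app-stack
  templatePath: ./templates/my-app.yaml
  sourceRef:
    kind: GitRepository
    name: my-cfn-templates
  retryInterval: 5m
```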
Google Cloud Releases and Updates
Source: cloud.google.com
Anthos Clusters on bare metal
Anthos clusters on bare metal 1.14.4 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.14.4 runs on Kubernetes 1.25.
Anthos Service Mesh
Enabling mesh.googleapis.com automatically enables trafficdirector.googleapis.com, networkservices.googleapis.com, and networksecurity.googleapis.com. These APIs are required for managed Anthos Service Mesh. However, you can safely disable them on a project or fleet that has no managed Anthos Service Mesh clusters.
App Engine flexible environment Java
Java 11 and 17 are now generally available. These versions require you to specify an operating system version in your app.yaml. Learn more.
App Engine standard environment Go / PHP / Python
If you use the local development server to simulate an App Engine app in production, you must now run dev_appserver.py with Python 3 and set the CLOUDSDK_DEVAPPSERVER_PYTHON environment variable in your shell to the path of your Python 2 interpreter. Learn more about the required setup steps.
Apigee Integration
Apigee Integration fails to validate incorrect variable assignments in an integration. For example, you can currently assign a JSON value to an unassigned variable of String data type. This behaviour might cause data mapping and integration failures.
Until this issue is resolved, we recommend that you do the following:
- Assign values to integration variables according to their data types.
- Verify and update existing integration variable values according to their respective data types.
Application Integration
Application Integration fails to validate incorrect variable assignments in an integration. For example, you can currently assign a JSON value to an unassigned variable of String data type. This behaviour might cause data mapping and integration failures.
Until this issue is resolved, we recommend that you do the following:
- Assign values to integration variables according to their data types.
- Verify and update existing integration variable values according to their respective data types.
Assured Workloads
The FedRAMP Moderate compliance regime now supports the following products. See Supported products for more information:
- Access Approval
- Cloud Asset Inventory
- GKE Hub
- Traffic Director
The following compliance regimes now support the list of products below:
- Australia Regions with Assured Support
- Canada Regions and Support
- Canada Protected B
- Israel Regions and Support
- US Regions and Support
The following products are now supported. See supported products for more information:
- Artifact Registry
- Cloud Bigtable
- Cloud DNS
- Cloud HSM
- Cloud Interconnect
- Cloud Key Management Service (KMS)
- Cloud Load Balancing
- Cloud Monitoring
- Cloud NAT
- Cloud Router
- Cloud Run
- Cloud VPN
- Firestore
- Identity and Access Management (IAM)
- Identity-Aware Proxy (IAP)
- Network Connectivity Center
- Pub/Sub
- Virtual Private Cloud
- VPC Service Controls
BigQuery
Updates to preferred tables for existing BI engine reservations now take up to ten seconds to propagate, down from five minutes. This feature is generally available (GA).
Certificate Manager
Certificate Manager now supports Mutual TLS (mTLS) authentication. This is a public preview feature. For more information, see Trust configs.
Chronicle
Chronicle released the following additional data enrichment and precomputed analytic capabilities that can provide additional context during an investigation:
- Enriched entities with WHOIS data.
- Enriched entities with VirusTotal relationship data.
- Enriched events with VirusTotal file metadata.
- Data from Google Cloud Threat Intelligence curated threat feeds.
- Precomputed first-seen and last-seen occurrence for domains, IP addresses, and file hashes (SHA256, SHA1, MD5).
- Precomputed first-seen occurrence for assets and users.
Cloud Bigtable
The Cloud Bigtable documentation has been updated to include guidance on deleting data. For details, see Deletes.
Cloud Composer
Cloud Composer 2.1.13 release started on April 18, 2023. Get ready for upcoming changes and features as we roll out the new release to all regions. This release is in progress at the moment. Listed changes and features might not be available in some regions yet.
(Composer 2 only) Cloud Composer is now available in Taiwan (asia-east1), Jakarta (asia-southeast2), and Netherlands (europe-west4).
Java Runtime in Airflow workers and schedulers is updated from version 11 to version 17.
The apache-airflow-providers-google package in images with Airflow 2.3.4 and 2.4.3 was upgraded to 2023.4.13+composer. Changes compared to version 2023.3.14+composer:
- Update Google Display and Video 360 operators to use API v2.
- Update Google Campaign Manager operators to use API v4.
- Update the google-cloud-dlp package to version 3.7.1 and adjust hooks and operators.
(Airflow 2.4.3 only) In environments with enabled data lineage integration, removed unnecessary warnings about deprecated operators that appeared in Airflow task logs.
The Google Display and Video 360 API v1.1 is deprecated and its sunset date is April 27, 2023. Airflow operators that relied on API v1.1 will stop working after this date. If you use Google Display and Video 360 operators, then upgrade your environment to Cloud Composer version 2.1.13 or later. For more information about changes in operators, see Known Issues.
The Google Campaign Manager API v3.5 is deprecated and its sunset date is May 1, 2023. Airflow operators that relied on API v3.5 will stop working after this date. If you use Google Campaign Manager operators, then upgrade your environment to Cloud Composer version 2.1.13 or later.
Cloud Composer 2.1.13 images are available:
- composer-2.1.13-airflow-2.4.3 (default)
- composer-2.1.13-airflow-2.3.4
Cloud Composer versions 2.0.10 and 1.18.6 have reached the end of their full support period.
Cloud Data Loss Prevention
You can assign a sensitivity level to a built-in or custom infoType. Cloud DLP uses the sensitivity levels of individual infoTypes to calculate the sensitivity levels of tables that you profile. For more information, see Manage infoTypes through the Google Cloud console.
Cloud Database Migration Service
Database Migration Service now supports Oracle multi-tenant (CDB/PDB) architecture. For information about configuring pluggable databases for use with Database Migration Service, click here.
Cloud Functions
There is a change in retry policy for 1st gen functions that use Pub/Sub subscriptions. Newly created 1st gen functions with "retry on failure" enabled will now use exponential backoff, configured with a minimum backoff of 10 seconds and a maximum backoff of 600 seconds.
This new policy replaces the old "retry immediately" policy. This policy is applied to new 1st gen functions the first time you deploy them. It is not retroactively applied to existing functions, even if you redeploy them. 2nd gen functions will continue to use an exponential backoff strategy. For details, see Retrying event-driven functions.
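The shape of such a backoff schedule can be sketched as below. A doubling growth factor is assumed here for illustration; the announcement specifies only the 10-second minimum and 600-second maximum, not the exact curve Cloud Functions uses:

```python
# Illustrate an exponential backoff schedule clamped between a minimum
# of 10 s and a maximum of 600 s, as in the new 1st gen retry policy.
# The doubling factor is an assumption for demonstration purposes.
def backoff_schedule(attempts, min_s=10, max_s=600, factor=2):
    delays = []
    delay = min_s
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * factor, max_s)
    return delays
```

Unlike the old "retry immediately" policy, repeated failures now back off quickly to the 600-second ceiling instead of hammering the function.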
A Cloud Functions (2nd gen) function will now accept requests from the Shared VPC network that it is connected to, including when Ingress is configured as "Internal" or "Internal and Cloud Load Balancing." (Preview)
Cloud Key Management Service
Cloud HSM resources are now available in the following regions:
- europe-west12
- me-central1
For information about which locations are supported by Cloud KMS, Cloud HSM, and Cloud EKM, see Cloud KMS locations.
Cloud Load Balancing
Typically with HTTPS communication, the authentication works only one way: the client verifies the identity of the server. For applications that require the load balancer to authenticate the identity of clients that connect to it, both a global external HTTP(S) load balancer and a global external HTTP(S) load balancer (classic) support mutual TLS (mTLS).
With mTLS, the load balancer requests that the client send a certificate to authenticate itself during the TLS handshake with the load balancer. You can configure a trust store that the load balancer uses to validate the client certificate's chain of trust.
For details, see the following:
- Mutual TLS authentication
- Set up mutual TLS with signed certificates
- Set up mutual TLS with a private CA
- Set up mutual TLS for a global external HTTP(S) load balancer (classic)
- Set up mutual TLS for a global external HTTP(S) load balancer
This capability is in Preview.
Global external HTTP(S) load balancers now support proxying traffic to external backends outside Google Cloud. To define an external backend for a load balancer, you use a global resource called an internet network endpoint group (NEG).
This capability is in Preview.
Cloud Logging
You can now configure Log Analytics on Cloud Logging buckets and BigQuery linked datasets by using Terraform modules.
Cloud Run
Cloud Run integrations (Preview) are now available in europe-west1.
Session affinity for Cloud Run service revisions is now at general availability (GA).
A Cloud Run service revision will now accept requests from the Shared VPC network that it is connected to, including when Ingress is configured as "Internal" or "Internal and Cloud Load Balancing." (Preview)
Cloud SQL for MySQL
Cloud SQL for MySQL now supports 40+ new database flags. See supported flags for more information.
Cloud Tasks
You can now create tasks by sending an HTTP request to your queue. To learn more, read about the new BufferTask method in Create tasks.
This feature is in Preview.
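The buffering endpoint can be sketched as a URL builder like the one below. The v2beta3 ":buffer" path shown here is an assumption about the preview API; verify it against the Cloud Tasks REST reference before relying on it:

```python
# Build the (assumed) HTTP endpoint for buffering a task to a queue.
# Path format is an illustration of the preview BufferTask method,
# not a confirmed stable API.
def buffer_task_url(project, location, queue):
    return ("https://cloudtasks.googleapis.com/v2beta3/"
            f"projects/{project}/locations/{location}/queues/{queue}/tasks:buffer")
```

A POST to this URL (with appropriate authentication) would enqueue the request body as a task, without constructing a full Task object client-side.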
For tasks that have HTTP targets (as opposed to App Engine targets), you can now set routing for tasks at the queue level. If you set routing at the queue level, you do not have to set routing for each individual task. To learn more, see Configure routing.
This feature is in Preview.
Dataform
Cloud Logging is available for Dataform in Preview.
Dataproc
Announcing Dataproc General Availability (GA) support for CMEK organization policy.
Datastream
Datastream now supports Oracle multi-tenant (CDB/PDB) architecture. For information about configuring pluggable databases for use with Datastream, click here.
Document AI Warehouse
Added the skip_ingested_documents flag in the Cloud Storage Ingest Pipelines to skip ingested documents.
Eventarc
Support for creating triggers for direct events from Cloud Firestore is available in Preview.
GKE
GKE cluster versions have been updated.
New versions available for upgrades and new clusters
The following Kubernetes versions are now available for new clusters and for opt-in control plane upgrades and node upgrades for existing clusters. For more information on versioning and upgrades, see GKE versioning and support and Upgrades.
Version 1.25.7-gke.1000 is now the default version.
Recommender
New Service limit (quota) recommender is now available in Preview. The recommendations help you identify resources that may be approaching their quota limits.
Resource Manager
You can now create dry-run organization policies using the Google Cloud console.
VPC
Private Service Connect backends support using an internal regional TCP proxy load balancer to access published services. This feature is available in Preview.
Private Service Connect endpoints for published services can be configured with global access. When global access is configured, clients in any region can send traffic to endpoints. Global access for endpoints is available in Preview.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
Linux SMB clients can now use Azure Files identity based authentication and authorization. This capability is available to devices that are domain joined to either customer-managed Microsoft AD DS or Azure AD DS.
Azure is excited to announce the release of CycleCloud 8.4.0.
Here is the list of improvements and bug fixes:
New Features
• Slurm 3.0 is now supported:
- Support for dynamic nodes, and dynamic partitions via dynamic nodearrays, supporting both single and multiple VM sizes
- New Slurm versions 23.02 and 22.05.8
- Cost reporting via the azslurm CLI
- azslurm CLI based autoscaler
- Ubuntu 20 support
- Removed the need for the topology plugin, and therefore also any submit plugin
• Encryption-At-Host is now supported
• Added Event Log for auditing storage credentials
• Cluster usage schema now includes VM size details
• New regions supported for pricing, including Poland, Sweden, and Jio India
Resolved Issues
• VMSS which were removed outside of CycleCloud could trigger a null-pointer exception.
• Users were not able to interact with checkboxes in forms
• node.json was not parsed using utf-8 encoding on Windows
• Pricing for Ultra disks was not collected correctly
• VMs using Ultra disks failed to be created if there was no price information
• Restoring from backup sometimes picked an older version to restore from
• Restoring from backup did not work if port 80/443 were in use
• LSF example template was broken
• In certain cases, nodes could be added to the cluster but not counted against the limits
• Users were not able to change their password
• CVE-2021-43980 is mitigated
Here’s a link to the release notes in our public documentation: Release Notes v8.4.0 - Azure CycleCloud | Microsoft Learn
UPGRADING CYCLECLOUD
Please follow these links depending on the customer’s configuration: Upgrading on Debian or Ubuntu, Upgrading on Enterprise Linux (RHEL) clones, or Upgrading from the Microsoft Download center. More info: Upgrade or Migrate - Azure CycleCloud | Microsoft Docs
QUESTIONS?
Please feel free to reach out at askcyclecloud@microsoft.com, or the AskCycleCloud Teams channel.
Public preview: Azure Container Apps available in Azure China
You can now use Azure Container Apps in Azure China.
Generally Available: Kubernetes 1.26 support in AKS
Benefit from Kubernetes 1.26 features in production.
Public preview: AKS service mesh addon for Istio
Installing and using Istio on your AKS clusters is now easier.
Generally Available: Long term support version in AKS
You now have the option to enable long term support for a specific Kubernetes version.
Public preview: Fail Fast Upgrade on API Breaking change detection
Use this feature to reduce the time you have to spend on post-upgrade workload troubleshooting.
Generally Available: Azure CNI Overlay for Linux
Simplify your AKS cluster management, routing configurations, and efficiently scale your clusters.
OpenCost for AKS cost visibility
Get greater visibility into current and historic Kubernetes spend and resource allocation.
GA: Azure Active Directory workload identity with AKS
You can now use AAD workload identity with AKS in production.
Azure Service Operator stable release version 2.0 now available
Create and manage over 100 Azure resources such as databases, storage accounts, VMs, and more from within Kubernetes.
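To illustrate the idea (the exact `apiVersion` depends on the ASO release you install — check the CRDs on your cluster for the versions available), an Azure resource such as a resource group is declared as an ordinary Kubernetes manifest and reconciled by the operator:

```yaml
# Sketch only: apiVersion and names may differ by ASO release.
apiVersion: resources.azure.com/v1api20200601
kind: ResourceGroup
metadata:
  name: aso-sample-rg
  namespace: default
spec:
  location: westus2
```

Applying the manifest with `kubectl apply` has the operator create the resource group in Azure; deleting it removes the Azure resource, keeping cluster state and cloud state in sync.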
Hotpatch is available for preview images of Windows Server Azure Edition VMs running the Desktop Experience installation mode.
Generally Available: Azure App Service - New Premium v3 Offerings
Introducing two new offerings in the Premium v3 (Pv3) service tier on Azure App Service
Public Preview: Isovalent Cilium Enterprise through Azure Marketplace
Isovalent Cilium Enterprise is now available to Azure Kubernetes Service (AKS) customers in the Microsoft Azure Marketplace, marking a key milestone in delivering the expansive eBPF-powered Cilium and Isovalent Cilium Enterprise to Azure Kubernetes Service and Azure.
Regional expansion: Azure Elastic SAN Public Preview is now available in more regions
Azure Elastic SAN (preview) is now available in Australia East, North Europe, Sweden Central, UK South, West Europe, East US, East US 2, South Central US, and West US 3.
Azure Storage Mover is now Generally Available
We are excited to announce that Azure Storage Mover is now generally available.
General Availability: Azure CNI Overlay
Azure CNI Overlay utilizes an overlay network to reduce IP utilization while providing better performance and scale.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here (No credit card required and includes a forever free tier):