Hello,
We hope you are negotiating March as well as we are. There's a lot happening at Hava, and some transformational features that are likely to change the game when it comes to cloud documentation are about to drop, so stay tuned.
Here's the weekly cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday March 3rd 2023.
All the latest Hava news can be found on our LinkedIn Newsletter.
Of course we'd love to keep in touch at the other usual places. Come and say hello on:
Source: aws.amazon.com
Amazon MQ now supports RabbitMQ version 3.10.17
Amazon MQ now provides support for RabbitMQ version 3.10.17, which includes several important fixes and performance optimizations to the previously supported version, RabbitMQ 3.10.10. Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS.
You can reduce your operational burden by using Amazon MQ to manage the provisioning, setup, and maintenance of message brokers. Amazon MQ connects to your current applications with industry-standard APIs and protocols to help you easily migrate to AWS without having to rewrite code.
If you are running a version of RabbitMQ earlier than 3.10.10, we encourage you to upgrade to RabbitMQ 3.10.17 to get access to the latest security, performance and feature enhancements. This can be accomplished with just a few clicks in the AWS Management Console. If your broker has automatic minor version upgrade enabled and is currently running version 3.10.10, Amazon MQ will automatically upgrade the broker to version 3.10.17 during a future maintenance window.
To learn more about upgrading, please see Managing Amazon MQ for RabbitMQ engine versions in the Amazon MQ Developer Guide. This new version is available in all Regions where Amazon MQ is available.
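For API-driven workflows, the same upgrade can be requested with the AWS SDK. The sketch below assumes boto3, valid AWS credentials, and a placeholder broker ID:

```python
import boto3

# Placeholder broker ID; replace with your broker's actual ID.
BROKER_ID = "b-1234abcd-56ef-78gh-90ij-example"

mq = boto3.client("mq")

# Request an in-place engine upgrade. The change takes effect at the
# next scheduled maintenance window or broker reboot.
mq.update_broker(
    BrokerId=BROKER_ID,
    EngineVersion="3.10.17",
)
```

This is a sketch, not a definitive runbook; the console flow described above achieves the same result.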
Announcing lower data warehouse base capacity configuration for Amazon Redshift Serverless
Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 8 Redshift Processing Units (RPU). Amazon Redshift Serverless measures data warehouse capacity in RPU and you pay only for the duration of workloads you run in RPU-hours on a per-second basis.
Previously, the minimum base capacity required to run serverless was 32 RPU. With the new lowered base capacity minimum of 8 RPU, you now have even more flexibility to support a diverse set of workloads, from small to large complexity, based on your price-performance requirements.
Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds.
With the new lower capacity configuration, you can use Amazon Redshift Serverless for production, test, and development environments at an optimal price point when workloads need a small amount of compute. You can increment or decrement the base capacity in units of 8 RPU.
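To make the RPU-hour billing model concrete, here is a small back-of-the-envelope sketch; the per-RPU-hour price is an illustrative assumption, not a quoted rate, so check current regional pricing:

```python
def rpu_hours(base_rpu: int, runtime_seconds: float) -> float:
    """Redshift Serverless bills in RPU-hours, metered per second."""
    return base_rpu * runtime_seconds / 3600.0

# Hypothetical rate for illustration only.
PRICE_PER_RPU_HOUR = 0.375

# A 15-minute workload on the new 8 RPU minimum vs the old 32 RPU minimum.
small_cost = rpu_hours(8, 15 * 60) * PRICE_PER_RPU_HOUR   # 2.0 RPU-hours
old_cost = rpu_hours(32, 15 * 60) * PRICE_PER_RPU_HOUR    # 8.0 RPU-hours
```

At the same runtime, the 8 RPU minimum quarters the billed RPU-hours relative to the old 32 RPU floor.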
Amazon QuickSight enhances the developer experience with SDK 2.0
The QuickSight Embedding SDK is a JavaScript library that allows developers to integrate analytics functionality into their products and applications. SDK version 2.0 supports TypeScript, ES6 (async/await) syntax, and utility features that enable developers to quickly build analytics experiences within their applications.
With SDK 2.0, host applications are able to dynamically communicate with the embedded iframe. Using info, warning, and error events, developers can monitor the embedded iframe lifecycle and react to different states in their applications.
Additionally, while a dashboard or visual is being rendered, developers can display a loading spinner, minimizing the need to write custom logic. Furthermore, developers can also externalize undo and redo dashboard actions into their application, as well as retrieve and pass parameters more efficiently.
With this release, we are further simplifying the developer workflow for embedding data visualization and analysis features such as charts, dashboards, KPIs, and reports into customer-facing applications.
Amazon RDS for MariaDB now supports RDS Optimized Writes
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports Amazon RDS Optimized Writes enabling up to 2x higher write throughput at no additional cost. This is especially useful for RDS for MariaDB customers with write-intensive database workloads, commonly found in applications such as digital payments, financial trading, and online gaming.
In RDS for MariaDB, you are protected from data loss due to unexpected events, such as a power failure, using a built-in feature called the “doublewrite buffer”. But this method of writing takes up to twice as long, consumes twice as much I/O bandwidth, and reduces the throughput and performance of your database.
Starting this week, Amazon RDS Optimized Writes provides you with up to 2x improvement in write transaction throughput on RDS for MariaDB by writing only once while protecting you from data loss and at no additional cost. Optimized Writes uses Torn Write Prevention, a feature of the AWS Nitro System, to reliably and durably write to table storage in one step.
Amazon RDS Optimized Writes is available as a default option from RDS for MariaDB version 10.6.10 and higher on db.r6i, db.r6g, db.r5b, db.x2iedn and db.x2idn database instances. Optimized Writes is available in all AWS regions where these instances are supported including AWS GovCloud (US-East, US-West) Regions.
Amazon RDS for PostgreSQL supports minor versions PostgreSQL 14.7, 13.10, 12.14, and 11.19
Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports the latest minor versions PostgreSQL 14.7, 13.10, 12.14, and 11.19. AWS recommends you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes, performance improvements, and new functionality added by the PostgreSQL community. Please refer to the PostgreSQL community announcement for more details about the release.
Once you have upgraded your database to any of the minor versions listed above, you can then upgrade to the latest PostgreSQL major version, 15.2, in a few clicks. You can also leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including minor and major version upgrades, in the Amazon RDS User Guide.
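As a hedged sketch of the API-driven route (boto3, valid AWS credentials, and a placeholder instance identifier are assumed), a minor version upgrade looks like this:

```python
import boto3

rds = boto3.client("rds")

# Upgrade a hypothetical instance to the latest 14.x minor version.
# Minor version upgrades do not require AllowMajorVersionUpgrade.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",  # placeholder
    EngineVersion="14.7",
    ApplyImmediately=True,  # or omit to wait for the maintenance window
)
```

Setting ApplyImmediately triggers the upgrade right away, with the associated downtime, rather than deferring it to the next maintenance window.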
AWS Database Migration Service Fleet Advisor now supports target recommendations
This week, AWS Database Migration Service (AWS DMS) announces the general availability of Fleet Advisor target recommendations, which gathers performance metrics and usage patterns of self-managed databases to recommend potential database engine and instance options for migration to AWS. Target recommendations can help to quickly identify the best migration option, based on estimated costs and limitations for each migration path.
AWS DMS Fleet Advisor provides recommendations for migrating self-managed Oracle, SQL Server, MySQL and PostgreSQL databases to AWS. In addition to providing recommended AWS database instance specifications for migration, this new capability also identifies database features in use where Amazon Relational Database Services (Amazon RDS) limitations may exist and provides prescriptive guidance on what is required for migration.
AWS DMS Fleet Advisor target recommendations are generally available, and you can use them in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Europe (Paris), Europe (London), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), South America (Sao Paulo) and Canada (Central).
Amazon Redshift now supports 200K tables in a single cluster
Amazon Redshift now supports up to 200K tables for Redshift Serverless and for clusters with ra3.4xlarge, ra3.16xlarge, dc2.8xlarge, and ds2.8xlarge node types. This feature is intended for customers with workloads that require a large number of tables to run with Amazon Redshift without having to split the tables across clusters or storing them in Amazon S3.
Until now, Amazon Redshift supported 100K tables for the above-mentioned node types. Customers with more tables had to split their tables across Redshift clusters or move some tables to Amazon S3.
Now AWS customers can migrate their workloads that use up to 200K tables to Amazon Redshift without splitting or moving their tables. This capability is automatically enabled for all supported node types for existing and new clusters. Customers don’t need to change their workloads, data ingestion, or their applications to take advantage of this feature.
The limit includes user-defined temporary tables and temporary tables created by Amazon Redshift during query processing or system maintenance. Views are not included in this limit.
AWS Glue now provides continuous logs in AWS Glue Job Monitoring
AWS Glue now displays continuous logs on the job run details page of AWS Glue Studio. AWS Glue is a serverless data integration and ETL service that helps discover, prepare, move, and integrate data for analytics and machine learning (ML). As your ETL and data integration jobs run, you can now see the logs update in real-time.
With this new feature, customers can view consolidated CloudWatch logs for the Apache Spark driver and executors in AWS Glue Job Monitoring as well as in the “Runs” tab in AWS Glue Studio’s job authoring interface. This makes it possible to track job run progress and identify errors without leaving AWS Glue.
AWS Glue introduces faster and simpler permissions setup
AWS Glue now offers guided permissions setup in AWS Console. AWS Glue is a serverless data integration and ETL service that helps discover, prepare, move, and integrate data for analytics and machine learning (ML).
Administrators can use the new setup tool to grant IAM roles and users access to AWS Glue and their data, as well as set a default role for running jobs.
With this new feature, customers no longer need to read documentation or manually attach IAM policies to users that give them permissions to use AWS Glue functionality.
The new setup tool also sets a default role for new AWS Glue jobs and notebooks, so users can start authoring jobs and working with the Data Catalog without further setup.
Amazon Kinesis Data Firehose now supports data delivery to Elastic
Amazon Kinesis Data Firehose now supports streaming data delivery to Elastic. With this integration, Elastic users have an easier way to ingest streaming data to Elastic and consume the Elastic Stack (ELK Stack) solutions for enterprise search, observability, and security without having to manage applications or write code.
Amazon Kinesis Data Firehose makes it easier to reliably load streaming data into data lakes, data stores, and analytics services. You can use it to capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon OpenSearch Service, generic HTTP endpoints, and service providers like Splunk and Datadog.
It is a fully managed service that automatically scales to match the throughput of your data and handles all of the underlying stream management with no ongoing administration required. With Amazon Kinesis Data Firehose, you don't need to write delivery applications or manage resources.
Elastic is an AWS ISV Partner that helps you find information, gain insights, and protect your data when you run on Amazon Web Services (AWS). Elastic offers enterprise search, observability, and security that are built on a single, flexible technology stack that can be deployed anywhere.
Announcing Open Data Maps for Amazon Location Service
AWS announces the general availability of the Open Data Maps feature for Amazon Location Service. Open Data Maps is a new data provider option for the Maps feature based on OpenStreetMap (OSM), a geospatial data source supported by a global community.
Developers can now easily access reliable and up-to-date OSM data with no upfront investment or specialized geospatial knowledge. In addition to the commercial data provider options from Esri, HERE, and GrabMaps, Open Data Maps now gives developers more choices to integrate maps into their applications, enabling them to leverage OSM’s flexible licensing terms and continuously improving data quality.
With Open Data Maps, developers can easily integrate OSM-based map tiles into their web or mobile applications and overlay information on top, with four professionally designed map styles to support use cases such as logistics, delivery, and data visualization.
Developers can rely on the availability, low latency, security, and reliability of Amazon Location Open Data Maps, without the need to set up and operate specialized OSM tools. In addition, developers no longer need to be concerned with the freshness of their location data as Amazon Location refreshes the data on a regular basis.
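As an illustrative sketch (the map name is a placeholder, and the style name is assumed from the Open Data style family; verify the exact style strings in the Amazon Location documentation), creating a map resource backed by the new provider might look like:

```python
import boto3

location = boto3.client("location")

# Create a map resource backed by the Open Data (OpenStreetMap) provider.
# "VectorOpenDataStandardLight" is assumed to be one of the four Open Data
# styles; the map name is a placeholder.
location.create_map(
    MapName="my-open-data-map",
    Configuration={"Style": "VectorOpenDataStandardLight"},
)
```

The returned map resource can then be rendered in web or mobile apps via the Amazon Location map tile APIs, like any other provider.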
Amazon RDS Proxy is now available in AWS Asia Pacific (Jakarta) Region
Amazon Relational Database Service (RDS) Proxy is now available in AWS Asia Pacific (Jakarta) Region. RDS Proxy is a fully managed and a highly available database proxy for RDS and Amazon Aurora databases. RDS Proxy helps improve application scalability, resiliency, and security.
Many applications, including those built on modern serverless architectures, may need to have a high number of open connections to the database or may frequently open and close database connections, exhausting the database memory and compute resources. RDS Proxy allows applications to pool and share database connections, improving your database efficiency and application scalability.
In the event of a failover, RDS Proxy can maintain and continue to accept connections from your application and reduce the failover time by up to 66%, improving availability for your RDS and Aurora databases. Finally, with RDS Proxy, your applications can enforce IAM authentication, reducing the need to store database credentials in your application.
AWS Config now supports 18 new resource types
AWS Config now supports 18 more resource types for services, including AWS Device Farm, AWS Budgets, Amazon Lex, Amazon CodeGuru Reviewer, AWS IoT Core, Amazon Route 53 Resolver, AWS RoboMaker, Amazon Elastic Compute Cloud (Amazon EC2), AWS IoT SiteWise, Amazon Lookout for Metrics, Amazon Simple Storage Service (Amazon S3), Amazon EventBridge, and AWS Elemental MediaPackage.
With this launch, customers can now use AWS Config to monitor configuration data for these newly supported resource types.
Amazon Aurora MySQL-Compatible Edition now supports Microsoft Active Directory authentication
Amazon Aurora MySQL-Compatible Edition now supports authentication of database users using Microsoft Active Directory. You can use Active Directory to authenticate with Amazon Aurora using AWS Directory Service for Microsoft Active Directory or with your on-premises Active Directory by establishing a trusted domain relationship.
With support for Active Directory authentication in Amazon Aurora MySQL-Compatible Edition, you have access to single sign-on and centralized authentication of database users. Single sign-on reduces the operational overhead of database user management across multiple authentication approaches and credentials.
Moreover, a centralized authentication approach enables customers to leverage native Active Directory credential management capabilities to manage password complexities and rotation. This allows you to effectively keep pace with the myriad of compliance and security requirements across the globe and improve the security posture of your critical business assets.
Active Directory authentication is supported for Aurora MySQL version 3.03 (compatible with MySQL 8.0.26) and higher. To learn more about Active Directory authentication, please go to Aurora MySQL security.
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
Amazon DynamoDB now supports table deletion protection
Deletion protection is now available for Amazon DynamoDB tables in all AWS Regions. DynamoDB now makes it possible for you to protect your tables from accidental deletion when performing regular table management operations.
When creating new tables or managing existing tables, authorized administrators can set the deletion protection property for each table, which will govern whether a table can be deleted. The default setting for the deletion protection property is disabled. When the deletion protection property is enabled for a table, the table cannot be deleted, irrespective of whether any AWS Identity and Access Management (IAM) permissions policies allow deletion of the table.
When deletion protection is disabled for a table, authorized administrators can delete the table if allowed by IAM permissions policies. The deletion protection property can be set through the AWS Management Console, AWS API, AWS CLI, AWS SDK, or AWS CloudFormation.
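As a minimal sketch of the API path (boto3 and AWS credentials assumed; the table name is a placeholder), toggling the property is a single call:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable deletion protection on an existing table (name is a placeholder).
dynamodb.update_table(
    TableName="orders",
    DeletionProtectionEnabled=True,
)

# While enabled, DeleteTable calls fail regardless of IAM permissions.
# To allow deletion again, flip the property back:
# dynamodb.update_table(TableName="orders", DeletionProtectionEnabled=False)
```

The same flag can be set at table creation time, so new tables can be protected from the start.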
AWS Security Hub launches support for NIST SP 800-53 Rev. 5
AWS Security Hub now supports automated security checks aligned to the National Institute of Standards and Technology (NIST) Special Publication 800-53 Revision 5 (NIST SP 800-53 r5). Security Hub’s NIST SP 800-53 r5 standard includes up to 224 automated controls that conduct continual checks against 121 NIST SP 800-53 r5 requirements across 36 AWS services. This includes 10 new security controls that are unique to this standard.
The new standard is now available in all public AWS Regions where Security Hub is available and in the AWS GovCloud (US) Regions. To see and activate the new standard and the checks within it, visit the Standards page in Security Hub. You can also activate the standard using the BatchEnableStandards API or use our example script to engage the standard across many accounts and Regions.
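Programmatic activation can be sketched as follows (boto3 and AWS credentials assumed); rather than hardcoding a standards ARN, the sketch discovers it first, since the exact ARN format is an assumption to verify:

```python
import boto3

securityhub = boto3.client("securityhub")

# Discover the NIST SP 800-53 r5 standard rather than assuming its ARN.
standards = securityhub.describe_standards()["Standards"]
nist_arns = [
    s["StandardsArn"] for s in standards
    if "nist-800-53" in s["StandardsArn"]
]

# Activate the standard; checks begin running against this account/Region.
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[{"StandardsArn": arn} for arn in nist_arns]
)
```

For multi-account rollouts, the same calls would be repeated per account and Region, which is what the example script mentioned above automates.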
AWS Elemental MediaConvert now supports enhanced color processing
AWS Elemental MediaConvert now has enhanced color processing features along with support for new color formats. As High Dynamic Range (HDR) content becomes more common, the ability to convert between Standard Dynamic Range (SDR) and HDR has also become important. While high-value content is commonly created using HDR color, most legacy, user-generated, and mass-produced content is created in SDR.
Dolby Vision dynamic metadata can now be generated on-the-fly in MediaConvert jobs to convert non-Dolby Vision HDR content into Dolby Vision, without the need for a color mastering suite and content specific color grading. MediaConvert now supports the DCI-P3 color space, commonly used for color grading content destined for final distribution in both HDR and SDR formats.
MediaConvert has also added the ability to set the reference white point when mapping SDR color values into the HDR space during format conversions. MediaConvert has added the ability to apply color range clipping in the color corrector to ensure outputs remain within broadcast legal limits.
With AWS Elemental MediaConvert, audio and video providers with any size content library can easily and reliably transcode on-demand content for broadcast and multiscreen delivery. MediaConvert functions independently or as part of AWS Media Services, a family of services that form the foundation of cloud-based workflows and offer the capabilities needed to transport, transcode, package, and deliver video.
Delegated administrator for AWS Organizations launches in the AWS GovCloud (US) Regions
We are excited to launch delegated administrator for AWS Organizations to help you delegate the management of your Organizations policies, enabling you to govern your AWS organization and member accounts with increased agility and decentralization.
You can now allow member accounts to manage policy types specific to their needs. By specifying fine-grained permissions, you can balance flexibility with limiting access to your highly privileged management accounts.
You can use AWS Organizations to centrally manage and govern multiple accounts with AWS. As you scale operations and need to manage more accounts within AWS Organizations, implementing and scaling policy administration requires coordination between multiple teams, and can take more time.
You can now delegate the management of policies to designated member accounts that are known as delegated administrators for AWS Organizations. You can select any policy type — backup policies, service control policies (SCPs), tag policies, and AI services opt-out policies — and specify permissible actions.
Once access is delegated, users with the right permissions can go to the AWS Organizations console, see and manage the policies that they have permissions for, and create their own policies.
AWS Glue 4.0 now supports Streaming ETL
AWS Glue now supports Streaming ETL in version 4.0, a new version of AWS Glue that accelerates data integration workloads in AWS. AWS Glue 4.0 upgrades data integration engines, including an upgrade to Apache Spark 3.3.0 and to Python 3.10.
AWS Glue streaming ETL jobs continuously consume data from streaming sources, clean and transform the data in-flight, and make it available for analysis in seconds. This release includes an optimized state-management store to build efficient streaming solutions across micro-batches.
This makes it easier to remove duplicates in a stream and to perform stream-based aggregations. You can also add a new column that indicates when a corresponding record was received by the stream for better data observability. This version also supports IAM authentication for Amazon Managed Streaming for Apache Kafka Serverless.
AWS Application Composer is now generally available
AWS Application Composer helps you simplify and accelerate architecting, configuring, and building serverless applications. You can drag, drop, and connect AWS services into an application architecture by using the AWS Application Composer browser-based visual canvas. AWS Application Composer helps you focus on building by maintaining deployment-ready infrastructure as code (IaC) definitions, complete with integration configuration for each service.
With AWS Application Composer, you can start a new architecture from scratch, or you can import an existing AWS CloudFormation or AWS Serverless Application Model (SAM) template. You can add and connect AWS services, and AWS Application Composer helps generate deployment-ready projects, then maintains the visual representation of your application architecture in sync with your IaC.
This general availability release adds improved resource support for Amazon API Gateway—such as direct integration with Amazon Simple Queue Service (SQS)—and includes user interface improvements, interaction improvements, and localization in 10 languages.
AWS Resource Explorer supports 12 new resource types
AWS Resource Explorer supports 12 more resource types from services including Amazon Simple Queue Service (SQS), AWS Lambda, and Amazon ElastiCache.
With this launch, customers can now use AWS Resource Explorer to search for and discover resources for the following newly supported resource types:
1. elasticache:cluster
2. elasticache:globalreplicationgroup
3. elasticache:parametergroup
4. elasticache:replicationgroup
5. elasticache:reserved-instance
6. elasticache:snapshot
7. elasticache:subnetgroup
8. elasticache:user
9. elasticache:usergroup
10. lambda:code-signing-config
11. lambda:event-source-mapping
12. sqs:queue
Amazon EC2 I3 Bare Metal Instance is now available in the Middle East (Bahrain) region
Starting this week, the storage optimized Amazon EC2 I3 bare metal instance (i3.metal) is also available in the Middle East (Bahrain) Region. Amazon EC2 I3 instances provide your applications with direct access to the compute and memory resources of the underlying next-generation AWS hardware and software infrastructure.
Amazon EC2 I3 bare metal instances let you run a variety of workloads on AWS, including non-virtualized workloads, workloads that benefit from direct access to physical resources, and workloads that may have licensing restrictions.
Customers in this Region can purchase i3.metal capacity via Savings Plans, Reserved Instances, On-Demand, and Spot Instances.
AWS announces new competition structure for the 2023 Season
This week, AWS launched the 2023 season of the award-winning AWS DeepRacer League - where developers of all skill levels advance their knowledge of machine learning (ML) and compete in the world’s first global autonomous racing league!
Starting March 1st, developers have more chances to earn achievements and win prizes than ever before with an all-new three-tier competition. In the past, developers competed globally to win a spot in the World Championship.
Now developers have the opportunity to compete with their peers in national and regional races for a spot in the World Championship, a trip to re:Invent in Las Vegas, and the $43,000 prize purse.
From March through October, the top 10% of competitors from each country's monthly race will receive $50 to purchase AWS DeepRacer merchandise on amazon.com, while the top developer in each region (Europe, Greater China, Middle East and Africa, North America, South America) will also win a trip to the World Championship at re:Invent 2023 in Las Vegas.
Top developers in season standings at regional and world levels (based on total accumulated points) will also win trips to the World Championship at re:Invent in Las Vegas.
Amazon Redshift Query Editor V2 is now available in the AWS GovCloud (US) Regions
You can now use the Amazon Redshift Query Editor V2 with Amazon Redshift clusters in the AWS GovCloud (US) Regions. Amazon Redshift Query Editor V2 makes data in your Amazon Redshift data warehouse and data lake more accessible with a web-based tool for SQL users such as data analysts, data scientists, and database developers.
With Query Editor V2, users can explore, analyze, and collaborate on data. It reduces the operational costs of managing query tools by providing a web-based application that allows you to focus on exploring your data without managing your infrastructure.
Amazon Aurora extends the cross-region disaster recovery capabilities to additional AWS regions
Amazon Aurora now extends the disaster recovery capabilities of Global Database and cross-Region database cluster snapshot copying to nine additional AWS Regions, including Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta), Europe (Milan, Spain, Zurich) and Middle East (Bahrain, UAE).
An Amazon Aurora Global Database is a single database that can span up to six AWS Regions, enabling disaster recovery from region-wide outages and low latency global reads. Starting today, you can create an Aurora global database in additional regions for replicating writes from the primary to the secondary AWS Regions with a typical latency of less than one second, enabling both fast failover with minimal data loss and low latency global reads.
Aurora Global Database is available for both the MySQL-compatible and PostgreSQL-compatible editions of Aurora. For information about Aurora global database versions supported see Using Amazon Aurora global databases.
With this release, cross-Region copying of database cluster snapshots, created either automatically or manually, is now also available in these additional Regions for your data retention, compliance, and/or disaster recovery needs. For more information see Copying a database cluster snapshot.
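A cross-Region snapshot copy can be sketched with boto3 as below; the destination Region, account ID, and snapshot identifiers are placeholders, and valid AWS credentials are assumed:

```python
import boto3

# Copy an Aurora cluster snapshot into one of the newly supported Regions.
# Destination here is Middle East (UAE); all identifiers are placeholders.
rds = boto3.client("rds", region_name="me-central-1")

rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster-snapshot:my-snapshot"
    ),
    TargetDBClusterSnapshotIdentifier="my-snapshot-copy",
    SourceRegion="us-east-1",  # boto3 presigns the cross-Region request
)
```

Note that the client is created in the destination Region; boto3 uses SourceRegion to generate the presigned URL the copy operation needs.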
IAM Roles for Amazon EC2 now provide Credential Control Properties
You can now use Credential Control Properties to more easily restrict the usage of your IAM Roles for EC2.
IAM Roles for EC2 allow your applications to securely make API requests without requiring you to directly manage the security credentials. Temporary and rotating IAM credentials are automatically provisioned to your instances' metadata service with permissions you've defined for the role. Your applications, usually through the AWS SDKs or CLI, then retrieve and use those temporary credentials.
Previously, if you wanted to restrict the network location where these credentials could be used, you would need to hard-code the VPC IDs and/or IP addresses of the roles in the role policy or VPC Endpoint policy. This required administrative overhead and potentially many different policies for different roles, VPCs, etc.
Each role credential now has two new properties, which are AWS global condition keys, adding information about the instance from which they were originally issued. These properties, the VPC ID and the Instance’s Primary Private IP address, can be used in IAM policies, Service Control Policies (SCPs), VPC endpoint policies, or resource policies to compare the network location where the credential originated to where the credential is used.
Broadly-applicable policies can now limit the use of your role credentials to only the location from where they originated. Examples of these policies are in this blog post. When creating IAM Roles, as with any IAM principal, use least-privilege IAM policies that restrict access to only the specific API calls your applications require.
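As a sketch of such a policy (the VPC ID is a placeholder, and you should verify the condition keys against the IAM documentation before deploying), an SCP or role policy can deny any request made with role credentials outside the VPC that issued them. The Null condition scopes the deny to requests where the new key is actually present:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyCredentialUseOutsideIssuingVpc",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringNotEquals": { "aws:EC2InstanceSourceVPC": "vpc-0123456789abcdef0" },
      "Null": { "aws:EC2InstanceSourceVPC": "false" }
    }
  }]
}
```

Without the Null condition, the StringNotEquals test would also match requests that lack the key entirely and deny far more than intended.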
AWS Snowball is now available in the AWS Middle East (UAE) Region
AWS Snowball, a member of the AWS Snow Family, is an edge computing, data migration, and edge storage device that comes in two configurations - Snowball Edge Storage Optimized and Snowball Edge Compute Optimized. The Snowball Edge Storage Optimized device provides 80 TB of Amazon S3 compatible object storage and is designed for local storage and large-scale data transfer.
You can use these devices for data collection, machine learning and processing, and storage in environments with intermittent connectivity (like manufacturing, industrial, and transportation) or in extremely remote locations (like military or maritime operations) before shipping the devices back to AWS.
AWS Snowball Edge Storage Optimized is offered with an on-demand pricing option, which includes 10 days of device use and a per-day fee for each additional day you use the device before sending it back to AWS. AWS Snowball Edge Compute Optimized is available in Monthly and 1-Year Commit pricing options.
AWS Lambda now supports up to 10 GB of ephemeral storage in 6 additional Regions
AWS Lambda now supports configuring up to 10,240 MB of ephemeral storage for functions in 6 additional Regions: Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Europe (Spain), Europe (Zurich) and Middle East (UAE). This feature makes it easier to build and run data-intensive workloads with Lambda functions.
With this release, you can now control the amount of ephemeral storage a function uses for reading or writing data, enabling you to use Lambda functions for workloads like ETL jobs, financial computations and machine learning inferences.
You can configure ephemeral storage (/tmp) between 512 MB and 10,240 MB using the AWS Management Console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS Cloud Development Kit (AWS CDK), the AWS Lambda API and AWS CloudFormation.
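As a minimal boto3 sketch (function name is a placeholder; AWS credentials assumed), resizing /tmp for an existing function is one configuration call:

```python
import boto3

lam = boto3.client("lambda")

# Raise /tmp for a hypothetical data-processing function to the 10 GB max.
# Lambda accepts sizes between 512 and 10,240 MB.
lam.update_function_configuration(
    FunctionName="etl-transform",      # placeholder
    EphemeralStorage={"Size": 10240},  # size in MB
)
```

Ephemeral storage beyond the default 512 MB is billed per GB-second, so size it to the workload rather than defaulting to the maximum.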
Introducing maintenance window feature for AWS IoT Device Management Jobs
AWS IoT Device Management Jobs now let customers schedule their remote actions within a maintenance window. Job scheduling already allowed customers to define the start and end time of a job rollout. Following this launch, customers can configure the maintenance window to recur on a daily, weekly, or monthly schedule (e.g. “Monday-Wednesday-Friday”), or define a customized recurrence for Continuous Jobs.
Devices that are added to target groups receive job execution notifications only within the pre-configured maintenance window duration without requiring any changes on device-side software. Customers with devices located in different time zones can also utilize this feature in combination with Dynamic Thing Groups and Device Shadow to schedule Job executions according to the local time of their devices (see this blog).
With Maintenance Window configuration, customers can automate device software updates to their enterprise or industrial assets based on devices' software deployment cycles.
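In SDK terms, the maintenance window is part of the job's scheduling configuration. The sketch below assembles a CreateJob request body; the job ID, target ARN, and cron expression are illustrative assumptions, and the live call is commented out.

```python
# Sketch of an AWS IoT CreateJob request with a recurring maintenance window.
# Field names follow the IoT Jobs schedulingConfig shape; the identifiers and
# the cron expression below are placeholders, not values from the announcement.
job_request = {
    "jobId": "firmware-update-42",  # hypothetical job ID
    "targets": [
        "arn:aws:iot:us-east-1:123456789012:thinggroup/fleet-a"  # placeholder
    ],
    "schedulingConfig": {
        "maintenanceWindows": [
            {
                # Recur Mon/Wed/Fri at 18:00; devices only receive job
                # execution notifications inside this 90-minute window.
                "startTime": "cron(0 18 ? * MON,WED,FRI *)",
                "durationInMinutes": 90,
            }
        ]
    },
}
# import boto3
# boto3.client("iot").create_job(**job_request)
```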
AWS Migration Hub Refactor Spaces now supports environments without a network bridge
Starting this week, AWS Migration Hub Refactor Spaces supports creating refactor environments without an AWS Transit Gateway based network bridge. This feature lets you safely and incrementally refactor your application while using your existing network infrastructure. You can now create refactor environments with your network infrastructure in minutes while benefiting from Refactor Spaces’ orchestration and management of policies, routing, API Gateway, and Network Load Balancer.
Refactor Spaces reduces the undifferentiated heavy lifting of building and operating the AWS infrastructure necessary for incremental refactoring, letting you focus on evolving your applications into microservices. The refactor environment without a network bridge helps accelerate app refactoring by making it easier to experiment with different refactor approaches.
Commonly used refactor approaches include incrementally transforming your monolithic applications to microservices, extending applications with new microservices, and refactoring the application’s account structure as a first refactoring step.
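Opting out of the network bridge is a choice you make when creating the refactor environment. A minimal boto3 sketch, assuming the CreateEnvironment parameter names below (the environment name is a placeholder, and the live call is commented out):

```python
# Sketch: create a Refactor Spaces environment without a network bridge by
# setting NetworkFabricType to "NONE" instead of "TRANSIT_GATEWAY".
# The environment name is hypothetical.
env_request = {
    "Name": "shop-refactor",      # hypothetical environment name
    "NetworkFabricType": "NONE",  # "TRANSIT_GATEWAY" would create the bridge
    "Description": "Incremental refactor using existing network infrastructure",
}
# import boto3
# boto3.client("migration-hub-refactor-spaces").create_environment(**env_request)
```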
You can now create a single AMI that can boot on both Unified Extensible Firmware Interface (UEFI) and Legacy BIOS.
In Amazon EC2, two variants of the boot mode software are supported: UEFI and Legacy BIOS. Previously, you could only create AMIs that support either UEFI or Legacy BIOS. If you wanted to support both types of boot modes, you would need to create and maintain separate sets of AMIs.
With this launch, you can now use the 'UEFI Preferred' boot mode, which boots your instance on UEFI if the Amazon EC2 instance type supports UEFI. If not, the instance boots using Legacy BIOS.
This feature is available in all AWS Regions, including the AWS GovCloud (US) Regions.
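You select the boot mode when registering the AMI. A hedged boto3 sketch, where the AMI name and snapshot ID are placeholders and the live call is commented out:

```python
# Sketch: register a single AMI that prefers UEFI but can fall back to
# Legacy BIOS, via BootMode="uefi-preferred". Names and IDs are placeholders.
register_params = {
    "Name": "my-dual-boot-ami",   # hypothetical AMI name
    "BootMode": "uefi-preferred", # boots UEFI where supported, else Legacy BIOS
    "Architecture": "x86_64",
    "RootDeviceName": "/dev/xvda",
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvda",
            "Ebs": {"SnapshotId": "snap-0123456789abcdef0"},  # placeholder
        }
    ],
}
# import boto3
# boto3.client("ec2").register_image(**register_params)
```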
Access Approval
Access Approval supports Cloud NAT in the GA stage
AlloyDB for PostgreSQL
Cloud Client libraries for the AlloyDB Admin API are in Preview. Supported languages include C++, C#, Go, and Java.
Anthos clusters on AWS
You can now launch clusters with the following Kubernetes versions:
Fixed cpp-httplib issues that left the kube-apiserver unable to reach Anthos Identity Service (AIS).
Upgraded fluent-bit to v1.9.9 to fix CVE-2022-42898.
This release fixes the following vulnerabilities:
Anthos clusters on Azure
You can now launch clusters with the following Kubernetes versions:
Fixed cpp-httplib issues that left the kube-apiserver unable to reach Anthos Identity Service (AIS).
Upgraded fluent-bit to v1.9.9 to fix CVE-2022-42898.
This release fixes the following vulnerabilities:
Anthos clusters on bare metal
Cluster lifecycle improvements 1.13.1 and later
Starting with Anthos clusters on bare metal release 1.13.1, you can use the Google Cloud console or the gcloud CLI to upgrade admin and user clusters managed by the Anthos On-Prem API. If your cluster is at version 1.13.0 or lower, you must use bmctl to upgrade the cluster.
For more information about using the console or the gcloud CLI for upgrades, see the documentation for your version of Anthos clusters on bare metal:
Anthos clusters on VMware
Anthos clusters on VMware 1.14.2-gke.37 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.14.2-gke.37 runs on Kubernetes 1.25.5-gke.100.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.
We no longer silently skip saving empty files in diagnose snapshots; instead, we collect the names of those files in a new empty_snapshots file in the snapshot tarball.
Fixed an issue where user cluster data disk validation used the cluster-level datastore vsphere.datastore instead of masterNode.vsphere.datastore.
Fixed an issue with Anthos Identity Service to better scale and handle concurrent authentication requests.
Fixed an issue where component-access-sa-key was missing in the admin-cluster-creds Secret after admin cluster upgrade.
Fixed an issue where user cluster upgrade triggered through the Google Cloud console might flap between ready and non-ready states until CA rotation fully completes.
Fixed an issue where gkectl diagnose cluster might generate false failure signals with non-vSphere CSI drivers.
Fixed an issue where admin cluster update did not wait for user control-plane machines to be re-created when using ControlPlaneV2.
Cluster lifecycle improvements versions 1.13.1 and later
You can use the Google Cloud console or the gcloud CLI to upgrade user clusters managed by the Anthos On-Prem API. The upgrade steps differ depending on your admin cluster version. For more information, see the version of the documentation that corresponds to your admin cluster version:
1.12.6 patch release
Anthos clusters on VMware 1.12.6-gke.35 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.12.6-gke.35 runs on Kubernetes v1.23.16-gke.2400.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.14, 1.13, and 1.12.
Anthos API Hub
On March 8, 2023, the Apigee Registry API documents were updated to include the Google APIs Explorer panel.
The Google APIs Explorer has been added to the Apigee Registry API documents. The Try this method panel acts on real data and lets you try Google API methods without writing code.
App Engine standard environment Ruby
The Ruby 3.2 runtime for App Engine standard environment is now available in preview.
Backup and DR
Backup and DR Service now supports logging and alerting via Cloud Logging and Cloud Monitoring.
BigQuery
Case-insensitive collation support is now generally available (GA). In addition to features available in the preview, the GA release includes:
Cloud Data Fusion
Version 0.8 of the SAP BW OHD, SAP ODP, SAP OData, SAP SLT, and SAP Table plugins is generally available (GA) in Cloud Data Fusion versions 6.8.0 and later.
Cloud Load Balancing
The Cloud Load Balancing Console now allows you to see the equivalent API code for actions you take in the Console. When you create or update a load balancer, before you click Create or Update, you can click Equivalent Code to view the load balancer API resources that will be created, updated, or deleted.
This capability is in Preview.
Cloud Logging
You can now route logs through the Log Router of another Google Cloud project. The logs can then be managed by the other Google Cloud project, which includes log-based metrics, log-based alerts, and other log sinks. For more information, see Route logs to supported destinations.
Cloud Monitoring
You can now use the gcloud CLI to configure a snooze, which prevents Cloud Monitoring from sending notifications or creating incidents during specific time periods. You can also configure a snooze by using the Google Cloud Console and the API. For more information see Create and manage snoozes.
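For API-based automation, a snooze is just a small resource with a display name, the alert policies it covers, and a time interval. The sketch below assembles that resource body; the project and policy IDs are placeholders, and the field shape is an assumption to verify against the Cloud Monitoring Snoozes API reference.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a Snooze resource body for the Cloud Monitoring API
# (projects.snoozes.create). Project and alert-policy IDs are placeholders.
start = datetime(2023, 3, 10, 22, 0, tzinfo=timezone.utc)
snooze = {
    "displayName": "weekend-maintenance",
    "criteria": {
        # Alert policies that should not notify or open incidents
        # while the snooze is active.
        "policies": [
            "projects/my-project/alertPolicies/1234567890"  # placeholder
        ]
    },
    "interval": {
        "startTime": start.isoformat(),
        "endTime": (start + timedelta(hours=8)).isoformat(),
    },
}
```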
You can now view and list incidents on your custom dashboards. For more information, see Display incidents on a dashboard.
Cloud Run
You can now authenticate to a Cloud Run service by including a Google-signed OpenID Connect ID token in the X-Serverless-Authorization header if your application already uses the Authorization header for custom authorization.
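In practice that means sending two bearer tokens side by side. A minimal sketch, where the tokens and service URL are placeholders (obtaining a real Google-signed ID token would typically go through google.auth or the metadata server), and the outbound request is commented out:

```python
# Sketch: keep your app's own token in Authorization while Cloud Run
# checks the Google-signed ID token in X-Serverless-Authorization.
# Both token values here are placeholders.
def build_headers(app_token: str, google_id_token: str) -> dict:
    return {
        "Authorization": f"Bearer {app_token}",                     # custom app auth
        "X-Serverless-Authorization": f"Bearer {google_id_token}",  # Cloud Run IAM auth
    }

headers = build_headers("app-jwt-placeholder", "google-id-token-placeholder")
# import requests
# requests.get("https://my-service-abc123-uc.a.run.app", headers=headers)
```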
Cloud Spanner
Cloud Spanner fine-grained access control is now generally available. Fine-grained access control combines the benefits of Identity and Access Management (IAM) with traditional SQL role-based access control. For more information, see About fine-grained access control.
Cloud Storage
In buckets with turbo replication enabled, objects uploaded using XML API multipart uploads are now included in the turbo replication RPO.
Config Controller
Config Controller now uses the following versions of its included products:
Dataform
Query preview in a workspace is available in Preview.
Dataform in Preview is available in the following regions:
Dataproc
Added stronger validation to disallow uppercase characters in template IDs, per Resource Names guidance, so that workflow template creation fails fast instead of failing at workflow template instantiation.
Added a decision metric field to Stackdriver autoscaler logs.
Dataproc Metastore
Dataproc Metastore 2 is now Generally Available (GA). Dataproc Metastore 2 provides horizontal scalability through fine-grained scaling options. For more information, see Dataproc Metastore versions.
The Spanner database type is generally available (GA).
Auxiliary versions are now generally available (GA).
Filestore
Filestore data is compliant with at-rest and in-use data residency requirements pursuant to the Google Cloud terms of service.
Google Cloud Deploy
Google Cloud Deploy now provides the ability to deploy to multiple targets at the same time, available in preview.
GKE
Backend Service-based external Network load balancers are now generally available with GKE. Regional Backend Service is a foundational element of a Google Cloud Load Balancer and using it for your external LoadBalancer Services will unlock new capabilities going forward. To learn more, see how to deploy a backend service-based external network load balancer.
IAM
You can now set an expiry time for all newly created service account keys in your project, folder, or organization. This feature is generally available (GA).
Network Intelligence Center
Network Topology now includes cross-project metrics for network traffic sent across Shared VPC or VPC Network Peering boundaries within the same organization. For more information, see Network Topology overview.
You can now see allow rules that are no longer active based on usage patterns and trends. For more information, see Allow rules with no hits based on trend analysis.
You can now see shadowed rule insights for hierarchical firewall policies and global network firewall policies in Firewall Insights. For more information, see Firewall Insights categories and states.
Resource Manager
You can now create dry-run organization policies to monitor how policy changes would impact your workflows before they are enforced.
Secret Manager
Support for Annotations in Secret Manager is now generally available. Annotations are used to define custom metadata about a secret.
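Annotations are set as a simple key-value map on the Secret resource. The sketch below builds such a resource body; the annotation keys and project/secret names are illustrative, and the live client call is commented out.

```python
# Sketch of a Secret Manager Secret resource carrying custom annotations.
# The annotation keys/values and resource names below are examples only.
secret = {
    "replication": {"automatic": {}},
    "annotations": {
        # Custom metadata about the secret; annotations are not encrypted
        # payload data, so don't put sensitive values here.
        "owner": "payments-team",
        "rotation-ticket": "OPS-1234",
    },
}
# from google.cloud import secretmanager
# client = secretmanager.SecretManagerServiceClient()
# client.create_secret(
#     parent="projects/my-project", secret_id="db-password", secret=secret
# )
```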
Transfer Appliance
ta check is a command-line tool that detects and helps fix configuration issues with Transfer Appliance and Edge Appliance.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
You can now transfer the backups of Azure Files to the vault to get complete protection against various data loss scenarios.
Deploy a Virtual Machine Scale Set with both Spot and standard virtual machines using flexible orchestration mode.
General availability: Yocto Kirkstone recipes for IoT Edge 1.4 LTS
Announcing the addition of Kirkstone compatible recipes to build the latest IoT Edge 1.4.x LTS release.
Enable scheduled backups for Azure Blob using Azure Backup to ensure business continuity and recovery from inadvertent or malicious deletion or ransomware attacks.
Generally available: Azure Ultra Disk Storage in the China North 3 Azure region
Now available in the China North 3 Azure region, Azure Ultra Disk Storage provides high performance along with sub-millisecond latency for your most demanding workloads.
Public Preview: Azure Monitor managed service for Prometheus now supports querying PromQL
Azure Workbooks now have a Prometheus data source, allowing users to write PromQL natively in the portal. The Prometheus explorer offers an out-of-the-box querying experience.
Public Preview: Collect Syslog from AKS nodes using Azure Monitor - Container Insights
Customers can now collect Syslog from their AKS Clusters using Azure Monitor – Container Insights.
Generally available: App Insights Extension for Azure Virtual Machines and VM Scale Sets
The Application Insights extension enables easy-to-use application monitoring for IIS-hosted .NET Framework and .NET Core applications running on Azure VMs and VM scale sets.
Public Preview: Data sharing lineage and search for Azure Storage in Microsoft Purview
Data Sharing Lineage is now available in Microsoft Purview for Azure Data Lake Storage (ADLS) Gen2 and Azure Blob (Blob) Storage in public preview. Data Sharing Lineage is aimed to provide detailed…
Generally Available: Model Serving on Azure Databricks
Model Serving on Azure Databricks is now generally available.
More transactions at no additional cost for Azure Standard SSD
Save money on billable transactions as Azure waives transaction fees for Azure Standard SSD customers that exceed the maximum hourly limit.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts, or to stand-alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here (No credit card required and includes a forever free tier):