Hava Blog and Latest News

In Cloud Computing This Week [Mar 3rd 2023]

Written by Team Hava | March 2, 2023



How is it March already? 2023 seems to be off to a flying start and slightly less wacky than the last two years, alien UFOs notwithstanding.

Our dev team has been extremely busy working on some killer new features that will be dropping very soon, so stay tuned for that news. We think you'll love it.

Here's the weekly cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday March 3rd 2023.

All the latest Hava news can be found on our LinkedIn Newsletter.

Of course we'd love to keep in touch at the other usual places. Come and say hello on:

Facebook. LinkedIn. Twitter.

AWS Updates and Releases

Source: aws.amazon.com

AWS Network Firewall is now available in the Middle East (UAE) Region

Starting this week, AWS Network Firewall is available in the AWS Middle East (UAE) Region, enabling customers to deploy essential network protections for all their Amazon Virtual Private Clouds (VPCs).

AWS Network Firewall is a managed firewall service that is easy to deploy. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up and maintain the underlying infrastructure.

It is integrated with AWS Firewall Manager to provide you with central visibility and control over your firewall policies across multiple AWS accounts.

AWS Step Functions Distributed Map is now available in more Regions

AWS Step Functions Distributed Map is now available in the Middle East (UAE), Europe (Spain), Europe (Zurich), Asia Pacific (Hyderabad), and Asia Pacific (Melbourne) Regions.

AWS Step Functions is a visual workflow service capable of orchestrating over 11,000 API actions from over 250 AWS services to automate data processing workloads.

Now, with the Distributed Map mode, AWS Step Functions can iterate over objects such as images, logs, and financial data stored in Amazon Simple Storage Service (Amazon S3), a cloud object storage service. AWS Step Functions launches thousands of parallel workflow executions to process the data and saves the results of those executions to Amazon S3.

You can use the Distributed Map mode to analyze millions of log files for security risks or iterate over terabytes of data for business insights. To process your data, use compute services such as AWS Lambda and write code in any supported language, or choose from over 220 purpose-built AWS services to accelerate your development.
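As a non-authoritative sketch, a Distributed Map state in Amazon States Language looks roughly like the following (expressed here as a Python dict; the bucket, prefix, and Lambda ARN are hypothetical placeholders):

```python
# Sketch of an Amazon States Language (ASL) Distributed Map state.
# The bucket, prefix, and function ARN below are hypothetical.
distributed_map_state = {
    "Type": "Map",
    "ItemReader": {  # read the objects to iterate over directly from S3
        "Resource": "arn:aws:states:::s3:listObjectsV2",
        "Parameters": {"Bucket": "my-log-bucket", "Prefix": "logs/2023/"},
    },
    "ItemProcessor": {
        "ProcessorConfig": {
            "Mode": "DISTRIBUTED",      # run each iteration as a child execution
            "ExecutionType": "STANDARD",
        },
        "StartAt": "AnalyzeObject",
        "States": {
            "AnalyzeObject": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:analyze-log",
                "End": True,
            }
        },
    },
    "MaxConcurrency": 1000,  # cap on parallel child workflow executions
    "End": True,
}
```

Each S3 object listed by the `ItemReader` becomes the input of one child workflow execution, which is what lets Step Functions fan out to thousands of parallel runs.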

Amazon DocumentDB (with MongoDB compatibility) adds support for MongoDB 5.0 wire protocol and client-side field level encryption

Amazon DocumentDB (with MongoDB compatibility) continues to increase compatibility with MongoDB, and now offers added support for MongoDB 5.0 drivers with Amazon DocumentDB 5.0. Amazon DocumentDB is a fast, scalable, highly available, and fully managed document database service that supports MongoDB API based workloads. Amazon DocumentDB makes it easy and intuitive to store, index, and query JSON data.

The following are some of the major features and capabilities introduced in Amazon DocumentDB 5.0 and Elastic Cluster:

  • Client-side Field Level Encryption (FLE) - With Amazon DocumentDB 5.0 and Elastic Cluster, you can now encrypt sensitive data in-application before it is sent to Amazon DocumentDB, and build applications that handle sensitive and Personally Identifiable Information (PII) data and comply with data regulatory requirements. To learn more, please see Getting started with Client-side FLE on DocumentDB.
  • 128 TiB Storage Limit - Amazon DocumentDB doubles the storage volume in Amazon DocumentDB 5.0 clusters and Elastic Cluster shards. Storage scales automatically with no additional cost and you are only charged for the storage that you use in an Amazon DocumentDB cluster volume. 
  • Aggregate Operators - You can now use $dateAdd and $dateSubtract MongoDB API operators when working with date fields. For more information, see Supported MongoDB APIs, Operations, and Data Types.

You can now upgrade your 3.6 and 4.0 Instance-based clusters to DocumentDB 5.0 clusters.
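A minimal sketch of the new date operators in an aggregation pipeline (field names are hypothetical; with pymongo you would pass this to `collection.aggregate(pipeline)`):

```python
# Aggregation pipeline sketch using the $dateAdd / $dateSubtract operators
# added in Amazon DocumentDB 5.0. Field names below are hypothetical.
pipeline = [
    {
        "$project": {
            "orderDate": 1,
            # 30 days after the order date
            "dueDate": {
                "$dateAdd": {"startDate": "$orderDate", "unit": "day", "amount": 30}
            },
            # 1 hour before the order date
            "cutoff": {
                "$dateSubtract": {"startDate": "$orderDate", "unit": "hour", "amount": 1}
            },
        }
    }
]
# With pymongo: results = collection.aggregate(pipeline)
```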

AWS Control Tower announces a progress tracker for landing zone setup and upgrades

AWS Control Tower announced a new progress tracker that depicts milestones and related statuses of the landing zone setup and upgrade process. A landing zone is a well-architected, multi-account AWS environment that is a starting point from which you can deploy workloads and applications.

AWS Control Tower automates the setup of a new landing zone using AWS best-practices blueprints for identity, federated access, logging, monitoring, and account structure. 

The new progress tracker depicts the status (Not Started, In Progress, Success) of key milestones in the resource deployment and configuration workflows of the landing zone, such as updating the shared accounts for logging, configuring Account Factory to provision member accounts, or enabling mandatory controls.

Showing the progress milestones gives you transparency into where you are in the landing zone setup or upgrade process and provides detail about the time left to complete the process.

Amazon Timestream now supports batch loading data

This week Amazon Timestream launched Batch Load, a new feature for ingesting batched time series data into Timestream. Batch Load provides customers with a robust, serverless, and scalable solution to migrate data from CSV files stored in S3 directly into Timestream.

With Batch Load, customers specify one or more source files, the target Timestream database and table, and a data model to apply. Batch Load then automatically and reliably ingests the data in parallel. Learn more in the 5-minute Batch Load demo video or the developer guide.
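As a sketch, a Batch Load request payload might look like this (parameter names follow the Timestream CreateBatchLoadTask API; the bucket names and data model are hypothetical):

```python
# Sketch of an Amazon Timestream Batch Load request payload. Bucket names
# and the data model are hypothetical. With boto3 you would pass it to:
#   boto3.client("timestream-write").create_batch_load_task(**params)
params = {
    "TargetDatabaseName": "sensor_db",
    "TargetTableName": "readings",
    "DataSourceConfiguration": {
        "DataSourceS3Configuration": {
            "BucketName": "my-csv-source-bucket",  # where the CSV files live
            "ObjectKeyPrefix": "exports/",
        },
        "DataFormat": "CSV",
    },
    "DataModelConfiguration": {
        "DataModel": {
            "TimeColumn": "timestamp",          # CSV column holding the time
            "TimeUnit": "MILLISECONDS",
            "DimensionMappings": [{"SourceColumn": "device_id"}],
            "MultiMeasureMappings": {
                "MultiMeasureAttributeMappings": [
                    {"SourceColumn": "temperature", "MeasureValueType": "DOUBLE"}
                ]
            },
        }
    },
    # Batch Load writes a progress/error report to S3
    "ReportConfiguration": {
        "ReportS3Configuration": {"BucketName": "my-report-bucket"}
    },
}
```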

Amazon Timestream is a fast, serverless, and purpose-built time series database for real-time analytics, DevOps, and IoT applications that can scale to process trillions of time series events per day. Amazon Timestream simplifies data lifecycle management through the use of data tiers and user-defined data retention policies.

The purpose-built query engine lets you access and analyze recent and historical data across these tiers. Additionally, visualizing your data is simple with integration and connector support through Amazon Quicksight, Grafana, and JDBC. Amazon Timestream also automatically scales up or down to adjust for capacity and performance, so you don’t need to manage the underlying infrastructure, freeing you to focus on building your applications.

The service is also HIPAA eligible, ISO certified, FedRAMP (Moderate) compliant, PCI DSS compliant, and in scope for AWS’s SOC reports SOC 1, SOC 2, and SOC 3. 

Amazon Detective adds graph visualization for interactive security investigations

Amazon Detective finding groups now include a dynamic visual representation of Detective's behavior graph to emphasize the relationships between security findings and the associated entities within a finding group.

This addition makes it easier for customers to triage potential security issues with at-a-glance visuals that include finding types, severity levels, associated account(s), and linkages within the Detective behavioral graph that can be used to investigate related activity within a finding group.

Detective finding groups consist of related Amazon GuardDuty findings and include severity, affected AWS accounts, and resources to help you reduce the amount of time you spend investigating individual findings and make it easier to understand the scope of a potential attack.

Detective finding groups use the Detective visualization as a starting point for your most critical security investigations within the finding group profile page. Detective's visualization supports multiple layouts that can aid you in identifying anomalous behavior through visual inspection of trends, outliers, and patterns of behavior.

From this panel, you can also manually rearrange the findings and entities to better understand their interconnectedness, select items to see more details, and more quickly assess the makeup of the finding group. This visualization also allows you to view what resource types are more prevalent in this finding group by leveraging the graph database.

Introducing Amazon Lightsail for Research

Amazon Lightsail now offers Amazon Lightsail for Research, a new offering that makes it simple for you to accelerate your research using the power of the cloud. Lightsail for Research provides access to analytical applications such as Scilab, RStudio, and Jupyter running on powerful virtual computers in just a few clicks.

With Lightsail for Research, you can now move large data sets and/or time-consuming analysis from your laptop onto virtual computers, run multiple analyses simultaneously, and continue computations even when your laptop is off or being used for other activities.

By accessing AWS’s computing power along with pre-installed research software, you can get to work quickly without requiring computer setup, software installation, or technical support to perform those tasks. Additionally, Lightsail for Research offers bundled pricing, making it simple to understand costs before starting your work.

Amazon Aurora MySQL 3.03 (compatible with MySQL 8.0.26) is generally available

Starting this week, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) supports MySQL 8.0.26. In addition to several security enhancements and bug fixes, MySQL 8.0.26 includes several changes, such as enhanced tablespace file segment page configuration and new aliases for certain identifier names. For more details, please review Aurora MySQL 3 and MySQL 8.0.26 release notes.

To upgrade to Aurora MySQL 3.03, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. For more details, see Automatic Minor Version Upgrades for MySQL. This release is available in all AWS regions with Aurora MySQL. Please review the Aurora documentation to learn more. 
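The two upgrade paths above can be sketched as boto3 RDS request payloads (a non-authoritative sketch; the identifiers and exact engine version string are placeholders, and the calls are shown as comments since they need live AWS credentials):

```python
# Sketch of the two Aurora MySQL upgrade paths described above.
# Identifiers and the engine version string are hypothetical placeholders.

# 1) Manual minor version upgrade by modifying the DB cluster
manual_upgrade = {
    "DBClusterIdentifier": "my-aurora-cluster",
    "EngineVersion": "8.0.mysql_aurora.3.03.0",  # target Aurora MySQL 3.03
    "ApplyImmediately": True,
}
# boto3.client("rds").modify_db_cluster(**manual_upgrade)

# 2) Opt a cluster instance in to automatic minor version upgrades,
#    applied during scheduled maintenance windows
auto_upgrade = {
    "DBInstanceIdentifier": "my-aurora-instance-1",
    "AutoMinorVersionUpgrade": True,
}
# boto3.client("rds").modify_db_instance(**auto_upgrade)
```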

Amazon EMR Serverless now supports application log encryption with Customer Managed Keys (CMK)

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it easy for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce that you can use your own Customer Managed Keys (CMK) with AWS Key Management Service (AWS KMS) to encrypt EMR Serverless application logs when stored in managed storage and in Amazon S3 buckets.

When you submit a job to an EMR Serverless application, you can decide where to store your application logs: in managed storage, in an Amazon S3 location, or both. By default, EMR Serverless encrypts application logs with AWS owned keys and retains them in managed storage for 30 days.

If you want to retain logs longer or have additional requirements to analyze the logs, you also have the option to upload the logs to an S3 bucket of your choice. With this feature, you can now use CMK to encrypt logs in managed storage and Amazon S3 buckets, adding a self-managed security layer to help you meet the compliance and regulatory requirements of your organization. 
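A minimal sketch of a StartJobRun monitoring configuration that applies a customer managed key to both log destinations (field names follow the EMR Serverless StartJobRun API; the ARNs, application ID, and bucket names are hypothetical):

```python
# Sketch of an EMR Serverless StartJobRun payload that encrypts application
# logs with a customer managed KMS key. All ARNs, the application id, and
# the buckets are hypothetical placeholders.
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

params = {
    "applicationId": "00f1example2345",
    "executionRoleArn": "arn:aws:iam::123456789012:role/emr-serverless-job-role",
    "jobDriver": {"sparkSubmit": {"entryPoint": "s3://my-code-bucket/job.py"}},
    "configurationOverrides": {
        "monitoringConfiguration": {
            # CMK-encrypted logs in EMR-managed storage (30-day retention)
            "managedPersistenceMonitoringConfiguration": {
                "enabled": True,
                "encryptionKeyArn": kms_key_arn,
            },
            # CMK-encrypted copies in your own S3 bucket
            "s3MonitoringConfiguration": {
                "logUri": "s3://my-log-bucket/emr-serverless/",
                "encryptionKeyArn": kms_key_arn,
            },
        }
    },
}
# boto3.client("emr-serverless").start_job_run(**params)
```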

Amazon Comprehend simplifies custom model retraining and management

Amazon Comprehend announced self-service flywheel APIs to simplify the retraining and version management of custom Comprehend models. 

Amazon Comprehend is a Natural Language Processing (NLP) service that provides pre-trained and custom APIs to derive insights from textual data. Customers can bring their own data, and train custom Comprehend models to classify documents, and extract entities of interest for their specific business needs.

Until this week, customers needed manual processes for merging existing and new datasets to train new models, evaluating improvements in model performance, and managing model versions over time.

Starting now, customers using the flywheel feature just need to provide the new dataset for retraining. The feature retrains the model by automatically merging existing and new datasets, displays the performance of the model against previous versions that it maintains, and enables customers to select the best version as the production model. 

AWS App Runner is now available in AWS Asia Pacific (Singapore, Sydney) and AWS Europe (Frankfurt) regions

AWS App Runner expands availability to the AWS Asia Pacific (Singapore), AWS Europe (Frankfurt), and AWS Asia Pacific (Sydney) regions, enabling customers to build, deploy, and run containerized web applications and API services in AWS cloud, at scale, and without managing infrastructure.

AWS App Runner makes it easier for developers to quickly build and deploy containerized web applications automatically, load balances traffic with encryption, and scales to meet your traffic needs. You can also configure how your services are accessed and communicate with other AWS applications in a private Amazon VPC.

Amazon Neptune Serverless now scales down to 1 NCU to save costs

Starting now, Amazon Neptune Serverless has lowered its minimum scaling requirement from 2.5 Neptune Capacity Units (NCUs) to 1 NCU. This brings down the cost of running Neptune Serverless by cutting the minimum resources used by up to 2.5x when the graph database is not actively responding to user queries, so idle or lightly used graph databases now cost less to run.

Amazon Neptune Serverless allows you to run and instantly scale graph workloads, without the need to manage and optimize capacity. Neptune Serverless automatically determines and provisions the compute and memory resources to run the graph database, and scales capacity based on the workload’s changing requirements to maintain consistent performance.

For example, Neptune Serverless automatically provisions and adjusts resources required to run a fraud graph as the query volumes processed vary. Neptune Serverless reduces costs by up to 90% compared with provisioning for peak capacity.

With Neptune Serverless, you only pay for the database capacity you consume, making it cost effective for unpredictable workloads with long off-peak times and sudden bursts of activity. You can start with Neptune Serverless without any upfront investment.

Amazon Kinesis Data Streams increases On-Demand write throughput limit to 1 GB/s

Amazon Kinesis Data Streams now supports an increased On-Demand write throughput limit of 1 GB/s, a 5x increase from the previous limit of 200 MB/s. Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at any scale.

On-Demand is a capacity mode for Kinesis Data Streams that automates capacity management, so that you never have to provision and manage the scaling of resources. In the On-Demand mode you pay for throughput consumed rather than for provisioned resources, making it easier to balance costs and performance.

By default, On-Demand capacity mode can automatically scale up to 200MB/s of write and 400MB/s of read throughput. Now you can request a limit increase through AWS Support to enable your On-Demand stream to scale up to 1 GB/s write and 2 GB/s read throughput.

You can create a new On-Demand data stream or convert an existing data stream to On-Demand mode with a single click. When you choose On-Demand capacity mode, Kinesis Data Streams automatically accommodates your workloads as they ramp up.
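Both paths can be sketched as boto3 Kinesis request payloads (stream names and ARN are hypothetical; the calls are shown as comments since they require live AWS credentials):

```python
# Sketch: creating a new On-Demand stream, and switching an existing
# provisioned stream to On-Demand mode. Names and the ARN are hypothetical.
create_params = {
    "StreamName": "clickstream",
    "StreamModeDetails": {"StreamMode": "ON_DEMAND"},  # no shard count needed
}
switch_params = {
    "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/clickstream",
    "StreamModeDetails": {"StreamMode": "ON_DEMAND"},
}
# boto3.client("kinesis").create_stream(**create_params)
# boto3.client("kinesis").update_stream_mode(**switch_params)
```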

On-Demand mode provides the same high availability and durability that Kinesis Data Streams already offers. When switching existing streams to On-Demand, you can continue to use your existing applications without making any code changes or requiring downtime.

On-Demand streams work with all existing AWS integrations, such as Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, AWS Lambda, and open-source technologies such as Apache Spark and Apache Flink.

Amazon Managed Blockchain (AMB) announces general availability of Ethereum Token-Based Access

This week, AWS announces the general availability of Amazon Managed Blockchain (AMB) Token-Based Access (TBA) for fully managed Ethereum nodes. TBA provides a more convenient way for customers to connect to their Ethereum nodes from popular blockchain development tools.  

TBA increases AMB's Ethereum node interoperability with popular tools used by blockchain developers. Customers that do not need the level of security provided by AWS Signature Version 4 (SigV4) can create an Accessor token through the AWS Console, CLI, or SDK.

This Accessor token can be used to easily connect popular developer tools such as ethers.js, Hardhat, and MetaMask with AMB Ethereum nodes. To learn more, please see the documentation and our "how-to-use" blog.

AMB is a fully managed service that allows you to join public networks or set up and manage scalable private networks using popular open-source frameworks. Amazon Managed Blockchain eliminates the overhead required to create the network or join a public network, and automatically scales to meet the demands of thousands of applications running millions of transactions. AMB currently supports Ethereum, Goerli, and Rinkeby.
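Creating an Accessor token can be sketched with boto3 (parameter names follow the managedblockchain CreateAccessor API; the idempotency token is a hypothetical placeholder):

```python
# Sketch of creating an Accessor token for Token-Based Access.
# The client request token below is a hypothetical placeholder.
params = {
    "AccessorType": "BILLING_TOKEN",
    "ClientRequestToken": "unique-idempotency-token-123",  # idempotency key
}
# response = boto3.client("managedblockchain").create_accessor(**params)
# The returned billing token is then supplied alongside your Ethereum node
# endpoint so tools like ethers.js, Hardhat, or MetaMask can connect.
```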

Amazon RDS for SQL Server now supports Cross Region Automated Backups with encryption

Amazon Relational Database Service (RDS) for SQL Server now supports Cross Region Automated Backups with encryption. The Amazon RDS Cross-Region Automated Backups feature enables disaster recovery capability for mission-critical databases by providing you the ability to restore your database to a specific point in time within your backup retention period.

This allows you to quickly resume operations in the event that the primary AWS Region becomes unavailable. If you've enabled encryption on the source RDS for SQL Server DB instance, you can use this feature to copy encrypted DB snapshots to regions outside of the primary AWS region.

Pricing is comprised of the storage for snapshots and the data transfer of the snapshots and the transaction logs. Data transfer between the primary AWS Region to a secondary AWS Region is billed based on the data transfer rates of the applicable AWS Regions. See Amazon RDS for SQL Server Pricing for up-to-date pricing of instances, storage, data transfer, and regional availability.
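Enabling cross-Region replication of encrypted automated backups can be sketched with boto3 (following the RDS StartDBInstanceAutomatedBackupsReplication API; the ARNs are hypothetical, and the call is made from the destination Region):

```python
# Sketch: replicate automated backups of an encrypted RDS for SQL Server
# instance to a secondary Region. ARNs below are hypothetical placeholders.
params = {
    "SourceDBInstanceArn": "arn:aws:rds:us-east-1:123456789012:db:my-sqlserver",
    "BackupRetentionPeriod": 7,  # days to retain the replicated backups
    # KMS key in the destination Region that encrypts the replicated backups
    "KmsKeyId": "arn:aws:kms:us-west-2:123456789012:key/11111111-2222-3333-4444-555555555555",
}
# Run from the destination Region:
# boto3.client("rds", region_name="us-west-2") \
#     .start_db_instance_automated_backups_replication(**params)
```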

Amazon DevOps Guru for RDS supports Proactive Insights

Amazon DevOps Guru for RDS now supports Proactive Insights, a new set of findings that inform you of impending database performance and availability issues before they become critical. With Proactive Insights, Amazon DevOps Guru for RDS continuously monitors database instances for potential issues that can lead to degraded database health in the future.

When such conditions are detected, Proactive Insights will generate a finding that describes the nature of the impending problem, and specific actions you can take to mitigate it. Proactive Insights are available for Amazon DevOps Guru customers, with no additional setup or configuration needed. Proactive Insights are supported for Aurora PostgreSQL and Aurora MySQL database engines at the moment.

Amazon DevOps Guru for RDS is a Machine Learning (ML) powered capability for Amazon Relational Database Service (Amazon RDS) that automatically detects and diagnoses database performance and operational issues, enabling you to resolve bottlenecks in minutes rather than days. 

Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which detects operational and performance related issues for Amazon RDS engines and dozens of other resource types. Amazon DevOps Guru for RDS expands upon the existing capabilities of Amazon DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries.

When an issue occurs, Amazon DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue. Amazon DevOps Guru for RDS is available in these regions.

Amazon RDS for MariaDB supports new minor versions 10.6.12, 10.5.19, 10.4.28 and 10.3.38

Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.6.12, 10.5.19, 10.4.28 and 10.3.38. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MariaDB community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. Learn more about upgrading your database instances, including automatic minor version upgrades, in the Amazon RDS User Guide.

Autocomplete suggestions are now available on AWS Marketplace search

This week, AWS launched Autocomplete suggestions for AWS Marketplace search. With this feature, users visiting the AWS Marketplace website or console get search suggestions within the search bar as they type. User queries show up as bolded prefixes within the Autocomplete suggestions, which users can select to complete their query and see the results on the main results page. The suggestions are sorted by relevance and tell users what's available, how to spell difficult terms, and what others are searching for.

Autocomplete suggestions minimize the number of characters a user has to type, reducing the potential for errors and getting searchers to the detail page quicker. They help users save time by refining their search into better queries with better results, without having to scan the main search results.

Autocomplete suggestions assure users that they are on the right track with the products that are available within the AWS Marketplace catalog so that users can continue to build additional queries upon these suggestions.

AWS Private CA releases open source samples to help create Matter compliant certificate authorities

Today, AWS Private Certificate Authority (Private CA) released sample AWS Cloud Development Kit (CDK) scripts and AWS CloudFormation stack templates to help you create Certificate Authorities (CAs) that issue Matter Device Attestation Certificates (DACs).

Matter is a new standard for smart home security and device interoperability. Matter uses X.509 digital certificates to identify devices. Matter certificates can be issued only by CAs that comply with the Matter PKI Certificate Policy (CP). You can use the AWS CDK and CloudFormation samples to help you configure Matter-compliant CAs.

The samples not only construct the CA, but also create the configuration and auditing infrastructure needed to help you comply with the Matter PKI CP. This includes AWS Identity and Access Management (IAM) roles and permissions, as well as log configuration and retention policies.

The Matter PKI CP has specific requirements for the separation of CA roles and the record keeping of CA operations. Before issuing device attestation certificates, you have to provide evidence to the Connectivity Standards Alliance (CSA) that your Matter CAs are operated in compliance with the Matter PKI CP.

The samples released today help you create Matter CAs for issuing DACs. The samples also configure other AWS services like IAM, AWS CloudTrail, Amazon CloudWatch, Amazon S3, and AWS Backup to set up CA roles and access policies, and the recording and retention of CA operations. You can now provision Matter CAs, as well as set up other AWS services to help you meet the requirements of the Matter PKI CP, as part of your infrastructure deployments.

AWS SimSpace Weaver now supports AWS IAM Identity Center

This week, AWS is excited to announce the v1.12.1 AWS SimSpace Weaver app SDK update. This update enables support for AWS IAM Identity Center (successor to AWS Single Sign-On) and the ability to use temporary credentials to start, stop, and manage simulations. AWS SimSpace Weaver is a fully managed compute service that helps customers deploy large spatial simulations in the cloud.

Launched this past re:Invent, SimSpace Weaver allows customers to create seamless virtual worlds with millions of objects that can interact with one another in real time without ever worrying about managing the back-end infrastructure.

AWS IAM Identity Center allows you to create or connect workforce identities and centrally manage access across your AWS accounts and applications. Prior to this update, simulation builders had to self-manage their long-term credentials when working with SimSpace Weaver.

IAM Identity Center is the new recommended approach for workforce authentication and authorization on AWS and developers can now utilize temporary credentials to start, stop, and manage their simulations. As a best practice, we recommend all users manage these temporary credentials through IAM Identity Center. If you’re already using IAM Identity Center then you’re all set.

Code scans for Lambda functions within Amazon Inspector now in preview

Amazon Inspector now supports code scanning of Lambda functions, expanding the existing capability to scan Lambda functions and associated layers for software vulnerabilities in application package dependencies. With this expanded capability, Amazon Inspector now also scans the custom proprietary application code within a Lambda function for code security vulnerabilities such as injection flaws, data leaks, weak cryptography, or missing encryption based on AWS security best practices.

When code vulnerabilities are identified in the Lambda function or layer, Inspector generates actionable security findings along with impacted code snippets and remediation guidance. All findings are aggregated in the Amazon Inspector console, routed to AWS Security Hub, and pushed to Amazon EventBridge to automate workflows.

During the preview period, Lambda code scanning is available in five AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), and Europe (Ireland) at no additional cost to customers. Learn more about our Lambda scanning capabilities here.

Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS Organization.

Once activated, Amazon Inspector automatically discovers all of your Amazon Elastic Compute Cloud (EC2) instances, container images in Amazon Elastic Container Registry (ECR), and AWS Lambda functions, at scale, and continuously monitors them for known vulnerabilities, giving you a consolidated view of vulnerabilities across your compute environments.

AWS Lambda now supports Amazon DocumentDB change streams as an event source

AWS Lambda now supports Amazon DocumentDB (with MongoDB compatibility) change streams as an event source, allowing you to build event-driven applications that respond in near real time to data changes in your Amazon DocumentDB clusters.

With an event source mapping, Lambda reads from a DocumentDB change stream and automatically invokes your function as documents are inserted, updated, or deleted, with no polling code for you to write or manage. This makes it easier to build use cases such as real-time analytics, notifications, and data synchronization on top of DocumentDB.

As with other stream-based event sources, you can configure batching behavior and the starting position of the stream when you create the event source mapping, and Lambda manages the connection to the change stream for you.
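A DocumentDB change stream can be wired to a Lambda function with an event source mapping; a minimal boto3 sketch follows (parameter names follow the Lambda CreateEventSourceMapping API; the ARNs, database, and collection names are hypothetical placeholders):

```python
# Sketch of an event source mapping from a DocumentDB change stream to a
# Lambda function. ARNs, database, and collection names are hypothetical.
params = {
    "FunctionName": "process-orders",
    "EventSourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:my-docdb-cluster",
    "StartingPosition": "LATEST",
    # Cluster credentials stored in AWS Secrets Manager
    "SourceAccessConfigurations": [
        {
            "Type": "BASIC_AUTH",
            "URI": "arn:aws:secretsmanager:us-east-1:123456789012:secret:docdb-creds",
        }
    ],
    "DocumentDBEventSourceConfig": {
        "DatabaseName": "shop",
        "CollectionName": "orders",
        "FullDocument": "UpdateLookup",  # deliver the full changed document
    },
}
# boto3.client("lambda").create_event_source_mapping(**params)
```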

AWS Elemental MediaConvert now Ingests FLAC and Animated GIF Inputs

This week, AWS announces the general availability of FLAC audio and animated GIF video input sources for AWS Elemental MediaConvert. These new input formats are compatible with all MediaConvert outputs. For example, you can convert lossless FLAC files into compressed audio formats like AAC, MP3, and Ogg Vorbis, or use FLAC files as sidecar audio sources to be joined with video files.

Animated GIF inputs can be converted into higher-efficiency video streaming codecs like AVC and HEVC and distributed as standalone MP4 files or adaptive-bitrate streaming packages such as HLS or DASH.

Using AWS Elemental MediaConvert, video providers can easily and reliably transcode on-demand content. With this wider range of input support, MediaConvert now supports more audio-only customer workflows centered around lossless sources. More user-generated content workflows are also supported, with animated GIF support enabling customers to add accompanying audio or simply convert to different video formats.

FLAC and GIF input support is available in all regions where AWS Elemental MediaConvert is available. Visit the AWS region table for a full list of AWS Regions supported by MediaConvert.

VMware Cloud on AWS now available in Middle East (Bahrain) Region

This week AWS announced the availability of VMware Cloud on AWS in the AWS Bahrain region. This marks the 23rd AWS region to offer VMware Cloud on AWS, providing customers with a faster and more efficient way to migrate their VMware-based workloads to the cloud.

With this launch, VMware customers in the Bahrain region can now seamlessly extend their on-premises workloads to the AWS Cloud. VMware Cloud on AWS is the only jointly engineered solution designed for speed and security. It offers a managed service that combines compute, network and storage capabilities, and is maintained, supported and operated by VMware and AWS.

Using VMware Cloud on AWS, customers can leverage the same familiar VMware tools, skillsets and governance across their on-premises and cloud environments, while taking advantage of the scalability, cost-effectiveness and sustainability of the AWS cloud. 

This allows customers to extend their on-premises datacenters, migrate workloads and modernize through access to the more than 200 AWS services. They can also deploy the entire VMware stack, including ESXi, vCenter, NSX and vSAN, along with additional storage options.

VMware Cloud on AWS is the preferred provider for vSphere workloads and is built on a sustainable architecture with a commitment to reaching net zero carbon emissions by 2030. This new availability in Bahrain allows customers to leverage the scale of all AWS data centers.

Amazon RDS for PostgreSQL now supports major version PostgreSQL 15

Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports the latest major version PostgreSQL 15. New features in PostgreSQL 15 include the SQL standard "MERGE" command for conditional SQL queries, performance improvements for both in-memory and disk-based sorting, and support for two-phase commit and row/column filtering for logical replication.

The PostgreSQL 15 release also adds support for the new pg_walinspect extension, and server-side compression with Gzip, LZ4, or Zstandard (zstd) using pg_basebackup. Please refer to the PostgreSQL community announcement for more details about the release.

Amazon RDS for PostgreSQL supports over 80 extensions and loadable modules, including PostGIS 3.3.2 for geospatial data, plv8 3.1.4, pgrouting 3.4.1, and pg_hint_plan 1.5.0. You can upgrade your database using several options, including an in-place upgrade, restoring from a snapshot, replicating with the pglogical extension, or using AWS Database Migration Service.
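
The new MERGE command collapses the classic "update if the row exists, insert if it doesn't" pattern into a single statement. As a rough sketch of its semantics (plain Python standing in for SQL; the table shape and column names here are invented for illustration):

```python
# Semantic sketch of SQL MERGE: for each source row, update the matching
# target row on the join key, or insert it if no match exists.
target = {1: {"id": 1, "qty": 10}, 2: {"id": 2, "qty": 5}}
source = [{"id": 2, "qty": 7}, {"id": 3, "qty": 4}]

for row in source:
    if row["id"] in target:           # WHEN MATCHED THEN UPDATE ...
        target[row["id"]]["qty"] = row["qty"]
    else:                             # WHEN NOT MATCHED THEN INSERT ...
        target[row["id"]] = dict(row)

print(sorted(target))  # -> [1, 2, 3]
```

In PostgreSQL 15 itself, the same logic is expressed as `MERGE INTO target USING source ON target.id = source.id WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT ...`, evaluated atomically by the database.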

AWS SAM CLI announces preview of Rust build support

The AWS Serverless Application Model (SAM) Command Line Interface (CLI) now offers a preview of sam build support for building and packaging serverless applications developed in Rust.

The AWS SAM CLI is a developer tool that makes it easier to build, test, package, and deploy serverless applications. Developers building serverless applications with Rust have used cargo-lambda to build their applications, but cargo-lambda wasn't supported within the SAM CLI build workflow. Starting now, you can use cargo-lambda in the SAM CLI build workflow for your Rust applications.

Developers building serverless applications using Rust can now use sam build and SAM Accelerate to rapidly iterate on their code changes in the cloud, achieving the same levels of productivity they're used to when testing locally, while testing against a realistic application environment in the cloud.

Amazon Redshift announces general availability of ROLLUP, CUBE, and GROUPING SETS in GROUP BY clause

Amazon Redshift now supports new SQL functionality, including ROLLUP, CUBE, and GROUPING SETS, to simplify building multi-dimensional analytics applications. ROLLUP, CUBE, and GROUPING SETS also simplify data warehouse migrations by offering syntax that is commonly used across databases.

Multi-dimensional analysis normally requires you to build complex processes and queries to aggregate and analyze core business facts, such as revenue and expense, against multiple dimensions of your business, such as product category, geography, and time.

Now, with a single SQL statement leveraging the ROLLUP, CUBE, and GROUPING SETS in the GROUP BY clause, you can get the same functionality, simplifying your ability to perform analytics with Amazon Redshift. 
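
Conceptually, ROLLUP and CUBE are shorthand for a collection of GROUPING SETS, each of which aggregates the same rows at a different level of detail. A minimal Python sketch of those semantics, using invented sales data (None plays the role of the NULL placeholder SQL emits for rolled-up columns):

```python
from collections import defaultdict

# Hypothetical fact rows: (product_category, geography, revenue)
rows = [
    ("books", "us", 100),
    ("books", "eu", 50),
    ("games", "us", 30),
]

def aggregate(rows, grouping_sets):
    """Sum revenue once per grouping set. Each grouping set is a tuple of
    booleans saying which columns to keep; dropped columns become None."""
    out = {}
    for keep in grouping_sets:
        totals = defaultdict(int)
        for cat, geo, rev in rows:
            key = (cat if keep[0] else None, geo if keep[1] else None)
            totals[key] += rev
        out.update(totals)
    return out

# GROUP BY ROLLUP(category, geography) is equivalent to
# GROUPING SETS ((category, geography), (category), ())
rollup = aggregate(rows, [(True, True), (True, False), (False, False)])
print(rollup[(None, None)])     # grand total -> 180
print(rollup[("books", None)])  # per-category subtotal -> 150
```

In Redshift SQL, the equivalent is a single `GROUP BY ROLLUP(category, geography)`, which produces the detail rows, per-category subtotals, and a grand total in one pass over the data.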

AWS Service Catalog now supports the ability to disassociate and delete products in one action

AWS Service Catalog customers can now disassociate and delete their AWS Service Catalog products in one action, making it easier to update and manage their catalog of AWS resources.

Customers use AWS Service Catalog to manage and share their infrastructure-as-code templates, and apply governance to them. These templates are saved as products in AWS Service Catalog, and can be arranged in logical sets called portfolios. Prior to this feature, customers were prevented from deleting products if there were any associated resource components.

For example, customers previously had to manually and individually disassociate other resource components, such as portfolio associations, TagOptions, Service Actions, and Budgets, before they could delete their AWS Service Catalog products. Now, using the console or the AWS Command Line Interface (CLI), customers can efficiently delete their AWS products in one step.

When customers select ‘Delete’ from the console, a new module lists all the components associated with a given AWS Service Catalog product. Customers can review and verify these associations and, upon confirmation, select ‘Disassociate and Delete’, which automatically disassociates all components and deletes the AWS product.

Amazon CloudWatch Internet Monitor is now generally available

This week, AWS announced the general availability of Amazon CloudWatch Internet Monitor, a feature of Amazon CloudWatch that helps you monitor internet availability and performance metrics between your AWS-hosted applications and application end users.

Internet Monitor helps you quickly visualize the impact of issues, pinpoint locations and providers that are affected, and then helps you take action to improve your end users' network experience. You can see a global view of traffic patterns and health events, and drill down into information about events at different geographical granularities.

If an issue is caused by the AWS network, you’ll receive an AWS Health Dashboard notification that tells you the steps that AWS is taking to mitigate the problem. Internet Monitor also provides insights and recommendations that can help you improve your users' experience by using other AWS services.

Internet Monitor publishes measurements to Amazon CloudWatch Metrics and Amazon CloudWatch Logs covering the geographies and networks specific to your application. For viewing information at different geographical granularities and getting recommendations, such as when rerouting traffic could help, Internet Monitor uses the top 500 city-networks for your application. Internet Monitor also sends health event notifications through Amazon EventBridge.

SageMaker Autopilot now offers the ability to select algorithms while launching a machine learning training experiment

Amazon SageMaker Autopilot, a low-code machine learning (ML) service which automatically builds, trains and tunes the best ML models based on your data, now supports selection of underlying training algorithms while creating an Autopilot experiment. The ability to select algorithms provides you the flexibility to customize your AutoML journey and complete experiments much faster.

Amazon SageMaker Autopilot supports automatic or manual selection of two training methods, Ensemble and Hyperparameter Optimization (HPO), to address different machine learning problems. The Ensemble and HPO training modes support eight and three algorithms, respectively.

Each training mode runs a pre-defined set of algorithms on your dataset to train model candidates. By default, Autopilot pre-selects all the available algorithms for the given training mode.

Starting now, you can select one or more algorithms from the list of offered algorithms and customize the Autopilot experiment to meet your model training requirements. Selecting algorithms not only eliminates the need to iterate over non-preferred algorithms but also improves the overall job runtime.

Amazon ECS now supports deletion of inactive task definition revisions

Amazon Elastic Container Service (Amazon ECS) now enables customers to delete inactive task definition revisions programmatically or via the Amazon ECS console. With this new capability, customers can permanently delete task definition revisions that are no longer needed or contain undesirable configurations, simplifying their resource management and improving security posture.

A task definition serves as the blueprint for running tasks and services on Amazon ECS. Customers can update task definitions to create new revisions, and deregister old revisions that are no longer needed.

Deregistered task definition revisions are marked INACTIVE by Amazon ECS and cannot be used to create new services or run standalone tasks. While deregistration makes it easier for customers to disregard INACTIVE task definition revisions, these resources can accumulate over time.

Starting now, customers can use a new DeleteTaskDefinitions API or the Amazon ECS console to submit a list of INACTIVE task definition revisions for deletion. ECS will delete the submitted revisions that are not associated with any active tasks, and is designed to retry deletion for the remaining ones, eventually deleting them when all associated tasks have stopped. Customers can track the deletion status using the Amazon ECS Console or the DescribeTaskDefinition API. 
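
The documented behavior can be sketched as a simple partition of the submitted revisions (a semantic illustration only, not the AWS SDK; the revision names and function are invented):

```python
# Sketch of the described deletion flow: submitted INACTIVE revisions with
# no running tasks are deleted immediately; the rest remain pending until
# their associated tasks stop, at which point deletion is retried.
def delete_task_definitions(submitted, active_task_counts):
    deleted = [td for td in submitted if active_task_counts.get(td, 0) == 0]
    pending = [td for td in submitted if active_task_counts.get(td, 0) > 0]
    return deleted, pending

deleted, pending = delete_task_definitions(
    ["web:3", "web:4", "worker:1"],
    {"web:4": 2},  # web:4 still backs two running tasks
)
print(deleted)  # -> ['web:3', 'worker:1']
print(pending)  # -> ['web:4']
```

With the AWS CLI, the corresponding call would be along the lines of `aws ecs delete-task-definitions --task-definitions web:3 worker:1`, subject to the same retry-until-tasks-stop behavior.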

Amazon Aurora Serverless v1 now supports customer configurable maintenance windows

Amazon Aurora Serverless v1 now allows you to select a specific window for scheduling a maintenance event. You can use the maintenance window to specify, for example, when your PostgreSQL 10 cluster gets upgraded to PostgreSQL 11.

You can set the maintenance window with just a few clicks in the AWS Management Console or using the latest AWS SDK or CLI. Review the Aurora documentation to learn more. 
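
RDS and Aurora maintenance windows are expressed as `ddd:hh24:mi-ddd:hh24:mi` in UTC, for example `sun:05:00-sun:06:00`. A small local sanity check for that format before submitting it (pure Python; the helper name is ours):

```python
import re

# Validates the ddd:hh24:mi-ddd:hh24:mi maintenance window format used
# by RDS/Aurora, e.g. "sun:05:00-sun:06:00" (times are UTC).
WINDOW = re.compile(
    r"^(mon|tue|wed|thu|fri|sat|sun):([01]\d|2[0-3]):[0-5]\d"
    r"-(mon|tue|wed|thu|fri|sat|sun):([01]\d|2[0-3]):[0-5]\d$"
)

def is_valid_window(value: str) -> bool:
    return WINDOW.match(value) is not None

print(is_valid_window("sun:05:00-sun:06:00"))  # -> True
print(is_valid_window("sunday:5:00-6:00"))     # -> False
```

With the AWS CLI, this value would typically be passed to the cluster via a flag such as `--preferred-maintenance-window` on `modify-db-cluster`.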

Aurora Serverless v1 is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible and PostgreSQL-compatible editions).

AWS Lake Formation extends Data Filters to all regions for supported services

AWS Lake Formation is a service that makes it simple to set up a secure data lake in days. A data lake is a centralized, curated, and secure repository that stores your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better decisions.

AWS Lake Formation already supports table, column, row, and cell-level permissions in certain regions, making it straightforward to restrict access to sensitive information by granting users access to only the portions of the data they are allowed to see.

With today’s launch, you can use the Data Filter functionality in all regions where AWS Lake Formation is supported when using AWS Analytics services that support row-level security like Amazon Athena (V3) and Amazon Redshift Spectrum.

Google Cloud Releases and Updates
Source: cloud.google.com


Anthos clusters on bare metal

Anthos clusters on bare metal 1.14.2 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.14.2 runs on Kubernetes 1.25.


  • Updated Anthos Identity Service to better handle concurrent authentication webhook requests.
  • Updated stackdriver-operator to set CPU and memory resource limits.

Anthos clusters on VMware

A new vulnerability (CVE-2022-4696) has been discovered in the Linux kernel that can lead to a privilege escalation on the node. Anthos clusters on VMware running v1.12 and v1.13 are impacted. Anthos clusters on VMware running v1.14 or later are not affected.

For instructions and more details, see the Anthos clusters on VMware security bulletin.

Anthos Service Mesh


1.14.6-asm.9 is now available for download for in-cluster Anthos Service Mesh. It includes the features of Istio 1.14.6, subject to the list of supported features.

Apigee Connectors

The following connector updates are available in preview:

The IBM MQ connector now supports requestReply messages.

The Cloud Storage connector now supports the following actions for file operations:

  • UploadObject
  • DownloadObject
  • MoveObject
  • CopyObject
  • DeleteObject

The MongoDB connector now supports the following actions:

  • InsertDocument
  • UpdateDocument
  • DeleteDocument
  • GetDocument

Apigee UI

Public preview release of the Apigee UI in the Google Cloud console

This release includes a new version of the Apigee UI that is integrated with the Google Cloud console. The new UI makes it easier to perform Apigee tasks that are managed in the Cloud console. We welcome your feedback on the new UI: click Send Feedback at the top of the UI.

For now, you can continue to use the classic Apigee UI if you wish: just click Back to Classic Apigee in the new UI.

The following tabs in the classic Apigee UI have not yet been implemented in the Apigee UI in the Cloud console, but they will be available there soon:

  • Develop > Integrations
  • API Security
  • Monetization
  • Analyze > API Metrics > Cache Performance
  • Analyze > API Metrics > Target Performance
  • Analyze > Developers
  • Analyze > End Users
  • Publish > Portals

If you need to use these features, you can do so by switching to the classic Apigee UI.

This release will be rolled out over the next week, so you might not be able to view the new Apigee UI until the rollout is complete.

App Engine flexible environment Node.js

The Node.js 18 runtime is now available in preview, and is built on a modern and secure operating system (Ubuntu 22). This new runtime version uses Google Cloud's buildpacks and requires updates to your app.yaml. Learn more.

App Engine standard environment Python

The Python 3.11 runtime for App Engine standard environment is now generally available.


BigQuery

The WITH RECURSIVE clause is now generally available (GA). This clause lets you include one or more recursive common table expressions (CTEs) in a query.
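
A recursive CTE pairs a base case with a step that references the CTE itself. The sketch below uses Python's bundled sqlite3 purely to show the construct's shape locally; BigQuery's syntax for this example is essentially the same, run at warehouse scale:

```python
import sqlite3

# WITH RECURSIVE: the base SELECT seeds the CTE, and the recursive SELECT
# keeps extending it until its WHERE clause stops producing rows.
con = sqlite3.connect(":memory:")
rows = con.execute("""
    WITH RECURSIVE counter(n) AS (
        SELECT 1                                -- base case
        UNION ALL
        SELECT n + 1 FROM counter WHERE n < 5   -- recursive step
    )
    SELECT n FROM counter
""").fetchall()
print([n for (n,) in rows])  # -> [1, 2, 3, 4, 5]
```

The same pattern is what makes hierarchy traversals (org charts, bill-of-materials graphs) expressible in a single query.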

You can set default values on columns in your BigQuery tables. This feature is now generally available (GA).
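
Column defaults mean an INSERT can omit a column and still get a well-defined value. Again using sqlite3 only as a local stand-in (BigQuery's DDL for defaults is similar, with its own type names):

```python
import sqlite3

# A column declared with DEFAULT takes that value whenever an INSERT
# omits it; an explicit value still overrides the default.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, status TEXT DEFAULT 'new')")
con.execute("INSERT INTO orders (id) VALUES (1)")                  # default applies
con.execute("INSERT INTO orders (id, status) VALUES (2, 'shipped')")  # override
fetched = con.execute("SELECT id, status FROM orders ORDER BY id").fetchall()
print(fetched)  # -> [(1, 'new'), (2, 'shipped')]
```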

ML - The multivariate time-series forecasting model ARIMA_PLUS_XREG is now available to on-demand users.


Chronicle

Schedule Chronicle dashboard reports

You can schedule the delivery of Chronicle dashboard reports over email for both the default dashboards and custom dashboards. In addition to setting the time interval, email address, and format to deliver the report, you can also set the pagination details and test the delivery of the report. For more information, see Schedule Chronicle dashboard reports.

Chronicle Feed Management enhanced the support for the Qualys VM log type to include Qualys VM Detections API. See the Feed Management documentation for information.

The following supported default parsers have changed. Each is listed by product name and ingestion label, if applicable.

  • 1Password (ONEPASSWORD)
  • Airlock Digital Application Allowlisting (AIRLOCK_DIGITAL)
  • Apache (APACHE)
  • Atlassian Confluence (ATLASSIAN_CONFLUENCE)
  • AWS Cloudtrail (AWS_CLOUDTRAIL)
  • Azure AD Directory Audit (AZURE_AD_AUDIT)
  • Azure Cosmos DB (AZURE_COSMOS_DB)
  • Compute Engine (GCP_COMPUTE)
  • CrowdStrike Detection Monitoring (CS_DETECTS)
  • CrowdStrike Falcon (CS_EDR)
  • Cybereason EDR (CYBEREASON_EDR)
  • Google Chrome Browser Cloud Management (CBCM) (N/A)
  • iBoss Proxy (IBOSS_WEBPROXY)
  • JumpCloud Directory Insights (JUMPCLOUD_DIRECTORY_INSIGHTS)
  • Juniper Mist (JUNIPER_MIST)
  • Kubernetes Node logs (KUBERNETES_NODE)
  • Microsoft Azure Activity (AZURE_ACTIVITY)
  • Microsoft Graph API Alerts (MICROSOFT_GRAPH_ALERT)
  • Okta (OKTA)
  • Okta Access Gateway (OKTA_ACCESS_GATEWAY)
  • Palo Alto Networks Firewall (PAN_FIREWALL)
  • pfSense (PFSENSE)
  • Salesforce (SALESFORCE)
  • Sentinelone Alerts (SENTINELONE_ALERT)
  • SentinelOne EDR (SENTINEL_EDR)
  • Signal Sciences WAF (SIGNAL_SCIENCES_WAF)
  • SonicWall (SONIC_FIREWALL)
  • Windows Event (WINEVTLOG)
  • Workspace Activities (WORKSPACE_ACTIVITY)
  • Yubico OTP (YUBICO_OTP)
  • Zscaler Private Access (ZSCALER_ZPA)

For details about changes in each parser, see Supported default parsers.

Cloud Data Fusion 

Cloud Data Fusion version 6.8.1 is generally available (GA). This release is in parallel with the CDAP 6.8.1 release.

Changes in Cloud Data Fusion 6.8.1:

Cloud Data Fusion version 6.7.3 is generally available (GA). This release is in parallel with the CDAP 6.7.3 release.

Fixed in 6.7.3:

  • Fixed an issue that allowed reading secure keys in the system namespace with only the Data Fusion Viewer role (datafusion.viewer) or Instance Accessor role (datafusion.accessor). For more information about predefined roles for role-based access control in Cloud Data Fusion, see the Role-based access control (RBAC) overview.

  • Fixed an issue in the BigQuery Replication Target plugin that caused Replication jobs to fail when the BigQuery target table already existed. The new version of the plugin will automatically be used in new Replication jobs (CDAP-19599).

  • Fixed an issue that prevented upgrades for MySQL and SQL Server Replication jobs in version 6.6.0. Upgrades are supported from version 6.6.0 to 6.7.3 and 6.8.1 (CDAP-19622).

  • Fixed an issue that prevented upgrades for Oracle by Datastream Replication jobs in version 6.6.0. Upgrades are supported from versions 6.6.0, 6.7.0, 6.7.1, and 6.7.2 to version 6.7.3 (CDAP-20013).

  • Fixed an issue that caused pipelines to fail if they used a connection property, such as the Service Account JSON property, which used a secure macro with JSON as the value (CDAP-20271).

  • Fixed an issue that occurred in certain upgrade scenarios, where pipelines didn't have the Use Connection property set and the UI didn't display a plugin's connection properties, such as Project ID and Service Account Type (CDAP-20392).

  • Fixed an issue where the Replication Source plugin's event reader was not stopped by the Delta worker when there were errors, which caused leakage of the plugin's resources (CDAP-20394).

  • Fixed an error in security-enabled instances that caused pipeline launch to fail and return a token expired error when evaluating secure macros in provisioner properties (CDAP-20146).

  • In the Oracle Batch Source, when the source data included fields with the Numeric data type with undefined precision and scale, Cloud Data Fusion set the precision to 38 and the scale to 0. If any values in the field had scale other than 0, values were truncated, which could have resulted in data loss. If the scale for a field was overridden in the plugin output schema, the pipeline failed.

    If an Oracle source has Numeric data type fields with undefined precision and scale, you must set the scale for these fields in the plugin output schema. If there are any numbers present in the fields with a scale greater than the scale defined in the plugin, truncation might occur. If there are numbers with undefined precision and scale in the pipeline, warnings appear in the pipeline log. For more information about setting precision and scale in a plugin, see Changing the precision and scale for decimal fields in the output schema (PLUGIN-1433).

  • Improved performance for batch pipelines with MySQL sinks (PLUGIN-1374).

  • For Database plugins, fixed a security issue where the database username and password were exposed in the logs (CDAP-20235).

Cloud Functions

Cloud Functions now supports the Python 3.11 runtime at the General Availability release level.

Cloud Functions has added support for a new runtime, Ruby 3.2, at the Preview release level.

New performance recommendations are supported for Cloud Functions; these analyze cold starts and suggest configuring minimum instances to improve function performance. This feature is at the Preview release level.

Cloud Logging

You can now use the gcloud CLI to do the following:

  • Create a log bucket and upgrade that bucket to use Log Analytics.
  • Upgrade an existing log bucket to use Log Analytics.
  • Create a linked dataset in BigQuery.

For more information, see Configure log buckets.

Log buckets in the following regions can now be upgraded to use Log Analytics:

  • us-central1
  • us-west1
  • asia-south1

For more information, see Supported regions for Log Analytics.

Cloud Spanner

The new System insights dashboard displays metrics and scorecards for the resources that your instance or database uses and helps you get a high-level view of your system's performance. For more information, see Monitor instances with system insights.

Cloud SQL for MySQL / PostgreSQL / SQL Server

Cloud SQL now supports the ability to get details for a Cloud SQL user for a database instance using the API or gcloud. To learn more about the new method, see Cloud SQL Admin API REST Resource.

Compute Engine

Generally available: When creating a reservation, you can now include a compact placement policy to specify that VMs should be located as close to each other as possible to reduce network latency. Learn how to create a reservation that specifies a compact placement policy.


Datastream

Terraform now supports Datastream private connectivity, connection profile, and stream resources. For more information, see Getting started with Terraform and Datastream.


Dialogflow

The Dialogflow CX audio input duration limit has been increased from one minute to two minutes.



Filestore

High Scale and Enterprise tier instances now support overlapping permissions (GA).

Google Cloud Deploy

The ability to verify your deployment is now generally available.


Google Kubernetes Engine (GKE)

A new vulnerability (CVE-2022-4696) has been discovered in the Linux kernel that can lead to a privilege escalation on the node. GKE clusters, including Autopilot clusters, are impacted. GKE clusters using GKE Sandbox are not affected. For instructions and more details, see the GKE security bulletin.



Identity and Access Management (IAM)

Workforce identity federation now supports browser-based sign in. The feature is generally available (GA). To use it, see Browser-based sign-in in Obtain short-lived tokens for workforce identity federation, or locate the Browser-based sign-in section in the configuration guide for your identity provider.

Workforce identity federation and workload identity federation can now accept encrypted SAML assertions. The feature is generally available (GA). To use the feature, locate the Create the workload identity pool and provider section in the configuration guide for your identity provider and follow the gcloud CLI instructions for the SAML workflow.

Transcoder API

Validation checks were added for segmentDuration and gopDuration for all video codecs, as outlined in the documentation. This change was released earlier this month.

Vertex AI 

A new custom training overview page is available. The new overview page covers the following topics:

  • What is custom training?
  • Benefits of custom training on Vertex AI.
  • How custom training works.
  • Custom training workflow.


Microsoft Azure Releases And Updates
Source: azure.microsoft.com

Generally Available: Azure Monitor Logs now supports Availability Zones in Canada Central, France Central and Japan East

Azure Monitor Logs continues to extend its Availability Zone support by adding three regions – Canada Central, France Central and Japan East – to the East US 2 and West US 2 regions, which are already supported.

Azure SQL—General availability updates for early March 2023

General availability enhancements and updates released for Azure SQL in early March 2023


Disclosure: In-tree disk and file drivers will no longer be supported starting in Kubernetes v1.26

Migrate your existing in-tree disk and file volumes to CSI drivers using provided guidance.

Azure Virtual Network Manager Event Logging now in public preview

Interact with Azure Virtual Network Manager (AVNM) event logs for network group membership changes.

New Azure for Operators products and partner programs released

Azure for Operators announces the next wave of services to empower operators to modernize and monetize their 5G investments, and to enable enterprises with ubiquitous computing from cloud to edge.

Public preview: Azure PostgreSQL migration extension for Azure Data Studio

Provide an end-to-end assessment experience with right-sized target recommendations for Azure Database for PostgreSQL using the Azure PostgreSQL migration extension in Azure Data Studio.

General availability: Power BI with Azure Database for MySQL - Flexible Server

Integrate Power BI with Azure Database for MySQL - Flexible Server directly from the Azure portal to quickly get started with building your data visualizations!

Generally available: Burstable compute for single node configurations for Azure Cosmos DB for PostgreSQL

Use new burstable compute in single node configurations to start small with distributed Postgres and scale out later.

Now Available: Azure Monitor Ingestion client libraries

The initial stable release of the Azure Monitor Ingestion client libraries is now available. Upload custom logs to Log Analytics workspaces in .NET, Java, JavaScript, and Python.

General availability: Azure Archive Storage now available in West US 3

Reduce spending by storing rarely accessed data to Azure Archive Storage, now in new regions.


General availability: New enhanced connection troubleshoot

Azure Network Watcher announces enhanced connection troubleshoot, helping resolve network connectivity issues comprehensively, along with actionable insights.

Public preview: Login and TDE-enabled database migrations with Azure Database Migration Service

Provide a secure and improved user experience for migrating TDE databases and SQL/Windows logins to Azure SQL.

Generally available: 4 TiB, 8 TiB, and 16 TiB storage per node for Azure Cosmos DB for PostgreSQL

Onboard even larger distributed Postgres workloads to Azure Cosmos DB for PostgreSQL with fewer nodes using 4 TiB, 8 TiB, or 16 TiB storage per node.

Public preview: Confidential containers on ACI

ACI now lets you run containers in a trusted execution environment (TEE).


GA: Online live resize of persistent volumes

Scale up persistent volumes without taking the application offline.

Public preview: Pod sandboxing in AKS

AKS now allows you to isolate workloads at the kernel level.

Public preview: Caching in ACR

You can now use caching in ACR to achieve faster and more reliable pull operations.

Public Preview: Auto vacuum metrics for Azure Database for PostgreSQL - Flexible Server

Monitor the auto vacuum process health for Azure Database for PostgreSQL – Flexible Server via Azure Monitor metrics and write custom alert rules on these metrics.


Public preview: AKS NodeOSUpgrade channel

You now have more flexible options for managing your upgrades.

General availability: Scale improvements and metrics enhancements on Azure’s regional WAF

Azure’s regional Web Application Firewall (WAF) running on Application Gateway now supports improved scale limits, including HTTP listener count, as well as additional metrics dimensions.


Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.

Not knowing exactly what is in your cloud accounts, or those of your clients, can be a worry. What exactly is running in there, and what is it costing? What obsolete resources are you still being charged for? What legacy dev/test environments can be switched off? What open ports are inviting in hackers? You can answer all these questions with Hava.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14-day free trial here (no credit card required; includes a forever free tier):