
In Cloud Computing This Week [Nov 25th 2022]

November 25, 2022

 

 


Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday November 25th 2022.

This week we saw great uptake of the new Hava Terraform Provider, which allows you to include diagram source creation directly inside your deployment code, so your network documentation is taken care of as you build your networks.

https://www.hava.io/blog/hava-releases-hashicorp-terraform-provider

To stay in the loop, make sure you subscribe using the box on the right of this page.

Of course we'd love to keep in touch at the usual places. Come and say hello on:

Facebook. LinkedIn. Twitter.



AWS Updates and Releases

Source: aws.amazon.com

Amazon QuickSight launches cluster points for Geospatial Visual

Amazon QuickSight authors can now improve the readability of points on a map visual by changing the point style to cluster points. When hundreds of data points are clumped together, with many so close in proximity that they overlap and are not visible, cluster points makes it easy for readers to find patterns or identify locations with a greater or fewer number of data points.

Cluster points is a different style for displaying points on maps, letting readers manage multiple markers at different zoom levels. When enabled, the markers in close proximity are clustered into one marker with a number indicating the number of markers in that cluster. As readers zoom into any of the cluster locations, the number on the cluster decreases, and they begin to see the individual markers on the map. Zooming out of the map consolidates the markers into clusters again. See here for more details.

Amazon Managed Grafana now supports visualizing Prometheus Alertmanager rules and new configuration APIs

Amazon Managed Grafana now supports visualizing Prometheus Alertmanager rules, new configuration APIs and additional visualization plugins. AWS customers using Amazon Managed Service for Prometheus, or running self-managed Prometheus environments can visualize and analyze their Alertmanager rules, alert states, silences and contact points directly in an Amazon Managed Grafana workspace.

AWS customers can opt in to viewing their Prometheus Alertmanager rules by turning on Grafana alerting from the Amazon Managed Grafana console, or programmatically using the new configuration APIs, designed to manage workspace settings. Current workspace configuration details can be retrieved using the DescribeWorkspaceConfiguration API, and settings can be updated via the UpdateWorkspaceConfiguration API.

The CreateWorkspace API, used to programmatically create, delete and manage Grafana workspaces, has been updated to allow customers to enable Grafana alerting during workspace creation. Amazon Managed Grafana also adds support for Sankey, Plotly and Scatter visualization plugins, providing customers more visualization options for their dashboards.
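
If you manage workspaces from code rather than the console, a minimal boto3 sketch of the new configuration APIs might look like the following. The workspace ID and the "unifiedAlerting" configuration key are illustrative assumptions rather than confirmed values, so verify them against the Amazon Managed Grafana documentation.

```python
import json
import boto3

grafana = boto3.client("grafana")

# Retrieve the current workspace configuration (returned as a JSON string).
current = grafana.describe_workspace_configuration(workspaceId="g-abcd1234")  # hypothetical ID
print(json.loads(current["configuration"]))

# Update the configuration, e.g. to turn on Grafana alerting.
# The "unifiedAlerting" key is an assumed field name for illustration.
grafana.update_workspace_configuration(
    workspaceId="g-abcd1234",
    configuration=json.dumps({"unifiedAlerting": {"enabled": True}}),
)
```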

Amazon Managed Grafana is a fully managed service that takes care of the provisioning, setup, scaling, and maintenance of Grafana servers. Grafana alerting is available in every region where Amazon Managed Grafana is generally available. 

AWS IoT RoboRunner is now generally available

AWS IoT RoboRunner is an AWS for Robotics service that unlocks new use cases for robotics automation by helping fleets of robots seamlessly work together. AWS IoT RoboRunner reduces the complex development work required to build the applications you need to provide multivendor interoperability.

AWS IoT RoboRunner collects and combines data from each robot’s fleet manager and standardizes data types, such as robot status and location, in a central repository. AWS IoT RoboRunner features facilitate interoperability. Such features include Fleet Manager System Gateway, which manages robot vendor connections to AWS IoT RoboRunner, and Shared Space Management, which coordinates robot traffic in common spaces such as corridors.

You can also use AWS IoT RoboRunner API operations and sample code to build applications on top of the centralized repository, for use cases such as robot location and status visualization. With AWS IoT RoboRunner, enterprises can improve the efficiency of running multivendor robotics fleets and reduce the costs of running operations.

Amazon EBS launches Rule Lock for Recycle Bin to prevent unintended changes to Region-level retention rules for Snapshots and AMIs

This week, Amazon Elastic Block Store (EBS) announced the availability of Rule Lock for Recycle Bin so customers can lock their Region-level retention rules to prevent them from being unintentionally modified or deleted. This new setting adds an additional layer of protection for customers to recover their EBS Snapshots and EC2 AMIs in case of inadvertent or malicious deletions. 

AWS Customers can set up retention rules in Recycle Bin to recover from accidental deletions of their EBS Snapshots and EC2 AMIs. Each rule specifies the retention period for which resources are retained in the Recycle Bin after their initial deletion. Now, with the Rule Lock setting, customers can lock their retention rules so that they cannot be modified or deleted by any user, including Recycle Bin administrators.

You can now specify a rule unlock delay period (between 7 and 30 days) after which a locked rule can be modified, giving you a layer of protection against unintentional or malicious deletions of snapshots and AMIs. This unlock delay period gives customers adequate time to take corrective actions between the time that a user unlocks a rule and when the rule is actually available for editing or deletion.
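
A hedged boto3 sketch of locking a retention rule with a 7-day unlock delay is shown below; the rule identifier is a placeholder.

```python
import boto3

rbin = boto3.client("rbin")  # the Recycle Bin service client

# Lock a retention rule so it cannot be modified or deleted until it is
# unlocked and the unlock delay has elapsed.
rbin.lock_rule(
    Identifier="abcd1234",  # hypothetical retention rule ID
    LockConfiguration={
        "UnlockDelay": {
            "UnlockDelayUnit": "DAYS",
            "UnlockDelayValue": 7,  # rule becomes editable 7 days after unlock
        }
    },
)

# Later, an administrator can request an unlock; the rule only becomes
# editable once the delay period has passed.
rbin.unlock_rule(Identifier="abcd1234")
```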

Rule Lock for Recycle Bin is available in all AWS commercial regions and the AWS GovCloud (US) Regions. Resources in the Recycle Bin are billed at their standard rates and there are no additional charges for using Rule Lock for Recycle Bin. The Rule Lock setting is available to customers through the AWS Console, AWS Command Line Interface (CLI), or AWS SDKs.

Amazon Rekognition adds new pre-trained labels and introduces color detection

Amazon Rekognition Labels is a machine learning-based image and video analysis feature that can detect objects, people, text, scenes, and activities. Starting this week, AWS customers get multiple improvements and enhancements in Amazon Rekognition Labels for images. In the new update, AWS have added 600 new labels and improved the accuracy of over 2,000 existing labels. AWS also introduced Image Properties, for image quality and color detection. Lastly, they have added the ability to filter API results by labels and label categories.

Among the many new labels and label categories, AWS customers can now detect, for example, popular landmarks such as Brooklyn Bridge, Colosseum, Eiffel Tower, Machu Picchu, Taj Mahal; activities such as Applause, Cycling, Celebrating, Jumping, Walking Dog; and sports labels such as Baseball Game, Cricket Bat, Figure Skating, Rugby, Water Polo.

You can use the new Image Properties feature to detect the dominant colors of the entire image, the image foreground, the image background, and objects with localized bounding boxes. Image Properties also measures the sharpness, brightness, and contrast of the image. Using Image Properties, customers can filter out low-quality images or add color metadata to search content. Image Properties is an optional feature, priced separately from general label detection, and is only available with the updated AWS SDKs.

The Labels API response now contains “aliases” and “categories”. Aliases are other names for the same label, and categories group individual labels together based on 40 common themes, such as Food and Beverage or Animals and Pets. Note: aliases and categories are only returned with the updated AWS SDKs.
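
To see the new features end to end, here is a hedged boto3 sketch of the updated DetectLabels request; the bucket, object key, and filter values are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},  # placeholders
    Features=["GENERAL_LABELS", "IMAGE_PROPERTIES"],
    Settings={
        # Only return labels in this category (one of the 40 themes).
        "GeneralLabels": {"LabelCategoryInclusionFilters": ["Animals and Pets"]},
        "ImageProperties": {"MaxDominantColors": 5},
    },
)

for label in response["Labels"]:
    print(label["Name"], label.get("Aliases"), label.get("Categories"))

# Quality (sharpness, brightness, contrast) and dominant colors come back
# under "ImageProperties".
print(response.get("ImageProperties", {}).get("Quality"))
```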

AWS announces availability of Microsoft SQL Server 2022 images on Amazon EC2

Amazon EC2 adds support for managed Amazon Machine Images (AMIs) with SQL Server 2022. With these AMIs, you can easily launch SQL Server 2022 on EC2 and take advantage of fully compliant SQL Server licenses with a per-second billing model.

The new AMIs are available for both Windows Server and Linux operating systems. In addition, you can use related AWS services such as AWS Launch Wizard and CloudWatch Application Insights to further simplify your SQL Server deployment and management experience on EC2.

AWS is the proven, reliable, secure cloud for your SQL Server workloads. By running SQL Server 2022 on EC2, you can simplify SQL Server backups to S3 through a simple T-SQL BACKUP command. In addition, all SQL Server AMIs come with pre-installed software such as AWS Tools for Windows PowerShell, AWS Systems Manager, AWS CloudFormation, and various network and storage drivers to make your management easier.

Amazon SNS adds support for payload-based message filtering

Amazon Simple Notification Service (Amazon SNS) now supports payload-based message filtering, expanding the feature set that already supported attribute-based message filtering. With this release, you can apply subscription filter policies to filter out messages based on their contents, unlocking a variety of workloads.

You may use this new capability to filter events from 60+ AWS services that publish events to Amazon SNS, including Amazon S3, Amazon EC2, Amazon CloudFront, and Amazon CloudWatch. You may also use payload-based message filtering for your cross-account workloads, where subscribers may not be able to influence a given publisher to have its messages published with attributes to Amazon SNS.

Amazon SNS is a messaging service for Application-to-Application (A2A) and Application-to-Person (A2P) communication. The A2A functionality provides high-throughput, push-based, many-to-many messaging between distributed systems, microservices, and event-driven serverless applications.

These applications include Amazon Simple Queue Service, Amazon Kinesis Data Firehose, AWS Lambda, and HTTP/S endpoints. The A2P functionality enables you to communicate with your customers via mobile text messages (SMS), mobile push notifications, and email notifications. Now, with payload-based message filtering, you can further simplify your application architecture by offloading additional message filtering logic from your subscriber systems, as well as message routing logic from your publisher systems. 
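
As a quick illustration, the sketch below subscribes an SQS queue to a topic and filters on the message body by setting FilterPolicyScope to "MessageBody"; the ARNs and policy fields are placeholders.

```python
import json
import boto3

sns = boto3.client("sns")

sub = sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders",     # placeholder
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:eu-orders",  # placeholder
    Attributes={
        # Apply the filter policy to the message body instead of attributes.
        "FilterPolicyScope": "MessageBody",
        "FilterPolicy": json.dumps({"order": {"region": ["eu-west-1"]}}),
    },
    ReturnSubscriptionArn=True,
)
print(sub["SubscriptionArn"])
```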

Amazon SageMaker Autopilot experiments run with Ensemble training mode provide additional metrics and visibility into the AutoML workflow

Amazon SageMaker Autopilot now provides insights into the underlying workflow for each trial within a SageMaker Autopilot experiment launched with ensemble training mode. In the model leaderboard, SageMaker Autopilot ranks a list of machine learning (ML) models by inference latency, i.e. the time it takes to get a prediction result from a real-time endpoint to which the model is deployed, and by objective metrics such as accuracy, precision, recall, and area under the curve (AUC).

SageMaker Autopilot automatically builds, trains and tunes the best ML models based on your data, while allowing you to maintain full control and visibility. 

Amazon SageMaker Autopilot recently added a new ensemble training mode powered by AutoGluon. In the ensemble training mode, multiple trials with different combinations of a subset of algorithms and AutoGluon configuration parameters are executed. Until now, only a single model from each trial run was returned as a trial output and was ranked by the objective metric on the model leaderboard.

Starting this week, SageMaker Autopilot experiments with ensemble training mode will not only provide increased visibility into the AutoML experiment by listing the full set of underlying base learner models that were run within each trial, but will also use both the best objective metrics and the lowest inference latency to select the best model candidate for an experiment.

As an example, if two model candidates for a binary classification problem type have a similar f1 score objective metric of 0.678 but inference latencies of 0.43 seconds and 0.39 seconds respectively, SageMaker Autopilot will rank the latter as the best model in the leaderboard.
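
To launch an Autopilot experiment in ensemble training mode programmatically, a hedged boto3 sketch follows; the job name, S3 paths, target column, and role ARN are placeholders.

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="churn-ensemble-demo",                          # placeholder
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/",                     # placeholder
        }},
        "TargetAttributeName": "churn",                           # placeholder
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/output/"},  # placeholder
    AutoMLJobConfig={"Mode": "ENSEMBLING"},  # the AutoGluon-powered mode
    RoleArn="arn:aws:iam::123456789012:role/AutopilotRole",       # placeholder
)
```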

Amazon EMR on EKS adds support for configuring Spark properties within EMR Studio Jupyter Notebooks

AWS are excited to announce support for configuring Spark properties within EMR Studio Jupyter Notebook sessions for interactive Spark workloads. Amazon EMR on EKS enables customers to efficiently run open-source big data frameworks such as Apache Spark on Amazon EKS. Amazon EMR on EKS customers set up and use a managed endpoint (available in preview) to run interactive workloads using integrated development environments (IDEs) such as EMR Studio.

Data scientists and engineers use EMR Studio Jupyter notebooks with EMR on EKS to develop, visualize and debug applications written in Python, PySpark, or Scala. With this release, customers can now customize their Spark settings, such as driver and executor CPU/memory, number of executors, and package dependencies, within their notebook session to handle different computational workloads or different amounts of data, using a single managed endpoint.

AWS Control Tower now displays compliance status of external AWS Config rules

AWS Control Tower now displays the compliance status of AWS Config rules deployed outside of AWS Control Tower. This view provides you with visibility into the compliance status of externally applied AWS Config rules in addition to AWS Config rules set up by AWS Control Tower.

Prior to this feature, viewing the compliance status of external AWS Config rules required navigating to the AWS Config console. With this launch, you can now access the compliance status for your external AWS Config rules by navigating to the account details page in your AWS Control Tower management account. This makes it possible to evaluate the configuration settings of your AWS resources without leaving the AWS Control Tower console.

AWS Control Tower offers the easiest way to set up and govern a secure, multi-account AWS environment based on AWS best practices.

Amazon Kinesis Data Analytics for Apache Flink now supports Apache Flink version 1.15

Amazon Kinesis Data Analytics for Apache Flink now supports Apache Flink version 1.15. This new version includes improvements to Flink's exactly-once processing semantics, Kinesis Data Streams and Kinesis Data Firehose connectors, Python User Defined Functions, Flink SQL, and more.

The release also includes an AWS-contributed capability, a new Async Sink framework, which simplifies the creation of custom sinks to deliver processed data. For a complete list of features, improvements, and bug fixes please see the Apache Flink release notes for 1.15.

Amazon Kinesis Data Analytics makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Kinesis Data Analytics reduces the complexity of building and managing Apache Flink applications.

Amazon Kinesis Data Analytics for Apache Flink integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB Streams, Amazon S3, custom integrations, and more using built-in connectors.
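
Selecting the new runtime when creating an application is a one-line change; the sketch below is illustrative, with the application name, role, and code location as placeholders.

```python
import boto3

kda = boto3.client("kinesisanalyticsv2")

kda.create_application(
    ApplicationName="my-flink-app",                                # placeholder
    RuntimeEnvironment="FLINK-1_15",  # the newly supported runtime
    ServiceExecutionRole="arn:aws:iam::123456789012:role/KdaRole", # placeholder
    ApplicationConfiguration={
        "ApplicationCodeConfiguration": {
            "CodeContent": {
                "S3ContentLocation": {
                    "BucketARN": "arn:aws:s3:::my-bucket",         # placeholder
                    "FileKey": "flink-app-1.0.jar",                # placeholder
                }
            },
            "CodeContentType": "ZIPFILE",
        }
    },
)
```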

Support for reading and writing data in Amazon DynamoDB and cross account Amazon S3 access with Amazon EMR Serverless

Amazon EMR Serverless announces support for reading and writing data in Amazon DynamoDB with your Spark and Hive workflows. You can now export, import, query, and join tables in Amazon DynamoDB directly from your EMR Serverless Spark and/or Hive applications. Amazon DynamoDB is a fully managed NoSQL database that meets the latency and throughput requirements of highly demanding applications by providing single-digit millisecond latency and predictable performance with seamless throughput and storage scalability.

AWS users often need to process data stored in Amazon DynamoDB efficiently and at scale for downstream analytics. The Amazon EMR team built and open-sourced the emr-dynamodb-connector to help customers simplify access to Amazon DynamoDB from their Apache Spark and Apache Hive applications.

This connector enables multiple analytics use cases, including efficiently processing data in Amazon DynamoDB or joining tables in Amazon DynamoDB with external tables in Amazon S3, Amazon RDS, or other data stores that can be accessed by Amazon EMR Serverless. With Amazon EMR release 6.9, you get all the benefits of the Amazon DynamoDB connector with your Amazon EMR Serverless applications. You can access Amazon DynamoDB tables both cross-region and cross-account.

AWS are also delighted to share that EMR Serverless supports accessing specific Amazon S3 buckets from other AWS accounts to process data from your Spark and Hive applications. AWS customers use multiple AWS accounts to better separate different projects or lines of business.

Having cross-account capabilities simplifies securing and managing distributed data lakes across multiple accounts through a centralized approach. With cross-account access to Amazon S3, you can use your EMR Serverless Spark or Hive application in an AWS account and access data stored in specific buckets from other AWS accounts for processing. 

Manage Table metadata in Glue Data Catalog when running Flink workloads on Amazon EMR

Amazon EMR customers can now use AWS Glue Data Catalog from their streaming and batch SQL workflows on Flink. The AWS Glue Data Catalog is an Apache Hive metastore-compatible catalog. You can configure your Flink jobs on Amazon EMR to use the Data Catalog as an external Apache Hive metastore. With this release, you can then directly run Flink SQL queries against the tables stored in the Data Catalog.

Previously, Flink only supported an on-cluster Hive metastore as the out-of-the-box persistent catalog. This meant that metadata had to be recreated when clusters were shut down, and it was hard for multiple clusters to share the same metadata. Starting with Amazon EMR 6.9, your Flink jobs on Amazon EMR can manage Flink’s metadata in AWS Glue Data Catalog. You can use a persistent and fully managed Glue Data Catalog as a centralized repository. Each Data Catalog is a highly scalable collection of tables organized into databases.

The AWS Glue Data Catalog provides a uniform repository where disparate systems can store and find metadata to keep track of data in data silos. You can then query the metadata and transform that data in a consistent manner across a wide variety of applications. With support for AWS Glue Data Catalog, you can use Apache Flink on Amazon EMR for unified batch and stream processing of Apache Hive tables or the metadata of any Flink table source, such as Iceberg, Kinesis, or Kafka. You can specify the AWS Glue Data Catalog as the metastore for Flink using the AWS Management Console, AWS CLI, or Amazon EMR API.

Announcing AWS Graviton2 support for Amazon EMR Serverless - Get up to 35% better price-performance for your serverless Spark and Hive workloads

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple to run applications using open-source analytics frameworks such as Apache Spark and Hive without configuring, managing, or scaling clusters. 

This week, AWS are excited to announce support for AWS Graviton2 (ARM64-based architecture) for EMR Serverless. AWS Graviton2 processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores that provide top-tier price performance for cloud workloads.

EMR Serverless applications powered by AWS Graviton2 offer up to 19 percent better performance and 20 percent lower cost per resource compared to x86-based instances. To use this option, simply choose ARM64-based architecture for your EMR Serverless application, and make sure that any custom library that you submit with your job is compatible with ARM64. Visit the documentation and Amazon EMR Serverless pricing page to learn more about using Graviton2 with EMR Serverless. 
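
In boto3 terms, the choice is a single parameter on application creation; the names and release label below are illustrative.

```python
import boto3

emr = boto3.client("emr-serverless")

app = emr.create_application(
    name="spark-on-graviton",  # placeholder
    releaseLabel="emr-6.9.0",  # illustrative release label
    type="SPARK",
    architecture="ARM64",      # selects Graviton2 instead of X86_64
)
print(app["applicationId"])
```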

Amazon S3 Select improves query performance by up to 9x when using Trino

Amazon S3 improves performance of queries running on Trino by up to 9x when using Amazon S3 Select. Trino is an open source SQL query engine used to run interactive analytics on data stored in Amazon S3.

With S3 Select, you “push down” the computational work to filter your S3 data instead of returning the entire object. By using Trino with S3 Select, you retrieve only a subset of data from an object, reducing the amount of data returned and accelerating query performance.

Starting this week, with AWS’s upstream contribution to open source Trino, you can use Trino with S3 Select to improve your query performance. S3 Select offloads the heavy lifting of filtering and accessing data inside objects to Amazon S3, which reduces the amount of data that has to be transferred and processed by Trino.

For example, if you have a data lake built on Amazon S3 and use Trino today, you can use S3 Select’s filtering capability to quickly and easily run interactive ad-hoc queries.
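
The push-down primitive Trino now uses is the same S3 Select call you can make directly; the sketch below filters a CSV object server-side so only matching records leave S3. Bucket, key, and column names are made up.

```python
import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-data-lake",            # placeholder
    Key="events/2022/11/events.csv",  # placeholder
    ExpressionType="SQL",
    Expression="SELECT s.user_id FROM S3Object s WHERE s.country = 'NZ'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; "Records" events carry the filtered rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```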

AWS X-Ray adds trace linking for event-driven applications built on Amazon SQS and AWS Lambda

AWS X-Ray adds support for trace linking, enabling customers to visualize and debug requests as they travel through event-driven applications built using Amazon Simple Queue Service (SQS) and AWS Lambda. Using trace linking, customers can now see the relationships between services and resources in their event-driven applications leveraging Amazon SQS and AWS Lambda, quickly identify performance bottlenecks, and explore individual requests to find the root cause of application health problems with just a few clicks.

Customers with AWS X-Ray enabled on their AWS Lambda functions can start using trace linking without making any changes to their application code or service configuration. Traces in event-driven applications built using Amazon SQS and AWS Lambda are now automatically linked together, so customers can visualize the end-to-end flow of each request, making it easy to find the sources of performance or health events, such as how long messages sit in Amazon SQS queues before being processed, or to identify Amazon SQS messages impacting AWS Lambda functions.

Customers can also see when multiple requests are processed together as a single batch. Trace linking thus offers critical debugging capabilities, lowering the mean time to resolution for issues in event-driven applications.

AWS X-Ray trace linking for Amazon SQS and AWS Lambda is now available in all AWS Commercial Regions where AWS X-Ray is available.

AWS Glue Crawlers Now Support Snowflake

AWS Glue crawlers now support Snowflake, making it easier for you to understand updates to Snowflake schema and extract meaningful insights.

To crawl a Snowflake database, customers can create and schedule a Glue crawler with a JDBC URL and credential information from AWS Secrets Manager. A configuration option allows you to specify whether you want the crawler to crawl the entire database, or to limit the tables by including a schema/table path and exclude patterns to reduce crawl time. With each run, the crawler inspects and catalogs information, such as updates or deletes to Snowflake tables, external tables, views, and materialized views, in the AWS Glue Data Catalog.

For Snowflake columns with non-Hive compatible types, such as geography or geometry, the crawler will extract that information as a raw data type and make it available in the Data Catalog.
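
A hedged boto3 sketch of wiring this up follows: a JDBC connection that points at Snowflake with credentials in AWS Secrets Manager, then a crawler scoped to one schema. The URL, names, and the SECRET_ID property are our reading of Glue's JDBC connection options, so verify against the Glue documentation.

```python
import boto3

glue = boto3.client("glue")

glue.create_connection(
    ConnectionInput={
        "Name": "snowflake-conn",
        "ConnectionType": "JDBC",
        "ConnectionProperties": {
            # Placeholder account URL; SECRET_ID references Secrets Manager.
            "JDBC_CONNECTION_URL": "jdbc:snowflake://myaccount.snowflakecomputing.com/?db=SALES",
            "SECRET_ID": "snowflake/crawler-credentials",
        },
    }
)

glue.create_crawler(
    Name="snowflake-sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # placeholder
    DatabaseName="sales_catalog",
    Targets={"JdbcTargets": [{
        "ConnectionName": "snowflake-conn",
        "Path": "SALES/PUBLIC/%",  # limit the crawl to one schema
    }]},
)
glue.start_crawler(Name="snowflake-sales-crawler")
```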

Amazon QuickSight Now Supports Connectivity to Databricks

This week, Amazon QuickSight announced the general availability of a new connector for QuickSight that will enable customers to natively connect to Databricks. This launch allows you to connect to and visualize data from the Databricks E2 version of the platform.

With the QuickSight connector for Databricks, you will be able to create a new data source in QuickSight that connects to a Databricks Lakehouse (E2 version). Customers can choose to ingest data from delta tables directly into QuickSight’s SPICE (Super-fast, parallel, in-memory Calculation Engine) engine, or use direct query to query the data where it resides in the Databricks E2 platform.

The new connector supports both public and private VPC connectivity.

The Databricks connector for QuickSight will be available in Amazon QuickSight Standard and Enterprise Editions in all QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland and London), South America (São Paulo) and AWS GovCloud (US-West).
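
For API users, a hedged sketch of creating the new data source type is below. The host, port, SQL endpoint path, and credentials are placeholders, and the DatabricksParameters shape is our reading of the API, so confirm it in the QuickSight documentation.

```python
import boto3

qs = boto3.client("quicksight")

qs.create_data_source(
    AwsAccountId="123456789012",  # placeholder
    DataSourceId="databricks-lakehouse",
    Name="Databricks Lakehouse",
    Type="DATABRICKS",
    DataSourceParameters={
        "DatabricksParameters": {
            "Host": "dbc-a1b2c3d4.cloud.databricks.com",      # placeholder
            "Port": 443,
            "SqlEndpointPath": "/sql/1.0/warehouses/abc123",  # placeholder
        }
    },
    Credentials={"CredentialPair": {"Username": "token", "Password": "<token>"}},
)
```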

Amazon Connect now supports configurable Lex timeouts within Chat experience

Amazon Connect Chat now allows you to configure timeouts for chat conversations between a customer and an Amazon Lex chatbot. This enables you to define how long to wait for a response from the customer (e.g. 5 minutes) before the session expires.

Amazon QuickSight supports NULL in parameters

Amazon QuickSight now fully supports NULL in parameters. A parameter, as a placeholder for single-value or multi-value variables, is a powerful and widely used entity in QuickSight. Previously, NULL was not supported as a valid parameter value, which created discrepancies when user data contained NULL values.

With this release, all functionality that consumes parameters now supports NULL values, providing a consistent experience wherever parameters are used. For further details, visit here.

AWS Service Catalog now supports syncing products with Infrastructure as Code template files from GitHub, GitHub Enterprise, or Bitbucket

AWS Service Catalog customers can now create AWS Service Catalog products that are synced to Infrastructure as Code (IaC) templates that are managed in external repositories such as GitHub, GitHub Enterprise, or Bitbucket.

AWS Service Catalog customers often manage their IaC templates in an external repository, such as GitHub, GitHub Enterprise or Bitbucket. When creating new or updating existing product versions, they have to build and manage separate code pipelines that ensure alignment between their AWS Service Catalog product versions and the template files in their repositories.

With today’s launch, AWS Service Catalog customers no longer need to build and manage this additional code infrastructure. They can use their external repositories as a single source of truth to manage and update their IaC template files, and AWS Service Catalog will sync changes automatically.

To create synced AWS Service Catalog products, customers must authorize a one-time connection between their AWS account and external third-party provider account using AWS CodeStar Connections. Once a connection is established, AWS Service Catalog administrators can create new or update existing AWS Service Catalog products based on a template file within a given repository and branch. If a change to the template file is committed in the repository, AWS Service Catalog will automatically detect the change and create a new product version.

Previous product versions will be maintained up to the prescribed version limit, and their status is changed to ‘deprecated’. This feature for AWS Service Catalog can be used through the AWS Command Line Interface (CLI), AWS API, or through the AWS Service Catalog console.

Amazon EventBridge introduces new capabilities that make it simpler to build rules

Amazon EventBridge now supports generating CloudFormation templates from the rules and buses console pages. CloudFormation templates help provision and manage the configuration of event buses and rules, and you can now export your existing configurations in the console directly to a CloudFormation template.

Simply select an existing rule in the console, pick JSON or YAML, and click the download button to export the configuration to a CloudFormation template. The CloudFormation template will contain the rule and target(s) of the rule. You can also select an event bus, choose whether or not to include existing rules on that bus, and download a CloudFormation template containing information for the bus, rules (if included), and targets. This makes it easier to generate CloudFormation templates for more complex rules and targets, and simplifies configuring rules and buses in different environments. 

Additionally, you can now generate event patterns from a schema. Schemas define the structure of an event. During rule evaluation, event patterns are used to check if the incoming event matches the expected event pattern. If it does, the event is sent to a downstream target. Previously, you had to write the event pattern manually, but with this launch you can now generate an event pattern using an existing schema.

You can view and select attributes to include from the schema, specify whether they are required or not, and define acceptable values for that attribute. You can then use the generated event pattern to verify that your rule will match against events as expected. 
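
For reference, a rule's event pattern is just JSON like the patterns the console now generates from a schema; the sketch below registers one with boto3, with the pattern fields as examples.

```python
import json
import boto3

events = boto3.client("events")

# Example pattern: match EC2 instances entering "stopped" or "terminated".
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped", "terminated"]},
}

events.put_rule(
    Name="ec2-state-changes",
    EventBusName="default",
    EventPattern=json.dumps(pattern),
)
```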

AWS Backup announces support for SAP HANA databases on Amazon EC2 in Preview

AWS Backup now offers a simple, cost-effective, and SAP-certified application-consistent backup and restore solution for SAP HANA databases running on Amazon EC2. With this launch, you can centrally automate backup and restore of your SAP HANA application data in addition to the currently supported AWS services. Using AWS Backup’s seamless integration with AWS Organizations, you can centrally create and manage immutable backups of SAP HANA databases across all your accounts, help protect your data from inadvertent or malicious actions, and restore the data.

To get started with AWS Backup for SAP HANA on EC2, you can use the AWS Management console, CLI, or SDK to create backup policies to start protecting your SAP HANA databases. AWS Backup leverages AWS Systems Manager (SSM) for SAP to register these SAP HANA systems and AWS Backint Agent to take backups.

Once you define your backup policies and assign SAP HANA resources to the policies, AWS Backup automates the creation of SAP HANA backups that are application-consistent and stores those backups in an encrypted backup vault that you designate. For restores, AWS Backup automatically recovers your data with a few clicks.

Amazon RDS Custom for Oracle now supports Oracle Home customization

Amazon Relational Database Service (Amazon RDS) Custom for Oracle now supports customization of the Oracle Database installation, including the file system paths for Oracle Base and Oracle Home, and identities of the operating system user and group associated with the database. With today’s release, you can further customize your Oracle Database installation on RDS Custom for Oracle to suit the needs of your applications and conform to your organization’s standardized deployment practices. 

By using Amazon RDS Custom for Oracle, you can obtain access to the operating system required for a wider range of applications, and benefit from the agility of a managed database service, with features such as automated backups and point-in-time recovery. Oracle Home customization further enables you to run legacy applications that require the Oracle Database software to be located in a specific location or owned by a specific user.

Oracle Home customization is available for all Oracle major database versions supported in RDS Custom for Oracle today, including 12.1, 12.2, 18c, and 19c.

Amazon CloudFront launches continuous deployment support

Amazon CloudFront now supports continuous deployment, a new feature that lets you test and validate configuration changes with a portion of live traffic before deploying the changes to all viewers.

Continuous deployment with CloudFront gives you a high level of deployment safety. You can now deploy two separate but identical environments (blue and green), and enable simple integration into your continuous integration and delivery (CI/CD) pipelines with the ability to roll out releases gradually without any domain name system (DNS) changes.

It ensures that your viewer gets a consistent experience through session stickiness by binding the viewer session to the same environment. Additionally, you can compare the performance of your changes by monitoring standard and real-time logs and quickly revert to the previous configuration when a change negatively impacts a service.

Typical use cases for this feature include checking for backward compatibility, post-deployment verification, and validating new features with a smaller group of viewers.

Continuous deployment support is available across all the CloudFront edge locations at no additional cost. You can access it through CloudFront Console, SDK, Command Line Interface, or CloudFormation template. 
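
A hedged boto3 sketch of a continuous deployment policy that sends 5% of traffic to a staging distribution is shown below; the staging DNS name is a placeholder and the config shape is our reading of the new API.

```python
import boto3

cf = boto3.client("cloudfront")

cf.create_continuous_deployment_policy(
    ContinuousDeploymentPolicyConfig={
        "StagingDistributionDnsNames": {
            "Quantity": 1,
            "Items": ["d111111abcdef8.cloudfront.net"],  # placeholder
        },
        "Enabled": True,
        "TrafficConfig": {
            "Type": "SingleWeight",
            # Route 5% of viewer traffic to the staging distribution.
            "SingleWeightConfig": {"Weight": 0.05},
        },
    }
)
```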

AWS Lambda announces support for Attribute-Based Access Control (ABAC) in AWS GovCloud (US) Regions

ABAC is an authorization strategy that defines access permissions based on tags which can be attached to IAM resources such as IAM users and roles, and to Amazon Web Services resources, like Lambda functions, to simplify permission management.

ABAC support for Lambda functions allows you to scale your permissions as your organization innovates, and give granular access to developers without requiring a policy update when a user or project is added, removed or updated. With ABAC support for AWS Lambda, IAM policies can be used to allow or deny specific Lambda API actions when the IAM principal's tags match the tags on a Lambda function.

This week AWS were excited to announce that Lambda supports ABAC in AWS GovCloud (US) Regions.

With this launch, Lambda supports ABAC only for Lambda APIs that use function, function version, and function alias as the main resource types. Please review the full list of Lambda API actions and resource types here.
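
As an illustration of the pattern, the policy below allows invoking only those functions whose "project" tag matches the calling principal's "project" tag; the tag key is an example.

```python
import json

# Example ABAC policy document; attach it to an IAM role or user.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "lambda:InvokeFunction",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                # Grant access only when resource and principal tags match.
                "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```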

Amazon Textract launches the ability to detect signatures on any document

Amazon Textract is a machine learning service that automatically extracts printed text, handwriting, and data from any document or image.

Textract now provides the capability to detect handwritten signatures, e-signatures, and initials on documents such as loan application forms, checks, claim forms, and more. AnalyzeDocument Signatures reduces the need for human reviewers and helps customers reduce costs, save time, and build scalable solutions for document processing.

Signatures is available as a feature type in the AnalyzeDocument API and enables customers to automatically detect signatures on documents. AnalyzeDocument Signatures provides the location and the confidence scores of the detected signatures.

The feature can be used standalone or in combination with other structured data extraction types such as Forms, Tables, and Queries. Signatures is pre-trained on a wide variety of financial, insurance, and tax documents.
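
A minimal boto3 sketch of requesting signature detection (here alongside Forms) follows; the bucket and document name are placeholders.

```python
import boto3

textract = boto3.client("textract")

resp = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "loan-application.pdf"}},  # placeholders
    FeatureTypes=["SIGNATURES", "FORMS"],
)

# Detected signatures come back as SIGNATURE blocks with a confidence
# score and a bounding box.
for block in resp["Blocks"]:
    if block["BlockType"] == "SIGNATURE":
        print(block["Confidence"], block["Geometry"]["BoundingBox"])
```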

Amazon QuickSight launches admin asset management console

Amazon QuickSight launches an asset management console for administrators. With an interactive UI, administrators can now list and search all account assets, regardless of who owns them.

They can list all the assets a user or group has access to, including in a multi-tenant environment. They can perform asset-level or bulk actions, like transferring assets from one person to another when someone leaves the organization, sharing assets with other users, or revoking asset access. Currently supported assets are dashboards, analyses, datasets, data sources, and shared folders.

The asset management console is available to administrators with access to the QuickSight admin console pages via IAM credentials. For more information, visit here.

This release also supports APIs for searching assets which allows administrators to automate and govern at scale. Administrators and developers can programmatically search for assets a user or group has access to and search for assets by name. For more information, visit here.

AWS IAM Identity Center now supports session management capabilities for AWS Command Line Interface (AWS CLI) and SDKs

Starting this week, AWS IAM Identity Center (successor to AWS Single Sign-On) customers can manage the session duration (between 15 minutes and 7 days) for AWS Command Line Interface (AWS CLI) and SDK sessions. With this release, when you set the access portal session duration for your organization in IAM Identity Center, it also applies to AWS CLI and SDK sessions in addition to application and console sessions.

Prior to this release, the AWS CLI and SDKs session limit was eight (8) hours. Now, you can choose a longer session so that you can run your jobs for a longer duration without re-authentication, and avoid abrupt termination of long running jobs.

Alternatively, you can shorten your session limit based on your organization’s security or compliance requirements. To enable the session management feature for the AWS CLI and SDKs, you will need to upgrade your AWS CLI and SDKs to the minimum supported version. To learn more about session management features, please refer to the documentation here.

Run long-running, fault-tolerant SQL queries with Trino and Amazon EMR with checkpointing on Amazon S3 or HDFS

This week, Amazon EMR announced support for long-running, fault-tolerant SQL queries on the Trino engine (Project Tardigrade), with checkpointing in Amazon S3 or HDFS for fault tolerance. Project Tardigrade aims to improve the user experience of long-running, resource-intensive queries on Trino when used for ETL-style workloads.

Project Tardigrade uses Amazon S3 for checkpointing buffered intermediate data. With the Amazon EMR 6.9 release, AWS are also adding checkpointing on HDFS for performance-sensitive and long-running SQL workloads.

Long-running ETL workloads can be challenging to run reliably and cost effectively on Trino, because restarting failed queries from scratch wastes cluster resources, and the lack of an iterative query capability can cost more on large clusters. Project Tardigrade introduced a new fault-tolerant execution mechanism that enables Trino clusters to mitigate query failures by retrying them using the intermediate exchange data that is collected on S3.

The Amazon EMR team extended this capability to checkpoint in HDFS to further improve the performance of these Trino queries. With support for fault-tolerant, long-running queries, Amazon EMR users can now run ETL workflows reliably, while also benefiting from the performance and cost savings of iterative task runs. You can enable fault tolerance on Amazon EMR Trino clusters using the Trino configuration classification on the Amazon EMR console, CLI, or using the API.
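
A heavily hedged sketch of enabling this on a new cluster follows; the classification names and property keys are our reading of Trino's Project Tardigrade settings and the EMR 6.9 classifications, so verify them against the EMR documentation before use.

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="trino-fault-tolerant",
    ReleaseLabel="emr-6.9.0",
    Applications=[{"Name": "Trino"}],
    Configurations=[
        # Retry whole queries using buffered intermediate data.
        {"Classification": "trino-config",
         "Properties": {"retry-policy": "QUERY"}},
        # Assumed classification for the exchange manager that spools
        # intermediate data to S3.
        {"Classification": "trino-exchange-manager",
         "Properties": {
             "exchange-manager.name": "filesystem",
             "exchange.base-directories": "s3://my-bucket/trino-exchange/",  # placeholder
         }},
    ],
    Instances={"InstanceCount": 3, "MasterInstanceType": "m5.xlarge",
               "SlaveInstanceType": "m5.xlarge"},
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
)
```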

QuickSight dashboards now available for seller reporting and insights in AWS Marketplace (Preview)

This week, AWS Marketplace announced the Preview of two Amazon QuickSight dashboards for AWS Marketplace sellers. Sellers can now access the billed revenue dashboard and collections & disbursements dashboard from the Insights tab of AWS Marketplace Management Portal (AMMP).

Previously, sellers could access their business data via downloadable CSV reports on AMMP. Now, sellers can view, analyze, and track key trends and metrics in a visualized manner on QuickSight dashboards in AMMP.

The dashboards are based on up to 1.5 years of sellers’ historical sales and are refreshed daily. They are pre-built with more than 5 key business metrics (such as gross revenue, gross refund, net revenue, amount disbursed, wholesale cost), more than 10 filter controls (such as invoice ID, invoice date, offer ID, subscriber geography and company name), and over 15 visualizations on billing and disbursement trends.

Sellers can download the granular data directly from the dashboard in CSV or Excel formats.


Google Cloud Releases and Updates
Source: cloud.google.com

 

Anthos Clusters on bare metal

Anthos clusters on bare metal 1.13.2 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.13.2 runs on Kubernetes 1.24.

  • Ensured the kubeadmconfig Secret is deleted when a Cluster API node is removed.
  • Added a preflight check command (bmctl check preflight) that you can use when upgrading clusters at version 1.13 and higher.
  • Updated the commands bmctl check preflight and bmctl create cluster so that they fail if worker or control-plane nodes have docker credentials in /root/.docker/config.json. (Anthos clusters on bare metal version 1.13 and higher can no longer use Docker Engine as a container runtime. All clusters must use the default container runtime containerd).

Anthos Service Mesh

1.15.3-asm.6 is now available for in-cluster Anthos Service Mesh.

You can now download 1.15.3-asm.6 for in-cluster Anthos Service Mesh. It includes the features of Istio 1.15.3 subject to the list of supported features.

1.13.9-asm.3 is now available for in-cluster Anthos Service Mesh.

You can now download 1.13.9-asm.3 for in-cluster Anthos Service Mesh. It includes the features of Istio 1.13.9 subject to the list of supported features.

Cloud Asset Inventory

The following resource types are now publicly available through the Export APIs (ExportAssets, ListAssets, and BatchGetAssetsHistory), Feed API, and Search APIs (SearchAllResources, SearchAllIamPolicies).

  • Service Directory
    • servicedirectory.googleapis.com/Namespace

Cloud Bigtable

  • First pass on making retry configuration more consistent (#695) (c707c30)
  • Make internal rst_stream errors retriable (#699) (770feb8)
  • Make sure that the proper exception type is bubbled up for ReadRows (#696) (5c72780)
  • Prevent sending full table scan when retrying (backport #554) (#697) (c4ae6ad)

Cloud Composer

Google are currently experiencing an issue with gcloud CLI version 410.0.0. Some Composer commands return non-zero error codes along with an additional "gcloud crashed (TypeError): 'NoneType' object is not callable" output message.

This issue doesn't impact the functionality provided by the commands when used in interactive mode. However, it may produce misleading error stack traces and cause failures when the commands are used programmatically, since it returns non-zero error codes.

If your operations could be affected by this issue, please refrain from upgrading to gcloud CLI version 410.0.0.

If you already performed the upgrade, you can downgrade to a previous gcloud version. For more information see Cloud Composer known issues.

Cloud Functions

Cloud Functions has added support for a new runtime, Node.js 18, at the Preview release level.

Dialogflow

Dialogflow CX now integrates with GitHub. This integration makes it easy to export your agent to JSON for a push to GitHub, and to pull from GitHub for an agent restore.

Document AI 

Expense Parser Releases

As of November 18, 2022, for the Expense Parser, GCP have promoted the v1.3 Release Candidate version to a Stable version so that more customers can use it confidently. 

New Stable version

Features in the new Stable Expense Parser, pretrained-expense-v1.3-2022-07-15:

  • Support for a new language, Japanese, which has been requested by multiple customers.

  • Better entity performance

  • Addition of 3 new entity types (line_item/quantity, payment_type, credit_card_last_four_digits)

  • Better support for hotel and car-rental related expenses 

New Release Candidate version

Along with this Stable version, we are also launching a new Release Candidate version of the Expense Parser, pretrained-expense-v1.4-2022-11-18, with the following new features, in addition to the features in the Stable version:

  • Improvements to overall performance

  • Support for two (2) new languages, Italian and Portuguese

  • Support for Uptraining to improve or add/remove entities in the schema

  • Support for Uptraining to add support for unsupported languages

  • Addition of 3 new entity types (traveler_name, reservation_id, line_item/transaction_date)

  • Maximum pages (online/synchronous requests) limit has been increased to 15.

Deprecation of the old Stable version

The pretrained-expense-v1.1-2021-04-09 version of the Expense Parser will be deprecated following this release. 

Invoice Parser Updates

The previous Stable Invoice processor version, pretrained-invoice-v1.1-2021-04-09, is deprecated as of November 22, 2022.

The Invoice Parser, for v1.3 and v1.4, now has the following quotas and limits:

  • Maximum pages (online/synchronous requests): 15
  • Maximum pages (batch/offline/asynchronous requests): 200

GKE

GKE version 1.21.14-gke.9500 has an issue where Pods in certain conditions might get stuck terminating indefinitely, due to a Linux kernel bug. The version has been removed and is no longer available for new clusters. If your node pools are running 1.21.14-gke.9500 and experience the issue, we recommend downgrading the node pool to 1.21.14-gke.8500.

The Logs tab available for each cluster on the Kubernetes Engine > Clusters page now includes suggested queries for your logs. For more information about using your GKE logs, see Viewing your GKE logs.

Pub/Sub

  • Next release from main branch is 1.121.0 (#1406) (1b25b0e)

Traffic Director

Traffic Director deployment with automatic Envoy injection for Google Kubernetes Engine Pods currently installs Envoy version v1.24.0.

 


 

Microsoft Azure Releases And Updates
Source: azure.microsoft.com

Public Preview: Use Azure Quota REST APIs to manage service limits (quotas)

 

Use the Azure Quota REST API to manage service limits (quotas), query current usage and quotas, and even update limits for your supported Azure resources when required.

Public preview: Cross Subscription Restore for Azure Virtual Machines

Cross Subscription Restore for Azure Virtual Machines provides the flexibility to choose any subscription during restore.
 

Public preview: Azure Bastion now supports shareable links

Shareable links allow users to connect to target resources via Azure Bastion without access to the Azure portal.

Public preview: Add an Azure Cosmos DB custom endpoint in IoT Hub

Azure IoT Hub now supports the ability to set up an Azure Cosmos DB account as a custom endpoint. This helps route device data from IoT Hub to Azure Cosmos DB directly.


 
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
 
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts, or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.

When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
 
Check out the 14 day free trial here (includes forever free tier):


Learn More!

 

Topics: aws azure gcp news

Written by Team Hava

The Hava content team
