Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday 17th June 2022.
A few more performance enhancements were rolled out this week and work is well underway on some more Hava UI improvements, additional resource support and streamlining the self-hosted process. It's all happening.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on our social channels.
AWS Releases And Updates
Source: aws.amazon.com
AWS Elastic Beanstalk is now available in the Asia Pacific (Jakarta) Region
AWS Elastic Beanstalk enables customers to deploy and scale web applications and services without having to manage any of the underlying infrastructure. Elastic Beanstalk automatically scales your application up and down based on your application's specific needs. Starting today, you can run applications orchestrated by AWS Elastic Beanstalk in the Asia Pacific (Jakarta) Region.
UI Improvements in AWS Budgets
AWS Budgets has enhanced the console experience by adding a split-view panel that allows you to view budget details without leaving the budgets overview page. AWS Budgets helps you control AWS cost and usage by allowing you to set custom budgets that alert you when your cloud spend exceeds (or is forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set Savings Plans and Reservation alerts and receive notifications when your utilization or coverage targets drop below your desired thresholds. AWS Budgets is generally available in all public AWS Regions.
Using the new split-view panel in AWS Budgets, you can save time and clicks when analyzing budget performance by using navigation controls to select the next or previous budget. Also, you can now construct helpful budget history visualizations by selecting multiple budgets at once. The split-view panel provides budget details as well as alert filtering capability so you can quickly identify which alert thresholds are exceeded and whom to contact.
Amazon QuickSight now supports show/hide fields on pivot tables
Amazon QuickSight now provides authors the ability to show or hide any column, row, or value field from the field well context menu on pivot table visuals. This capability was previously supported only in table visuals, and this launch extends it to pivot table visuals. Readers and authors can now export data to CSV and Excel from both table and pivot table visuals via the context menu.
Using the show/hide column feature, authors can hide unwanted columns, which are often used for custom actions for interactivity, and provide a cleaner visual presentation. The newly added context menu options give readers the ability to export both visible and hidden data fields to Excel and CSV, which authors can control with the new advanced publishing option. This release provides a consistent export experience across both table and pivot table visuals.
AWS Single Sign-On is now available in the Europe (Milan) Region
AWS Single Sign-On (AWS SSO) is now available in the AWS Europe (Milan) Region. For a full list of the regions where AWS SSO is available, see the AWS Regional Services List.
AWS SSO is where you create, or connect, your workforce identities in AWS once and manage access centrally across your AWS organization. You can choose to manage access just to your AWS accounts or cloud applications. You can create user identities directly in AWS SSO, or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With AWS SSO, you get a unified administration experience to define, customize, and assign fine-grained access. Your workforce users get a user portal to access all of their assigned AWS accounts or cloud applications. AWS SSO can be flexibly configured to run alongside or replace AWS account access management via AWS IAM.
It is easy to get started with AWS SSO. With just a few clicks in the AWS SSO management console you can connect AWS SSO to your existing identity source and configure permissions that grant your users access to their assigned AWS Organizations accounts and hundreds of pre-integrated cloud applications, all from a single user portal.
Enable Amazon DevOps Guru for RDS from within the Amazon RDS Console
Amazon DevOps Guru for RDS now supports enablement from within the Amazon RDS Console. Starting today, you can enable Amazon DevOps Guru for RDS to quickly detect, diagnose, and remediate a wide variety of database-related issues in Amazon Aurora databases while creating a new database. You can also now enable Amazon DevOps Guru for RDS from within the RDS Performance Insights page.
Amazon DevOps Guru for RDS is a new Machine Learning (ML) powered capability for Amazon Relational Database Service (Amazon RDS) that automatically detects and diagnoses database performance and operational issues, enabling you to resolve bottlenecks in minutes rather than days. Amazon DevOps Guru for RDS is a feature of Amazon DevOps Guru, which detects operational and performance related issues for Amazon RDS engines and dozens of other resource types. Amazon DevOps Guru for RDS expands upon the existing capabilities of Amazon DevOps Guru to detect, diagnose, and provide remediation recommendations for a wide variety of database-related performance issues, such as resource over-utilization and misbehavior of SQL queries. When an issue occurs, Amazon DevOps Guru for RDS immediately notifies developers and DevOps engineers and provides diagnostic information, details on the extent of the problem, and intelligent remediation recommendations to help customers quickly resolve the issue. Amazon DevOps Guru for RDS is available for Amazon Aurora MySQL-Compatible and PostgreSQL-Compatible Editions in supported AWS Regions.
Amazon DevOps Guru is an ML-powered service that makes it easy to improve an application’s operational performance and availability. By analyzing application metrics, logs, events, and traces, Amazon DevOps Guru identifies behaviors that deviate from normal operating patterns and creates an insight that alerts developers with issue details. When possible, Amazon DevOps Guru also provides proposed remedial steps via Amazon Simple Notification Service (SNS) and partner integrations, like Atlassian Opsgenie and PagerDuty.
AWS Config now supports 15 new resource types
AWS Config now supports 15 new resource types including Amazon SageMaker, Elastic Load Balancing, AWS Batch, AWS Step Functions, AWS Identity and Access Management Access Analyzer, Amazon WorkSpaces, Amazon Route 53 Resolver, Amazon Managed Streaming for Apache Kafka, and AWS Database Migration Service.
With this launch, you can now use AWS Config to monitor configuration data for the newly supported resource types in your AWS account. AWS Config provides a detailed view of the configuration of AWS resources in your AWS account, including how resources were configured and how the configuration changes over time.
Get started by enabling AWS Config in your account using the AWS Config console or the AWS Command Line Interface (AWS CLI). Select the newly supported resource types for which you want to track configuration changes. If you previously configured AWS Config to record all resource types, then the new resources will be automatically recorded in your account. AWS Config support for the new resources is available to AWS Config customers in all regions where the underlying resource type is available. To view a complete list of all supported types, see the supported resource types page. A minimal sketch of adding some of the new types to an existing recorder follows the resource list below.
Newly supported resource types:
1. AWS::SageMaker::Model
2. AWS::StepFunctions::StateMachine
3. AWS::ElasticLoadBalancingV2::Listener
4. AWS::Batch::JobQueue
5. AWS::Batch::ComputeEnvironment
6. AWS::StepFunctions::Activity
7. AWS::AccessAnalyzer::Analyzer
8. AWS::WorkSpaces::Workspace
9. AWS::WorkSpaces::ConnectionAlias
10. AWS::Route53Resolver::ResolverRule
11. AWS::Route53Resolver::ResolverEndpoint
12. AWS::Route53Resolver::ResolverRuleAssociation
13. AWS::MSK::Cluster
14. AWS::DMS::EventSubscription
15. AWS::DMS::ReplicationSubnetGroup
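If your recorder tracks an explicit list of resource types rather than all supported types, the new types need to be added to that list. Below is a minimal boto3 sketch under that assumption; it reads the existing recorder, appends a few of the newly supported types (an illustrative subset), and restarts recording.

```python
# Hedged sketch: add a few of the newly supported resource types to an existing
# AWS Config recorder that records an explicit list of types.
import boto3

config = boto3.client("config")

# Assumes at least one configuration recorder already exists in this account/region.
recorder = config.describe_configuration_recorders()["ConfigurationRecorders"][0]

extra_types = [
    "AWS::SageMaker::Model",
    "AWS::StepFunctions::StateMachine",
    "AWS::Batch::JobQueue",
]

group = recorder.get("recordingGroup", {})
if not group.get("allSupported"):
    # Only relevant when recording an explicit list; "record all" picks these up automatically.
    group["resourceTypes"] = sorted(set(group.get("resourceTypes", [])) | set(extra_types))
    recorder["recordingGroup"] = group
    config.put_configuration_recorder(ConfigurationRecorder=recorder)

config.start_configuration_recorder(ConfigurationRecorderName=recorder["name"])
```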
Amazon SageMaker Data Wrangler now supports PySpark and Altair code snippets
This week, AWS are making it faster and easier to prepare and visualize data using PySpark and Altair with support for code snippets in Amazon SageMaker Data Wrangler. Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes. With SageMaker Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface. With SageMaker Data Wrangler’s data selection tool, you can quickly select data from multiple data sources, such as Amazon S3, Amazon Athena, Amazon Redshift, AWS Lake Formation, Amazon SageMaker Feature Store, Databricks, and Snowflake.
Now you can prepare and visualize data faster using PySpark and Altair code snippets in Amazon SageMaker Data Wrangler. PySpark is an interface for Apache Spark in Python. Altair is a declarative statistical visualization library for Python that is based on Vega and Vega-Lite. Previously, data scientists using Data Wrangler would start from a blank editor or search the internet for code snippets if they wanted to write code in PySpark or Altair to prepare and visualize their data. Now, data scientists who wish to use PySpark to write a custom transform in SageMaker Data Wrangler can search from over 30 PySpark code snippets for data processing needs such as dropping rows, bulk renaming, casting and reorganizing columns, and filtering text columns for values that include a specific string. In addition, data scientists who wish to write Altair code to create visualizations in SageMaker Data Wrangler can search from Altair code snippets to create heat maps, binned scatter plots, and filled step charts from within SageMaker Data Wrangler.
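To give a flavour of what these snippets look like, here is a hedged example of a PySpark custom transform as you might write it in Data Wrangler, where the tool supplies the current DataFrame as df and expects the transformed result assigned back to df; the column names are placeholders.

```python
# Sketch of a Data Wrangler custom transform written in PySpark. Data Wrangler
# provides the input DataFrame as `df`; the column names below are illustrative.
from pyspark.sql import functions as F

# Drop rows missing a price, rename a column, and cast a string column to double.
df = df.dropna(subset=["price"])
df = df.withColumnRenamed("cust_id", "customer_id")
df = df.withColumn("price", F.col("price").cast("double"))

# Keep only rows whose description contains a specific string.
df = df.filter(F.col("description").contains("refurbished"))
```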
AWS Service Catalog's Application Registry now supports cross-account applications
This week, AWS Service Catalog announced support for cross-account AppRegistry applications and attribute groups. With this release, applications can now be shared within your AWS Organization enabling recipient accounts to associate their local resources to shared applications. If you have application resources deployed in more than one account within your AWS Organization, you can now maintain a single repository of your applications and application metadata.
You first enable AWS Resource Access Manager (RAM) for your AWS Organization, a service that enables customers to easily and securely share AWS resources across accounts in an Organization. Once enabled, you use RAM to share your application to your Organization, organizational unit (OU), and accounts. In order to manage your application resources, AppRegistry creates and maintains an AWS Resource Group in every account where the shared application has resources. In these accounts you can navigate from AppRegistry to AWS Systems Manager Application Manager to view your application resources, monitor the application operational and compliance status, view operational items, and perform runbooks against application stacks or individual resources.
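As a rough illustration of that flow, the sketch below shares an application with an organizational unit using AWS RAM and then, from a recipient account, associates a local CloudFormation stack with the shared application. The ARNs, IDs, and stack name are placeholders.

```python
# Hedged sketch: share an AppRegistry application via AWS RAM, then associate a
# local CloudFormation stack with it from a recipient account.
import boto3

ram = boto3.client("ram")
appregistry = boto3.client("servicecatalog-appregistry")

APP_ARN = "arn:aws:servicecatalog:us-east-1:111122223333:/applications/0abcdef1234567890"  # placeholder
OU_ARN = "arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-root-exampleouid"        # placeholder

# 1) In the owning account: share the application with an organizational unit.
ram.create_resource_share(
    name="shared-appregistry-app",
    resourceArns=[APP_ARN],
    principals=[OU_ARN],
    allowExternalPrincipals=False,
)

# 2) In a recipient account: associate a local stack's resources with the shared app.
appregistry.associate_resource(
    application=APP_ARN,          # the application ID or name also works
    resourceType="CFN_STACK",
    resource="my-service-stack",  # local CloudFormation stack name
)
```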
Amazon SageMaker Canvas announces support for VPC endpoints
Amazon SageMaker Canvas now supports VPC endpoints enabling secure, private connectivity to other AWS services. SageMaker Canvas is a visual point-and-click service that enables business analysts to generate accurate ML models for insights and predictions on their own — without requiring any machine learning experience or having to write a single line of code.
Amazon SageMaker Canvas runs in an Amazon SageMaker Studio managed VPC by default. You can also run Canvas in your own VPC when you create a SageMaker Studio domain in VPC only mode. This allows connectivity from the Studio domain running Canvas to your VPC without traversing the public internet. Starting today, you can extend this secure connection to other AWS services using VPC endpoints, eliminating the need for an internet gateway, a network address translation (NAT) instance, or a VPN connection. With support for VPC endpoints, you can now securely connect from SageMaker Canvas to Amazon S3, Amazon Redshift, Amazon Forecast, Amazon CloudWatch, and Amazon CloudWatch Logs.
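For example, an interface VPC endpoint for one of these services can be created ahead of time so Canvas traffic stays inside your VPC. A minimal sketch using Amazon Forecast as the target; the IDs and region are placeholders.

```python
# Illustrative sketch: create an interface VPC endpoint so SageMaker Canvas calls
# to Amazon Forecast stay on the VPC instead of traversing the public internet.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.forecast",
    SubnetIds=["subnet-0123456789abcdef0"],        # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],     # placeholder security group
    PrivateDnsEnabled=True,
)
```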
VPC support for Amazon SageMaker Canvas including VPC endpoints is available in all AWS regions where Canvas is supported.
Amazon Chime SDK now supports 100 webcam video streams
The Amazon Chime SDK now supports up to 100 webcam video streams per WebRTC session. The Amazon Chime SDK lets developers add intelligent real-time audio, video, and screen share to their web and mobile applications. Each client application can select up to 25 webcam video streams to display, enabling developers to create immersive video experiences that are bespoke for each user.
Developers have the flexibility to create tailored experiences for each session participant based on their application use case and the participant’s role. For example, a teacher may have a gallery view of students that automatically rotates through all students while separately displaying any actively talking student. Meanwhile, a student may have the webcam displayed for the teacher, any actively talking classmate, and a few friends in the class they have selected and pinned for continuous viewing.
Developers use the client-side video prioritization policy to select up to 25 webcam video streams to display. Video streams are connected in priority order until all streams are connected or downlink bandwidth is exhausted. Suppose the client’s network becomes constrained or congested. In that case, video streams are paused in reverse priority order, releasing bandwidth for higher priority streams to continue in high quality. If a webcam video is simulcast in high and low-bitrate streams, the client will switch to the low bitrate stream first, only pausing the stream as a last resort.
To enable up to 100 webcam video streams, developers must first request an increase to the service quota “Video streams per meeting”.
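A hedged sketch of that quota request using the Service Quotas API is shown below; the service code "chime" and the exact quota name are assumptions, so the code looks the quota up by name before requesting the increase.

```python
# Hedged sketch: find the "Video streams per meeting" quota and request an increase
# to 100 via Service Quotas. Verify the service code and quota name in your account.
import boto3

sq = boto3.client("service-quotas")

quota_code = None
paginator = sq.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="chime"):  # assumed service code
    for quota in page["Quotas"]:
        if quota["QuotaName"] == "Video streams per meeting":
            quota_code = quota["QuotaCode"]

if quota_code:
    sq.request_service_quota_increase(
        ServiceCode="chime",
        QuotaCode=quota_code,
        DesiredValue=100.0,
    )
```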
Amazon OpenSearch Service now supports tag-based authorization for data read and write operations
Amazon OpenSearch Service now supports tag-based authorization for HTTP methods, making it easier for you to manage access control for data read and write operations. You can use identity-based policies in AWS Identity and Access Management (IAM) to define permissions for read and write HTTP methods, allowing coarse-grained access control of data on your Amazon OpenSearch Service domains.
Amazon OpenSearch Service already supports tag-based authorization for configuration APIs, enabling you to use resource tags, request tags, or tag keys to allow or deny specific operations such as creating, modifying, or updating Amazon OpenSearch Service domains. With this release, you can also create an identity-based policy in IAM using resource tags that allows or denies access to specific HTTP methods.
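As an illustration, the sketch below creates an identity-based policy that allows only read HTTP methods on domains carrying a particular tag; the tag key, tag value, and account details are placeholders.

```python
# Illustrative sketch: identity-based policy allowing read-only HTTP methods on
# OpenSearch Service domains tagged team=analytics.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["es:ESHttpGet", "es:ESHttpHead"],
            "Resource": "arn:aws:es:us-east-1:111122223333:domain/*",  # placeholder
            "Condition": {"StringEquals": {"aws:ResourceTag/team": "analytics"}},
        }
    ],
}

iam.create_policy(
    PolicyName="opensearch-analytics-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```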
Amazon Connect launches API to retrieve agents’ current activity
Amazon Connect now provides an API to programmatically access real-time details about agents’ current activity, such as current status (e.g., “Available”). If an agent is handling a contact, details include the contact’s state (e.g., “Connected” or “Missed”) and duration. Using this API, businesses can build custom dashboards for contact center supervisors to monitor their agents’ activity in real-time. For example, if more agents are needed to handle contacts, you can use this new API to identify agents who are on break the longest and reach out to them to switch to “Available” or change it programmatically using PutUserStatus.
To learn more about the GetCurrentUserData API, see the API documentation. This API is available in all AWS regions where Amazon Connect is offered. You can find out more about Amazon Connect, the easy to use omnichannel cloud contact center, by visiting the Amazon Connect website.
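A minimal boto3 sketch of calling the new API for a single queue might look like the following; the instance and queue IDs are placeholders, and the response fields shown are a small subset.

```python
# Sketch: list the current activity of agents serving one queue using the new
# GetCurrentUserData API. IDs are placeholders.
import boto3

connect = boto3.client("connect")

response = connect.get_current_user_data(
    InstanceId="11111111-2222-3333-4444-555555555555",
    Filters={"Queues": ["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"]},
)

for user_data in response.get("UserDataList", []):
    user = user_data.get("User", {})
    status = user_data.get("Status", {})
    # Print the agent ID and when their current status started.
    print(user.get("Id"), status.get("StatusStartTimestamp"))
```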
Amazon Connect launches 15 minute scheduled reports
Amazon Connect now provides the ability for customers to schedule historical metric reports that generate the latest data every 15 minutes. Historical metrics reports include data about completed customer contacts, agent activity, and performance, such as how many contacts an agent handled. This helps customers quickly identify insights into queue, routing profile, and agent performance. These insights can be used in a variety of ways, including evaluating and adjusting contact center forecasting and staffing plans.
Fifteen minute scheduled reports are available in all AWS regions where Amazon Connect is offered.
Amazon Aurora Serverless v1 supports in-place upgrade from MySQL 5.6 to 5.7
Amazon Aurora Serverless v1 now supports in-place upgrade from MySQL 5.6 to 5.7. Instead of backing up and restoring the database to the new version, you can upgrade with just a few clicks using the Amazon RDS Management Console or using the latest AWS SDK or CLI. No new cluster is created in the process which means you keep the same endpoints and other characteristics of the cluster. The upgrade completes in minutes as no data needs to be copied to a new cluster volume. The upgrade can be applied immediately or during the maintenance window. Your database cluster will be unavailable during the upgrade. Review the Aurora documentation to learn more.
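Using the AWS SDK, the in-place upgrade is a single ModifyDBCluster call. A hedged sketch follows; the target EngineVersion string is an assumption, so confirm the exact 5.7-compatible version available in your Region first.

```python
# Hedged sketch: in-place major version upgrade of an Aurora Serverless v1 cluster
# from MySQL 5.6 to 5.7. Cluster identifier and engine version are placeholders.
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="my-serverless-v1-cluster",  # placeholder
    EngineVersion="5.7.mysql_aurora.2.08.3",          # assumed 5.7-compatible target version
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,                            # or schedule it for the maintenance window
)
```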
Amazon EC2 C6i instances are now available in an additional region
Starting this week, Amazon EC2 C6i instances are available in AWS Region Europe (Frankfurt). C6i instances are powered by 3rd generation Intel Xeon Scalable processors (code named Ice Lake) with an all-core turbo frequency of 3.5 GHz, offering up to 15% better compute price performance over C5 instances for a wide variety of workloads, and always-on memory encryption using Intel Total Memory Encryption (TME). Designed for compute-intensive workloads, C6i instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. These instances are an ideal fit for compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding.
Amazon DynamoDB Standard Infrequent Access table class is now available in AWS GovCloud (US) Regions
Amazon DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class is now available in the AWS GovCloud (US) Regions. The DynamoDB Standard-IA table class is ideal for use cases that require long-term storage of data that is infrequently accessed, such as application logs, medical records, and financial transactions.
Now, you can optimize the costs of your DynamoDB workloads based on your tables’ storage requirements and data access patterns. The new DynamoDB Standard-IA table class offers 60 percent lower storage costs than the existing DynamoDB Standard tables, making it the most cost-effective option for tables with storage as the dominant table cost. The existing DynamoDB Standard table class offers 20 percent lower throughput costs than the DynamoDB Standard-IA table class. DynamoDB Standard remains your default table class and the most cost-effective option for the wider variety of tables that store frequently accessed data with throughput as the dominant table cost. You can switch between DynamoDB Standard and DynamoDB Standard-IA table classes with no impact on table performance, durability, or availability and without changing your application code. For more information about using DynamoDB Standard-IA, see Table Classes in the DynamoDB Developer Guide.
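Switching an existing table is a one-line API call. A minimal sketch, with the table name as a placeholder:

```python
# Sketch: move an infrequently accessed table to the Standard-IA table class.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="application-logs",              # placeholder table name
    TableClass="STANDARD_INFREQUENT_ACCESS",   # switch back with TableClass="STANDARD"
)
```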
AWS Elastic Disaster Recovery is now available in 12 additional Regions
Starting this week, AWS Elastic Disaster Recovery (DRS) is available in 12 additional Regions: US West (N. California), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Canada (Central), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (São Paulo).
Elastic Disaster Recovery is AWS' recommended service for disaster recovery for AWS. It helps minimize downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. With Elastic Disaster Recovery, you can recover your applications on AWS from physical infrastructure, VMware vSphere, Microsoft Hyper-V, and cloud infrastructure. You can also use Elastic Disaster Recovery to recover Amazon EC2 instances in a different AWS Region.
Elastic Disaster Recovery replicates and recovers a wide range of applications, including critical databases such as Oracle, MySQL, and SQL Server, and enterprise applications such as SAP. It uses a unified process for drills, recovery, and failback, so you do not need application-specific skillsets to operate the service.
Bottlerocket adds ECS variant to support GPU-based Amazon EC2 instance types powered by NVIDIA
This week, AWS announced the availability of a Bottlerocket variant that supports NVIDIA GPU-based Amazon EC2 instance types on Amazon Elastic Container Service (Amazon ECS). Bottlerocket is a Linux-based operating system that is purpose-built to run container workloads. Customers can now benefit from using the same container-focused host operating system for both their non-GPU and GPU workloads while using ECS, including machine learning, video encoding, and streaming workloads. This helps customers standardize on a single operating system that utilizes the underlying specialized compute hardware.
In March, AWS released the Bottlerocket variant for Amazon Elastic Kubernetes Service (Amazon EKS) supporting NVIDIA GPU-accelerated workloads. With the availability of the Bottlerocket ECS variant, customers can now use Bottlerocket with NVIDIA GPUs on two popular orchestration services, Amazon EKS and Amazon ECS. The new Bottlerocket AMI includes the software components required to run containerized accelerated workloads built into the base image, for customers using ECS as their container orchestration service. This configuration enables secure, seamless installation of the required NVIDIA drivers and their updates, improves time to node-ready state, and reduces dependencies on external tools and repositories.
The new Bottlerocket AMI is available in all AWS commercial and GovCloud regions at no additional cost.
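If you launch ECS container instances yourself, you can resolve the latest AMI for the new variant from its public SSM parameter. The parameter path below follows Bottlerocket's published naming convention but is an assumption here, so verify the variant and architecture segments for your setup.

```python
# Hedged sketch: look up the latest Bottlerocket ECS NVIDIA variant AMI via SSM.
# The parameter path is an assumed example of the Bottlerocket naming convention.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

param = ssm.get_parameter(
    Name="/aws/service/bottlerocket/aws-ecs-1-nvidia/x86_64/latest/image_id"
)
print(param["Parameter"]["Value"])  # AMI ID to reference in your ECS Auto Scaling group
```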
Announcing enhanced integration with Service Quotas for Amazon DynamoDB
Amazon DynamoDB now enables you to proactively manage your account and table quotas through enhanced integration with Service Quotas. Using Service Quotas, you can now view the current values of all your DynamoDB quotas. You can also monitor the current utilization of your account-level quotas.
Now, you can create Amazon CloudWatch alarms to notify you when your utilization of a given quota exceeds a configurable threshold. This enables you to better adapt your utilization based on your applied quota values and automate your quota increase requests.
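As a rough sketch, a CloudWatch alarm on account-level utilization could be created as follows; the metric name and namespace reflect DynamoDB's account-level CloudWatch metrics but are assumptions to verify, and the SNS topic is a placeholder.

```python
# Hedged sketch: alarm when account-level provisioned write capacity utilization
# exceeds 80% of the applied quota. Verify the metric name in your account.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="dynamodb-account-write-capacity-80pct",
    Namespace="AWS/DynamoDB",
    MetricName="AccountProvisionedWriteCapacityUtilization",  # assumed metric name
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:quota-alerts"],  # placeholder topic
)
```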
DynamoDB enhanced integration with Service Quotas is now available in all AWS Regions where Service Quotas is available at no additional cost.
Easily customize your notifications while using Amazon Lookout for Metrics
AWS are excited to announce that you can now add filters to alerts and edit existing alerts in Amazon Lookout for Metrics. With this launch, you can add filters to your alert configuration so you are only notified about the anomalies that matter most to you. You can also modify existing alerts as your notification needs evolve.
Amazon Lookout for Metrics uses machine learning (ML) to automatically monitor the metrics that are most important to businesses with greater speed and accuracy than traditional methods used for anomaly detection. The service also makes it easier to diagnose the root cause of anomalies like unexpected dips in revenue, high rates of abandoned shopping carts, spikes in payment transaction failures, increases in new user sign-ups, and many more.
An alert is an optional feature that allows you to set up notifications for anomalies in your datasets, delivered through Amazon Simple Notification Service (Amazon SNS) and AWS Lambda functions. Previously, when you set up an alert, you were notified about all detected anomalies above the severity score you selected. Now, with filters and edits in the alert system, different business units within your organization can specify the types of alerts they receive, so they can quickly identify the anomalies most relevant to their business.
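A hedged sketch of creating a filtered alert with boto3 is shown below; the detector ARN, role, topic, measure, and dimension values are all placeholders.

```python
# Hedged sketch: create a Lookout for Metrics alert that only notifies on anomalies
# for one measure and one dimension value. All ARNs and names are placeholders.
import boto3

lfm = boto3.client("lookoutmetrics")

lfm.create_alert(
    AlertName="revenue-anomalies-us-only",
    AlertSensitivityThreshold=70,
    AnomalyDetectorArn="arn:aws:lookoutmetrics:us-east-1:111122223333:AnomalyDetector:demo",
    Action={
        "SNSConfiguration": {
            "RoleArn": "arn:aws:iam::111122223333:role/lookoutmetrics-sns-role",
            "SnsTopicArn": "arn:aws:sns:us-east-1:111122223333:anomaly-alerts",
        }
    },
    AlertFilters={  # new with this launch
        "MetricList": ["revenue"],
        "DimensionFilterList": [
            {"DimensionName": "marketplace", "DimensionValueList": ["US"]}
        ],
    },
)
```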
AWS Transfer Family expands server configuration options to support a broader set of clients
Starting this week, AWS Transfer Family customers can ignore the SETSTAT command and customize how they want to process TLS session resumption. These new features enable customers to support a broader set of clients without making any client-side changes.
AWS Transfer Family provides fully managed file transfers over SFTP, FTPS, and FTP for Amazon S3 and Amazon EFS. When uploading files, file transfer clients can issue the SETSTAT command to change attributes of remote files; however, the SETSTAT command is not compatible with object storage systems, resulting in errors when these clients upload files to Amazon S3. With this launch, customers can now configure their Transfer Family servers to ignore the SETSTAT command so files are uploaded without SETSTAT triggering any errors. Customers who want to use SETSTAT to preserve the timestamp of the original file or modify other file attributes should use Amazon EFS as backend storage with Transfer Family.
Additionally, AWS customers using FTPS Transfer Family servers can choose whether they want to process TLS session resumption requests. TLS session resumption can help to improve security and enhance performance by allowing clients to reuse recently negotiated TLS connections. However, some clients do not support TLS session resumption, and as a result, cannot establish connections to FTPS servers that enforce it. Now, AWS customers have the option to enable, disable, or enforce TLS session resumption on their FTPS servers, helping customers expand client compatibility options.
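Both options are exposed through the server's protocol details. A minimal sketch of updating an existing server, with the server ID as a placeholder:

```python
# Sketch: configure a Transfer Family server to ignore SETSTAT on uploads and
# enforce TLS session resumption.
import boto3

transfer = boto3.client("transfer")

transfer.update_server(
    ServerId="s-0123456789abcdef0",              # placeholder server ID
    ProtocolDetails={
        "SetStatOption": "ENABLE_NO_OP",          # ignore SETSTAT for S3-backed servers
        "TlsSessionResumptionMode": "ENFORCED",   # or "ENABLED" / "DISABLED"
    },
)
```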
Amazon EC2 C6gn instances now available in additional regions
Starting this week, Amazon EC2 C6gn instances are available in the Europe (Paris, Milan), Asia Pacific (Seoul), and Middle East (Bahrain) regions.
Based on the AWS Nitro System, C6gn instances are powered by Arm-based AWS Graviton2 processors and feature up to 100Gbps network bandwidth, delivering up to 40% better price-performance versus comparable current generation x86-based network optimized instances for applications requiring high network bandwidth such as high performance computing (HPC), network virtual appliances, data lakes and data analytics.
These instances can utilize the Elastic Fabric Adapter (EFA) for workloads like HPC and video processing that can take advantage of lower network latency with Message Passing Interface (MPI) for at-scale clusters. Workloads on these instances will continue to take advantage of the security, scalability and reliability of Amazon’s Virtual Private Cloud (VPC).
Amazon Keyspaces now helps you monitor table storage costs through Amazon CloudWatch
Amazon Keyspaces (for Apache Cassandra), a scalable, highly available, and fully managed Apache Cassandra-compatible database service, now helps you monitor your table-level storage costs through Amazon CloudWatch.
Amazon Keyspaces helps you run Cassandra workloads more easily by using a fully managed and serverless database service. With Amazon Keyspaces, you don’t need to provision storage upfront and you pay for only the storage that you use. Now, you can use the BillableTableSizeInBytes CloudWatch metric to monitor and track your table storage costs over time. The BillableTableSizeInBytes metric provides the billable storage size of a table by summing up the encoded size of all the rows in the table.
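A small sketch of reading the metric for one table is shown below; the namespace and dimension names follow Amazon Keyspaces' CloudWatch conventions but should be verified, and the keyspace and table names are placeholders.

```python
# Hedged sketch: read BillableTableSizeInBytes for one Amazon Keyspaces table over
# the last day. Keyspace/table names are placeholders.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.datetime.utcnow()
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Cassandra",
    MetricName="BillableTableSizeInBytes",
    Dimensions=[
        {"Name": "Keyspace", "Value": "catalog"},      # placeholder keyspace
        {"Name": "TableName", "Value": "book_awards"}, # placeholder table
    ],
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=3600,
    Statistics=["Maximum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Maximum"])
```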
Amazon EC2 VT1 instances now support the AMD-Xilinx Video SDK 2.0
AWS are excited to announce that the Amazon EC2 VT1 instances now support the AMD-Xilinx Video SDK 2.0, bringing support for GStreamer, 10-bit HDR video, and dynamic encoder parameters. In addition to new features, this new version offers improved visual quality for 4K video, support for a newer version of FFmpeg (4.4), expanded OS/kernel support, and bug fixes.
VT1 instances are the first Amazon EC2 instances that deliver hardware acceleration for video transcoding, and are optimized for workloads such as live streaming, video conferencing, video library optimization, and just-in-time asset transcoding. These instances are powered by the AMD-Xilinx Alveo U30 media accelerator to deliver up to 30% lower cost per stream than Amazon EC2 GPU-based instances and up to 60% lower cost per stream than Amazon EC2 CPU-based instances.
AWS Customers can get started quickly with VT1 instances by launching the AMD-Xilinx Video SDK AMI available on the AWS Marketplace to access a pre-built development environment to test and migrate existing video pipelines built with FFmpeg or GStreamer. In the coming weeks, new AMIs optimized for deploying VT1 instances via Amazon ECS and EKS will be made available via the AWS Marketplace. To take advantage of the new features, existing customers should upgrade to the newest driver version and relaunch their instances.
Amazon QuickSight now provides a drag controller for rows and columns on tables and pivot tables
Amazon QuickSight now gives both authors and readers the flexibility to use a drag controller on table and pivot table visuals. Authors and readers can simply alter column width by dragging from a cell, row header, or column header, at both the parent and leaf level in the case of pivot tables.
Authors can also alter row height by dragging the edge of row headers and cells on tables and pivot tables, and the change is also reflected in the format pane. These changes bring consistency between row and column interactions and an improved user experience.
Announcing support for cross-region search in Amazon OpenSearch Service
Amazon OpenSearch Service now supports cross-cluster search across regions, enabling you to perform searches, aggregations, and visualizations across multiple domains in different regions with a single query.
Previously, with cross-cluster search, you could search across domains within the same region. Now, you can create a secure connection between domains in two different regions and use the connection for search. Cross-cluster search across regions is supported on domains running OpenSearch or Elasticsearch version 7.10. To learn more about cross-cluster search across regions, please refer to the documentation.
Amazon RDS for SQL Server now supports TDE-enabled SQL Server database migration
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports TDE-enabled database migrations using Native Backup/Restore for Microsoft SQL Server. Previously, you would need to disable TDE on your on-premises TDE-enabled SQL Server database in order to migrate to Amazon RDS.
Now you can migrate the backup files of your on-premises database without disabling TDE. You will be able to take a backup of your existing TDE certificate, store it in Amazon S3, and restore it to an existing Amazon RDS instance with the TDE option enabled. From there, you can follow the existing procedures to export a full backup of your on-premises TDE-enabled database, store it in Amazon S3, and restore that backup to your Amazon RDS instance directly using the custom stored procedures offered by Amazon RDS. This feature is available for Amazon RDS for SQL Server Single-AZ instances.
Recycle Bin for EBS snapshots and EC2 AMIs now supports IAM condition keys for resource types
You can now use Identity and Access Management (IAM) condition keys to specify which resource types are permitted in the retention rules created for Recycle Bin. With Recycle Bin, you can retain deleted EBS snapshots and EBS-backed AMIs for a period of time so that you can recover them in the event of an accidental deletion. You can enable Recycle Bin for all or a subset of the snapshots or AMIs in your account by creating one or more retention rules. Each rule also specifies a retention time period. A deleted EBS snapshot or de-registered AMI can be recovered from the Recycle Bin before the expiration of the retention period.
By using condition keys with Recycle Bin's retention rule APIs, you can enforce policies across any or all of your retention rule APIs based on the resource type addressed by your retention rule. This allows you to create separate administrative roles for managing EBS snapshots and EC2 AMIs. You can separate permissions by resource type, such as limiting permissions to only create retention rules for EBS snapshots. The new condition keys for Recycle Bin are available in all regions where Recycle Bin is available.
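As an illustration, the sketch below creates an IAM policy that only permits creating retention rules for EBS snapshots. The rbin:Request/ResourceType condition key and its value are assumptions based on this announcement, so check the Recycle Bin documentation for the exact key names.

```python
# Hedged sketch: IAM policy limiting a role to creating Recycle Bin retention rules
# for EBS snapshots only. Condition key name/value are assumptions to verify.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "rbin:CreateRule",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"rbin:Request/ResourceType": "EBS_SNAPSHOT"}  # assumed key
            },
        }
    ],
}

iam.create_policy(
    PolicyName="rbin-ebs-snapshot-rules-only",
    PolicyDocument=json.dumps(policy_document),
)
```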
AWS Service Catalog announces support for Attribute-Based Access Control (ABAC)
Amazon Web Services (AWS) Service Catalog now supports Attribute-Based Access Control (ABAC), allowing customers to use tags to easily manage access and permissions to AWS resources in Service Catalog. Now, Service Catalog administrators have the ability to define their AWS Identity and Access Management (IAM) policies to grant access and specify finer-grained permissions based on tags shared between AWS resource(s) and IAM users or roles. For example, based on a matching set of tags, an IAM entity (e.g., user or role) may be allowed or denied permission to create resources in their Service Catalog account.
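A generic ABAC sketch is shown below: the policy allows provisioning only when the caller's team principal tag matches the team tag on the resource. The tag key and the single action are illustrative, not a complete Service Catalog policy.

```python
# Illustrative ABAC sketch: allow ProvisionProduct only when the caller's "team"
# principal tag matches the "team" tag on the Service Catalog resource.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["servicecatalog:ProvisionProduct"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="servicecatalog-abac-team-match",
    PolicyDocument=json.dumps(policy_document),
)
```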
Google Cloud Releases And Updates
Source: cloud.google.com
Cloud SQL for PostgreSQL
For enhanced security with built-in authentication, Cloud SQL now lets you set password policies at the instance level.
Several extensions in Cloud SQL for PostgreSQL are now generally available. Additionally, users with the cloudsqlsuperuser role have full access to the pg_largeobject system catalog.
Cloud SQL now enables you to access the pg_shadow view. You can use the pg_shadow view to work with the properties of roles that are marked as rolcanlogin in the pg_authid catalog. For more information, see Access to the pg_shadow view.
GKE
Confidential GKE Nodes is now generally available in GKE version 1.22 and later for stateful workloads using persistent disks, and in all GKE versions for stateless workloads. Use Confidential GKE Nodes to encrypt your workload data in-use through Compute Engine Confidential VMs.
Version 1.21.11-gke.1900 is now the default version in the Stable channel.
Version 1.22.8-gke.202 is now available in the Stable channel.
Control planes and nodes with auto-upgrade enabled in the Stable channel will be upgraded from version 1.20 to version 1.21.11-gke.1900 with this release.
GKE Node System Configuration now supports setting Pod PID limits.
Microsoft Azure Releases And Updates
Source: azure.microsoft.com
You can create an Azure Virtual Network Manager instance in nine more regions and manage your virtual networks at scale across regions, subscriptions, management groups, and tenants globally from a single pane of glass.
You can now redeem an ExpressRoute Direct authorization to create an ExpressRoute Circuit in a different subscription.
Preview remote management capabilities through a single pane of glass in the Azure portal.
Azure Firewall Manager supports the ability to manage DDoS Protection plans and Azure Web Application Firewall (WAF) policies for workloads and applications, within a centralized place, at scale.
Automate alerts, orchestrate business workflows, and build low-code apps with the Azure Data Explorer connector for Power Automate, Logic Apps, and Power Apps.
Learn how Private Link support for Application Gateway drives more secure network access.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts or stand-alone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here.