Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday 1st July 2022.
This week at Hava we've been putting the final touches to the embedded viewer upgrade. Now you can embed a full diagram, a light version with suppressed metadata, or a PNG. All three methods self-update, so if you insert an infrastructure diagram in a wiki, for instance, when your environment changes, so does the PNG, all hands-free, which is the way we like it.
Keep an eye out for a blog post announcing GA on Monday.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
Amazon Connect now supports personalization of the customer experience using Lex sentiment analysis in flows
Amazon Connect now allows you to further personalize the automated, self-service customer experience by leveraging Amazon Lex customer sentiment analysis as a branch within your flows. Amazon Lex allows customers to create intelligent chatbots that turn their Amazon Connect flows into natural conversations. With this launch, you can now build flows based on whether the customer expresses positive or negative utterances to your Lex bot. For example, you may want customers who express positive sentiment to be presented with additional upsell opportunities, or customers who express negative sentiment to be put directly in a queue to speak with an agent. The new functionality can be set up using the "Get Customer Input" or the "Check contact attributes" flow blocks. In addition, all Lex-related attributes (e.g., Intent, Slots, Sentiment) within flow blocks are now consolidated under one "type" within attribute selection to help simplify building your Lex experience within flows.
This new feature is available in all AWS regions where Amazon Connect is available.
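The sentiment itself comes back from Amazon Lex alongside each interpretation, and the flow branches on it. As a rough sketch (the response shape below mirrors a Lex V2 runtime response, but the field values, thresholds, and branch names are illustrative assumptions, not Connect's actual flow-block configuration):

```python
# Sketch: branch on the sentiment Amazon Lex V2 returns with an
# interpretation. The dict below is a trimmed, illustrative response.

def route_on_sentiment(lex_response, threshold=0.7):
    """Return a branch name based on the top interpretation's sentiment."""
    interp = lex_response["interpretations"][0]
    sentiment = interp.get("sentimentResponse", {})
    label = sentiment.get("sentiment", "NEUTRAL")
    score = sentiment.get("sentimentScore", {}).get(label.capitalize(), 0.0)

    if label == "NEGATIVE" and score >= threshold:
        return "transfer_to_agent"       # unhappy caller: go straight to queue
    if label == "POSITIVE":
        return "offer_upsell"            # happy caller: present upsell options
    return "continue_self_service"

# Example response fragment, trimmed to the sentiment fields:
sample = {
    "interpretations": [{
        "intent": {"name": "CheckBalance"},
        "sentimentResponse": {
            "sentiment": "NEGATIVE",
            "sentimentScore": {"Negative": 0.91, "Positive": 0.02,
                               "Neutral": 0.05, "Mixed": 0.02},
        },
    }]
}
```

In a real flow the branching is configured in the "Get Customer Input" or "Check contact attributes" blocks rather than in code; this just shows the decision logic.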
Amazon Connect now supports branching of flows based on Lex confidence scores
Amazon Connect now allows you to further personalize the automated, self-service customer experience using Amazon Lex intent confidence scores as a branch within your flows. Amazon Lex allows customers to create intelligent chatbots that turn their Amazon Connect flows into natural conversations. By branching flows on Lex confidence scores, you can present the right solutions to your customers to help solve their issues faster. For example, when a confidence score is high you may want to present customers with a self-service option immediately rather than requesting additional information or transferring them to an agent. This new functionality can be set up using the "Check contact attributes" flow block.
Integration of AWS Well-Architected Tool with AWS Organizations
AWS Well-Architected Tool now integrates with AWS Organizations, enabling cloud architects to share their workloads and custom lenses more broadly across their organization. AWS Organizations is an account management service that allows customers to consolidate multiple AWS accounts into a single, centrally managed organization. This update will increase efficiency and make it easier to share lenses and workloads with multiple accounts.
Well-Architected customers will be able to give access to specific teams or departments by sharing their workloads and custom lenses across different subsets within an organization. This will promote collaboration between teams and will save customers time, allowing them to scale faster across a wider audience. Additionally, customers will be able to better control access to workloads and custom lenses.
This new capability is available to customers and AWS Partners at no additional charge in the AWS Management Console and is offered in all Regions where AWS Well-Architected Tool is available. The AWS Well-Architected Tool is designed to help you review the state of your applications and workloads, and provides a central place for architectural best practices and tools to improve decision-making, minimize risks, and reduce costs.
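Programmatically, sharing still goes through the Well-Architected Tool's share APIs; what's new is that the share target can be an organization or OU rather than a single account. A minimal sketch (the workload ID and ARN are placeholders, and the helper name is ours):

```python
# Sketch: share a Well-Architected workload with an entire organization
# (or an OU) instead of account-by-account. IDs and ARNs are placeholders.

def workload_share_params(workload_id, shared_with,
                          permission="READONLY", token="share-1"):
    """Build the parameters for wellarchitected.create_workload_share."""
    if permission not in ("READONLY", "CONTRIBUTOR"):
        raise ValueError("permission must be READONLY or CONTRIBUTOR")
    return {
        "WorkloadId": workload_id,
        "SharedWith": shared_with,      # org/OU ARN, or an account ID
        "PermissionType": permission,
        "ClientRequestToken": token,    # idempotency token
    }

def share(params):
    import boto3  # requires credentials; not executed in this sketch
    return boto3.client("wellarchitected").create_workload_share(**params)
```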
Amazon QuickSight Authors can now learn and experience Q before signing up
QuickSight Authors can now try, learn, and experience Q before signing up. Authors can choose from six sample topics to explore relevant dashboard visualizations and ask questions of the data, getting a full sense of Q's capabilities before committing.
QuickSight Authors can log in to their QuickSight accounts and click "Q Topics" in the left-hand navigation to explore Q. They can select one of six sample topics (Product Sales, Marketing Campaigns, Financial Services, Clinical Trials, Student Enrollment Statistics, and AWS Cost and Usage), explore the visualizations and questions enabled on that data, and interact with them or ask their own natural-language questions before signing up for Q.
Amazon Interactive Video Service launches edge location in Colombia
Amazon Interactive Video Service (Amazon IVS) announces its first point of presence (PoP) in Colombia. The new edge location will enable streamers and viewers based in Colombia to enjoy lower latency, better video quality, and increased capacity.
Amazon EventBridge cross-Region routing is now available in AWS GovCloud (US) Regions
Amazon EventBridge cross-Region routing allows customers to consolidate events from numerous Regions into one central Region. This makes it easier for AWS customers to centralize their events in the destination Region and write code that reacts to them, or to replicate events from source to destination Regions to help synchronize data across Regions. This week, AWS are excited to announce availability of cross-Region routing in AWS GovCloud (US) Regions.
EventBridge is a serverless event bus that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up routing rules to determine where to send your data, allowing for application architectures to react to changes in your data and systems as they occur. Amazon EventBridge can make it easier to build event-driven applications by facilitating event ingestion, delivery, security, authorization, and error handling.
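In practice, cross-Region routing is just a rule whose target is an event bus ARN in another Region, delivered via an IAM role. A hedged sketch (rule pattern, names, and ARNs below are placeholders, not a prescribed setup):

```python
# Sketch: route matching events from a source Region to a central event bus
# in another Region. All names and ARNs are placeholders.

def cross_region_rule(rule_name, central_bus_arn, role_arn):
    """Build put_rule / put_targets parameters for cross-Region routing."""
    rule = {
        "Name": rule_name,
        "EventPattern": '{"source": ["aws.ec2"]}',   # example pattern
        "State": "ENABLED",
    }
    # The target is simply the ARN of the event bus in the destination
    # Region; EventBridge assumes the role to deliver events there.
    targets = {
        "Rule": rule_name,
        "Targets": [{"Id": "central-bus",
                     "Arn": central_bus_arn,
                     "RoleArn": role_arn}],
    }
    return rule, targets

def apply(rule, targets, source_region):
    import boto3  # requires credentials; not executed in this sketch
    events = boto3.client("events", region_name=source_region)
    events.put_rule(**rule)
    events.put_targets(**targets)
```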
AWS CloudFormation Guard 2.1 is now generally available
AWS CloudFormation announces the general availability (GA) of AWS CloudFormation Guard 2.1 (cfn-guard), which enhances Guard 2.0 with new features. CloudFormation Guard is an open-source domain-specific language (DSL) and command line interface (CLI) that helps enterprises keep their AWS infrastructure and application resources in compliance with their company policy guidelines. CloudFormation Guard provides compliance administrators with a simple, policy-as-code language to define rules that can check for both required and prohibited resource configurations. It enables developers to validate their templates (CloudFormation Templates, K8s configurations, and Terraform JSON configurations) against those rules.
This GA release is backward compatible with cfn-guard 2.0, and enhances the developer experience. In addition to improving the stability and performance of cfn-guard, this release introduces four new features. First, developers have the option of viewing verbose, color-coded output during template validation that shows code snippets on failure, allowing users to pinpoint corrective actions. Second, this release introduces parameterized rules, wherein policy authors can write a common polymorphic rule that changes its behavior transparently based on the type of template being passed. For example, developers can write cfn-guard rules that work across AWS CloudFormation Templates, Terraform plans that use AWS CodeCommit, and AWS Config for asserting conditions. Third, this release introduces directory bundle support for running validations against templates, allowing users to pass a directory as input to scan all supported file types containing guard rules, data templates, or input parameters. Fourth, this release introduces dynamic data lookup for inspection via multiple data files. For example, users can pull a list of allowed security groups from a template, read those values into a ruleset, and validate the template. The same ruleset can be reused by passing stage-specific lookup values as input (DEV vs. PROD).
Amazon Pinpoint launches journey schedule for more precise communication delivery
Journeys in Amazon Pinpoint now allow customers to define a schedule for channel communications based on day of the week and day of the year. In addition, Amazon Pinpoint has added two new journey sending limits to help customers control the volume of communications sent to a user. Amazon Pinpoint journeys are multi-step campaigns that send users down communication paths based on their actions or attributes. Journeys can use multiple channels, including SMS, email, push, and voice. Journeys are intended for customers who have user engagement use cases and want to send targeted communications that drive high-value user actions.
Using the journey schedule, customers can set the start and end time for communication sends for each day of the week. For example, a customer may set different delivery hours for weekdays and weekends. Customers can also set overrides for certain days of the year, for example setting different hours, or not sending at all, during public holidays. More precise delivery times mean users are more likely to engage with the communication, increasing metrics such as open rates for SMS and email, as well as pick-up rates for voice calls. Journey sending limits can be used to control the volume of communications sent to a user for an individual journey and across all journeys. For example, a customer can ensure they only send a maximum of three messages to a user within a seven-day period across all journeys. This helps prevent a user from receiving too many communications within a short period of time, which can hurt a customer's brand and lower response rates.
AWS SAM Accelerate is now generally available - quickly test code changes against the cloud
The AWS Serverless Application Model (SAM) announces general availability of AWS SAM Accelerate. The AWS SAM Command Line Interface (CLI) is a developer tool that makes it easier to build, locally test, package, and deploy serverless applications. AWS SAM Accelerate is a new capability of AWS SAM CLI that makes it easier for developers to test code changes against a cloud-based environment, reducing the time from local iteration to production-readiness.
AWS SAM Accelerate allows developers to bring their rapid iteration workflows to serverless application development, achieving the same levels of productivity they're used to when testing locally, while testing against a realistic application environment in the cloud. AWS SAM Accelerate synchronizes infrastructure and code changes on a developer's local workspace with a cloud environment in near real time: code changes are updated in seconds in AWS Lambda; API definition changes in Amazon API Gateway; state machine updates to AWS Step Functions; and infrastructure changes are deployed via infrastructure-as-code tooling such as CloudFormation. AWS SAM Accelerate also supports synchronizing resources defined in CloudFormation Nested Stacks.
AWS Database Migration Service now supports VPC source and target endpoints
AWS Database Migration Service (AWS DMS) now supports virtual private cloud (VPC) endpoints as sources and targets. AWS DMS can now connect to any AWS service with VPC endpoints, so long as routes to those services are explicitly defined in the AWS DMS VPC.
By supporting AWS VPC endpoints, AWS DMS makes it easier to maintain end-to-end network security for replications without the additional networking configuration and setup that was previously required for replications on VPCs. Customers can now take advantage of the end-to-end security capabilities of AWS VPCs when using AWS DMS.
To learn more, see Setting up a network for a replication instance.
Amazon FinSpace releases APIs to assign granular user permissions
With the release of new granular permission APIs, Amazon FinSpace customers can now fully manage user access within their FinSpace environment using the AWS SDK and CLI. This allows customers to integrate configuration of FinSpace access controls into their identity orchestration workflows to keep FinSpace in sync with their organization’s access policies.
For example, when a user joins an equity research team that uses FinSpace, they can be automatically enabled in FinSpace and set up with access to the datasets used for equities analytics. If this user later moves to the fixed income analysis team, their access to the original datasets can be removed and they can be given access to the datasets used by the fixed income team. Finally, if this user leaves the customer's organization, their access can be automatically deactivated in FinSpace.
Amazon FinSpace is a fully managed analytics service for financial services customers that enables analysts to access and analyze data from multiple locations, such as internal data stores like portfolio, actuarial, and risk management systems, as well as petabytes of data from third-party data sources, such as historical securities prices from stock exchanges. With FinSpace, customers can store, catalog, and prepare data at scale, reducing the time it takes to gain insights from months to minutes. The new APIs are available via the AWS SDK and CLI. For more details on the Entitlements APIs, see the Amazon FinSpace Data API Reference.
Amazon Connect Customer Profiles now provides confidence scores to help companies merge duplicate customer records
Amazon Connect Customer Profiles now allows you to automatically merge duplicate customer records based on confidence scores. Each time the identity resolution feature finds duplicate records, it provides a confidence score on a scale of zero to one to represent the accuracy of the match, where one represents the most accurate match and zero the least. You can select a threshold anywhere between zero and one to automatically merge duplicate records into a unified customer profile.
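The threshold lives in the domain's matching configuration. A hedged sketch (the field names follow the Customer Profiles Matching/AutoMerging request as we understand it; the domain name and 0.9 default are placeholders):

```python
# Sketch: enable identity resolution with auto-merge above a confidence
# threshold on a Customer Profiles domain.

def auto_merge_matching(min_confidence=0.9):
    """Build the Matching config for customer-profiles update_domain."""
    if not 0.0 <= min_confidence <= 1.0:
        raise ValueError("confidence threshold must be between zero and one")
    return {
        "Enabled": True,
        "AutoMerging": {
            "Enabled": True,
            # Only merge candidate duplicates scored at or above this value.
            "MinAllowedConfidenceScoreForMerging": min_confidence,
        },
    }

def apply(domain_name, matching):
    import boto3  # requires credentials; not executed in this sketch
    boto3.client("customer-profiles").update_domain(
        DomainName=domain_name, Matching=matching)
```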
When a customer contacts your customer service department, Customer Profiles provides your agents and interactive voice response (IVR) solutions with up-to-date information about the customer, enabling faster and more personalized customer service. Customer Profiles brings together customer information (e.g., address, purchase history, contact history) from multiple applications such as Salesforce, Amazon S3, and ServiceNow into a unified customer profile.
Announcing general availability of Amplify UI for React
Amplify UI is an open-source UI library that brings the simplicity and extensibility of AWS Amplify to UI development. It consists of connected components that simplify complex workflows like authentication and dynamic data, primitive components that form the building blocks to create consistency across applications, and themes to make Amplify UI fit any brand.
Extensibility and customization are at the forefront of Amplify UI allowing easy integration into any application regardless of the front-end tech stack. With over 35 production-ready components, you can use Amplify UI as a foundation to build any application or design system.
Amplify UI offers a number of key features.
AWS CloudShell is available in AWS GovCloud (US) Regions
AWS CloudShell is a browser-based shell that makes it easier to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials. Common development tools are pre-installed so no local installation or configuration is required. With CloudShell you can run scripts with the AWS Command Line Interface (AWS CLI), define infrastructure with the AWS Cloud Development Kit (AWS CDK), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to increase your productivity.
Amazon Virtual Private Cloud (VPC) customer-managed prefix lists are now available in five additional Regions
Starting this week, Amazon Virtual Private Cloud (VPC) customers can create their own prefix lists in five additional AWS Regions: Asia Pacific (Jakarta), China (Beijing, operated by Sinnet), China (Ningxia, operated by NWCD), and both AWS GovCloud (US) Regions.
A prefix list is a collection of CIDR blocks that can be used to configure VPC route tables, AWS Transit Gateway (TGW) route tables, and VPC security groups. Customers can share prefix lists with other AWS accounts using Resource Access Manager (RAM) to easily audit and apply prefix lists across all their accounts to have a consistent security posture and routing behavior.
VPC security groups, VPC route tables, and TGW route tables are used to control access and routing policies. Customers often have a common set of CIDR blocks for security group and route table configurations. Prefix lists allow customers to group multiple CIDR blocks into a single object, and use it as a reference in their security groups or route tables. This makes it easier for customers to roll out changes and maintain consistency in security groups and route tables across multiple VPCs and accounts.
There is no additional charge for using prefix lists.
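Creating a prefix list is a single EC2 API call; the main design decision is `MaxEntries`, which is fixed at creation, so it pays to reserve headroom. A sketch (list name, CIDRs, and the headroom figure are placeholders):

```python
# Sketch: create a customer-managed prefix list from a set of corporate
# CIDR blocks, leaving room for future additions.

def prefix_list_params(name, cidrs, headroom=10):
    """Build the parameters for ec2.create_managed_prefix_list."""
    entries = [{"Cidr": c, "Description": f"corp block {i}"}
               for i, c in enumerate(cidrs, start=1)]
    return {
        "PrefixListName": name,
        "AddressFamily": "IPv4",
        # MaxEntries cannot be lowered later, so reserve growth room.
        "MaxEntries": len(entries) + headroom,
        "Entries": entries,
    }

def create(params):
    import boto3  # requires credentials; not executed in this sketch
    return boto3.client("ec2").create_managed_prefix_list(**params)
```

The resulting prefix list ID can then be referenced in security group rules and route tables, and shared across accounts via AWS RAM.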
Announcing bare metal support for Amazon EKS Anywhere
This week, AWS are excited to announce the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere on bare metal which gives customers broader choice of infrastructure for running Kubernetes on-premises. As customers modernize their applications, they want to use Kubernetes consistently between their existing on-premises bare metal infrastructure and the cloud. Running Kubernetes on bare metal infrastructure is complex, and customers spend time, effort and money on infrastructure operations instead of focusing on business innovation.
Amazon EKS Anywhere on bare metal enables customers to automate all steps from bare metal hardware provisioning to Kubernetes cluster operations using a bundled open source toolset built on the foundation of Tinkerbell and Cluster API. Customers get a choice of operating systems, and can pick either Bottlerocket (default) or Ubuntu for running their clusters. There are no upfront commitments or fees to use Amazon EKS Anywhere. Customers can optionally purchase Amazon EKS Anywhere Subscriptions to get 24/7 support from AWS' highly-trained subject matter experts for Amazon EKS Anywhere cluster components as well as all bundled open source tooling.
AWS Systems Manager now supports patching Windows Server 2022 and other Linux operating systems
Patch Manager, a capability of AWS Systems Manager, now helps you automate patch deployments for instances running Windows Server 2022, Rocky Linux versions 8.4 and 8.5, CentOS Stream 8, and Red Hat Enterprise Linux (RHEL) versions 8.4 and 8.5, giving you more patching options for your nodes. Patch Manager helps you automate the process of patching nodes with security-related and other types of updates. Patch Manager also supports instances running Windows Server, RHEL, Ubuntu Server, Amazon Linux, Amazon Linux 2, CentOS, and SUSE Linux Enterprise Server (SLES).
Amazon MemoryDB for Redis is now PCI DSS Compliant
Amazon MemoryDB for Redis is now a Payment Card Industry Data Security Standard (PCI DSS) compliant service. MemoryDB is a fully managed, Redis-compatible, in-memory database that provides low latency, high throughput, and durability at any scale.
You can now use MemoryDB to store sensitive payment card data with low latency and high throughput for use cases such as payment processing, mobile wallet, and payment fraud prevention that are subject to PCI DSS.
AWS Toolkit for Visual Studio adds Amazon CloudWatch Logs Integration
Developers can now access Amazon CloudWatch Logs within Visual Studio using the AWS Toolkit for Visual Studio. Directly from the IDE, it is now possible to search and filter log groups, log streams, and events. Additionally, log groups can be accessed from their associated resources, and log events can be downloaded to a file.
The AWS Toolkit for Visual Studio makes it easier to create, debug, and deploy .NET applications on Amazon Web Services using Visual Studio. The latest release of the toolkit includes several convenient CloudWatch Logs features. Visual Studio users can list CloudWatch log groups from the CloudWatch Logs node in the AWS Explorer. Individual log groups can be opened in a document tab, where you can view the log group's streams, as well as export stream events to a local file. While viewing a log stream, you can search and filter log messages using keywords or phrases, such as "Exception" or "Error". You can also search using a time range to see the events that led up to and resulted from the error you were searching for. Check out the Toolkit User Guide for more details.
AWS Single Sign-On is now available in the Middle East (Bahrain) and Asia Pacific (Hong Kong) Regions
AWS Single Sign-On (AWS SSO) is now available in the AWS Middle East (Bahrain) and Asia Pacific (Hong Kong) Regions. For a full list of the regions where AWS SSO is available, see the AWS Regional Services List.
AWS SSO is where you create, or connect, your workforce identities in AWS once and manage access centrally across your AWS organization. You can choose to manage access just to your AWS accounts or cloud applications. You can create user identities directly in AWS SSO, or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With AWS SSO, you get a unified administration experience to define, customize, and assign fine-grained access. Your workforce users get a user portal to access all of their assigned AWS accounts or cloud applications. AWS SSO can be flexibly configured to run alongside or replace AWS account access management via AWS IAM.
Amazon S3 on Outposts now supports presigned URLs
Amazon S3 on Outposts now supports presigned URLs for granting time-limited access to objects stored locally on an Outpost. S3 on Outposts bucket owners can now more easily share objects with individuals in their Virtual Private Cloud (VPC).
Prior to presigned URLs, to share objects with other users, bucket owners would assign IAM policies to buckets. Now, with presigned URLs, bucket owners have a time-limited way to grant other individuals in their VPC the ability to share, upload, or delete objects. Bucket owners can grant time-limited access by choosing a custom expiration time for the presigned URL, from as low as one second up to seven days, using the AWS CLI and SDKs.
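With the SDKs, presigning against an Outpost works like regular S3 presigning, except the bucket argument is the S3 on Outposts access point ARN. A sketch (the ARN and key are placeholders; the one-second-to-seven-day bound is from the announcement):

```python
# Sketch: presign a time-limited GET for an object stored on an Outpost.

MAX_EXPIRY = 7 * 24 * 3600  # seven days, in seconds

def presign_request(accesspoint_arn, key, expires_in=3600):
    """Build the arguments for s3.generate_presigned_url on S3 on Outposts."""
    if not 1 <= expires_in <= MAX_EXPIRY:
        raise ValueError("expiry must be between 1 second and 7 days")
    return {
        "ClientMethod": "get_object",
        # For S3 on Outposts, the Bucket parameter is the access point ARN.
        "Params": {"Bucket": accesspoint_arn, "Key": key},
        "ExpiresIn": expires_in,
    }

def presign(request):
    import boto3  # requires credentials; not executed in this sketch
    return boto3.client("s3").generate_presigned_url(**request)
```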
Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts rack environment to help you meet your low latency, local data processing, and data residency needs. Using the S3 APIs and features, S3 on Outposts makes it easier to store, secure, tag, retrieve, report on, and control access to the data on your Outposts. AWS Outposts rack, as part of the AWS Outposts Family, is a fully managed service that extends AWS infrastructure, services, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience.
NICE DCV releases version 2022.1 with performance improvements and support for additional Linux distributions
NICE DCV version 2022.1 introduces performance improvements and multiple new features, such as support for Rocky Linux 8.5 and Ubuntu 22.04 servers. NICE DCV is a high-performance remote display protocol that helps customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs.
Now bring your own development environment in a custom image to RStudio on Amazon SageMaker
Starting this week, you can now bring your own development environment in a custom image to RStudio on Amazon SageMaker. RStudio on SageMaker is the industry's first fully managed RStudio Workbench in the cloud. You can quickly launch the familiar RStudio Integrated Development Environment (IDE), dial the underlying compute resources up and down without interrupting your work, and even switch to programming using Python on Amazon SageMaker Studio Notebooks. All your work, including code, datasets, repositories, and other artifacts, is synchronized between the two environments. You can bring your current RStudio license to Amazon SageMaker at no additional charge to quickly get started.
RStudio on SageMaker already comes with a built-in image pre-configured with R programming and data science tools, including the SageMaker SDK, AWS CLI, AWS SDK, and the Reticulate package for integration with Python-based interfaces. Starting this week, you can register your own custom image with packages and tools of your choice, and make them available to all the users sharing the RStudio on SageMaker domain. Bringing your own custom image has several benefits. You can standardize and simplify the getting-started experience for data scientists and developers by providing a starter base image, pre-configure the drivers required for connecting to data stores, or pre-install specialized data science software for your business domain.
AWS Backup Audit Manager adds new control to audit recovery point objective (RPO)
AWS Backup Audit Manager adds a new control that you can use to audit and report on the recovery point objective (RPO) of your backups. With this launch, you can now specify your organizational recovery point objectives for your resources and evaluate whether your recovery points are in compliance.
To begin using this new AWS Backup Audit Manager control, simply enable it in your existing AWS Backup Audit Manager frameworks. Once you enable this control, AWS Backup Audit Manager automatically runs it on a daily basis, monitoring your RPOs based on your data protection policies, and alerts you to take corrective actions, if needed. You can also generate auditor-ready reports to help prove compliance of your data protection policies with your defined industry-specific regulatory requirements.
Amazon AppStream 2.0 enables UDP streaming for Windows native client
Starting this week, Amazon AppStream 2.0 introduces support for streaming over UDP when using the Windows native client. Amazon AppStream 2.0 is a fully managed service that provides non-persistent desktop and application streaming products to end users. Previously, you could stream over TCP via the Windows native client. With your end users now working from home or in different countries from your corporate offices, they may operate in sub-optimal network conditions you are unable to control. These network conditions can impact your end users' experience and productivity. With UDP streaming, your end users will experience more responsive streaming quality in sub-optimal network conditions, particularly those with higher round-trip latency.
Enabling UDP does not require any changes to your applications, and can simply be done from the Amazon AppStream 2.0 console. If you opt in and, for any reason, a UDP-based stream cannot be established, we will automatically fall back to TCP-based streaming.
To get started with Amazon AppStream 2.0 UDP streaming for Windows native clients, launch the AppStream 2.0 console and go to your Stacks. There you can select UDP as the streaming preference in the Streaming Experience Settings card. In addition to selecting this option you also need to ensure 1/ your network supports UDP streaming on port 8433 for the AWS IP Ranges, 2/ your end users are using the latest Windows native client, and 3/ the base image you are using for your fleet supports UDP. If any of these conditions are not met, we will automatically fall back to streaming over TCP. You can choose to view and edit protocols at a Stack level from AWS Console, APIs or CLI.
Amazon QuickSight launches rolling date functionality
Amazon QuickSight now enables authors to set up rolling dates to dynamically generate dashboards for end users. The rolling date functionality is now available for both date & time range filters and datetime parameters. Users can set up rolling rules to fetch a date, such as today, yesterday, or different combinations of (start/end) of (this/previous/next) (year/quarter/month/week/day), and dynamically update the dashboard content based on when the dashboard is loaded. This feature brings flexibility and simplicity for users building time-related dashboards. Without rolling dates, users had to set a static date and manually change it as needed. For further details, visit here.
AWS Direct Connect announces new location in Ireland
This week, AWS announced the opening of a new AWS Direct Connect location in Dublin, Ireland. AWS customers in Ireland and across Europe can now establish network connections from their premises to AWS in the Servecentric, Blanchardstown Corporate Park data center to gain high-performance, secure access to all public AWS Regions (except Amazon Web Services China (Beijing) Region, operated by Sinnet and Amazon Web Services China (Ningxia) Region, operated by NWCD). This is the third Direct Connect location in the Dublin metropolitan area.
Amazon QuickSight launches Level Aware Calculation (LAC)
Amazon QuickSight launches a suite of functions called Level Aware Calculations (LAC). The new calculation capability enables customers to specify the level of granularity at which they want window functions (in what window to partition by) or aggregate functions (at what level to group by) to be conducted. This brings flexibility and simplification for users building advanced calculations and powerful analyses. Without LAC, users would have to prepare pre-aggregated tables in their original data source, or run queries in the data prep phase to enable those calculations. For further details, visit here.
The new Level Aware Calculation functions are available in Amazon QuickSight Standard and Enterprise Editions in all QuickSight Regions.
AWS Application Migration Service is now in scope for AWS SOC reports and supports temporary IAM credentials
You can now use AWS Application Migration Service (AWS MGN) for use cases that are subject to System and Organization Controls (SOC) reporting. You can also now install the AWS Application Migration Service agent on your source servers using AWS Identity and Access Management (IAM) temporary security credentials with limited permissions. AWS Application Migration Service allows you to quickly migrate and modernize applications on AWS.
AWS SOC reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. In addition to meeting standards for SOC, AWS Application Migration Service is Health Insurance Portability and Accountability Act (HIPAA) eligible, Payment Card Industry – Data Security Standard (PCI DSS) compliant, and International Organization for Standardization (ISO) compliant. You can download the AWS SOC reports in AWS Artifact, and you can visit AWS Services in Scope by Compliance Program to see a full list of services covered by each compliance program.
AWS Snowcone SSD is now available in the AWS Europe (Paris) Region
The AWS Snowcone solid state drive (SSD) device is now available in the AWS Europe (Paris) Region, adding to the growing list of Regions already offering Snowcone SSD, including AWS US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Canada (Central), and South America (São Paulo).
AWS Snowcone is the smallest device of the AWS Snow Family of edge computing, edge storage, and data migration devices. Snowcone is available in both hard disk drive (HDD) and solid-state drive (SSD) device types, and both device types are portable, rugged, and secure. They are small and light enough to fit in a backpack, and can withstand harsh environments. Customers use Snowcone to deploy applications at the edge, and to collect data, process it locally, and move it to AWS either offline by shipping the device to AWS, or online by using AWS DataSync on Snowcone to send the data to AWS.
Amazon Polly adds new male Neural TTS voices in 4 languages
Amazon Polly is a service that turns text into lifelike speech. Today, we are excited to announce the general availability of 4 male Neural TTS voices: Liam for Canadian French, Arthur for UK English, Daniel for German and Pedro for US Spanish.
TTS voices simplify the way you can create, implement, update, and maintain your speech-enabled applications and products. You can use Amazon Polly to enhance the user experience and improve the accessibility of your text content with the power of voice. Common use cases include interactive voice response (IVR) systems, audiobooks, newsreaders, eLearning content, and virtual assistants.
All four voices have been modeled on Matthew, an existing US English voice. While customers continue to appreciate Matthew for his naturalness and professional-sounding quality, he has so far served English-speaking traffic exclusively. By decoupling language and speaker profile, we were able to preserve native-like fluency across different languages without needing multilingual data from the same speaker. As a result, the highly likeable vocal characteristics of Matthew have been transferred to four new languages.
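As a sketch, selecting one of the new voices with the AWS SDK for Python might look like the following; the parameter names follow the Polly SynthesizeSpeech API, while the text and output format are illustrative:

```python
# Request parameters for one of the new neural voices. "Arthur" is the
# UK English male voice announced here; Liam (fr-CA), Daniel (de-DE) and
# Pedro (es-US) cover the other three languages.
params = {
    "Engine": "neural",      # neural TTS rather than "standard"
    "VoiceId": "Arthur",
    "LanguageCode": "en-GB",
    "OutputFormat": "mp3",
    "Text": "Hello from Amazon Polly.",
}

# The call itself requires AWS credentials, so it is left commented out:
# import boto3
# polly = boto3.client("polly")
# audio = polly.synthesize_speech(**params)["AudioStream"].read()
```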
AWS DataSync can now copy data to and from Amazon FSx for NetApp ONTAP
AWS DataSync now supports copying files to and from Amazon FSx for NetApp ONTAP, a storage service that allows customers to launch and run fully managed ONTAP file systems in the cloud. Using AWS DataSync, you can quickly and securely migrate your data from your on-premises storage, the edge, or other clouds to your FSx for NetApp ONTAP file systems running in AWS. You can also use DataSync to move data between your FSx for NetApp ONTAP file system and Amazon S3 buckets, Amazon EFS file systems, or other Amazon FSx file systems.
AWS DataSync is an online data transfer service that makes it easy to move petabytes of data between on-premises storage, edge locations, or other cloud providers, and AWS Storage services. It uses a purpose-built network protocol and scale-out architecture to accelerate data movement and provides encryption of data in-transit and at-rest, along with end-to-end data integrity verification. DataSync provides control and monitoring capabilities such as data transfer scheduling and include and exclude filters, and gives you granular visibility into the transfer process through Amazon CloudWatch metrics, logs, and events.
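A hedged sketch of what a DataSync location for an FSx for ONTAP file system might look like with the AWS SDK for Python; the ARNs are placeholders, and the parameter shape should be confirmed against the boto3 create_location_fsx_ontap reference:

```python
# Location parameters for an FSx for NetApp ONTAP file system, mounted
# over NFS. All ARNs below are placeholders, not real resources.
location_params = {
    "Protocol": {"NFS": {"MountOptions": {"Version": "NFS3"}}},
    "SecurityGroupArns": [
        "arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"
    ],
    "StorageVirtualMachineArn": (
        "arn:aws:fsx:us-east-1:111122223333:storage-virtual-machine/"
        "fs-0123456789abcdef0/svm-0123456789abcdef0"
    ),
    "Subdirectory": "/vol1",
}

# Creating the location requires AWS credentials:
# import boto3
# datasync = boto3.client("datasync")
# datasync.create_location_fsx_ontap(**location_params)
```

Once created, the location can be used as the source or destination of a DataSync task like any other location type.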
Amazon GameLift launches new console experience
Amazon GameLift now offers a new console experience that provides a more intuitive way to manage and scale your game servers on AWS. The redesigned console has a new left-hand navigation that makes it easy to switch between GameLift features such as creating and managing builds, scripts, fleets, and FlexMatch, and includes helpful resource links like “Prepare to launch” and service quotas. The new interface makes it quicker and easier to configure and manage your game server instances by providing a view of all your settings in one location.
Amazon GameLift has also extended the integration into Amazon CloudWatch so customers can create their own dashboards and custom views of GameLift resources. The CloudWatch integration now includes metrics previously only available in the console, including aggregate metrics for instance performance, utilization/capacity, and player sessions. This gives more flexibility and choice when it comes to managing and monitoring the performance of your GameLift resources.
Amazon RDS increases concurrent copy limit to 20 snapshots per destination region
Amazon RDS now allows you to have up to 20 concurrent snapshot copy requests per destination region per account, an increase from the former limit of five concurrent copies per destination region per account.
The new limit applies to snapshots of the Microsoft SQL Server, Oracle, PostgreSQL, MySQL, and MariaDB engines for Amazon RDS in AWS Regions where Amazon RDS is available. This feature has been enabled on your account and no further action is needed from you.
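For illustration, a single cross-region copy request with the AWS SDK for Python looks like the following sketch (identifiers are placeholders); up to 20 of these can now run concurrently per destination region:

```python
# Copy a snapshot from us-west-2 into the client's own (destination) region.
copy_params = {
    "SourceDBSnapshotIdentifier": "arn:aws:rds:us-west-2:111122223333:snapshot:mydb-snap",
    "TargetDBSnapshotIdentifier": "mydb-snap-copy",
    "SourceRegion": "us-west-2",  # boto3 uses this to pre-sign the cross-region request
}

# Requires AWS credentials:
# import boto3
# rds = boto3.client("rds", region_name="us-east-1")  # destination region
# rds.copy_db_snapshot(**copy_params)
```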
Amazon SageMaker built-in algorithms now provide four new tabular data modeling algorithms
Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. These algorithms and models can be used for both supervised and unsupervised learning. They can process various types of input data including tabular, image, and text.
Starting this week, Amazon SageMaker provides four new tabular data modeling algorithms: LightGBM, CatBoost, AutoGluon-Tabular and TabTransformer. These popular, state-of-the-art algorithms can be used for both tabular classification and regression tasks. They are available through the SageMaker JumpStart UI inside SageMaker Studio, as well as through Python code using the SageMaker Python SDK. To learn how to use these algorithms, refer to the SageMaker example notebooks.
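As a rough sketch of how these algorithms are addressed through the SageMaker Python SDK: each one is identified by a JumpStart model ID. The IDs below match the naming pattern JumpStart uses, but should be confirmed against the example notebooks:

```python
# Hypothetical mapping of algorithm names to JumpStart model IDs
# (classification variants; regression variants follow the same pattern).
tabular_model_ids = {
    "LightGBM": "lightgbm-classification-model",
    "CatBoost": "catboost-classification-model",
    "AutoGluon-Tabular": "autogluon-classification-ensemble",
    "TabTransformer": "pytorch-tabtransformerclassification-model",
}

# With the SageMaker Python SDK installed and AWS credentials configured:
# from sagemaker import image_uris
# image_uri = image_uris.retrieve(region=None, framework=None,
#                                 model_id=tabular_model_ids["LightGBM"],
#                                 model_version="*", image_scope="training",
#                                 instance_type="ml.m5.xlarge")
```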
Amazon AppStream 2.0 is now available in the AWS US East (Ohio) Region
Amazon AppStream 2.0 is now available in the AWS US East (Ohio) region. You can now deploy AppStream 2.0 for your active workloads, as well as to meet your disaster recovery (DR) and business continuity needs. With this launch, you can deploy General Purpose, Compute Optimized, Memory Optimized, Graphics Design, Graphics Pro and Graphics G4 instances to meet the needs of your users.
AppStream 2.0 is a fully managed non-persistent desktop and application virtualization service that allows customers to stream applications and desktops from AWS to users without acquiring, provisioning, and operating hardware or infrastructure. AppStream 2.0 can help customers provide users with secure, instant-on access to the applications they need with a responsive, fluid user experience from anywhere on the device of their choice.
Amazon EC2 placement groups now support host-level spread on AWS Outposts rack
Starting this week, you can use Amazon EC2 placement groups to spread instances across distinct hosts on an AWS Outposts rack. Host-level spread placement groups distribute instances across hosts to reduce the likelihood of correlated failures, benefiting workloads that require High Availability (HA) like mission-critical databases.
AWS Outposts rack, a part of the AWS Outposts family, is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises datacenter or co-location space for a truly consistent hybrid experience. When you create a spread placement group that will be used in an Outpost, you can set an optional Spread Level parameter to “rack” or “host”, which determines how instances are distributed across the underlying hardware. If no Spread Level is defined, “rack” is selected by default. Spreading instances across distinct hosts is useful in Outposts environments when you want to protect against host failures that can occur within a rack. The number of instances you can launch into a host-level spread placement group is limited only by the number of hosts available in your Outpost.
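As a sketch with the AWS SDK for Python, a host-level spread placement group differs from the existing rack-level spread only in the SpreadLevel parameter (the group name is a placeholder):

```python
# Placement group spreading instances across distinct Outpost hosts.
pg_params = {
    "GroupName": "outpost-db-spread",
    "Strategy": "spread",
    "SpreadLevel": "host",  # "rack" is the default when omitted
}

# Requires AWS credentials:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.create_placement_group(**pg_params)
```

Instances launched into this group are then placed on distinct hosts, protecting against host failures within a rack.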
AWS Glue Streaming ETL now supports auto-decompression
AWS Glue streaming ETL (Extract, Transform, and Load) can now detect compressed data streaming from Amazon Kinesis, Amazon Managed Streaming for Apache Kafka (Amazon MSK), and self-managed Apache Kafka. It can then automatically decompress this data without customers having to write code, saving them development hours. AWS Glue streaming ETL jobs continuously consume data from streaming sources, clean and transform the data in-flight, and make it available for analysis in seconds. Customers compress data prior to streaming in order to improve performance and to avoid throttling limits imposed by Amazon Kinesis and Amazon MSK. Prior to this feature, customers had to write user-defined functions to decompress data from a stream, which was time consuming.
AWS Support announces an improved create case experience
As of this week, you can experience the new interface for creating support cases in the AWS Support Center console. When you create a case, Support Center can better anticipate and understand your issue by capturing your case details. Support Center can then provide targeted and specific remediation, such as answers to frequently asked questions and links for related information.
AWS updated the create case experience from a single page layout to a simplified 3-step process that guides you through the create case flow.
First, you choose your case type. Next, you enter additional information to describe your case. Finally, you choose how you want to be contacted.
Each step will collect rich information relevant to your problem, provide highly relevant and actionable recommendations during case creation, and enable our support agents to resolve your case efficiently with fewer correspondences.
Amazon DocumentDB (with MongoDB compatibility) enables dynamic resizing for storage space
The storage space allocated to your Amazon DocumentDB (with MongoDB compatibility) cluster will now dynamically decrease when you delete data from the cluster. Amazon DocumentDB is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability. Previously, when Amazon DocumentDB data was removed, such as by dropping a collection, the overall allocated space remained the same. The free space was reused automatically when data volume increased in the future.
Now, with dynamic resizing, the allocated storage space automatically increases up to a maximum size of 64 tebibytes (TiB) when you need more storage and decreases automatically when you delete data. You only pay for the storage you use. Dynamic resizing for storage space is now being enabled for all Amazon DocumentDB v4.0 clusters in all regions. All new Amazon DocumentDB v4.0 clusters will have this feature enabled by default.
Anthos Clusters on bare metal
Anthos clusters on bare metal 1.12.0 is now available for download. To upgrade, see Upgrading Anthos on bare metal. Anthos clusters on bare metal 1.12.0 runs on Kubernetes 1.23.
The dockershim component in Kubernetes enables cluster nodes to use the Docker Engine container runtime. However, Kubernetes 1.24 removed the dockershim component. Starting from Anthos clusters on bare metal 1.12.0, you will not be able to create new clusters that use the Docker Engine container runtime. All new clusters should use the default container runtime, containerd.
Improved cluster lifecycle functionalities:
Upgraded Anthos clusters on bare metal to use Kubernetes version 1.23.
Upgraded the container runtime.
Updated preflight check to forward default SSH key if no key is provided.
Added support for a new GCPAccounts field in the cluster configuration file. This field enables the assignment of a cluster-admin role to end users.
Added labels to control plane, control plane load balancer, and load balancer node pools, so that these different node pools can be distinguished from each other.
Added nodepool reference label to nodes so that worker nodes can be listed in the UI.
GA: Added Summary API metrics. These metrics are scraped from the Kubernetes Summary API and provide CPU, memory, and storage metrics for Pods, containers, and Nodes.
Added separate flags to enable logging and monitoring for user applications, including EnableGMPForApplications. The legacy flag EnableStackdriverForApplications will be deprecated and removed in future releases.
Preview: Added Google Cloud Managed Service for Prometheus to collect application metrics and monitor cluster health.
Upgraded GKE Metrics Agent (gke-metrics-agent) from version 1.1.0 to 1.8.3. This tool scrapes metrics from each cluster node and publishes them in Cloud Monitoring.
Added the following resource utilization metrics. For more information about these and other metrics, see View Anthos clusters on bare metal metrics:
Added sample dashboards for monitoring cluster health to Cloud Monitoring sample dashboards. Customers can install these dashboards with one click.
Scoped down the RBAC permissions of stackdriver-operator, a component that performs logging and monitoring.
AIS CA deprecation: AIS certificates are now signed by the cluster CA.
Updated the ca-rotation container image so that it uses a distroless rather than a Debian-based image.
RBAC permissions of the cluster-operator component have been eliminated or reduced to address elevated permissions.
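To make the GCPAccounts change above concrete, here is a sketch of the relevant cluster-configuration fragment, written as a Python dict mirroring the YAML; the exact field path is our reading of the release note and should be confirmed against the Anthos clusters on bare metal documentation:

```python
# Fragment of a bare metal Cluster config granting the cluster-admin role
# to two placeholder Google accounts via the new GCPAccounts support.
cluster_fragment = {
    "spec": {
        "clusterSecurity": {
            "authorization": {
                "clusterAdmin": {
                    "gcpAccounts": ["alice@example.com", "bob@example.com"],
                }
            }
        }
    }
}
```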
Anthos Config Management
Shell access is disabled by default in the Config Sync hydration-controller container. This disables the ability to use Kustomize remote bases. To use Kustomize remote bases, enable shell access by setting the field spec.override.enableShellInRendering: true in RootSync and RepoSync.
Policy Controller now supports Cloud Monitoring. It will automatically export runtime metrics for both Cloud Monitoring and Prometheus. Users can also configure which monitoring backends metrics are exported to. To learn more, see Monitor Policy Controller.
Anthos Config Management is now compatible with GKE Autopilot, with some cluster requirements. Policy Controller mutations are not compatible with Autopilot. Config Sync resource requests and limits will be further adjusted by GKE Autopilot. To learn more, see Install Config Sync.
Config Sync supports syncing configurations stored as OCI images in Google Artifact Registry or Container Registry as a preview feature. To learn more, see Publish config images to Artifact Registry.
Added a field spec.override.reconcileTimeout in RootSync and RepoSync for configuring how long to wait for resources in an apply group to reconcile before giving up. An apply group consists of resources without direct or indirect dependencies on each other.
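The two override fields mentioned in these notes both live under spec.override of a RootSync (or RepoSync). A sketch of a RootSync manifest as a Python dict mirroring the YAML, with placeholder repo details:

```python
root_sync = {
    "apiVersion": "configsync.gke.io/v1beta1",
    "kind": "RootSync",
    "metadata": {"name": "root-sync", "namespace": "config-management-system"},
    "spec": {
        "sourceFormat": "unstructured",
        "git": {"repo": "https://example.com/acme/config.git", "branch": "main"},
        "override": {
            "enableShellInRendering": True,  # re-enables Kustomize remote bases
            "reconcileTimeout": "10m",       # per apply group, before giving up
        },
    },
}
```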
The constraint template library includes a new template: K8sRequiredResources. For reference, see Constraint template library.
Apigee API Hub
On June 27, 2022, Apigee API hub released a new version of the software.
You can now set default configurations at a project or organization level. This feature is now generally available (GA).
BigQuery
You can now set the view field in the tables.get() API method to indicate which table information is returned. Setting the value to BASIC reduces latency by omitting some storage statistics.
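In REST terms, the new field is just a query parameter on tables.get; a sketch of the request URL (project, dataset and table names are placeholders):

```python
project, dataset, table = "my-project", "my_dataset", "my_table"

# tables.get with view=BASIC omits storage statistics, reducing latency.
url = (
    f"https://bigquery.googleapis.com/bigquery/v2/projects/{project}"
    f"/datasets/{dataset}/tables/{table}?view=BASIC"
)
```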
Previously, all BigQuery BI Engine projects had a maximum reservation size per project per location limit of 100 GB. This limit is now 250 GB.
This is the General Availability release of Certificate Manager.
You are now able to configure the storage utilization target for a cluster when you use autoscaling for Cloud Bigtable. This feature is generally available (GA).
Cloud Bigtable now gives you the option to undelete a table for up to seven days from the time of deletion using the gcloud CLI. This feature is generally available (GA).
You can view your GKE costs by cluster, namespace, and pod labels in the Detailed cost export, and the built-in reports in the Google Cloud console.
Cloud Billing export to BigQuery
In the Detailed cost export to BigQuery, you can use the labels.key column to filter the data by these label keys:
goog-k8s-cluster-name: Filter your GKE resources by cluster.
k8s-namespace: Filter your GKE resources by namespace.
k8s-label: View all your GKE resources.
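For example, a query against the detailed export can restrict costs to one cluster by unnesting the labels column; a sketch with placeholder table and cluster names:

```python
# BigQuery SQL held as a string; run it in the BigQuery console or a client.
query = """
SELECT
  service.description AS service,
  SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_resource_v1_XXXXXX`
WHERE EXISTS (
  SELECT 1 FROM UNNEST(labels) AS l
  WHERE l.key = 'goog-k8s-cluster-name' AND l.value = 'prod-cluster'
)
GROUP BY service
"""
```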
Cloud Billing reports
In the Cloud Billing report, Cost breakdown report, and Cost Table report, you can use the Label selector to filter and group your data by cluster or namespace, using one of these label keys:
goog-k8s-cluster-name: Filter or group your GKE resources by cluster.
k8s-namespace: Filter or group your GKE resources by namespace.
To start viewing and analyzing your GKE cost data, see these pages:
GCP have added new features to view your billing information and cost estimates in the Google Cloud Console mobile app. You can view your cost trends and forecasts, the costs for your top project, and how much you're spending on your top Google Cloud services.
To see your billing data in the app, select the Billing tab in the navigation bar, then select Overview.
Attribution for your committed use discounts (CUDs) now appears at the same time as eligible usage.
Previously, the subscription fees and credits associated with your CUDs would appear in billing reports and BigQuery usage cost exports after the corresponding eligible resource usage. This could result in apparent spikes in cost if you viewed your billing data before the attribution process completed.
With this release, subscription fees and credits appear at the same time as eligible usage, meaning that your net costs are always accurate whenever you view your billing data.
Learn about how your CUD fees and credits are attributed across your resources.
Regional support for default pools and build triggers is now generally available. To learn more, see Cloud Build locations.
Cloud Composer supports Per-folder Roles Registration.
Cloud Functions now supports Python 3.10 at the General Availability release level.
Cloud Functions now supports Java 17 at the General Availability release level.
Customers enrolled in Key Access Justifications will now see justifications listed in Cloud Audit Logs for Cloud KMS.
You can now collect Apache Flink logs from the Ops Agent, starting with version 2.17.0. For more information, see Monitoring third-party applications: Flink.
Managed Service for Prometheus: You can now query Cloud Monitoring metrics by using PromQL. For more information, see Mapping Monitoring metric names to PromQL.
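The metric-name mapping follows a mechanical rule: the metric type's domain keeps its dots as underscores and is separated from the rest by a colon, and remaining slashes and dots become underscores. A small sketch of that rule, as we read it from the mapping docs:

```python
def monitoring_to_promql(metric_type: str) -> str:
    """Map a Cloud Monitoring metric type to its PromQL metric name.

    Rule: the first '/' (after the domain) becomes ':'; remaining
    '/' and '.' characters become '_'.
    """
    domain, _, path = metric_type.partition("/")
    return domain.replace(".", "_") + ":" + path.replace("/", "_").replace(".", "_")

print(monitoring_to_promql("compute.googleapis.com/instance/cpu/utilization"))
# compute_googleapis_com:instance_cpu_utilization
```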
The new experience for creating metric-based alerting policies by using the Google Cloud console is now Generally Available. For more information, see Create metric-based alert policy.
Cloud Code Extension updated to 1.18.3
Update includes a new and improved Kubernetes development experience with the Development Sessions Explorer, support for private clusters, a refreshed welcome page, and more! Review the Cloud Code release notes for a complete list of features, updates, and fixes.
Cloud Shell Editor is built with Theia 1.25.0
Review the Theia release notes for a complete list of features/updates/bug fixes.
Cloud Shell now defaults to Python 3
Python 2 is still included as a development tool in Cloud Shell and may be invoked using the python2 command.
Cloud Spanner
The ANALYZE DDL command allows administrators to manually update the query statistics package that the optimizer uses to build query execution plans. This complements the existing automatic updates to provide faster feedback cycles when data, queries, or indexes change frequently.
Query Insights is now generally available. Query Insights helps you visually detect and identify query performance issues for Cloud Spanner databases. You can also dig deeper and analyze the query details to find the root cause of these issues.
To learn more, see Detect query performance issues with Query Insights.
Cloud SQL for MySQL
Cloud SQL for MySQL supports in-place major version upgrades in Preview. You can upgrade your instance's major version to a more recent version. For more information, see Upgrade the database major version in-place.
Cloud SQL for PostgreSQL
A second June maintenance changelog is now available. For more information, use the links at Maintenance changelog.
The fix to the silent data corruption when using the CREATE INDEX CONCURRENTLY or REINDEX CONCURRENTLY SQL commands in PostgreSQL 14 (BUG #17485) is now available in the self-service maintenance release POSTGRES_14_2.R20220331.02_012 for PostgreSQL 14.2.
After applying the self-service maintenance, you can fix any silent data corruption that has already occurred by using the REINDEX CONCURRENTLY SQL command on the affected indexes, or the reindexdb client command for your entire instance.
Object Lifecycle Management now supports new conditions and a new action.
MatchesPrefix and MatchesSuffix conditions allow you to restrict lifecycle actions to objects with specific prefixes and suffixes.
The AbortIncompleteMultipartUpload action allows you to remove abandoned XML API multipart uploads.
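A sketch of a lifecycle configuration combining the new condition and action, written as a Python dict matching the JSON shape (the ages and suffix are illustrative):

```python
lifecycle = {
    "rule": [
        {   # delete 30-day-old objects, but only those ending in .tmp
            "action": {"type": "Delete"},
            "condition": {"age": 30, "matchesSuffix": [".tmp"]},
        },
        {   # clean up abandoned XML API multipart uploads after 7 days
            "action": {"type": "AbortIncompleteMultipartUpload"},
            "condition": {"age": 7},
        },
    ]
}
```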
The XML API now supports setting a default Cloud KMS key on a bucket when creating the bucket.
Cloud VPN no longer checks a peer's IKE identity.
This change simplifies the configuration of your VPN peers, because you no longer need to explicitly set a peer's IKE identity to a specific value.
Note: Some Cloud VPN tunnels that were previously unestablished due to unmatched IKE identity might now become established.
If you don't want the affected tunnels to become established, delete them as needed on the Cloud VPN side, on the on-premises side, or on both sides.
If you want the affected tunnels to become established, no action is required on your part.
Previously, Cloud VPN required peers to use an IKE identity of type ID_IPV4_ADDR, equal to the peer's public IP address. Removing this restriction enables easier interoperation with peers that don't support changing their IKE identity, especially when such peers are located behind NAT (Network Address Translation).
Generally available: You can now create shared reservations of Compute Engine zonal resources using the Google Cloud Console. Learn about shared reservations and creating a shared reservation.
GA: You can now use the SSH troubleshooting tool from the Cloud console to help you determine the cause of failed SSH connections. For more information, see SSH troubleshooting tool.
Dataproc Metastore
Metadata federation is generally available (GA).
Metadata federation lets you access metadata that is stored in multiple Dataproc Metastore instances.
To set up a federation, you create a federation service and then configure multiple Dataproc Metastore instances as your backend metastores. The federation service then exposes a single gRPC endpoint, which you can use to access metadata across all of your metastore instances.
Private Service Connect for Dataproc Metastore is generally available (GA).
Support for Firebase Realtime Database is in Preview.
Firestore in Datastore mode
Not-equal (!=), IN, and NOT_IN query filters are now available in all client libraries.
Filestore High Scale SSD tier is generally available (GA).
GKE Cost Allocation has been released for public preview. With GKE Cost Allocation, you can see cost breakdowns in clusters by namespace and pod label for utilized CPU and memory. For complete details, refer to View detailed breakdown of cluster costs.
(2022-R16) Version updates
The following versions are now available in the Regular channel:
(2022-R16) Version updates
Version 1.21.12-gke.1500 is now the default version in the Stable channel.
(2022-R16) Version updates
Control plane and node version 1.24.1-gke.1800 is now available.
You can now give multiple containers time-shared access to the full compute resources of a single NVIDIA GPU accelerator. Time-sharing GPUs is generally available in GKE version 1.23.7-gke.1400 and later. For more information, refer to Time-sharing GPUs on GKE.
Google Cloud Armor
Google Cloud Armor now supports TCP proxy load balancers and SSL proxy load balancers in General Availability. For more information, see the security policy overview.
Advanced network DDoS protection is now available for network load balancers, protocol forwarding, and VMs with public IP addresses in public preview. For more information, see Configure advanced DDoS protection.
Google Cloud Deploy
The ability to deploy to Anthos user clusters is now generally available.
In June 2022, IAM had an issue that resulted in excess usage metrics for service accounts and service account keys when certain actions were performed.
Each time you took any of these actions, Cloud Monitoring recorded an authentication usage metric for the parent service account, and for each of its service account keys, regardless of whether you used the service account or its keys to authenticate. These excess metrics were visible in Cloud Monitoring, and in the metrics for individual service accounts and keys, from June 7, 2022, through June 17, 2022.
In addition, these excess metrics were visible in other systems that use data from Cloud Monitoring, including Activity Analyzer, which shows when service accounts and keys were used to authenticate, and service account insights, which provide findings about unused service accounts. Excess metrics were visible in these systems from June 7, 2022, through June 22, 2022.
This issue has been corrected, and Cloud Monitoring is no longer recording these excess metrics. However, the last authentication time for each service account and key will continue to reflect the excess metrics indefinitely, until you authenticate with the service account or key again.
Identity Platform Web v9 modular SDK is now available at the GA stage. For details, see Upgrade to the modular Web SDK (v9).
Storage Transfer Service
Expanded overwrite options are now generally available (GA). The overwriteWhen field can be used to specify whether data that already exists in the destination should be overwritten always, never, or only when ETags and checksum values indicate that the file has changed.
Metadata preservation options are now generally available (GA). This includes the option of preserving POSIX attributes and symlinks when transferring to, from, and between POSIX filesystems; as well as object ACLs, CMEK, temporary holds, and object creation time when transferring between Cloud Storage buckets.
See Metadata preservation for details.
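As a sketch, the overwriteWhen option described above sits in the transfer job's transferOptions; the bucket names are placeholders, and DIFFERENT is the checksum/ETag-based value:

```python
transfer_spec = {
    "gcsDataSource": {"bucketName": "source-bucket"},
    "gcsDataSink": {"bucketName": "dest-bucket"},
    "transferOptions": {
        # ALWAYS, NEVER, or DIFFERENT (overwrite only when the file changed)
        "overwriteWhen": "DIFFERENT",
    },
}
```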
Transfer Appliance now supports monitoring of the amount of data stored on your appliance, and whether online transfer is enabled, through Cloud Monitoring. See Monitor Transfer Appliance for details.
Vertex AI Forecasting is now available in GA.
Microsoft Azure Releases And Updates
General availability: Azure Active Directory authentication for Application Insights
Azure Active Directory (Azure AD) authentication for Azure Monitor Application Insights helps ensure that only authenticated telemetry is ingested into your Application Insights resources.
Generally available: Azure Backup multi-user authorization for recovery services vaults
Multi-user authorization for Backup provides enhanced protection for your backup data in recovery services vaults against unauthorized critical operations.
Generally available: 2022-05-31 Azure IoT Central REST API release
The Azure IoT Central REST API now provides GA support for creating and managing device groups and organizations, configuring file upload destinations, and adding organization associations to existing endpoints.
Generally available: Resize rows in Azure IoT Central device raw data view
You can now resize rows in Azure IoT Central's device raw data view, allowing you to analyze more data in a single view.
General availability: Temporary access pass for Azure Active Directory
Temporary Access Pass is now generally available in Public and US Gov clouds.
General availability: MATCH clause for Query
The MATCH clause is used in the Azure Digital Twins query language and allows you to specify which pattern should be followed while traversing relationships in the ADT graph.
Generally available: Export device data under an organization in Azure IoT Central
Azure IoT Central data export feature now supports the ability to filter data under an organization as well as include organizations path in the exported data.
Public preview: Multiple backups per day for Azure Virtual Machines
Protect Azure Virtual Machines running mission-critical applications with a low recovery point objective (RPO).
Public preview: Create an additional 5000 Azure Storage accounts within your subscription
Azure Storage is announcing the public preview of the ability to create an additional 5,000 storage accounts per subscription per region.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, GCP accounts or stand alone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here: