Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday 8th July 2022.
This week at Hava we've released the embedded diagram viewer upgrade. You can now embed a full interactive diagram, a light version with suppressed metadata, or a PNG. All three methods self-update, so if you insert an infrastructure diagram into a wiki, for instance, the PNG changes whenever your environment changes, all hands-free, which is the way we like it. We've also got some news on self-hosted deployments coming soon - stay tuned.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
EC2 Auto Scaling now publishes predictive scaling policies' forecasts as CloudWatch metrics, enabling you to analyze, monitor, and set alarms on the accuracy of predictive scaling. Predictive scaling is a scaling policy that proactively increases the capacity of your Auto Scaling group ahead of predicted demand, improving the availability of your application while reducing the need to stay overprovisioned, which would otherwise increase your EC2 bill. Because predictive scaling only increases the capacity of your Auto Scaling groups, applying it to your current scaling configurations strictly enhances your application availability. However, an inaccurate prediction can potentially increase your cost. Now you can use the extensive list of CloudWatch features to measure the accuracy of predictions, view forecasts using the familiar CloudWatch graphs, and set automatic alarms and notifications when predictions drift outside your desired levels.
Amazon EC2 Auto Scaling is a service that helps you meet application demand by automatically adding or removing EC2 instances in an Auto Scaling group according to the conditions you define. It already publishes a wide range of metrics to Amazon CloudWatch, an AWS-native service for monitoring the health of your infrastructure and applications running on AWS. Now, predictive scaling forecasts are published as CloudWatch metrics for past timestamps. You can leverage various CloudWatch features such as Metric Math to create accuracy metrics like Mean Absolute Percentage Error (MAPE), which is commonly used to measure time-series forecasting accuracy; view multiple metrics on a single graph to understand when and by how much scaling policies are changing the capacity of your groups; or create dashboards and alarms for a more automated monitoring experience.
This week, AWS announced the general availability of a new feature of AWS IoT Core that simplifies the registration of certificate authorities (CAs) necessary for device provisioning and makes it easier to move devices between customers' multiple AWS accounts within the same AWS region and between different regions. This reduces the complexity of registering devices to AWS IoT Core and helps customers accelerate the development lifecycle for their IoT implementations when using AWS IoT Core Just-in-Time Provisioning (JITP) and Just-in-Time Registration (JITR) device provisioning methods of AWS IoT Core.
AWS IoT Core requires customers to register a CA to validate the signature of device certificates during provisioning. Previously, customers needed access to the CA's private key to prove ownership before registering the CA, but private keys are often managed by device vendors or by the security teams of organizations that operate their own CAs, and are not easily accessible to developers. Effective this week, customers can directly manage the registration of CAs to simplify device provisioning.
Customers also often manage different AWS accounts to differentiate between development, testing, and production workloads. Until now, they had to configure multiple CAs to connect the same device to multiple accounts during the development process. With this update, customers can use the same CA across multiple accounts to simplify device provisioning using JITP or JITR and improve security posture by having fewer CAs.
AWS IoT Core is a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. Customers must provision their devices before devices can securely connect and communicate with AWS IoT Core. Provisioning refers to registering devices' digital identities with the cloud service, attaching permissions for the devices to access cloud resources, and associating contextual information such as device serial numbers and location with registered digital identities. With AWS IoT Core Just-in-Time Provisioning and Just-in-Time Registration features, customers can have their devices provisioned automatically when devices first attempt to connect to AWS IoT Core.
Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service.
Amazon Keyspaces helps you run Cassandra workloads more easily by using a fully managed and serverless database service. With Amazon Keyspaces, you don't need to provision storage upfront and you only pay for the storage that you use. In June, AWS released a new CloudWatch metric, BillableTableSizeInBytes, to monitor and track your table storage costs over time. Now AWS has added managed policies that enable you to access BillableTableSizeInBytes from your Keyspaces console.
Console access for the BillableTableSizeInBytes is available in all AWS Regions where AWS offers Amazon Keyspaces.
For more information, see Monitoring Amazon Keyspaces with Amazon CloudWatch. If you are new to Amazon Keyspaces, the getting started guide will show you how to quickly provision a keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
AWS CloudFormation StackSets launched a new feature that allows you to deploy stack sets to selected AWS accounts in an Organizational Unit (OU) in a single operation. You can use this feature to target or skip stack sets deployment to AWS accounts within an OU. For example, you can use this feature to skip deployment of an AWS Config policy in AWS accounts that already have the policy within an OU. In a few clicks, you can re-deploy stack sets to those AWS accounts in which the earlier stack sets deployment had failed. Similarly, you can skip stack set deployment to suspended AWS accounts in an OU.
This feature introduces a new filter called AccountFilterType in DeploymentTargets. AccountFilterType allows you to perform advanced deployment strategies across OUs within your enterprise. With AccountFilterType, you can limit deployment targets to AWS accounts using three options: intersection, difference, and union. For example, you can create a deployment strategy that targets all AWS accounts from OU1 and selected AWS accounts from OU2.
Amazon Elastic Compute Cloud (Amazon EC2) M1 Mac instances are now generally available (GA). Built on Apple Silicon Mac mini computers and powered by the AWS Nitro System, Amazon EC2 M1 Mac instances deliver up to 60% better price performance over x86-based EC2 Mac instances for building and testing iOS and macOS applications. You still enjoy the same elasticity, scalability, and reliability that the secure, on-demand AWS infrastructure has offered to millions of customers for more than a decade. EC2 M1 Mac instances also enable native Arm64 macOS environments for the first time on AWS to develop, build, test, deploy, and run applications for Apple devices. As a developer who is rearchitecting your macOS applications to natively support Apple Silicon Macs, you can now provision Arm64 macOS environments within minutes, dynamically scale capacity as needed, and benefit from pay-as-you-go pricing to enjoy faster builds and convenient distributed testing. To learn more or get started, see Amazon EC2 Mac Instances.
AWS CloudFormation has expanded the availability of StackSets to the AWS Asia Pacific (Jakarta) Region. StackSets allows you to provision and manage deployment of cloud resources to multiple AWS accounts and Regions in a single operation. StackSets is integrated with AWS Organizations, so you can take advantage of automatic deployments whenever an AWS account enters an organization.
With this launch, you can use StackSets to deploy CloudFormation stacks to AWS accounts in the Jakarta region. Furthermore, you can use the StackSets feature in the Jakarta region to deploy CloudFormation stacks to AWS accounts in other AWS Regions. For example, you can easily establish an AWS Config policy across multiple accounts in Jakarta with a single StackSet operation.
AWS Security Hub has released 36 new controls for its Foundational Security Best Practices standard (FSBP) to enhance your Cloud Security Posture Management (CSPM). These controls conduct fully automatic checks against security best practices for AWS Auto Scaling, AWS CloudFormation, Amazon CloudFront, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Registry (ECR), Amazon Elastic Container Service (ECS), Amazon Elastic File System (EFS), Amazon Elastic Kubernetes Service (EKS), Elastic Load Balancing (ELB), Amazon Kinesis, AWS Network Firewall, Amazon OpenSearch Service, Amazon Redshift, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), and AWS WAF. If you have Security Hub set to automatically enable new controls and are already using AWS Foundational Security Best Practices, these controls are enabled for you by default. Security Hub now supports 223 security controls to automatically check your security posture in AWS.
Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, update, search, and share machine learning (ML) features. The service provides feature management capabilities such as enabling easy feature reuse, low latency serving, time travel, and ensuring consistency between features used in training and inference workflows. A feature group is a logical grouping of ML features whose organization and structure is defined by a feature group schema. Until today, customers could add metadata tags only to feature groups, which in turn enabled easy search and discovery of a feature group. Searching for a specific feature, however, was more complicated: customers needed to know which feature group the feature belonged to, and then scan for the relevant feature in that feature group, adding overhead when searching for features.
This week, Amazon SageMaker Feature Store is announcing the ability to add custom metadata at a feature level and the ability to directly search for features in addition to feature groups. The custom metadata fields include description and parameters to help make features discoverable. You can use the UpdateFeatureMetadata API to add or update metadata for a feature, and use the DescribeFeatureMetadata API to view all metadata for a feature. You can also update and view feature metadata in SageMaker Studio, an integrated development environment for ML.
Using either the Feature Store user interface in SageMaker Studio or the Search API, you can search for features across feature groups within the same AWS account, and discover features relevant for your use case by performing a text based search of the feature metadata attributes.
Amazon QuickSight now supports APIs for QuickSight account creation. Administrators and developers can automate deployment of QuickSight accounts in their organization at scale. You can now programmatically create accounts with QuickSight Enterprise and Enterprise + Q editions.
Amazon GuardDuty has incorporated new machine learning techniques that are highly effective at detecting anomalous access to data stored in Amazon Simple Storage Service (Amazon S3) buckets. This new capability continuously models S3 data plane API invocations (e.g. GET, PUT, and DELETE) within an account, incorporating probabilistic predictions to more accurately alert on highly suspicious user access to data stored in S3 buckets, such as requests coming from an unusual geo-location, or unusually high volumes of API calls consistent with attempts to exfiltrate data. The new machine learning approach can more accurately identify malicious activity associated with known attack tactics, including data discovery, tampering, and exfiltration.
The new threat detections are available for all existing Amazon GuardDuty customers that have GuardDuty S3 Protection enabled, with no action required and at no additional cost. If you are not using GuardDuty yet, S3 Protection will be on by default when you enable the service. If you are using GuardDuty and have yet to enable S3 Protection, you can enable this capability organization-wide with one click in the GuardDuty console or through the API.
AWS is happy to announce the general availability of the new streamlined deployment experience for .NET applications. With sensible defaults for all deployment settings, you can now get your .NET application up and running in just one click, or with a few easy steps, without needing deep expertise in AWS. You will receive recommendations on the optimal compute for your application, giving you more confidence in your initial deployments. You can find it in the AWS Toolkit for Visual Studio using the new "Publish to AWS" wizard. It is also available via the .NET CLI by installing the AWS Deploy Tool for .NET.
You can deploy ASP.NET Core applications, long running services, scheduled tasks, and Web Assembly applications that are built with .NET Core 3.1 and above including the .NET 7 preview. At the time of this release, we support deployments to Amazon Elastic Container Service (Amazon ECS) using AWS Fargate compute engine, AWS App Runner, and AWS Elastic Beanstalk. We also support hosting Blazor WebAssembly applications in Amazon S3 using Amazon CloudFront as a content delivery network (CDN).
To get started in Visual Studio, install the latest version of the AWS Toolkit for Visual Studio from the Visual Studio Marketplace. The new wizard assumes a minimal experience level with AWS services and offers convenient features, including one-click deployments. To learn more, visit the AWS blog post and the Developer Guide for the AWS Toolkit for Visual Studio.
Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, update, search, and share machine learning (ML) features. The service provides feature management capabilities such as enabling easy feature reuse, low latency serving, time travel, and ensuring consistency between features used in training and inference workflows. A feature group is a logical grouping of ML features whose organization and structure is defined by a feature group schema. Until today, the features in a feature group were defined at the time of feature group creation, and the feature group schema was immutable.
This week, Amazon SageMaker Feature Store is announcing the ability to evolve the feature group schema and add new features to existing feature groups after they have been created. Now you can leverage the same feature group over time as models evolve to use new features, without the burden of creating and managing new feature groups. You can add one or more features to an existing feature group using the UpdateFeatureGroup API. You simply specify the name of the feature group to update and the name and type of each feature to be added.
Amazon Location Service now supports quota management. Developers can create Amazon CloudWatch alarms that notify them when their usage of any API is close to their quota limit for that API. These alarms help developers ensure operational continuity, prevent service throttling, and protect against unintentional spend. Additionally, developers can use AWS Service Quotas to view, manage, and request quota increases, all in one user interface. For example, an eCommerce website can create a CloudWatch alarm to get notified when they have reached 80% usage on each of the Amazon Location APIs. When the alarm is triggered, they can request a quota increase to help scale their workloads, prevent their website from experiencing outages, and prevent a poor customer shopping experience.
Amazon Location Service is a location-based service that helps developers easily and securely add maps, points of interest, geocoding, routing, tracking, and geofencing to their applications without compromising on data quality, user privacy, or cost. With Amazon Location Service, you retain control of your location data, protecting your privacy and reducing enterprise security risks. Amazon Location Service provides a consistent API across high-quality location based service data providers (Esri and HERE), all managed through one AWS console.
Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, update, search, and share machine learning (ML) features. The service provides feature management capabilities such as enabling easy feature reuse, low latency serving, time travel, and ensuring consistency between features used in training and inference. Until today, SageMaker Feature Store monitoring was limited to consumed read and write units, which gave a limited view of the operational efficiency of the feature store.
This week, Amazon SageMaker Feature Store is announcing the availability of new monitoring metrics logged to Amazon CloudWatch, including the number of API requests, errors, throttled requests, and the latency of the service in processing operations. In addition, you can now track storage size for the online store over time. These metrics give you a unified view of SageMaker Feature Store operational health to help you troubleshoot issues, discover insights, and keep your applications running smoothly. With Amazon CloudWatch, you can use the new metrics to create customized dashboards and set alarms that notify you or take actions when a specified metric reaches a threshold.
Amazon Relational Database Service (Amazon RDS) Performance Insights now allows you to choose retention periods for your performance history that range from one month up to 24 months. You can also use the RDS Performance Insights free tier, which includes seven days of performance data history and one million API requests per month. AWS has also adjusted the pricing model, resulting in reduced pricing for 24-month retention for most instance types.
RDS Performance Insights allows non-experts to measure database performance with a simple-to-understand dashboard that visualizes database load. With one click, you can add a fully managed performance monitoring solution to your Amazon Aurora clusters and Amazon RDS instances. RDS Performance Insights automatically gathers necessary performance metrics and visualizes them in a dynamic dashboard on the RDS console. You can identify your database’s top performance bottlenecks from a single graph.
To get started, log into the Amazon RDS Management Console and enable Amazon RDS Performance Insights when creating or modifying an instance of a supported Amazon RDS engine. Then go to the Amazon RDS Performance Insights dashboard to start monitoring performance. Existing RDS Performance Insights users can change the retention window for their databases using the modify database flow.
Amazon OpenSearch Service now allows users to view default quota and applied quota information through Service Quotas. Quotas, also referred to as limits in AWS services, are the maximum values for the resources, actions, and items in your AWS account. Each AWS service defines its quotas and establishes default values for those quotas. Depending on your business needs, you might need to increase your service quota values. Service Quotas enables you to look up your service quotas and to request quota increase. AWS Support might approve, deny, or partially approve your requests.
With this new feature, you can use the Service Quotas console, the AWS Command Line Interface (AWS CLI), and the AWS SDK to view service quota information for Amazon OpenSearch Service. You can view your existing quotas such as Dedicated master instances per domain, Domains per region, Instances per domain, Instances per domain (T2 instance type), and Warm instances per domain. As of now, AWS supports limit increase requests for two of these quotas, Domains per region and Instances per domain, through the Service Quotas console.
AWS Identity and Access Management (IAM) now enables workloads that run outside of AWS to access AWS resources using IAM Roles Anywhere. IAM Roles Anywhere allows your workloads such as servers, containers, and applications to use X.509 digital certificates to obtain temporary AWS credentials and use the same IAM roles and policies that you have configured for your AWS workloads to access AWS resources.
With IAM Roles Anywhere, you can now use temporary credentials on AWS, eliminating the need to manage long-term credentials for workloads running outside of AWS, which can help improve your security posture. Using IAM Roles Anywhere can reduce support costs and operational complexity by using the same access controls, deployment pipelines, and testing processes across all of your workloads. You can get started by establishing trust between your AWS environment and your public key infrastructure (PKI). You do this by creating a trust anchor, where you either reference your AWS Certificate Manager Private Certificate Authority (ACM Private CA) or register your own certificate authorities (CAs) with IAM Roles Anywhere. By adding one or more roles to a profile and enabling IAM Roles Anywhere to assume these roles, your applications can use client certificates issued by your CAs to make secure requests to AWS and obtain temporary credentials to access the AWS environment.
Anthos Clusters on bare metal
On July 1, 2022, Google released an updated version of the Apigee UI.
This release contains a new version of the Debug tab in the Apigee Proxy Editor. Following previous releases of new versions of the Overview and Develop tabs, this completes the initial release of the new Proxy Editor.
To view the new Debug tab, see Using Debug.
App Engine standard environment
The Java 17 runtime for App Engine standard environment is now generally available.
The PHP 8.1 runtime for App Engine standard environment is now generally available.
The Python 3.10 runtime for App Engine standard environment is now generally available.
Artifact Registry is now available in the us-south1 region (Dallas, United States).
Azure workload identity federation is now available in preview for BigQuery Omni connections. This feature helps you secure data by allowing you to grant Google access to an application you manage in your Azure tenant so that neither you nor Google must manage application client secrets.
An updated version of the JDBC driver for BigQuery is now available. This version includes a fix for an issue where the connector returned a stack overflow in some cases when executing complex, long queries.
Cloud Data Loss Prevention
InfoType categories were added to built-in infoTypes.
To get a list of built-in infoTypes, call the
Cloud Functions now supports the following runtimes at the General Availability release level:
Cloud SQL for MySQL
Cloud SQL for MySQL now supports setting timezone names as values for the time_zone parameter. Refer to the Cloud SQL documentation for a list of supported timezone names.
Dataproc support for the following images has been extended to the following dates:
New sub-minor versions of Dataproc images:
Deep Learning Containers
Microsoft Azure Releases And Updates
The Azure IoT Central REST API now supports programmatically creating enrollment groups, scheduling jobs, configuring application dashboards, and working with unassociated/unmodeled devices.
General availability: Application Insights standard test for synthetic monitoring
A standard test is a single-request test that also includes SSL certificate validity, proactive lifetime checks, HTTP request verbs, custom headers, and custom data associated with your HTTP request.
Reduce spending by storing rarely accessed data to Azure Archive Storage, now in new regions.
Migration to the cost-effective, secure, performant, and manageable new Azure Monitor agent is now made easy using migration tools and helpers.
Benefit from the latest PostgreSQL minor versions for Azure Database for PostgreSQL – Hyperscale (Citus).
Azure Functions now supports setting a retry policy for Azure Event Hubs and timer triggers.
Data history is an integration feature of Azure Digital Twins.
Azure Active Directory (Azure AD) authentication for Azure Monitor Application helps ensure that only authenticated telemetry is ingested in your Application Insights resources.
Container Insights now supports the Windows Server 2022 operating system.
Azure Monitor Agent (AMA) provides a secure, cost-effective, simplified, performant way to collect telemetry data from IaaS resources. It now supports installation and authentication at-scale using Managed Identity user-assigned mode.
You can now use jointly developed guidance from Red Hat and Microsoft to get JBoss EAP applications up and running quickly on Azure Red Hat OpenShift.
Create confidential VMs using Ephemeral OS disks for your stateless workloads.
You can now configure your backup policy to move all your eligible and recommended recovery points to vault-archive tier for Azure Virtual Machines, SQL Server/SAP HANA in Azure Virtual Machines.
The Azure Percept March update includes fixes related to security.
Have you tried Hava automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure, or GCP accounts, or to standalone K8s clusters. Once diagrams are created, they are kept up to date, hands-free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here: