Here's a cloud round up of all things Hava, GCP, Azure and AWS for the week ending Friday 22nd April 2022
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
Amazon Aurora Serverless v2, the next version of Aurora Serverless, is now generally available. Aurora Serverless v2 scales instantly to support even the most demanding applications, delivering up to 90% cost savings compared to provisioning for peak capacity.
Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. Aurora Serverless v2 scales database workloads to hundreds of thousands of transactions in a fraction of a second. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. You don’t need to manage database capacity, and you pay for only the resources consumed by your application.
Aurora Serverless v2 provides the full breadth of Amazon Aurora capabilities, including Multi-AZ support, Global Database, and read replicas. Amazon Aurora Serverless v2 is ideal for a broad set of applications. For example, enterprises that have hundreds of thousands of applications, or software as a service (SaaS) vendors that have multi-tenant environments with hundreds or thousands of databases, can use Aurora Serverless v2 to manage database capacity across the entire fleet.
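As a hedged sketch of how that capacity range is expressed, the snippet below builds the request parameters you would pass to boto3's `rds.create_db_cluster`; the cluster identifier and capacity bounds are illustrative placeholders, not values from the announcement.

```python
# Minimal sketch: request parameters for an Aurora Serverless v2 cluster,
# as passed to boto3's rds.create_db_cluster(**params).
# The identifier and capacity values below are illustrative placeholders.
def serverless_v2_cluster_params(cluster_id: str,
                                 min_acu: float = 0.5,
                                 max_acu: float = 16.0) -> dict:
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora-mysql",
        # Capacity bounds are in Aurora Capacity Units (ACUs); Aurora scales
        # between them in fine-grained increments based on load.
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,
            "MaxCapacity": max_acu,
        },
    }

params = serverless_v2_cluster_params("demo-cluster")
```

Instances added to the cluster then use the `db.serverless` instance class, so writers and readers each scale independently within the configured range.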
Sensitive data detection and processing in AWS Glue is now generally available. This feature uses pattern matching and machine learning to automatically detect Personal Identifiable Information (PII) and other sensitive data at both the column and cell levels during an AWS Glue job run. AWS Glue includes options to log the type of PII and its location as well as to take action on it.
Sensitive data detection in AWS Glue identifies a variety of PII and other sensitive data like credit card numbers. It helps customers take action, such as tracking it for audit purposes or redacting the sensitive information before writing records into a data lake. AWS Glue Studio’s visual, no-code interface lets users include Sensitive Data Detection as a step in a data integration job. It lets customers choose the type of personal information to detect as well as specify follow-on actions including redaction and logging. Customers can also define their own custom detection patterns for their unique needs.
Amazon QuickSight now supports 1-click public embedding, a feature that allows you to embed your dashboards into public applications, wikis, and portals without any coding or development. Once enabled, anyone on the internet can instantly access these embedded dashboards with up-to-date information, with no server deployments or infrastructure licensing needed. 1-click public embedding helps you empower your end users with access to insights in seconds. To access this feature in preview please contact firstname.lastname@example.org.
1-click public embedding is available in Amazon QuickSight Enterprise Edition, in the following AWS Regions - US East (Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland) and Europe (London).
Amazon MQ now provides support for ActiveMQ 5.16.4. This update to ActiveMQ contains several fixes and enhancements compared to the previously supported version, ActiveMQ 5.16.3.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ customers can reduce their operational burden by using Amazon MQ to manage the provisioning, setup, and maintenance of message brokers. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code.
AWS encourage you to consider upgrading to ActiveMQ 5.16.4 with just a few clicks in the AWS Management Console. Because this is a minor update, your broker will be automatically upgraded if you have enabled automatic minor version upgrades. To learn more about upgrading, please see Managing Amazon MQ for ActiveMQ engine versions in the Amazon MQ Developer Guide.
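The upgrade can also be performed programmatically. As a hedged sketch, the snippet below builds the parameters for boto3's `mq.update_broker`; the broker ID is a placeholder.

```python
# Minimal sketch: upgrading a broker via boto3's mq.update_broker(**params).
# The broker ID below is a placeholder.
def upgrade_broker_params(broker_id: str) -> dict:
    return {
        "BrokerId": broker_id,
        "EngineVersion": "5.16.4",
        # With this enabled, future minor versions are applied automatically
        # during the broker's maintenance window.
        "AutoMinorVersionUpgrade": True,
    }

params = upgrade_broker_params("b-1234abcd-56ef")
```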
Amazon Kendra is an intelligent search service powered by machine learning, enabling organizations to provide relevant information to customers and employees, when they need it. Starting this week, AWS customers can use the Amazon Kendra Box connector to index and search documents from Box.
Critical information can be scattered across multiple data sources in an enterprise, including internal and external websites. Amazon Kendra customers can now use the Kendra Box connector to index documents stored in Box (HTML, PDF, MS Word, MS PowerPoint, and Plain Text) and search for information across this content using Kendra Intelligent Search.
The Amazon Kendra Box connector is available in all AWS regions where Amazon Kendra is available.
You can now schedule live content into a linear channel created using Channel Assembly with AWS Elemental MediaTailor. You could already re-use transcoded and packaged HLS and DASH streams from your existing video on demand (VOD) catalogs and now you can use live streams from an origin such as AWS Elemental MediaPackage as scheduled sources for a linear channel.
For VOD-only channels, you can create a basic channel, which is priced at the same existing Channel Assembly rates. When using both VOD content and live sources, you need to create a standard channel configuration, which has a higher per-hour cost. Visit the MediaTailor pricing page for more details on basic and standard channel costs.
Using Channel Assembly with AWS Elemental MediaTailor, you can create linear channels that are delivered over-the-top (OTT) in a cost-efficient way, even for channels with low viewership. Virtual linear streams are created with a low running cost by using existing multi-bitrate encoded and packaged content which now can be either VOD or Live. You can also monetize Channel Assembly linear streams by inserting ad breaks in your programs without having to condition the content with SCTE-35 markers for VOD or Live sources, as the SCTE-35 ad break information is simply passed through.
You can now try Amazon Neptune for free with a 1-month free trial. First-time Neptune customers can get started with Neptune for free for 30 days, using up to 750 hours of the T3.medium instance, 10 million I/O requests, 1 GB of storage, and 1 GB of backup storage.
Amazon Neptune is a fast, reliable, and fully managed graph database service that helps customers build applications for fraud detection, identity resolution, knowledge management, and security posture assessment using highly connected datasets. With this free trial, you can use all the features that a T3.medium instance supports such as: multi-AZ deployments with up to 15 read replicas; high-throughput, low-latency queries in Gremlin, SPARQL, and openCypher; automated backups; database snapshots; CloudWatch monitoring, and more.
Amazon Elastic Kubernetes Service (Amazon EKS) now supports using the Amazon EKS console, AWS Command Line Interface (CLI), and EKS API to install and manage the AWS Distro for OpenTelemetry (ADOT) Operator. This launch enables a simplified experience for instrumenting your applications running on Amazon EKS to send metrics and traces to multiple monitoring services including AWS X-Ray, Amazon Managed Service for Prometheus, and Amazon CloudWatch.
ADOT is designed as a secure, production-ready, AWS-supported distribution of the OpenTelemetry project, which provides open source APIs, libraries, and agents to collect distributed traces and metrics for application and cluster infrastructure monitoring. The ADOT Operator is an implementation of the Kubernetes Operator. The ADOT Operator manages collection agents that receive, process, and export telemetry data in multiple data formats to open source and vendor-service backends. With this launch, the ADOT Operator can now be installed, managed, and updated directly through the EKS Console, AWS CLI and EKS API. You can see available add-ons and compatible versions in the EKS API, select the version of the add-on you want to run on your cluster, and configure key settings such as the IAM role used by the add-on when it runs. Using EKS add-ons you can go from cluster creation to running applications in a single command and easily keep tooling in your cluster up to date.
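As a hedged sketch of the new add-on workflow, the snippet below builds the parameters for boto3's `eks.create_addon`; the cluster name and IAM role ARN are placeholders.

```python
# Minimal sketch: installing the ADOT Operator as an EKS add-on via
# boto3's eks.create_addon(**params). Cluster name and role ARN are placeholders.
def adot_addon_params(cluster_name: str, role_arn: str) -> dict:
    return {
        "clusterName": cluster_name,
        "addonName": "adot",
        # IAM role assumed by the add-on's Kubernetes service account (IRSA),
        # granting permission to export telemetry to backends such as X-Ray,
        # CloudWatch, or Amazon Managed Service for Prometheus.
        "serviceAccountRoleArn": role_arn,
    }

params = adot_addon_params("my-cluster",
                           "arn:aws:iam::123456789012:role/adot-collector")
```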
AWS Glue Studio Job Notebooks are now generally available, providing interactive, notebook-based job authoring in AWS Glue. They help simplify the process of developing data integration jobs. Job Notebooks also provide a serverless, built-in interface for AWS Glue Interactive Sessions, another new feature that allows customers to run interactive Apache Spark workloads on demand.
AWS Glue Studio Job Notebooks need minimal setup so developers can get started quickly, and feature one-click conversion of notebooks into AWS Glue data integration jobs. They also support live data integration directly from the notebook, fast startup times, and built-in cost management.
AWS Glue Interactive Sessions are now generally available. They provide a new interface into AWS Glue's highly scalable Serverless Spark environment. They support interactive data integration job development, data exploration, and on-demand distributed data processing for customers' own applications.
AWS Glue is a serverless data integration service for both data engineers and data analysts. Interactive Sessions let them process data interactively using the Jupyter-based notebook or IDE of their choice. Sessions start in seconds and have built-in cost management, as well as the option to use either Glue version 2.0 or 3.0. As with AWS Glue jobs, customers pay for only the resources they use.
This week, AWS are excited to announce general availability of Amazon SageMaker Serverless Inference in all AWS Regions where SageMaker is generally available (except the AWS China regions). With SageMaker Serverless Inference, you can quickly deploy machine learning (ML) models for inference without having to configure or manage the underlying infrastructure. When deploying your ML models, simply select the serverless option and Amazon SageMaker automatically provisions, scales, and turns off compute capacity based on the volume of inference requests. With SageMaker Serverless Inference, you pay only for the compute capacity used to process inference requests (billed by the millisecond) and the amount of data processed; you do not pay for idle time. SageMaker Serverless Inference is ideal for applications with intermittent or unpredictable traffic.
Since the preview launch at re:Invent 2021, we have added support for Amazon SageMaker Python SDK, which offers abstractions to simplify model deployment, and support for Model Registry, which allows you to integrate your serverless inference endpoints with your MLOps workflow. AWS have also increased maximum concurrent invocations per endpoint limit to 200 (from 50 during preview), allowing you to use SageMaker Serverless Inference for high-traffic workloads.
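As a hedged sketch, the snippet below builds an endpoint configuration with the serverless option, as passed to boto3's `sagemaker.create_endpoint_config`; the names and sizing values are illustrative, not prescribed defaults.

```python
# Minimal sketch: a SageMaker endpoint configuration using the serverless
# option, as passed to boto3's sagemaker.create_endpoint_config(**params).
# Config/model names and sizing values are illustrative placeholders.
def serverless_endpoint_config(config_name: str, model_name: str) -> dict:
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            # ServerlessConfig replaces InstanceType/InstanceCount; SageMaker
            # provisions, scales, and turns off compute based on request volume.
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,
                "MaxConcurrency": 20,  # can now be raised up to 200 per endpoint
            },
        }],
    }

config = serverless_endpoint_config("demo-config", "demo-model")
```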
Three new managed data identifiers have been added to Amazon Macie to expand its capabilities for discovering and identifying the locations of HTTP Basic Authentication Headers, HTTP Cookies, and JSON Web Tokens present in Amazon Simple Storage Service (Amazon S3). Knowing if and where these types of data are present in your S3 storage helps you to better plan the data security, governance, and privacy needs of your organization.
Amazon Macie also enhanced its existing managed data identifiers for identifying Passports, Mailing Addresses, and US Social Security Numbers (SSNs). This enhancement expands keyword support for discovering occurrences of SSNs and Passports, and the Macie pattern identification system now detects SSNs across a wider array of formats and delimiters. Additionally, the Amazon Macie machine learning models have been updated to improve accuracy in discovering mailing addresses in S3 objects. The updated models use additional checks to validate city names, ZIP codes, and Postal Codes to produce more accurate results.
This week, AWS announced the general availability of AWS Amplify Studio Figma-to-React code capabilities, giving frontend developers a faster workflow for building full-stack apps. These new capabilities add to existing Amplify Studio backend creation and management capabilities, helping developers accelerate UI development. Typically, there is a lot of back and forth between frontend developers and designers, which can lead to suboptimal end-user experiences because of compromises made to ship on time. With Amplify Studio, developing UI as per design is as easy as “import from Figma, export to code, and extend code with custom logic.”
Amplify Studio uses Figma designs to automatically generate pixel-perfect React components that can be wired to your backend in a few clicks. Figma-to-React capabilities were launched in preview at re:Invent 2021, and new features have since been added to include interactivity, an enhanced experience to modify child UI element in code, and new theming capabilities.
This week AWS announced the general availability of 10 new data source connectors for Amazon Athena. With Athena, you can query data stored in relational, non-relational, object, and custom data sources without the need for ETL scripts to pre-process or copy data. This release expands the number of data sources you can query with Athena and helps analysts, data engineers, data scientists, and developers unlock business value from data stored in databases running on-premises or in the cloud.
You can now use Athena to query and surface insights from: SAP HANA (Express Edition), Teradata, Cloudera, Hortonworks, Snowflake, Microsoft SQL Server, Oracle, Azure Data Lake Storage (ADLS) Gen2, Azure Synapse, and Google BigQuery. For a complete list of supported data sources, see Using Athena Data Source Connectors.
With Athena, you can use your SQL knowledge to support a wide range of analytics use cases: you can run interactive queries on data stored in multiple systems of record, create unified datasets to enable self-service business intelligence, prepare input features for use in machine learning model training, build analytics solutions on top of on-premises data, and more.
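As a hedged illustration, a federated query can join data from a registered connector catalog with the Glue Data Catalog in one statement; the catalog, table, and bucket names below are hypothetical.

```python
# Minimal sketch: a federated query spanning a registered connector catalog
# and the Glue Data Catalog, as passed to boto3's
# athena.start_query_execution(**params). All names are hypothetical.
query = """
SELECT c.customer_id, o.order_total
FROM "snowflake_catalog"."sales"."customers" AS c
JOIN "awsdatacatalog"."lake"."orders" AS o
  ON c.customer_id = o.customer_id
"""

params = {
    "QueryString": query,
    "ResultConfiguration": {"OutputLocation": "s3://my-athena-results/"},
}
```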
Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. Textract now provides you the flexibility to specify the data you need to extract from documents using the new Queries feature within the Analyze Document API. You do not need to know the structure of the data in the document (table, form, implied field, nested data) or worry about variations across document versions and formats. Queries leverages a combination of visual, spatial, and language cues to extract the information you seek with high accuracy.
Traditional OCR solutions struggle to extract data accurately from most unstructured and semi-structured documents because of significant variations in how the data is laid out across multiple versions and formats of these documents. You need to implement custom post-processing code or manually review the information extracted from these documents. You also need to parse through the entire OCR output to extract the information you need for your business processes. With Queries, you can specify the information you need in the form of natural language questions (e.g., “What is the customer name”) and receive the exact information (e.g., ”John Doe”) as part of the API response. Queries also lets you assign an alias to each question, making it easy to integrate the output with your downstream systems. Additionally, Queries is pre-trained on a large variety of unstructured, semi-structured, and structured documents. Some examples include paystubs, bank statements, W-2s, loan application forms, mortgage notes, and vaccine and insurance cards.
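As a hedged sketch, an Analyze Document request with Queries looks like the following, as passed to boto3's `textract.analyze_document`; the bucket, object key, and questions are illustrative.

```python
# Minimal sketch: an Analyze Document request using the Queries feature,
# as passed to boto3's textract.analyze_document(**params).
# Bucket, key, and question text are illustrative placeholders.
def queries_request(bucket: str, key: str) -> dict:
    return {
        "Document": {"S3Object": {"Bucket": bucket, "Name": key}},
        "FeatureTypes": ["QUERIES"],
        "QueriesConfig": {"Queries": [
            # Each alias makes the answer easy to map in downstream systems.
            {"Text": "What is the customer name?", "Alias": "CUSTOMER_NAME"},
            {"Text": "What is the policy number?", "Alias": "POLICY_NUMBER"},
        ]},
    }

request = queries_request("my-documents", "insurance-card.png")
```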
With this release, developers can add location features like searching for points of interest (POIs), addresses, or coordinates, and display this data as customizable markers on a map or as a list. Amplify Geo for Android also provides developers with map styling capabilities through APIs that let them tweak the map experience to match their apps' theme. Amplify Geo also allows developers to create map resources via the Command Line Interface (CLI) tool, which helps provision the cloud services required to implement common mapping use cases.
Amazon DevOps Guru now provides Proactive Insights that help you increase availability, improve performance, and optimize utilization of your application resources. Proactive monitoring is key to flagging potential issues with your applications and infrastructure early, enabling you to respond quickly and reduce costly downtime. Amazon DevOps Guru uses machine learning (ML) to analyze application resources, configurations, and application metrics to identify potential operational issues before they impact your users, increasing application uptime.
Amazon DevOps Guru Proactive Insights summarizes the metric anomaly and related events, and recommends remediation actions to help mitigate the issue. Amazon DevOps Guru also identifies opportunities to optimize resource utilization by your application to reduce operating costs. For example, consider a Lambda-based application with an API Gateway endpoint. The Lambda function receives invocations beyond its currently provisioned concurrency. This leads to continuous spillover of requests, causing cold starts and, consequently, degraded latency and potentially higher costs. Amazon DevOps Guru detects this issue and proactively recommends increasing the Lambda function's provisioned concurrency.
Auto Scaling in AWS Glue Apache Spark jobs is now generally available. AWS Glue 3.0 can now dynamically scale resources up and down based on the workload. With Auto Scaling, you no longer need to worry about over-provisioning resources for jobs, spend time optimizing the number of workers, or pay for idle workers.
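As a hedged sketch, Auto Scaling is enabled on a Glue 3.0 job via the `--enable-auto-scaling` job argument; the snippet below builds the parameters for boto3's `glue.create_job`, with placeholder names and paths.

```python
# Minimal sketch: a Glue 3.0 job definition with Auto Scaling enabled, as
# passed to boto3's glue.create_job(**params). Names and paths are placeholders.
def autoscaled_job_params(name: str, role_arn: str, script_location: str) -> dict:
    return {
        "Name": name,
        "Role": role_arn,
        "Command": {"Name": "glueetl", "ScriptLocation": script_location},
        "GlueVersion": "3.0",   # Auto Scaling requires Glue 3.0
        "WorkerType": "G.1X",
        "NumberOfWorkers": 20,  # upper bound; Glue removes idle workers
        "DefaultArguments": {"--enable-auto-scaling": "true"},
    }

params = autoscaled_job_params(
    "demo-job",
    "arn:aws:iam::123456789012:role/GlueJobRole",
    "s3://my-scripts/etl.py",
)
```

With this set, `NumberOfWorkers` acts as a ceiling rather than a fixed allocation.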
This week, AWS announced the general availability of AWS IoT TwinMaker, a service that makes it easier for developers to create digital twins of real-world systems such as buildings, factories, production lines, and equipment. Customers are increasingly adopting digital twins to make better operational and strategic decisions in industries such as smart buildings, manufacturing, construction, energy, power & utilities, and more. With AWS IoT TwinMaker you now have the tools you need to build digital twins to help you monitor and improve your industrial operations.
The Amazon Chime SDK lets developers add intelligent real-time audio, video, and screen share to their web and mobile applications. The Amazon Chime SDK has achieved FedRAMP High authorization for AWS GovCloud (US) Regions. U.S. government agencies and contractors can now use Amazon Chime SDK to build applications for workloads that require FedRAMP High authorization.
Amazon Chime SDK continues to be available in Northern Virginia and Oregon Regions with FedRAMP Moderate. In addition, Amazon Chime SDK is compliant with PCI DSS, and SOC. Amazon Chime SDK is also a HIPAA eligible service.
Amazon Personalize now gives you more control over your resources by allowing you to stop and start recommenders. Amazon Personalize enables developers to improve customer engagement through personalized product and content recommendations – no ML expertise required. A recommender is a resource that is optimized for specific use cases, such as “Frequently bought together” for Retail and “Top picks for you” for Media and Entertainment. Stopping a recommender when not in use moves it to an inactive state without having to delete it. Starting moves it back to an active state, resuming where you left off without having to recreate the recommender resource. For example, during a testing phase, you might need to get recommendations only for a few days. You can stop the recommender when you don’t need it, and then start it again at any time to resume recommendation requests. There are no usage charges for a stopped recommender.
You can now set a default instance warm-up time for all scaling activities, health check replacements, and other replacement events in the Auto Scaling instance lifecycle. Amazon EC2 Auto Scaling is a service that allows you to automatically scale and manage logical groups of instances, known as Auto Scaling groups, that serve your application. EC2 Auto Scaling does this by monitoring various metrics, such as CPU utilization and application demand, to determine if an instance needs to be replaced, removed from, or added to your Auto Scaling group. Setting the default instance warm-up time parameter can simplify your Auto Scaling group configuration by ensuring that any scaling and replacement policies are aware of the time your instances typically take to be ready to serve demand.
Previously, you could only set a warm-up time for select scaling and replacement events — instance refresh, target tracking, and step scaling policies — and these needed to be configured individually. Now, by setting a default warm-up time parameter that applies to the whole Auto Scaling group, you can easily ensure that all instance scaling and replacement events — instance refresh, manual and dynamic scaling policies, scheduled actions, and health check replacements — use the same warm-up time to aggregate metrics to Amazon CloudWatch and determine your group capacity. Specifically, if your instances tend to spend a known amount of time to get ready to serve traffic after they are launched (e.g., to pre-load application data), setting the default instance warm-up time can help you avoid inadvertent scaling or replacement events in the following ways. First, instances won’t be counted toward your Auto Scaling group’s desired capacity until the warm-up time elapses. Second, instances’ CloudWatch metrics won’t be used to start scaling or replacement actions until after the warm-up time elapses.
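As a hedged sketch, the group-wide warm-up is set with the `DefaultInstanceWarmup` parameter of boto3's `autoscaling.update_auto_scaling_group`; the group name and value below are placeholders.

```python
# Minimal sketch: setting a group-wide warm-up time via boto3's
# autoscaling.update_auto_scaling_group(**params). Values are placeholders.
def default_warmup_params(group_name: str, warmup_seconds: int = 300) -> dict:
    return {
        "AutoScalingGroupName": group_name,
        # Applies to all scaling and replacement events: new instances are not
        # counted toward desired capacity, and their CloudWatch metrics are not
        # used for scaling decisions, until this many seconds have elapsed.
        "DefaultInstanceWarmup": warmup_seconds,
    }

params = default_warmup_params("web-asg", 300)
```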
You can now customize your AWS Identity and Access Management (IAM) policies that control access to your Amazon Neptune resources, like Neptune clusters or instances, with AWS global condition context keys. You can use AWS global condition context keys, which are specified in the Condition element of an IAM policy, to allow or disallow access to Neptune resources based on the set conditions.
For example, you can create a policy statement with the aws:SourceIp condition key to limit access to specific source IP addresses or ranges of IP addresses. You can also create a policy statement using the aws:SecureTransport condition key to limit access to requests sent over a Secure Sockets Layer (SSL) connection. The policy statement is effective only when the specified conditions are true.
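As a hedged illustration of the second example, the policy below denies any Neptune data-plane request not sent over SSL/TLS; the account ID and cluster resource ID are placeholders.

```python
# Minimal sketch: an identity-based policy denying Neptune data-plane access
# over non-SSL connections. Account ID and cluster resource ID are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "neptune-db:*",
        "Resource": "arn:aws:neptune-db:us-east-1:123456789012:cluster-ABC1234/*",
        # aws:SecureTransport evaluates to false for requests made without SSL.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
```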
AWS Migration Hub now helps you orchestrate the migration of applications to AWS with the new Migration Hub Orchestrator feature. The scope of large migration projects generally involves selecting migration tools, step-by-step planning, and tracking the migration process across different tools and teams. Migration Hub Orchestrator provides predefined and customizable workflow templates that offer a prescribed set of migration tasks, migration tools, and automation opportunities. With Orchestrator, you can customize the templates, automate the migration of your applications, and track your progress in one place.
AWS Step Functions expands its AWS SDK integration with support for over 20 additional AWS SDK integrations and over 1,000 new AWS API actions.
AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using over 10,000 API actions from more than 200 AWS services. With this enhancement you can now integrate with new services such as AWS Panorama, a machine learning appliance and software development kit that brings computer vision to on-premises internet protocol cameras, improving supply chain visibility or monitoring congestion at airports.
AWS Key Management Service (AWS KMS) now lets you create KMS keys that can be used to generate and verify Hash-Based Message Authentication Codes (HMACs). HMACs are a powerful cryptographic building block that incorporates secret key material within a hash function to create a unique keyed message authentication code. HMAC KMS keys can only be generated and used within the FIPS 140-2 validated HSM security boundary in AWS KMS. This architecture minimizes the risk of these secret keys being compromised, in contrast to using plaintext HMAC keys in local application software.
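As a hedged sketch, the request shapes below show how an HMAC key is created and used with boto3's `kms` client; the key alias is a placeholder.

```python
# Minimal sketch: request shapes for creating an HMAC key and computing a MAC
# with boto3's kms client. The key alias below is a placeholder.
create_key_params = {
    "KeySpec": "HMAC_256",
    # HMAC keys can only be used to generate and verify MACs.
    "KeyUsage": "GENERATE_VERIFY_MAC",
}

generate_mac_params = {
    "KeyId": "alias/demo-hmac-key",
    "Message": b"message to authenticate",
    "MacAlgorithm": "HMAC_SHA_256",
}
# kms.create_key(**create_key_params) creates the key inside the HSM boundary;
# kms.generate_mac(**generate_mac_params) returns the MAC, and kms.verify_mac
# checks a candidate MAC without the key material ever leaving AWS KMS.
```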
This week, AWS announced the general availability of openCypher query language support with Amazon Neptune. Customers can now use openCypher with Amazon Neptune, giving them more choices to build or migrate graph applications to a highly available, secure, and fully managed graph database.
Customers like openCypher’s syntax, which is inspired by SQL, because it provides a familiar structure to compose queries for graph applications. With this week’s launch, Amazon Neptune now provides customers with the widest array of query language support including openCypher, Gremlin, and W3C SPARQL. Customers can use the openCypher and Gremlin query languages together over the same property graph data. Support for openCypher is compatible with the Bolt protocol, thus enabling customers to continue to run applications that use the Bolt protocol to connect to Neptune.
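As a hedged sketch of the query syntax, an openCypher query can be submitted to Neptune's HTTPS endpoint at `/openCypher` on port 8182; the endpoint, labels, and property names below are placeholders, and Bolt-protocol drivers can be used instead.

```python
# Minimal sketch: payload for Neptune's openCypher HTTPS endpoint
# (https://<cluster-endpoint>:8182/openCypher). Endpoint, labels, and
# property names are illustrative placeholders.
endpoint = "https://my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/openCypher"

query = """
MATCH (p:person)-[:knows]->(friend:person)
WHERE p.name = 'Alice'
RETURN friend.name
LIMIT 10
"""

# POSTing this payload to the endpoint (from inside the VPC) returns the
# matching results as JSON.
payload = {"query": query}
```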
AWS Security Hub has released support for cross-Region security scores and compliance statuses to enable a more complete view of your security posture across all of your accounts and Regions. Last year, Security Hub added support for cross-Region aggregation of findings. This release extends Security Hub's capabilities to now also support cross-Region security scores and compliance statuses, if you have set up an aggregation Region. The security scores for each standard and compliance statuses for each control in your aggregation Region will reflect a composite view across your linked Regions. Your security scores and compliance statuses in your administrator account and in your aggregation Region will reflect a composite view across all of your accounts and Regions.
Amazon Relational Database Service (Amazon RDS) now supports highly available configurations with Multi-AZ deployments for PostgreSQL and for MySQL on AWS Outposts in all commercial regions. Amazon RDS Multi-AZ on Outposts deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads.
When you provision a Multi-AZ database instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance on a second Outpost connected to a different Availability Zone (AZ). Each Outpost runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete.
AWS Controllers for Kubernetes (ACK) controllers for Amazon EKS, Amazon ECR, Amazon DynamoDB, Amazon S3, AWS Application Auto Scaling, and Amazon API Gateway v2 are now generally available.
ACK lets you define and use AWS service resources directly from Kubernetes. With ACK, you can take advantage of AWS managed services for your Kubernetes applications without needing to define resources outside of the cluster or run services that provide supporting capabilities like databases within the cluster.
Since the last update in January 2022, the AWS CloudFormation Registry has expanded to support 35 new resource types (refer to the complete list below). A resource type includes schema (resource properties and handler permissions) and handlers that allow API interactions with the underlying AWS or third-party services. Customers can now configure, provision, and manage the lifecycle of these newly supported resources as part of their cloud infrastructure through CloudFormation, by treating them as infrastructure as code. Furthermore, one new AWS service, AWS Billing Conductor, added CloudFormation support on the day of launch. CloudFormation now supports 170+ AWS services spanning over 900 resource types, along with over 40 third-party resource types.
Customers can now centrally discover the schema associated with these 35 new resource types on the CloudFormation Registry. With the addition of these resource types to the Registry, customers can also benefit from the resource import feature of CloudFormation. For example, if you create an AWS Billing Conductor Pricing Rule through the AWS Management Console or the Command Line Interface, you can bring that resource into CloudFormation’s management using the resource import feature.
For feedback on the resources for which you want CloudFormation support, please refer to the aws-cloudformation-coverage-roadmap.
Now you can configure, provision, and manage the following 35 resource types with CloudFormation.
Amazon Redshift now offers new enhancements for Audit Logging, which enable faster delivery of logs for analysis by minimizing latency, and add Amazon CloudWatch as a new log destination. With this release, customers can choose to stream audit logs directly to Amazon CloudWatch, which enables real-time monitoring.
Amazon Redshift provides customers the ability to generate audit logs to help meet security, compliance, and diagnostic needs. An internal AWS test showed that the new Audit Logging enhancements reduce the latency of delivering log data to Amazon S3 from up to 24 hours to less than 2 hours. By adding Amazon CloudWatch as a log destination, log delivery latency is further reduced to less than 2 minutes. You can enable audit logging to Amazon CloudWatch via the AWS Management Console, API, or CLI. If you change the log destination from Amazon S3 to Amazon CloudWatch, you can still query the log data in the Amazon S3 buckets where it resides, and you can still get your logs into an Amazon S3 bucket by using the CloudWatch export to Amazon S3 feature.
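As a hedged sketch, switching the log destination to CloudWatch uses the `LogDestinationType` and `LogExports` parameters of boto3's `redshift.enable_logging`; the cluster identifier is a placeholder.

```python
# Minimal sketch: routing Redshift audit logs to CloudWatch with boto3's
# redshift.enable_logging(**params). The cluster identifier is a placeholder.
def cloudwatch_logging_params(cluster_id: str) -> dict:
    return {
        "ClusterIdentifier": cluster_id,
        "LogDestinationType": "cloudwatch",
        # Which audit logs to stream to CloudWatch Logs.
        "LogExports": ["connectionlog", "userlog", "useractivitylog"],
    }

params = cloudwatch_logging_params("demo-cluster")
```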
Google Cloud Releases and Updates
AI Platform Training
Pre-built PyTorch containers for PyTorch 1.11 are available for training. You can use these containers to train with CPUs, GPUs, or TPUs.
Anthos Clusters on VMWare
Anthos clusters on VMware 1.10.3-gke.49 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.10.3-gke.49 runs on Kubernetes 1.21.5-gke.1200.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.10, 1.9, and 1.8.
Anthos Config Management
Added support for using Fleet Workload Identity to authenticate to Git repositories in Cloud Source Repositories. To learn more, see Grant Config Sync read-only access to Git.
Added a new --timeout flag to the nomos bugreport command. This flag configures the timeout for connecting to the cluster.
Config Sync ignores the hidden directories .gitlab, and the hidden file
App Engine Standard Env Python
Cloud Bigtable is available in the europe-west8 (Milan) region.
Cloud Data Fusion
Cloud Load Balancing
Backend subsetting for internal HTTP(S) load balancers improves performance and scalability by assigning a subset of backends to each of the proxy instances.
This feature is in Preview.
Cloud Spanner regional instances can now be created in the new Milan (europe-west8) region.
Cloud SQL for MySQL
Support for europe-west8 region (Milan).
Cloud SQL for PostgreSQL
Support for europe-west8 region (Milan).
Cloud SQL for SQL Server
Support for europe-west8 region (Milan).
Generally available: the Milan, Italy region (europe-west8-a,b,c) has launched with general-purpose E2, N2, and N2D VMs available in all three zones. See VM instance pricing for details.
Preview: You can now customize the number of visible CPU cores.
Config Controller is now supported in region
Filestore is now available in the Santiago, Chile (southamerica-west1) region for Basic HDD and Basic SSD instances.
Storage Transfer Service
Storage Transfer Service now provides more options for when to overwrite files that already exist in the destination. The new overwriteWhen field provides three options that apply to all transfers, including those to or from file systems.

NEVER provides defense in depth for archival cases, where data is not intended to be overwritten. Users no longer need to rely on a retention policy to protect their data.

DIFFERENT uses ETags and checksum values to only overwrite a file if the contents have changed.

ALWAYS overwrites any existing files with the same name, and avoids LIST operations on the destination when transferring into Cloud Storage.
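As a hedged sketch, the new field sits under transferOptions in a transferJob resource, as passed to the transferJobs.create REST method; the bucket names below are placeholders.

```python
# Minimal sketch: the transferOptions portion of a Storage Transfer Service
# transferJob, as passed to the transferJobs.create REST method.
# Bucket names are placeholders.
transfer_job = {
    "transferSpec": {
        "gcsDataSource": {"bucketName": "source-bucket"},
        "gcsDataSink": {"bucketName": "archive-bucket"},
        "transferOptions": {
            # One of: "NEVER" (archival, never overwrite), "DIFFERENT"
            # (overwrite only when ETag/checksum differs), "ALWAYS".
            "overwriteWhen": "DIFFERENT",
        },
    },
}
```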
You can now use a pre-built container to perform custom training with PyTorch 1.11.
Microsoft Azure Releases And Updates
VNet integration and private endpoints options now configurable when creating a web application.
Bring simplicity and flexibility to licensing when you purchase unlimited Windows Server guest licenses for your Azure Stack HCI cluster through your Azure subscription.
Segregate your backup data in different storage accounts with Archive across-accounts copy.
Azure Static Web Apps preview deployments now support stable URLs.
Reduce spending by storing cold data to Azure Archive Storage, now in new regions.
New Grafana integrations with Azure Monitor include the ability to pin Azure Monitor visualizations from Azure Portal to Grafana dashboards and new out-of-the-box Azure Monitor dashboards.
Have you tried Hava's automated diagrams for AWS, Azure and GCP? Get back your precious time and sanity and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so can be opened and individual resources inspected interactively, just like the live diagrams.
Check out the 14 day free trial here: