Here's a cloud round-up of all things Hava, GCP, Azure and AWS for the week ending Friday 29th July 2022.
This week at Hava we've rolled out a few under the hood performance tweaks which will make the generation and diagram loading process a lot quicker.
To stay in the loop, make sure you subscribe using the box on the right of this page.
Of course we'd love to keep in touch at the usual places. Come and say hello on:
AWS IoT SiteWise is now available in US East (Ohio) and Canada (Central) AWS Regions, extending the footprint to 12 AWS Regions.
AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize and monitor data from industrial equipment at scale to help you make better, data-driven decisions. You can use AWS IoT SiteWise to monitor operations across facilities, quickly compute common industrial performance metrics, and create applications that analyze industrial equipment data to prevent costly equipment issues and reduce gaps in production. This allows you to collect data consistently across devices, identify issues with remote monitoring more quickly, and improve multi-site processes with centralized data. With AWS IoT SiteWise, you can focus on understanding and optimizing your operations, rather than building costly in-house data collection and management applications.
You can now run OpenSearch and OpenSearch Dashboards version 1.3 on Amazon OpenSearch Service. This version includes several new features and improvements around observability, SQL and PPL, alerting, and anomaly detection. You can upgrade your domain seamlessly to OpenSearch version 1.3 from any of the previous OpenSearch versions, or directly from Elasticsearch versions 6.8 or 7.x, using the OpenSearch Service console or APIs.
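As a minimal sketch, the upgrade can also be driven programmatically. The payload below mirrors the parameters of the OpenSearch Service UpgradeDomain API as you would pass them to boto3's `opensearch` client (`client.upgrade_domain(**request)`); the domain name is a placeholder.

```python
# Illustrative only: parameter shape for the OpenSearch Service UpgradeDomain API.
# The domain name below is a placeholder.

def build_upgrade_request(domain_name, dry_run=True):
    """Build an UpgradeDomain request; with dry_run the service only checks upgrade eligibility."""
    return {
        "DomainName": domain_name,
        "TargetVersion": "OpenSearch_1.3",
        "PerformCheckOnly": dry_run,  # validate the upgrade path without executing it
    }

request = build_upgrade_request("my-domain")
```

Running the request first with `PerformCheckOnly` set lets you confirm the upgrade path is valid before committing the domain to it.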
OpenSearch is a community-driven, open source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 and Kibana 7.10.2. It consists of a data store and search engine (OpenSearch), and a user interface for visualization (OpenSearch Dashboards). See the frequently asked questions about OpenSearch.
AWS launched OpenSearch 1.0 in September 2021 and renamed the service to Amazon OpenSearch Service. As part of the OpenSearch 1.0 launch in OpenSearch Service, AWS maintained backward compatibility to ensure you can upgrade seamlessly from legacy Elasticsearch versions (up to v7.10) to OpenSearch. For a summary of the changes made as part of the OpenSearch 1.0 launch and the renaming, please see the documentation. AWS have added several new features and improvements across the OpenSearch versions since then, such as transforms, data streams, notebooks in OpenSearch Dashboards, cross-cluster replication, improvements to anomaly detection, k-NN, and observability. With OpenSearch 1.3 they are adding the following improvements and new features:
Observability: Users can now create custom Observability Applications to view the availability status of all their systems by providing a single view of system health, empowering developers and IT Ops to resolve issues faster and with fewer escalations. Some of the new features include the ability to get a unified view of logs, traces, and metrics with Application analytics dashboards, the ability to use correlation ID (based on the OpenTelemetry specification) to tie events together when viewing events and in-context visualisations, improvements to event analysis capabilities such as the ability to monitor a live tail of logs during an event, and view surrounding events to get a correlated picture of a metric. Refer to observability documentation for more information.
PPL and SQL Improvements: PPL now supports run-time fields, which gives users the ability to define their schema when querying their index (instead of formatting during writing), allowing them to improve indexing time and providing better presentation flexibility. In addition, there are several improvements to SQL and PPL capabilities, such as support for the ORDER BY and IN clauses, the ability to query multiple indices using comma-separated values in PPL, and the ability to change datatypes using the CAST function in PPL. For a complete list of PPL and SQL improvements, see the OpenSearch 1.3 release notes. Refer to the PPL commands documentation and SQL queries documentation for more information.
OpenSearch 1.3 is now available on Amazon OpenSearch Service across 26 regions globally.
AWS Backup now supports copying Amazon S3 backups across AWS Regions and accounts
AWS Backup for Amazon S3 now enables you to copy your Amazon S3 backups across AWS Regions and AWS accounts. With backups of Amazon S3 in multiple AWS Regions, you can maintain separable, protected copies of your backup data to help meet the compliance requirements for data protection and disaster recovery. In addition, backups across AWS accounts provide an additional layer of protection against inadvertent or unauthorized actions.
AWS Backup is a policy-based, fully managed, cost-effective solution that enables you to centralize and automate data protection across 15 AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity. You can now use the AWS Backup console, API, or CLI to create copies of your Amazon S3 backups across AWS Regions and accounts as part of your backup plan schedule or on demand. Using your AWS Organizations management account, you can designate accounts in AWS Organizations as secondary backup accounts, allowing backups to be copied only to trusted accounts.
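To make the copy mechanism concrete, here is a hedged sketch of a backup plan rule that schedules S3 backups and copies them to a vault in another Region and account, following the documented CreateBackupPlan request shape. Vault names, the schedule, and the destination ARN are all placeholders.

```python
# Sketch of an AWS Backup plan rule that copies S3 backups to a vault in another
# Region and (trusted) account, following the CreateBackupPlan request shape.
# Vault names and the destination ARN are placeholders.
backup_plan = {
    "BackupPlanName": "s3-daily-with-cross-region-copy",
    "Rules": [
        {
            "RuleName": "daily-s3-backup",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "CopyActions": [
                {
                    # Vault in a different Region and account
                    "DestinationBackupVaultArn": (
                        "arn:aws:backup:eu-west-1:222222222222:backup-vault:secondary-vault"
                    ),
                    "Lifecycle": {"DeleteAfterDays": 90},  # expire copies after 90 days
                }
            ],
        }
    ],
}
```

In practice this dict would be passed to boto3's `backup` client via `client.create_backup_plan(BackupPlan=backup_plan)`.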
AWS were excited to announce the support for encryption at rest for datasets and machine learning (ML) models on Amazon SageMaker Canvas using customer managed keys with AWS Key Management Service (KMS). Amazon SageMaker Canvas is a visual point-and-click interface that enables business analysts to generate accurate ML predictions on their own — without requiring any machine learning experience or having to write a single line of code. SageMaker Canvas makes it easy to access and combine data from a variety of sources, automatically clean data, and build ML models to generate accurate predictions with a few clicks.
With this new capability, SageMaker Canvas now provides you flexibility and control to use your own keys to encrypt the file systems on the instances used to train models and generate insights, and the model data in your Amazon S3 bucket. You can create, import, rotate, disable, delete, define usage policies for, and audit the use of your encryption keys. This adds an additional layer of security to protect your data and your ML models.
Encryption with customer managed keys is supported for imported datasets, ML model artifacts, and batch predictions for regression, multi-class classification, and binary classification models, with support for time series forecasting models coming later. Encryption support with Amazon SageMaker Canvas is available in all AWS regions where SageMaker Canvas is supported.
Amazon OpenSearch Service now supports Amazon Elastic Block Store (Amazon EBS) volume type gp3 (General Purpose SSD), in addition to the existing gp2, Magnetic and Provisioned IOPS (io1) volumes. You can use gp3 volumes on the latest generation T3, R5, R6g, M5, M6g, C5 and C6g instance families. Amazon EBS gp3 enables customers to provision performance independent of storage capacity and provides better baseline performance at a 9.6% lower price per GB than existing gp2 volumes on OpenSearch Service. In addition, with gp3 you now get denser storage on the R5, R6g, M5 and M6g instance families, which can help you further optimize your costs.
AWS customers can scale IOPS (input/output operations per second) and throughput without needing to provision additional block storage capacity, paying only for the resources they need. gp3 volumes deliver a baseline performance of 3,000 IOPS and 125 MB/s throughput at any volume size. In addition, OpenSearch Service provisions additional IOPS and throughput for larger volumes for optimal performance. When the application needs more performance, customers can scale up to 16,000 IOPS and 1,000 MB/s throughput for an additional fee. With this release, AWS have also increased the maximum volume size supported per instance for gp3 by 100% compared with gp2 for the R5, R6g, M5 and M6g instance families.
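The performance range above can be sketched as a small helper that builds the `EBSOptions` block for an `update_domain_config` call, validating requested IOPS and throughput against the published gp3 limits. Volume size and performance values below are placeholders.

```python
# Sketch: switching an OpenSearch Service domain's storage to gp3 with provisioned
# performance, as you would pass to update_domain_config. Values are placeholders.
GP3_BASELINE_IOPS, GP3_MAX_IOPS = 3000, 16000
GP3_BASELINE_THROUGHPUT, GP3_MAX_THROUGHPUT = 125, 1000  # MB/s

def gp3_ebs_options(volume_size_gb, iops, throughput_mbps):
    """Build EBSOptions for gp3, validating against the published performance range."""
    if not (GP3_BASELINE_IOPS <= iops <= GP3_MAX_IOPS):
        raise ValueError("gp3 IOPS must be between 3,000 and 16,000")
    if not (GP3_BASELINE_THROUGHPUT <= throughput_mbps <= GP3_MAX_THROUGHPUT):
        raise ValueError("gp3 throughput must be between 125 and 1,000 MB/s")
    return {
        "EBSEnabled": True,
        "VolumeType": "gp3",
        "VolumeSize": volume_size_gb,
        "Iops": iops,
        "Throughput": throughput_mbps,
    }

ebs_options = gp3_ebs_options(512, 6000, 250)
```

The resulting dict would be passed as the `EBSOptions` parameter of boto3's `opensearch` client `update_domain_config` call.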
AWS ParallelCluster 3.2 release is now generally available. AWS ParallelCluster is a fully supported and maintained open source cluster management tool that makes it easy for scientists, researchers, and IT administrators to deploy and manage High Performance Computing (HPC) clusters in the AWS cloud. HPC clusters are collections of tightly coupled compute, storage, and networking resources that enable customers to run large scale scientific and engineering workloads.
Significant feature enhancements in AWS ParallelCluster 3.2 include:
Multiple file systems mounts. Customers can now mount up to 20 Amazon FSx and 20 Amazon EFS existing file systems to a single cluster.
Support for additional Amazon FSx file systems. Customers can now mount existing Amazon FSx for OpenZFS, Amazon FSx for NetApp ONTAP, and Amazon FSx for Lustre Persistent_2 file systems to their clusters. The ParallelCluster cluster configuration file now provides additional sections to configure these file systems.
Memory-aware job scheduling with Slurm. Customers can now leverage Slurm’s memory-based scheduling functionality to specify memory requirements on job submission, and control placement of compute instances that match those memory requirements.
Extending flexibility to dynamically update clusters. Customers can now dynamically update job queue properties such as AMI without having to stop and start their clusters. For example, through a simple CLI command, you can hot swap an AMI while controlling the cycling of compute nodes.
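As a rough illustration of the new file system sections, a SharedStorage block in a ParallelCluster 3.2 configuration mounting existing FSx for NetApp ONTAP and FSx for OpenZFS volumes might look like the following sketch; mount paths, names and volume IDs are placeholders, so check the ParallelCluster configuration reference for the exact schema.

```yaml
# Sketch of a ParallelCluster 3.2 SharedStorage section mounting existing
# FSx for NetApp ONTAP and FSx for OpenZFS volumes (IDs are placeholders).
SharedStorage:
  - MountDir: /shared/ontap
    Name: existing-ontap-volume
    StorageType: FsxOntap
    FsxOntapSettings:
      VolumeId: fsvol-0123456789abcdef0
  - MountDir: /shared/openzfs
    Name: existing-openzfs-volume
    StorageType: FsxOpenZfs
    FsxOpenZfsSettings:
      VolumeId: fsvol-0abcdef1234567890
```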
AWS ParallelCluster is available at no additional cost, and you pay only for the AWS resources needed to run your applications. Learn how to launch an HPC cluster using AWS ParallelCluster here.
For more detail you can find the complete release notes for the latest version of AWS ParallelCluster here.
AWS Control Tower has updated its Region deny guardrail to include additional AWS global service APIs to assist in retrieving configuration settings, dashboard information, and support for an interactive chat agent. The Region deny guardrail, ‘Deny access to AWS based on the requested AWS Region', assists you in limiting access to AWS services and operations for enrolled accounts in your AWS Control Tower environment. The AWS Control Tower Region deny guardrail helps ensure that any customer data you upload to AWS services is located only in the AWS Regions that you specify. You can select the AWS Region or Regions in which your customer data is stored and processed.
Additions to the Region deny exemptions list include select APIs for AWS Chatbot, Amazon S3 Storage Lens, and Amazon S3 Multi-Region Access Points. To see a full list of API exemptions, please see the Region deny guardrail policy. The new Region deny guardrail is available when you update your AWS Control Tower landing zone to version 3.0.
The Region deny feature complements existing Region selection and Region deselection features in AWS Control Tower. Together, these features help you to address compliance and regulatory concerns, while balancing the costs associated with expanding into additional Regions. You can select restricted Regions during the AWS Control Tower set up process, or in the Landing zone settings page. To learn more, see Configure the Region deny guardrail. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
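To show the shape of such a control, here is an illustrative SCP-style region-deny policy, similar in spirit to the Control Tower guardrail but not the exact managed policy: it denies all actions outside the allowed Regions except for exempted global-service APIs. The exempt action patterns below are examples, not the guardrail's actual exemption list.

```python
# Illustrative SCP-style region-deny policy (not the exact Control Tower policy):
# deny everything outside the allowed Regions, except exempted global-service APIs.
def region_deny_policy(allowed_regions, exempt_actions):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "RegionDeny",
                "Effect": "Deny",
                "NotAction": exempt_actions,  # global-service APIs left reachable
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }

policy = region_deny_policy(
    allowed_regions=["us-east-1", "eu-west-1"],
    exempt_actions=["chatbot:Describe*", "s3:GetStorageLens*"],  # example exemptions
)
```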
AWS Global Accelerator now offers dual-stack accelerators that enable you to route IPv6 traffic to Regional Application Load Balancer endpoints. Starting this week, you can get the availability, security and performance benefits of AWS Global Accelerator for both IPv4 and IPv6 traffic while routing traffic towards Application Load Balancer endpoints.
With dual-stack accelerators, you can route IPv6 traffic, together with IPv4 traffic, towards dual-stack Application Load Balancer endpoints.
With dual-stack accelerators, you will get two static IPv6 addresses, in addition to the two static IPv4 addresses. You can choose a dual-stack accelerator when you set up a new accelerator, or you can update an existing IPv4-only accelerator to dual-stack, using standard AWS tools, including the AWS Management Console, AWS SDK, or the AWS CLI. When you update an existing accelerator, there's no change to the IPv4 specifications of the accelerator. You can monitor your IPv6 traffic in addition to the IPv4 traffic with existing Global Accelerator CloudWatch metrics.
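A minimal sketch of the conversion: the payload below mirrors the UpdateAccelerator parameters you would pass to boto3's `globalaccelerator` client (`client.update_accelerator(**update_request)`) to switch an existing IPv4-only accelerator to dual-stack. The ARN is a placeholder.

```python
# Sketch: converting an existing accelerator to dual-stack via UpdateAccelerator.
# The accelerator ARN is a placeholder.
update_request = {
    "AcceleratorArn": (
        "arn:aws:globalaccelerator::111111111111:accelerator/"
        "00000000-0000-0000-0000-000000000000"
    ),
    "IpAddressType": "DUAL_STACK",  # adds two static IPv6 addresses; IPv4 unchanged
}
```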
AWS Control Tower now helps reduce redundant AWS Config configuration items by limiting recording of global resources to home Regions only. Previously, AWS Control Tower configured AWS Config to record global resources in all Regions. Since global resources are not tied to a specific AWS Region, changes to global resources are identical across Regions. Limiting recording for global resources (such as IAM users, groups, roles, and customer managed policies) means redundant copies of global resource changes are no longer stored in every Region.
This update brings resource recording into conformance with AWS Config best practices. A full list of global resources is available in AWS Config documentation.
Existing AWS Control Tower landing zones can adopt this change by first updating to the latest landing zone version, then re-registering each Organizational Unit.
Accounts that are not enrolled with AWS Control Tower will be unaffected by this change. You can enroll accounts in AWS Control Tower through single account enrollment or extended governance. After enrolling new accounts or updating your existing accounts, global resources will only be recorded in the home Region selected during AWS Control Tower landing zone set up.
Amazon WorkSpaces now offers an API to create a new WorkSpace Image from a WorkSpace instance. Previously, this functionality was available only through the Amazon WorkSpaces console. After this launch, you can apply all the applications and operating system updates on a WorkSpace and use this API to create a new Image.
Once the new image is created, you can test it before updating your production bundles or sharing the image with other AWS accounts. With this launch you can fully automate your WorkSpaces CI/CD pipelines and keep your WorkSpaces images up-to-date as per your regulatory standards.
This API is now available in all AWS Regions where Amazon WorkSpaces is available.
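To sketch how the new API slots into a pipeline, the payload below mirrors the parameters of the CreateWorkspaceImage request as passed to boto3's `workspaces` client (`client.create_workspace_image(**image_request)`); the image name, description, WorkSpace ID, and tags are all placeholders.

```python
# Sketch of a CreateWorkspaceImage request for a WorkSpaces CI/CD pipeline.
# Names and IDs are placeholders.
image_request = {
    "Name": "golden-image-2022-07",
    "Description": "Patched base image built from a maintained WorkSpace",
    "WorkspaceId": "ws-0123456789abcdef0",  # the source WorkSpace to image
    "Tags": [{"Key": "pipeline", "Value": "workspaces-ci"}],
}
```

A pipeline would typically apply OS and application updates to the source WorkSpace, call this API, then test the resulting image before updating production bundles.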
You can now create up to 10,000 Amazon S3 Access Points per region per account to manage granular access permissions across your different applications. In addition, access points now support Amazon SageMaker, Amazon Redshift, and Amazon CloudFront, helping you use access point aliases directly with your applications as a replacement for S3 bucket names.
S3 Access Points help you more easily configure the right access controls for your shared datasets, simplifying access management for multiple applications. Each access point has its own policy that defines which requests and VPCs are allowed to use the access point.
With up to 10,000 access points, you can now easily scale access management to thousands of use cases. For example, you can create access points with tailored read or write access for each team within your organization, or limit access to a bucket through access points that are restricted to a VPC.
Each of these S3 Access Points has an access point alias automatically generated that you can use to access your S3 data with AWS services such as Amazon EMR or Amazon Redshift. For example, with the added support for Amazon SageMaker Feature Store, your data scientists can manage Machine Learning (ML) features for their ML models using access points that give them access to the required data sets, without needing bucket policies.
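As a small sketch of alias usage: access point aliases end in `-s3alias` and can be used anywhere a bucket name is accepted, for example in a `get_object` call via boto3's `s3` client (`client.get_object(**get_object_request)`). The alias and object key below are placeholders.

```python
# Sketch: using an S3 Access Point alias in place of a bucket name.
# The alias and key are placeholders.
ACCESS_POINT_ALIAS = "ml-features-ap-hrzrlukc5m36ft7okagglf3gmwluquse1b-s3alias"

get_object_request = {
    "Bucket": ACCESS_POINT_ALIAS,  # the alias substitutes for the bucket name
    "Key": "feature-store/training/features.parquet",
}
```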
You can use S3 Access Points with AWS services such as Amazon EMR, Amazon SageMaker, Amazon Redshift, and Amazon CloudFront at no additional cost in all AWS Regions, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD.
You can now configure fine grained access control for data plane actions when using AWS Identity and Access Management (IAM) to connect to Amazon Neptune.
Amazon Neptune is a fast, reliable, and fully managed graph database service that helps customers build applications for fraud detection, identity resolution, knowledge management, and security posture assessment using highly connected datasets.
Starting with Neptune's latest engine release, you can provide fine-grained access to users accessing Neptune data plane APIs with IAM, covering graph-data actions such as reading, writing, and deleting data from the graph, and non-graph-data actions such as starting and monitoring Neptune ML activities and checking the status of ongoing data plane activities. For example, you can create a policy with 'read only' access for data analysts who do not need to manipulate the graph data, a policy with 'read and write' access for developers using the graph in their applications, and a policy for data scientists who need access to Neptune ML commands.
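A minimal sketch of the 'read only' case: an IAM policy document allowing only read-oriented `neptune-db` data-plane actions. The resource ARN (account, Region, cluster resource ID) is a placeholder, and the action names should be checked against the Neptune documentation for your engine version.

```python
# Illustrative read-only IAM policy for Neptune data-plane access using
# fine-grained neptune-db actions. The resource ARN is a placeholder.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "NeptuneReadOnlyQueries",
            "Effect": "Allow",
            "Action": [
                "neptune-db:ReadDataViaQuery",  # run read-only graph queries
                "neptune-db:GetQueryStatus",    # check status of running queries
            ],
            "Resource": "arn:aws:neptune-db:us-east-1:111111111111:cluster-ABC123/*",
        }
    ],
}
```

A 'read and write' policy for developers would add the corresponding write and delete actions to the same statement shape.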
Amazon Web Services (AWS) announces expansion of AWS Ground Station to the Asia Pacific (Singapore) Region. This is the 11th AWS Ground Station antenna location connected to the AWS Global Network. AWS Ground Station is a fully managed service that lets customers control satellite communications, process satellite data, and scale satellite operations. Global expansion to Singapore enables increased opportunities for satellite operators to connect with their satellites and process their space workloads. An additional mid-latitude AWS Ground Station antenna location reduces the time between contacts for Low-Earth Orbit (LEO) satellites and offers increased utility for customers whose operations require payload downlink. Governments, businesses, and universities can benefit from more timely satellite data to make precise, data-driven decisions.
With AWS Ground Station, you pay only for the antenna time that you use. Cross-Region data delivery is included in the pricing, enabling customers to either stream satellite data from any antenna location to Amazon EC2 for real-time processing or store data directly in Amazon S3. Additionally, customers can easily integrate their space workloads with other AWS services in near real-time using Amazon's low-latency, high-bandwidth global network. For example, customers who downlink terabytes of data daily can easily access AWS services such as Amazon SageMaker to quickly derive useful information.
AWS AppSync is a fully managed service that makes it easy to create and manage GraphQL and Pub/Sub APIs, allowing developers to securely access, manipulate, and combine data from one or more data sources via a single API endpoint. With GraphQL, developers write resolvers that fetch data from backend data sources such as Amazon DynamoDB, AWS Lambda, HTTP APIs, and more. To “resolve” a GraphQL query at run-time, AppSync evaluates the resolver code with the contextual information about the query (e.g.: the context). AppSync resolvers are written in the Velocity Template Language (VTL) and support flexible integrated utilities that allow developers to parse (e.g.: $util.parseJson), convert (e.g.: $util.toJson), generate (e.g.: $util.autoId and $util.autoUlid), and log data (e.g.: $util.log).
This week, AWS are releasing a new API command for AWS AppSync, EvaluateMappingTemplate, that allows developers to evaluate their resolver and function mapping templates. Previously, this functionality was only available in the AWS AppSync console. Developers can now access this functionality remotely by using the latest version of the AWS CLI, or by using the latest version of the AWS SDKs. Developers can leverage the EvaluateMappingTemplate command to write unit tests that verify the behavior of their resolvers in their favorite testing frameworks.
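As a hedged sketch of the new command's inputs: `evaluate_mapping_template` on boto3's `appsync` client takes the VTL template and a JSON string mirroring the run-time context. The DynamoDB GetItem template and the context below are minimal illustrative examples, not taken from a real API.

```python
import json

# Sketch of an EvaluateMappingTemplate request for boto3's appsync client
# (client.evaluate_mapping_template(**evaluate_request)).
# Template and context are minimal examples for a DynamoDB GetItem resolver.
template = """
{
  "version": "2018-05-29",
  "operation": "GetItem",
  "key": {"id": $util.dynamodb.toDynamoDBJson($ctx.args.id)}
}
"""

# The context is a JSON string mirroring what AppSync injects at run-time.
context = json.dumps({"arguments": {"id": "123"}, "source": {}})

evaluate_request = {"template": template, "context": context}
```

A unit test would call the API with this request and assert on the evaluated template returned in the response.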
Amazon Neptune now supports Global Database, allowing a single Neptune database to span multiple AWS Regions to provide disaster recovery in case of region-wide outages and enable low-latency global reads for applications with a global footprint. Neptune Global Database is available in the US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), and Asia Pacific (Tokyo) regions.
Amazon Neptune is a fast, reliable, fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. Neptune Global Database uses fast, storage-based replication across regions with latencies typically less than one second, using dedicated infrastructure with no impact to your workload's performance. In the unlikely event of a regional degradation or outage, one of the secondary regions can be promoted to full read/write capabilities. You can have up to five secondary regions with Global Database, and each secondary region can have up to 16 replica instances.
Amazon Connect now allows agents to view Contact Lens transcripts, detected issues, and matched categories in the Contact Control Panel (CCP) and the Salesforce CTI Adapter. At the end of a customer call, an agent will see an unredacted call transcript they can reference and copy needed information from into their customer or case notes. The transcript will display contact category labels and issues detected by Contact Lens once the call ends. In addition, if an agent receives a transferred call, they will see a transcript of the prior agent's conversation with the customer, so that they can understand the context of that interaction without the customer needing to repeat themselves.
This feature is available in all AWS regions where Contact Lens is offered and can be turned on via Security Profiles.
AWS Outposts rack is now supported in AWS Asia Pacific (Jakarta) Region. AWS Outposts rack is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or co-location space for a truly consistent hybrid experience.
Organizations from startups to enterprise and the public sector in and outside of Indonesia can now connect their Outposts to the AWS Asia Pacific (Jakarta) Region. AWS Outposts rack allows customers to run workloads that need low latency access to on-premises systems locally while connecting back to the AWS Asia Pacific (Jakarta) Region for application management. Customers can also use Outposts rack and AWS services to manage and process data that needs to remain on premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.
Starting this week, AWS customers can use the Organizational Units (OUs) account groupings feature within AWS Organizations when creating their billing groups in the AWS Billing Conductor (ABC) console. For customers who are new to ABC and interested in segmenting, computing, and viewing their cost and usage data by OU, the new point-in-time OU import capability reduces the level of effort needed to achieve account parity between OUs and ABC billing groups.
Amazon Polly is a service that turns text into lifelike speech. This week, AWS are excited to announce the general availability of Kajal, a new bilingual neural text to speech (TTS) voice supporting Hindi and Indian English.
TTS voices simplify the way you can create, implement, update, and maintain your speech-enabled applications and products. You can use Amazon Polly to enhance the user experience and improve the accessibility of your text content with the power of voice. Common use cases include interactive voice response (IVR) systems, audiobooks, newsreaders, eLearning content, and virtual assistants.
Amazon Polly already offers a bilingual TTS voice supporting Hindi and Indian English: a standard TTS voice called Aditi. With the just-released voice Kajal you can, similarly to Aditi, synthesize speech in both English and Hindi, and the voice can switch between the two languages even within the same sentence. With the release of Kajal, AWS are not only extending the voice selection for Hindi and Indian English but also enabling neural voice experiences for two additional languages.
AWS Step Functions expands its AWS SDK integrations with support for 3 more AWS Services and 195 more AWS API actions, bringing the total to 223 AWS Services and over 10,000 API actions.
AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using direct integrations with 223 AWS Services. Now with support for Amazon Pinpoint API 2.0, an outbound and inbound marketing communications service, you can directly manage your SMS and voice setup as part of a workflow.
With support for AWS Billing Conductor, you can customize billing data and reporting as part of your workflow in a way that aligns with your business logic or business units. We have also added support for Amazon GameSparks, a managed service providing backend feature tools for building, running, and scaling games. In addition to these new AWS Services, you can now call additional API actions from already supported services such as AWS Glue, which is a serverless data integration service that makes it easy to discover, prepare, and combine data.
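To give a feel for how an SDK integration is invoked, here is a sketch of an Amazon States Language Task state calling the Pinpoint SMS integration. The service/action segment (`pinpointsmsvoicev2:sendTextMessage`) and the parameters are assumptions for illustration; check the Step Functions documentation for the exact integration name and parameter names.

```python
# Sketch of an Amazon States Language Task state using an AWS SDK integration.
# The service/action segment and parameters are illustrative assumptions.
send_sms_state = {
    "Type": "Task",
    "Resource": "arn:aws:states:::aws-sdk:pinpointsmsvoicev2:sendTextMessage",
    "Parameters": {
        "DestinationPhoneNumber": "+15555550100",  # placeholder number
        "MessageBody": "Your order has shipped.",
    },
    "End": True,
}
```

The same `arn:aws:states:::aws-sdk:service:apiAction` resource pattern applies to the other newly supported services and actions.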
The enhancements to AWS SDK integrations make it easy to build workflows that use a variety of AWS services with AWS Step Functions. These enhancements are now generally available in the following regions: US East (Ohio and N. Virginia), US West (Oregon), Canada (Central), Europe (Ireland and Milan), Africa (Cape Town) and Asia Pacific (Tokyo). Support will roll out to the remaining commercial regions where Step Functions is available in the coming weeks.
AWS are excited to announce specialization categories for the AWS Level 1 MSSP Competency. These six new specialized managed security services categories help customers discover partner solutions, validated by AWS security experts, that provide 24x7 monitoring and response services which include and extend beyond AWS's Level 1 Managed Security Services (Level 1 MSS) baseline. In August 2021, AWS introduced the Level 1 MSS baseline, detailing ten foundational capabilities for MSSP partners to align their managed services to, along with the Level 1 MSSP Competency, establishing an industry-first quality standard against which customers can measure their security operations.
The six new specialization categories, each including the ten foundational Level 1 MSS baseline services, are: identity behavior monitoring; data privacy event management; modern compute security monitoring for containers and serverless technologies; managed application security testing; digital forensics and incident response support; and business continuity and ransomware readiness to recover from potentially disruptive events. With these additional areas of specialization, customers can confidently get the help they need from validated Level 1 MSSP Competency Partners for a holistic managed security service tailored to their uniquely challenging environments.
AWS Security Hub has added two new integration partners to help customers with their cloud security posture monitoring.
Integrations with Fortinet and JFrog bring Security Hub to 81 partner and AWS service integrations. Fortinet’s FortiCNP now receives findings from Security Hub and combines them with Fortinet product findings and resource context. This in turn allows FortiCNP to produce actionable insights and prioritize security insights based on risk score to reduce alert fatigue and accelerate remediation. JFrog sends findings to Security Hub from Xray, which is a universal application security Software Composition Analysis (SCA) tool that enables a secure software supply chain through continuous scanning of binaries for license compliance and security vulnerabilities.
Security Hub is available globally and is designed to give you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Systems Manager Patch Manager, AWS Config, AWS Health, AWS IAM Access Analyzer, as well as from over 65 AWS Partner Network (APN) solutions.
You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard. In addition, you can take action on these findings by investigating findings in Amazon Detective or AWS Systems Manager OpsCenter or by sending them to AWS Audit Manager or AWS Chatbot. You can also use Amazon EventBridge rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), response and remediation workflows, and incident management tools.
AWS Config now supports compliance scores as an enhancement to conformance packs. A compliance score is a percentage-based score that helps you quickly discern the level to which your resources are compliant for a set of requirements that are captured within the scope of a conformance pack. A conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an AWS account or AWS Region, or across an organization in AWS Organizations.
A compliance score is calculated based on the number of rule-to-resource combinations that are compliant within the scope of a conformance pack. For example, a conformance pack with 5 rules applying to 5 resources has 25 (5x5) possible rule-resource combinations. If 2 resources are not compliant with 2 rules, the compliance score would be 84%, indicating that 21 out of 25 rule-resource combinations are currently in compliance. Further, compliance scores are emitted to Amazon CloudWatch metrics, which allows for tracking over time. Compliance scores offer a consistent measurement to track remediation progress, perform comparisons across different sets of requirements, and see the impact a specific change or deployment has on your compliance posture.
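The worked example above can be expressed as a short calculation: the score is the share of compliant rule-to-resource combinations out of all combinations.

```python
# Compliance score = compliant rule-resource combinations / total combinations.
def compliance_score(num_rules, num_resources, noncompliant_pairs):
    total = num_rules * num_resources
    compliant = total - noncompliant_pairs
    return 100 * compliant / total

# Example from the text: 5 rules x 5 resources = 25 combinations; 2 resources
# failing 2 rules = 4 non-compliant combinations, leaving 21 compliant.
score = compliance_score(5, 5, 4)  # 21/25 -> 84.0
```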
Amazon Detective now helps to analyze, investigate, and identify the root cause of security findings or suspicious control plane activity on Amazon Elastic Kubernetes Service (Amazon EKS) clusters. Amazon Detective uses Amazon EKS audit logs to automatically extract new entities, such as EKS clusters, container pods, and user accounts, and then builds a profile for each of the entities based on their activity history. Detective then layers the entity profiles with Amazon GuardDuty Kubernetes Protection findings that are created when potential threats or suspicious behavior are identified on your Amazon EKS clusters.
This new Detective capability can help you more quickly answer questions such as: which Kubernetes API methods were called by a Kubernetes user account showing signs of compromise, which pods are hosted in an Amazon Elastic Compute Cloud (Amazon EC2) instance that was included in an Amazon GuardDuty finding, or which containers were spawned from a potentially malicious container image.
Amazon EKS audit logging provides audit and diagnostic logs that make it easier for you to secure and run your Amazon EKS clusters.
Starting this week, you can enable Amazon EKS audit logs as a new data source in Amazon Detective with one click in the AWS Management Console. Amazon Detective automatically analyzes these logs to monitor anomalous actions, identify security issues as they occur within your Amazon EKS cluster, and help you answer questions like: What are the details of a security event? When did it happen? Who initiated it? To further simplify your security investigation, clicking on Amazon GuardDuty Kubernetes Protection findings in the Amazon GuardDuty console starts a guided investigative experience that can help you identify the root cause of the finding, evaluate the potential impact on other resources, and deliver contextual details that help your application and operations teams respond more quickly. To read more about Amazon Detective support for Amazon EKS, see the Amazon Detective User Guide.
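For those enabling the data source programmatically rather than in the console, the request looks roughly like the sketch below. The graph ARN is a placeholder and the datasource package name is our assumption — verify both against the current Amazon Detective API reference:

```python
# Request shape for Detective's UpdateDatasourcePackages API. With boto3
# this would be invoked as:
#   boto3.client("detective").update_datasource_packages(**enable_eks_request)
GRAPH_ARN = "arn:aws:detective:us-east-1:111122223333:graph:example"  # placeholder

enable_eks_request = {
    "GraphArn": GRAPH_ARN,
    "DatasourcePackages": ["EKS_AUDIT"],  # assumed name of the new EKS audit log source
}
print(enable_eks_request["DatasourcePackages"])  # → ['EKS_AUDIT']
```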
The first 30 days of enabling EKS audit logs as a data source in Detective are available at no additional charge for existing Detective accounts. For new accounts, EKS audit logs as a data source is automatically enabled, and is part of the 30-day Amazon Detective free trial. During the trial period, you can see what the estimated cost of running the service will be after the trial period ends in the Detective Management Console.
AWS Security Hub now automatically receives Amazon GuardDuty Malware Protection findings. Amazon GuardDuty Malware Protection delivers agentless detection of malware on your Amazon Elastic Compute Cloud (Amazon EC2) instance and container workloads. This integration between Security Hub and GuardDuty expands the centralization and single pane of glass experience in Security Hub by consolidating your malware findings alongside your other security findings, allowing you to more easily search, triage, investigate, and take action on your security findings. GuardDuty Malware Protection findings within Security Hub also contain an investigation link that allows you to quickly dive deeper to investigate the finding in Amazon Detective.
Available globally, AWS Security Hub gives you a comprehensive view of your security posture across your AWS accounts. With Security Hub, you now have a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, and AWS IAM Access Analyzer, as well as from over 65 AWS Partner Network (APN) solutions. You can also continuously monitor your environment using automated security checks based on standards, such as AWS Foundational Security Best Practices, the CIS AWS Foundations Benchmark, and the Payment Card Industry Data Security Standard.
You can also take action on these findings by investigating findings in Amazon Detective and by using Amazon CloudWatch Event rules to send the findings to ticketing, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or to custom remediation playbooks.
You can enable your 30-day free trial of AWS Security Hub with a single click in the AWS Management Console. Please see the AWS Regions page for all the regions where AWS Security Hub is available.
AWS WAF now supports setting sensitivity levels for SQL injection (SQLi) rule statements, giving you greater control over how AWS WAF evaluates requests to your applications for SQLi attacks.
A SQLi attack involves inserting malicious SQL code into web requests to extract data from or cause harm to your database. AWS WAF offers a SQLi rule statement that detects SQLi signatures in the web request. Today, AWS WAF is introducing two sensitivity level settings for SQLi rules: HIGH and LOW. Sensitivity levels allow you to define how aggressively the SQLi rule statement is enforced. All existing SQLi rule statements will default to LOW sensitivity, which will not change your existing rule evaluation logic. The HIGH setting uses additional SQLi signatures to detect more SQLi attacks and is the recommended setting. Note that with this setting, WAF will block SQLi patterns more aggressively, which can generate more false positives.
You can start using SQLi sensitivity levels by creating a new rule or configuring an existing rule using the custom rule creation wizard and selecting a sensitivity level. When a request is evaluated by the SQLi rule, AWS WAF will apply the SQLi rule according to the sensitivity level you configured. WAF logs now also include a ‘sensitivitylevel’ field for easier identification and tracking. AWS WAF uses web ACL capacity units (WCUs) to measure the operating resources required to run your rules. High-sensitivity SQLi rules consume 30 WCUs, while low-sensitivity SQLi rules will continue to consume 20 WCUs. There is no additional cost to using the sensitivity level setting for SQLi rules, but standard service charges for AWS WAF still apply.
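The WCU accounting described above (HIGH = 30 WCUs, LOW = 20 WCUs) can be sketched as a small capacity-budget helper; the function and its names are our own illustration, not a WAF API:

```python
# WCUs consumed per SQLi rule statement, by sensitivity level,
# per the AWS WAF announcement above.
SQLI_WCU = {"HIGH": 30, "LOW": 20}

def web_acl_sqli_wcus(rule_sensitivities):
    """Total WCUs consumed by a web ACL's SQLi rule statements."""
    return sum(SQLI_WCU[level] for level in rule_sensitivities)

# One rule upgraded to HIGH plus two legacy LOW rules:
print(web_acl_sqli_wcus(["HIGH", "LOW", "LOW"]))  # → 70
```

This kind of tally is useful when checking a web ACL against its overall WCU limit before upgrading rules to HIGH sensitivity.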
You can start using sensitivity levels for SQLi rules in all regions and for all supported services, including Amazon CloudFront, Application Load Balancer, Amazon API Gateway, and AWS AppSync. AWS WAF is a web application firewall that helps protect your web application or API from common web exploits and malicious bots. For detailed information, see the AWS WAF developer documentation. See the AWS WAF Pricing page for pricing details. AWS Firewall Manager is a security management service that enables you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. Firewall Manager supports configuring sensitivity levels for SQL injection rules.
Amazon RDS Proxy, a fully managed, highly available database proxy for Amazon Relational Database Service (RDS), now supports Amazon RDS for MariaDB databases running on major versions 10.3, 10.4, or 10.5. With Amazon RDS Proxy, customers can make applications more scalable, more resilient to database failures, and more secure.
Amazon RDS Proxy sits between your application and the database to pool and share established database connections, improving database efficiency and application scalability. In case of a failure, Amazon RDS Proxy automatically connects to a standby database instance while preserving connections from your application and reduces failover times for Amazon RDS for MariaDB multi-AZ databases by up to 66%. With Amazon RDS Proxy, database credentials and access can be managed through AWS Secrets Manager and AWS Identity and Access Management (IAM), eliminating the need to embed database credentials in the application.
You can enable Amazon RDS Proxy for your Amazon RDS for MariaDB database in just a few clicks on the Amazon RDS console and connect to the database by pointing your application to the Amazon RDS Proxy endpoint. Read our 10-minute tutorial, or view our documentation to get started.
Amazon RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon RDS for MariaDB, Amazon RDS for MySQL, and Amazon RDS for PostgreSQL.
Amazon GuardDuty Malware Protection is now available in Amazon GuardDuty to help detect malicious files residing on an instance or container workload running on Amazon Elastic Compute Cloud (Amazon EC2) without deploying security software or agents. Amazon GuardDuty Malware Protection adds file scanning for workloads utilizing Amazon Elastic Block Store (EBS) volumes to detect malware that can be used to compromise resources, modify access permissions, and exfiltrate data.
Malicious files that contain trojans, worms, crypto miners, rootkits, bots, and the like can be used to compromise workloads, repurpose resources for malicious use, and gain unauthorized access to data. Existing customers can enable the GuardDuty Malware Protection feature with a single click in the GuardDuty console or through the GuardDuty API. When threats are detected, GuardDuty Malware Protection automatically sends security findings to AWS Security Hub, Amazon EventBridge, and Amazon Detective. These integrations help centralize monitoring for AWS and partner services, automate responses to malware findings, and perform security investigations from the GuardDuty console. The launch of Amazon GuardDuty Malware Protection also introduces eight new threat detections.
The first 30 days of GuardDuty Malware Protection are available at no additional charge for existing GuardDuty accounts. For new accounts, GuardDuty Malware Protection is part of the 30-day Amazon GuardDuty free trial. During the trial period you can see the estimated cost of running the service after the trial period ends in the GuardDuty Management Console.
GuardDuty optimizes your costs by only scanning for malware after GuardDuty detects suspicious behavior associated with malware. GuardDuty Malware Protection is available in all AWS regions where GuardDuty is available, excluding the AWS GovCloud (US), AWS China (Beijing) region, operated by Sinnet, and AWS China (Ningxia) region, operated by NWCD. To receive programmatic updates on new Amazon GuardDuty features and threat detections, subscribe to the Amazon GuardDuty SNS topic.
AWS are pleased to announce a new capability in Amazon Macie that allows for one-click, temporary retrieval of up to 10 examples of sensitive data found in Amazon Simple Storage Service (Amazon S3) by Amazon Macie. This new capability enables you to more easily view and understand which contents of an S3 object were identified as sensitive, so you can review, validate, and quickly take action as needed. All sensitive data examples captured with this new capability are encrypted using customer-managed AWS Key Management Service (AWS KMS) keys and are temporarily viewable within the Amazon Macie console after being retrieved.
Previously, you could only see the locations of the sensitive data discovered by Amazon Macie. To review the sensitive data, you had to manually go back to the original dataset using the location information provided by Amazon Macie. This additional step in the workflow slowed down security investigations. Using the one-click temporary retrieval of sensitive data capability, you can now more quickly confirm and act on Macie findings as needed. You can enable this new capability in the AWS Management Console or with a single API call.
Getting started with Amazon Macie is fast and easy, with one click in the AWS Management Console or a single API call. In addition, Macie has multi-account support using AWS Organizations, which makes it easier for you to enable Macie across all of your AWS accounts.
Once enabled, Macie automatically gathers a complete S3 inventory at the bucket level and automatically and continually evaluates every bucket to alert if buckets are publicly accessible, unencrypted, or shared or replicated with AWS accounts outside of a customer’s organization. Then, Macie applies machine learning and pattern matching techniques to the buckets you select to identify and alert you to sensitive data, such as names, addresses, credit card numbers, or credential materials. Identifying sensitive data in S3 can help you comply with regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) and General Data Privacy Regulation (GDPR).
Amazon Macie comes with a 30-day free trial for S3 bucket level inventory and evaluation of access controls and encryption. Sensitive data discovery is free for the first 1 GB per account per region each month with additional scanning charged according to the Amazon Macie pricing plan. Amazon Macie also provides estimated costs per sensitive data discovery job in the console before you submit the job for processing.
AWS Single Sign-On (AWS SSO) is now AWS IAM Identity Center. It is where you create, or connect, your workforce users once and centrally manage their access to multiple AWS accounts and applications. You can create user identities directly in IAM Identity Center, or you can connect your existing identity source, including Microsoft Active Directory and standards-based identity providers, such as Okta Universal Directory or Azure AD.
You can choose to manage access just to AWS accounts, just to cloud applications, or to both. Your users can utilize their existing credentials for one-click access to their assigned AWS accounts, AWS applications, like Amazon SageMaker Studio, and other standards-based cloud applications, like Salesforce, Box, and Microsoft 365.
For current AWS SSO customers, there is no change to how you centrally manage access to multiple AWS accounts or applications. The name change reflects the service capabilities, foundation in AWS Identity and Access Management (IAM), and role as the central place to manage access across AWS. For customers who are new to IAM Identity Center, it is the recommended front door into AWS for your workforce. If you already use IAM, you can configure IAM Identity Center to run alongside it and gradually shift to the centralized sign-in and access management capabilities of IAM Identity Center.
IAM Identity Center builds on the per-account access management capabilities of IAM and the multi-account governance capabilities of AWS Organizations. This foundation enables IAM Identity Center to manage workforce sign-in and fine-grained access to all accounts in an AWS Organization, as well as the flexibility to be administered safely from a member account in the AWS Organization.
AWS Transfer Family now supports the Applicability Statement 2 (AS2) protocol, complementing existing protocol support for SFTP, FTPS, and FTP. Customers across verticals such as healthcare and life sciences, retail, financial services, and insurance that rely on AS2 for exchanging business-critical data can now use AWS Transfer Family’s highly available, scalable, and globally available AS2 endpoints to more cost effectively and securely exchange transactional data with their trading partners. Exchanged data is natively accessible in AWS for processing, analysis, and machine learning, as well as for integrations with business applications running on AWS.
You can configure AS2 endpoints by setting up your own profile as well as your trading partners’ using industry standard AS2 Identifiers and importing certificates and keys. Next, pair up your profile with your trading partners and specify the storage location for inbound and outbound messages. Once messages and Message Disposition Notifications (MDN) are successfully delivered, AS2 specific metadata is extracted from the message and MDNs and stored separately so it can be used for indexing and searching, reducing time to troubleshoot or identify operational issues.
AWS Marketplace Vendor Insights helps streamline the complex third-party software risk assessment process by enabling sellers to make security and compliance information available through AWS Marketplace. A unified web-based dashboard gives governance, risk, and compliance (GRC) teams access to security and compliance information, such as data privacy and residency, application security, and access control. The dashboard also provides evidence backed by AWS Config and AWS Audit Manager assessments, external audit reports (such as ISO 27001 and SOC2 Type 2), and software vendor self-assessments. Vendor Insights serves buyers who need help to efficiently validate that third-party software meets their business compliance needs. Vendor Insights also serves sellers who want to showcase their strong security posture, while reducing the operational burden from responding to buyer requests for risk assessment information.
Using Vendor Insights can help buyers reduce assessment lead time to a few hours by allowing buyers to access the vendor’s validated security profile, saving months of effort from questionnaires and back-and-forth with vendors. Using Vendor Insights notifications also helps buyers remove the need for periodic reassessments. Vendor Insights provides ongoing visibility and alerts about the vendor’s security hygiene, such as if a compliance certification expires.
AWS Wickr is an end-to-end encrypted enterprise communication service that allows secure collaboration across messaging, voice and video calling, file sharing, and screen sharing. The service is now in preview. AWS Wickr helps organizations address evolving threats and regulations by combining security and administrative features designed to safeguard sensitive communications, enforce information governance policies, and retain information as required. Encryption takes place locally, on the endpoint. Every call, message, and file is encrypted with a new random key, and no one but intended recipients—not even AWS—can decrypt them.
Information can be selectively logged to a secure, customer-controlled data store for compliance and auditing purposes. Users have full administrative control over data, which includes setting permissions, configuring ephemeral messaging options, and defining security groups. AWS Wickr integrates with additional services such as Active Directory (AD), single sign-on (SSO) with OpenID Connect (OIDC), and more. You can quickly create and manage AWS Wickr networks through the AWS Management Console. AWS Wickr also allows you to securely automate your workflows using Wickr Bots.
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports R5b database (DB) instances. R5b DB instances support up to 3x the I/O operations per second (IOPS) and 3x the bandwidth on Amazon Elastic Block Store (Amazon EBS) compared to the x86-based memory-optimized R5 DB instances. R5b DB instances are a great choice for IO-intensive DB workloads.
You can launch new R5b DB instances with a single click in the Amazon RDS Management Console or via a single command in the AWS Command Line Interface (AWS CLI). If you want to upgrade your existing Amazon RDS DB instance to R5b, you can do so on the Modify DB Instance page in the Amazon RDS Management Console or via the AWS CLI. Amazon RDS R5b DB instances are supported on MariaDB versions 10.3 and higher.
This week, AWS are making it easier for customers to view and update the primary contact information on their AWS accounts using the AWS Command Line Interface (CLI) and AWS SDK. AWS previously released the Accounts SDK, which enables customers to programmatically manage billing, operations, and security contacts for their accounts. Starting this week, customers can use the same SDK to also update their primary contact information, saving them the time and effort of doing it through the management console.
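A primary contact update via the Account Management API might be shaped roughly as follows. The field names reflect our reading of the PutContactInformation operation and the contact values are placeholders — check the SDK documentation before relying on them:

```python
# Request shape for the Account Management PutContactInformation API.
# With boto3 this would be invoked as:
#   boto3.client("account").put_contact_information(**contact_request)
contact_request = {
    "ContactInformation": {
        "FullName": "Jane Example",           # placeholder values throughout
        "AddressLine1": "100 Example Street",
        "City": "Seattle",
        "PostalCode": "98101",
        "CountryCode": "US",
        "PhoneNumber": "+12065550100",
    }
    # Organization administrators can additionally pass an "AccountId"
    # to target a member account from the management account.
}
print(contact_request["ContactInformation"]["CountryCode"])  # → US
```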
Additionally, for customers using AWS Organizations, Organization administrators can now centrally manage primary contact information for all member accounts using the management account.
The ability to update primary contact information on AWS accounts is available at no additional charge in all commercial AWS Regions and Amazon Web Services China (Beijing) Region, operated by Sinnet and Amazon Web Services China (Ningxia) Region, operated by NWCD.
Amazon DocumentDB (with MongoDB compatibility) now allows you to create clones to enable fast creation of a new cluster that uses the same DocumentDB cluster volume and has the same data as the original.
Database cloning is faster than restoring a snapshot and requires no additional space at the time of creation. With this launch you can, for example, create a cloned DocumentDB cluster from a provisioned DocumentDB cluster to get quick access to production data for development and testing. You can use the clone to verify database changes, test different parameters, and run analytic queries on production data without providing direct access to the production database or impacting its performance. You only pay for additional storage if you make data changes in the cloned DB cluster.
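Cloning is driven through a copy-on-write point-in-time restore. The sketch below shows an assumed parameter shape for that call — identifiers are placeholders, and the exact parameters should be verified against the Amazon DocumentDB API reference:

```python
# Parameter shape for cloning a DocumentDB cluster via the
# RestoreDBClusterToPointInTime API with copy-on-write. With boto3:
#   boto3.client("docdb").restore_db_cluster_to_point_in_time(**clone_request)
clone_request = {
    "DBClusterIdentifier": "prod-clone-for-testing",   # the new clone
    "SourceDBClusterIdentifier": "prod-cluster",       # the cluster being cloned
    "RestoreType": "copy-on-write",                    # clone, not a full copy
    "UseLatestRestorableTime": True,                   # clone the current volume state
}
print(clone_request["RestoreType"])  # → copy-on-write
```

Because the clone shares the source cluster volume, storage is only consumed for data you subsequently change in the clone, matching the pricing note above.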
AWS Fault Injection Simulator (FIS) now supports ChaosMesh and Litmus experiments for containerized applications running on Amazon Elastic Kubernetes Service (EKS). Using the new Kubernetes custom resource action for AWS FIS, you can control ChaosMesh and Litmus chaos experiments from within an AWS FIS experiment, enabling you to coordinate fault injection workflows among multiple tools. For example, you can run a stress test on a pod’s CPU using ChaosMesh or Litmus faults while terminating a randomly selected percentage of cluster nodes using AWS FIS fault actions.
To get started running ChaosMesh and Litmus chaos experiments from AWS FIS, simply log in to AWS FIS in the AWS Management Console and start creating a new experiment template. Next, select the aws:eks:inject-kubernetes-custom-resource action type and provide the Kubernetes parameters such as API version, namespace, and custom resource details. Then, specify the EKS cluster you want to target. Because AWS FIS is a fully managed service for running fault injection experiments on AWS, you can also apply IAM permissions to manage which users and roles can start experiments, set Amazon CloudWatch alarms to stop an experiment automatically if a predefined threshold is met, and write experiment outputs to Amazon CloudWatch Logs or S3 buckets.
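The action definition inside an FIS experiment template might look like the sketch below. The parameter names follow the console fields described above and the Chaos Mesh CRD values are assumptions for illustration — confirm both against the AWS FIS and Chaos Mesh documentation:

```python
import json

# Illustrative FIS experiment-template action targeting an EKS cluster with
# the new Kubernetes custom resource action (parameter names assumed).
action = {
    "actionId": "aws:eks:inject-kubernetes-custom-resource",
    "parameters": {
        "kubernetesApiVersion": "chaos-mesh.org/v1alpha1",  # assumed Chaos Mesh CRD
        "kubernetesKind": "StressChaos",
        "kubernetesNamespace": "chaos-mesh",
        # The custom resource spec is passed as a JSON string:
        "kubernetesSpec": json.dumps({"stressors": {"cpu": {"workers": 2}}}),
        "maxDuration": "PT5M",
    },
    "targets": {"Cluster": "my-eks-cluster-target"},  # placeholder target name
}
print(action["actionId"])  # → aws:eks:inject-kubernetes-custom-resource
```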
Anthos Clusters on VMware
Anthos clusters on VMware 1.11.2-gke.53 is now available. To upgrade, see Upgrading Anthos clusters on VMware. Anthos clusters on VMware 1.11.2-gke.53 runs on Kubernetes 1.22.8-gke.204.
The supported versions offering the latest patches and updates for security vulnerabilities, exposures, and issues impacting Anthos clusters on VMware are 1.12, 1.11, and 1.10.
Anthos Service Mesh
Version 1.14 is now available for managed Anthos Service Mesh and is rolling out to the Rapid Release Channel.
The managed Anthos Service Mesh channels are now mapped to the following versions:
Apigee API Hub
On July 27, 2022, Apigee API hub released a new version of the software.
Inverse trigonometric SQL functions are now generally available (GA). These functions include:
The new Migrate section in the BigQuery documentation helps you migrate to BigQuery. This includes high-level guidance with a migration overview, an introduction to free-to-use tools that help you with each phase of migration, and platform-specific migration guides.
Cloud Load Balancing
Cloud Load Balancing introduces the internal regional TCP proxy load balancer. This is an Envoy proxy-based regional layer 4 load balancer that enables you to run and scale your TCP service traffic behind an internal regional IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
The internal regional TCP proxy load balancer distributes TCP traffic to backends hosted on Google Cloud, on-premises, or other cloud environments.
You can now collect Couchbase logs and metrics from the Ops Agent, starting with version 2.18.2. For more information, see Monitoring third-party applications: Couchbase.
You can now add user-defined labels to public and private Uptime checks. For more information, see Create public uptime checks.
You can now collect Aerospike metrics from the Ops Agent, starting with version 2.18.2. For more information, see Monitoring third-party applications: Aerospike.
You can now collect Vault metrics from the Ops Agent, starting with version 2.18.2. For more information, see Monitoring third-party applications: Vault.
Query Optimizer version 5 is generally available. Version 4 remains the default optimizer version in production.
Cloud SQL for PostgreSQL
The following PostgreSQL minor versions and extension versions are now available:
If you use maintenance windows, then you might not yet have these versions. In this case, you'll see the new versions after your maintenance update occurs. To find your maintenance window or to manage maintenance updates, see Find and set maintenance windows.
Added information about checking the LC_COLLATE value for your databases before performing a major version upgrade of the databases for your Cloud SQL for PostgreSQL instance. For more information, refer to the Cloud SQL documentation.
Preview: You can now merge or split your existing hardware resource commitments to create new upsized or downsized commitments. For more information, see Merge and split commitments.
Generally available: Use the Cloud console, the gcloud tool, or the API to configure a VM to shut down when a Cloud KMS key is revoked. For more information, see Configure VM shutdown on Cloud KMS key revocation.
Generally available: When you create VMs in bulk, you can now use the following new values with the
ANY: Use this value to place VMs in zones to maximize unused zonal reservations.
BALANCED: Use this value to place VMs uniformly across zones.
Config Connector version 1.90.0 is now available.
Fixed an issue where the spec.layer7DdosDefenseConfig field in ComputeSecurityPolicy was not being reflected onto the underlying resource.
Added support for the state-into-spec: absent annotation and for the following new fields: spec.iap.oauth2ClientIdRef, spec.egressPolicies.egressTo.externalResources, spec.externalDataConfiguration.connectionId, spec.includeBuildLogs, spec.cacheKeyPolicy.cdnPolicy.includeNamedCookies, spec.internalIpv6Range, spec.maxPortsPerVm, spec.advancedOptionsConfig, spec.sslPolicyRef, spec.monitoringConfig.managedPrometheus, spec.sqlServerUserDetails, and spec.schemaSettings.
Added the status.pscConnectionStatus and status.managedZoneId status fields.
Added support for the "reconcile resource immediately once its dependency is ready" feature.
The UI for dataset entry detail pages now includes a section that lets you see what entries are included in that dataset. Look for the new Entry list section when browsing dataset entries in Data Catalog.
Eventarc is available in the following regions:
us-east5 (Columbus, Ohio, North America)
us-south1 (Dallas, Texas, North America)
Google Cloud Deploy
You can now have Google Cloud Deploy generate a skaffold.yaml configuration file for you when you create a release, based on a single Kubernetes manifest which you provide. This configuration file is suitable for learning and onboarding.
You can now view and compare Kubernetes and Skaffold configuration files for releases, using the Google Cloud Console.
Google Cloud VMware Engine
Resource creation of named objects now enforces naming requirements that match other Google Cloud products like Compute Engine. New resources must use names that are 1-63 characters long, comply with RFC 1035, and consist of lowercase letters, digits, and hyphens. For example, "privatecloud-123".
GKE node system configuration now supports setting the cgroup mode to use the cgroupv2 resource management subsystem.
Microsoft Azure Releases And Updates
The 22.07 release includes reduced connect-to-cloud time for the OS, resulting in lower energy use; best practices guidance for production-ready applications; and optimized manufacturing scripts.
Reduce job failures by using elastic pool storage.
DCsv3 and DCdsv3-series virtual machines (VMs) now support the trusted launch feature.
You now have the option to take the "phone as a device" guided tour when creating a new application in Azure IoT Central.
VM Apps is a service that simplifies management, sharing, and global distribution of application packages at scale.
You can now configure your Azure Stream Analytics job to write to a SQL table that hasn't yet been created or see schema mismatch detection for an existing SQL table.
Using direct shared gallery, a feature of Azure Compute Gallery, you can now share VM images directly with other subscriptions and tenants.
Azure Stream Analytics is a fully managed, real-time analytics service designed to help you analyze and process fast moving streams of data.
All Azure Windows VMs provisioned in Azure Global Cloud from July 2022 are activated via azkms.core.windows.net, which points to two new KMS IP addresses: 184.108.40.206 and 220.127.116.11.
Append organizational metadata to your technical assets by creating and applying managed attributes in the Microsoft Purview data catalog.
Add rich text formatting to asset and term descriptions in the Microsoft Purview data catalog.
Cost Details API is now generally available for use by EA and MCA customers.
Have you tried Hava's automated diagrams for AWS, Azure, GCP and Kubernetes? Get back your precious time and sanity, and rid yourself of manual drag-and-drop diagram builders forever.
Hava automatically generates accurate, fully interactive cloud infrastructure and security diagrams when connected to your AWS, Azure or GCP accounts, or standalone K8s clusters. Once diagrams are created, they are kept up to date, hands free.
When changes are detected, new diagrams are auto-generated and the superseded documentation is moved to a version history. Older diagrams are also interactive, so they can be opened and individual resources inspected, just like the live diagrams.
Check out the 14 day free trial here: