Whether you are building a new application or migrating legacy on-premises workloads to AWS, it's good to know what compute services are available and which ones might be a good fit for your specific application or workload.
At the heart of AWS you’ll have a choice of three main compute categories: compute instances such as EC2, containers via ECS or EKS, and serverless Lambda functions.
Each compute category serves a specific function and, with configurable options, can handle a variety of application workloads within the AWS cloud. Each offering comes with a different level of infrastructure interaction, ranging from full control of the underlying machines and operating systems through to completely hands-free, fully managed infrastructure where you never see the instances your application is running on.
Compute - EC2 Instances
Compute instances are deployed with specified CPU and memory sizes and, unlike traditional on-premises servers, are provisioned from pooled data center hardware, so capacity is not tied to a single physical machine you own and maintain.
Amazon Elastic Compute Cloud (EC2) provides virtual machines capable of performing the same tasks as an on-premises server. You can select an EC2 instance with differing amounts of CPU, memory, GPU and network bandwidth. The more resources an instance has, the more it will cost to run per month; however, this gives you the ability to match instance capabilities to workloads.
The elastic nature of EC2 means you can dynamically scale compute power when needed and then scale back in when the extra power is no longer required.
EC2 instances are essentially virtual computers in the cloud that emulate physical hardware. You choose the CPU, memory, operating system and storage, and then manage all the patching and security. While AWS maintains the underlying physical hardware and network infrastructure, the virtual machine instance can only be accessed by you. You are only charged while the instance is in a ‘running’ state; if an EC2 instance is stopped there is no compute charge, although any attached storage still accrues costs.
Containers - ECS and EKS
Container services allow the deployment of pre-packaged applications along with all their dependent resources, like databases, configuration files and the like. Containers run on a host operating system and use the host OS kernel to run isolated workloads. Each container has its own namespaces, root file system and user-space libraries, and can run on any underlying environment that supports containers.
Containers are designed to quickly and reliably deploy applications or micro services irrespective of the environment hosting them.
Amazon ECS (Elastic Container Service) allows you to run containerized workloads on clusters of Amazon EC2 instances. Using ECS you don’t need to install or run any separate cluster management software.
Amazon EKS (Elastic Kubernetes Service) is another AWS managed container option. Kubernetes is an open-source system for automating the deployment and scaling of containerized applications, and anything that currently runs on Kubernetes will run on EKS.
ECS and EKS container images contain the code and the environment for your application, including system libraries, configuration, settings and any other dependencies required for your application to run. Because everything is packaged into the container, you can move the application from one host to another without affecting its ability to run correctly. Nothing the application requires lives on the host; it is all wrapped up inside the container.
Containers can be used to run legacy applications in the cloud without having to change any code.
An ECS container can also run on AWS Fargate, making it serverless.
Serverless - AWS Lambda
Serverless is the category that allows you to build and run applications and workloads without needing to configure servers or compute instances.
AWS Lambda is the service that allows you to run code without provisioning or managing supporting server resources. You simply upload your code as a function and call the function when required. AWS takes care of the backend processing without you needing to provision anything.
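As a minimal sketch of the model, a Python Lambda function is just a handler that receives the triggering event and returns a result; the handler name and event keys below are illustrative, not fixed by AWS:

```python
# A minimal AWS Lambda handler. Lambda calls this function with the
# triggering event (a dict) and a runtime context object; everything
# else (servers, scaling, patching) is handled by AWS.
def lambda_handler(event, context):
    # Read an input value from the event; "name" is an illustrative key.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Once uploaded, the function can be invoked on demand from the console, the CLI or another AWS service; the execution environment is created and torn down around each call.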
Technically, serverless doesn’t mean there are no servers. They obviously exist, but you never need to see or know anything about them; they are managed and maintained by AWS and used only when your code is executed.
Lambda, for instance, only generates a charge when code is running; while a function sits unused, there is no charge. So instead of provisioning a server or EC2 instance for a batch processing task (like an end-of-month accounting procedure) that sits unused for 29 days a month, the code could be deployed as a Lambda function and executed once a month, provided the task completes in under 15 minutes.
In terms of AWS, this is what is referred to as Serverless.
Lambda functions can be triggered in response to events within the AWS ecosystem or manually triggered from calls within mobile or web applications. For instance, if Amazon S3 detects a new file being written into an S3 bucket that contains candidate resumes, a Lambda function could be triggered to update a candidate database and notify HR via email.
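A hedged sketch of such an S3-triggered function: the `Records` layout below follows S3's event notification format, while the database update and HR email are left as hypothetical helper calls, since the article doesn't prescribe them:

```python
# Sketch of a Lambda handler fired by an S3 "ObjectCreated" event.
# The Records structure mirrors the S3 event notification format;
# update_candidate_db and notify_hr are hypothetical helpers.
def handle_resume_upload(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # update_candidate_db(bucket, key)  # hypothetical helper
        # notify_hr(key)                    # hypothetical helper
        processed.append((bucket, key))
    return processed
```

Wiring this up is a matter of adding an S3 event notification on the bucket that points at the function; no polling code is needed.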
As a bonus, when you use serverless for workloads, you have no operational overhead maintaining cloud infrastructure, so you can concentrate on the development and deployment of business applications.
AWS ECS containers can be deployed on AWS Fargate, which removes the need to provision servers or clusters of EC2 instances. Fargate will run and scale containerized ECS workloads automatically.
EC2 Compute vs Containers vs Serverless
The choice of category in which to build and run an application will very much depend on the needs of your application. Each category has its own strengths and weaknesses.
EC2 provides granular control of your application infrastructure. There are over 500 instance types, providing everything from basic processors to the latest cutting-edge high performance CPUs, as well as a vast array of storage, memory and networking options.
You have the control to choose the instance type to suit your application workload.
An EC2 instance is a virtualized server. Unlike physical on-premises servers, an EC2 instance can be provisioned and deployed in minutes. You can spin up an instance, deploy and test your application, and then delete it again when you are finished. This means you can benchmark different instance types and configurations to find the right cost/performance balance.
You can also scale instance resources as your workload demand changes without impacting your application.
With EC2 you can build a new server and get started quickly, and you can scale that server’s capacity up and down as needed on a platform that provides 99.99% uptime. Preconfigured EC2 instance types are available, pre-optimized for compute, storage, memory or general purpose computing.
Instances can also be purchased to match the level of availability and pricing you require: reserved instances dedicated to your application at a discount for a longer-term commitment, on-demand instances billed only while running, or cheaper spot instances that make use of unused AWS server capacity at a deep discount (with the risk of interruption).
Because the EC2 instance is fully managed by you, you have root-level access and need to manage things like patching the operating system, just as you would a physical server on your own premises.
EC2 is a good choice for applications that run all the time.
Portability is one of the main advantages of containers: your application code, configuration and dependencies are packaged into a single object that can be deployed on any supporting platform or operating system.
Containerized workloads tend to be lightweight and can be rapidly deployed on multiple cloud or self-hosted platforms, which lends itself to rapid development cycles.
This allows you to easily move applications through different environments, such as development, testing, staging and finally production, because all the required resources are controlled inside the container and don't rely on the underlying infrastructure.
Containers start without the latency of spinning up a new EC2 instance or cold-starting a Lambda function, which makes containers a good choice for microservices or applications that are broken up into smaller functions.
In terms of scalability, containers have no size constraints, so they can be as small or large as required, and they have no time-out limits and start almost instantly when called.
If you build serverless applications, you are completely removed from the underlying infrastructure. There are no instances or operating systems to manage or maintain as this is all handled by AWS Lambda, so you can focus on the application code and nothing else.
Serverless can assist with rapid development, in that there is no requirement to build underlying infrastructure, and it is also a good choice for event-driven applications that need fast response times. Lambda functions can fire as a result of other AWS service events or be called from an application in response to detected state changes.
Because there is no provisioned infrastructure, there is no charge while the code is not running. You only pay for the time the code is running, and the Lambda free tier includes one million free requests and 400,000 GB-seconds of compute time per month.
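To make the GB-seconds unit concrete, here is an illustrative calculation; the memory size, duration and invocation count are made-up examples, not figures from the article:

```python
# GB-seconds = memory allocated (in GB) x execution time (in seconds),
# summed over all invocations in the month.
def gb_seconds(memory_mb: int, duration_s: float, invocations: int) -> float:
    return (memory_mb / 1024) * duration_s * invocations

# A function allocated 512 MB that runs for 2 seconds, invoked
# 100,000 times in a month, consumes 100,000 GB-seconds: a quarter
# of the 400,000 GB-second monthly free tier.
monthly_usage = gb_seconds(512, 2.0, 100_000)
```

The same arithmetic makes it easy to see why short, infrequent tasks are so cheap on Lambda: both billing dimensions (requests and GB-seconds) stay near zero while nothing is running.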
Serverless is great for microservices or short-lived tasks that can complete in under 15 minutes. Lambda functions have a hard 15-minute execution limit, so applications that need longer must be broken up or run on another compute service.
Lambda will automatically scale the underlying infrastructure when your application needs more resources, such as during peak traffic times, and scale back in as demand decreases. Functions run across multiple availability zones so they keep running in the event of an availability zone outage, and Lambda will launch as many copies of a function as required to meet incoming requests.
Which Compute Category to Choose?
Which of the three compute service categories to choose will really depend on your current situation; select the services that will produce the quickest results and deliver business outcomes from where you are now.
If you are a start-up building from scratch, a serverless approach might be the best way to go, as you can develop modular functions to handle each new task or feature and deploy quickly without worrying about underlying infrastructure.
If you have a monolithic legacy application that you need to move to the cloud without having to rewrite large sections of complex code, then one of the container methodologies might be the best route.
If you have compute intensive applications that frequently require more than 15 minutes to execute tasks, then selecting an appropriate EC2 instance and manually configuring a virtual network with your choice of operating system may be the path to take.
Of course, there’s nothing stopping you from building applications that use a combination of all three approaches.
Whichever route you take, you can visualize your built network using Hava. Connect your AWS account and Hava will scan and diagram your network and produce interactive diagrams showing all the resources or container workloads that are running.
Once drawn, your diagrams are automatically updated when changes are detected so you always have up to date diagrams on hand. Superseded diagrams are retained in version history for comparison and audit purposes.
You can take a free trial of Hava using the button below.