AWS is an extremely powerful tool to have in your arsenal. If you’re a developer, you’re likely already well aware. You’re also aware that AWS can be difficult to manage. We often tell our AWS development partners that the infrastructure available to you is a rocket ship 🚀. Before it can take you to the stars and beyond, though, the ship’s many components need to be assembled. There’s plenty of power and possibility, but bringing it to fruition can be complicated.
When launching into a new venture, it’s best to follow best practices to ensure your overall success. With that in mind, we want to present Connectria’s AWS management best practices for developers. Connectria’s full 24/7 onshore NOC and SOC are some of our most loved resources. Along with that, we also have a dedicated team of AWS architects, engineers, and security and compliance experts. Together, our teams are committed to helping you get the most out of your cloud infrastructure with AWS managed services.
Traditionally, IT infrastructure requires manual reaction to a variety of events. However, with AWS you have the ability to improve your system’s stability and efficiency through automation.
There are many methods to ensure the resilience, scalability, and performance of your environment. We suggest you consider introducing one or more of the following types of automation into your application architecture:
1. Serverless Management & Deployment
By adopting serverless patterns, you shift the operational focus to automating the deployment pipeline. While AWS manages the underlying services, scaling, and availability, you can use AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy to automate building, testing, and deploying your code.
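As a concrete illustration, a minimal CodeBuild buildspec defines the build and test commands the pipeline runs on each push. The runtime version, dependency file, and test command here are assumptions for a hypothetical Python application:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: "3.11"
  build:
    commands:
      # Install dependencies and run the test suite before packaging
      - pip install -r requirements.txt
      - pytest
artifacts:
  files:
    - '**/*'
```

CodePipeline would then pass the resulting artifact to CodeDeploy (or to a Lambda/serverless deployment action) as the final stage.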
2. Infrastructure Management & Deployment
Focusing on infrastructure automation can provide multiple benefits:
- Use Auto Scaling to ensure you have the desired number of healthy EC2 instances running across multiple Availability Zones. Auto Scaling can also maintain application availability and automatically scale your capacity up or down for services such as Amazon DynamoDB, Amazon ECS, Amazon EKS, and Amazon EC2, according to demand and the definitions you set.
- Upload your application code and let AWS Elastic Beanstalk automatically handle the details, including resource provisioning, load balancing, auto scaling, and monitoring.
- Simplify your operating model and ensure optimal environment configuration with AWS Systems Manager. You can automatically apply OS patches, execute arbitrary commands, and collect software inventory. You can also create a system image and configure Windows and Linux operating systems.
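The Auto Scaling point above can be sketched as a set of API parameters. This is a minimal sketch, not executed against AWS; the group name, launch template ID, and subnet IDs are placeholders:

```python
# Parameters for an Auto Scaling group that keeps a desired number of
# healthy EC2 instances running across two Availability Zones.
asg_params = {
    "AutoScalingGroupName": "web-asg",                    # placeholder name
    "LaunchTemplate": {
        "LaunchTemplateId": "lt-0123456789abcdef0",       # placeholder ID
        "Version": "$Latest",
    },
    "MinSize": 2,
    "MaxSize": 6,
    "DesiredCapacity": 2,
    # Two subnets in different Availability Zones (placeholders)
    "VPCZoneIdentifier": "subnet-aaaa1111,subnet-bbbb2222",
    # Replace instances that fail the load balancer's health check
    "HealthCheckType": "ELB",
    "HealthCheckGracePeriod": 300,
}

# In a real environment, with credentials configured:
# import boto3
# boto3.client("autoscaling").create_auto_scaling_group(**asg_params)
```

Scaling policies (target tracking on CPU, for example) would then adjust `DesiredCapacity` between `MinSize` and `MaxSize` in response to demand.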
3. Amazon CloudWatch
Need to automatically recover an impaired EC2 instance, or receive an Amazon Simple Notification Service (Amazon SNS) message? You can accomplish this by defining parameters for an Amazon CloudWatch alarm. That SNS message can also:
- Automatically perform a POST request to an HTTP or HTTPS endpoint
- Launch the execution of a subscribed Lambda function
- Enqueue a notification message to an Amazon SQS queue
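The alarm described above can be sketched as the parameters you would pass to CloudWatch. This is a minimal sketch, not executed against AWS; the alarm name, instance ID, threshold, and SNS topic ARN are placeholders:

```python
# A CloudWatch alarm that fires when average CPU stays above 90% for
# two consecutive 5-minute periods, then notifies an SNS topic.
alarm_params = {
    "AlarmName": "high-cpu",                              # placeholder name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                                        # seconds per datapoint
    "EvaluationPeriods": 2,                               # periods before alarming
    "Threshold": 90.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # The SNS topic (placeholder ARN) that fans out to HTTP endpoints,
    # Lambda functions, or SQS queues as listed above
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}

# In a real environment, with credentials configured:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```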
With Amazon CloudWatch Events, you can receive a near real-time stream of system events that describe changes in AWS resources. By setting simple rules, you can route each type of system event to one or more targets, such as Kinesis streams, SNS topics, and Lambda functions.
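A rule of this kind is defined by an event pattern. The sketch below matches EC2 state-change events; the rule name, states, and target function are assumptions, and the code only builds the pattern rather than calling AWS:

```python
# Event pattern for a CloudWatch Events rule matching EC2 instances
# that enter the "stopped" or "terminated" state.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped", "terminated"]},
}

# In a real environment, with credentials configured:
# import json, boto3
# events = boto3.client("events")
# events.put_rule(Name="ec2-state-change",
#                 EventPattern=json.dumps(event_pattern))
# events.put_targets(Rule="ec2-state-change",
#                    Targets=[{"Id": "1",
#                              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:notify"}])
```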
Utilizing Golden Images
Launching your resources via the traditional “bootstrap approach” results in slow start times. It can also leave you dependent on third-party repositories or configuration services. This can be cumbersome and frustrating, especially in auto-scaled environments. In these environments, you need to be able to quickly launch additional resources in near real-time in response to demand changes.
However, with certain types of AWS resources, you can speed up launch time by using a snapshot that captures the particular state of the resource, known as a “golden image.” These resource types include EC2 instances, Amazon RDS DB instances, and Amazon Elastic Block Store (Amazon EBS) volumes.
For example, you can customize an EC2 instance and save its configuration by creating an Amazon Machine Image (AMI), a.k.a. the golden image. You can then launch as many EC2 instances from the AMI as you need, and your customizations will be included in each instance.
When launching a new test environment, you can prepopulate its database by instantiating it from a specific Amazon RDS golden image (a DB snapshot) instead of having to import the data from a SQL script.
However, whenever you need to change the instance’s configuration, you have to create a new golden image, so you’ll want a versioning convention to manage your golden images properly over time.
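Creating a versioned golden image can be sketched as follows. This is a minimal sketch, not executed against AWS; the instance ID, image name, and version scheme are placeholders illustrating one possible convention:

```python
# Create a versioned AMI ("golden image") from a configured EC2 instance.
# Embedding a version in the name makes images easy to track over time.
version = "1.4.2"  # placeholder version following semantic versioning
image_params = {
    "InstanceId": "i-0123456789abcdef0",                  # placeholder ID
    "Name": f"web-golden-image-v{version}",
    "Description": f"Web tier golden image, version {version}",
    "NoReboot": False,  # reboot the instance for a consistent snapshot
}

# In a real environment, with credentials configured:
# import boto3
# boto3.client("ec2").create_image(**image_params)
```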
Docker & Containers
Another time-saver that is popular among developers is an open-source technology known as Docker. Docker allows you to build and deploy distributed applications inside software “containers” by packaging a piece of software within a Docker image.
A Docker image is a standardized unit for software development and contains everything that the software needs in order to run, including:
- Code
- Runtime
- System tools
- System libraries
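As an illustration, a minimal Dockerfile packages an application together with its runtime and system libraries into an image. The base image, dependency file, and entry point here are assumptions for a hypothetical Python application:

```dockerfile
# Base image supplies the runtime and system libraries
FROM python:3.11-slim
WORKDIR /app
# Install the application's library dependencies first to cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Add the application code itself
COPY . .
CMD ["python", "app.py"]
```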
Store your Docker images in Amazon Elastic Container Registry (Amazon ECR). To deploy and manage multiple containers across a cluster of EC2 instances, use one of the following:
- Amazon Elastic Container Service (Amazon ECS)
- AWS Elastic Beanstalk
- AWS Fargate
You can also easily deploy, manage, and scale containerized applications with Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS).
Arguably, the best-practice approach is to utilize a hybrid of the traditional bootstrapping approach and the golden image. In this scenario, some parts of your configuration are captured in a golden image, while others are configured dynamically.
Typically, items that introduce external dependencies or don’t often change would be a part of your golden image, while those that differ between your various environments or do change often would be set up dynamically through bootstrapping actions.
If you frequently release new versions of your application, it’s likely impractical to create a new AMI for each version. You also don’t want your database hostname hardcoded into your AMI, because it differs between your production and test environments.
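The hybrid approach can be sketched as launching from a golden AMI while injecting environment-specific settings at boot via user data. The AMI ID, hostnames, file path, and service name below are placeholders, and the code only builds the launch parameters rather than calling AWS:

```python
# Launch from a golden image, but set the database hostname dynamically
# so the same AMI serves both the production and test environments.
env = "test"  # placeholder; would be "production" in the other environment
db_hosts = {
    "production": "db.prod.internal",   # placeholder hostname
    "test": "db.test.internal",         # placeholder hostname
}

# Bootstrap script run at first boot (paths and service name are assumptions)
user_data = f"""#!/bin/bash
echo 'DB_HOST={db_hosts[env]}' >> /etc/app/environment
systemctl restart app
"""

run_params = {
    "ImageId": "ami-0123456789abcdef0",  # the golden image (placeholder ID)
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "UserData": user_data,
}

# In a real environment, with credentials configured:
# import boto3
# boto3.client("ec2").run_instances(**run_params)
```

Items that rarely change (OS packages, agents, the application runtime) live in the AMI; items that differ per environment (the hostname here) arrive through the bootstrap script.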
Service Discovery & Implementation
Applications deployed as a set of smaller services depend on those services being able to interact with one another. Because each service can run on multiple resources, you need a way to address each service.
To achieve this, you’ll need a way of implementing service discovery. Remember, service discovery is the glue between the components, so it must be highly reliable.
A simple way to achieve service discovery for an Amazon EC2 hosted service is through Elastic Load Balancing (ELB). Here, you can consume a service through a stable endpoint because each load balancer gets its own hostname. Combine this with DNS and private Amazon Route 53 zones and the particular load balancer’s endpoint can be abstracted and modified at any time.
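This DNS abstraction can be sketched as a Route 53 record that gives the service a stable name pointing at its load balancer. The zone ID, record name, and ELB hostname are placeholders, and the code only builds the change batch rather than calling AWS:

```python
# A DNS record giving the "orders" service a stable name. Swapping the
# load balancer behind it later only requires updating this one record.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "orders.internal.example.com",    # placeholder name
                "Type": "CNAME",
                "TTL": 60,  # short TTL so endpoint changes propagate quickly
                "ResourceRecords": [
                    # Placeholder ELB hostname consumed by client services
                    {"Value": "internal-orders-elb-1234.us-east-1.elb.amazonaws.com"}
                ],
            },
        }
    ]
}

# In a real environment, with credentials configured:
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0PLACEHOLDER",  # placeholder private hosted zone ID
#     ChangeBatch=change_batch)
```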
Another way to achieve service discovery is to allow retrieval of the endpoint IP addresses and port numbers of any service through a service registration and discovery method. Auto naming, available with Amazon Route 53, makes it easier to provision instances for microservices and automatically creates DNS records based on your configuration.
Sometimes, redundant manual tasks seem to exist only to torment us. However, redundancy in your AWS management can be one of the most important best practices you implement. By eliminating single points of failure through multiple resources performing the same task, redundancy can reduce downtime due to resource failures.
To utilize redundancies to your greatest advantage, you can implement them in either active or standby mode.
With active redundancy, requests are distributed across multiple redundant compute resources; if one of them fails, the rest absorb a larger share of the workload.
Standby redundancy is often used for stateful components such as relational databases. When a resource fails, functionality is recovered through a failover process on a secondary resource. The failover generally takes time to complete, however, and the resource remains unavailable throughout this period.
Overall, when a resource fails, active redundancy achieves better utilization and affects a smaller portion of users than standby redundancy.
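The contrast between the two modes can be illustrated with a toy calculation. The load figures and failover window below are made-up numbers, not measurements:

```python
# Toy illustration of the two redundancy modes. With active redundancy,
# surviving nodes absorb the failed node's share immediately; with standby
# redundancy, the service waits out a failover window first.

def active_share(total_load, nodes, failed):
    """Per-node load before and after `failed` nodes drop out."""
    return total_load / nodes, total_load / (nodes - failed)

# 900 requests/s across 3 nodes; one node fails.
before, after = active_share(total_load=900, nodes=3, failed=1)
print(before, after)  # each survivor absorbs a larger share: 300.0 -> 450.0

def standby_downtime(failover_seconds):
    """With standby redundancy, the resource is unavailable during failover."""
    return failover_seconds

# A hypothetical 90-second database failover: all requests wait or fail
# during this window, rather than a subset seeing higher load.
print(standby_downtime(90))
```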
When You Need More Than Tips
AWS can be challenging to implement and administer, especially when skilled resources are hard to find. Connectria is an advanced consulting partner and audited managed service provider for AWS. Our broad suite of managed services helps optimize your AWS environment so that you get the most out of your investment.
Contact us below to connect with a Connectria Solutions Architect. They can provide more information on how our managed AWS services can support you and your clients.