Blog March 7, 2019

Built to Last | Best Practices in AWS Management for Developers

If you’re a developer building solutions that will be deployed on Amazon Web Services (AWS), you’re likely already well aware that AWS is an extremely powerful tool to have in your arsenal. You’ve probably also learned that AWS can be a bit difficult to manage. To borrow a metaphor, we often tell our AWS development partners that the infrastructure available to you is a rocketship that can take you to the stars and beyond…but first the rocketship shows up on your lawn in 1,000 different pieces with no instruction manual for assembly. Yes, there’s plenty of power and possibility, but actually getting all that you can out of those tools can be complicated.

Whenever you dive into a new venture, and especially as a developer, following established tips and best practices is essential to your success. With that in mind, we wanted to present Connectria’s best practices for AWS management.

Automation

Unlike traditional IT infrastructure, where you have to react manually to a variety of events, AWS often gives you the ability to improve both your system’s stability and your organization’s efficiency through automation.

To improve resilience, scalability, and performance, consider introducing one or more of the following types of automation into your application architecture.

  1. Serverless Management & Deployment — By adopting serverless patterns, you shift the operational focus to the automation of the deployment pipeline. While AWS manages the underlying services, scale, and availability, you can utilize AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy to support the automation of the deployment of these processes.
  2. Infrastructure Management & Deployment — Focusing on infrastructure automation can provide multiple benefits:
    1. Use Auto Scaling to maintain the desired number of healthy EC2 instances across multiple Availability Zones, and to automatically scale capacity up or down for Amazon DynamoDB, Amazon ECS, Amazon EKS, and Amazon EC2 based on demand and the policies you define, preserving application availability.
    2. Upload your application code to AWS Elastic Beanstalk, and it automatically handles the details: resource provisioning, load balancing, auto scaling, and monitoring.
    3. Simplify your operating model and keep your environments correctly configured with AWS Systems Manager, which can automatically apply OS patches, run commands, collect software inventory, and create system images for Windows and Linux operating systems.
  3. Amazon CloudWatch — With an Amazon CloudWatch alarm, you can automatically recover an impaired EC2 instance or receive an Amazon Simple Notification Service (Amazon SNS) message whenever your defined parameters are crossed. That SNS message can also automatically perform a POST request to an HTTP or HTTPS endpoint, trigger a subscribed Lambda function, or enqueue a notification message to an Amazon SQS queue. With Amazon CloudWatch Events, you can receive a near real-time stream of system events describing changes within AWS resources, and by setting simple rules you can route each type of event to one or more targets, such as Kinesis streams, SNS topics, and Lambda functions.
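As a sketch of the alarm-driven automation described above, the CloudFormation fragment below defines a CloudWatch alarm that automatically recovers an EC2 instance when system status checks fail and notifies an SNS topic. The `WebServerInstance` and `OpsNotificationTopic` resources are assumed to be defined elsewhere in the template:

```yaml
Resources:
  RecoveryAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Recover the instance when system status checks fail
      Namespace: AWS/EC2
      MetricName: StatusCheckFailed_System
      Dimensions:
        - Name: InstanceId
          Value: !Ref WebServerInstance        # assumed EC2 instance resource
      Statistic: Maximum
      Period: 60
      EvaluationPeriods: 5
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
      AlarmActions:
        - !Sub arn:aws:automate:${AWS::Region}:ec2:recover   # built-in EC2 recover action
        - !Ref OpsNotificationTopic                          # assumed SNS topic
```

Once the alarm fires, no human needs to be paged just to restart a wedged instance; the recover action moves the instance to healthy hardware automatically.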

Utilizing Golden Images

Launching your resources via the traditional “bootstrap approach” results in slow start times and leaves you dependent on third-party repositories or configuration services. This can be cumbersome and frustrating, especially in auto-scaled environments where you need to launch additional resources quickly in response to demand changes.

However, with certain types of AWS resources such as Amazon RDS DB instances, Amazon Elastic Block Store (Amazon EBS) volumes, and EC2 instances, you can speed up your launch time by using a snapshot of a particular state of that resource, known as a “golden image.”

For example, you can customize an EC2 instance and save the configuration by creating an Amazon Machine Image (AMI), aka the golden image. You can then launch as many EC2 instances from the AMI as you need, and your customizations will be included in each instance.

When launching a new test environment, you can prepopulate its database by instantiating the Amazon RDS DB instance from a golden snapshot instead of having to import the data from a SQL script.

However, whenever you need to change the configuration of the instance, you have to create a new golden image, so you’ll want to adopt a versioning convention in order to properly manage your images over time.
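A simple way to apply such a versioning convention is to encode a version number and build timestamp in each image name. The sketch below (the application name and naming scheme are illustrative, not an AWS requirement) builds versioned golden-image names and picks the latest:

```python
from datetime import datetime, timezone

def ami_name(app: str, version: str, when: datetime) -> str:
    """Build a golden-image name like 'webapp-golden-v1.4.2-20190307T120000'."""
    stamp = when.strftime("%Y%m%dT%H%M%S")
    return f"{app}-golden-v{version}-{stamp}"

def version_key(name: str):
    """Sort key: parse the semantic version out of an image name."""
    ver = name.split("-v")[1].split("-")[0]          # e.g. '1.4.2'
    return tuple(int(part) for part in ver.split("."))

def latest(names):
    """Return the highest-versioned golden-image name."""
    return max(names, key=version_key)

images = [
    ami_name("webapp", "1.4.2", datetime(2019, 3, 7, tzinfo=timezone.utc)),
    ami_name("webapp", "1.10.0", datetime(2019, 3, 1, tzinfo=timezone.utc)),
    ami_name("webapp", "1.9.9", datetime(2019, 2, 20, tzinfo=timezone.utc)),
]
# Numeric tuple comparison correctly ranks 1.10.0 above 1.9.9,
# which naive string sorting would get wrong.
print(latest(images))
```

With names like these, retiring old images and auditing which version each running instance came from becomes a simple string comparison.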

Docker & Containers

Another time-saver that is popular among developers is an open-source technology known as Docker. Docker allows you to build and deploy distributed applications inside software “containers” by packaging a piece of software within a Docker image.

A Docker image is a standardized unit for software development and contains everything that the software needs in order to run, including:

  • Code
  • System tools
  • Runtime
  • System libraries
  • Settings
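A minimal Dockerfile illustrates how those pieces get packaged into a single image (the base image and application files here are placeholders):

```dockerfile
# Base image provides the runtime, system tools, and system libraries
FROM python:3.7-slim

# Code: copy the application into the image
WORKDIR /app
COPY app.py requirements.txt ./

# Dependencies are baked in at build time, not installed at launch
RUN pip install --no-cache-dir -r requirements.txt

# The container starts the same way on any host
CMD ["python", "app.py"]
```

Because everything the application needs is inside the image, a container built from it behaves identically on a laptop, a build server, or an EC2 instance.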

Build golden Docker images and store them in the Amazon Elastic Container Registry (Amazon ECR), and you can deploy and manage multiple containers across a cluster of EC2 instances using any of the following:

  • Amazon Elastic Container Service (Amazon ECS)
  • AWS Elastic Beanstalk
  • AWS Fargate

You can also easily deploy, manage, and scale containerized applications with Kubernetes and Amazon Elastic Container Service for Kubernetes (Amazon EKS).
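To deploy such an image on Amazon ECS, you register a task definition that points at the image in your registry. In this sketch, the account ID, region, repository, and resource sizes are all placeholders:

```json
{
  "family": "webapp",
  "containerDefinitions": [
    {
      "name": "webapp",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/webapp:1.10.0",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 8080 }
      ]
    }
  ]
}
```

Pinning the image tag to a specific version (rather than `latest`) keeps your deployments reproducible, in the same spirit as golden AMIs.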

Joining Forces

Arguably the best-practice approach is to use a hybrid of the traditional bootstrapping action and the golden image. In this scenario, some parts of your configuration are captured in a golden image while others are configured dynamically.

Typically, items that introduce external dependencies or don’t often change would be a part of your golden image, while those that differ between your various environments or do change often would be set up dynamically through bootstrapping actions.

If you’re frequently releasing new versions of your application, it’s likely quite impractical to create a new AMI for each version. Additionally, you don’t want your database hostname hardcoded into your AMI, because it differs between your production and test environments.
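One way to sketch this hybrid is a launch template whose AMI is the golden image, while per-environment values such as the database hostname are fetched at boot from AWS Systems Manager Parameter Store. The AMI ID, parameter path, and `EnvironmentName` template parameter below are placeholders, and the script assumes an instance profile that allows `ssm:GetParameter`:

```yaml
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0abcd1234example       # golden image: OS, runtime, agents baked in
        InstanceType: t3.medium
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash
            # Bootstrap only the values that differ per environment
            DB_HOST=$(aws ssm get-parameter \
              --name /webapp/${EnvironmentName}/db-hostname \
              --query Parameter.Value --output text)
            echo "DB_HOST=${!DB_HOST}" >> /etc/webapp/env
```

The slow, stable parts boot instantly from the image; the volatile, environment-specific parts are resolved in seconds at launch.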

Service Discovery & Implementation

Applications deployed as a set of smaller services depend on those services being able to interact with one another. Since each service can be running on multiple resources, you need a way to address each service.

To achieve this, you’ll need a way of implementing service discovery. Remember that service discovery is the glue between the components, so it must be highly reliable.

A simple way to achieve service discovery for an Amazon EC2 hosted service is through Elastic Load Balancing (ELB). Because each load balancer gets its own hostname, you can consume a service through a stable endpoint. Combine this with DNS and private Amazon Route 53 zones, and the particular load balancer’s endpoint can be abstracted and modified at any time.
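As a sketch, a private Route 53 record set can alias a friendly service name to the load balancer’s hostname, so callers never hard-code the ELB endpoint. This is a change batch for `aws route53 change-resource-record-sets`; the domain, hosted zone ID, and ELB DNS name are placeholders:

```json
{
  "Comment": "Point the internal service name at the load balancer",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "orders.internal.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "internal-orders-elb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

If the load balancer is ever replaced, re-running the same UPSERT repoints the stable name with no changes to any consumer.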

Another way to achieve service discovery is to allow retrieval of the endpoint IP addresses and port number of any service through a service registration and discovery method. Auto naming, available with Amazon Route 53, makes it easier to provision instances for microservices and automatically creates DNS records based on your configuration.

Redundancy

While redundant manual tasks seem to exist only to torment us, redundancy in your AWS management can be one of the most important best practices that you implement. By using multiple resources for the same task, redundancy eliminates single points of failure and reduces downtime due to resource failures.

To utilize redundancies to your greatest advantage, you can implement them in either active or standby mode.

With active redundancy, requests are distributed to multiple redundant compute resources and if one of them fails, the rest just absorb a larger share of the workload.

Standby redundancy is often used for stateful components such as relational databases. When a resource fails, functionality is recovered through a failover process on a secondary resource. The failover generally takes time to complete, however, and the resource remains unavailable throughout that period.

Overall, when a resource fails, active redundancy achieves better utilization and affects a smaller portion of your users than standby redundancy.
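The difference in load absorbed after a failure comes down to a little arithmetic. In this sketch, each surviving node in an active fleet absorbs an equal share of traffic, while a standby setup serves nothing until failover completes (the request rates and fleet size are illustrative):

```python
def active_share_after_failure(total_rps: float, nodes: int, failed: int) -> float:
    """Requests per second each surviving node handles in an active fleet."""
    survivors = nodes - failed
    if survivors <= 0:
        raise ValueError("no surviving nodes")
    return total_rps / survivors

# Four active nodes at 1,000 rps: each serves 250 rps.
# One fails: the remaining three each absorb ~333 rps -- degraded, not down.
before = active_share_after_failure(1000, 4, 0)
after = active_share_after_failure(1000, 4, 1)
print(before, round(after, 1))

def standby_capacity(primary_up: bool, failover_complete: bool) -> float:
    """Fraction of capacity available under standby redundancy.

    The primary serves everything; during failover, until the secondary
    is promoted, capacity drops to zero."""
    return 1.0 if (primary_up or failover_complete) else 0.0

print(standby_capacity(primary_up=False, failover_complete=False))
```

This is why active redundancy degrades gracefully under failure, while standby redundancy trades that graceful degradation for simpler state management.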

When You Need More Than Tips

AWS can be challenging to implement and administer, especially when skilled resources are hard to find. Connectria is an advanced consulting partner and audited managed service provider for AWS and our broad suite of managed services helps optimize your AWS environment so that you get the most out of your investment.

For more information on how Connectria’s managed AWS services can benefit you or how you can offer these services to your own clients, contact us and a Connectria Solutions Architect will reply as quickly as possible.
