Top tricks to optimize the cost of AWS

CloudOptimo Team

Public clouds such as Amazon AWS provide a wide range of services and options that let enterprises scale their IT infrastructure to match workload and performance requirements. These solutions are cost-effective, elastic, on-demand, mature, and secure compared to their counterparts of yesteryear. The cloud has freed IT managers from forecasting hardware requirements months in advance, which by itself delivers large savings, since teams tend to over-procure to make sure peak demand is served well. The pay-as-you-go model allows instances to be turned off easily when not in use, and the on-demand elasticity of public cloud providers such as Amazon AWS avoids over-procurement and preserves cash for other business opportunities. These environments also provide a highly secure, standardized platform that businesses can rely upon.

But being spoilt for choice comes at a cost. IT managers often lose track of the services they have opted for. With such a vast array of services and features, your spending quickly turns into a complex, jumbled web of charges, many of which could be avoided with a watchful eye. Without proper analysis and understanding, your monthly cloud bill can be far higher than it should be.

In this post, we analyze the Amazon Web Services charges that most often go undetected or ignored.

1. Oversized instances, too many instances, or idle instances:

While migrating your application to a public cloud, the first choice you must make is the kind of instances to use in your application cluster. Many companies do not have accurate data on the CPU, memory, or network usage they will need, so they risk choosing oversized instances or more instances than required. The best strategy is to load test your application on a single instance of a given type and then estimate the resources accordingly. It is an elaborate exercise, but one worth undertaking with the long term in view.

Instance types optimized for compute, memory, or storage are always more expensive than their general-purpose counterparts, but in certain cases they are worth it. For example, EBS-optimized instances with SSD-backed storage are a good choice for databases or applications with heavy disk usage or large memory requirements. The performance boost often lets you run fewer instances overall.

However, which instances to run is a question only your organization can answer. Measure first with load tests, and then recalibrate once your application moves into production.

CloudOptimo helps you reduce this wastage by applying scale-up and scale-down rules that can be tuned further to optimize your deployment cost.

Oversized instances cost you money, too many instances cost you money, and idle instances cost you money. It pays to be watchful about all three.
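In practice, CloudWatch metrics are a good first signal for oversized or idle instances. Below is a minimal sketch (not CloudOptimo's engine) using boto3 that flags running instances whose average CPU has stayed low; the 14-day window and 10% threshold are illustrative assumptions you should tune to your own workload, and it assumes AWS credentials and a default region are already configured.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)  # illustrative look-back window

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=3600,            # hourly data points
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < 10:            # illustrative threshold for "oversized or idle"
            print(f"{instance_id} ({instance['InstanceType']}): "
                  f"average CPU {avg_cpu:.1f}% over 14 days")
```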

2. Always using On-Demand Instances:

You should be aware of the different pricing models AWS provides, as they can help you bring your costs down further. So, let's go back to basics. There are three pricing models available with AWS.

  • On-demand instances:

These are the default, click-and-spin instances, billed at an hourly rate that depends on the instance type, region, and platform. It is the simplest but also the most expensive way of using Amazon EC2.

In our experience, it is better to start your deployment with On-Demand instances when you first migrate to AWS. They let you fine-tune your instance types and the number of instances you need. Once you know the average number of instances you will run, you can explore other pricing models.

  • Reserved Instances:

Reserved Instances are, simply put, On-Demand instances purchased upfront for a term (1 or 3 years) in exchange for a heavy discount from AWS.

After running your application cluster on On-Demand instances for some time, you will have an idea of the number and types of instances you typically run. If the application is going to run 24x7x365 without significant changes in the number of instances, Reserved Instances are the better choice. However, because you are paying upfront for a term (1 or 3 years), reserve only the average number of instances you expect to use. Reserving instances is more expensive than On-Demand if you are not going to use them at full capacity.
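A quick back-of-the-envelope check makes that last point concrete. The numbers below are purely illustrative (not actual AWS prices): if the reservation is, say, 40% cheaper per hour but you only need the instance half the time, the effective cost per hour actually used ends up above the On-Demand rate.

```python
# Illustrative figures only, not actual AWS prices.
on_demand_hourly = 0.10       # hypothetical On-Demand price, $/hour
ri_hourly_equivalent = 0.06   # hypothetical RI cost spread over every hour of the term
utilization = 0.5             # fraction of the term the instance is actually needed

effective_cost_per_used_hour = ri_hourly_equivalent / utilization
print(f"Effective cost per used hour: ${effective_cost_per_used_hour:.2f}")
print("RI more expensive than On-Demand:", effective_cost_per_used_hour > on_demand_hourly)
# Output: $0.12 per used hour, i.e. more than the $0.10 On-Demand rate.
```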

This brings us to the next type of instances: Spot instances, the cheapest option, often yielding an 80%-90% cost reduction.

  • Spot instances:

Infrastructure providers like AWS run at massive scale, which also implies spare capacity at scale. Spot instances let you bid on this unused EC2 capacity, which can lower your Amazon EC2 costs significantly; historically they have been available at 80%-90% discounts, bringing down the cost of running IT infrastructure on AWS by a wide margin. The hourly price for a Spot instance (of each instance type in each Availability Zone) is set by Amazon EC2 and fluctuates with the supply of and demand for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price.

Spot instances are the best choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot instances are well-suited for data analysis, batch jobs, background processing, and optional tasks. They also work well for applications that are stateless and horizontally scalable.

The catch is that AWS can reclaim Spot instances when the Spot price rises above your bid price, when demand for Spot instances rises, or when the supply of Spot instances decreases. When Amazon EC2 marks a Spot instance for termination, it issues a Spot instance termination notice, which gives the instance a two-minute warning before it terminates. This poses a challenge for applications that require high availability and consistent performance.
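One common way to cope with that warning is to watch the instance metadata service from the Spot instance itself. Here is a minimal sketch, assuming it runs on the instance and that IMDSv1 is enabled (IMDSv2 would additionally require a session token); drain() is a hypothetical placeholder for your own shutdown or checkpoint logic.

```python
import time
import urllib.error
import urllib.request

# Spot termination notices appear at this instance-metadata path roughly two
# minutes before the instance is reclaimed.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def termination_notice_pending() -> bool:
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as response:
            return response.status == 200   # body holds the action and its time
    except urllib.error.HTTPError:
        return False                        # 404: no termination scheduled
    except urllib.error.URLError:
        return False                        # metadata service unreachable

def drain():
    # Hypothetical placeholder: deregister from the load balancer, checkpoint
    # work, flush state to S3, etc.
    print("Termination notice received, draining workload...")

while True:
    if termination_notice_pending():
        drain()
        break
    time.sleep(5)                           # poll every few seconds
```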

CloudOptimo allows you to run your IT infrastructure on Spot instances in a highly available manner. It sets bids using its ML-driven price prediction engine and readjusts the cluster as the price changes. It also has a proprietary Spot termination engine that knows well in advance which Spot instances are at risk of termination and migrates them. If you are planning to deploy infrastructure that is a little more complex and requires Elastic Load Balancers and Auto Scaling, you will probably need a combination of On-Demand and Spot instances, weighing their relative AWS costs, and doing this right requires careful planning. To ensure smooth and consistent performance, CloudOptimo lets you run your cloud in a hybrid mode in which a chosen percentage of the total instances run as On-Demand instances.

Please visit CloudOptimo for more details.

3. Using expensive data storage such as Amazon S3 or EBS for data archives instead of Amazon Glacier

What is S3 and what is it used for?

Amazon S3 is a highly available storage service widely used for frequently accessed data such as documents, images, videos, and log files. It is designed for rapid retrieval and for use cases demanding low latency and frequent access. As assets grow over time, however, S3 becomes a costly place to keep rarely accessed items, which can instead be moved to a durable, stable archive to save costs. Some typical archive use cases:

  • Media assets like news footage, movies, and HD content can grow to tens or hundreds of petabytes over the years. Old archived footage can suddenly become valuable because of current global events, and access is needed only during that window.
  • Enterprises need to archive data like email, legal records, and financial documents to comply with regulatory and business requirements; this data is needed only during audits.
  • Organizations like libraries, historical societies, non-profits, and governments are stepping up their efforts to preserve valuable but aging digital content that is no longer readily available. These archives can grow to petabytes over time.

What is Amazon Glacier?

Amazon Glacier is a low-cost, online cold storage and data archival solution on a pay-as-you-go model. It is like Amazon S3 but almost 10 times cheaper, providing storage for as little as $0.01 per gigabyte per month. Amazon brought this product to market so that data you rarely access can be backed up at a much cheaper rate; it is not meant for frequent retrieval.

The catch is that retrieving files takes 3 to 5 hours to download, so Glacier is not suited to scenarios that need quick retrieval.

Since archives do not require frequent access or rapid retrieval, you can keep data that needs real-time access in Amazon S3 and move rarely accessed data to Amazon Glacier.

Moving Data from Amazon S3 to Amazon Glacier automatically:

How would you like to have the best of both worlds? How about rapid retrieval of fresh data stored in S3, with automatic, policy-driven archiving to lower cost Glacier storage as your data ages, along with easy, API-driven or console-powered retrieval?

Amazon provides exactly this: you can use Amazon Glacier as a storage option for Amazon S3 through lifecycle rules. Details are available on the AWS blog.
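As an illustration, a lifecycle rule can be attached to a bucket so that objects transition to Glacier automatically as they age. The sketch below uses boto3; the bucket name, the "archive/" prefix, and the 90-day cutoff are hypothetical and should be replaced with your own.

```python
import boto3

s3 = boto3.client("s3")

# Move objects under the "archive/" prefix to the Glacier storage class once
# they are 90 days old (both values are illustrative).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",   # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-objects",
                "Filter": {"Prefix": "archive/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```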

4. Unused Elastic IP Addresses

IP addresses are a limited resource, so AWS allows one Elastic IP address (EIP) associated with a running instance free of charge. To promote efficient use, however, AWS levies a small hourly charge on an EIP that is not associated with a running instance, for example when it remains attached to a stopped instance or to an unattached network interface.

Your DevOps engineers might assume that stopping an instance releases its EIP automatically, but that is not the case: you need to release EIPs explicitly. Put this into your DevOps working manual to ensure these small costs do not bite you and EIP usage remains optimal.
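A small audit script can make this check routine. The sketch below, assuming boto3 with configured credentials, lists Elastic IPs that are not associated with anything; the actual release call is left commented out so nothing is removed without review.

```python
import boto3

ec2 = boto3.client("ec2")

for address in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in address:       # not attached to an instance or ENI
        print(f"Unattached EIP: {address['PublicIp']}")
        # ec2.release_address(AllocationId=address["AllocationId"])  # uncomment after review
```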

5. Old / Orphan Snapshots

You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Backups are critical for fast recovery when failures occur. However, EBS snapshots should be kept in moderation, and where possible old snapshots should be archived to Amazon Glacier as described above.

Amazon S3 is cheaper than general-purpose SSD storage, but snapshot charges can still be more than you would imagine. Snapshots are incremental, yet the initial snapshot covers the entire volume, and regular subsequent snapshots can end up consuming almost as much capacity as the first one. As a result, the monthly savings from deleting old or orphan snapshots can be almost as large as the savings from deleting the original EBS volume.

It is good practice to tag snapshots well and delete old, unnecessary, or orphan snapshots that are no longer required. The simple habit of tagging each snapshot with its creation date, its contents, and its purpose goes a long way toward saving costs.
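To keep this habit honest, a short script can surface snapshots that have crossed an age threshold. The sketch below assumes boto3 with configured credentials; the 180-day cutoff is an illustrative assumption, and deletion is deliberately commented out so snapshots are only listed for review.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=180)   # illustrative age threshold

paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):          # only snapshots you own
    for snapshot in page["Snapshots"]:
        if snapshot["StartTime"] < cutoff:
            tags = {t["Key"]: t["Value"] for t in snapshot.get("Tags", [])}
            print(snapshot["SnapshotId"],
                  snapshot["StartTime"].date(),
                  tags.get("Name", "<untagged>"))
            # ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])  # uncomment after review
```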

6. Unused Keys

It might come as a surprise to many, but each Customer Master Key (CMK) you create in AWS Key Management Service costs $1/month until you delete it, whether you use it with KMS-generated key material or key material you import.

This can add up to a significant cost for a large organization that generates keys for every employee. It is a good idea to add a monthly item to your DevOps checklist to remove keys associated with employees who have left or who no longer require access.
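The check itself is easy to script. The sketch below, assuming boto3 with configured credentials, lists customer-managed CMKs so unused ones can be reviewed; scheduling deletion (which always involves a waiting period of 7 to 30 days) is left commented out, and deciding which keys are genuinely unused remains a human decision.

```python
import boto3

kms = boto3.client("kms")

paginator = kms.get_paginator("list_keys")
for page in paginator.paginate():
    for key in page["Keys"]:
        metadata = kms.describe_key(KeyId=key["KeyId"])["KeyMetadata"]
        # Skip AWS-managed keys and keys already scheduled for deletion.
        if metadata["KeyManager"] == "CUSTOMER" and metadata["KeyState"] != "PendingDeletion":
            print(metadata["KeyId"], metadata.get("Description", ""))
            # kms.schedule_key_deletion(KeyId=metadata["KeyId"], PendingWindowInDays=30)
```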

7. Data Transfer Costs

In most cases, AWS does not charge for data transfer within the AWS network, but it charges significantly when data is transferred out. This is important to consider when planning your migration to AWS, so it pays to understand how AWS charges for bandwidth.

Traffic served to the public internet will probably be the most significant part of your bandwidth bill. You may also pay for data transfer between different AWS services and between different regions. Applications that were simply rehosted from a private cloud or another cloud provider may not be tuned to take full advantage of AWS services and features. It is worthwhile to consider CDN providers such as AWS CloudFront or Cloudflare to ensure static content is served from locations close to your users and data follows the cheapest path.

Tools such as Google's PageSpeed Insights or Mobile-Friendly Test can point to optimizations you should make. Loading content faster also improves user experience and increases user stickiness.

Hybrid setups, where your application uses other hosting services alongside AWS, also risk incurring higher bandwidth charges that careful planning could avoid. It is equally important to account for the bandwidth charges your application will incur during the migration phase itself.
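Cost Explorer can show where that bandwidth money actually goes. The sketch below, assuming boto3, Cost Explorer enabled on the account, and an illustrative date range, groups a month's spend by usage type and prints only the data-transfer line items.

```python
import boto3

# The Cost Explorer API is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},   # illustrative month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        usage_type = group["Keys"][0]
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if "DataTransfer" in usage_type and cost > 0:
            print(f"{usage_type}: ${cost:.2f}")
```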

8. Underutilized Reserved Instances

Reserved Instances look like a good bet if you can accurately forecast the minimum number of instances your organization will require. It is therefore imperative to run your cluster on On-Demand instances for the initial period and reconcile that with the growth your application is likely to see. Get this wrong and your organization ends up with idle reservations. If that happens, see whether you can put them to good use elsewhere; if they cannot be reused, AWS provides a marketplace for unused RIs.

CloudOptimo helps you identify underutilized RIs, which can later be sold on the AWS RI Marketplace, though keep in mind that you may lose some money in the process.
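If you would rather check this yourself, Cost Explorer exposes a reservation-utilization report. The sketch below, assuming boto3 and Cost Explorer enabled on the account, prints the overall RI utilization percentage for an illustrative month; anything well below 100% is a candidate for reuse or resale.

```python
import boto3

# The Cost Explorer API is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_reservation_utilization(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},   # illustrative month
    Granularity="MONTHLY",
)

for period in response["UtilizationsByTime"]:
    utilization = period["Total"]["UtilizationPercentage"]
    print(f"{period['TimePeriod']['Start']}: RI utilization {utilization}%")
```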

Tags
CloudOptimo, On-Demand Instances, Reserved Instances, Spot Instances, AWS Cost Optimization, AWS Pricing Models, IT Infrastructure Management