As an organization, we are all responsible for using resources efficiently to provide real business value. The unit economics need to make sense. We have compiled a list of things you can do to keep your AWS spending under control.
Cloud computing is a game-changer, providing organizations with enhanced flexibility, rapid scalability, and cost reduction opportunities – as long as it is carefully considered and implemented. The ease and speed of provisioning servers, databases, load balancers, and containers in the cloud can sometimes lead to a loss of control and unexpected cost escalation, catching organizations by surprise.
As the leading Cloud Service Provider, AWS is likely to be the choice for many organizations, with a high probability that your organization is already utilizing its services in some capacity.
Below are 15 rules that can help you reduce your AWS bill and keep it in check:
- Monitor resource usage: Monitor the number and usage of the resources you have running by type, team, account, region, etc., and map them back to the owning teams. Use resource tags liberally to categorize. You can use EC2 Usage Reports to evaluate and optimize your compute spend. However, tagging needs organization-wide support, effort, and discipline – easier said than done. Further, tagging has limitations and may not be feasible or possible for all scenarios, resources, or service types. Products like CloudZero can help you map cost allocation using a custom DSL.
- Periodically audit for unused resources: Take inventory and check whether you need all the resources and services you are running or have created. This includes EC2 instances, RDS databases and instances, ELBs, snapshots, ECS tasks, etc. Evaluate whether your Dev, Test, QA, and Staging environments need to run around the clock, and shut them down when they are not required.
- Enable billing alerts: Enable monitoring of estimated AWS charges at 25%, 50%, and 75% of your expected monthly budget. That way, you’ll quickly be alerted when something gets out of control. [Create a billing alarm to monitor your estimated AWS charges].
- Re-evaluate and right-size: Applications and workloads constantly change; what you sized and designed six months ago may not be optimal now. Re-evaluate the instance types and the number of instances used for each application. Instance types vary in price by orders of magnitude – choose carefully. Monitor your application's CPU, memory, and disk usage to spot excess capacity and opportunities to downsize the instance type.
- Look out for logging usage: VPC Flow Logs, Route 53 logs, and CloudTrail logs are necessary for debugging, measurement, and auditing. However, you should have an established retention policy in place. Over time, these logs accumulate and start adding to your bill – by default, they never expire. Keep an eye on them; we highly recommend setting the CloudWatch log retention period on your log groups.
- Bill under a single AWS Organizations account: If you have multiple AWS accounts, consolidate billing under the same AWS Organizations structure and save by leveraging volume discounts.
- Leverage Spot Instances: Spot Instances are a cost-effective option offered by cloud providers like AWS, where unused compute capacity is made available at significantly lower prices than On-Demand Instances. They are ideal for non-critical and fault-tolerant workloads, allowing users to save on compute costs while taking advantage of spare capacity. However, Spot Instances can be reclaimed with only a two-minute interruption notice when AWS needs the capacity back, making them suitable only for workloads that can tolerate interruptions. A recommended use case is running batch jobs that do not require permanent instances. Use the Spot Instance Advisor to find the instance types with the lowest prices and interruption rates in your region [AWS Spot instance advisor]. Also, some companies specialize in this area, like Xosphere.
- Leverage Auto Scaling: We highly recommend using Auto Scaling to expand and contract compute capacity based on application demand. For example, you can reduce your AWS costs significantly by automatically shrinking the number of instances/containers when CPU utilization drops, say for workloads running behind a load balancer. Choosing smaller instance types in an Auto Scaling Group can yield better cost savings thanks to finer resource granularity – but check that your application will still work on smaller instances. We also recommend considering technologies like Kubernetes, which help with better resource utilization and scale as your application scales.
- Run AWS Trusted Advisor regularly (once every three months) to check for excess capacity and security issues.
- Keep up to date with new additions to services and products: AWS continually introduces new generations of services and products, delivering cost-efficient and enhanced offerings to its users. You can often reduce costs by upgrading without any impact on application performance. For example, upgrading from a gp2 to a gp3 volume gives you a direct discount of 20% with no impact on throughput and IOPS, provided your throughput is at or below 125 MiB/s and your IOPS at or below 3,000 – it's a no-brainer. Similarly, new-generation EC2 instances are cheaper than older-generation ones.
- Consider network topology cost implications: AWS networking costs can balloon and surprise you. Engineering teams must weigh the cost implications of the network architecture upfront, during the design phase. Which region/availability zone do your workloads sit in? Which region is the bulk of your users in? What is the expected egress/ingress? What data will be transferred to/from the internet? What data is transferred between workloads? What data is transferred between workloads and AWS services? We recommend making a networking cost assessment a required item for design approval – once the application is built, changing the design will be very difficult and expensive.
- Leverage AWS Compute Discounts: AWS offers a variety of compute discounts aimed at helping businesses optimize their costs while maintaining the required computational resources. You can assess the compute requirements for your company and plan on committing using Reserved Instances (RIs), Savings Plans, Spot Instances, and Dedicated Host Reservations. All the discounts come with a certain level of commitment and limited flexibility. Evaluate where the business is going for the next few years and choose the right plan carefully.
- Leverage cheaper S3 storage class alternatives: From a storage perspective, you can use Glacier to manage archived and older data. Within S3, various options exist: you can opt for reduced-availability classes like S3 One Zone-IA for data that can be readily reproduced, and explore options like S3 Intelligent-Tiering to optimize storage costs effectively with practically zero impact on performance, durability, and availability. Check out the S3 storage classes.
- Monitor API Gateway API calls: As your applications evolve, the number of APIs supported grows, adding significant costs. Periodically review API usage and assess the ROI to the business. Deprecate and delete unnecessary APIs that do not add value to your product offering – fewer things for engineers to maintain!
- Optimize EBS usage: Allocating too much volume means paying for unused storage. Why do that? Increasing or decreasing the volume size is relatively straightforward based on your needs. You could automate scaling EBS size or try solutions like Zesty Disk and Lucidity Autoscaler.
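A few of the rules above lend themselves to small worked examples. The tag-based cost mapping from the first rule boils down to grouping spend by a tag key; here is a minimal sketch in Python (the record shape and names are invented for illustration, not an actual Cost Explorer response):

```python
from collections import defaultdict

# Hypothetical cost records, shaped loosely like rows exported from a
# Cost and Usage Report (names and numbers are illustrative).
cost_records = [
    {"service": "AmazonEC2", "cost": 310.0, "tags": {"team": "payments", "env": "prod"}},
    {"service": "AmazonRDS", "cost": 120.0, "tags": {"team": "payments", "env": "prod"}},
    {"service": "AmazonEC2", "cost": 45.0,  "tags": {"team": "search",   "env": "dev"}},
    {"service": "AmazonS3",  "cost": 12.5,  "tags": {}},  # untagged -> unallocated
]

def cost_by_tag(records, tag_key):
    """Sum cost per value of a tag, bucketing untagged spend separately."""
    totals = defaultdict(float)
    for r in records:
        totals[r["tags"].get(tag_key, "(untagged)")] += r["cost"]
    return dict(totals)

print(cost_by_tag(cost_records, "team"))
# {'payments': 430.0, 'search': 45.0, '(untagged)': 12.5}
```

The "(untagged)" bucket is the useful part in practice: it shows how much spend your tagging discipline is failing to attribute.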
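The stepped billing alerts are just fractions of your expected budget; a sketch (the budget figure is hypothetical, and each threshold would back one CloudWatch alarm on the AWS/Billing `EstimatedCharges` metric, which lives in us-east-1 once billing alerts are enabled):

```python
def billing_alarm_thresholds(monthly_budget_usd, fractions=(0.25, 0.50, 0.75)):
    """Threshold values (USD) for stepped billing alarms, one per fraction."""
    return [round(monthly_budget_usd * f, 2) for f in fractions]

# Hypothetical $4,000/month budget -> three escalating alarm thresholds.
print(billing_alarm_thresholds(4000))  # [1000.0, 2000.0, 3000.0]
```

In a real setup you would create each alarm with `put_metric_alarm` (boto3) or via the console, as described in the AWS billing alarm guide linked above.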
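The right-sizing rule amounts to flagging instances whose observed peaks sit well below capacity. A minimal sketch, with invented fleet data and illustrative thresholds that you would tune per workload:

```python
def downsize_candidates(instances, cpu_ceiling=30.0, mem_ceiling=40.0):
    """Flag instances whose peak CPU *and* memory (percent of capacity)
    both stayed below the ceilings over the observation window.
    Thresholds are illustrative, not a recommendation."""
    return [
        i["id"] for i in instances
        if i["peak_cpu_pct"] < cpu_ceiling and i["peak_mem_pct"] < mem_ceiling
    ]

# Hypothetical fleet with peak utilization gathered from monitoring.
fleet = [
    {"id": "i-0a", "type": "m5.2xlarge", "peak_cpu_pct": 18.0, "peak_mem_pct": 22.0},
    {"id": "i-0b", "type": "m5.xlarge",  "peak_cpu_pct": 71.0, "peak_mem_pct": 55.0},
]
print(downsize_candidates(fleet))  # ['i-0a']
```

Note the *and*: an instance that is memory-bound but CPU-idle is not a downsize candidate, which is why CPU-only dashboards mislead.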
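For the log-retention rule, the groups to fix are the ones with no retention set. CloudWatch Logs' `describe_log_groups` response simply omits the `retentionInDays` key for groups whose events never expire, so filtering is one line (the sample data below is invented):

```python
def groups_without_retention(log_groups):
    """Log groups missing 'retentionInDays' keep events forever --
    these are the ones silently growing your bill."""
    return [g["logGroupName"] for g in log_groups if "retentionInDays" not in g]

# Hypothetical describe_log_groups-shaped records.
sample = [
    {"logGroupName": "/aws/lambda/checkout", "retentionInDays": 30},
    {"logGroupName": "/aws/lambda/ingest"},   # never expires
    {"logGroupName": "vpc-flow-logs"},        # never expires
]
print(groups_without_retention(sample))
# ['/aws/lambda/ingest', 'vpc-flow-logs']
```

The fix for each flagged group is a call to `put_retention_policy` with a `retentionInDays` value that matches your retention policy.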
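To size the Spot opportunity before committing, a quick back-of-the-envelope helps. The prices below are illustrative only – real Spot prices fluctuate per availability zone and instance type, so check the Spot Instance Advisor or the pricing API:

```python
def spot_savings(on_demand_hourly, spot_hourly, hours_per_month=730):
    """Monthly saving (USD) and percentage from moving a steady
    workload from On-Demand to Spot. Ignores interruption overhead."""
    saving = (on_demand_hourly - spot_hourly) * hours_per_month
    pct = 100.0 * (on_demand_hourly - spot_hourly) / on_demand_hourly
    return round(saving, 2), round(pct, 1)

# Illustrative prices for a single instance running 24/7.
print(spot_savings(on_demand_hourly=0.096, spot_hourly=0.035))
# (44.53, 63.5)
```

The model deliberately ignores interruption handling; for batch jobs that checkpoint, that overhead is usually small, which is exactly why they are the recommended Spot workload.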
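The Auto Scaling rule can be pictured with a simplified model of target tracking: scale capacity proportionally so the metric lands back near its target (the bounds and numbers here are illustrative, not the exact CloudWatch algorithm):

```python
import math

def desired_capacity(current, actual_metric, target_metric, minimum=1, maximum=20):
    """Simplified target-tracking step: scale capacity in proportion to
    how far the metric is from its target, clamped to [minimum, maximum]."""
    wanted = math.ceil(current * actual_metric / target_metric)
    return max(minimum, min(maximum, wanted))

# 4 instances at 80% CPU, targeting 50% -> scale out.
print(desired_capacity(4, actual_metric=80, target_metric=50))   # 7
# 10 instances at 12% CPU, targeting 50% -> scale in.
print(desired_capacity(10, actual_metric=12, target_metric=50))  # 3
```

The scale-in case is where the savings live: without it you would keep paying for ten instances to serve a 12% load.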
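The gp2-to-gp3 upgrade mentioned above is easy to quantify. The per-GiB prices below are us-east-1 list prices at the time of writing – verify current pricing for your region before relying on them:

```python
def monthly_ebs_cost(size_gib, price_per_gib):
    """Monthly storage cost for a volume, ignoring provisioned IOPS/throughput."""
    return round(size_gib * price_per_gib, 2)

GP2_PER_GIB = 0.10  # us-east-1 list price at time of writing
GP3_PER_GIB = 0.08  # us-east-1 list price at time of writing

gp2 = monthly_ebs_cost(500, GP2_PER_GIB)  # 50.0
gp3 = monthly_ebs_cost(500, GP3_PER_GIB)  # 40.0
print(f"gp2 ${gp2}/mo vs gp3 ${gp3}/mo -> {100 * (gp2 - gp3) / gp2:.0f}% saved")
```

For a 500 GiB volume under the gp3 baseline (3,000 IOPS, 125 MiB/s), that is the full 20% discount with no configuration changes beyond the volume type.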
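Finally, committed-use discounts only pay off if the workload actually runs long enough. A deliberately simplified break-even sketch (the dollar figures are hypothetical, and real RI/Savings Plan pricing has more shapes than an upfront-plus-hourly model):

```python
def breakeven_months(upfront, committed_hourly, on_demand_hourly, hours_per_month=730):
    """Months of continuous use before an upfront commitment beats pure
    On-Demand. 'committed_hourly' is the effective rate after discount."""
    monthly_saving = (on_demand_hourly - committed_hourly) * hours_per_month
    if monthly_saving <= 0:
        return float("inf")  # the commitment never pays off
    return round(upfront / monthly_saving, 1)

# Hypothetical: $600 upfront drops the rate from $0.096/hr to $0.040/hr.
print(breakeven_months(600, committed_hourly=0.040, on_demand_hourly=0.096))
# 14.7
```

If the answer exceeds your planning horizon (or the commitment term), the "discount" is a loss – which is why the bullet above stresses evaluating where the business is going before locking in.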
AWS stands out as one of the most extensively utilized cloud providers, consistently driving down service costs by capitalizing on economies of scale. Nevertheless, it’s crucial to recognize that not all applications, workloads, or organizations are ideally suited for cloud environments. It’s imperative to discern when migrating out of the cloud is appropriate and to determine whether an application aligns better with an on-premises setup or a SaaS provider.
Remember that Cloud cost optimization is not a one-time fix but an ongoing and iterative process. It involves continuously refining and adjusting your cloud resources and strategies to balance cost and performance. Having said that, it is a very important and rewarding journey.