Over the past month AWS made several FinOps-related announcements that, for some, could be real game changers. These announcements were presented at AWS re:Invent and relate to a couple of services on the cloud. They bring price discounts or advanced analysis options that technical teams and FinOps analysts can use in their day-to-day activities. In this article I will summarize some of the announcements, try to explain them more clearly, and lay out their pros and cons in terms of FinOps and cost optimization.
EBS snapshot charges are usually an incremental problem that is sometimes difficult to control. Because AWS charges only for the changes between snapshots, deleting an old snapshot may not yield the savings we expected. Until now, if we wished to keep a snapshot's data at a cheaper price, we used cheaper S3 tiers (like Glacier), but then we paid for data transfer and had to weigh all kinds of other considerations before doing so.
Now AWS has released a new feature: Amazon EBS Snapshots Archive. With this option you can keep a point-in-time copy of the data at lower prices ($0.0125 per GB-month of stored data and $0.03 per GB retrieved), saving 75% compared to a standard snapshot. As always, make sure you move the right data to this tier, because two pricing elements differ from the standard class: the retrieval fee mentioned above, and a minimum retention period of 90 days. This means that even if you delete or retrieve the snapshot earlier, you still pay for the full 90 days.
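To make the trade-off concrete, here is a minimal back-of-the-envelope sketch. It assumes the standard snapshot price of $0.05 per GB-month (the published us-east-1 figure; verify for your region) alongside the archive prices quoted above, and it models the 90-day minimum:

```python
# Sketch: when does archiving an EBS snapshot pay off?
# Assumed prices (us-east-1 at announcement time; check your region):
STANDARD_PER_GB_MONTH = 0.05    # standard snapshot storage
ARCHIVE_PER_GB_MONTH = 0.0125   # Snapshots Archive storage (75% cheaper)
RETRIEVAL_PER_GB = 0.03         # fee when restoring from the archive tier
MIN_MONTHS = 3                  # 90-day minimum retention

def archive_cost(size_gb, months, retrievals=0):
    """Cost of keeping a snapshot in the archive tier."""
    # Early deletion or retrieval still bills the full 90 days:
    billed_months = max(months, MIN_MONTHS)
    return round(size_gb * (ARCHIVE_PER_GB_MONTH * billed_months
                            + RETRIEVAL_PER_GB * retrievals), 2)

def standard_cost(size_gb, months):
    return round(size_gb * STANDARD_PER_GB_MONTH * months, 2)

# A 500 GB snapshot kept 12 months, restored once:
print(archive_cost(500, 12, retrievals=1))  # 90.0
print(standard_cost(500, 12))               # 300.0

# The actual move is a single API call, e.g. with boto3:
# ec2.modify_snapshot_tier(SnapshotId="snap-0123...", StorageTier="archive")
```

Note how the minimum retention shows up: a snapshot archived and deleted after one month costs the same as one kept the full three.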
S3 – throughout the years there have been many changes in S3. In terms of cost optimization, the introduction of a variety of tiers and classes has helped reduce storage costs. Companies that have managed their storage on the right tier from day one enjoy lower prices. The catch is that S3 pricing is complicated and full of details: storage costs with minimum durations, early-deletion charges, and so on.
Now AWS has announced three changes in S3. First, the Amazon S3 Glacier storage class became Amazon S3 Glacier Flexible Retrieval. Beyond the name change, AWS reduced the storage price by 10%, and bulk data retrievals and requests are now free of charge. If your data isn't accessed daily but you need to be able to retrieve it within a short period of time, this can be a good option. Note that AWS charges for a minimum storage duration of 90 days, meaning the class is best suited to objects that are accessed about once a quarter or less and that aren't needed immediately. If the data is accessed even less often, use the cheapest class, S3 Glacier Deep Archive.
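A common way to land data in this class is a lifecycle rule. Below is a minimal sketch of one; the bucket prefix and rule ID are placeholders, and in the S3 API the class value for Flexible Retrieval is still GLACIER:

```python
# Sketch: lifecycle rule moving objects under a hypothetical "reports/"
# prefix to S3 Glacier Flexible Retrieval after 30 days.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-quarterly-data",       # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},     # hypothetical prefix
            "Transitions": [
                # API storage class for Flexible Retrieval is GLACIER
                {"Days": 30, "StorageClass": "GLACIER"}
            ],
        }
    ]
}
# Applied with boto3 as:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```

Because of the 90-day minimum, avoid transitioning objects that are routinely deleted soon after the transition day.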
What if you wish to keep the data in a lower-cost class but still be able to retrieve it immediately, in milliseconds? Use the brand-new S3 Glacier Instant Retrieval storage class. Its price is a bit higher than Flexible Retrieval's ($0.004 per GB vs. $0.0036 per GB), but it is the cheapest archive class that offers immediate retrieval. Although it gives the flexibility of millisecond retrieval, it also carries a minimum storage duration of 90 days.
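Objects can be written straight into the new class at upload time. A minimal sketch, where the bucket and key are placeholders and the API value for the class is GLACIER_IR:

```python
# Sketch: uploading directly to S3 Glacier Instant Retrieval.
put_request = {
    "Bucket": "my-archive-bucket",          # hypothetical bucket
    "Key": "scans/2021/invoice-001.pdf",    # hypothetical key
    "Body": b"<file bytes>",
    "StorageClass": "GLACIER_IR",           # API value for the new class
}
# boto3: s3.put_object(**put_request)

# Monthly storage for 1,000 GB, using the per-GB prices quoted above:
instant = round(1000 * 0.004, 2)    # 4.0  - Glacier Instant Retrieval
flexible = round(1000 * 0.0036, 2)  # 3.6  - Glacier Flexible Retrieval
```

The $0.40 per TB-month premium buys millisecond access instead of minutes-to-hours retrieval times.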
Customers who find it hard to control their S3 costs will find Intelligent-Tiering the best fit. There are now more than eight storage classes with different pricing structures, each with its pros and cons. AWS gives you the ability to take advantage of its algorithm and have tiering handled automatically. S3 Intelligent-Tiering used to have four tiers, and now AWS has announced a new one, which in my view is the biggest change: the Archive Instant Access tier within S3 Intelligent-Tiering. It has the same storage pricing as the S3 Glacier Instant Retrieval storage class but without any retrieval fee. In addition, AWS learns the access patterns of the data you store and moves objects between the tiers accordingly. You enjoy the automation and cost reduction without doing anything.
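Opting in is as simple as choosing the class at upload; the Archive Instant Access tier then applies automatically to objects that go unaccessed, with no extra configuration. A minimal sketch (bucket and key are placeholders):

```python
# Sketch: storing an object in S3 Intelligent-Tiering. Once stored,
# AWS moves it between access tiers based on observed access patterns;
# the new Archive Instant Access tier requires no configuration.
put_request = {
    "Bucket": "my-data-bucket",       # hypothetical bucket
    "Key": "logs/2021/12/app.log",    # hypothetical key
    "Body": b"<file bytes>",
    "StorageClass": "INTELLIGENT_TIERING",
}
# boto3: s3.put_object(**put_request)
```

From that point on, no lifecycle rules are needed for the automatic tiers, which is exactly the appeal for teams that can't predict access patterns.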
Another service where AWS is adding a new class to help reduce costs is DynamoDB, with the new DynamoDB Standard-IA table class. It should be used only for tables you don't read from or write to often, as its request prices are higher than the standard table's, but it is well suited to data that must stay available at lower storage prices and doesn't change much. The biggest change here is the ability to switch between the classes without any impact on the data. So you can start with Standard and check the read and write metrics; later, if the data isn't accessed much, switch to Standard-IA and save ~60% of the storage costs.
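The switch itself is one table update. The sketch below uses a hypothetical table name, and the storage prices are the published us-east-1 figures at the time ($0.25 vs. $0.10 per GB-month, which is where the ~60% comes from) — treat them as assumptions and verify for your region:

```python
# Sketch: moving an existing table to the Standard-IA class.
update_request = {
    "TableName": "audit-log",                     # hypothetical table
    "TableClass": "STANDARD_INFREQUENT_ACCESS",   # or "STANDARD" to revert
}
# boto3: dynamodb.update_table(**update_request) -- no impact on the data

# Assumed us-east-1 storage prices per GB-month (verify per region):
STANDARD_GB_MONTH = 0.25
IA_GB_MONTH = 0.10

def monthly_storage(size_gb, infrequent=False):
    rate = IA_GB_MONTH if infrequent else STANDARD_GB_MONTH
    return round(size_gb * rate, 2)

print(monthly_storage(200))                   # 50.0 - Standard
print(monthly_storage(200, infrequent=True))  # 20.0 - Standard-IA
```

Remember the flip side: read and write requests cost more on Standard-IA, so check the table's read/write metrics before switching.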
In this article I have tried to highlight a few of the AWS re:Invent announcements related to FinOps and cost optimization. I have only covered the basics, so I recommend reading more about each of them and testing them before making any changes in your accounts. There were plenty of other announcements about new services and features, new EC2 instance types, and so on. To stay up to date, follow https://aws.amazon.com/new/ or https://aws.amazon.com/about-aws/whats-new/2021/.