Azure Cosmos DB is one of Azure's most impressive platform services: a schema-agnostic, fully managed, globally distributed, multi-model database. It offers scale, low latency, and high throughput, with API support for SQL, MongoDB, Cassandra, and Table. It also provides global distribution, meaning you can scale and distribute it across different Azure regions. When designing a solution for the cloud, one of the critical considerations is how to make it cost-effective. In this post, let us quickly examine the options Azure Cosmos DB provides for building a cost-effective solution.
The consumption cost of any database operation is measured in request units (RUs), with throughput expressed as request units per second (RU/s). This covers the request and response between your API/application layer and the database layer. When we build a solution using Cosmos DB, we tend to choose the capacity mode "Provisioned Throughput." This gives us guaranteed low latency and high availability, but it charges at a constant rate, whether or not there are any transactions.
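To make the "constant rate" point concrete, here is a minimal sketch of how provisioned-throughput billing works. The per-hour rate used below is an illustrative assumption, not an official figure; always check current Azure pricing for your region.

```python
# Sketch of provisioned-throughput billing: you pay for the RU/s you
# provision, every hour, whether or not any requests arrive.
# The rate below is an assumed illustrative value, NOT official pricing.

RATE_PER_100_RUS_PER_HOUR = 0.008  # assumed single-region rate, USD


def provisioned_monthly_cost(provisioned_rus: int, hours: int = 730) -> float:
    """Cost of a fixed provisioned throughput for a month (~730 hours)."""
    return (provisioned_rus / 100) * RATE_PER_100_RUS_PER_HOUR * hours


# A container provisioned at 1,000 RU/s costs the same whether it
# serves one request or one million requests that month:
print(round(provisioned_monthly_cost(1_000), 2))
```

The key takeaway: the cost depends only on what you provision, not on what you actually use.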
This makes it expensive, especially when we provision for an application's maximum expected traffic while the actual traffic keeps changing and is highly unpredictable. In this case, we should consider Autoscale Provisioned Throughput. With autoscale, you set the maximum RU/s, and Azure Cosmos DB automatically scales your throughput based on workload usage within that range.
This enables very effective scale management for your transactions, especially for critical and unpredictable traffic patterns. Enabling autoscale helps optimize your RU usage and cost, as throughput scales down to the minimum of the RU range when the workload is idle.
While standard Provisioned Throughput charges per hour at a constant throughput rate, with Autoscale Provisioned Throughput enabled, Azure charges for the highest throughput the container scaled to within the hour. Autoscale Provisioned Throughput enables RU usage and cost optimization.
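The per-hour billing behaviour above can be sketched as follows. This assumes the documented autoscale floor of 10% of the configured maximum; the numbers are illustrative, not a pricing calculator.

```python
# Sketch of autoscale billing: each hour is billed at the highest RU/s
# the container scaled to, and autoscale never drops below 10% of the
# configured maximum (an assumption based on documented behaviour).


def autoscale_billed_rus(peak_rus_in_hour: float, max_rus: int) -> float:
    """RU/s billed for one hour under autoscale provisioned throughput."""
    floor = max_rus * 0.10  # autoscale minimum: 10% of the configured max
    return min(max(peak_rus_in_hour, floor), max_rus)


# With a 4,000 RU/s maximum, an idle hour bills at the 400 RU/s floor,
# while a busy hour bills at its observed peak:
print(autoscale_billed_rus(0, 4_000))      # idle hour -> floor
print(autoscale_billed_rus(2_750, 4_000))  # busy hour -> observed peak
```

So during quiet hours you pay for only a fraction of the maximum, which is where the cost savings come from.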
Serverless Capacity Mode
What about using Cosmos DB for development and test? Do we really need to pay for constant throughput? What about a solution with a limited number of transactions over a specific period of time? Or a solution where data is cached and transactions to Cosmos DB do not require sustained throughput?
In such scenarios, where transaction processing happens only occasionally, you are building a small non-critical application, or you want cost-optimized environments for development and testing, Serverless capacity mode is the right fit.
With Serverless capacity mode, you pay only for what you consume: the RUs consumed and the storage consumed.
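A minimal sketch of that pay-per-consumption model is below. Both unit prices are illustrative assumptions; look up the current serverless rates for your region before estimating real costs.

```python
# Sketch of serverless billing: pay only for RUs actually consumed plus
# storage used. Both unit prices are assumed illustrative values, NOT
# official Azure pricing.

PRICE_PER_MILLION_RUS = 0.25  # assumed, USD
PRICE_PER_GB_MONTH = 0.25     # assumed, USD


def serverless_monthly_cost(rus_consumed: int, storage_gb: float) -> float:
    """Monthly serverless cost: consumed RUs plus consumed storage."""
    ru_cost = (rus_consumed / 1_000_000) * PRICE_PER_MILLION_RUS
    storage_cost = storage_gb * PRICE_PER_GB_MONTH
    return ru_cost + storage_cost


# A dev/test workload consuming 2 million RUs over 5 GB of data:
print(serverless_monthly_cost(2_000_000, 5.0))
```

An idle month with no requests costs only the storage, which is exactly why this mode suits occasional and dev/test workloads.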
You can choose the Serverless capacity mode while creating the Azure Cosmos DB instance in the Azure portal.
You can identify the capacity mode of your Azure Cosmos DB account from the details blade.
As of writing this blog post, Azure Cosmos DB serverless is in preview. To read more about Azure Cosmos DB serverless, please check out this link. Read more about Provisioned Throughput here.
To summarize: for any mission-critical production application with a fixed set of transactions and high-volume constant traffic, go with Provisioned Throughput. Consider Provisioned Throughput with autoscale when traffic is unpredictable; that will optimize costs. Where transaction processing happens only occasionally, you are building a small non-critical application, or you want cost-optimized environments for development and testing, Serverless capacity mode is the right fit.
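The summary above can be condensed into a small decision helper. This is purely an illustrative sketch of the guidance in this post, not an official Azure recommendation engine.

```python
# Illustrative decision helper mirroring the summary in this post.
# The inputs describe the workload's traffic pattern; the mapping is
# this post's guidance, not an official Azure rule.


def recommend_capacity_mode(constant_high_traffic: bool,
                            unpredictable_traffic: bool) -> str:
    """Map a workload's traffic pattern to a Cosmos DB capacity mode."""
    if constant_high_traffic:
        return "Provisioned Throughput"
    if unpredictable_traffic:
        return "Provisioned Throughput with Autoscale"
    return "Serverless"


print(recommend_capacity_mode(True, False))   # mission-critical, steady load
print(recommend_capacity_mode(False, True))   # spiky, unpredictable load
print(recommend_capacity_mode(False, False))  # occasional dev/test workload
```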