Using DynamoDB in Production

A few weeks ago, we wrote about migrating our larger data stores from PostgreSQL to DynamoDB. We were hitting our first major growth spike and needed to scale very quickly; DynamoDB was exactly what we needed.

Overall the migration was very successful and our data pipeline is now much faster and easier to scale.

We ran into a few pitfalls along the way, and we wish we had better understood these limitations earlier in the process. Hopefully this post will help other engineering teams decide if DynamoDB is right for them.



Amazon does provide a very thorough DynamoDB User Guide, but it is pretty dense and difficult to comprehend; most of this information is contained in there and I highly recommend it for further reading.

Provisioned Throughput Errors and Hot Partitions

DynamoDB shards each table, meaning that data for a single table is actually stored across many smaller tables. These smaller tables are called “partitions”.

Each partition can handle a fixed number of reads and writes per second (I’m going to focus on writes specifically, but this applies to reads as well). DynamoDB achieves scalable write throughput by writing to multiple partitions at the same time — more partitions can handle more writes simultaneously.

For example, say you have three partitions that can each handle 1 write/sec. Your provisioned write capacity is then 1+1+1 = 3 writes/sec. If your application is writing uniformly across all three partitions, each partition accepts 1 write/sec and everything runs smoothly.

However, this assumes uniform distribution of throughput over all partitions. If your writes happen to land on the same partition, that partition is unable to handle the load and some writes will fail. This is called a “Hot Partition” or “Hot Shard”, and it occurs when you write some database items more frequently than others, resulting in a non-uniform work load.

These errors manifest as ProvisionedThroughputExceeded errors and can be very difficult and frustrating to debug — especially since increasing capacity will not resolve the errors.
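To make the failure mode concrete, here is a toy model of the partition math above (the numbers are the hypothetical ones from the example, not real DynamoDB internals): a table provisioned for 3 writes/sec split across 3 partitions gives each partition a 1 write/sec share, and any write beyond a partition's share is throttled.

```python
from collections import Counter

PER_PARTITION_CAPACITY = 1  # writes/sec per partition (3 partitions, 3 writes/sec total)

def throttled_writes(partition_of_each_write):
    """Count writes that would fail with ProvisionedThroughputExceeded
    in one second, given the partition each write landed on. A partition
    rejects anything beyond its per-partition share of table capacity."""
    per_partition = Counter(partition_of_each_write)
    return sum(max(0, n - PER_PARTITION_CAPACITY) for n in per_partition.values())

uniform = [0, 1, 2]  # one write to each of the three partitions
hot_key = [1, 1, 1]  # all three writes hash to the same partition

print(throttled_writes(uniform))  # 0 — within every partition's share
print(throttled_writes(hot_key))  # 2 — throttled, despite 3 writes/sec provisioned
```

The second case is the hot-partition trap: the table never exceeds its total provisioned capacity, yet two of three writes still fail, which is why raising the table-level capacity doesn't fix it.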


Jeff Barr has a great blog post on working with and preventing non-uniform workloads. Non-uniform workloads can be very expensive. When we started using DynamoDB we had to plan our access patterns very carefully to ensure uniformity and cost efficiency.

Usage Spikes are Expensive!

DynamoDB charges for the amount of Provisioned Throughput allocated per table. Provisioned throughput caps the rate at which data can be written, and you pay for the full allocation regardless of how much of it you actually consume. You can select the provisioned capacity for any table, but there are a couple of limitations you should be aware of.

The first is that provisioned throughput can only be decreased four times per day. This means constantly tweaking capacity to ensure cost efficiency is not an option. Instead you must pick your capacity decreases very carefully.

The second is that increasing provisioned capacity is not instant. In our experience, it can take up to 10 minutes for changes to take effect. This means that responding to sudden throughput spikes by increasing capacity will often not suffice; your database operations will still fail while additional capacity is being provisioned.

One workaround is to always maintain enough capacity to handle your largest spike — which can be very expensive. Another option is to predict usage spikes and increase capacity before they hit, but that can be tricky (or impossible) and you will likely require a fallback mechanism in case something goes wrong.

The best option we’ve found is to implement a write buffer that can handle the overflow throughput during usage spikes. These can be complicated to implement and, depending on the nature of your traffic spikes, can also be quite expensive.
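A minimal sketch of that write-buffer idea (not our actual implementation — a real one needs persistence and batching): accept writes at any rate, queue the overflow, and drain toward the table at the provisioned rate from a once-per-second worker loop.

```python
from collections import deque

class WriteBuffer:
    """Toy overflow buffer. Callers never see throttling; queued writes
    are drained toward the table at the provisioned rate by calling
    flush() once per second (e.g. from a background worker)."""

    def __init__(self, provisioned_writes_per_sec, sink):
        self.capacity = provisioned_writes_per_sec
        self.sink = sink      # callable that performs the real table write
        self.queue = deque()

    def write(self, item):
        self.queue.append(item)  # absorb the spike instead of failing

    def flush(self):
        """Send up to `capacity` queued writes; returns how many were sent."""
        sent = 0
        while self.queue and sent < self.capacity:
            self.sink(self.queue.popleft())
            sent += 1
        return sent

# A 5-write burst against 2 writes/sec of capacity drains over 3 "seconds":
written = []
buf = WriteBuffer(2, written.append)
for i in range(5):
    buf.write(i)
print(buf.flush(), buf.flush(), buf.flush())  # 2 2 1
```

The cost trade-off mentioned above shows up here as queue depth: a long spike means a large (and possibly durable) backlog to pay for.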

Items have a Maximum Size

At the time we migrated, DynamoDB’s maximum item size was 256 KB. Since then Amazon has raised the limit to 400 KB, but it’s still something you will want to consider carefully.

Amazon suggests persisting larger items in S3 and storing only their S3 keys in DynamoDB. Depending on your usage, this may or may not be viable – S3 writes can be (relatively) slow, and high write throughput might not be possible.

In our case we rarely store items larger than 400 KB, so offloading the occasional oversized item to S3 works well – though it does add a layer of complexity to our data storage.
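The S3-overflow pattern looks roughly like this sketch. The two dicts stand in for the real stores (in production these would be DynamoDB `put_item`/`get_item` and S3 `put_object`/`get_object` calls); the key shape and threshold logic are the point.

```python
import uuid

MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's current per-item limit

dynamo, s3 = {}, {}  # stand-ins for the two real stores

def put(key, payload: bytes):
    if len(payload) > MAX_ITEM_BYTES:
        # Too big for a DynamoDB item: park the payload in S3
        # and store only a pointer to it in the table.
        s3_key = f"overflow/{uuid.uuid4()}"
        s3[s3_key] = payload
        dynamo[key] = {"s3_key": s3_key}
    else:
        dynamo[key] = {"payload": payload}

def get(key) -> bytes:
    item = dynamo[key]
    return s3[item["s3_key"]] if "s3_key" in item else item["payload"]
```

The extra complexity mentioned above lives in `get`: every read now has to check for the pointer and potentially make a second, slower round trip to S3.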

Scans and Queries are Expensive!

Scanning and querying DynamoDB tables consumes your provisioned read capacity, meaning costs scale with the number of items scanned – not the number of results returned.

This can get quite expensive if you need to query your data frequently. For those use cases, you might consider duplicating your DynamoDB tables into services like Redshift or Elastic MapReduce, which are better suited to querying large datasets.

At sendwithus, we chose to not allow scans on our DynamoDB tables. Instead we denormalized our data and designed our tables so that everything is accessed through Hash + Range key queries – never scans.

This took some time and a lot of effort to get right. It’s also very limiting whenever we want to manipulate large datasets.
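A small illustration of that key design (the table name and fields are hypothetical, not our actual schema): give each access pattern its own Hash + Range key pair, so every read is a key-based Query whose cost scales with the items matched, never a full-table Scan.

```python
# Hypothetical denormalized layout for an "emails by customer" pattern:
# Hash key = customer_id, Range key = sent_at timestamp.
table = {}

def put_email(customer_id, sent_at, email):
    table[(customer_id, sent_at)] = email

def emails_for_customer(customer_id, since=0):
    """A Query in DynamoDB terms: fixed Hash key plus a range
    condition on the Range key. Only this customer's items are
    touched, unlike a Scan, which reads the whole table."""
    return [email for (h, r), email in sorted(table.items())
            if h == customer_id and r >= since]

put_email("c1", 1, "welcome")
put_email("c1", 2, "receipt")
put_email("c2", 1, "welcome")
print(emails_for_customer("c1", since=2))  # ['receipt']
```

The limitation mentioned above falls out of this design: any question that doesn't fit an existing Hash + Range shape means adding another denormalized table (or falling back to bulk export).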


Consistent Reads are Slow

By default reads are eventually consistent, meaning data read immediately after a write may be stale. DynamoDB also offers a “strongly consistent” read mode, which returns the most up-to-date data, reflecting all previously successful writes. Strongly consistent reads cost twice as much as eventually consistent reads, so you should use them sparingly.

We’ve also found that strongly consistent reads can be up to 3x slower. For us this was a serious performance concern and we ended up implementing a high-performance write cache to provide strong data consistency on top of DynamoDB.
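The shape of that write cache is roughly the following sketch (a simplification of the pattern, not our production system, which also has to handle eviction and multiple writers): remember your own recent writes locally so read-after-write is served from the cache, and let everything else use a cheap eventually consistent read.

```python
class WriteThroughCache:
    """Serve read-after-write from a local cache of our own recent
    writes; fall through to an eventually consistent DynamoDB read
    for everything else. `backing_get`/`backing_put` stand in for
    the real table calls."""

    def __init__(self, backing_get, backing_put):
        self.backing_get = backing_get
        self.backing_put = backing_put
        self.recent = {}

    def write(self, key, value):
        self.backing_put(key, value)  # write through to the table...
        self.recent[key] = value      # ...and remember it locally

    def read(self, key):
        if key in self.recent:        # our own write: consistent, no table read
            return self.recent[key]
        return self.backing_get(key)  # otherwise: eventually consistent read
```

This only guarantees consistency for writes that went through this cache; anything written by another process still sees eventual consistency, which is the hard part a production version has to solve.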


DynamoDB is great, so long as you understand its limitations and can work around them.

In our case, the limitations forced a non-trivial amount of complexity into our apps. This was not ideal, but the complexity was easily outweighed by the scalability of our new data pipeline.


3 responses to “Using DynamoDB in Production”

  1. Chris says:

    Interesting article, thanks for sharing! In the scans and queries are expensive section, how do you access data without querying? Did you mean that you built your structure without ever needing to scan? Is the resulting model more reminiscent of what you had in Postgres than what you expected when first migrating to a NoSQL solution?

    • Brad Van Vugt says:

      Hey Chris, yes sorry, we never need to scan. Everything is accessed by Hash + Range queries, and we found a schema that didn’t require scans to implement our current feature set. I’d like to update this post to reflect that.

      Our NoSQL models are actually pretty different from what we had in Postgres, but much simpler and fewer of them.
