AWS Lambda: Save cost by choosing the bigger size!

Save cost by allocating larger memory - 3 min read

When optimizing AWS Lambda functions, developers often focus on right-sizing—allocating just enough memory to keep costs low. But what if bigger Lambda configurations could reduce your bill and improve performance? Let’s explore why "going big" might be the smarter play.

The Cold Start Problem

Lambda’s cold starts—delays when a function initializes—are a headache. While AWS has improved cold start times, memory allocation plays a key role: larger memory sizes = faster cold starts. Why? Higher memory tiers also boost CPU and network resources, letting Lambda initialize and execute code quicker. For latency-sensitive apps (like APIs), this can mean happier users.

How Lambda Pricing Works

A simplified model for Lambda pricing can be described as:

  • Compute time (per millisecond)

  • Memory allocated (per GB-second)

If you visit the official pricing calculator, there are other factors as well, such as ephemeral storage and CPU architecture (ARM/Graviton is cheaper than x86 Intel CPUs), but those are a discussion for another time. For now, let's focus on memory and compute time.

As explained earlier, increasing the memory gives us more compute power and hence reduces the compute time. That means allocating the smallest amount of memory possible can stretch the compute time enough that it costs more, both in money and in performance.
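To make this concrete, here is a minimal sketch of the GB-second math. The rate and the timings are illustrative only (check the official AWS pricing page for current rates), but they show how a larger memory setting with a shorter run can come out cheaper:

```python
# Illustrative Lambda compute-cost comparison.
# The per-GB-second rate below is an assumption for illustration;
# real rates vary by region and architecture.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Compute cost of a single invocation (request fee omitted)."""
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * PRICE_PER_GB_SECOND

# Hypothetical timings: the same function finishes faster with more
# memory because Lambda allocates CPU proportionally to memory.
small = invocation_cost(128, 1000)  # 128 MB, 1000 ms
large = invocation_cost(512, 200)   # 512 MB, 200 ms

print(f"128 MB for 1000 ms: ${small:.10f}")
print(f"512 MB for  200 ms: ${large:.10f}")
```

With these numbers the 512MB configuration is both 5x faster and cheaper per invocation than the 128MB one, since the billed GB-seconds shrink along with the duration.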

A case study by dashbird.io provides an open-source benchmarking model that shows how strategically increasing memory can help find the sweet spot between cost and memory size.

If we start from the smallest memory size, i.e. 128MB, increasing the memory results in a decrease in execution time, and hence cost, until we reach 768MB.

So, while it sounds counterintuitive, increasing the memory size raises the cost per GB-second, but the gain from reduced execution time offsets that cost and delivers better performance.
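The sweet-spot search above can be sketched as a simple sweep. The durations below are hypothetical, not real benchmark data: they assume execution time falls as memory grows and then plateaus once the function is no longer CPU-bound, which is the shape the case study describes:

```python
# Sweet-spot sweep over memory sizes using hypothetical timings.
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative rate

# memory (MB) -> hypothetical average duration (ms)
measured_ms = {
    128: 3500,
    256: 1500,
    512: 700,
    768: 430,
    1024: 430,   # plateau: more memory no longer speeds things up
    1536: 430,
}

def cost(memory_mb: float, duration_ms: float) -> float:
    """Compute cost of one invocation in dollars."""
    return (memory_mb / 1024) * (duration_ms / 1000) * PRICE_PER_GB_SECOND

costs = {m: cost(m, t) for m, t in measured_ms.items()}
for m, c in sorted(costs.items()):
    print(f"{m:>5} MB  {measured_ms[m]:>5} ms  ${c:.10f}")

sweet_spot = min(costs, key=costs.get)
print(f"sweet spot: {sweet_spot} MB")
```

With these assumed timings the cheapest configuration lands at 768MB: below it, long durations dominate the bill; above it, you pay for memory that no longer buys any speed.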

Conclusion

While the sweet spot can vary from workload to workload, skimping on memory in AWS Lambda will generally not give you the lowest possible cost.

In my projects I start with at least 512MB and, for particular workloads, run benchmarks to find the configuration that delivers the performance I need at the least cost.