AWS Lambda jumps to 32 GB and 16 vCPUs

2026-04-06 WEB-HOSTING

AWS has expanded Lambda Managed Instances with a much larger resource envelope: functions can now use up to 32 GB of memory and as many as 16 vCPUs. AWS is also exposing configurable memory-to-vCPU ratios, so teams can tune functions for CPU-heavy or memory-heavy workloads instead of accepting a fixed profile. This matters because it pushes Lambda further up the stack for serious backend and batch jobs: workloads like media processing, large-scale data transformation, simulation, and high-throughput internal APIs can now stay in a serverless model longer before teams need to move to more operationally heavy compute platforms.

Key Updates

Lambda Managed Instances now support up to 32 GB of memory and a choice of 16, 8, or 4 vCPUs at that ceiling, depending on the selected memory-to-vCPU ratio. AWS is explicitly positioning the feature for compute-intensive workloads that previously ran into the old 10 GB and roughly 6 vCPU ceiling. The launch also keeps the existing serverless operational model, so teams get more raw compute without taking on instance management.
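The announced ceilings imply three memory-to-vCPU ratios at 32 GB: roughly 2, 4, or 8 GB per vCPU. The sketch below is a local illustration only (the helper name and ratio values are inferred from the announcement, not part of any AWS API), showing how a chosen ratio maps a memory setting to a vCPU count:

```python
# Hypothetical helper: maps a memory setting and a memory-to-vCPU ratio
# to the vCPU count implied by the announcement. Not an AWS API call.

MAX_MEMORY_GB = 32  # new Lambda Managed Instances memory ceiling


def vcpus_for(memory_gb: float, gb_per_vcpu: int) -> int:
    """Return the vCPU count for a memory size at a given ratio."""
    if not 0 < memory_gb <= MAX_MEMORY_GB:
        raise ValueError(f"memory must be in (0, {MAX_MEMORY_GB}] GB")
    if gb_per_vcpu not in (2, 4, 8):
        # Ratios implied by the 16 / 8 / 4 vCPU choices at 32 GB.
        raise ValueError("unsupported memory-to-vCPU ratio")
    return max(1, int(memory_gb // gb_per_vcpu))


# At the 32 GB ceiling the three ratios yield 16, 8, and 4 vCPUs:
print([vcpus_for(32, r) for r in (2, 4, 8)])  # → [16, 8, 4]
```

The same memory size at a tighter ratio buys more parallelism, which is the knob to turn for CPU-bound transforms.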

What Developers Need to Know

This gives teams more room to keep bursty or event-driven heavy jobs inside Lambda instead of splitting systems across Lambda for orchestration and EC2 or containers for execution. For platform engineers, the real benefit is architectural simplification: fewer moving parts for pipelines that need high parallelism or fast autoscaling. It also opens the door to revisiting workloads that were previously dismissed as too large for Lambda.

How to Use It and Next Steps

Teams should profile workloads that were previously just outside Lambda’s practical limits, especially data pipelines, CPU-heavy transforms, and latency-sensitive backends. The best next step is to benchmark a representative function at the new ratios and compare runtime, concurrency behavior, and total cost against the current EC2 or container implementation. If the numbers work, this release could simplify architecture while preserving the operational advantages of serverless.
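One cheap way to start that benchmark before touching Lambda is to time a representative CPU-bound transform at the worker counts the new ratios would give you. A minimal local sketch, where transform is a stand-in for your real workload:

```python
# Local scaling check: time a CPU-bound stand-in workload at different
# process counts to approximate how it would scale across vCPUs.
import time
from concurrent.futures import ProcessPoolExecutor


def transform(n: int) -> int:
    """Stand-in for a CPU-heavy transform (replace with your own)."""
    return sum(i * i for i in range(n))


def benchmark(workers: int, tasks: int = 8, n: int = 500_000) -> float:
    """Wall-clock seconds to run `tasks` transforms across `workers` processes."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(transform, [n] * tasks))
    return time.perf_counter() - start


if __name__ == "__main__":
    # Compare candidate vCPU counts, e.g. those implied by the new ratios.
    for w in (1, 2, 4):
        print(f"{w} workers: {benchmark(w):.2f}s")
```

If runtime stops improving well before 16 workers, the workload won't exploit the top ratio, and a cheaper memory-heavy configuration may be the better fit.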