Key Updates
Lambda Managed Instances now support up to 32 GB of memory, with a choice of 16, 8, or 4 vCPUs at that ceiling depending on the selected memory-to-vCPU ratio. AWS is explicitly positioning the feature for compute-intensive workloads that previously ran into Lambda's 10 GB memory and roughly 6 vCPU limit. The launch keeps the existing serverless operational model, so teams get more raw compute without taking on instance management.
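Assuming the new ceiling is exposed through Lambda's existing function-configuration API (an assumption; the launch may instead use a dedicated managed-instance setting), raising a function to the new maximum might look like the following sketch. The function name is hypothetical.

```shell
# Raise the function's memory allocation to the assumed new 32 GB
# ceiling (32768 MB). --memory-size is Lambda's existing parameter;
# whether Managed Instances reuse it is an assumption here, and the
# vCPU count would follow from the chosen memory-to-vCPU ratio.
aws lambda update-function-configuration \
  --function-name my-heavy-transform \
  --memory-size 32768
```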
What Developers Need to Know
The higher ceiling gives teams more room to keep bursty or event-driven heavy jobs inside Lambda, instead of splitting systems between Lambda for orchestration and EC2 or containers for execution. For platform engineers, the real benefit is architectural simplification: fewer moving parts for pipelines that need high parallelism or fast autoscaling. It also opens the door to revisiting workloads that were previously dismissed as too large for Lambda.
How to Use It and Next Steps
Teams should profile workloads that were previously just outside Lambda’s practical limits, especially data pipelines, CPU-heavy transforms, and latency-sensitive backends. The best next step is to benchmark a representative function at the new ratios and compare runtime, concurrency behavior, and total cost against the current EC2 or container implementation. If the numbers work, this release could simplify architecture while preserving the operational advantages of serverless.
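For the cost side of that benchmark, a rough back-of-envelope comparison is easy to script. The sketch below assumes Lambda's duration-based GB-second pricing model; the rates and workload numbers are illustrative placeholders, not quoted AWS prices.

```python
# Back-of-envelope cost comparison for moving a job onto larger Lambda
# functions vs. keeping it on an always-on instance. All rates are
# illustrative assumptions; substitute your region's actual pricing.

def lambda_cost(invocations: int, avg_duration_s: float, memory_gb: float,
                price_per_gb_s: float = 0.0000167) -> float:
    """Duration cost only; per-request fees ignored for simplicity."""
    return invocations * avg_duration_s * memory_gb * price_per_gb_s

def ec2_cost(hours: float, hourly_rate: float) -> float:
    """Cost of an always-on instance at a flat hourly rate."""
    return hours * hourly_rate

# Example: 100k invocations/day, 20 s each, at the new 32 GB size,
# vs. an always-on instance at a hypothetical $0.50/hour.
daily_lambda = lambda_cost(100_000, 20.0, 32.0)
daily_ec2 = ec2_cost(24, 0.50)
print(f"Lambda: ${daily_lambda:,.2f}/day  EC2: ${daily_ec2:,.2f}/day")
```

Even a toy model like this makes the tradeoff concrete: sustained high-utilization workloads tend to favor instances, while bursty or idle-heavy workloads favor per-invocation pricing, which is exactly the comparison the benchmark should settle.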