
CoreWeave Posts Strong MLPerf Results

2026-04-06 COREWEAVE

CoreWeave published new MLPerf Inference v6.0 results showing strong performance on its AI cloud. The company highlights both hardware gains and software-stack improvements, including better throughput on Blackwell Ultra and lower latency from routing optimizations. The post reinforces CoreWeave's position as a cloud built specifically for production-scale AI workloads rather than generic hosting.

Key Features or Updates

CoreWeave reports up to 29% higher throughput on its Blackwell Ultra GPU system versus prior hardware in GPT-OSS-120B testing. It also says a CUDA upgrade produced a 9% throughput gain on identical hardware, and a separate routing optimization cut P99 time-to-first-token latency by 31% without retraining the model. Taken together, the results show progress at the hardware, software-stack, and serving layers.
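
For context, P99 time-to-first-token is a tail-latency metric: it is computed over per-request measurements, not averages, which is why routing changes can move it without touching the model. A minimal sketch of the calculation (the sample values below are hypothetical, not CoreWeave's data):

import numpy as np

# Hypothetical per-request time-to-first-token samples, in seconds,
# as collected from a streaming inference endpoint.
ttft_samples = np.array([0.21, 0.19, 0.25, 0.22, 0.95, 0.20, 0.23, 0.18, 0.31, 0.24])

p50 = np.percentile(ttft_samples, 50)
p99 = np.percentile(ttft_samples, 99)

print(f"P50 TTFT: {p50:.3f}s")
print(f"P99 TTFT: {p99:.3f}s")  # the tail a routing optimization would target

Note how a single slow request dominates P99 while barely moving the median; that is what makes tail latency the harder number to improve.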

Impact on Cloud Costs & Architecture

Performance gains like these can improve inference economics by raising throughput per dollar of GPU spend. For teams running model-serving workloads, that can translate into lower cost per request and more stable latency under load. Architecturally, the results underscore that inference optimization is no longer just about buying faster chips: the software stack matters just as much.
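
To make the economics concrete, here is a back-of-the-envelope sketch of how a throughput gain flows through to cost per token. The hourly rate and baseline throughput are hypothetical placeholders, not CoreWeave pricing:

# Back-of-the-envelope inference economics (all figures hypothetical).
gpu_hourly_cost = 10.00          # $/hour for a GPU instance (placeholder)
baseline_throughput = 5_000      # tokens/second served at the baseline

def cost_per_million_tokens(hourly_cost, tokens_per_second):
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost / tokens_per_hour * 1_000_000

baseline = cost_per_million_tokens(gpu_hourly_cost, baseline_throughput)
improved = cost_per_million_tokens(gpu_hourly_cost, baseline_throughput * 1.29)  # +29%

print(f"baseline:        ${baseline:.3f} per 1M tokens")
print(f"+29% throughput: ${improved:.3f} per 1M tokens")  # same spend, ~22% cheaper per token

Because the hourly spend is fixed, a 29% throughput gain cuts cost per token by roughly 1 − 1/1.29 ≈ 22%, without any hardware change.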

Next Steps

If you operate production inference, benchmark your current provider against the kinds of workloads CoreWeave is optimizing. The practical question is whether the performance gains survive your own traffic mix and model sizes. For teams choosing an AI cloud, this is a reminder to compare real latency and throughput, not just headline GPU specs.
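
A minimal starting point for that comparison is to time the first streamed token from your own endpoint under your own prompts. The sketch below assumes an OpenAI-compatible streaming API; the URL, model name, and API key are placeholders to substitute with your provider's values:

import time
import requests  # pip install requests

# Hypothetical endpoint and credentials; replace with your provider's.
URL = "https://api.example.com/v1/chat/completions"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def measure_ttft(prompt: str) -> float:
    """Return seconds from request start to the first streamed chunk."""
    payload = {
        "model": "your-model",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # streaming is required to observe time-to-first-token
    }
    start = time.monotonic()
    with requests.post(URL, headers=HEADERS, json=payload, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line marks the first token's arrival
                return time.monotonic() - start
    raise RuntimeError("stream ended before any data arrived")

# Run against prompts that resemble production traffic, not a single toy prompt.
samples = sorted(measure_ttft("Summarize our Q3 incident report.") for _ in range(20))
print(f"median TTFT: {samples[len(samples) // 2]:.3f}s, worst: {samples[-1]:.3f}s")

Twenty runs is only enough for a rough read; a real evaluation would replay a representative prompt mix at production concurrency before drawing cost or latency conclusions.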
