Cerebras CS-3

Wafer-Scale AI Acceleration System · DevOps & Hardware

Basic Information

  • Company/Brand: Cerebras Systems
  • Country/Region: USA (Sunnyvale, California)
  • Official Website: https://www.cerebras.ai/
  • Type: Wafer-Scale AI Acceleration System
  • Founded: 2016

Product Description

The Cerebras CS-3 is an AI acceleration system powered by the third-generation Wafer Scale Engine (WSE-3). The WSE-3 is the world's largest AI chip, measuring 46,225 mm² (an entire silicon wafer), with 4 trillion transistors and 900,000 AI-optimized cores, manufactured on TSMC's 5nm process. A single system delivers 125 PFLOPS of peak AI performance, and Cerebras states that a full-scale CS-3 cluster can train Llama2-70B in a single day. CS-3 systems scale to models of up to 24 trillion parameters.
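
As a rough sanity check on the one-day training claim, the widely used 6·N·D approximation for dense-model training FLOPs can be applied. The token count (~2 trillion for Llama2), the 50% utilization figure, and the 2048-system cluster size are illustrative assumptions, not Cerebras numbers:

```python
# Rough sanity check of the "Llama2-70B in a day" claim using the common
# 6*N*D training-FLOPs approximation. Token count, utilization, and
# cluster size below are assumptions for illustration, not Cerebras figures.

def training_days(params, tokens, flops_per_sec, utilization=0.5):
    """Days to train a dense model of `params` parameters on `tokens` tokens."""
    total_flops = 6 * params * tokens          # standard 6*N*D estimate
    effective = flops_per_sec * utilization    # sustained vs. peak throughput
    return total_flops / effective / 86_400    # seconds in a day

CS3_PEAK = 125e15  # 125 PFLOPS per CS-3 (from the spec above)

one_system = training_days(70e9, 2e12, CS3_PEAK)         # ~155 days
cluster = training_days(70e9, 2e12, CS3_PEAK * 2048)     # well under a day
```

The arithmetic suggests a single CS-3 would need months, which is why the day-scale claim applies at cluster scale.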

Core Features/Characteristics

  • WSE-3 Chip: 46,225 mm² (full wafer size)
  • 4 trillion transistors
  • 900,000 AI-optimized cores
  • 44GB on-chip SRAM
  • 125 PFLOPS peak AI performance
  • TSMC 5nm process
  • Single logical device scalable to 24 trillion parameter models
  • 2x performance of WSE-2 (same power consumption and price)
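
A quick back-of-the-envelope calculation shows why, despite the large 44 GB on-chip SRAM, the weights of frontier-scale models must live off-wafer (Cerebras pairs the CS-3 with external MemoryX memory for this). The FP16 assumption is illustrative:

```python
# Sketch: model weight footprints vs. the 44 GB of on-wafer SRAM,
# assuming 2 bytes per parameter (FP16/BF16).

GB = 1e9

def weights_gb(params, bytes_per_param=2):
    """Memory footprint of a dense model's weights, in gigabytes."""
    return params * bytes_per_param / GB

SRAM_GB = 44                       # on-chip SRAM (from the spec above)
llama70b = weights_gb(70e9)        # 140.0 GB -> already exceeds on-chip SRAM
max_model = weights_gb(24e12)      # 48,000 GB for a 24T-parameter model
```

Even a 70B-parameter model exceeds on-wafer SRAM roughly threefold, so parameters are streamed onto the wafer from external memory rather than held resident.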

Price

  • Estimated US$2–3 million per system (official pricing not disclosed)

Market Dynamics

  • September 2025: Completed $1.1 billion Series G funding, valued at $8.1 billion
  • January 2026: Signed a compute supply agreement with OpenAI worth over $10 billion (750 MW of compute capacity, through 2028)

Target Users

  • Research institutions training large-scale AI models
  • AI labs requiring rapid iteration of large models
  • Hyperscale data center deployments
  • National-level AI infrastructure projects

Competitive Advantages

  • World's largest AI chip, eliminating inter-chip communication bottlenecks
  • 44GB on-chip SRAM provides extremely high memory bandwidth
  • Linear scalability: Multiple CS-3 systems work as a single device
  • Trains 70B-parameter models in a single day at cluster scale
  • Simple programming model, no complex multi-GPU parallelism strategies required

Relationship with OpenClaw Ecosystem

The Cerebras CS-3 represents the extreme high-end of AI computing. While individual users may not directly use CS-3, Cerebras' cloud services can provide ultra-fast model training and inference for OpenClaw. For large OpenClaw deployments requiring custom model training, CS-3 can significantly reduce training cycles. Cerebras' collaboration with OpenAI may also indirectly benefit OpenClaw users leveraging OpenAI models.
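
For the inference path mentioned above, Cerebras' cloud service exposes an OpenAI-compatible chat-completions API, so an OpenClaw deployment could target it with a standard request body. The endpoint URL and model name below are assumptions for illustration only:

```python
# Hedged sketch: building an OpenAI-style chat-completions request for
# Cerebras' cloud inference service. CEREBRAS_BASE_URL and the model
# name are assumed values, not verified ones.
import json

CEREBRAS_BASE_URL = "https://api.cerebras.ai/v1"  # assumed endpoint

def chat_payload(model, user_message, max_tokens=256):
    """Build an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }

# POST this body to f"{CEREBRAS_BASE_URL}/chat/completions"
# with an `Authorization: Bearer <api-key>` header.
body = json.dumps(chat_payload("llama3.1-70b", "Hello from OpenClaw"))
```

Because the request shape matches the OpenAI API, existing OpenAI-client integrations can typically be pointed at such a service by swapping the base URL.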
