Summary:
- Cerebras Systems withdrew its IPO filing just days after securing a $1.1 billion Series G funding round
- CEO Andrew Feldman clarified the withdrawal was due to significant business improvements not reflected in the original filing
- The company's inference service, launched a month before the IPO announcement, has become a much larger business than anticipated
- Cerebras' dinner-plate-sized AI accelerators, with more than 40 GB of on-die SRAM, outpace GPU rivals Nvidia and AMD, reaching speeds of 3,000 tokens per second
- Major customer wins include AWS, Meta, IBM, and Mistral AI, reducing dependence on G42, the single customer that previously accounted for 87% of revenue
- The company is expanding to four new US datacenters plus international facilities with the fresh $1.1 billion in funding
The Sudden IPO Withdrawal
Just days after announcing a $1.1 billion Series G funding round, AI chip startup Cerebras Systems pulled its S-1 IPO filing without explanation, sparking widespread speculation in the tech community.
CEO Breaks the Silence
Amid growing concerns of an AI bubble, Cerebras CEO Andrew Feldman took to LinkedIn to address the situation directly. "On Friday, Cerebras Systems withdrew our S-1. We didn't explain why - that was a mistake," he wrote, while emphasizing that he still has every intention of taking the company public.
The Inference Revolution
Founded in 2015, Cerebras develops dinner-plate-sized AI accelerators that were initially pitched as an alternative to GPUs for AI model training. While training remains part of its business, the company's focus has shifted dramatically to inference services - a capability it launched barely a month before announcing its intention to go public in September last year.
Technical Superiority Driving Growth
The real game-changer has been Cerebras' unique architecture. Each accelerator is equipped with more than 40 GB of SRAM, a class of memory etched directly into the compute die that makes the HBM used by rivals Nvidia and AMD look glacial by comparison. This advantage has enabled the company to offer API access at speeds topping 3,000 tokens per second for models like OpenAI's gpt-oss-120B - far faster than what's possible with traditional GPUs.
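Cerebras exposes this service through a hosted API that follows the familiar OpenAI-style chat-completions interface, so trying it is largely a matter of pointing a standard client at a different endpoint. The minimal sketch below streams tokens from such an endpoint; the base URL, the CEREBRAS_API_KEY environment variable, and the exact model identifier are assumptions for illustration rather than details confirmed in this article - check Cerebras' own documentation for the real values.

```python
# Minimal sketch: streaming a chat completion from an OpenAI-compatible endpoint.
# The base URL, env var name, and model ID below are assumptions for illustration.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # assumed env var
)

# Stream the response so the per-token speed is visible as text arrives.
stream = client.chat.completions.create(
    model="gpt-oss-120b",  # model named in the article; exact ID may differ
    messages=[{"role": "user", "content": "Summarize wafer-scale inference in one sentence."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```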
Major Customer Wins
These capabilities have helped Cerebras secure an impressive roster of high-profile customers, including:
- Mistral AI
- AWS
- Meta
- IBM
- Cognition
- AlphaSense
- Notion
- Perplexity
This represents a dramatic shift from just a year ago, when the company's IPO filings revealed that the United Arab Emirates' G42 accounted for 87% of revenues in the first half of 2024 - a concentration that raised significant concerns among US government officials.
Rapid Infrastructure Expansion
To support this growing customer base, Cerebras announced this spring that it was expanding operations to four new datacenters across the US, plus facilities in Montreal and France. With another $1.1 billion in fresh funding, the company now plans to further expand its US manufacturing and datacenter capacity.
The Real Reason Behind the Withdrawal
As Feldman explained, the rapid business transformation simply wasn't reflected in the original S-1 filings. "Given that the business has improved in meaningful ways we decided to withdraw so that we can re-file with updated financials, strategy information including our approach to the rapidly changing AI landscape," he wrote, clarifying that the move doesn't reflect any shift in strategy.
The company remains committed to going public, with Feldman confirming: "We will re-file our S-1 when the updated materials are ready."