Today, Cerebras became a public company. This moment represents far more than a company milestone.
This IPO is a defiant confirmation that a once-fringe idea is now a widely held conviction: Cerebras has broken the core bottleneck to the AI revolution and become the defining force in the next great wave of economic and societal progress.
For years, the story of AI has been about its limitless potential: Accelerating medical discoveries, reinventing industries, and amplifying human capability.
The vision was never the problem; the mechanics of executing it were.
As models and ambitions grew, iteration cycles lagged, costs exploded, and entire clusters were needed to train a single model. We’d built the most powerful catalyst for advancement in human history but were trying to harness it with infrastructure never designed for it.
Andrew Feldman, Sean Lie, and the early Cerebras team set out to break this barrier with wafer-scale integration — an engineering challenge the industry had spent 75 years saying could not be done. It was 2016, before the true AI boom, before the rise of LLMs, before GPU shortages made the news. But AI compute demand was already doubling every 3.5 months while Dennard scaling was running out.

It was also the year we founded Eclipse. We didn't know exactly how much more power AI would require, but we knew it couldn’t be met with conventional methods. When we co-led Cerebras’ Series A in 2016, we didn’t see another chip company. We saw a conviction that computing itself had to be rebuilt around intelligence for AI to become the transformational force we had been telling ourselves it would be.
A decade ago, backing Cerebras was an audacious bet against the consensus — a wager that the industry's defining assumption about scale was the very thing holding it back.
The Foundational Innovation
Andrew frames Cerebras as the infrastructure layer for the reasoning era. The internet required fiber; mobile required towers; AI reasoning requires inference delivered at the speed and scale to transform entire economies.
Cerebras isn't adjacent to Eclipse’s mission to reinvent physical industries. It's the foundational innovation that makes every other advancement possible.
When they unveiled the first Wafer-Scale Engine in 2019, it was a statement. While the rest of the compute industry scaled out (more GPUs, more clusters, more networking overhead) Cerebras scaled up.

Shortly after the unveiling, Eclipse Partner Emeritus Pierre Lamond (a Fairchild Semiconductor veteran and Eclipse co-founding partner) traced the lineage from Fairchild to Gene Amdahl to Feldman. He published a blog that closed with a six-word thesis: "Cerebras will define the AI generation."
That maps to Eclipse’s broader thesis: The entire techno-industrial stack — manufacturing, logistics, supply chain, and energy generation, storage, and recycling — must be rebuilt, with compute and power existing as equally critical problems to solve. Cerebras is the rare company that touches every layer of that thesis, and has become the engine behind the next generation of intelligent, resilient industries.
“Cerebras will define the AI generation.”
— Pierre Lamond, pioneer of the semiconductor industry and Eclipse co-founding Partner
Progress at Scale
The WSE wasn’t about peak performance in isolation. It fundamentally changed the rate at which systems could learn. This is the difference between making progress and making progress fast enough to matter.
This is why Cerebras has been adopted by leading institutions across healthcare, pharmaceuticals, and scientific research. Customers have trained models in days that previously took weeks on GPU clusters, achieved hundreds of times faster performance on drug-discovery and genomics workloads, and pushed large-scale scientific simulations to orders of magnitude higher throughput than GPU benchmarks.
This impact scales. Cerebras partnered with G42 to bring high-performance AI infrastructure to the Middle East, defining the category of sovereign AI. OpenAI chose the company to bring high-speed inference to hundreds of millions of users. AWS is deploying Cerebras systems within its own data centers.
These are the AI-powered futures we have been describing for years.


Built for What’s Next
The constraint on progress is not a lack of intelligence. It is the gap between what we can imagine and what we can compute. Cerebras is one of the first serious attempts to close that gap.
What started as a radical bet is no longer a bet. It’s a lead — one that deepens with use, with scale, with time.
This moment won’t be remembered as an IPO. It will be remembered as the point where the underlying system changed, and everything built on top of it began to accelerate.

We've been with the team since the earliest prototypes: Through test wafers, the hunt for manufacturing partners, standing up the first assembly line, shipping the software, building data centers, and negotiating deals with their landmark customers. We’ve been there as they’ve moved from experiment to the default infrastructure for intelligence, and we’ll be here for everything that comes next.
To Andrew, to the Cerebras team, to every engineer who chose the harder problem: Congratulations. By building the impossible, you have ushered in a new era of possibility.
This is just the beginning.
Follow Eclipse on LinkedIn or sign up for Eclipse’s Newsletter for the latest on rebuilding essential industries.