Sequoia Capital-backed AI startup Cerebras Systems has introduced WSE-3, an upgraded version of its large-scale chips that delivers double the performance of its predecessor at the same price. The chip packs four trillion transistors delivering 125 petaflops of computing power and is manufactured on Taiwan-based TSMC's 5nm process.
Despite the performance gains, power consumption is unchanged from the previous generation, addressing a crucial concern in AI processing. Cerebras emphasizes the efficiency of systems built around its chips for AI applications, particularly in training.
Notably, Cerebras plans to collaborate with Qualcomm, pairing its WSE-3 systems with Qualcomm's AI 100 Ultra chips to strengthen inference capabilities. The move aligns with the company's strategy of offering comprehensive AI solutions that go beyond chip sales alone.
The dinner-plate-sized chips aim to rival Nvidia's hardware, which dominates AI development and powers applications such as ChatGPT. By providing enormous compute on a single wafer, they eliminate the need to assemble and network large numbers of smaller chips.
“When we started on this journey eight years ago, everyone said wafer-scale processors were a pipe dream. We could not be more proud to be introducing the third-generation of our groundbreaking wafer scale AI chip,” said Andrew Feldman, CEO and co-founder of Cerebras. “WSE-3 is the fastest AI chip in the world, purpose-built for the latest cutting-edge AI work, from a mixture of experts to 24 trillion parameter models. We are thrilled to bring WSE-3 and CS-3 to market to help solve today’s biggest AI challenges.”