
AI Hardware Startup Unveils the World's Largest AI Accelerator

Image courtesy of Shutterstock

With the ever-accelerating development of artificial intelligence, often called the AI boom, comes the challenge of building hardware accelerators suited to AI workloads.

Many chipmakers and startups are working on a new generation of hardware architectures optimized for AI applications, such as machine learning, deep neural networks, and data science.

An AI accelerator is a chip architecture optimized for deep neural networks and highly specific tasks like computer vision, robotics, and the Internet of Things (IoT).

Google, Intel, Microsoft, Huawei, Amazon, Apple, IBM, and many other companies and startups have their own AI accelerator projects. Yet there is no universal AI accelerator that plays the role Intel's x86 CPU does for desktop computers.

But now, a new player enters the scene with the largest AI accelerator chip ever made.

This AI Accelerator Dwarfs Any Other Chip on the Market

The burgeoning AI chip market is about to be disrupted in a literally big way!

Cerebras Systems, an AI hardware startup based in Los Altos, California, has unveiled a monster AI accelerator roughly the size of a computer keyboard.

Called the Wafer Scale Engine (WSE), Cerebras' AI chip measures 8.5 by 8.5 inches (46,225 mm2), making it over 56x larger than the biggest GPU on the market.

For perspective, an average PC has about 2 billion transistors and 4 to 6 processor cores, and high-end CPUs may reach about 30 cores. The biggest conventional GPUs ever built pack about 21 billion transistors and, in the most powerful models, up to 5,000 cores. Those figures give you an idea of the scale of Cerebras' AI accelerator.

The WSE chip has 1.2 trillion transistors, 18 gigabytes of on-chip memory, and 400,000 computing cores, delivering 3,000x more on-chip memory and 33,000x more bandwidth than the leading GPU.
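The size ratios above can be checked with some back-of-envelope arithmetic. The WSE figures come from the article; the GPU figures used here (an 815 mm2 die, 21.1 billion transistors, 5,120 cores, roughly NVIDIA's V100) are assumptions for illustration, since the article only says "about 21 billion transistors and up to 5,000 cores":

```python
# WSE specs as reported in the article.
wse = {"area_mm2": 46_225, "transistors": 1.2e12, "cores": 400_000}

# Assumed specs for the largest contemporary GPU die (hypothetical
# reference values, roughly NVIDIA V100-class).
gpu = {"area_mm2": 815, "transistors": 21.1e9, "cores": 5_120}

area_ratio = wse["area_mm2"] / gpu["area_mm2"]
transistor_ratio = wse["transistors"] / gpu["transistors"]
core_ratio = wse["cores"] / gpu["cores"]

print(f"Area:        {area_ratio:.1f}x larger")   # ~56.7x, i.e. "over 56x"
print(f"Transistors: {transistor_ratio:.1f}x more")
print(f"Cores:       {core_ratio:.1f}x more")
```

Under these assumptions the area ratio comes out to roughly 56.7x, consistent with the "over 56x larger" claim.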

Cerebras presented its giant AI chip at Hot Chips 31: A Symposium on High-Performance Chips, one of the semiconductor industry's top tech conferences, hosted at Stanford University.

“With vastly more silicon area than the largest graphics processing unit, the WSE provides more compute cores, tightly coupled memory for efficient data access, and an extensive high bandwidth communication fabric for groups of cores to work together,” says Cerebras.

Cerebras manufactured the AI accelerator chip using a relatively mature 16nm process technology but hasn't revealed how much it will cost.

You won't see such a chip, as big as an iPad, in a typical PC. It's meant for complex AI systems and applications that involve processing huge amounts of data.

Cerebras has already shipped the WSE chip to a small number of customers, giving them ample time to evaluate it.

One issue prospective WSE adopters should consider is cooling and overall infrastructure demands: an advantage of small chips is that they're easier to cool and consume less power.

Read More: Intel’s Next-Generation AI Chip Processes Data 1000 Times Faster



Zayan Guedim

Trilingual poet, investigative journalist, and novelist. Zed loves tackling the big existential questions and all-things quantum.
