Cerebras’ third-generation wafer-scale engine (WSE-3) is the fastest AI processor on Earth. It surpasses all other processors in AI-optimized cores, memory speed, and on-chip fabric bandwidth.
This chip is a BEAST!!
"It's 57 times larger," [Cerebras CEO Andrew] Feldman said, comparing the WSE-3 against Nvidia's H100. "It's got 52 times more cores. It's got 800 times more memory on chip. It's got 7,000 times more memory bandwidth and more than 3,700 times more fabric bandwidth. These are the underpinnings of performance."
Feldman said its CS-3 computer with WSE-3 can handle a theoretical large language model of 24 trillion parameters, which would be an order of magnitude more than top-of-the-line generative AI tools such as OpenAI's GPT-4, which is rumored to have 1 trillion parameters. "The entire 24 trillion parameters can be run on a single machine," Feldman said.
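To put those numbers in perspective, here is a rough back-of-the-envelope sketch. The 24-trillion and ~1-trillion figures come from the quotes above; the 2-bytes-per-parameter (fp16) storage assumption is ours, not Cerebras', and serves only to illustrate the scale of a model that size.

```python
# Back-of-the-envelope check of the parameter claims above.
# Assumption (not from the article): fp16 weights at 2 bytes per parameter.
WSE3_MAX_PARAMS = 24e12      # 24 trillion, per Feldman
GPT4_RUMORED_PARAMS = 1e12   # ~1 trillion, rumored for GPT-4

ratio = WSE3_MAX_PARAMS / GPT4_RUMORED_PARAMS
weights_tb = WSE3_MAX_PARAMS * 2 / 1e12  # fp16 weight storage, in terabytes

print(f"{ratio:.0f}x the rumored GPT-4 parameter count")  # 24x
print(f"~{weights_tb:.0f} TB just to store fp16 weights")  # ~48 TB
```

So "an order of magnitude more" is accurate in round numbers (24x), and even before activations or optimizer state, the weights alone of a 24-trillion-parameter model would occupy tens of terabytes.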