Merry Christmas, Jonathan Ross (Groq's Founder)! $20B will buy a lot of holiday cheer!
Today, Groq announced that it has entered into a non-exclusive licensing agreement with Nvidia for Groq's inference technology. The agreement reflects a shared focus on expanding access to high-performance, low-cost inference.
As part of this agreement, Jonathan Ross, Groq's Founder, Sunny Madra, Groq's President, and other members of the Groq team will join Nvidia to help advance and scale the licensed technology.
Groq will continue to operate as an independent company, with Simon Edwards stepping into the role of Chief Executive Officer.
GroqCloud will continue to operate without interruption.
An LPU Inference Engine, where LPU stands for Language Processing Unit™, is a new type of end-to-end processing unit system built for exceptionally fast LLM inference, delivering on the order of 500 tokens/second.
This alpha demo lets you experience that ultra-low-latency performance with the foundation LLM Llama 2 70B (created by Meta AI) running on the Groq LPU™ Inference Engine.
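To put the tokens-per-second figure in context, here is a minimal sketch of how one might time generation against GroqCloud using the official `groq` Python SDK. The model ID, the `GROQ_API_KEY` environment variable, and the prompt are assumptions for illustration, and wall-clock timing like this includes network and prompt-processing overhead, so it will understate pure generation speed.

```python
# Minimal sketch: measure rough tokens/second from a GroqCloud chat completion.
# Assumes the `groq` SDK is installed (pip install groq) and GROQ_API_KEY is set;
# the model ID below is an assumed identifier for Llama 2 70B on GroqCloud.
import os
import time

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

start = time.perf_counter()
resp = client.chat.completions.create(
    model="llama2-70b-4096",  # assumed model ID, not confirmed by the announcement
    messages=[{"role": "user", "content": "Explain what an LPU is in one paragraph."}],
)
elapsed = time.perf_counter() - start

# The usage block reports how many tokens the model generated.
completion_tokens = resp.usage.completion_tokens
print(f"{completion_tokens} tokens in {elapsed:.2f}s "
      f"(~{completion_tokens / elapsed:.0f} tokens/s, end to end)")
```

Because the measurement spans the whole request, the printed rate is a lower bound on the engine's generation throughput rather than a benchmark of the LPU itself.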