Groq Chat

An LPU inference engine

5.0 (41 reviews) · 543 followers

A new type of end-to-end processing unit system that provides the fastest inference for compute-intensive applications with a sequential component, such as AI language applications (LLMs)
This is the 2nd launch from Groq Chat.
Groq®

Hyperfast LLM inference running on custom-built LPUs
An LPU Inference Engine (LPU stands for Language Processing Unit™) is a new type of end-to-end processing unit system that provides the fastest inference, at roughly 500 tokens/second.
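
To put the ~500 tokens/second figure in context, here is a minimal sketch that times a single request against Groq's OpenAI-compatible chat completions endpoint and derives a rough end-to-end tokens-per-second estimate. It is not taken from this launch page: the endpoint URL, the model name, and the GROQ_API_KEY environment variable are assumptions based on Groq's public API documentation.

# Minimal sketch: rough tokens/second estimate via Groq's
# OpenAI-compatible API. Endpoint, model name, and GROQ_API_KEY
# are assumptions, not details from this launch page.
import os
import time

import requests

API_URL = "https://api.groq.com/openai/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["GROQ_API_KEY"]  # assumed env var holding your key

payload = {
    "model": "mixtral-8x7b-32768",  # example model; check Groq's current list
    "messages": [
        {"role": "user", "content": "Explain LPUs in one paragraph."},
    ],
}

start = time.perf_counter()
resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
elapsed = time.perf_counter() - start

# OpenAI-compatible responses report token counts under "usage".
tokens = resp.json()["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.2f}s ≈ {tokens / elapsed:.0f} tokens/s")

Note that this measures end-to-end wall-clock time, including network latency, so the figure it prints is a lower bound on the engine's raw generation speed.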
Free

Avi Basnet
This seems extremely interesting. I'm curious what you've seen to be the biggest use case for this LLM?
Johan Steneros
It is fast, that is for sure. Where can I get more information about the chips and hardware? Is there a GPU cloud service?
Johan Steneros
@amrutha_killada1 oh, thank you. Will dig in.
Peter Schout
Congratulations! The speed/accuracy is incredible; no wonder NVDA took a dip 😯
Cuong Vu
Groq is a promising product, and I believe your detailed insights could attract even more supporters, helping people better understand its value.
Ian Nance
Man, that IS fast... Already loving it :)
Borja Soler
This will be incredible for the future of LLMs and all the products that benefit from them. Super excited about all the new things that will come.
Kien Nguyen
Congrats on the launch! Do you have any plans to support custom training?