Javier Fuentes

CLIP Playground - OpenAI's CLIP model in your browser, like GPT-3 for images

CLIP Playground lets you test OpenAI's new CLIP model from your browser. CLIP is a GPT-3-like AI you can use to perform a variety of tasks that pair images with text.

Replies

Javier Fuentes
Hi everyone! 👋 A few weeks ago OpenAI (the creators of GPT-3) released an amazing new model called CLIP. CLIP is an AI model that understands the similarity between text and images, and just like GPT-3 there is no need to train it on your own data. This means you can use it out of the box to solve a lot of computer vision problems. However, I couldn't find an easy way to test its capabilities, so I decided to build a simple web app myself. I wanted to give everyone (including myself) the chance to see what CLIP is capable of from the comfort of a browser. And I am impressed! You can use CLIP for a huge variety of problems. I include some examples in the product on how to use it, but I am sure you will come up with other cool applications. Try it and let me know what you think!
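For anyone who wants to poke at the model itself, here is a minimal sketch of what "scoring an image against a few text labels" looks like in code. It uses the Hugging Face transformers wrapper around the public CLIP checkpoint, which is just one convenient way to run CLIP locally and not necessarily how CLIP Playground does it on the backend; the image path and labels are placeholders you would swap for your own.

```python
# Minimal zero-shot image/text matching with CLIP via the Hugging Face
# `transformers` wrapper. Assumes `pip install transformers torch pillow`.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Public CLIP checkpoint released by OpenAI (ViT-B/32 variant).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("your_image.jpg")  # placeholder: any local image to test
labels = ["a photo of a cat", "a photo of a dog", "a diagram"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores; softmax turns
# them into probabilities over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The resulting probabilities tell you which caption CLIP thinks best matches the image, which is the same zero-shot idea the playground exposes through its UI.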
Simone Romano
Cool stuff. Finally I don't necessarily have to use Python on my laptop to test CLIP!
Javier Fuentes
@ialuronico Thanks Simone! 🙌
Erik Dunteman
This is great! Well done. Curious how you're running the model on the backend? Shameless plug: I have an API that scales, if it's ever of interest: https://www.producthunt.com/post...
Javier Fuentes
@erikdoingthings Hey Erik! That's exactly what I used, actually :D I talk about the details of how I built CLIP Playground here: https://twitter.com/JavierFnts/s... Great job with the API btw! Really easy to set up and it works great.
Max Prilutskiy
I like the idea! 👍
Javier Fuentes
@prilutskiy Thanks Max! 🙌