Anandesh Sharma left a comment
Congrats on the launch, but I don't quite get it: aren't you doing the embedding in the backend from the VLM? You are basically transcribing each frame of the video in a multi-threaded way, and after you get the text you index it with the video's PTS. When the user asks a question, it goes to the vector database, which fetches the relevant frames based on the query, along with a summary and timestamp. A human on the...
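
For anyone following along, here is a rough sketch of the pipeline the comment describes: caption each frame with a VLM in parallel, index the caption text together with the frame's PTS, then answer a query by similarity search over those captions. `caption_frame` and `embed_text` are hypothetical placeholders for whatever VLM and embedding model the backend actually uses; this is only an illustration of the described flow, not Memories.ai's actual implementation.

```python
# Sketch of the described flow: parallel frame captioning -> (caption, PTS)
# index -> query by embedding similarity. The VLM and embedding calls are
# stand-ins (assumptions), not real Memories.ai APIs.
import math
from concurrent.futures import ThreadPoolExecutor


def caption_frame(frame) -> str:
    """Placeholder for a VLM call that turns one decoded frame into text."""
    return f"caption for frame at pts={frame['pts']}"


def embed_text(text: str) -> list:
    """Placeholder for a text-embedding model; here a toy character histogram."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a, b) -> float:
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def index_video(frames, max_workers=8):
    """Caption frames in a thread pool and keep (pts, caption, embedding) records."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        captions = list(pool.map(caption_frame, frames))
    return [
        {"pts": f["pts"], "caption": c, "embedding": embed_text(c)}
        for f, c in zip(frames, captions)
    ]


def search(index, query, top_k=3):
    """Embed the query and return the closest captions with their timestamps."""
    q = embed_text(query)
    ranked = sorted(index, key=lambda rec: cosine(q, rec["embedding"]), reverse=True)
    return [(rec["pts"], rec["caption"]) for rec in ranked[:top_k]]


if __name__ == "__main__":
    frames = [{"pts": i * 1.0} for i in range(10)]  # decoded frames with PTS
    idx = index_video(frames)
    print(search(idx, "what happens around frame 3?"))
```

In a real system the in-memory list would be a vector database and the toy histogram would be a learned embedding, but the shape of the pipeline (caption, index with PTS, retrieve by query similarity) is what the comment is asking about.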

Memories.ai: ChatGPT for your video library, with unlimited video context
