Shivendra Soni
32 points
As AI applications gain traction, the costs and latency of using large language models (LLMs) can escalate. VectorCache addresses these issues by caching LLM responses based on semantic similarity, thereby reducing both costs and response times.
Vector Cache: A Python Library for Efficient LLM Query Caching
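The idea behind semantic caching can be sketched in a few lines: embed each query, and on a new query return a cached response whose embedding is similar enough instead of calling the LLM again. The sketch below is illustrative only and is not VectorCache's actual API; the toy bag-of-words embedding, the `SemanticCache` class, and the 0.8 threshold are all assumptions for demonstration (a real cache would use a neural embedding model and a vector index).

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words token counts. Purely illustrative;
    # a real semantic cache would use a neural embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached response when a new query is similar enough
    to a previously seen one (hypothetical class, not VectorCache's API)."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query):
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, resp in self.entries:
            sim = cosine(q, emb)
            if sim > best_sim:
                best, best_sim = resp, sim
        # Only treat it as a hit above the similarity threshold.
        return best if best_sim >= self.threshold else None

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("what is the capital of France", "Paris")
# A near-duplicate phrasing hits the cache; an unrelated query misses.
print(cache.get("What is the capital of France?"))  # → Paris
print(cache.get("tell me about quantum physics"))   # → None
```

On a cache hit the LLM call is skipped entirely, which is where the cost and latency savings come from; the threshold trades hit rate against the risk of serving a response for a subtly different question.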
Shivendra Soni left a comment:
Does it only work for US stocks? Any idea how I can adapt this for the Indian stock market?
Composer: Build, backtest, and execute trading algorithms with AI
Shivendra Soni left a comment:
Has this been disabled now?
Company in a Box: Startup idea to leads in one click with GPT-3
Shivendra Soni left a comment:
This is so soothing :D
Drive & Listen: Drive around cities while listening to their local radios