Taylor AI

Fine-tune open-source LLMs in minutes
Fine-tune open-source LLMs (Llama 2, Falcon, and more) in minutes. With Taylor AI, you can focus on experimentation and building better models, not on digging through Python libraries or keeping up with every new open-source LLM. And you get to own your models.

Benjamin Anderson
Hey Product Hunt! 👋 This is Ben & Brian, and we’re excited to announce Taylor AI! 🧶

Taylor AI (YC S23) empowers enterprises to start fine-tuning open-source LLMs in seconds, so that data science and engineering teams can focus on building great products instead of worrying about GPUs, debugging Python libraries, and keeping up with every new LLM.

We started Taylor AI because we saw companies rushing to adopt AI, but struggling to integrate one-size-fits-all chat models like GPT-4 with their proprietary data. Fine-tuning is a great way to customize models, but much of the tooling available for fine-tuning today is buggy and hard to use. We want to make fine-tuning accessible to every developer and data scientist.

With Taylor, you can:
🚀 Start a training run in seconds
👀 Fine-tune state-of-the-art open-source LLMs
🔧 Own your model
🧪 Benefit from cutting-edge techniques (QLoRA, sequence packing, etc.)
🔎 Focus on experimentation, not squashing Python bugs

We’re excited for you to try Taylor AI out: every new user can launch up to 3 training jobs for free. Let us know if you have any questions or concerns by emailing us at contact@trytaylor.ai or using the chat on our website (that goes straight to us, not a bot!).

What are you fine-tuning LLMs for? Comment below 👇
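For readers who haven't seen QLoRA before: it keeps the base model's weights frozen in 4-bit and trains only small low-rank adapter matrices on top, which is what makes fine-tuning large models cheap. A minimal sketch of the technique using the open-source transformers/peft/bitsandbytes stack (illustrative only; the model name and hyperparameters are placeholders, and this is not Taylor AI's API):

```python
# A minimal QLoRA sketch using the open-source Hugging Face stack
# (transformers, peft, bitsandbytes). Illustrative only: the model name
# and hyperparameters are placeholders, not Taylor AI's API.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model with its weights quantized to 4-bit.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4, introduced by the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach small, trainable low-rank adapters; only these are updated.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Sequence packing, the other technique mentioned, concatenates several short training examples into a single fixed-length sequence so GPU time isn't wasted computing over padding tokens.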
Best Of AI
Impressed by the impact your product is set to make. Congratulations on the launch!
Cameron Raymond
The future!
Lyondhür Picciarelli
Hey there. Why should I use this Taylor instead of H2O? Cheers.
Brian Kim
Hey @lyondhur - H2O LLM Studio is a great offering, but it serves a different purpose. H2O requires you to set up your own GPU infra, and you eat those costs if your training run doesn't work. You have to make lots of choices about your training setup, and it's easy to make a small error and mess it up. We handle provisioning compute, and training just works out of the box.

We're purpose-built and solely focused on fine-tuning. Lots of startups are building in fine-tuning as an afterthought; for us, it's our core product, which is why we had Llama 2 fine-tuning up and running the day after the model was released.

We're also pretty focused on teams and collaboration. If you use LLM Studio, all your runs are siloed on your VM. We're building Taylor AI to be collaboration-first, so that you can share your models with your team and work together to train the best one to solve your problem.

Finally, we're offering some advanced features not supported by many of our competitors, such as multi-turn conversational fine-tuning, which is crucial for enterprise customers that want to build chat models on proprietary data. Happy to chat further on this :)
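For context, multi-turn conversational fine-tuning means training on whole conversations rather than single prompt/response pairs. A training example typically looks something like the sketch below; the "messages" schema shown is the common OpenAI-style chat format, not necessarily Taylor AI's, and the conversation content is invented for illustration:

```python
# An illustrative multi-turn training example in the widely used "messages"
# chat schema (OpenAI-style roles). The schema and the conversation content
# are invented for illustration; Taylor AI's actual format may differ.
example = {
    "messages": [
        {"role": "system", "content": "You are a support agent for Acme Corp."},
        {"role": "user", "content": "My order hasn't arrived."},
        {"role": "assistant", "content": "Sorry to hear that! Could you share your order number?"},
        {"role": "user", "content": "Sure, it's order 12345."},
        {"role": "assistant", "content": "Thanks. That order shipped yesterday and should arrive tomorrow."},
    ]
}

# During multi-turn fine-tuning, the loss is usually computed only on the
# assistant turns, so the model learns to respond in context across the
# whole conversation rather than to a single isolated prompt.
```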
Lyondhür Picciarelli
@brian_kim15 cheers for the reply. Just one correction: you do not need to use a GPU at all with H2O if you don't want to, and the setup is pretty trivial. Best of luck with the project.
Benjamin Anderson
@brian_kim15 @lyondhur We've talked to a lot of users who want something different than what H2O offers. We're excited to build a solution that allows more people to get started training models to solve their problems fast.
Xiaohai Liu
Looks amazing guys! Good luck with the launch!
Benjamin Anderson
@xiaohai_liu Thank you!
Daniel Fang
Congrats on the launch Ben and Brian! This looks awesome
Zakria
It's absolutely fantastic! 🔥 Everything bundled together, from fine-tuning to quantization and then deployment. I'm curious about the quantization process: could you specify the method you're using to quantize the model?
Benjamin Anderson
@zakria Depending on the model, we either use GGML to quantize it to 4-bit, or CTranslate2 to quantize to 8-bit.
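For anyone curious about the mechanics: GGML (the tensor format behind llama.cpp) packs weights into 4-bit blocks for efficient CPU inference, while CTranslate2 converts models into an optimized inference runtime that can store weights as 8-bit integers. A minimal sketch of the CTranslate2 path, with a placeholder model name (this shows the general mechanism, not Taylor AI's internal pipeline):

```python
# A minimal sketch of 8-bit quantization with CTranslate2's converter API.
# The model name is a placeholder (any architecture CTranslate2 supports);
# this illustrates the general mechanism, not Taylor AI's internal pipeline.
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter("gpt2")  # placeholder Hugging Face model
converter.convert(
    "gpt2-ct2-int8",      # output directory for the converted model
    quantization="int8",  # store weights as 8-bit integers
)
```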