Tiny Tool Use by Bagel Labs

Tool use with open-source LLMs, made simple

Tiny Tool Use is a minimal, open-source library for LLMs to make reliable, auditable tool calls. Supports SFT, DPO, and synthetic data — all driven by simple JSON config. Fast setup, strong evals, and ready for real-world prototyping.

Kyrannio
👋 Hey Product Hunt! At Bagel Labs, we believe the future of advanced AI systems depends on their ability to reason with external tools, APIs, and data sources — tool use is a big part of what makes models like o3 so capable. But when we tried building with existing “tool-use” stacks, we ran into the same issues every time: brittle code, no reproducibility, and zero shared benchmarks.

So we built Tiny Tool Use — a minimal, MIT-licensed open-source library that turns adapting LLMs for robust, auditable tool calls into a config-only workflow. No fragile scaffolding. No hidden state. Just one JSON file and a single CLI.

🚀 Why we built it
• Adapt open-source LLMs in minutes, not days — from config to full fine-tune
• Make tool calls auditable via explicit schemas and benchmarked outputs
• Scale synthetic data generation via Teacher Mode — models generate, remix, and train on their own traces

🛠️ What makes it different
• Configuration-only pipelines — models, tools, datasets, and hyperparameters all live in JSON
• One-flag training mode swap — "method": "sft" | "dpo" | "teacher_mode"
• Synthetic data at scale — blend real and generated traces via "real_fraction"
• Full evaluation suite — TensorBoard, Berkeley Function Calling Leaderboard (BFCL), and detailed metrics
• Production-ready — LoRA adapters, GPU-agnostic trainer, model merge utilities

🔍 Evaluations that matter
Every training run measures:
• Tool selection accuracy
• Format correctness
• Execution success
• Response quality
And every run is logged with full provenance: config, seed, and Git commit.

🏗️ What we’re proud of
• <400 lines of core logic — readable, hackable, extensible
• Full LoRA support from 0.6B to 70B+ parameters
• Community contributions already landing (new tool schemas, eval reports)
• Seamless end-to-end: train, merge, benchmark, share

Whether you’re a researcher, a builder, or just LLM-curious — we built Tiny Tool Use so anyone can take models from static text prediction to dynamic tool reasoning.
We’re excited to launch today and can’t wait to see what you build with it. Check out the repo here! 👉 GitHub: https://github.com/bagel-org/bag...
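
To give a feel for the config-only workflow described above, here is a minimal sketch of what such a JSON file might look like. Only "method" (with its "sft" | "dpo" | "teacher_mode" values) and "real_fraction" are taken from the post; every other key name and value here is a hypothetical placeholder, not the library's actual schema — see the repo for the real format.

```json
{
  "model": "Qwen/Qwen2.5-0.5B-Instruct",
  "method": "sft",
  "real_fraction": 0.7,
  "tools": [
    {
      "name": "get_weather",
      "parameters": {
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
      }
    }
  ],
  "training": {
    "lora_rank": 16,
    "learning_rate": 2e-5,
    "seed": 42
  }
}
```

Under this kind of setup, swapping "sft" for "dpo" or "teacher_mode" would be the one-flag training mode swap the post describes, and "real_fraction": 0.7 would mean 70% real traces blended with 30% synthetic ones.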
Zeng

@kyrannio Congratulations Kiri! So happy to get the early access.

Kyrannio

@zeng Thanks, Zeng! We are so thrilled to have you!

Unity Eagle

Can’t wait to see it all in action, so I think I’ll start by reading all the documentation.

Kyrannio

@unity_eagle woohoo! Sounds great, Unity! We are excited to see what you build :). Thanks for joining us!

Rufus JW.ORG

This launch is a dream come true for developers

Kyrannio

@rufus87078959 Thanks Rufus. We can’t wait to see what you build! :)

Erliza. P

Clean utility 🔧🧠 Simplifying tool use with open-source LLMs hits the sweet spot for devs and tinkerers.

Mike Chaves

This is the future of AI systems!

InfiniteArtai

Can't wait to start working with Bagel Labs tools and AI!

Rachit Magon

@kyrannio This is exactly what the ecosystem needed! The config-only approach is brilliant, it removes so much boilerplate complexity. Quick question, how does the Teacher Mode synthetic data generation compare to traditional fine-tuning in terms of model performance? Are you seeing better generalization?