AutoBandStudio

AI that turns your melody into a full arrangement

Free Windows app that turns your melody into a full arrangement with AI. Unlike tools that generate whole songs from text prompts, AutoBandStudio keeps your voice: you sketch the melody, and AI fills in the rest.

How it works:
1. Draw a melody on the piano roll
2. AI generates chords, drums, bass, and guitar
3. Export as MIDI or audio

Features:
🎹 AI chord prediction
🎸 Guitar generation (4 styles)
🥁 Editable drums
✂️ Regenerate sections
💾 Save/load projects

Free; Pro unlocks unlimited exports.
Launch tags: Music • Artificial Intelligence

godgun
Maker

Hey everyone! 👋 I'm a solo developer; my day job is building C# desktop apps. I built AutoBandStudio on nights and weekends over the past few months.

Why I made this: I've been playing in bands since college. I always had melodies in my head, but turning them into full songs was the hard part, especially the accompaniment. There are amazing AI music tools out there that generate entire songs from text prompts. I wanted something different: a tool where I create the melody, and AI just helps with the backing track. Think of it like sketching a drawing and having AI color it in.

Tech behind it: The chord prediction model is a Transformer encoder built and trained in PyTorch. As a C# developer, diving into Python and deep learning for the first time was... an experience 😅
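As an illustration (not the actual AutoBandStudio model), here is a minimal PyTorch sketch of a Transformer encoder that predicts a chord class for each melody step; the vocabulary sizes, layer counts, and shapes are placeholder assumptions.

```python
# Minimal sketch: a Transformer encoder that maps melody tokens to per-step chord logits.
# All sizes here (130 pitch tokens, 48 chord classes, etc.) are illustrative guesses.
import torch
import torch.nn as nn

class ChordPredictor(nn.Module):
    def __init__(self, n_pitches=130, n_chords=48, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        self.embed = nn.Embedding(n_pitches, d_model)            # melody tokens -> vectors
        self.pos = nn.Parameter(torch.zeros(1, 512, d_model))    # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=512, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_chords)                  # per-step chord logits

    def forward(self, melody_tokens):                             # (batch, seq_len) int64
        x = self.embed(melody_tokens) + self.pos[:, :melody_tokens.size(1)]
        return self.head(self.encoder(x))                         # (batch, seq_len, n_chords)

# Toy usage: one 64-step melody in, one chord prediction per step out.
model = ChordPredictor()
melody = torch.randint(0, 130, (1, 64))
print(model(melody).shape)                                        # torch.Size([1, 64, 48])
```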

Where it stands now: 650+ downloads across 58 countries and a 5.0 rating on the Microsoft Store. v2.0 just launched with guitar generation, drum editing, and a Pro subscription.

I'd love to hear your thoughts: what would make this more useful? What's missing? Thanks for checking it out! 🎵

eric ng

Are you running the PyTorch model via ONNX Runtime locally, or is the inference happening on a remote server?

godgun
Maker

@eric_wck657821 All inference runs locally via ONNX Runtime. The model is trained in PyTorch and exported to ONNX format. No server calls, no internet needed; it works fully offline.
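To make that pipeline concrete, here is a minimal sketch of a PyTorch-to-ONNX export followed by local inference with ONNX Runtime. The stand-in network, tensor shapes, and file name are placeholder assumptions; in the shipped app the exported .onnx file would be loaded from the C# side via the ONNX Runtime bindings rather than from Python.

```python
# Minimal sketch of the PyTorch -> ONNX -> local-inference path described above.
import torch
import torch.nn as nn
import onnxruntime as ort

# Placeholder network standing in for the trained chord-prediction model.
model = nn.Sequential(nn.Embedding(130, 64), nn.Linear(64, 48)).eval()
dummy = torch.randint(0, 130, (1, 64))           # fake melody-token batch

# Export once, at build time.
torch.onnx.export(
    model, dummy, "chord_predictor.onnx",
    input_names=["melody"], output_names=["chord_logits"],
    dynamic_axes={"melody": {1: "seq_len"}, "chord_logits": {1: "seq_len"}},
)

# At run time, inference needs only the .onnx file and ONNX Runtime, no network calls.
session = ort.InferenceSession("chord_predictor.onnx")
logits = session.run(None, {"melody": dummy.numpy()})[0]
print(logits.shape)                               # (1, 64, 48)
```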

Thanks for the question!