# Vectorless

Reasoning-native document intelligence engine
A hierarchical, reasoning-native document intelligence engine. Replace your vector database with LLM-powered tree navigation.

## What is Vectorless?
Vectorless is a hierarchical, reasoning-native document intelligence engine written in Rust. Instead of chunking documents into flat vectors like traditional RAG systems, it preserves your document's structure and uses an LLM to navigate through it — like a human reading a table of contents and then diving into relevant sections.
No embeddings. No vector database. Just reasoning.
## Why I Built This
Traditional RAG frustrated me:
```
Traditional RAG:
Document → Chunk → Embed → Store in Vector DB → Similarity Search → Hope for the best
Problems:
- Structure is destroyed (chunk boundaries ignore headings/sections)
- "Similar" ≠ "Relevant" (semantic similarity doesn't mean contextual relevance)
- Vector DB infrastructure is heavy and expensive
- Hard to debug why certain chunks were retrieved
```
I wanted something that works more like how humans actually find information:
```
Vectorless:
Document → Parse into Tree → LLM Navigates Tree → Returns Relevant Content
Benefits:
- Structure preserved (sections, subsections, hierarchy)
- Reasoning-based navigation (understands context)
- Zero infrastructure (no vector DB, no embedding models)
- Debuggable (you can see the navigation path)
```
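The "parse into a tree" step can be sketched as a plain recursive type. This is an illustrative sketch only — `Section` and `toc` are hypothetical names, not the actual Vectorless internals:

```rust
// Hypothetical sketch of a preserved document tree; the real
// Vectorless types may differ.
#[derive(Debug)]
pub struct Section {
    pub title: String,
    pub content: String,
    pub children: Vec<Section>,
}

impl Section {
    pub fn new(title: &str, content: &str) -> Self {
        Section {
            title: title.to_string(),
            content: content.to_string(),
            children: Vec::new(),
        }
    }

    /// Render the table of contents the LLM "pilot" would be shown,
    /// indented by depth so the hierarchy stays visible.
    pub fn toc(&self, depth: usize) -> String {
        let mut out = format!("{}{}\n", "  ".repeat(depth), self.title);
        for child in &self.children {
            out.push_str(&child.toc(depth + 1));
        }
        out
    }
}

fn main() {
    let mut perf = Section::new("4. Performance", "");
    perf.children.push(Section::new("4.1 Benchmarks", "..."));
    perf.children.push(Section::new("4.2 Optimization", "..."));
    // Prints the section and its two subsections, indented by level.
    println!("{}", perf.toc(0));
}
```

Because sections keep their headings and nesting, the table of contents the LLM sees is the same outline a human reader would skim.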
## How It Works
Think of it as navigating a document:
```
┌───────────────────────────────────────────────────────┐
│                     Document Tree                     │
│                                                       │
│  1. Introduction ──────────────────────── [skip]      │
│  2. Architecture ──────────────────────── [skip]      │
│  3. Getting Started ───────────────────── [skip]      │
│  4. Performance ───────────────────────── [explore] ←─│─ Pilot decision
│     4.1 Benchmarks ────────────────────── [read]      │
│     4.2 Optimization ──────────────────── [skip]      │
│  5. API Reference ─────────────────────── [skip]      │
└───────────────────────────────────────────────────────┘
```
The LLM acts as a "pilot" that:
1. Reads the table of contents
2. Decides which sections to explore
3. Drills down into relevant content
4. Knows when to stop (sufficiency detection)
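The steps above can be sketched as a recursive walk over the tree. In the sketch below a trivial keyword match stands in for the LLM call so the control flow is runnable; every name (`Decision`, `decide`, `navigate`) is illustrative, not the actual Vectorless API:

```rust
// Sketch of the pilot loop. A keyword match stands in for the LLM;
// all names here are assumptions, not the real Vectorless API.
enum Decision { Skip, Explore, Read }

struct Node {
    title: String,
    content: String,
    children: Vec<Node>,
}

fn node(title: &str, content: &str, children: Vec<Node>) -> Node {
    Node { title: title.into(), content: content.into(), children }
}

// Does any title in this subtree mention the query?
fn subtree_matches(q: &str, n: &Node) -> bool {
    n.title.to_lowercase().contains(q)
        || n.children.iter().any(|c| subtree_matches(q, c))
}

// Stand-in for the LLM "pilot" decision on one node.
fn decide(q: &str, n: &Node) -> Decision {
    if n.children.is_empty() {
        if n.title.to_lowercase().contains(q) { Decision::Read } else { Decision::Skip }
    } else if subtree_matches(q, n) {
        Decision::Explore
    } else {
        Decision::Skip
    }
}

// Walk the tree, collecting content from sections the pilot reads and
// recording the navigation path so retrieval stays debuggable.
fn navigate(q: &str, n: &Node, path: &mut Vec<String>, found: &mut Vec<String>) {
    match decide(q, n) {
        Decision::Skip => {}
        Decision::Read => {
            path.push(n.title.clone());
            found.push(n.content.clone());
        }
        Decision::Explore => {
            path.push(n.title.clone());
            for child in &n.children {
                navigate(q, child, path, found);
            }
        }
    }
}

fn main() {
    let doc = node("Document", "", vec![
        node("Introduction", "intro text", vec![]),
        node("Performance", "", vec![
            node("Benchmarks", "benchmark numbers", vec![]),
            node("Optimization", "tuning tips", vec![]),
        ]),
        node("API Reference", "api text", vec![]),
    ]);
    let (mut path, mut found) = (Vec::new(), Vec::new());
    navigate("benchmark", &doc, &mut path, &mut found);
    println!("path:  {:?}", path);   // the debuggable navigation trail
    println!("found: {:?}", found);  // content from sections marked [read]
}
```

The recorded `path` is what makes this approach debuggable: when retrieval goes wrong, you can inspect exactly which sections the pilot explored, read, or skipped, rather than guessing at vector similarity scores.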