A plug-and-play smart docs solution in 3 lines.

Power Users
All your docs, one brain

AI-driven document management
Drag and drop PDFs, Notion exports, or code snippets and chat with them in your favourite AI assistant, IDE, or Obsidian. No folders. No naming conventions.

Context-aware search
Lightning-fast semantic search powered by efficient embeddings

Flexible by default
Your documents aren't locked into a big AI lab's storage. Use their models; connect your docs anywhere.
Drop-in RAG backend
From curl ⋯ to production

API Ready
Drop-in Auto-RAG Tool
Wire a single MCP tool pair, upload_document and ask_rag, into your chatbot or agent. We run LlamaIndex, Qdrant, and R2 behind the curtain, so you never touch vector DBs or chunking code.
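To make the tool pair concrete, here is a minimal sketch of what the two tool calls might carry. The tool names come from above; the argument fields (file_name, content_b64, question, top_k) are illustrative assumptions, not the documented schema.

```python
# Hypothetical MCP tool-call payloads for upload_document and ask_rag.
# Field names are assumptions for illustration only.
import base64
import json

def upload_document_call(file_name: str, raw_bytes: bytes) -> dict:
    """Build a tool call that ships a document as base64 (sketch)."""
    return {
        "name": "upload_document",
        "arguments": {
            "file_name": file_name,
            "content_b64": base64.b64encode(raw_bytes).decode("ascii"),
        },
    }

def ask_rag_call(question: str, top_k: int = 4) -> dict:
    """Build a tool call that queries the knowledge base (sketch)."""
    return {"name": "ask_rag", "arguments": {"question": question, "top_k": top_k}}

call = upload_document_call("notes.md", b"# My notes")
print(json.dumps(call, indent=2))
```

The point of the pair is that ingestion and querying are the only two operations your agent needs; everything between them (parsing, chunking, embedding, retrieval) happens server-side.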

Billing
Pay-as-you-grow Billing
We meter your total storage and query traffic and invoice your app through Stripe. No need to bolt on your own usage tracker or payment flow.
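The billing math reduces to two metered quantities. A sketch of how a usage-based invoice could be computed, where every rate and free-tier allowance below is a made-up placeholder, not easy-rag's actual pricing:

```python
# Hypothetical usage-based pricing. All constants are placeholders.
FREE_STORAGE_GB = 1.0          # free-tier storage allowance (assumed)
FREE_QUERIES = 1000            # free-tier query allowance (assumed)
STORAGE_RATE_PER_GB = 0.25     # USD per GB-month (assumed)
QUERY_RATE_PER_1K = 0.50       # USD per 1,000 queries (assumed)

def monthly_invoice_usd(storage_gb: float, queries: int) -> float:
    """Charge only for usage above the free tier."""
    billable_gb = max(0.0, storage_gb - FREE_STORAGE_GB)
    billable_queries = max(0, queries - FREE_QUERIES)
    return round(
        billable_gb * STORAGE_RATE_PER_GB
        + billable_queries / 1000 * QUERY_RATE_PER_1K,
        2,
    )

print(monthly_invoice_usd(5.0, 3000))  # 4 GB and 2,000 queries billable
```

Because only storage and query traffic are metered, there is no per-seat or per-document accounting to reconcile on your side.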

Security
Secure Tenant Isolation
Auth0 JWTs keep every builder's knowledge base in a separate R2 bucket and Qdrant collection, so your data never mingles with anyone else's.
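The isolation guarantee rests on deriving per-tenant resource names from the authenticated identity. A minimal sketch of one way to do that; the actual bucket and collection naming easy-rag uses is not documented here, so the scheme below is an assumption:

```python
# Illustrative per-tenant resource naming. The digest-based scheme and
# name prefixes are assumptions, not easy-rag's real convention.
import hashlib

def tenant_resources(auth0_sub: str) -> dict:
    """Map an Auth0 subject ID (e.g. 'auth0|64f1...') to stable,
    collision-resistant resource names valid for R2 and Qdrant."""
    digest = hashlib.sha256(auth0_sub.encode("utf-8")).hexdigest()[:16]
    return {
        "r2_bucket": f"easyrag-docs-{digest}",
        "qdrant_collection": f"kb_{digest}",
    }

a = tenant_resources("auth0|user-a")
b = tenant_resources("auth0|user-b")
assert a != b  # distinct identities never share a bucket or collection
```

Deriving names from the verified JWT subject, rather than from anything client-supplied, is what prevents one tenant from ever addressing another tenant's storage.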

Portability
Let your users bring their own data
Users who already use easy-rag can authenticate and connect their existing knowledge base to your application directly.
Everything you need
A complete RAG solution for modern applications
Everything you need to build powerful knowledge-based applications with minimal effort.
Zero-Ops Hosting
R2 for raw files, Qdrant Cloud for vectors, all provisioned automatically.
Smart Chunking
Sentence splitter + 1 kB overlap tuned for bge-small accuracy.
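A minimal sketch of sentence-based chunking with overlap, in the spirit of the splitter described above. The sentence regex, 1 kB limit, and overlap size are illustrative; the hosted pipeline's exact parameters and splitter may differ.

```python
# Sketch: split on sentence boundaries, pack sentences into ~1 kB chunks,
# and carry a tail of the previous chunk forward as overlap so sentences
# near a boundary keep surrounding context. Parameters are assumptions.
import re

def chunk(text: str, max_bytes: int = 1024, overlap_chars: int = 128) -> list:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len((current + " " + s).encode("utf-8")) > max_bytes:
            chunks.append(current)
            current = current[-overlap_chars:] + " " + s
        else:
            current = (current + " " + s).strip()
    if current:
        chunks.append(current)
    return chunks
```

Keeping whole sentences inside each chunk, rather than cutting at raw byte offsets, is what lets a small embedding model like bge-small see coherent units of meaning.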
Live Streaming
Server-sent events deliver answers token-by-token with <200 ms TTFB.
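On the client side, consuming that stream amounts to reading standard SSE frames. A sketch assuming the server emits plain `data: <token>` frames and a terminal `data: [DONE]` sentinel; the sentinel is an assumption, not a documented part of the protocol.

```python
# Sketch of an SSE token consumer. Assumes "data: <token>" frames and a
# "[DONE]" end-of-stream sentinel (the sentinel is an assumption).
def read_sse(lines):
    """Yield token payloads from an iterable of raw SSE lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, event names
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        yield payload

frames = ["data: Hello", "", "data: world", "", "data: [DONE]"]
print("".join(read_sse(frames)))
```

Rendering each yielded token as it arrives is what produces the sub-200 ms time-to-first-byte feel described above.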
Multi-Format Ingest
PDF, Markdown, DOCX, TXT, and URLs: just send and forget.
Per-User Isolation
Auth0 IDs map 1-to-1 to vector collections; no data bleeding.
Pay-As-You-Grow
Free tier, then usage-based Stripe billing via the MCP worker.