A comprehensive, Flutter-based Retrieval-Augmented Generation application built to be open-source, modular, and easy to deploy. Full control over your data and infrastructure.
We named it RAG.WTF because we wanted to answer "WTF is RAG?" with unprecedented simplicity, and make setup so simple you'll say "Wow, That's Fast!"
Runs entirely in the browser using SurrealDB WASM for a secure, serverless-optional experience.
Built with Melos, separating concerns into distinct packages for better maintainability.
Pre-configured for Ollama, OpenAI, Anthropic, Gemini, and other LLM providers.
User data can remain on the client machine, without ever being sent to a server.
Reduces server-side infrastructure needs, making it economical for personal use.
Built with Flutter for seamless deployment across web, mobile, and desktop.
The application follows a standard RAG pipeline, orchestrated across its modular packages for maximum flexibility and maintainability.
Documents uploaded through the UI are sent to the text-splitting service, divided into chunks, and converted into vector embeddings.
Text chunks and vector embeddings are stored locally or remotely in SurrealDB for fast retrieval.
Query vectorization, similarity search, and context-aware answer generation using your chosen LLM.
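The three stages above — ingestion, storage, and retrieval/generation — can be sketched end to end. The snippet below is an illustrative toy in Python, not the app's actual Dart code: a bag-of-words counter stands in for a real embedding model, and an in-memory list stands in for SurrealDB. All function names here are hypothetical.

```python
import math
from collections import Counter

def chunk(text, size=40, overlap=10):
    """Ingestion: split text into overlapping character chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def embed(text):
    """Toy bag-of-words 'embedding'; the real pipeline would call an
    embedding endpoint from the configured LLM provider instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Similarity search metric between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Storage: an in-memory list standing in for SurrealDB.
store = []

def ingest(doc):
    for c in chunk(doc):
        store.append({"text": c, "vector": embed(c)})

def retrieve(query, k=2):
    """Retrieval: vectorize the query and rank stored chunks."""
    qv = embed(query)
    ranked = sorted(store, key=lambda r: cosine(qv, r["vector"]), reverse=True)
    return [r["text"] for r in ranked[:k]]

def build_prompt(query):
    """Generation: assemble retrieved context into a prompt for the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The string returned by `build_prompt` would then be passed to whichever chat model you configured, which answers using the retrieved context.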
Choose your setup based on your needs. From web application to full local deployment.
Production-ready Single-Page Application
Maximum privacy & control
Connect with developers, share your projects, and get help from the RAG.WTF community.
Comprehensive guides, tutorials, and API reference - Coming Soon!