GitLore

RAG-Powered Repository Intelligence Engine

Retrieval Stages: 10
Languages: 9
Frontends: 3

Architectural fluidity meets engineering precision.

Your repository already contains the answers — buried in thousands of commits, PR discussions, and code changes that no one remembers. GitLore makes that institutional knowledge queryable, eliminating hours of manual git-log diving and Slack archaeology. It indexes your entire repository (commits, source files, PRs, and static call graphs) into a local LanceDB vector database so you can ask why code exists, who built it, and what broke.

Every question passes through a 10-stage retrieval pipeline: intent classification (5 intents), parallel vector search across 3 data streams, symbol discovery, intent-weighted reranking, smart file expansion, cross-symbol expansion, structural context injection, elastic dual-stream budgeting, two-pass synthesis with file-system agency, and context reordering. The pipeline acts as a context distillation layer — only the most targeted snippets reach the LLM, dramatically improving answer accuracy over naive RAG.
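The first and fourth stages can be sketched in TypeScript. This is a minimal, illustrative sketch, not GitLore's actual implementation: the intent names match the five intents above, but the keyword heuristics, stream weights, and scores are assumptions made up for the example.

```typescript
// Hypothetical sketch: classify the question's intent, then rerank hits
// from the three data streams with intent-specific weights.
type Intent = "why" | "who" | "what-broke" | "how" | "where";
type Stream = "commits" | "code" | "prs";

interface Hit {
  stream: Stream;
  id: string;
  score: number; // raw vector-similarity score in [0, 1]
}

// Stage 1: a keyword stand-in for the real intent classifier.
function classifyIntent(question: string): Intent {
  const q = question.toLowerCase();
  if (q.startsWith("why")) return "why";
  if (q.startsWith("who")) return "who";
  if (q.includes("broke") || q.includes("regression")) return "what-broke";
  if (q.startsWith("where")) return "where";
  return "how";
}

// Stage 4: intent-weighted reranking — boost the streams most likely
// to answer this kind of question. Weights are illustrative.
const WEIGHTS: Record<Intent, Record<Stream, number>> = {
  "why":        { commits: 1.4, code: 0.8, prs: 1.2 },
  "who":        { commits: 1.5, code: 0.6, prs: 1.3 },
  "what-broke": { commits: 1.3, code: 1.0, prs: 1.1 },
  "how":        { commits: 0.8, code: 1.5, prs: 0.9 },
  "where":      { commits: 0.7, code: 1.6, prs: 0.7 },
};

function rerank(hits: Hit[], intent: Intent, topK: number): Hit[] {
  return hits
    .map((h) => ({ ...h, score: h.score * WEIGHTS[intent][h.stream] }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

const intent = classifyIntent("Why does the retry logic exist?");
const top = rerank(
  [
    { stream: "code", id: "retry.ts#retry", score: 0.82 },
    { stream: "commits", id: "a1b2c3", score: 0.78 },
    { stream: "prs", id: "#441", score: 0.6 },
  ],
  intent,
  2
);
```

For a "why" question, the commit hit outranks the higher-scoring code hit once the weights are applied, which is the point of intent-aware reranking.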

Built as a monorepo with a framework-agnostic core engine consumed by three frontends: an Electron 33 desktop app with streaming chat, Mermaid diagram generation, and adjustable TopK; a VS Code extension with sidebar chat and automatic index updates on git events; and a Commander-based CLI. Battle-tested on Express.js (12,889 commits, 116 source files, 2,443 PRs, 11,404 call graph edges) — proving it scales to real production repositories.

Engineered for the edge.

01

TypeScript

End-to-end type-safe monorepo with shared core engine.

02

LanceDB

4-table on-device vector store with HNSW-SQ indexing.

03

tree-sitter

AST-aware chunking across 9 languages for function-level context.

04

Electron + React

Desktop app with streaming chat, diagrams, and repo indexing.
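The four-table store and AST-aware chunking above can be pictured with TypeScript row shapes. These field names and the 384-dimension embedding are assumptions for the sketch, not GitLore's actual schema.

```typescript
// Illustrative row shapes for the four LanceDB tables; the real schema
// and field names may differ.
interface CommitRow { sha: string; message: string; author: string; vector: number[]; }
interface FileChunkRow { path: string; symbol: string; startLine: number; endLine: number; vector: number[]; }
interface PrRow { number: number; title: string; discussion: string; vector: number[]; }
interface CallEdgeRow { caller: string; callee: string; path: string; }

// A function-level chunk as tree-sitter-style AST chunking would produce:
// one row per function or method, so retrieval returns whole symbols
// rather than arbitrary line windows.
const chunk: FileChunkRow = {
  path: "lib/router/index.js",
  symbol: "Router.prototype.handle",
  startLine: 136,
  endLine: 210,
  vector: new Array(384).fill(0), // e.g. a 384-dim embedding (assumed size)
};
```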

Key Capabilities

10-Stage Retrieval Pipeline

Intent classification, parallel vector search, symbol discovery, cross-symbol expansion, structural context injection, elastic budgeting, and two-pass synthesis — dramatically improving answer accuracy over naive RAG by ensuring that only the most relevant context reaches the LLM.
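Elastic dual-stream budgeting can be sketched as follows. This is a hedged illustration, assuming two streams (history and code) sharing one token budget; the 60/40 default split and token counts are made up for the example.

```typescript
// Minimal sketch of elastic dual-stream budgeting: each stream gets a
// share of the token budget, and tokens one stream doesn't use flow to
// the other. Numbers are illustrative assumptions.
interface Snippet { tokens: number; score: number; }

function budget(
  history: Snippet[],
  code: Snippet[],
  totalTokens: number,
  historyShare = 0.6
): { history: Snippet[]; code: Snippet[] } {
  // Greedily pack the highest-scoring snippets under a token cap,
  // returning the picked snippets and any leftover tokens.
  const take = (pool: Snippet[], cap: number): [Snippet[], number] => {
    const picked: Snippet[] = [];
    let used = 0;
    for (const s of [...pool].sort((a, b) => b.score - a.score)) {
      if (used + s.tokens <= cap) { picked.push(s); used += s.tokens; }
    }
    return [picked, cap - used];
  };
  const historyCap = Math.floor(totalTokens * historyShare);
  const [h, leftover] = take(history, historyCap);
  const [c] = take(code, totalTokens - historyCap + leftover); // elastic part
  return { history: h, code: c };
}

const picked = budget(
  [{ tokens: 200, score: 0.9 }], // one short commit summary
  [
    { tokens: 500, score: 0.8 },
    { tokens: 400, score: 0.7 },
    { tokens: 300, score: 0.6 },
  ],
  1000
);
// History uses only 200 of its 600-token share, so the code stream's
// cap grows from 400 to 800 tokens.
```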

Privacy-First Architecture

Embeddings and vector search run 100% locally via Transformers.js and LanceDB. Only top-K distilled snippets reach the LLM — raw diffs and source files never leave your machine, making it safe for proprietary codebases.
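The privacy boundary is essentially local similarity search followed by top-K selection. The sketch below uses hand-rolled cosine similarity and toy two-dimensional vectors as stand-ins for the real Transformers.js embeddings and LanceDB search; only the selected snippet texts would ever be sent onward.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Rank documents locally and return only the top-K snippet texts —
// the only material that would leave the machine.
function topK(
  query: number[],
  docs: { text: string; vector: number[] }[],
  k: number
): string[] {
  return docs
    .map((d) => ({ text: d.text, sim: cosine(query, d.vector) }))
    .sort((a, b) => b.sim - a.sim)
    .slice(0, k)
    .map((d) => d.text);
}

const hits = topK(
  [1, 0],
  [
    { text: "a", vector: [1, 0] },
    { text: "b", vector: [0, 1] },
    { text: "c", vector: [1, 1] },
  ],
  2
); // → ["a", "c"]
```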

Three Frontends, One Core

Framework-agnostic @gitlore/core engine consumed by an Electron desktop app, a VS Code extension with auto-index on git events, and a Commander CLI — giving developers the same intelligence wherever they work.

TypeScript · LanceDB · Transformers.js · HNSW-SQ · Ollama / OpenAI SDK · React · VS Code API