Browse and search the AI agent directory
450 agents found
MCP server for adding bookmarks to an OpenAI RAG setup
📚 A from-scratch tutorial on the principles and practice of large language models
Semantic code search for AI coding assistants. Local Qdrant, multi-repo, no API keys.
MCP server for Obsidian Smart Connections. Semantic search using your vault's embeddings.
Local FAISS vector database for RAG with document ingestion, semantic search, and MCP prompts.
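At the core of a vector-database RAG server like this is nearest-neighbor search over document embeddings. A minimal sketch of that retrieval step, using plain-Python cosine similarity over toy 4-dimensional vectors (a real server would use FAISS indexes and a proper embedding model; the document names and vectors here are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embedding index": doc id -> vector. Real systems store thousands
# of model-generated embeddings in a FAISS index instead of a dict.
index = {
    "doc_a": [0.9, 0.1, 0.0, 0.0],
    "doc_b": [0.0, 0.2, 0.9, 0.1],
    "doc_c": [0.8, 0.2, 0.1, 0.0],
}

def search(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, index[d]),
                    reverse=True)
    return ranked[:k]

top = search([1.0, 0.0, 0.0, 0.0])  # docs closest to the query embedding
```

The brute-force scan shown here is O(n) per query; FAISS exists precisely to replace it with approximate indexes that scale to millions of vectors.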
Generate QA datasets & evaluate RAG systems. Privacy-first, any LLM, local or cloud.
Local RAG MCP server with hybrid search, PDF/DOCX support, and zero-config setup
SQL Server MCP with RAG capabilities for Windows (native ODBC support)
Productivity-boosting RAG engine for codebases with multi-provider AI support and semantic search.
BM25 search + tree navigation over markdown docs for AI agents. No embeddings, no LLM calls.
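The appeal of an entry like the one above is that BM25 needs no embeddings or LLM calls, only term statistics. A self-contained sketch of Okapi BM25 scoring over pre-tokenized documents (the toy corpus and default `k1`/`b` values are illustrative, not taken from that project):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                     # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                # term frequency in this document
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

docs = [
    "install the package with pip".split(),
    "notes on vector embeddings".split(),
    "bm25 keyword search over markdown".split(),
]
scores = bm25_scores("markdown search".split(), docs)
best = max(range(len(docs)), key=scores.__getitem__)  # index of top hit
```

Because scoring is just counting and a log, the whole index fits in memory and queries are cheap, which is why embedding-free search remains attractive for agent tooling.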
Compress prompts 40-60% using local LLM + embedding validation. Preserves all conditionals.
GenAI Cookbook
🎖️ 🦀 🏠 🍎 Local-first system that captures screen and audio with timestamped indexing, SQL/embedding storage, semantic search, LLM-powered history analysis, and event-triggered actions, enabling context-aware AI agents through a NextJS plugin ecosystem
Privacy-first document search server running entirely locally. Supports semantic search over PDFs, DOCX, TXT, and Markdown files with LanceDB vector storage and local embeddings - no API keys or cloud services required
A "primitive" RAG-like web-search Model Context Protocol (MCP) server that runs locally. No API keys needed
A framework for creating multi-agent systems using MCP for coordinated AI collaboration, featuring task management, shared context, and RAG capabilities
Intelligent learning sidecar for AI coding assistants. Helps developers learn from AI-generated code changes through interactive blocking quizzes, and gives agents persistent project-specific debugging memory via silent RAG tools. Reduces token usage by 56% and supports multiple languages
Provides up-to-date documentation context for a specific Rust crate to LLMs via an MCP tool, using semantic search (embeddings) and LLM summarization
Lightweight local RAG MCP server for semantic vector search over markdown documents. Reduces token consumption by 40x with sqlite-vec and multilingual-e5-small embeddings. Supports filtered search by directory and filename patterns
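A lightweight setup like the one above combines three ideas: embeddings stored in SQLite, similarity search, and metadata filters on path or filename. A minimal sketch under those assumptions, packing float32 vectors as blobs in the standard-library `sqlite3` module and filtering with `LIKE` (a real server would use the sqlite-vec extension and model-generated embeddings; the paths, texts, and 2-d vectors here are invented for illustration):

```python
import math
import sqlite3
import struct

def pack(vec):
    """Serialize a list of floats as a float32 blob."""
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    """Deserialize a float32 blob back into a list of floats."""
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chunks (path TEXT, text TEXT, emb BLOB)")
rows = [
    ("notes/ml.md", "gradient descent", [0.9, 0.1]),
    ("notes/db.md", "sqlite basics", [0.1, 0.9]),
    ("blog/ml.md", "attention notes", [0.8, 0.3]),
]
con.executemany("INSERT INTO chunks VALUES (?, ?, ?)",
                [(p, t, pack(e)) for p, t, e in rows])

def cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def search(query_vec, path_like="%", k=1):
    """Brute-force similarity search, restricted by a path pattern."""
    cur = con.execute("SELECT path, text, emb FROM chunks WHERE path LIKE ?",
                      (path_like,))
    scored = [(cos(query_vec, unpack(e)), p, t) for p, t, e in cur]
    return sorted(scored, reverse=True)[:k]

hit = search([1.0, 0.0], path_like="notes/%")[0]  # (score, path, text)
```

The `LIKE` filter runs inside SQL before any vectors are decoded, which is how directory- and filename-scoped search stays cheap even as the chunk table grows.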
LLM-driven context and memory management with wide-recall + precise-reranking RAG architecture. Features multi-dimensional retrieval (vector/timeline/knowledge graph), short/long-term memory, and complete MCP support (HTTP/WebSocket/SSE)