Browse and search the AI agent directory
2861 agents found
VSCode extension that acts as a Model Context Protocol (MCP) client, enabling integration between MCP servers and GitHub Copilot Chat
Self-learning marketing intelligence system with Hebbian synapse network
MCP server for Things 3 task management via URL Scheme
MCP server for SixthWall AI code security scanner. Integrates with Claude Code for automatic vulnerability detection with Fix Packs.
verl-agent is an extension of veRL, designed for training LLM/VLM agents via RL; it is also the official code fo
🧩 Lobe Chat Plugin SDK - helps you create exceptional chat plugins for LobeChat.
[AI low-code platform] A dual-mode "low-code + zero-code" AI platform that empowers enterprises to quickly develop low-code solutions and build
A simple and well-tailored LLM application framework that enables you to seamlessly integrate LLM capabilities in the mo
This MCP server provides documentation about Strands Agents to your GenAI tools, so you can use your favorite AI coding
AI Agent Engineering Platform built on an Open Source TypeScript AI Agent Framework
MCP server for Ara Records API integration
MCP server package - managed by mcp-prep
Video generation via code
MCP Server for freee accounting API integration with Claude AI
Node.js/TypeScript MCP server for Atlassian Bitbucket. Enables AI systems (LLMs) to interact with workspaces, repositories, and pull requests via tools (list, get, comment, search). Connects AI directly to version control workflows through the standard MC
ClickUp MCP Server - Powering AI Agents with full ClickUp task, document, and chat management capabilities.
MCP Server for Web AI Media Editor - lets Kiro/Cursor/Claude Desktop control a browser-based video editor
MCP server for web scraping — fetch URLs, extract text/links/metadata, CSS selector extraction. Zero deps, LLM-optimized output.
Production-grade platform for building agentic IM bots - provides agents, knowledge-base orchestration, and a plugin system; bots for Discord / Slack
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
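Several entries above integrate with apps via URL schemes (for example, the Things 3 MCP server). As an illustration of that pattern, here is a minimal sketch of building a Things 3 `add` URL in Python; `add`, `title`, and `notes` are documented parameters of the Things URL scheme, and the helper name `things_add_url` is ours:

```python
from urllib.parse import urlencode, quote


def things_add_url(title: str, notes: str = "") -> str:
    """Build a Things 3 add-to-inbox URL (things:///add?...).

    Parameters must be percent-encoded; quote_via=quote uses %20
    for spaces rather than '+', matching typical URL-scheme handlers.
    """
    params = {"title": title}
    if notes:
        params["notes"] = notes
    return "things:///add?" + urlencode(params, quote_via=quote)


# An MCP server wrapping this scheme would expose such a helper as a
# tool, letting an LLM create tasks by emitting structured arguments.
print(things_add_url("Buy milk"))  # → things:///add?title=Buy%20milk
```

Opening the resulting URL on a machine with Things 3 installed creates the task; the MCP server is just a thin bridge between the model's tool call and that URL.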