Qdrant MCP Server
Official Qdrant vector database MCP server. Acts as a semantic memory layer on top of Qdrant: store information with metadata, retrieve via similarity search. Two tools, very small surface area, exceptionally maintained by the Qdrant team. Pushed two days ago with 2 commits in the last 30 days; 6 releases shipped, 0.8.x line stable. The simplest path to durable agent memory backed by a real vector database. Configurable embedding provider (fastembed default with sentence-transformers/all-MiniLM-L6-v2). Supports both remote Qdrant clusters via QDRANT_URL and local databases via QDRANT_LOCAL_PATH. Read-only mode available via QDRANT_READ_ONLY for query-only deployments. FastMCP-based, distributed via PyPI as mcp-server-qdrant.
INSTALL THIS SERVER
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": [
        "mcp-server-qdrant"
      ],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "agent-memory",
        "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
      }
    }
  }
}
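A sketch of the local-database variant, assuming QDRANT_LOCAL_PATH replaces QDRANT_URL (the two are alternatives, per the configuration options above; the path shown is illustrative):

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": [
        "mcp-server-qdrant"
      ],
      "env": {
        "QDRANT_LOCAL_PATH": "./qdrant-data",
        "COLLECTION_NAME": "agent-memory",
        "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
      }
    }
  }
}
```

Adding "QDRANT_READ_ONLY": "true" to the env block turns either variant into a query-only deployment.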
2 TOOLS AVAILABLE
OUR ASSESSMENT
- Official Qdrant org publication.
- Tiny tool surface (qdrant-store, qdrant-find) keeps the agent tool selection clean.
- Configurable embedding model via EMBEDDING_PROVIDER and EMBEDDING_MODEL.
- Read-only mode via QDRANT_READ_ONLY for query-only deployments.
- Supports remote and local Qdrant via QDRANT_URL or QDRANT_LOCAL_PATH.
- FastMCP-based with all FastMCP environment variables available.
- Apache-2.0 license.
- 6 releases shipped; 1,373 stars and 267 forks.
- Two tools only; teams wanting collection management or batched operations need the Qdrant client SDK.
- Latest release v0.8.1 from December 2025; release cadence is slower than commit cadence.
- Embedding provider currently locked to fastembed family.
Data is stored in plaintext within Qdrant (encryption at rest is a Qdrant configuration concern). For multi-tenant deployments, scope each agent to a dedicated COLLECTION_NAME. QDRANT_API_KEY authenticates to the Qdrant cluster; rotate it on credential exposure. For local-only operation use QDRANT_LOCAL_PATH so vector data stays on the host.
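The per-agent collection scoping can be sketched as a small helper that builds each agent's env block. The agent-memory-&lt;id&gt; naming convention is our assumption, not a server requirement; mcp-server-qdrant simply uses whatever COLLECTION_NAME it is given.

```python
def agent_env(agent_id: str, base_url: str = "http://localhost:6333") -> dict:
    """Build a per-agent env dict with a dedicated Qdrant collection,
    so tenants never read or write each other's memory."""
    return {
        "QDRANT_URL": base_url,
        "COLLECTION_NAME": f"agent-memory-{agent_id}",
        "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2",
    }

print(agent_env("billing")["COLLECTION_NAME"])  # → agent-memory-billing
```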
Best fit: teams already running Qdrant for production vector search, agents needing durable queryable memory across sessions, and read-only knowledge bases where the LLM operates in query-only mode.
TECHNICAL DETAILS
ADOPTION METRICS
1,373 stars and 267 forks confirm it as the canonical Qdrant integration path. Smithery badge in the README amplifies discovery.
Second-ranked in the AI/ML category. A cleaner Tier 1 trade-off than Pinecone for evaluation: more stars, more forks, freshly maintained.
SOURCES & VERIFICATION
We don't take any single directory's word for it. Before scoring, we cross-reference 5 public MCP sources, install the server ourselves against the clients we cover, and record when we last re-verified.
The same server, 5 different lenses. We reconcile these signals into our editorial score, which is why our number sometimes diverges from a directory-aggregate star count.
| Source | Their rating | Their star count | Their downloads | Last synced |
|---|---|---|---|---|
| AutomationSwitch (this page) | 4.5 (editorial) | 1,373 | — | APR 29, 2026 |
| PulseMCP | — unrated | unavailable | unavailable | APR 29, 2026 |
| Smithery | — unrated | unavailable | unavailable | APR 29, 2026 |
| Glama | — unrated | unavailable | unavailable | APR 29, 2026 |
| MCP.so | — unrated | unavailable | unavailable | APR 29, 2026 |
| Official MCP Registry | — unrated | unavailable | unavailable | APR 29, 2026 |
// Counts are directory-reported; we don't adjust them. Discrepancies usually come from different snapshot times or star-caching.
OTHER AI / ML MCP SERVERS
Cognee MCP
Knowledge graph plus vector memory engine for AI agents, exposed as an MCP server with V2 session-aware memory tools (remember, recall, forget, improve) and classic V1 ingestion pipelines (cognify, codify). Three transports: stdio, SSE, Streamable HTTP. 16,965 GitHub stars, Apache-2.0.
Codebase Memory MCP
High-performance code intelligence MCP server for AI coding agents. Indexes a codebase into a queryable knowledge graph in milliseconds, with 14 MCP tools spanning structural search, call-chain tracing, impact analysis, dead-code detection, and Cypher queries. Single static C binary, 66 languages via tree-sitter, zero runtime dependencies.
Arize Phoenix MCP
LLM observability platform exposing prompts, projects, traces, spans, sessions, datasets, and experiments through MCP. Published to npm as @arizeai/phoenix-mcp, current 4.0.8 (2026-04-29). 9,496 stars on parent monorepo, Elastic License 2.0.
Amazon Bedrock AgentCore MCP
Official AWS Labs MCP server for Amazon Bedrock AgentCore: agent runtime, memory, gateway, identity, and observability. Tools fetch curated AgentCore documentation and surface deployment guides for runtime, memory, and gateway resources. Apache-2.0 within awslabs/mcp monorepo (8,924 parent stars).
Vapi MCP
Vapi.ai voice AI server SDK with MCP server module. Manage voice assistants, trigger outbound calls, and inspect call transcripts. 118 GitHub stars and 14 commits on main in the last 30 days.
Weights & Biases MCP
Official W&B MCP server for Weights & Biases Models and Weave. Query experiments, runs, sweeps, models, traces, evaluations through MCP. 50 GitHub stars and 13 commits on main in the last 30 days.
DISCUSS YOUR MCP REQUIREMENTS.
Evaluating a server, scoping an internal deployment, or working out whether MCP is the right fit at all. Start the conversation and we will point you at the right piece of the ecosystem.