
Starts a bulk import job into an index (POST /bulk/imports). Returns a job id. Use it for large static corpora; monitor progress with PC_DESCRIBE_BULK_IMPORT.
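A minimal sketch of the request body such a call might send. The bucket URI and field names are illustrative assumptions, not a definitive schema; check the current Pinecone API reference before relying on them.

```python
import json

# Hypothetical body for POST /bulk/imports: point the job at an
# object-storage prefix containing the records to import.
start_import_body = {
    "uri": "s3://example-bucket/embeddings/",  # hypothetical source prefix
    "errorMode": {"onError": "continue"},      # skip bad records instead of aborting
}

print(json.dumps(start_import_body))
```

The returned job id is what you would then pass to PC_DESCRIBE_BULK_IMPORT.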
Common Use Cases:
1. Semantic Search at Scale
AI agents store and query millions of vectors in Pinecone to power fast, accurate semantic search across documents, products, and knowledge bases.
2. RAG Pipeline Backbone
AI agents use Pinecone as the high-performance vector store in retrieval-augmented generation workflows, delivering relevant context to LLMs with low-latency queries.
3. Personalized Recommendation Systems
AI agents query Pinecone embeddings to deliver real-time, personalized recommendations for products, content, and resources based on user profiles and behavior.
4. Duplicate Detection & Deduplication
AI agents compare incoming records against Pinecone indices to identify duplicates, near-matches, and overlapping entries across CRMs, databases, and content systems.
5. Real-Time Anomaly Detection
AI agents stream data embeddings into Pinecone and query for outliers in real time, detecting fraud, unusual behavior, or system anomalies as they occur.

Describes a collection by name (GET /collections/{name}). Returns size and source index. Use before PC_CREATE_INDEX_FROM_BACKUP-style flows.

Shortcut for POST /query with includeValues=false and includeMetadata=true for lighter payloads. Returns ids, scores, and metadata without dense vectors. Use to save bandwidth; enable values in PC_QUERY_VECTORS when debugging embeddings.

Creates an index restored from a backup snapshot when your project supports backups. Returns async operation info. Use after PC_LIST_BACKUPS; for net-new indexes use PC_CREATE_INDEX.

Runs Pinecone-hosted rerank (POST /rerank) with model, query, documents, top_n, return_documents. Returns ordered docs. Use after vector recall; alternative to client-side cross-encoder hosting.
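A sketch of the POST /rerank body the parameters above describe. The model name, query, and documents are illustrative assumptions:

```python
import json

# Hypothetical body for POST /rerank (model name is an example, not a recommendation).
rerank_body = {
    "model": "bge-reranker-v2-m3",
    "query": "which plan includes SSO?",
    "documents": [
        {"text": "The Enterprise plan includes SSO and audit logs."},
        {"text": "The Starter plan covers up to five seats."},
    ],
    "top_n": 2,                # keep only the best two documents
    "return_documents": True,  # echo document text back alongside the scores
}

print(json.dumps(rerank_body))
```

Typically the documents list is the candidate set returned by a prior vector query.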

Shortcut for POST /query that merges your request body with a higher default topK (50) for recall-oriented searches. Returns more neighbors per call. Use when precision@5 is too narrow; tighten with PC_RERANK afterward.

Lists bulk import jobs (GET /bulk/imports). Returns job ids and states. Paginate with tokens if provided.

Updates an existing vector id with new values, sparse values, or metadata (POST /vectors/update). Returns acknowledgment. Use for partial fixes; use PC_UPSERT_VECTORS to replace full vectors wholesale.
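A sketch of a metadata-only update body, assuming the field names described above; the id, metadata, and namespace are illustrative:

```python
import json

# Hypothetical body for POST /vectors/update: patch metadata on one vector
# without resending its embedding.
update_body = {
    "id": "doc-42",
    "setMetadata": {"status": "reviewed"},  # merged into the existing metadata
    "namespace": "articles",
}

print(json.dumps(update_body))
```

To replace a vector's embedding wholesale, PC_UPSERT_VECTORS is the better fit.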

Deletes a backup (DELETE /backups/{id}). Returns confirmation. Use to reclaim storage; irreversible, unlike the read-only PC_DESCRIBE_BACKUP.

Deletes a collection (DELETE /collections/{name}). Returns status. Use to prune old snapshots; does not delete live indexes.

Describes one index and returns its host, dimension, metric, and status. Use the host for all data-plane tools (upsert/query). Use PC_LIST_INDEXES to discover names.

Gets bulk import job status (GET /bulk/imports/{id}). Returns progress and errors. Use with PC_START_BULK_IMPORT; cancel via PC_CANCEL_BULK_IMPORT.

Calls POST /describe_index_stats on the data plane host for namespace cardinalities and fullness hints. Returns per-namespace vector counts and index fullness. Use after PC_DESCRIBE_INDEX resolves host; lightweight alternative to PC_QUERY_VECTORS for capacity checks.
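To illustrate the capacity check described above, here is an assumed response shape with made-up numbers and namespace names, plus the kind of sanity check an agent might run on it:

```python
# Illustrative shape of a describe_index_stats response; all values are invented.
stats = {
    "namespaces": {
        "articles": {"vectorCount": 1200},
        "": {"vectorCount": 300},  # the default (unnamed) namespace
    },
    "dimension": 1536,
    "indexFullness": 0.1,
    "totalVectorCount": 1500,
}

# Cheap capacity check: per-namespace counts should sum to the total.
total = sum(ns["vectorCount"] for ns in stats["namespaces"].values())
print(total)
```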

Upserts vectors on the data plane (POST /vectors/upsert) with ids, values, sparse values, and metadata per record. Returns upsertedCount. Use for writes; use PC_QUERY_VECTORS for nearest neighbor search afterward.
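A sketch of the upsert body the fields above describe. The 3-dimensional embedding is a toy; real values must match the index dimension:

```python
import json

# Hypothetical body for POST /vectors/upsert with one record.
upsert_body = {
    "vectors": [
        {
            "id": "doc-1",
            "values": [0.1, 0.2, 0.3],      # toy embedding, not index-dimension sized
            "metadata": {"source": "faq"},  # arbitrary key/value metadata
        }
    ],
    "namespace": "articles",
}

print(json.dumps(upsert_body))
```

A successful call reports how many vectors were written via upsertedCount.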

Lists vector ids with pagination via POST /vectors/list (prefix, limit, paginationToken, namespace). Returns ids and next pagination token. Use to enumerate namespaces; pair paginationToken for next page.
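A sketch of the pagination loop this implies: the first request carries no token, and each follow-up resends the same parameters plus the token from the previous response. The prefix, limit, and token values are illustrative:

```python
# First page: no pagination token yet.
first_page = {"prefix": "doc-", "limit": 100, "namespace": "articles"}

# Next page: same parameters plus the token returned by the previous response
# (the token string here is a placeholder).
next_page = {**first_page, "paginationToken": "tok-from-previous-response"}

print(next_page)
```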

Shortcut for POST /query with a default topK of 5 and includeMetadata=true; pass the full JSON body minus topK and the defaults are merged in. Returns matches for “top documents” RAG. Use for quick retrieval; tune topK via PC_QUERY_VECTORS instead.

Calls Pinecone Inference embed API on the control plane (POST /embed). Returns vectors for inputs. Use when you want hosted embeddings without a separate vendor; still store results with PC_UPSERT_VECTORS.
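A sketch of the embed request, assuming the hosted-model naming shown here (the model name and input text are illustrative):

```python
import json

# Hypothetical body for POST /embed on the control plane.
embed_body = {
    "model": "multilingual-e5-large",               # example hosted embedding model
    "inputs": [{"text": "How do I reset my password?"}],
    "parameters": {"input_type": "query"},           # queries and passages may embed differently
}

print(json.dumps(embed_body))
```

The returned vectors can then be written to the index with PC_UPSERT_VECTORS.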

Runs POST /query with a metadata filter preset; pass the filter as a JSON string, e.g. {"genre":{"$eq":"sci-fi"}}, plus a vector or id. Returns filtered neighbors. Use when metadata gates results; use PC_QUERY_VECTORS for fully custom bodies.

Lists collections (snapshots) in the project via control plane. Returns ids for backup workflows. Use with PC_CREATE_COLLECTION; unrelated to vector namespaces.

Cancels an in-flight bulk import (POST /bulk/imports/{id}/cancel). Returns updated job. Use when imports stall; does not remove already upserted vectors.

Deletes every vector in a namespace via deleteAll=true (POST /vectors/delete). Returns confirmation. Use for test resets; prefer PC_DELETE_VECTORS with ids for surgical deletes.

Deletes an index and all its vectors permanently. Returns an empty or status payload. Use only after backups (PC_CREATE_BACKUP); cannot be undone, unlike PC_CONFIGURE_INDEX tweaks.

Creates a collection from an index snapshot (POST /collections). Returns collection name. Use for DR or cloning; use PC_DELETE_COLLECTION to remove.

Patches index settings such as replicas or pod type where supported (PATCH /indexes/{name}). Returns updated metadata. Use for scale changes; for vector CRUD use PC_UPSERT_VECTORS instead.

Deletes vectors by id list or filter, or all vectors in a namespace (POST /vectors/delete). Returns delete counts. Use PC_DELETE_ALL_VECTORS when you intend namespace wipe with deleteAll.
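Sketches of the three delete shapes described above; ids, filter values, and namespaces are illustrative:

```python
# Hypothetical bodies for POST /vectors/delete.
delete_by_ids = {"ids": ["doc-1", "doc-2"], "namespace": "articles"}
delete_by_filter = {"filter": {"status": {"$eq": "stale"}}, "namespace": "articles"}
delete_all = {"deleteAll": True, "namespace": "staging"}  # full namespace wipe

print(delete_by_ids, delete_by_filter, delete_all)
```

Prefer the id or filter forms for surgical deletes; the deleteAll form is what PC_DELETE_ALL_VECTORS wraps.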

Fetches vectors by id via GET /vectors/fetch with repeated ids query params. Returns vectors map and usage. Use when you know exact ids; for similarity use PC_QUERY_VECTORS.
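A sketch of building the fetch URL with repeated ids query params. The host is a hypothetical placeholder; resolve the real one with PC_DESCRIBE_INDEX:

```python
from urllib.parse import urlencode

# Hypothetical data-plane host for the index.
host = "https://my-index-abc123.svc.pinecone.io"

# Repeated "ids" parameters, as the fetch endpoint expects.
params = urlencode([("ids", "doc-1"), ("ids", "doc-2"), ("namespace", "articles")])
fetch_url = f"{host}/vectors/fetch?{params}"

print(fetch_url)
```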

Describes backup metadata (GET /backups/{id}). Returns status and source index. Use while monitoring long-running backups.

Creates a new index (serverless or pod spec) with POST /indexes. Returns creation status and eventual host. Use PC_DESCRIBE_INDEX to poll until ready; use PC_DELETE_INDEX to roll back mistakes.
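A sketch of a serverless create-index body. The name, dimension, cloud, and region are illustrative assumptions; the dimension must match your embedding model:

```python
import json

# Hypothetical body for POST /indexes (serverless spec shown; pod specs differ).
create_body = {
    "name": "articles",
    "dimension": 1536,  # must equal the embedding model's output size
    "metric": "cosine",
    "spec": {"serverless": {"cloud": "aws", "region": "us-east-1"}},
}

print(json.dumps(create_body))
```

After creation, poll PC_DESCRIBE_INDEX until the index reports ready and exposes its host.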

Nearest-neighbor search with POST /query using vector or id, topK, namespace, metadata filter, includeValues, includeMetadata. Returns matches with scores. Example filter: {"genre":{"$eq":"sci-fi"}}. Use PC_QUERY_BY_ID when you only have a stored id; use PC_QUERY_WITH_FILTER for metadata-heavy presets.
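Putting those fields together, a sketch of a full query body. The 3-dimensional vector is a toy, and the namespace and filter values are illustrative:

```python
import json

# Hypothetical body for POST /query combining the fields described above.
query_body = {
    "vector": [0.1, 0.2, 0.3],  # toy query embedding
    "topK": 5,
    "namespace": "articles",
    "filter": {"genre": {"$eq": "sci-fi"}},
    "includeValues": False,     # skip dense vectors for a lighter payload
    "includeMetadata": True,
}
# To query by a stored vector instead, drop "vector" and set "id": "doc-42".

print(json.dumps(query_body))
```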

Lists backups filtered client-side by index name substring when the API returns an array (best-effort). Returns matching subset or full list. Use when organizing DR by index; use PC_LIST_BACKUPS for the raw catalog.

Fetches one page of vector ids with a higher default limit (500) for bulk audits. Returns ids and paginationToken. Use for migrations; continue with PC_LIST_VECTORS using the token.

Queries using an existing vector id (POST /query with id set, no raw embedding). Returns neighbors like PC_QUERY_VECTORS. Use when embeddings live only in Pinecone; when you have the query embedding in memory use PC_QUERY_VECTORS with vector.

Lists backups for the project (GET /backups). Returns backup ids and timestamps. Paginate using API tokens if returned. Use before PC_DESCRIBE_BACKUP.

Calls POST /rerank with top_n defaulted to 3 and return_documents true for quick reranking. Returns top matches with scores. Use for fast UX; use PC_RERANK for full parameter control.

Creates a backup of an index (POST /backups). Returns backup id. Use for compliance snapshots; restore with PC_CREATE_INDEX_FROM_BACKUP when supported.

Lists every index in the project via the Pinecone control plane. Returns names, hosts, dimension, metric, and readiness. Paginate client-side if you manage many indexes; use PC_DESCRIBE_INDEX before data-plane calls.
Do I need my own developer credentials to use Pinecone MCP with Adopt AI?
No, you can get started immediately using Adopt AI's built-in Pinecone integration. For production use, we recommend configuring your own API keys for greater control and security.
Can I connect Pinecone with other apps through Adopt AI?
Yes! Adopt AI supports multi-app workflows, so your AI agents can seamlessly move data between Pinecone and LLM platforms, data pipelines, search tools, and more.
Is Adopt AI secure?
Absolutely. Adopt AI is SOC 2 Type 2 certified and ISO/IEC 27001 compliant, and adheres to EU GDPR, CCPA, and HIPAA standards. All data is encrypted in transit and at rest, ensuring the confidentiality, integrity, and availability of your data. Learn more here.
What happens if the Pinecone API changes?
Adopt AI maintains and updates all integrations automatically, so your agents always work with the latest API versions, with no manual maintenance required.
Do I need coding skills to set up the Pinecone integration?
Not at all. Adopt AI's zero-shot API discovery means your agents understand Pinecone's schema on first contact. Setup takes minutes with no code required.
How do I set up custom Pinecone MCP in Adopt AI?
For a step-by-step guide on creating and configuring your own Pinecone API keys with Adopt AI, see here.