
Creates a dataset via POST /v1/datasets (multipart or JSON per the API; pass name, type, and keepalive as a JSON string in the body fields). Returns the new dataset id. Use for uploads; follow the Cohere docs for file parts. Use CO_DELETE_DATASET to remove failed uploads.
Benefits:
Common Use Cases:
1. Enterprise Search & Retrieval
AI agents use Cohere's embedding models to power semantic search across enterprise knowledge bases, documents, and support articles with high relevance.
2. Content Classification & Routing
AI agents leverage Cohere's classification API to automatically categorize support tickets, emails, and documents, routing them to the right teams instantly.
3. Multilingual Text Processing
AI agents use Cohere's multilingual models to process, translate, and analyze text across languages, enabling global workflows without language barriers.
4. RAG-Powered Knowledge Assistants
AI agents combine Cohere's retrieval and generation capabilities to build knowledge assistants that answer questions grounded in your organization's data.
5. Text Summarization & Extraction
AI agents call Cohere's summarize and generate endpoints to condense long documents, extract key entities, and produce structured outputs for downstream workflows.

Starts OAuth authorization for a connector via POST /v1/connectors/{id}/oauth/authorize. Returns a redirect or auth payload per the API. Use for secured data sources; unrelated to CO_TEST_CONNECTOR, which revalidates via PATCH.

Deletes a fine-tuned model with DELETE /v1/finetuning/finetuned-models/{id}. Returns status. Use to retire custom models; destructive compared to CO_GET_FINETUNED_MODEL.

Lists models with GET /v1/models using page_size and page_token pagination plus optional endpoint filter. Returns model ids and capabilities. Use to discover IDs for CO_CHAT or CO_EMBED; paginate with next_page_token.
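A minimal sketch of that pagination loop. The `fetch` callable is injected (e.g. a closure over `requests.get` on GET /v1/models) so the loop stays testable without network access; the `models` and `next_page_token` response fields follow the description above.

```python
def list_all_models(fetch, endpoint=None, page_size=50):
    """Collect models across pages using page_size/page_token pagination.

    fetch(params) must return the parsed JSON for one page of GET /v1/models;
    iteration stops when the response carries no next_page_token.
    """
    models, token = [], None
    while True:
        params = {"page_size": page_size}
        if endpoint:
            params["endpoint"] = endpoint  # e.g. "chat" or "embed"
        if token:
            params["page_token"] = token
        page = fetch(params)
        models.extend(page.get("models", []))
        token = page.get("next_page_token")
        if not token:
            return models
```

Injecting `fetch` rather than hard-wiring an HTTP client keeps the pagination logic reusable across SDKs and easy to unit-test.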

Updates fine-tuned model metadata via PATCH /v1/finetuning/finetuned-models/{id}. Returns updated record. Use for renames or settings; use CO_DELETE_FINETUNED_MODEL to remove.

Forces JSON object responses via response_format on POST /v2/chat. Returns message content as parseable JSON when the model complies plus usage metadata. Use for extraction, classification-as-JSON, or schemas; validate output. Use CO_CHAT when free-form prose is acceptable or CO_CHAT_WITH_TOOLS when calling APIs instead of JSON mode.
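A hedged sketch of building that request body and validating the reply, assuming the `response_format` field shape described above; the model name in any real call is up to you. Validation matters because JSON mode does not guarantee compliance.

```python
import json

def chat_json_payload(model, user_text):
    """Body for POST /v2/chat that requests a JSON object response."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "response_format": {"type": "json_object"},
    }

def parse_reply(reply_text):
    """Validate the model's reply; returns None when it is not valid JSON."""
    try:
        return json.loads(reply_text)
    except json.JSONDecodeError:
        return None
```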

Reads organization dataset storage usage via GET /v1/datasets/usage. Returns quota consumption metrics for datasets and embed jobs storage. Use for capacity planning; for per-call token counts inspect chat response usage fields instead of this endpoint.

Cancels an embed job with POST /v1/embed-jobs/{id}/cancel. Returns updated job. Use to abort long batches; unlike CO_GET_EMBED_JOB this mutates state.

Lists embed jobs with GET /v1/embed-jobs and pagination. Returns job statuses. Use to monitor queues; use CO_GET_EMBED_JOB for one id.

Registers a connector with POST /v1/connectors; Cohere tests the endpoint during creation. Returns the new connector. Use for RAG search integrations; use CO_UPDATE_CONNECTOR to change URL or name.

Embeds document chunks with POST /v2/embed using input_type preset search_document. Returns embedding vectors aligned to input order for indexing pipelines. Use when building a vector store; use CO_EMBED_QUERY for user queries and CO_EMBED for custom input_type such as classification.
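An indexing-side payload might look like the following sketch, using the `search_document` preset named above; `embedding_types` is shown with `float` as one plausible choice.

```python
def embed_documents_payload(model, texts):
    """Body for POST /v2/embed when indexing a corpus of chunks.

    input_type search_document marks these texts as the stored side of
    a retrieval pair; returned vectors align with the order of texts.
    """
    return {
        "model": model,
        "texts": texts,
        "input_type": "search_document",
        "embedding_types": ["float"],
    }
```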

Creates an async batch job via POST /v2/batches with JSON body per Cohere batches API. Returns batch id. Use for high-volume offline processing; poll with CO_V2_GET_BATCH.

Fetches connector metadata with GET /v1/connectors/{id}. Returns URL and configuration. Use to verify registration; use CO_AUTHORIZE_CONNECTOR for OAuth flows.

Retrieves dataset details with GET /v1/datasets/{id}. Returns status, schema, and validation info. Use before launching embed jobs or fine-tunes; use CO_LIST_DATASETS to discover IDs.

Embeds a query string or batch via POST /v2/embed with input_type search_query. Returns vectors suited for similarity search against document embeddings. Use at retrieval time; pair with CO_EMBED_DOCUMENTS outputs in your vector DB. Use CO_EMBED if you need clustering or classification input types instead.
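Once you have a `search_query` vector and your stored `search_document` vectors, retrieval is a similarity ranking. A self-contained sketch using cosine similarity (one common choice; your vector DB may use dot product instead):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=3):
    """Rank document vectors against a query vector.

    Returns (index, score) pairs, best first, so indices map back to
    the original chunk order from the embed call.
    """
    scored = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]
```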

Starts fine-tuning with POST /v1/finetuning/finetuned-models and a request body per the API. Returns a job or model handle. Use with CO_LIST_FINETUNING_EVENTS to track progress; use CO_CREATE_FINETUNED_MODEL_AND_WAIT to block until a terminal state.

Creates embeddings via POST /v2/embed for texts or images with model, input_type, embedding_types, and truncate. Returns float/int8/uint8/binary vectors per item. Use input_type search_document when indexing corpora and search_query at query time. Prefer CO_EMBED_DOCUMENTS or CO_EMBED_QUERY when you want those presets without remembering enum values.

Deletes a dataset with DELETE /v1/datasets/{id}. Returns confirmation. Use to clean up obsolete corpora; irreversible unlike CO_GET_DATASET.

Creates a v2 batch then polls GET /v2/batches/{id} until terminal status or timeout. Returns final batch JSON. Use in CI; prefer CO_V2_CREATE_BATCH for manual monitoring.

Updates connector fields via PATCH /v1/connectors/{id}. Returns updated connector. Use to fix URLs; may trigger revalidation server-side. Use CO_DELETE_CONNECTOR to remove.

Lists connectors with GET /v1/connectors using limit/offset pagination. Returns connector ids ordered by recency. Use before CO_GET_CONNECTOR; use CO_CREATE_CONNECTOR to register a new search endpoint.

Deletes a connector with DELETE /v1/connectors/{id}. Returns confirmation. Use to unregister search endpoints; unlike CO_GET_CONNECTOR this removes the resource.

Calls POST /v2/chat with tools JSON parsed and tool_choice defaulted to auto so the model may emit tool_calls. Returns the same chat response envelope as CO_CHAT plus tool invocation payloads when applicable. Use for agent workflows that call your functions; use CO_CHAT when no tools are needed. For structured JSON without tools prefer CO_CHAT_JSON_MODE.
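A sketch of the tools payload, assuming the function-tool shape used by the v2 chat API; `get_weather` and its schema are purely hypothetical names for illustration.

```python
def chat_tools_payload(model, messages, tools):
    """Body for POST /v2/chat with function tools.

    tool_choice is omitted so the API default ("auto") applies and
    the model may emit tool_calls when it decides a tool is needed.
    """
    return {"model": model, "messages": messages, "tools": tools}

# Hypothetical example tool definition (name and schema are assumptions):
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}
```

In the agent loop, you would execute any returned `tool_calls` yourself and append the results as tool-role messages before calling chat again.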

Reranks documents for a query using POST /v2/rerank with model, query, documents, top_n, return_documents. Returns ordered results with relevance scores. Use after vector recall to improve precision; use CO_RERANK_SIMPLE for a fast top-3 default. For single-score similarity without a query list use embeddings instead.
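A sketch of assembling that rerank request from the parameters listed above; `documents` is shown as a plain list of strings.

```python
def rerank_payload(model, query, documents, top_n=None, return_documents=False):
    """Body for POST /v2/rerank.

    top_n is omitted when None so the API's own default applies;
    return_documents controls whether result entries echo the text.
    """
    body = {
        "model": model,
        "query": query,
        "documents": documents,
        "return_documents": return_documents,
    }
    if top_n is not None:
        body["top_n"] = top_n
    return body
```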

Revalidates connector configuration by PATCH /v1/connectors/{id} with an updated url (or name) so Cohere re-runs connectivity checks. Returns updated connector object. Use after changing infra; for read-only inspection use CO_GET_CONNECTOR instead.

Transcribes audio via POST /v2/audio/transcriptions with multipart or JSON per API. Returns text transcript. Use for speech input pipelines; unrelated to CO_CHAT text flows.

Runs Cohere chat completion on POST /v2/chat with full controls (model, messages, tools, documents, response_format, sampling, safety_mode, stream). Returns assistant text, citations, tool calls, token usage, and finish reason. Use as the primary chat entry point when you need every v2 parameter. Prefer CO_CHAT_JSON_MODE for strict JSON, CO_CHAT_STREAM when you must stream tokens, or CO_CHAT_WITH_TOOLS for agentic tool loops.
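A minimal sketch of assembling the full request. Reading the key from a `CO_API_KEY` environment variable is an assumption of this sketch, and the commented-out `requests.post` line shows where the network call would go; extra keyword options (temperature, documents, safety_mode, and so on) pass through verbatim.

```python
import os

def chat_request(model, messages, **options):
    """Assemble URL, headers, and body for POST /v2/chat.

    options such as temperature, documents, response_format, or
    safety_mode are merged into the body unchanged.
    """
    url = "https://api.cohere.com/v2/chat"
    headers = {
        "Authorization": f"bearer {os.environ.get('CO_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages, **options}
    return url, headers, body

# url, headers, body = chat_request("some-model", messages, temperature=0.3)
# resp = requests.post(url, headers=headers, json=body)  # real call goes here
```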

Creates a fine-tune then polls GET /v1/finetuning/finetuned-models/{id} until status is terminal or timeout. Returns final model payload. Use when scripts must block; prefer CO_CREATE_FINETUNED_MODEL for fire-and-forget.

Runs RAG-style chat on POST /v2/chat by supplying a documents JSON array alongside messages. Returns grounded answers with optional citations depending on model behavior. Use when you already have chunked documents from your datastore; use CO_EMBED plus a vector DB when you must retrieve docs dynamically first.
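A sketch of wrapping pre-retrieved chunks into that `documents` array; the `id`/`data` entry shape follows the v2 documents parameter as this sketch assumes it, and the `doc-{i}` ids are arbitrary.

```python
def chat_with_docs_payload(model, question, chunks):
    """Body for POST /v2/chat grounded on already-retrieved text chunks.

    Each chunk becomes one document entry so the model can cite it;
    ids only need to be unique within the request.
    """
    docs = [{"id": f"doc-{i}", "data": {"text": t}} for i, t in enumerate(chunks)]
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "documents": docs,
    }
```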

Classifies inputs with few-shot examples via POST /v1/classify. Returns predicted labels and confidences per input. Use for short text classification when you can supply labeled examples; for generative labeling use CO_CHAT_JSON_MODE instead.
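A sketch of the few-shot request body, with examples supplied as (text, label) pairs and converted to the `text`/`label` objects the classify endpoint expects.

```python
def classify_payload(model, inputs, examples):
    """Body for POST /v1/classify.

    inputs are the texts to label; examples are (text, label) pairs
    providing the few-shot supervision.
    """
    return {
        "model": model,
        "inputs": inputs,
        "examples": [{"text": t, "label": l} for t, l in examples],
    }
```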

Lists embed-compatible models via GET /v1/models?endpoint=embed. Returns models suitable for CO_EMBED. Use when building RAG; use CO_LIST_RERANK_MODELS for rerankers.

Creates a batch embed job via POST /v1/embed-jobs. Returns job id for polling. Use for large corpora; pair with CO_GET_EMBED_JOB or CO_CREATE_EMBED_JOB_AND_WAIT.

Lists training events for a model via GET /v1/finetuning/finetuned-models/{finetuned_model_id}/events. Returns chronological log lines. Use for debugging failed jobs; use CO_LIST_FINETUNING_CHECKPOINTS for step metrics.

Gets one model via GET /v1/models/{model}. Returns details for routing and deprecation flags. Use after CO_LIST_MODELS; for filtered lists use CO_LIST_CHAT_MODELS shortcuts.

Lists batches with GET /v2/batches and limit/after pagination when supported. Returns batch job summaries. Use to audit workloads; use CO_V2_GET_BATCH for detail.

Legacy text generation via POST /v1/generate with prompt-level controls (max_tokens, temperature, p, k, stop_sequences, return_likelihoods). Returns generations and likelihood metadata when requested. Use when integrating older Cohere examples; prefer CO_CHAT for conversational models and tool use.

Converts token IDs back to text via POST /v1/detokenize. Returns the joined string for debugging or inspection. Use after CO_TOKENIZE; not needed for normal chat flows.

Cancels an in-progress batch via POST /v2/batches/{id}:cancel per Cohere v2 batches API. Returns cancel confirmation payload. Use to stop runaway jobs; use CO_V2_GET_BATCH to confirm status afterward.

Reads training step metrics via GET /v1/finetuning/finetuned-models/{finetuned_model_id}/training-step-metrics (checkpoint-style curves). Returns metric series for loss and quality. Use alongside CO_LIST_FINETUNING_EVENTS; for model metadata alone use CO_GET_FINETUNED_MODEL.

Lists rerank-compatible models via GET /v1/models?endpoint=rerank. Returns rerank SKUs for CO_RERANK. Use when selecting rankers; use CO_LIST_MODELS for unfiltered catalogs.

Lists fine-tuned models with GET /v1/finetuning/finetuned-models and optional pagination params. Returns model ids and states. Use before CO_GET_FINETUNED_MODEL; use CO_CREATE_FINETUNED_MODEL to start training.

Fetches one fine-tuned model via GET /v1/finetuning/finetuned-models/{id}. Returns settings and status. Use to poll training completion; use CO_LIST_FINETUNING_EVENTS for detailed logs.

Sets stream=true on POST /v2/chat to receive newline-delimited streamed events (raw body; not parsed as SSE here). Returns streamed chunks as emitted by the API. Use for low-latency UX; for a single final string use CO_CHAT with stream false. For JSON guarantees, combine streaming carefully with CO_CHAT_JSON_MODE semantics.
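Since the body arrives as raw newline-delimited events, the client must parse lines itself. A sketch under the assumption that text chunks arrive as `content-delta` events carrying `delta.message.content.text` (verify the exact event shape against the v2 streaming docs):

```python
import json

def iter_stream_events(lines):
    """Parse newline-delimited JSON events from a raw streamed body.

    lines is any iterable of byte strings, e.g. resp.iter_lines() from
    an HTTP client with streaming enabled; blank keep-alive lines skip.
    """
    for raw in lines:
        if not raw:
            continue
        yield json.loads(raw)

def collect_text(events):
    """Concatenate text deltas into the final assistant message."""
    parts = []
    for ev in events:
        if ev.get("type") == "content-delta":
            parts.append(ev["delta"]["message"]["content"]["text"])
    return "".join(parts)
```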

Gets embed job status via GET /v1/embed-jobs/{id}. Returns progress and output references. Use for polling; use CO_CANCEL_EMBED_JOB to stop.

Summarizes long text via POST /v1/summarize with length, format, extractiveness, and optional additional_command. Returns a concise summary string. Use for document digests; for chat-style Q&A use CO_CHAT_WITH_DOCS or CO_CHAT instead.

Gets one batch job via GET /v2/batches/{id}. Returns status and output handles. Use after CO_V2_CREATE_BATCH; use CO_V2_LIST_BATCHES to discover ids.

Creates an embed job then polls GET /v1/embed-jobs/{id} until complete or timeout. Returns final job record. Use in automation; prefer CO_CREATE_EMBED_JOB for interactive flows.
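The poll-until-terminal pattern can be sketched as below. The terminal status names are assumptions (check the embed-jobs API for the exact set), and `get_job` and `sleep` are injected so the loop is testable without network or real waiting.

```python
import time

TERMINAL = {"complete", "failed", "cancelled"}  # assumed terminal statuses

def wait_for_job(get_job, timeout_s=600, poll_s=5, sleep=time.sleep):
    """Poll a job (e.g. GET /v1/embed-jobs/{id}) until a terminal status.

    get_job() must return the parsed job JSON; raises TimeoutError
    once timeout_s of polling has elapsed without reaching TERMINAL.
    """
    waited = 0
    while True:
        job = get_job()
        if job.get("status") in TERMINAL:
            return job
        if waited >= timeout_s:
            raise TimeoutError(f"job still {job.get('status')} after {timeout_s}s")
        sleep(poll_s)
        waited += poll_s
```

The same skeleton applies to the fine-tune and batch wait variants, with the endpoint and status set swapped.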

Calls POST /v2/rerank with top_n fixed to 3 unless overridden for quick triage of the best few chunks. Returns the top results and scores only. Use for fast UX or LLM context trimming; use CO_RERANK when you need custom top_n or return_documents toggles.

Lists datasets with GET /v1/datasets using limit and pagination before/after cursors as supported. Returns dataset metadata for training or eval. Use to browse corpora; use CO_GET_DATASET for one record. Paginate with next cursors from responses when present.

Validates the API key and returns metadata via GET /v1/check-api-key. Returns organization or key hints. Use for onboarding diagnostics; routine auth validation uses validateCredentials instead.

Tokenizes text via POST /v1/tokenize for a given model. Returns token IDs for budgeting or analysis. Use before estimating costs; use CO_DETOKENIZE to reconstruct text from IDs.

Lists chat-compatible models using GET /v1/models?endpoint=chat (and optional default_only). Returns paginated models for conversational use. Use to pick a CO_CHAT model; use CO_LIST_EMBED_MODELS for embedding SKUs.
Do I need my own developer credentials to use Cohere MCP with Adopt AI?
No, you can get started immediately using Adopt AI's built-in Cohere integration. For production use, we recommend configuring your own API keys for greater control and security.
Can I connect Cohere with other apps through Adopt AI?
Yes! Adopt AI supports multi-app workflows, so your AI agents can seamlessly move data between Cohere and search engines, knowledge bases, CRMs, and more.
Is Adopt AI secure?
Absolutely. Adopt AI is SOC 2 Type 2 certified and ISO/IEC 27001 compliant, and adheres to EU GDPR, CCPA, and HIPAA standards. All data is encrypted in transit and at rest, ensuring the confidentiality, integrity, and availability of your data.
What happens if the Cohere API changes?
Adopt AI maintains and updates all integrations automatically, so your agents always work with the latest API versions, no manual maintenance required.
Do I need coding skills to set up the Cohere integration?
Not at all. Adopt AI's zero-shot API discovery means your agents understand Cohere's schema on first contact. Setup takes minutes with no code required.
How do I set up custom Cohere MCP in Adopt AI?
For a step-by-step guide on creating and configuring your own Cohere API keys with Adopt AI, see the Adopt AI documentation.