
Overview: Adds a message to a thread (user/assistant). content can be a string or JSON for multimodal input through OpenAI.
Common Use Cases:
1. Intelligent Content Generation
AI agents call OpenAI models to generate marketing copy, documentation, email drafts, and reports, integrating generated content directly into connected workflows.
2. Automated Data Extraction & Summarization
AI agents use OpenAI to parse unstructured data (emails, PDFs, transcripts), extract key information, and produce structured summaries for downstream processing.
3. Conversational AI & Chatbot Workflows
AI agents leverage OpenAI's chat models to power customer-facing chatbots, internal helpdesks, and conversational interfaces across your tech stack.
4. Code Generation & Review
AI agents use OpenAI to generate code snippets, review pull requests, suggest refactors, and document functions, accelerating developer productivity.
5. Embedding & Classification Pipelines
AI agents call OpenAI embedding and classification APIs to power semantic search, content categorization, and sentiment analysis across enterprise data.

Deletes a fine-tuned model by id.

Transcribes audio to text (Whisper). Real clients require a multipart file; this tool accepts JSON metadata only, so use a file_id from OAI_UPLOAD_FILE (purpose assistants) for pre-uploaded audio.

Attaches an uploaded file id to an assistant for retrieval. Use after OAI_UPLOAD_FILE.

Creates a new thread and run in one call (assistant_id required, optional thread messages JSON).

Classifies content against OpenAI moderation policy. Returns category scores.
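The category scores returned by the moderation endpoint can be thresholded client-side to decide what to block. A minimal sketch, assuming a response with the results/category_scores shape; the score values below are made-up sample data, not real API output:

```python
# Hypothetical moderation response; category names follow the OpenAI
# moderation schema, but the scores are invented sample values.
sample_response = {
    "results": [{
        "flagged": False,
        "category_scores": {"harassment": 0.02, "violence": 0.71, "self-harm": 0.001},
    }]
}

def categories_over(response, threshold=0.5):
    """Return the category names whose score exceeds the threshold."""
    scores = response["results"][0]["category_scores"]
    return sorted(name for name, score in scores.items() if score > threshold)

flagged = categories_over(sample_response)  # ["violence"]
```

Pick thresholds per category based on your own tolerance; the API's `flagged` boolean uses OpenAI's defaults.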

Deletes an assistant.

TTS preset returning opus format for low-latency playback. Same as OAI_TEXT_TO_SPEECH with response_format opus.

Chat completion with an integer seed for more reproducible sampling. Returns a standard completion object. Use when debugging or A/B testing prompts. For production diversity, omit seed and use OAI_CHAT_COMPLETION.

Calls chat completions with stream true (SSE). Note: this MCP returns the first chunk as JSON text, not a live stream to the user. Use when integrating streaming clients; for full non-stream responses use OAI_CHAT_COMPLETION.

Submits tool outputs to continue a run waiting on tool_calls.

Updates message metadata.

Gets thread metadata.

Updates thread metadata or tool_resources.

Creates variations of an image (multipart). Use for exploring alternatives to a seed image. Same limitation as OAI_EDIT_IMAGE for MCP binary upload.

Lists uploaded files with optional purpose filter. Returns file metadata array. Use to find ids for OAI_GET_FILE or vector store attachment.

Deletes a thread and its messages.

Cancels a batch.

Lists messages in a thread.

Lists runs for a thread.

Creates a Batch API job from a JSONL input file id. Returns batch id.
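Each line of the JSONL input file is one request envelope (custom_id, method, url, body). A sketch of building that file, assuming the chat-completions batch format; the model id and custom_ids are illustrative:

```python
import json

def batch_line(custom_id, model, messages):
    """Serialize one Batch API request as a JSONL line."""
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {"model": model, "messages": messages},
    })

lines = [
    batch_line("req-1", "gpt-4o-mini", [{"role": "user", "content": "Hello"}]),
    batch_line("req-2", "gpt-4o-mini", [{"role": "user", "content": "Hi"}]),
]
jsonl = "\n".join(lines)  # write to disk, then upload with purpose "batch"
```

The resulting file would be uploaded via OAI_UPLOAD_FILE before passing its id to this tool.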

Lists checkpoints for a fine-tuning job.

Translates spoken audio to English text. Same multipart constraint as OAI_TRANSCRIBE_AUDIO.

Uploads a file (multipart). Pass file_path on the server filesystem, purpose (fine-tune, assistants, batch, vision). Returns file id and bytes. Use before fine-tuning or Assistants file search.

Embedding preset using text-embedding-3-small with dimensions=512 for smaller vectors. Use when storage cost matters.

Image generation preset with quality=hd for DALL·E 3. Returns high-detail images. Use when standard quality is insufficient.

Gets vector store file batch status.

Polls OAI_GET_FILE_BATCH until completed/failed/cancelled.

Creates a fine-tuning job from a training file id. Returns job id and status. Use OAI_LIST_FINETUNES to monitor.

Embeddings with input forced to JSON array of strings (batch). Returns multiple embedding objects in data. Use when embedding many snippets in one request.
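"Input forced to a JSON array" just means a lone string gets wrapped in a list before the call. A minimal sketch of that normalization; the model name is an assumption:

```python
def normalize_batch_input(text_or_texts):
    """Coerce the input into a list of strings, as the batch preset expects."""
    if isinstance(text_or_texts, str):
        return [text_or_texts]
    return list(text_or_texts)

payload = {
    "model": "text-embedding-3-small",  # assumed model id
    "input": normalize_batch_input("single snippet"),
}
```

The response's data array then holds one embedding object per input string, in order.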

Updates run metadata.

Creates a vector store for file search. Returns id.

Removes a file from a vector store.

Lists events for a fine-tuning job (metrics, checkpoints).

Gets vector store by id.

Polls OAI_GET_FINETUNE until terminal status (succeeded/failed/cancelled) or max_wait_ms. Returns final job object.
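The poll-until-terminal pattern used by this tool (and the batch/file-batch poll tools) can be sketched generically. The helper and the stubbed status sequence below are illustrative, not the actual MCP implementation:

```python
import time

TERMINAL = {"succeeded", "failed", "cancelled"}

def poll_until_terminal(get_job, max_wait_ms=10_000, interval_ms=0):
    """Call get_job() until its status is terminal or max_wait_ms elapses."""
    deadline = time.monotonic() + max_wait_ms / 1000
    while True:
        job = get_job()
        if job["status"] in TERMINAL or time.monotonic() >= deadline:
            return job
        time.sleep(interval_ms / 1000)

# Stub standing in for the real OAI_GET_FINETUNE call:
statuses = iter(["running", "running", "succeeded"])
job = poll_until_terminal(lambda: {"status": next(statuses)})
```

In real use the interval would be seconds, and hitting max_wait_ms returns the last non-terminal job object rather than raising.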

Lists steps for a run (tool calls, messages).

Deletes a vector store.

Gets a single run step.

Creates a batch attach job for many files to a vector store.

Removes a file attachment from an assistant (does not delete the file from storage).

Lists batches.

Deletes a message.

Converts text to speech audio (JSON body). Returns binary; MCP surfaces JSON error if non-JSON response. Prefer model tts-1 or tts-1-hd.

Chat completion with logprobs enabled for token-level likelihoods. Returns logprob content in the response. Use for debugging model uncertainty; for normal chat without logprobs, use OAI_CHAT_COMPLETION.

Cancels an in-progress run.

Creates a chat completion with full control (model, messages, temperature, max_tokens, tools, tool_choice, response_format, stream, seed, logprobs). Returns assistant message(s), usage, and optional tool_calls. Use for general chat or when you need every OpenAI parameter. For JSON-only output use OAI_CHAT_JSON_MODE. For streaming SSE use OAI_CHAT_STREAM.
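A sketch of the request body this tool forwards, showing the main parameters named above; the model id and message contents are assumptions for illustration:

```python
payload = {
    "model": "gpt-4o",  # assumed model id
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize MCP in one sentence."},
    ],
    "temperature": 0.2,
    "max_tokens": 200,
    "seed": 42,          # optional: steadier sampling across runs
    "logprobs": False,   # flip on for token-level likelihoods
}
```

Optional fields (tools, tool_choice, response_format, stream) are simply omitted when unused.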

Forces response_format type json_object so the model returns parseable JSON in message.content. Returns same shape as chat completions plus usage. Use when you need structured extraction; validate JSON client-side. For arbitrary chat without JSON guarantee use OAI_CHAT_COMPLETION.
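A sketch of the JSON-mode payload plus the client-side validation step the description recommends. The model id is an assumption, and the simulated message.content string is made-up sample output, not a real response:

```python
import json

payload = {
    "model": "gpt-4o",  # assumed model id
    "messages": [{"role": "user", "content": "Extract name and email as JSON."}],
    "response_format": {"type": "json_object"},
}

def parse_json_content(message_content):
    """Validate message.content client-side; raises ValueError on bad JSON."""
    return json.loads(message_content)

# Simulated message.content from a JSON-mode response:
extracted = parse_json_content('{"name": "Ada", "email": "ada@example.com"}')
```

JSON mode guarantees syntactically valid JSON, not a particular schema, so schema validation stays on the client.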

Cancels a running fine-tune job.

Lists files attached to an assistant (knowledge retrieval). Returns file ids and statuses. Use before detaching with OAI_DELETE_ASSISTANT_FILE.

Creates an Assistant (v2) with model, instructions, tools. Returns assistant id.

Lists models available to the key. Returns id, owned_by, created.

Attaches a file to a vector store.

Updates vector store name or expiration.

Image generation with n>1 for multiple candidates (DALL·E 2 style counts). Returns multiple image objects.

Lists fine-tuning jobs with pagination. Returns job statuses. Use after OAI_CREATE_FINETUNE.

Creates a thread with optional initial messages JSON.

Starts a run for an assistant on a thread.

Edits an image with a mask (multipart upload). Returns revised images. Use when inpainting; requires source PNG and mask. For variations without prompt-based edit use OAI_VARY_IMAGE.

Gets one fine-tuning job by id including fine_tuned_model when succeeded.

Lists files in a vector store.

Retrieves file metadata by id. Returns bytes, purpose, status. Use before downloading content.

Gets batch status and output file ids when completed.

Deletes a file by id. Returns deleted:true. Use to clean up after fine-tune or failed uploads.

Downloads raw file bytes (text or binary as string). Use for small text training files.

Embeddings with encoding_format base64 for smaller payloads over the wire. Returns base64-encoded vectors. Use when float arrays are too heavy; decode client-side.
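Client-side decoding of a base64 vector can be done with the standard library, assuming the common little-endian float32 packing; the three-dimensional sample vector below is fabricated for the round-trip demo:

```python
import base64
import struct

def decode_base64_embedding(b64_vector):
    """Decode a base64 string of little-endian float32s into a list of floats."""
    raw = base64.b64decode(b64_vector)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Round-trip demo with a made-up 3-dimensional vector:
sample = base64.b64encode(struct.pack("<3f", 0.5, -1.0, 2.0)).decode()
vector = decode_base64_embedding(sample)  # [0.5, -1.0, 2.0]
```

Base64 payloads are roughly a quarter the size of the equivalent JSON float arrays, which is the point of this preset.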

Polls OAI_GET_BATCH until completed/failed/cancelled/expired.

Creates embeddings for input string or array of strings. Returns embedding vectors and usage tokens. Use for semantic search pipelines. For many chunks in one call use OAI_BATCH_EMBEDDINGS.
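In a semantic search pipeline the returned vectors are typically compared by cosine similarity. A self-contained sketch of that scoring step, using toy two-dimensional vectors in place of real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for real embedding output:
score = cosine_similarity([1.0, 0.0], [1.0, 0.0])  # 1.0 (identical direction)
```

Rank candidate documents by this score against the query embedding; orthogonal vectors score 0, identical directions score 1.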

Updates assistant fields partially.

Creates a run then polls OAI_GET_RUN until completed/failed/cancelled/expired or timeout. Returns final run.

Retrieves one model descriptor.

Gets one message.

Legacy text completions (not chat) for a single prompt string. Returns choices with text and usage. Use only for legacy models; prefer OAI_CHAT_COMPLETION for instruct models.

Retrieves assistant by id.

Lists assistants with pagination.

Same endpoint as OAI_CHAT_COMPLETION but expects tools as a JSON string and sets tool_choice to auto unless overridden. Returns completion with possible tool_calls for agent loops. Use when the model must call functions. For plain text without tools use OAI_CHAT_COMPLETION.
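A sketch of a payload for this tool, with tools serialized as a JSON string as the description requires. The get_weather function schema is purely illustrative, and the model id is an assumption:

```python
import json

# Hypothetical function schema; the name and parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "gpt-4o",  # assumed model id
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": json.dumps(tools),  # this tool expects a JSON string, not a list
    "tool_choice": "auto",       # the default unless overridden
}
```

When the response contains tool_calls, execute them and feed the results back as tool-role messages to continue the agent loop.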

Gets run status and usage.

Composite: OAI_UPLOAD_FILE from file_path then OAI_ADD_FILE_TO_VECTOR_STORE. Returns both responses.

Generates images from a text prompt (DALL·E 3 / 2). Returns b64_json or url per response_format. Use for creative assets. For edits use OAI_EDIT_IMAGE.
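A sketch of an image-generation request body; the model id, prompt, and size are illustrative assumptions:

```python
payload = {
    "model": "dall-e-3",  # assumed model id
    "prompt": "A watercolor fox in a snowy forest",
    "n": 1,               # DALL-E 3 generates one image per request
    "size": "1024x1024",
    "response_format": "b64_json",  # or "url" for a hosted link
}
```

With b64_json the image arrives inline and must be base64-decoded before saving; with url you get a temporary hosted link instead.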

Lists vector stores.
Do I need my own developer credentials to use OpenAI MCP with Adopt AI?
No, you can get started immediately using Adopt AI's built-in OpenAI integration. For production use, we recommend configuring your own API keys for greater control and security.
Can I connect OpenAI with other apps through Adopt AI?
Yes! Adopt AI supports multi-app workflows, so your AI agents can seamlessly move data between OpenAI and CRMs, databases, content platforms, and more.
Is Adopt AI secure?
Absolutely. Adopt AI is SOC 2 Type 2 certified and ISO/IEC 27001 compliant, and adheres to EU GDPR, CCPA, and HIPAA standards. All data is encrypted in transit and at rest, ensuring the confidentiality, integrity, and availability of your data. Learn more here.
What happens if the OpenAI API changes?
Adopt AI maintains and updates all integrations automatically, so your agents always work with the latest API versions, no manual maintenance required.
Do I need coding skills to set up the OpenAI integration?
Not at all. Adopt AI's zero-shot API discovery means your agents understand OpenAI's schema on first contact. Setup takes minutes with no code required.
How do I set up custom OpenAI MCP in Adopt AI?
For a step-by-step guide on creating and configuring your own OpenAI API keys with Adopt AI, see here.