Answers natural language questions over a structured table (TAPAS-style). Pass inputs JSON with query and table fields. Use for spreadsheet QA; for free-text RAG use HF_QUESTION_ANSWERING or HF_TEXT_GENERATION.
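A minimal Python sketch of the expected payload shape, assuming the standard Inference API convention for the table-question-answering task (the helper name and sample table are illustrative, not part of the tool):

```python
import json

def build_table_qa_payload(query, table):
    # TAPAS-style models expect every table cell as a string,
    # including numeric columns.
    return {
        "inputs": {
            "query": query,
            "table": {col: [str(v) for v in cells] for col, cells in table.items()},
        }
    }

payload = build_table_qa_payload(
    "Which city has the highest revenue?",
    {"City": ["Austin", "Boston"], "Revenue": [1200, 3400]},
)
body = json.dumps(payload)  # POST this as the request body to the model endpoint
```

Note that numbers are stringified before sending; models such as TAPAS operate on the table's text representation.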
Common Use Cases:
1. Model Discovery & Deployment
AI agents search the Hugging Face Hub for pre-trained models, evaluate their benchmarks, and deploy the best-fit model for your use case in minutes.
2. Automated Fine-Tuning Pipelines
AI agents orchestrate fine-tuning workflows on Hugging Face, managing datasets, training runs, and model versioning for custom NLP and vision tasks.
3. Dataset Curation & Management
AI agents browse, filter, and pull datasets from Hugging Face to build training pipelines, ensuring data quality and compatibility with target model architectures.
4. Inference API Integration
AI agents call Hugging Face Inference APIs to run predictions (text generation, classification, translation, and more) directly within enterprise workflows.
5. Model Performance Monitoring
AI agents track model metrics on Hugging Face, compare versions, detect performance drift, and trigger retraining workflows when accuracy drops below thresholds.
Returns the authenticated user or org context from GET /whoami-v2. Use to verify tokens and read username for HF_LIST_USER_MODELS. Does not return fine-grained scopes in all cases.
Claim paper authorship (POST /settings/papers/claim).
Rename discussion.
Suspend scheduled job.
Get Xet read token for rev.
List jobs in a namespace.
Summarizes long input text with seq2seq summarization models. Returns summary string and optional metadata. Use when the model is tagged summarization; for chat-style summaries HF_TEXT_GENERATION may be simpler.
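A sketch of a summarization request body, assuming the task's common `min_length`/`max_length` generation parameters (defaults here are arbitrary; check the model card for supported parameters):

```python
def build_summarization_payload(text, min_length=30, max_length=120):
    # min_length / max_length bound the length of the generated summary.
    return {
        "inputs": text,
        "parameters": {"min_length": min_length, "max_length": max_length},
    }

payload = build_summarization_payload("Long article text goes here...", max_length=60)
```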
Create SQL console embed for a dataset/model repo.
Daily papers feed.
Batch add items (slug-id collection).
Update cron/schedule JSON.
PATCH collection by slug only.
Hub-wide trending feed (models/spaces/datasets mix). Complements HF_LIST_TRENDING_MODELS which filters /models only.
Delete a discussion.
Storage metadata for discussion.
Move or rename a repository (POST /repos/move).
Fetches a single model repository metadata JSON from the Hub. Returns siblings, pipeline_tag, and config pointers. Use before picking an Inference model id; use HF_LIST_MODELS to browse.
List files at a revision (tree API).
Approve/deny access request (admin).
Add item to collection (slug-only form).
Deletes an Inference Endpoint permanently. Returns confirmation. Use to tear down unused endpoints and stop spend; irreversible, unlike HF_PAUSE_ENDPOINT.
Configure Space persistent volumes.
Creates a new Inference Endpoint (POST /v2/endpoint). Returns creation status. Use for production model hosting; body must follow HF cloud schema (repository, accelerator, etc.).
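A hedged sketch of what the creation body might look like. The field layout below is an approximation of the Inference Endpoints schema (provider, compute, and model sections); verify names and allowed values against the current API reference before use:

```python
def build_endpoint_body(name, repository, task="text-generation",
                        accelerator="gpu", instance_type="nvidia-t4",
                        instance_size="x1", vendor="aws", region="us-east-1"):
    # Approximate HF cloud schema; field names are assumptions to be
    # checked against the live Inference Endpoints documentation.
    return {
        "name": name,
        "type": "protected",
        "provider": {"vendor": vendor, "region": region},
        "compute": {
            "accelerator": accelerator,
            "instanceType": instance_type,
            "instanceSize": instance_size,
            "scaling": {"minReplica": 0, "maxReplica": 1},
        },
        "model": {"repository": repository, "framework": "pytorch", "task": task},
    }

body = build_endpoint_body("demo-endpoint", "gpt2")
```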
List discussions/PRs for a repo.
Update repo settings JSON.
Fetches configuration and health for a dedicated Inference Endpoint by namespace and name. Returns replica status and URL. Use after HF_LIST_ENDPOINTS; for public Inference API models use HF_TEXT_GENERATION without an endpoint.
Transcribes speech to text (Whisper, Wav2Vec2, etc.). Returns text and optional chunks. Use for voice notes and meetings; for classification of audio use HF_AUDIO_CLASSIFICATION.
Resumes a paused Inference Endpoint and provisions replicas. Returns status transitions. Use after HF_PAUSE_ENDPOINT; for cold-start cost savings consider HF_SCALE_TO_ZERO if supported on your account.
Delete a community post.
Search Hub documentation.
Security/scan status for model or dataset repo.
Queries the Hub model index with filter, search, sort, limit, author, and tags parameters. Returns model cards metadata and download stats. Paginate with cursor/limit as returned by API. Use HF_SEARCH_MODELS for a focused text search preset.
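The query parameters above can be composed into a request URL like so (a sketch against the public `GET /api/models` route; only parameters that are actually set are included):

```python
from urllib.parse import urlencode

HUB_API = "https://huggingface.co/api/models"

def build_list_models_url(search=None, author=None, sort=None,
                          limit=20, tags=()):
    # tags may repeat, so build a list of pairs rather than a dict.
    params = [("limit", limit)]
    if search:
        params.append(("search", search))
    if author:
        params.append(("author", author))
    if sort:
        params.append(("sort", sort))
    for tag in tags:
        params.append(("tags", tag))
    return HUB_API + "?" + urlencode(params)

url = build_list_models_url(search="llama", sort="downloads", limit=5)
```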
Reply to a blog comment (global slug).
Resume scheduled job.
Clear or delete notifications (DELETE /notifications).
Tag catalog for datasets grouped by type.
PATCH collection metadata.
Grant access directly (admin).
Duplicate job definition.
Cancel your pending access request.
Update a member role in an organization.
List arXiv papers on the Hub.
Batch LFS preupload metadata.
PATCH SQL console embed.
Answers a question given a context passage via extractive QA models. Returns answer text, score, and start/end offsets. Use when you have a fixed context string; for generative RAG over chat models use HF_TEXT_GENERATION with a prompt instead.
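The start/end offsets in the response index directly into the context string, which makes the answer easy to verify. A sketch, with a hand-written mock result standing in for a real API response:

```python
def build_qa_payload(question, context):
    return {"inputs": {"question": question, "context": context}}

def extract_answer(context, result):
    # start/end are character offsets into the context, so slicing
    # should reproduce the reported answer text exactly.
    return context[result["start"]:result["end"]]

ctx = "Paris is the capital of France."
payload = build_qa_payload("What is the capital of France?", ctx)
fake_result = {"answer": "Paris", "score": 0.98, "start": 0, "end": 5}  # mock response
```

Checking `extract_answer(ctx, fake_result) == fake_result["answer"]` is a cheap sanity test for any extractive QA output.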
Tag catalog for models grouped by type (different from HF_GET_MODEL_TAGS /models-tags).
Create a new model, dataset, or space repository (POST /repos/create).
DELETE collection by slug-id.
Lists Spaces with optional search/filter/limit. Returns Space metadata including SDK and hardware hints. Use for demos and apps; not for Inference Endpoints management.
Remove one item from collection (slug-id).
Lists models authored by a Hub username via author= filter. Returns that user’s public model repos. Use after HF_WHOAMI to pass your username; for global search use HF_SEARCH_MODELS.
Lists models sorted by Hub trending signal with configurable limit. Returns models gaining traction recently. Use for discovery; for raw download ranking use HF_SEARCH_MODELS with sort=downloads.
Classifies an image input and returns top labels with scores. Send base64 data URL or raw bytes per Inference API conventions in inputs field as JSON. Use for vision checkpoints; for detection use HF_OBJECT_DETECTION.
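One accepted convention is a base64 data URL in the `inputs` field; a sketch of the encoding step (the MIME type default is an assumption):

```python
import base64

def build_image_payload(image_bytes, mime="image/png"):
    # Encode raw image bytes as a base64 data URL.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {"inputs": f"data:{mime};base64,{encoded}"}

payload = build_image_payload(b"\x89PNG\r\n")  # placeholder bytes, not a real image
```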
Delete one LFS object by sha from repo.
Lists your Inference Endpoints on Hugging Face (managed GPU/CPU deployments). Returns paginated endpoint records with status and URLs. Use before HF_GET_ENDPOINT; requires a token with inference endpoints scope.
Updates an Inference Endpoint (PUT /v2/endpoint/ns/name). Returns updated spec. Use to change instance type or image revision; pair with HF_PAUSE_ENDPOINT when maintenance is needed.
Public overview for a Hub username.
Lists organizations the authenticated user belongs to (GET /organizations). Returns org names and roles when permitted. Use for multi-tenant Hub workflows; unrelated to model inference.
Check inference-endpoint permissions for namespace.
Runs encoder–decoder generation (T5, BART, FLAN) via the Inference API. Returns generated text for summarization, translation, or conditional generation tasks. Use instead of HF_TEXT_GENERATION when the model card shows text2text-generation; for causal LMs use HF_TEXT_GENERATION.
Classifies text (sentiment, topic, etc.) and returns an array of {label, score}. Use when the model is a sequence-classification checkpoint. For free-form labels without fine-tuning use HF_ZERO_SHOT_CLASSIFICATION instead.
Detects objects with bounding boxes, labels, and scores. Returns structured detections. Use for localization tasks; for captions use HF_IMAGE_TO_TEXT.
Create a job (POST body).
Reply to a post comment.
Create a commit (add/delete/update operations in JSON body).
Runs diffusion models from a text prompt. Returns image bytes or base64 per API response. Parameters JSON may include negative_prompt, width, height, num_inference_steps, guidance_scale. Use for creative assets; not for text embeddings.
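The parameters listed above slot into a `parameters` object alongside the prompt; a sketch with commonly seen defaults (the specific values are illustrative and model-dependent):

```python
def build_diffusion_payload(prompt, negative_prompt=None, width=512, height=512,
                            num_inference_steps=25, guidance_scale=7.5):
    params = {
        "width": width,
        "height": height,
        "num_inference_steps": num_inference_steps,
        "guidance_scale": guidance_scale,
    }
    if negative_prompt:
        params["negative_prompt"] = negative_prompt
    return {"inputs": prompt, "parameters": params}

payload = build_diffusion_payload(
    "a watercolor fox in a forest",
    negative_prompt="blurry, low quality",
)
```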
Pauses billing for a dedicated endpoint while preserving configuration. Returns updated status. Use overnight or in dev; resume with HF_RESUME_ENDPOINT when traffic returns.
Submit or refresh paper index (POST /papers/index).
Resolve cached blob URL path for a file at rev.
Cancel a running job.
Get resource group linkage for repo.
Fetch notebook file JSON at rev/path.
Gets dataset repository metadata for a dataset id. Returns readme refs and download stats. Use when wiring training jobs; use HF_LIST_DATASETS to discover names.
Request access to a gated model or dataset.
Classifies audio clips into categories. Pass audio as base64 or URL per model expectations in inputs JSON. Use for environmental sound or speech emotion classifiers; for transcripts use HF_AUTOMATIC_SPEECH_RECOGNITION.
Shortcut dataset search with sort=downloads and configurable limit (e.g. search=imdb). Returns dataset cards for benchmarking. Use HF_LIST_DATASETS when you need raw filter strings instead.
Hybrid semantic / full-text paper search.
Reply to a paper comment.
Super-squash history at rev (destructive; models/datasets/spaces per account).
Comment on a paper.
List hardware for Hub jobs.
Performs NER or token tagging and returns entities with entity, score, start, end, word indices. Use for structured span extraction from text. For whole-document labels use HF_TEXT_CLASSIFICATION.
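Since each entity carries character offsets into the source text, the spans can be reconstructed and cross-checked against the reported words. A sketch using a hand-written mock response:

```python
def spans_from_entities(text, entities):
    # Slicing text by start/end should match each entity's reported word
    # for a well-formed token-classification response.
    return [(e["entity"], text[e["start"]:e["end"]]) for e in entities]

text = "Ada Lovelace worked in London."
fake_entities = [  # mock response entries, not real model output
    {"entity": "PER", "score": 0.99, "start": 0, "end": 12, "word": "Ada Lovelace"},
    {"entity": "LOC", "score": 0.98, "start": 23, "end": 29, "word": "London"},
]
spans = spans_from_entities(text, fake_entities)
```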
Directory size summary at rev/path.
Count jobs in namespace.
MCP-related settings for the account (if enabled).
Shortcut for popular model discovery: GET /models?search=query&sort=downloads&limit=10. Returns top downloaded matches for the query string. Use for “find a Llama/Mistral checkpoint”; for full filters use HF_LIST_MODELS.
Kernel at specific git revision.
Get Xet write token for rev.
Batch add items (slug-only collection).
Create branch (body: startingPoint, emptyBranch, overwrite).
List notifications for the authenticated user.
List LFS files in repo.
Classifies text into caller-provided candidate labels without task-specific heads. Returns scores per label sorted by confidence. Use for quick taxonomy mapping; for trained class heads prefer HF_TEXT_CLASSIFICATION.
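The candidate labels ride along in the `parameters` object; a sketch of the request shape (`multi_label` lets scores be judged independently rather than summing to 1):

```python
def build_zero_shot_payload(text, candidate_labels, multi_label=False):
    return {
        "inputs": text,
        "parameters": {
            "candidate_labels": candidate_labels,
            "multi_label": multi_label,
        },
    }

payload = build_zero_shot_payload(
    "Please refund my last order",
    ["billing", "shipping", "other"],
)
```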
List access requests (admin); status path segment e.g. pending.
Add item to collection (slug-id form).
List docs roots.
Requests scale-to-zero behavior where the platform spins down idle endpoints (account/feature dependent). Returns updated scaling policy metadata when accepted. Use to minimize cost; contrast with HF_PAUSE_ENDPOINT for explicit off.
Space deployment events.
Open a new discussion or PR (body JSON).
Job logs.
Do I need my own developer credentials to use Hugging Face MCP with Adopt AI?
No, you can get started immediately using Adopt AI's built-in Hugging Face integration. For production use, we recommend configuring your own API tokens for greater control and security.
Can I connect Hugging Face with other apps through Adopt AI?
Yes! Adopt AI supports multi-app workflows, so your AI agents can seamlessly move data between Hugging Face and data platforms, vector databases, deployment tools, and more.
Is Adopt AI secure?
Absolutely. Adopt AI is SOC 2 Type 2 certified and ISO/IEC 27001 compliant, and adheres to EU GDPR, CCPA, and HIPAA standards. All data is encrypted in transit and at rest, ensuring the confidentiality, integrity, and availability of your data. Learn more here.
What happens if the Hugging Face API changes?
Adopt AI maintains and updates all integrations automatically, so your agents always work with the latest API versions; no manual maintenance is required.
Do I need coding skills to set up the Hugging Face integration?
Not at all. Adopt AI's zero-shot API discovery means your agents understand Hugging Face's schema on first contact. Setup takes minutes with no code required.
How do I set up custom Hugging Face MCP in Adopt AI?
For a step-by-step guide on creating and configuring your own Hugging Face API tokens with Adopt AI, see here.