
MCP (WebSearch) integration with /v1/responses

This page explains how to connect to MCP WebSearch via the OpenAI-compatible /v1/responses route.
Once connected, the model can query the web and use the results to automatically enrich its responses.

Endpoint

POST /v1/responses

Full URL (Clovis Gateway)

POST https://llm-gateway.clovis-ai.fr/v1/responses

Headers

Authorization: Bearer <CLOVIS_API_KEY>
Content-Type: application/json
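The route can also be exercised with curl. The sketch below is not runnable as-is: substitute a real key for the `<CLOVIS_API_KEY>` placeholder. The body uses the minimal request shape (a plain-string `input` and the MCP tool declaration described later on this page):

```shell
curl -X POST "https://llm-gateway.clovis-ai.fr/v1/responses" \
  -H "Authorization: Bearer <CLOVIS_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ClovisLLM",
    "input": "What is the latest news about n8n?",
    "tools": [{
      "type": "mcp",
      "server_label": "web_search_preview",
      "server_description": "The websearch mcp",
      "server_url": "https://llm-gateway.clovis-ai.fr/api/v1/mcp",
      "require_approval": "never"
    }]
  }'
```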

Objective

MCP integration allows the model to:

Automatically trigger an external tool (e.g., WebSearch)

Retrieve up-to-date information from the Internet

Synthesize this information into a structured response

In practice:

The user asks a question → the model triggers a tool call → the gateway executes the MCP WebSearch → the model responds with the results.

Prerequisites

A valid Clovis API key, sent in the Authorization header of every request.

MCP operating principle

1) Declare an MCP server in the request

The Clovis gateway allows you to declare an MCP server via a provider-specific field in tools.

Example: WebSearch server declaration

MCP declaration
{
  "type": "mcp",
  "server_label": "web_search_preview",
  "server_description": "The websearch mcp",
  "server_url": "https://llm-gateway.clovis-ai.fr/api/v1/mcp",
  "require_approval": "never"
}

This declaration means:

  • web_search_preview is an MCP server available during the request
  • MCP calls are routed via server_url
  • no manual approval is required (require_approval: "never")
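In Python, this declaration is just a dictionary. A minimal sketch (the helper name and the pre-send validation are our own, not part of the gateway API) that builds the declaration and checks the required MCP fields before use:

```python
# Fields the gateway requires in every MCP tool declaration (see table below).
REQUIRED_FIELDS = ("type", "server_label", "server_url")

def make_websearch_tool(
    server_url: str = "https://llm-gateway.clovis-ai.fr/api/v1/mcp",
    require_approval: str = "never",
) -> dict:
    """Build the WebSearch MCP tool declaration used in the examples."""
    tool = {
        "type": "mcp",
        "server_label": "web_search_preview",
        "server_description": "The websearch mcp",
        "server_url": server_url,
        "require_approval": require_approval,
    }
    # Fail early if a required field is empty or missing.
    missing = [f for f in REQUIRED_FIELDS if not tool.get(f)]
    if missing:
        raise ValueError(f"missing required MCP fields: {missing}")
    return tool

tool = make_websearch_tool()
```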

MCP fields (technical details)

Field               Type    Required  Description
type                string  ✔️        Must be "mcp" to enable MCP connection
server_label        string  ✔️        Logical name of the MCP server (stable alias)
server_description  string  —         Functional description of the server
server_url          string  ✔️        Clovis MCP broker URL
require_approval    string  —         Execution policy ("never" = automatic execution)

Full example — /v1/responses with MCP WebSearch

Request

JSON payload
{
  "model": "ClovisLLM",
  "input": [
    {
      "role": "system",
      "content": [
        {
          "type": "input_text",
          "text": "You are a helpful assistant. If a question requires up-to-date information, use websearch."
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "type": "input_text",
          "text": "Can you give me the latest information about n8n and AI Agents?"
        }
      ]
    }
  ],
  "tools": [
    {
      "type": "mcp",
      "server_label": "web_search_preview",
      "server_description": "The websearch mcp",
      "server_url": "https://llm-gateway.clovis-ai.fr/api/v1/mcp",
      "require_approval": "never"
    }
  ],
  "temperature": 0.3,
  "max_output_tokens": 1200
}
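The same payload can be assembled programmatically before sending it over HTTP; a minimal sketch that builds the request body and serializes it to JSON:

```python
import json

# Build the /v1/responses request body: system + user turns plus the MCP tool.
payload = {
    "model": "ClovisLLM",
    "input": [
        {
            "role": "system",
            "content": [{
                "type": "input_text",
                "text": "You are a helpful assistant. If a question requires "
                        "up-to-date information, use websearch.",
            }],
        },
        {
            "role": "user",
            "content": [{
                "type": "input_text",
                "text": "Can you give me the latest information about n8n and AI Agents?",
            }],
        },
    ],
    "tools": [{
        "type": "mcp",
        "server_label": "web_search_preview",
        "server_description": "The websearch mcp",
        "server_url": "https://llm-gateway.clovis-ai.fr/api/v1/mcp",
        "require_approval": "never",
    }],
    "temperature": 0.3,
    "max_output_tokens": 1200,
}

# Serialize to the JSON body sent as the POST payload.
body = json.dumps(payload)
```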

API Response

Once the MCP result is retrieved, the model returns a final response of type message:

Response
{
  "id": "resp_xxx",
  "object": "response",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Here is the latest information about n8n and AI Agents: ... (summary based on web search)"
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 350,
    "output_tokens": 420,
    "total_tokens": 770
  }
}
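When calling the route over raw HTTP (without the OpenAI SDK), the final text has to be pulled out of the `output` array yourself. A minimal sketch (the helper name is our own) that walks a decoded response dict and concatenates the `output_text` parts:

```python
def extract_output_text(response: dict) -> str:
    """Concatenate all output_text parts from a /v1/responses result."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip any non-message items, keep only the final answer
        for chunk in item.get("content", []):
            if chunk.get("type") == "output_text":
                parts.append(chunk["text"])
    return "".join(parts)

# Trimmed version of the sample response shown above.
sample = {
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text",
                         "text": "Here is the latest information..."}],
        }
    ]
}
print(extract_output_text(sample))
```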

Usage examples

Python
from openai import OpenAI

# Configuration
CLOVIS_API_KEY = "<CLOVIS_API_KEY>"  # replace with your Clovis API key
CLOVIS_BASE_URL = "https://llm-gateway.clovis-ai.fr/v1"
CLOVIS_MODEL_NAME = "ClovisLLM"

# Initialize the OpenAI-compatible client against the Clovis gateway
client = OpenAI(api_key=CLOVIS_API_KEY, base_url=CLOVIS_BASE_URL)

# Call the /v1/responses route with the MCP WebSearch tool declared
answer = client.responses.create(
    model=CLOVIS_MODEL_NAME,
    input="What are today's news?",
    tools=[
        {
            "type": "mcp",
            "server_label": "web_search_preview",
            "server_description": "The websearch mcp",
            "server_url": "https://llm-gateway.clovis-ai.fr/api/v1/mcp",
            "require_approval": "never",
        }
    ],
)

print(answer.output_text)