Model Context Protocol (MCP)
Introduction to Model Context Protocol (MCP)
The Model Context Protocol (MCP) is an open standard, created by the AI company Anthropic, that enables seamless integration between AI models and local or remote data sources. It provides a universal way for AI agents to safely access your files, databases, and software tools without requiring custom code for every new integration.
Why Use MCP?
Instead of maintaining fragmented integrations, MCP allows you to build a server once and use it across various AI platforms. By using MCP servers, you can:
Securely connect your private data to LLMs.
Standardize how tools and "functions" are exposed to the model.
Scale your AI capabilities by plugging into a growing ecosystem of pre-built servers.
Setting up your MCP server
As the originator and primary maintainer of the MCP specification, Anthropic provides the most accurate and up-to-date technical reference. Using the official documentation ensures you are working with the latest security standards, SDK updates, and architectural best practices as the protocol evolves.
View the official guide here: https://modelcontextprotocol.io/docs/getting-started/intro
Connect your MCP server with Ebbot
Where settings are configured
Users configure MCP by filling in fields in the GPT Config UI under MCP Configuration.
These values are entered as UI inputs (not typically edited as raw JSON):
URL
Transport (streamable_http / sse)
Auth Header Name
Auth Secret
Additional Header key/value pairs
Default Resource URIs
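Taken together, the UI inputs above map onto a configuration shape roughly like the following sketch. The key names and example values here are illustrative assumptions, not Ebbot's exact backend schema:

```python
# Illustrative sketch of the MCP settings captured in the GPT Config UI.
# Key names and values are assumptions for illustration only.
mcp_config = {
    "url": "https://mcp.example.com/mcp",        # MCP server URL (hypothetical)
    "transport": "streamable_http",              # or "sse" for legacy servers
    "auth_header_name": "X-Api-Key",             # header that carries the secret
    "auth_secret": "<stored-securely-on-backend>",
    "additional_headers": {"X-Tenant": "acme"},  # extra static key/value headers
    "default_resource_uris": [{"uri": "resource://docs/faq"}],
}

def is_mcp_enabled(config: dict) -> bool:
    # MCP is disabled when the URL is omitted.
    return bool(config.get("url"))
```

Note that the URL acts as the master switch: leaving it empty disables MCP regardless of the other fields.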
What MCP does in Ebbot GPT
When MCP is enabled in a GPT configuration, Ebbot GPT can:
Discover tools from your MCP server.
Call MCP tools during the tool-calling loop.
Optionally read configured default MCP resources before generation.
Optionally expose a get_resource helper tool when your MCP server publishes resources.
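At the protocol level, the steps above correspond to JSON-RPC 2.0 requests against the MCP server. A minimal sketch of the three message types involved (request IDs, the tool name search_orders, and the resource URI are illustrative placeholders):

```python
import json

def jsonrpc(method: str, params: dict, req_id: int) -> str:
    # Build a JSON-RPC 2.0 request body, the wire format MCP uses.
    return json.dumps(
        {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    )

# 1. Discover the tools exposed by the MCP server.
list_req = jsonrpc("tools/list", {}, 1)

# 2. Call a discovered tool during the tool-calling loop
#    ("search_orders" is a hypothetical tool name).
call_req = jsonrpc(
    "tools/call", {"name": "search_orders", "arguments": {"query": "open"}}, 2
)

# 3. Read a configured default resource before generation.
read_req = jsonrpc("resources/read", {"uri": "resource://docs/faq"}, 3)
```

This is a sketch of the request shapes only; in practice Ebbot GPT handles session setup, transport framing, and response parsing for you.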
Configuration options
URL
MCP server URL.
If omitted, MCP is disabled.
Transport
Supported values:
streamable_http
sse (legacy/deprecated)
Transport guidance:
streamable_http is the current MCP standard for new server integrations.
sse is retained for compatibility with older MCP servers.
Auth Header Name and Auth Secret
Auth Header Name defines which header should carry your secret.
Auth Secret is treated as sensitive input and is stored securely on the backend.
Additional Headers
Extra static headers as key/value pairs.
Default Resource URIs
Optional list of resources that should be fetched at startup.
Format: an array of objects, each with a uri field.
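The expected shape is a list of objects that each carry a uri key. A minimal validation sketch of that format (the example URIs and helper name are illustrative):

```python
default_resources = [
    {"uri": "resource://docs/faq"},      # example URIs are hypothetical
    {"uri": "resource://docs/pricing"},
]

def valid_default_resources(value) -> bool:
    # Must be a list of objects, each with a string "uri" field.
    return isinstance(value, list) and all(
        isinstance(item, dict) and isinstance(item.get("uri"), str)
        for item in value
    )
```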
Tool behaviour
MCP tools are mapped into GPT-callable tools.
Tool input schemas are forwarded from the MCP inputSchema field.
Tool responses are returned to the model as text content.
If the MCP tool returns no text content, the result falls back to <no_content>.
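The text-content mapping described above can be sketched as follows: an MCP tool result carries a list of content items, only text items are forwarded to the model, and <no_content> is used as the fallback. The helper name and exact joining behaviour are illustrative assumptions:

```python
def tool_result_text(content_items: list) -> str:
    # Collect text content items from an MCP tool result; ignore other types.
    texts = [item.get("text", "") for item in content_items
             if item.get("type") == "text"]
    joined = "\n".join(t for t in texts if t)
    # Fall back to the sentinel when the tool returned no text content.
    return joined or "<no_content>"

tool_result_text([{"type": "text", "text": "3 open orders"}])  # "3 open orders"
tool_result_text([{"type": "image", "data": "..."}])           # "<no_content>"
```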
Chat-level bearer token support
MCP requests can also include a bearer token from chat session state:
Field name: mcp_bearer_token
If present, requests include the header Authorization: Bearer <token>.
Typical usage is to set this token for a specific chat/session when you need per-chat or per-user MCP authorization.
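Per-chat authorization is layered on top of the static configuration: when the session state carries a token, an Authorization header is added to outgoing MCP requests. A sketch of that merge (function and variable names are illustrative):

```python
def with_bearer(headers: dict, session_state: dict) -> dict:
    # Copy the static headers; add Authorization when the chat session
    # state carries an mcp_bearer_token value.
    merged = dict(headers)
    token = session_state.get("mcp_bearer_token")
    if token:
        merged["Authorization"] = f"Bearer {token}"
    return merged

merged = with_bearer({"X-Api-Key": "s3cret"}, {"mcp_bearer_token": "abc123"})
```

This keeps the server-wide auth secret static while letting each conversation authenticate as its own user.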
Best practices
Use streamable_http for new MCP server integrations.
Use sse only when you must support a legacy MCP endpoint.
Keep tool descriptions and schemas clear on the MCP server side, since they are surfaced to the model.
Limit default resources to small, high-value context to reduce prompt bloat.
Use least-privilege auth headers and rotate secrets regularly.
Prefer chat-scoped bearer tokens when auth should vary between users or conversations.