MCP Tool Search Unpacked: Discovery, Invocation, and Dynamic Updates for LLM-Powered Agents


Deep Research AI

Author’s note:

Question: Explain how MCPs use Tool Search

Context:


Executive Summary

The Model Context Protocol (MCP) transforms how Large Language Models (LLMs) interact with the outside world by treating “Tool Search” as a standardized, dynamic API. Instead of relying on hardcoded integrations, MCP enables agents to discover, select, and execute tools at runtime through a defined lifecycle: Discovery (tools/list), Invocation (tools/call), and Dynamic Updates (notifications/tools/list_changed).

This architecture allows for “dynamic self-discovery,” where an LLM can query a server to understand what capabilities are currently available—such as searching a vector database or fetching a document—and then autonomously decide which tool to use 1. For developers, this means building servers that are modular and adaptable; for example, a tool requiring authentication can be hidden until the user logs in, without restarting the client 2. Major platforms like OpenAI have already standardized on this pattern, requiring specific search and fetch tools for deep research integrations 3.


1. How MCP Structures Tool Search

MCP defines a rigorous structure for how tools are exposed to models. At its heart, the protocol is “model-controlled”: the server exposes capabilities, but the LLM (often with a human-in-the-loop) decides when and how to use them 4.

| Concept | What it is | Where it lives in the spec | Typical use case |
| --- | --- | --- | --- |
| Tool definition | JSON-schema-driven description (name, description, inputSchema, optional outputSchema, annotations) | § Tools → Tool definition 4 | Enables the model to understand arguments & output format |
| Tool discovery | tools/list RPC (supports pagination) that returns the current catalogue | § Tools → Listing Tools 4 | Model queries “What can I call?” at runtime |
| Tool invocation | tools/call RPC that executes a named tool with supplied arguments | § Tools → Calling Tools 4 | Model sends “Please fetch this doc” |
| Dynamic updates | notifications/tools/list_changed (optional capability listChanged) informs the client that the catalogue changed | § Tools → List Changed Notification 4 | Hide a tool after auth expires, add a new API on-the-fly |
| Human-in-the-loop | UI guidelines & "should" recommendations for confirmation prompts | § User Interaction Model 4 | Prevent accidental destructive actions |

Strategic Insight: By standardizing discovery, MCP shifts the burden of “knowing what to do” from the developer (who previously hardcoded function calls) to the model (which now reads the tool definition at runtime). This enables true agentic autonomy 1.
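To make the table concrete, here is a minimal sketch of a tool definition as a TypeScript shape. The field names follow the spec's Tool object; the interface itself is a simplified stand-in for illustration, not the SDK's actual type.

```typescript
// Simplified stand-in for the spec's Tool object (not the SDK's real type).
interface ToolDefinition {
  name: string;                     // unique identifier the model calls
  description?: string;             // human-readable purpose, read by the LLM
  inputSchema: {                    // JSON Schema describing the arguments
    type: "object";
    properties?: Record<string, unknown>;
    required?: string[];
  };
  outputSchema?: Record<string, unknown>; // optional schema for structured results
}

// Example catalogue entry a server might return from tools/list.
const searchTool: ToolDefinition = {
  name: "search",
  description: "Return a ranked list of URLs for a query",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
};

console.log(searchTool.name); // search
```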


2. The Discovery Flow – tools/list

The discovery process is the entry point for any MCP-enabled agent. Before an agent can act, it must know what actions are possible.

2.1 Request / Response Shape

Clients initiate discovery by sending a tools/list request. This operation supports pagination, allowing servers to expose large catalogues of tools without overwhelming the client 4.

// Request (JSON-RPC 2.0)
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": { "cursor": "optional-cursor-value" }
}

The server responds with a list of tool definitions. Each definition includes a unique name, a human-readable description, and a JSON Schema for inputSchema 4.

// Successful response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search",
        "description": "Return a ranked list of URLs for a query",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}
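Because tools/list is paginated, a client typically loops until no continuation cursor remains. The sketch below assumes the spec's cursor/nextCursor convention; `rpc` is a hypothetical transport helper, not a real SDK call.

```typescript
// Hypothetical JSON-RPC transport helper; a real client would send the
// request over stdio or HTTP and return the parsed `result`.
interface ListResult { tools: { name: string }[]; nextCursor?: string }
type Rpc = (method: string, params: object) => Promise<ListResult>;

// Collect the full catalogue by following nextCursor until it is absent.
async function listAllTools(rpc: Rpc): Promise<{ name: string }[]> {
  const all: { name: string }[] = [];
  let cursor: string | undefined;
  do {
    const page = await rpc("tools/list", cursor ? { cursor } : {});
    all.push(...page.tools);
    cursor = page.nextCursor;
  } while (cursor);
  return all;
}

// Demo with a stubbed transport returning two pages.
const pages: ListResult[] = [
  { tools: [{ name: "search" }], nextCursor: "page-2" },
  { tools: [{ name: "fetch" }] },
];
let i = 0;
listAllTools(async () => pages[i++]).then((tools) =>
  console.log(tools.map((t) => t.name)) // [ 'search', 'fetch' ]
);
```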

2.2 What the Model Does with the List

Upon receiving this list, the LLM uses the description and inputSchema to understand the tool’s purpose and requirements 5.

  1. Ingestion: The model parses the available tools into its context window.
  2. Reasoning: It evaluates the user’s prompt (e.g., “Find the latest research on quantum-safe crypto”) against the available tools.
  3. Selection: If a match is found (e.g., search), the model generates a structured call; otherwise, it relies on its internal knowledge 1.
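In practice the model itself performs steps 2 and 3. The toy function below only mimics that selection step with a keyword heuristic so the control flow is visible; `selectTool` and its regex are invented for this sketch and are not part of MCP.

```typescript
interface Tool { name: string; description: string }

// Toy stand-in for the model's reasoning: a keyword heuristic decides whether
// the prompt warrants the "search" tool. A real agent delegates this to the LLM.
function selectTool(prompt: string, tools: Tool[]): Tool | null {
  const wantsSearch = /find|latest|research|look up/i.test(prompt);
  return tools.find((t) => wantsSearch && t.name === "search") ?? null;
}

const catalogue: Tool[] = [
  { name: "search", description: "Return a ranked list of URLs for a query" },
];

const chosen = selectTool("Find the latest research on quantum-safe crypto", catalogue);
console.log(chosen?.name ?? "internal knowledge"); // search
console.log(selectTool("Write me a haiku", catalogue) ?? "internal knowledge");
```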

2.3 Real-World Example – OpenAI’s Required search & fetch Tools

OpenAI has adopted MCP for its “deep research” and ChatGPT connector features. To integrate with these services, an MCP server must implement two specific tools: search and fetch 3.

| Tool | Arguments | Returns |
| --- | --- | --- |
| search | { "query": "string" } | { "results": [{ "id": "string", "title": "string", "url": "string" }, ...] } |
| fetch | { "id": "string" } | { "id": "string", "title": "string", "text": "string", "url": "string", "metadata": {...} } |

The search tool returns a list of relevant items, while fetch retrieves the full content of a specific item using its ID 3.
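Expressed as TypeScript interfaces, the shapes in the table look roughly like this (field names are taken from the table above; `metadata` is treated here as optional free-form data):

```typescript
// Shapes implied by OpenAI's search/fetch contract, expressed as interfaces.
interface SearchResultItem { id: string; title: string; url: string }
interface SearchResult { results: SearchResultItem[] }

interface FetchResult {
  id: string;
  title: string;
  text: string;                       // full document content
  url: string;
  metadata?: Record<string, unknown>; // optional free-form extras
}

// Typical round trip: search first, then fetch the chosen hit by its id.
const found: SearchResult = {
  results: [{ id: "doc-1", title: "AI Safety Landscape", url: "https://example.com/ai-safety" }],
};
const fetched: FetchResult = {
  id: found.results[0].id,
  title: found.results[0].title,
  text: "Full document text would go here.",
  url: found.results[0].url,
};
console.log(fetched.id); // doc-1
```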


3. Invoking a Tool – tools/call

Once a tool is selected, the client sends a tools/call request. This request includes the tool’s name and the arguments generated by the model 4.

3.1 Request / Response Shape

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "latest AI safety papers 2025" }
  }
}

The server executes the logic (e.g., querying a database) and returns the result. The result must be wrapped in a content array, which can contain text, images, or other resource types 4.

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"results\":[{\"id\":\"1\",\"title\":\"AI Safety Landscape\",\"url\":\"https://example.com/ai-safety\"}]}"
      }
    ],
    "isError": false
  }
}

3.2 Structured vs. Unstructured Results

Tools can return results as unstructured text or structured data.

  • Unstructured: Returned in the content field (e.g., plain text or JSON-encoded strings). This is the standard for OpenAI’s search tool, which requires a JSON-encoded string inside a text block 3.
  • Structured: If an outputSchema is defined, servers must provide structured results in the structuredContent field. For backward compatibility, servers should also return a serialized text version 4.
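A result that serves both kinds of clients might look like the sketch below, where the same payload appears once as structuredContent and once serialized inside a text block (the payload itself is a made-up example):

```typescript
// The same payload, exposed structured (for schema-aware clients) and
// serialized (for backward compatibility), per the guidance above.
const payload = {
  results: [{ id: "1", title: "AI Safety Landscape", url: "https://example.com/ai-safety" }],
};

const toolResult = {
  content: [{ type: "text", text: JSON.stringify(payload) }], // legacy path
  structuredContent: payload,                                 // structured path
  isError: false,
};

// A client prefers structuredContent and falls back to parsing the text block.
const parsed = toolResult.structuredContent ?? JSON.parse(toolResult.content[0].text);
console.log(parsed.results[0].id); // 1
```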

3.3 Error Handling

MCP distinguishes between protocol errors and execution errors to help the model recover gracefully 4.

| Layer | When it triggers | Example payload |
| --- | --- | --- |
| Protocol error (JSON-RPC) | Unknown tool, malformed request | { "code": -32601, "message": "Method not found" } |
| Tool execution error | API timeout, business-logic failure | "isError": true, "content": [{ "type": "text", "text": "Error: Rate limit exceeded" }] |

By setting isError: true in the result rather than throwing a protocol exception, the server allows the LLM to see the error message and potentially try a different approach 5.
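Server-side, that pattern amounts to catching execution failures and folding them into the result instead of throwing. The helper below is a minimal sketch (synchronous for brevity; real handlers are usually async):

```typescript
type TextBlock = { type: "text"; text: string };
type ToolResult = { content: TextBlock[]; isError: boolean };

// Run a tool body and report failures in-band, so the model can read the
// error text and try another approach instead of hitting a protocol error.
function wrapExecution(run: () => string): ToolResult {
  try {
    return { content: [{ type: "text", text: run() }], isError: false };
  } catch (err) {
    return {
      content: [{ type: "text", text: `Error: ${(err as Error).message}` }],
      isError: true,
    };
  }
}

const failed = wrapExecution(() => { throw new Error("Rate limit exceeded"); });
console.log(failed.isError, failed.content[0].text);
// true Error: Rate limit exceeded
```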


4. Dynamic Tool Discovery – notifications/tools/list_changed

One of MCP’s most powerful features is dynamic self-discovery. Unlike static APIs, MCP servers can notify clients when their capabilities change, prompting a refresh of the tool list 2.

4.1 Why Dynamic Discovery Matters

Dynamic discovery allows the available toolset to adapt to the runtime context.

  • Authentication: A tool like getWhatsAppChatById can be hidden if the user’s authentication token expires and re-enabled once they log in 2.
  • Context Awareness: Tools relevant only to a specific project or file type can appear only when that context is active 2.
  • Service Availability: If a backend service goes down, its associated tools can be removed from the list to prevent errors 2.

4.2 Notification Flow

  1. Server Detects Change: The server identifies a state change (e.g., auth expiry).
  2. Notification Sent: The server sends a notifications/tools/list_changed message to the client 4.
  3. Client Refreshes: The client receives the notification and automatically calls tools/list to get the updated catalogue 2.

4.3 Implementation Snippet (TypeScript SDK)

The following example demonstrates how a client might handle these dynamic updates using the Speakeasy MCP pattern 2.

// Conceptual example of client-side handling
client.on('notifications/tools/list_changed', async () => {
  // The server has signaled a change; refresh the tool list
  const { tools } = await client.call('tools/list', {});
  console.log('Tool catalogue refreshed:', tools.map(t => t.name));
});

// On the server side (conceptual): when auth fails, the server disables
// the affected tool and emits the notification
await server.sendNotification('notifications/tools/list_changed');
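The server side of this flow can be modeled without any SDK as a small registry that tracks per-tool availability and fires the notification whenever availability flips. `ToolRegistry` and `Notifier` are invented names for this sketch; a real server would use its SDK's notification mechanism.

```typescript
// Invented helper types for this sketch, standing in for the SDK's machinery.
type Notifier = (method: string) => void;

class ToolRegistry {
  private tools = new Map<string, { name: string; enabled: boolean }>();
  constructor(private notify: Notifier) {}

  register(name: string): void {
    this.tools.set(name, { name, enabled: true });
  }

  // Flip availability; emit list_changed only when something actually changed.
  setEnabled(name: string, enabled: boolean): void {
    const tool = this.tools.get(name);
    if (tool && tool.enabled !== enabled) {
      tool.enabled = enabled;
      this.notify("notifications/tools/list_changed");
    }
  }

  // What tools/list would currently return.
  list(): string[] {
    return [...this.tools.values()].filter((t) => t.enabled).map((t) => t.name);
  }
}

const sent: string[] = [];
const registry = new ToolRegistry((method) => sent.push(method));
registry.register("getWhatsAppChatById");
registry.setEnabled("getWhatsAppChatById", false); // e.g. auth token expired
console.log(registry.list(), sent);
// [] [ 'notifications/tools/list_changed' ]
```

Gating the notification on an actual state change avoids spamming clients with redundant refreshes.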

5. Security & Trust Considerations

Exposing tools to an LLM introduces risks, particularly regarding data privacy and unintended actions.

| Concern | Recommended mitigation | Spec reference |
| --- | --- | --- |
| Untrusted annotations | Clients must treat tool annotations as untrusted unless they come from a verified server. | § Tools → Tool definition 4 |
| Input validation | Servers must validate all inputs against the schema before execution. | § Security → Input validation 4 |
| Prompt injection | Malicious inputs could trick the model into fetching sensitive data. Trusting the developer isn't enough; you must also trust the content being accessed. | Risks and safety 3 |
| Human confirmation | Applications should provide UI for users to confirm tool invocations, especially for write operations. | § User Interaction Model 4 |
| Data leakage | Avoid putting sensitive info in tool definitions. Sanitize outputs before returning them to the model. | Risks and safety 3 |

OpenAI explicitly warns that custom MCP servers are third-party services not verified by OpenAI. Users should only connect to servers they trust, as these servers can access the data sent in the chat context 3.
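Input validation is the easiest of these mitigations to get wrong. A production server would run a real JSON Schema validator (Ajv or similar); the hand-rolled checker below covers only required keys and primitive types, purely to illustrate the idea.

```typescript
// Deliberately minimal schema subset: an object with typed properties.
// Real servers should use a full JSON Schema validator instead.
interface SimpleSchema {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
}

function validateArgs(schema: SimpleSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required argument: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unexpected argument: ${key}`);        // reject unknowns
    else if (typeof value !== prop.type) errors.push(`${key} must be ${prop.type}`);
  }
  return errors;
}

const schema: SimpleSchema = {
  type: "object",
  properties: { query: { type: "string" } },
  required: ["query"],
};
console.log(validateArgs(schema, {}));             // [ 'missing required argument: query' ]
console.log(validateArgs(schema, { query: "x" })); // []
```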


6. Best-Practice Checklist for Building a Tool-Search-Enabled MCP Server

| ✅ Item | Why it matters | Quick tip |
| --- | --- | --- |
| Declare capabilities | Servers must declare the tools capability; add listChanged if supporting dynamic updates. | Declare "capabilities": { "tools": { "listChanged": true } } during initialization 4. |
| Rich descriptions | Clear descriptions and examples help the LLM choose the right tool and reduce hallucinations. | Include "description": "Add two numbers together" and specific parameter descriptions 5. |
| Output schema | Providing an outputSchema enables strict validation and better integration with code. | Define the expected JSON structure of the result 4. |
| Dynamic updates | Implement notifications/tools/list_changed for auth-dependent tools to avoid runtime errors. | Emit the notification immediately when tool availability changes 2. |
| Error reporting | Return errors within the result object (isError: true) so the model can handle them. | Don't just throw exceptions; give the model a chance to self-correct 5. |
| Human-in-the-loop | Always require approval for sensitive actions. | Use the require_approval flag or UI prompts for write actions 1. |

Bottom Line

Tool Search is the fundamental mechanism that transforms an LLM from a text generator into an autonomous agent. By standardizing how tools are discovered (tools/list) and invoked (tools/call), MCP allows agents to dynamically adapt to their environment.

  • For Developers: The search and fetch pattern is becoming the industry standard for retrieval, mandated by platforms like OpenAI for deep research integrations 3.
  • For Architects: The listChanged notification capability enables robust, state-aware systems where tools appear and disappear based on context and authentication, preventing the “stale tool” problem 2.
  • For Security: While powerful, this capability requires strict input validation and human-in-the-loop safeguards to prevent prompt injection and unauthorized data access 4 3.

Implementing these patterns ensures your MCP server is not just a static API, but a dynamic extension of the AI’s cognitive process.

References

Footnotes

  1. Tools - Model Context Protocol

  2. Tools - Model Context Protocol

  3. Tools - Model Context Protocol (MCP)

  4. Official MCP Registry - Model Context Protocol

  5. Dynamic Self-Discovery Is The Super Power of MCP | by Cobus Greyling | Medium