MCP Tool Search Unpacked: Discovery, Invocation, and Dynamic Updates for LLM-Powered Agents
Executive Summary
The Model Context Protocol (MCP) transforms how Large Language Models (LLMs) interact with the outside world by treating “Tool Search” as a standardized, dynamic API. Instead of relying on hardcoded integrations, MCP enables agents to discover, select, and execute tools at runtime through a defined lifecycle: Discovery (tools/list), Invocation (tools/call), and Dynamic Updates (notifications/tools/list_changed).
This architecture allows for “dynamic self-discovery,” where an LLM can query a server to understand what capabilities are currently available—such as searching a vector database or fetching a document—and then autonomously decide which tool to use 1. For developers, this means building servers that are modular and adaptable; for example, a tool requiring authentication can be hidden until the user logs in, without restarting the client 2. Major platforms like OpenAI have already standardized on this pattern, requiring specific search and fetch tools for deep research integrations 3.
1. Core Concepts of MCP Tool Search
MCP defines a rigorous structure for how tools are exposed to models. At its heart, the protocol is “model-controlled,” meaning the server exposes capabilities, but the LLM (often with a human-in-the-loop) decides when and how to use them 4.
| Concept | What it is | Where it lives in the spec | Typical use case |
|---|---|---|---|
| Tool definition | JSON-schema-driven description (name, description, inputSchema, optional outputSchema, annotations) | § Tools → Tool definition 4 | Enables the model to understand arguments & output format |
| Tool discovery | tools/list RPC (supports pagination) that returns the current catalogue | § Tools → Listing Tools 4 | Model queries “What can I call?” at runtime |
| Tool invocation | tools/call RPC that executes a named tool with supplied arguments | § Tools → Calling Tools 4 | Model sends “Please fetch this doc” |
| Dynamic updates | notifications/tools/list_changed (optional capability listChanged) informs the client that the catalogue changed | § Tools → List Changed Notification 4 | Hide a tool after auth expires, add a new API on-the-fly |
| Human-in-the-loop | UI guidelines & should recommendations for confirmation prompts | § User Interaction Model 4 | Prevent accidental destructive actions |
Strategic Insight: By standardizing discovery, MCP shifts the burden of “knowing what to do” from the developer (who previously hardcoded function calls) to the model (which now reads the tool definition at runtime). This enables true agentic autonomy 1.
2. The Discovery Flow – tools/list
The discovery process is the entry point for any MCP-enabled agent. Before an agent can act, it must know what actions are possible.
2.1 Request / Response Shape
Clients initiate discovery by sending a tools/list request. This operation supports pagination, allowing servers to expose large catalogues of tools without overwhelming the client 4.
```json
// Request (JSON-RPC 2.0)
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": { "cursor": "optional-cursor-value" }
}
```

Pagination is cursor-based: the result may include a nextCursor value that the client passes back to retrieve the next page. The server responds with a list of tool definitions. Each definition includes a unique name, a human-readable description, and a JSON Schema for inputSchema 4.
```json
// Successful response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search",
        "description": "Return a ranked list of URLs for a query",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}
```

2.2 What the Model Does with the List
Upon receiving this list, the LLM uses the description and inputSchema to understand the tool’s purpose and requirements 5.
- Ingestion: The model parses the available tools into its context window.
- Reasoning: It evaluates the user’s prompt (e.g., “Find the latest research on quantum-safe crypto”) against the available tools.
- Selection: If a match is found (e.g., `search`), the model generates a structured call; otherwise, it relies on its internal knowledge 1.
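Inside the model, this selection step is learned rather than programmed, but its flow can be sketched mechanically. The hypothetical `selectTool` helper below scores catalogue entries by keyword overlap with the prompt; a real LLM reasons over the full description and inputSchema rather than matching tokens, so treat this only as an illustration of the ingest-reason-select loop.

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Naive stand-in for the model's reasoning step: count how many prompt
// words appear in each tool's description, pick the best match, or
// return null to signal "answer from internal knowledge instead".
function selectTool(prompt: string, tools: ToolDef[]): string | null {
  const words = prompt.toLowerCase().split(/\W+/).filter(Boolean);
  let best: { name: string; score: number } | null = null;
  for (const tool of tools) {
    const descWords = new Set(tool.description.toLowerCase().split(/\W+/));
    const score = words.filter((w) => descWords.has(w)).length;
    if (score > 0 && (best === null || score > best.score)) {
      best = { name: tool.name, score };
    }
  }
  return best === null ? null : best.name;
}

const catalogue: ToolDef[] = [
  { name: "search", description: "Return a ranked list of URLs for a query" },
  { name: "fetch", description: "Retrieve the full text of a document by id" },
];
```

Note that returning `null` models the fallback path: when no tool matches, the model answers from its own knowledge instead of forcing a call.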
2.3 Real-World Example – OpenAI’s Required search & fetch Tools
OpenAI has adopted MCP for its “deep research” and ChatGPT connector features. To integrate with these services, an MCP server must implement two specific tools: search and fetch 3.
| Tool | Arguments | Returns |
|---|---|---|
| `search` | `{ "query": "string" }` | `{ "results": [{ "id": "string", "title": "string", "url": "string" }, ...] }` |
| `fetch` | `{ "id": "string" }` | `{ "id": "string", "title": "string", "text": "string", "url": "string", "metadata": {...} }` |
The search tool returns a list of relevant items, while fetch retrieves the full content of a specific item using its ID 3.
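A server satisfying this contract can be sketched as two handlers over a document store. Everything below — the in-memory `CORPUS` and the substring matching — is illustrative; the only parts fixed by OpenAI's requirements are the argument and return shapes shown in the table above.

```typescript
interface SearchResult { id: string; title: string; url: string; }
interface Doc extends SearchResult { text: string; metadata: Record<string, unknown>; }

// Hypothetical document store; a real server would query a search index
// or vector database instead.
const CORPUS: Doc[] = [
  { id: "1", title: "AI Safety Landscape", url: "https://example.com/ai-safety",
    text: "A survey of open problems in AI safety.", metadata: { year: 2025 } },
];

// Handler for the `search` tool: { query } -> { results: [{ id, title, url }] }
function search(args: { query: string }): { results: SearchResult[] } {
  const q = args.query.toLowerCase();
  return {
    results: CORPUS
      .filter((d) => (d.title + " " + d.text).toLowerCase().includes(q))
      .map(({ id, title, url }) => ({ id, title, url })),
  };
}

// Handler for the `fetch` tool: { id } -> full document record
function fetchDoc(args: { id: string }): Doc | undefined {
  return CORPUS.find((d) => d.id === args.id);
}
```

The two-step shape keeps the model's context small: `search` returns only identifiers and titles, and the model pays the token cost of full text only for the items it actually fetches.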
3. Invoking a Tool – tools/call
Once a tool is selected, the client sends a tools/call request. This request includes the tool’s name and the arguments generated by the model 4.
3.1 Request / Response Shape
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "latest AI safety papers 2025" }
  }
}
```

The server executes the logic (e.g., querying a database) and returns the result. The result must be wrapped in a content array, which can contain text, images, or other resource types 4.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"results\":[{\"id\":\"1\",\"title\":\"AI Safety Landscape\",\"url\":\"https://example.com/ai-safety\"}]}"
      }
    ],
    "isError": false
  }
}
```

3.2 Structured vs. Unstructured Results
Tools can return results as unstructured text or structured data.
- Unstructured: Returned in the `content` field (e.g., plain text or JSON-encoded strings). This is the standard for OpenAI's `search` tool, which requires a JSON-encoded string inside a text block 3.
- Structured: If an `outputSchema` is defined, servers must provide structured results in the `structuredContent` field. For backward compatibility, servers should also return a serialized text version 4.
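When a tool declares an outputSchema, both representations can be produced from a single value. A minimal sketch, assuming the result is a plain JSON-serializable object:

```typescript
interface ToolResult {
  content: { type: "text"; text: string }[];
  structuredContent?: Record<string, unknown>;
  isError: boolean;
}

// Wrap a structured value so schema-aware clients read structuredContent
// while older clients fall back to the serialized text block, as the
// spec recommends for backward compatibility.
function structuredResult(value: Record<string, unknown>): ToolResult {
  return {
    content: [{ type: "text", text: JSON.stringify(value) }],
    structuredContent: value,
    isError: false,
  };
}
```

Keeping the text block an exact serialization of `structuredContent` means clients of either vintage see the same data.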
3.3 Error Handling
MCP distinguishes between protocol errors and execution errors to help the model recover gracefully 4.
| Layer | When it triggers | Example payload |
|---|---|---|
| Protocol error (JSON-RPC) | Unknown tool, malformed request | { "code": -32601, "message": "Method not found" } |
| Tool execution error | API timeout, business-logic failure | { "isError": true, "content": [{ "type": "text", "text": "Error: Rate limit exceeded" }] } |
By setting isError: true in the result rather than throwing a protocol exception, the server allows the LLM to see the error message and potentially try a different approach 5.
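In code, this distinction means execution failures are caught and folded into an ordinary result rather than propagated as JSON-RPC errors. A sketch, assuming a synchronous handler that may throw:

```typescript
interface ToolResult {
  content: { type: "text"; text: string }[];
  isError: boolean;
}

// Run a tool handler, converting any thrown error into an isError
// result so the model can read the message and try another approach.
function runTool(handler: () => string): ToolResult {
  try {
    return { content: [{ type: "text", text: handler() }], isError: false };
  } catch (e) {
    const msg = e instanceof Error ? e.message : String(e);
    return { content: [{ type: "text", text: `Error: ${msg}` }], isError: true };
  }
}
```

A JSON-RPC error, by contrast, would terminate the exchange at the protocol layer before the model ever saw the failure reason.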
4. Dynamic Tool Discovery – notifications/tools/list_changed
One of MCP’s most powerful features is dynamic self-discovery. Unlike static APIs, MCP servers can notify clients when their capabilities change, prompting a refresh of the tool list 2.
4.1 Why Dynamic Discovery Matters
Dynamic discovery allows the available toolset to adapt to the runtime context.
- Authentication: A tool like `getWhatsAppChatById` can be hidden if the user's authentication token expires and re-enabled once they log in 2.
- Context Awareness: Tools relevant only to a specific project or file type can appear only when that context is active 2.
- Service Availability: If a backend service goes down, its associated tools can be removed from the list to prevent errors 2.
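Server-side, these rules amount to filtering the advertised catalogue through the current session state. The session shape and tool entries below are hypothetical; the pattern — recompute the visible list whenever state changes, then emit list_changed if it differs — is the part that carries over.

```typescript
interface Session { authenticated: boolean; activeProject: string | null; }
interface ToolEntry { name: string; requiresAuth: boolean; project?: string; }

// Illustrative catalogue: one public tool, one auth-gated tool,
// one tool scoped to a specific project context.
const ALL_TOOLS: ToolEntry[] = [
  { name: "search", requiresAuth: false },
  { name: "getWhatsAppChatById", requiresAuth: true },
  { name: "lintProjectFiles", requiresAuth: false, project: "web-app" },
];

// Only tools whose auth and context requirements are met get listed;
// everything else stays invisible to the model until state changes.
function visibleTools(session: Session): string[] {
  return ALL_TOOLS
    .filter((t) => !t.requiresAuth || session.authenticated)
    .filter((t) => !t.project || t.project === session.activeProject)
    .map((t) => t.name);
}
```

When a recomputed list differs from the last one sent, the server emits notifications/tools/list_changed so clients know to call tools/list again.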
4.2 Notification Flow
- Server Detects Change: The server identifies a state change (e.g., auth expiry).
- Notification Sent: The server sends a `notifications/tools/list_changed` message to the client 4.
- Client Refreshes: The client receives the notification and automatically calls `tools/list` to get the updated catalogue 2.
4.3 Implementation Snippet (TypeScript SDK)
The following example demonstrates how a client might handle these dynamic updates using the Speakeasy MCP pattern 2.
```typescript
// Conceptual example of client-side handling
client.on('notifications/tools/list_changed', async () => {
  // The server has signaled a change; refresh the tool list
  const { tools } = await client.call('tools/list', {});
  console.log('Tool catalogue refreshed:', tools.map(t => t.name));
});
```

```typescript
// On the server side (conceptual):
// If auth fails, the server disables the tool and emits the notification
await server.sendNotification('notifications/tools/list_changed');
```

5. Security & Trust Considerations
Exposing tools to an LLM introduces risks, particularly regarding data privacy and unintended actions.
| Concern | Recommended Mitigation | Spec reference |
|---|---|---|
| Untrusted annotations | Clients must treat tool annotations as untrusted unless from a verified server. | § Tool → Tool Definition 4 |
| Input validation | Servers must validate all inputs against the schema before execution. | § Security → Input validation 4 |
| Prompt Injection | Malicious inputs could trick the model into fetching sensitive data. Trusting the developer isn’t enough; you must trust the content accessed. | Risks and safety 3 |
| Human confirmation | Applications should provide UI for users to confirm tool invocations, especially for write operations. | § User Interaction Model 4 |
| Data Leakage | Avoid putting sensitive info in tool definitions. Sanitize outputs before returning them to the model. | Risks and safety 3 |
OpenAI explicitly warns that custom MCP servers are third-party services not verified by OpenAI. Users should only connect to servers they trust, as these servers can access the data sent in the chat context 3.
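The input-validation row above can be made concrete. Full JSON Schema validation needs a dedicated validator library; the sketch below checks only required properties and primitive types, which is already enough to reject the malformed calls a model most often produces. The `InputSchema` shape here is a simplified subset, not the full spec type.

```typescript
interface InputSchema {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
}

// Minimal pre-execution check: every required property must be present,
// no unknown properties are accepted, and each value must match its
// declared primitive type. Real servers should use a complete
// JSON Schema validator instead of this subset.
function validateArgs(schema: InputSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required property: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unexpected property: ${key}`);
    else if (typeof value !== prop.type) errors.push(`wrong type for ${key}`);
  }
  return errors;
}
```

Rejecting bad arguments before execution also feeds the error-handling pattern above: the validation errors can be returned as an isError result the model can read and correct.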
6. Best-Practice Checklist for Building a Tool-Search-Enabled MCP Server
| ✅ Item | Why it matters | Quick tip |
|---|---|---|
| Declare Capabilities | Servers must declare the tools capability. Add listChanged if supporting dynamic updates. | Declare "capabilities": { "tools": { "listChanged": true } } during initialization 4. |
| Rich Descriptions | Clear descriptions and examples help the LLM choose the right tool and reduce hallucinations. | Include "description": "Add two numbers together" and specific parameter descriptions 5. |
| Output Schema | Providing an outputSchema enables strict validation and better integration with code. | Define the expected JSON structure of the result 4. |
| Dynamic Updates | Implement notifications/tools/list_changed for auth-dependent tools to avoid runtime errors. | Emit the event immediately when tool availability changes 2. |
| Error Reporting | Return errors within the result object (isError: true) so the model can handle them. | Don’t just throw exceptions; give the model a chance to self-correct 5. |
| Human-in-the-Loop | Always require approval for sensitive actions. | Use the require_approval flag or UI prompts for write actions 1. |
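On the first checklist item, the nesting matters: per the spec, listChanged is a boolean inside the tools capability object, not a sibling of it. A minimal declaration, shown here as the object a server would send during initialization:

```typescript
// Capability declaration exchanged during MCP initialization: the server
// advertises tool support and opts into list_changed notifications.
const serverCapabilities = {
  capabilities: {
    tools: { listChanged: true },
  },
};
```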
Bottom Line
Tool Search is the fundamental mechanism that transforms an LLM from a text generator into an autonomous agent. By standardizing how tools are discovered (tools/list) and invoked (tools/call), MCP allows agents to dynamically adapt to their environment.
- For Developers: The `search` and `fetch` pattern is becoming the industry standard for retrieval, mandated by platforms like OpenAI for deep research integrations 3.
- For Architects: The `listChanged` notification capability enables robust, state-aware systems where tools appear and disappear based on context and authentication, preventing the "stale tool" problem 2.
- For Security: While powerful, this capability requires strict input validation and human-in-the-loop safeguards to prevent prompt injection and unauthorized data access 4 3.
Implementing these patterns ensures your MCP server is not just a static API, but a dynamic extension of the AI’s cognitive process.