Skill Library

Advanced Code Development

MCP Client Developer

Build Model Context Protocol clients to connect AI assistants with external tools and data sources. Master the protocol for seamless AI-tool integration.

When to Use This Skill

  • Building AI-powered applications that need external tool access
  • Integrating Claude with custom data sources
  • Creating plugin architectures for AI assistants
  • Developing IDE extensions with AI capabilities

How to Use This Skill

1. Copy the AI Core Logic from the Instructions tab below.

2. Paste it into your AI's System Instructions or as your first message.

3. Provide your raw data or requirements as requested by the AI.

#mcp #protocol #api #claude #integration

System Directives

## MCP Client Implementation

### Basic Client Structure

```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["path/to/server.py"],
    env={"API_KEY": "your-key"}
)

async def use_mcp_tools():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")
            result = await session.call_tool(
                "search_database",
                {"query": "find users", "limit": 10}
            )
            return result
```

### Tool Discovery and Usage

```python
class MCPClient:
    def __init__(self, server_params):
        self.server_params = server_params
        self.tools = {}
        self.session = None

    async def connect(self):
        """Establish connection and discover tools"""
        self._client = stdio_client(self.server_params)
        self._read, self._write = await self._client.__aenter__()
        self.session = ClientSession(self._read, self._write)
        await self.session.__aenter__()
        await self.session.initialize()
        tools_response = await self.session.list_tools()
        for tool in tools_response.tools:
            self.tools[tool.name] = tool

    async def execute(self, tool_name: str, params: dict):
        """Execute a tool with error handling"""
        if tool_name not in self.tools:
            raise ValueError(f"Unknown tool: {tool_name}")
        try:
            result = await self.session.call_tool(tool_name, params)
            return {
                "success": True,
                "content": result.content,
                "is_error": result.isError
            }
        except Exception as e:
            return {"success": False, "error": str(e)}

    async def close(self):
        """Clean up resources"""
        if self.session:
            await self.session.__aexit__(None, None, None)
        if hasattr(self, '_client'):
            await self._client.__aexit__(None, None, None)
```

### Integration with LLM Applications

```python
from anthropic import Anthropic

class AIAssistantWithTools:
    def __init__(self, mcp_client):
        self.client = Anthropic()
        self.mcp = mcp_client

    async def process_query(self, user_query: str):
        tools_desc = []
        for name, tool in self.mcp.tools.items():
            tools_desc.append({
                "name": name,
                "description": tool.description,
                # The Anthropic Messages API expects the key "input_schema"
                "input_schema": tool.inputSchema
            })
        response = self.client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=4096,
            tools=tools_desc,
            messages=[{"role": "user", "content": user_query}]
        )
        if response.stop_reason == "tool_use":
            for block in response.content:
                if block.type == "tool_use":
                    result = await self.mcp.execute(block.name, block.input)
        return response
```

## Best Practices

1. **Connection Management**
   - Always use context managers for cleanup
   - Implement connection pooling for multiple servers
   - Handle reconnection on failures
2. **Error Handling**
   - Validate inputs before tool calls
   - Implement timeouts for long-running tools
   - Provide meaningful error messages
3. **Performance**
   - Cache tool definitions
   - Batch operations when possible
   - Use async/await for concurrent operations
4. **Security**
   - Validate tool outputs before using
   - Sanitize inputs to prevent injection
   - Use environment variables for secrets

## Common Patterns

```python
class MultiMCPClient:
    def __init__(self):
        self.clients = {}

    async def add_server(self, name: str, params: StdioServerParameters):
        client = MCPClient(params)
        await client.connect()
        self.clients[name] = client

    async def route_tool_call(self, server: str, tool: str, params: dict):
        if server not in self.clients:
            raise ValueError(f"Unknown server: {server}")
        return await self.clients[server].execute(tool, params)
```
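The timeout recommendation under Error Handling can be sketched as a small wrapper around `asyncio.wait_for`. This is an illustrative sketch, not part of the MCP SDK: the names `call_with_timeout` and `slow_tool` are hypothetical, and in practice the factory would return a fresh `session.call_tool(...)` coroutine.

```python
import asyncio

async def call_with_timeout(coro_factory, timeout_s=30.0, retries=2):
    """Run an async tool call with a timeout, retrying on timeout.

    `coro_factory` is a zero-argument callable returning a fresh coroutine,
    since a coroutine object cannot be awaited twice.
    """
    last_exc = None
    for _attempt in range(retries + 1):
        try:
            return await asyncio.wait_for(coro_factory(), timeout=timeout_s)
        except asyncio.TimeoutError as exc:
            last_exc = exc
    raise TimeoutError(
        f"tool call timed out after {retries + 1} attempts"
    ) from last_exc

# Demo with a stand-in for session.call_tool
async def slow_tool():
    await asyncio.sleep(0.2)
    return "ok"

print(asyncio.run(call_with_timeout(slow_tool, timeout_s=1.0)))  # ok
```

Passing a factory rather than a coroutine object is what makes retries possible; each attempt awaits a new call.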

Procedural Integration

This skill is formatted as a set of persistent system instructions. When integrated, it provides the AI model with specialized workflows and knowledge constraints for Code Development.

Model Compatibility
🤖 Claude Opus · 🧠 GPT-4
Code Execution: Required
MCP Tools: Required
Footprint: ~1,356 tokens