Skill Library

Advanced Automation Integration

Tool Orchestration Architect

Master complex tool orchestration across hundreds of tools with Claude Opus 4.5, one of the strongest tool-using models for building seamless multi-tool workflows.

When to Use This Skill

  • Building agents that integrate 10+ different tools/APIs
  • Complex workflows requiring sequential tool calls
  • Dynamic tool selection based on context
  • Tool chain optimization and debugging
  • Building Model Context Protocol (MCP) servers
  • Creating custom tool ecosystems

How to Use This Skill

1. Copy the AI Core Logic from the Instructions tab below.

2. Paste it into your AI's System Instructions or as your first message.

3. Provide your raw data or requirements as requested by the AI.

#tools #apis #integration #orchestration #automation #mcp

System Directives

## Tool Orchestration Framework

### Phase 1: Tool Inventory & Mapping

```
I'm building a system that needs to orchestrate multiple tools:

**Goal:** [High-level objective]
**Available Tools:** [List all tools/APIs available]
**Constraints:** [Rate limits, cost limits, latency requirements]

Help me map out the tool landscape:

1. **Tool Capabilities Matrix**
   For each tool, document:
   - Primary function and use cases
   - Input requirements (parameters, authentication)
   - Output format and structure
   - Rate limits and quotas
   - Typical latency
   - Error modes and handling
   - Dependencies on other tools

2. **Tool Relationships**
   - Which tools produce outputs consumed by other tools?
   - Which tools can serve as fallbacks for each other?
   - Which tools must be called in sequence?
   - Which tools can be called in parallel?

3. **Capability Gaps**
   - What functionality is needed but missing?
   - Can existing tools be combined to fill gaps?
   - What new tools should be added?

Generate a tool dependency graph and capability matrix.
```

### Phase 2: Intelligent Tool Selection

#### Context-Aware Tool Routing

````
Design a tool selection strategy for:

**Task:** [What needs to be accomplished]
**Available Tools:** [List relevant tools]
**Context:** [Current state, previous actions, user preferences]

Create tool selection logic:

1. **Primary Tool Selection**
   - Which tool is best suited for this task?
   - What factors influence the choice? (cost, speed, accuracy)
   - Decision tree or scoring algorithm

2. **Fallback Strategy**
   - If the primary tool fails, what's the backup?
   - What constitutes a "failure" (error, timeout, poor result)?
   - How many fallbacks before escalating to a human?

3. **Dynamic Adaptation**
   - How to learn which tools work best for which tasks?
   - When to try alternative approaches?
   - How to balance exploration vs. exploitation?

**Example Logic:**

```
Task: "Translate this document from English to Japanese"

Decision Flow:
1. Check document length
   - < 1000 words → Google Translate API (fast, cheap)
   - > 1000 words → DeepL API (better quality for long documents)
2. Check domain
   - Legal/Medical → Custom terminology API first
   - General → Standard translation API
3. Quality check
   - Run back-translation to verify accuracy
   - If confidence < 80%, escalate to a human translator
   - If confidence > 95%, proceed directly
4. Fallback
   - If the primary API fails → Try the secondary API
   - If both fail → Queue for human translation
```

Implement this selection logic with proper error handling.
````
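The Example Logic above translates directly into code. Here is a minimal Python sketch of length-based routing with a fallback chain; the tool names (`google`, `deepl`, `human_queue`) and the 1000-word threshold are illustrative assumptions, not real client bindings.

```python
# Minimal sketch of context-aware routing with fallbacks.
# The tool callables and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str
    call: Callable[..., Any]

def route_translation(doc: str, tools: dict[str, Tool]) -> Any:
    # 1. Primary selection based on document length
    words = len(doc.split())
    primary = tools["deepl"] if words > 1000 else tools["google"]
    fallback = tools["google"] if primary is tools["deepl"] else tools["deepl"]

    # 2. Try the primary, then the fallback
    for tool in (primary, fallback):
        try:
            return tool.call(doc)
        except Exception as err:  # real code would catch specific error types
            print(f"{tool.name} failed ({err}); trying next option")

    # 3. Both machine translators failed: queue for a human translator
    return tools["human_queue"].call(doc)
```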
#### Tool Call Composition

````
Break down this complex task into tool calls:

**Complex Task:** [Multi-step task description]
**Available Tools:** [List tools]

Decompose into a sequence:

1. **Sequential Steps**
   Step 1: [Tool A] - [Purpose]
     ↓ Pass output X
   Step 2: [Tool B] - [Purpose]
     ↓ Pass output Y
   Step 3: [Tool C] - [Purpose]
     ↓ Final result

2. **Parallel Opportunities**
   Which steps can run concurrently?
   - Steps with no data dependencies
   - Steps that can fan out and merge results

3. **Conditional Branching**
   Where do results determine the next steps?

   ```
   IF Tool A returns X THEN call Tool B
   ELSE IF Tool A returns Y THEN call Tool C
   ELSE retry Tool A with different parameters
   ```

4. **Data Transformation**
   Map data between tool formats:
   - Tool A outputs JSON, Tool B expects XML → Transform
   - Tool C outputs an array, Tool D expects CSV → Convert

Generate an executable workflow with proper data mapping.
````

### Phase 3: Error Handling & Resilience

#### Comprehensive Error Strategy

````
Design error handling for this tool chain:

**Workflow:** [Sequence of tool calls]
**Failure Modes:** [Known ways tools can fail]

For each tool in the chain:

1. **Expected Errors**
   - 400 Bad Request → Invalid input; validate before calling
   - 401 Unauthorized → Refresh credentials and retry
   - 404 Not Found → Resource doesn't exist; fail gracefully
   - 429 Rate Limited → Back off and retry with exponential delay
   - 500 Server Error → Retry with a fresh request after a delay
   - Timeout → Increase the timeout or break the work into smaller chunks

2. **Unexpected Errors**
   - Unknown error codes → Log full context, fail safely
   - Malformed responses → Validate against a schema, request a retry
   - Partial failures → Determine whether a partial result is usable

3. **Retry Logic**

   ```
   Max retries: 3
   Backoff strategy: Exponential (1s, 2s, 4s)
   Jitter: Add a random 0-500ms to prevent a thundering herd
   Circuit breaker: After 5 consecutive failures, pause 60s
   ```

4. **Rollback & Cleanup**
   If Step 3 fails after Steps 1-2 succeeded:
   - Undo Step 2 (delete the created resource)
   - Undo Step 1 (release the reservation)
   - Return the system to its initial state

5. **Graceful Degradation**
   If a tool is unavailable:
   - Can we proceed with reduced functionality?
   - Can we use cached/stale data temporarily?
   - Notify the user of the degraded experience

Provide error handling code with proper logging.
````
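As a reference point for the retry policy specified above (3 retries, exponential backoff, 0-500ms jitter), here is a minimal async Python sketch; the wrapped `tool_call` coroutine is a placeholder for any real tool client.

```python
# Sketch of the retry policy above: 3 retries with exponential backoff
# (1s, 2s, 4s) plus 0-500ms of jitter to avoid a thundering herd.
import asyncio
import random

async def call_with_retry(tool_call, *args, max_retries=3, base_delay=1.0):
    last_err = None
    for attempt in range(max_retries + 1):
        try:
            return await tool_call(*args)
        except Exception as err:  # real code would retry only transient errors
            last_err = err
            if attempt == max_retries:
                break
            # Exponential backoff with jitter before the next attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            await asyncio.sleep(delay)
    raise RuntimeError(f"Tool call failed after {max_retries} retries") from last_err
```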
#### Idempotency & Transaction Safety

````
Ensure the workflow is safe to retry:

**Problem:** Partial execution leaves the system in an inconsistent state

**Solution: Idempotent Operations**

1. **Idempotency Keys**
   - Generate a unique request ID upfront
   - Include the ID in all tool calls
   - The tool checks whether the ID was already processed
   - If yes, return the cached result (no duplicate side effects)

2. **State Machine Approach**

   ```
   State: INIT → STEP1_COMPLETE → STEP2_COMPLETE → FINAL

   On restart:
   - Check the current state
   - Resume from the last completed step
   - Don't re-execute completed steps
   ```

3. **Two-Phase Commit**
   For critical operations:
   - Phase 1: Prepare (validate, reserve resources)
   - Wait for confirmation that all participants are ready
   - Phase 2: Commit (execute irreversible actions)
   - If anything fails in Phase 1, roll back everything

4. **Compensating Transactions**
   For operations that can't be undone:
   - Create an offsetting transaction
   - Example: You can't un-send an email, but you can send a correction

Implement idempotent workflow orchestration.
````

### Phase 4: Performance Optimization

#### Parallel Execution

````
Optimize this workflow for speed:

**Current: Sequential (Slow)**
Tool A (2s) → Tool B (3s) → Tool C (2s) = 7s total

**Optimized: Parallel**

```
Identify dependencies:
- Tool A: No dependencies → Start immediately
- Tool B: Depends on Tool A's output
- Tool C: No dependencies → Start immediately

Parallel execution:
Time 0s: Launch Tool A and Tool C simultaneously
Time 2s: Tool A and Tool C complete
Time 2s: Launch Tool B (using Tool A's output)
Time 5s: Tool B completes

Total time: 5s (28% faster)
```

Design a parallel execution plan:

1. **Dependency Analysis**
   - Build a dependency graph
   - Identify the critical path (longest sequence)
   - Find independent operations

2. **Concurrency Strategy**
   - Max parallel calls (respect rate limits)
   - Thread pool or async/await pattern
   - Result aggregation logic

3. **Early Termination**
   - If any critical tool fails, cancel the remaining calls
   - If an optional tool fails, continue without it

Provide parallel execution code (async/await or promises).
````
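Here is one minimal asyncio rendering of the dependency plan above; `tool_a`, `tool_b`, and `tool_c` are placeholder coroutines that simulate the stated latencies.

```python
# Sketch of the parallel plan above: A and C have no dependencies and run
# concurrently; B waits on A's output. The tools are placeholders.
import asyncio

async def tool_a():
    await asyncio.sleep(2)  # simulate a 2s call
    return "output of A"

async def tool_b(a_output):
    await asyncio.sleep(3)  # simulate a 3s call
    return f"B processed {a_output}"

async def tool_c():
    await asyncio.sleep(2)  # simulate a 2s call
    return "output of C"

async def run_workflow():
    # Launch A and C immediately; neither depends on anything
    task_a = asyncio.create_task(tool_a())
    task_c = asyncio.create_task(tool_c())

    a_result = await task_a            # B is on the critical path after A
    b_result = await tool_b(a_result)
    c_result = await task_c

    return b_result, c_result          # ~5s total instead of 7s sequential

# asyncio.run(run_workflow())
```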
#### Caching & Memoization

````
Add intelligent caching to reduce redundant tool calls:

**Cache Strategy:**

1. **What to Cache**
   - Expensive operations (>1s latency, >$0.01 cost)
   - Deterministic results (same input → same output)
   - Low-change data (APIs with infrequent updates)

2. **Cache Keys**
   - Hash of the input parameters
   - Include the tool version (bust the cache on updates)
   - Example: `cache_key = hash(tool_name + version + params)`

3. **Expiration Policy**
   - Time-based: TTL of 5 minutes for real-time data, 24 hours for static data
   - Event-based: Invalidate on known triggers (user update, etc.)
   - LRU: Evict the least recently used entry when the cache is full

4. **Conditional Requests**
   - ETag: If-None-Match header for HTTP caching
   - Last-Modified: Only fetch if changed
   - Saves bandwidth and latency

5. **Prefetching**
   - Predict likely next tool calls
   - Fetch proactively while the user waits
   - Warm the cache for common workflows

**Implementation:**

```python
import hashlib
import json
import time

class CachedToolOrchestrator:
    def __init__(self):
        self.cache = {}

    def call_tool(self, tool_name, params, ttl=300):
        key = hashlib.sha256(
            f"{tool_name}:{json.dumps(params, sort_keys=True)}".encode()
        ).hexdigest()

        if key in self.cache:
            cached_result, timestamp = self.cache[key]
            if time.time() - timestamp < ttl:
                return cached_result  # Cache hit

        # _execute_tool is implemented by the concrete orchestrator
        result = self._execute_tool(tool_name, params)
        self.cache[key] = (result, time.time())
        return result
```

Implement a caching layer for tool orchestration.
````

## Advanced Techniques

### Model Context Protocol (MCP) Integration

````
Build an MCP server to expose custom tools:

**MCP Server Structure:**

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js';

const server = new Server({
  name: 'my-tools-server',
  version: '1.0.0'
});

// Register tools
server.setRequestHandler('tools/list', async () => ({
  tools: [
    {
      name: 'search_database',
      description: 'Search internal database for records',
      inputSchema: {
        type: 'object',
        properties: {
          query: { type: 'string' },
          limit: { type: 'number', default: 10 }
        },
        required: ['query']
      }
    },
    {
      name: 'send_email',
      description: 'Send email to user',
      inputSchema: {
        type: 'object',
        properties: {
          to: { type: 'string', format: 'email' },
          subject: { type: 'string' },
          body: { type: 'string' }
        },
        required: ['to', 'subject', 'body']
      }
    }
  ]
}));

// Implement tool handlers
server.setRequestHandler('tools/call', async (request) => {
  const { name, arguments: args } = request.params;

  switch (name) {
    case 'search_database':
      return await searchDatabase(args.query, args.limit);
    case 'send_email':
      return await sendEmail(args.to, args.subject, args.body);
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
});
```

Best practices:
- Clear tool descriptions
- Strict input validation with JSON Schema
- Rate limiting per tool
- Audit logging of all tool calls
- Authentication and authorization

Implement an MCP server with your custom tools.
````

### Dynamic Tool Discovery

````
Enable agents to discover and use new tools at runtime:

**Tool Registry:**

```json
{
  "tools": [
    {
      "id": "weather_api",
      "name": "Get Weather",
      "description": "Fetch current weather for a location",
      "endpoint": "https://api.weather.com/v1/current",
      "auth_type": "api_key",
      "parameters": {
        "location": { "type": "string", "required": true },
        "units": { "type": "enum", "values": ["metric", "imperial"] }
      },
      "output_format": "json",
      "cost_per_call": 0.001,
      "avg_latency_ms": 200
    }
  ]
}
```

**Discovery Process:**
1. Agent queries the registry for tools matching the required capability
2. Filters by constraints (cost, latency, available auth)
3. Selects the best tool using a scoring function
4. Generates the API call dynamically from the spec
5. Validates the response against the output schema
6. Updates the tool's performance metrics

This enables adding new tools without code changes!

Implement a dynamic tool discovery system.
````

A minimal Python sketch of this discovery flow appears at the end of these directives.

## Best Practices

1. **Tool Call Budget**: Set limits on total tool calls per task to prevent runaway costs
2. **Observability**: Log every tool call with inputs, outputs, latency, and cost
3. **Testing**: Mock tool APIs for unit tests; use test-mode APIs when available
4. **Documentation**: Maintain a tool catalog with examples for each tool
5. **Version Management**: Handle tool API versioning and deprecation notices
6. **Security**: Validate all inputs, sanitize outputs, use least-privilege auth
7. **User Feedback**: Allow users to approve high-stakes tool calls before execution

## Example Session

**User**: "Build an agent that monitors our API, and if errors spike, creates a PagerDuty incident and posts to Slack."

**Opus Response**:

> I'll orchestrate tools for monitoring, alerting, and communication:
>
> **Tool Chain:**
> 1. **Monitor**: Datadog API → Query error rate (last 5 min)
> 2. **Detect**: If error_rate > threshold (2x baseline)
> 3. **Alert Branch:**
>    - **PagerDuty API** → Create P2 incident
>    - **Slack API** → Post to #incidents channel (parallel with PagerDuty)
> 4. **Enrich**: Fetch recent error logs from Datadog
> 5. **Update**: Add error logs to the PagerDuty incident description
>
> **Implementation:**
> ```python
> async def monitor_and_alert():
>     # Step 1: Check error rate
>     metrics = await datadog.query("avg:api.errors{*}.as_rate()")
>     current_rate = metrics['series'][0]['pointlist'][-1][1]
>     baseline = get_baseline_error_rate()
>
>     if current_rate > baseline * 2:
>         # Step 2: Parallel alerts
>         incident, slack_msg = await asyncio.gather(
>             pagerduty.create_incident({
>                 "title": f"API Error Rate Spike: {current_rate:.2f}/sec",
>                 "severity": "error",
>                 "service": "api-production"
>             }),
>             slack.post_message({
>                 "channel": "#incidents",
>                 "text": f"🚨 API errors spiked to {current_rate:.2f}/sec"
>             })
>         )
>
>         # Step 3: Enrich with logs
>         logs = await datadog.get_logs(severity="error", limit=10)
>         await pagerduty.add_note(incident.id, format_logs(logs))
> ```
>
> **Error Handling:**
> - If Datadog fails → Retry 3x, then escalate to a manual check
> - If PagerDuty fails → Send an urgent Slack DM to the on-call engineer
> - If Slack fails → Log locally; don't block the PagerDuty incident
>
> Shall I add caching to reduce Datadog API costs?

## Related Resources

- [Model Context Protocol](https://modelcontextprotocol.io/) for tool standardization
- [OpenAPI Spec](https://swagger.io/specification/) for API documentation
- [JSON Schema](https://json-schema.org/) for input validation
- [Temporal](https://temporal.io/) for durable workflow orchestration
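To make the Dynamic Tool Discovery flow above concrete, here is a minimal Python sketch of steps 1-3 (query, filter, score). The registry shape mirrors the JSON example; the scoring weights are arbitrary illustrative choices, not a prescribed formula.

```python
# Minimal sketch of discovery steps 1-3: query the registry, filter by
# constraints, score the candidates. Registry fields mirror the JSON
# example above; the score weights are illustrative assumptions.
def discover_tool(registry, capability, max_cost=0.01, max_latency_ms=500):
    # Step 1: match on capability keywords in the description
    candidates = [
        t for t in registry["tools"]
        if capability.lower() in t["description"].lower()
    ]

    # Step 2: filter by hard constraints
    candidates = [
        t for t in candidates
        if t["cost_per_call"] <= max_cost
        and t["avg_latency_ms"] <= max_latency_ms
    ]
    if not candidates:
        return None  # no tool fits; fall back or escalate

    # Step 3: score candidates; cheaper and faster is better
    def score(t):
        return -(t["cost_per_call"] * 1000 + t["avg_latency_ms"] / 100)

    return max(candidates, key=score)

# Usage with the registry JSON above:
# best = discover_tool(registry, "weather", max_cost=0.005)
```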

Procedural Integration

This skill is formatted as a set of persistent system instructions. When integrated, it provides the AI model with specialized workflows and knowledge constraints for Automation Integration.

Model Compatibility
🤖 Claude Opus
Code Execution: Required
MCP Tools: Required
Footprint: ~3,751 tokens