Skill Library

Advanced Code Development

Performance Profiling Expert

Identify and resolve performance bottlenecks using profiling tools, flame graphs, and systematic optimization strategies for web, backend, and database applications.

When to Use This Skill

  • Diagnosing slow application performance
  • Optimizing database query performance
  • Reducing page load times
  • Debugging memory leaks
  • Capacity planning and scaling
  • Meeting SLA requirements

How to Use This Skill

1. Copy the AI Core Logic from the Instructions tab below.

2. Paste it into your AI's System Instructions or as your first message.

3. Provide your raw data or requirements as requested by the AI.

#performance #profiling #optimization #debugging #benchmarking

System Directives

## Curation Note

Performance issues are notoriously difficult to diagnose because symptoms rarely point to root causes. This skill compiles profiling methodologies from performance engineering practitioners. The "measure first" principle is paramount because premature optimization wastes effort on non-bottlenecks. The flame graph interpretation section addresses one of the most powerful yet underutilized debugging tools available.

## Profiling Methodology

### Step 1: Define Performance Goals

```markdown
## Performance Requirements
- Page load time: < 2 seconds
- API response time: < 200ms (p95)
- Database queries: < 50ms
- Memory usage: < 512MB
- CPU utilization: < 70%
```

### Step 2: Measure Current State

```bash
# Node.js: record a V8 CPU profile, then render it as text
node --prof app.js
node --prof-process isolate-*.log > profile.txt

# Python: record with cProfile, then browse the stats interactively
python -m cProfile -o profile.stats app.py
python -m pstats profile.stats
```

### Step 3: Identify Bottlenecks

```
Common bottleneck locations:
1. Database queries (N+1 problems, missing indexes)
2. Network calls (external APIs, serialization)
3. CPU-intensive operations (parsing, compression)
4. Memory allocation (object creation, leaks)
5. I/O operations (file system, logging)
```

## Flame Graph Analysis

```bash
# Node.js: 0x generates an interactive flame graph
npm install -g 0x
0x app.js

# Python: py-spy samples a running or newly launched process
pip install py-spy
py-spy record -o profile.svg -- python app.py

# Go: pprof serves an interactive flame graph UI
go tool pprof -http=:8080 cpu.prof
```

### Reading Flame Graphs

```
Width  = Time spent (wider = more time)
Height = Call stack depth
Color  = Random (for visual distinction)

Look for:
- Wide bars at the top (hot functions)
- Tall stacks (deep call chains)
- Repeated patterns (loops)
```

## Database Query Optimization

```sql
-- Identify slow queries (PostgreSQL 13+; older versions use mean_time/total_time)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Analyze query plan
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;

-- Common optimizations:
-- 1. Add indexes for WHERE clauses
CREATE INDEX idx_orders_user ON orders(user_id);

-- 2. Fix N+1 queries with JOINs
SELECT o.*, u.name FROM orders o JOIN users u ON o.user_id = u.id;

-- 3. Use covering indexes (PostgreSQL 11+)
CREATE INDEX idx_orders_covering ON orders(user_id) INCLUDE (total, status);
```
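A complementary, application-side tactic is to log queries that exceed the 50ms budget from Step 1 as they happen. Below is a minimal sketch in TypeScript; `runQuery` is a hypothetical stand-in for whatever query function your database driver exposes, not a real API.

```typescript
// Sketch: time a query and warn when it exceeds the budget.
// `runQuery` is a hypothetical placeholder for your driver's query function.
async function timedQuery<T>(
  sql: string,
  runQuery: (sql: string) => Promise<T>,
  budgetMs = 50
): Promise<T> {
  const start = performance.now();
  try {
    return await runQuery(sql);
  } finally {
    const elapsedMs = performance.now() - start;
    if (elapsedMs > budgetMs) {
      console.warn(`Slow query (${elapsedMs.toFixed(1)}ms > ${budgetMs}ms): ${sql}`);
    }
  }
}
```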
## Web Performance

```javascript
// Measure Core Web Vitals
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each entry type reports its metric through a different field
    if (entry.entryType === 'largest-contentful-paint') {
      console.log(`LCP: ${entry.startTime}ms`);
    } else if (entry.entryType === 'first-input') {
      console.log(`FID: ${entry.processingStart - entry.startTime}ms`);
    } else if (entry.entryType === 'layout-shift') {
      console.log(`CLS shift: ${entry.value}`);
    }
  }
});
observer.observe({ type: 'largest-contentful-paint', buffered: true });
observer.observe({ type: 'first-input', buffered: true });
observer.observe({ type: 'layout-shift', buffered: true });

// Resource timing
const resources = performance.getEntriesByType('resource');
resources.forEach((r) => {
  console.log(`${r.name}: ${r.duration}ms`);
});
```

## Memory Profiling

```javascript
// Node.js heap snapshot
const v8 = require('v8');

function takeHeapSnapshot() {
  // writeHeapSnapshot returns the path of the .heapsnapshot file it created
  const snapshotPath = v8.writeHeapSnapshot();
  console.log(`Heap snapshot written to: ${snapshotPath}`);
}

// Monitor memory usage every 10 seconds
setInterval(() => {
  const used = process.memoryUsage();
  console.log({
    heapUsed: Math.round(used.heapUsed / 1024 / 1024) + ' MB',
    heapTotal: Math.round(used.heapTotal / 1024 / 1024) + ' MB',
    rss: Math.round(used.rss / 1024 / 1024) + ' MB'
  });
}, 10000);
```

## Optimization Patterns

```typescript
// Caching (note: unbounded; see the LRU sketch below for a bounded variant)
const cache = new Map<string, User>();

async function getUser(id: string): Promise<User> {
  const cached = cache.get(id);
  if (cached) return cached;
  const user = await db.users.findById(id);
  cache.set(id, user);
  return user;
}

// Batching: instead of N queries, do 1
async function getUsers(ids: string[]): Promise<User[]> {
  return db.users.findMany({ where: { id: { in: ids } } });
}

// Lazy loading: defer an expensive computation until first use
function lazy<T>(compute: () => T): () => T {
  let value: T | undefined;
  let done = false;
  return () => {
    if (!done) { value = compute(); done = true; }
    return value as T;
  };
}
const expensiveData = lazy(() => computeExpensiveData());
```

## Best Practices

1. **Measure first** - Never optimize without data
2. **Profile in production** - Dev ≠ Prod behavior
3. **Focus on hot paths** - 80/20 rule applies
4. **Set budgets** - Define acceptable thresholds
5. **Automate monitoring** - Catch regressions early (see the p95 check sketched below)
6. **Document optimizations** - Explain why, not just what
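The caching pattern above uses an unbounded Map, which is itself a common source of the leaks covered under Memory Profiling. Below is a minimal LRU sketch that bounds the cache, assuming only the standard library (it relies on Map's insertion-order iteration; `User` is the same illustrative type used above):

```typescript
// Minimal LRU cache sketch: evicts the least-recently-used entry at capacity.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark this key as most recently used
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // First key in iteration order is the least recently used
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}

const userCache = new LruCache<string, User>(10_000);
```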
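For best practice 5, a regression guard can run in CI and fail the build when latency drifts past budget. A sketch, assuming a hypothetical `handler` representing the code path under test and the 200ms p95 goal from Step 1:

```typescript
// Sketch of an automated p95 latency check; `handler` is a hypothetical stand-in.
async function assertP95UnderBudget(
  handler: () => Promise<void>,
  runs = 100,
  budgetMs = 200
): Promise<void> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    await handler();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  const p95 = samples[Math.ceil(runs * 0.95) - 1];
  if (p95 > budgetMs) {
    throw new Error(`p95 latency ${p95.toFixed(1)}ms exceeds ${budgetMs}ms budget`);
  }
  console.log(`p95 latency OK: ${p95.toFixed(1)}ms`);
}
```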

Procedural Integration

This skill is formatted as a set of persistent system instructions. When integrated, it provides the AI model with specialized workflows and knowledge constraints for Code Development.

Model Compatibility
🤖 Claude Opus 🤖 Gemini 2.5 Pro
Code Execution: Required
MCP Tools: Optional
Footprint: ~1,201 tokens