Adds the plugin-dev plugin to the public marketplace. A comprehensive toolkit for
developing Claude Code plugins with 7 expert skills, 3 AI-assisted agents, and
extensive documentation covering the complete plugin development lifecycle.
Key features:
- 7 skills: hook-development, mcp-integration, plugin-structure, plugin-settings,
command-development, agent-development, skill-development
- 3 agents: agent-creator (AI-assisted generation), plugin-validator (structure
validation), skill-reviewer (quality review)
- 1 command: /plugin-dev:create-plugin (guided 8-phase workflow)
- 10 utility scripts for validation and testing
- 21 reference docs with deep-dive guidance (~11k words)
- 9 working examples demonstrating best practices
Changes for public release:
- Replaced all references to internal repositories with "Claude Code"
- Updated MCP examples: internal.company.com → api.example.com
- Updated token variables: ${INTERNAL_TOKEN} → ${API_TOKEN}
- Reframed agent-creation-system-prompt as "proven in production"
- Preserved all ${CLAUDE_PLUGIN_ROOT} references (186 total)
- Preserved valuable test blocks in core modules
Validation:
- All 3 agents validated successfully with validate-agent.sh
- All JSON files validated with jq
- Zero internal references remaining
- 59 files migrated, 21,971 lines added
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Agent Triggering Examples: Best Practices
Complete guide to writing effective <example> blocks in agent descriptions for reliable triggering.
Example Block Format
The standard format for triggering examples:
<example>
Context: [Describe the situation - what led to this interaction]
user: "[Exact user message or request]"
assistant: "[How Claude should respond before triggering]"
<commentary>
[Explanation of why this agent should be triggered in this scenario]
</commentary>
assistant: "[How Claude triggers the agent - usually 'I'll use the [agent-name] agent...']"
</example>
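In practice these blocks live inside the description field of the agent's frontmatter. The following is a minimal sketch, assuming the standard Markdown-with-YAML-frontmatter agent file layout; the agent name and wording are illustrative, and the exact field layout may differ in your setup:

---
name: code-reviewer
description: |
  Reviews code changes for quality and security issues. Use proactively
  after significant code has been written.

  <example>
  Context: User just implemented a new feature
  user: "I've added the user registration feature"
  assistant: "I'll use the code-reviewer agent to analyze the implementation."
  <commentary>
  New code was written; proactively trigger the code-reviewer agent.
  </commentary>
  </example>
---

[The agent's system prompt follows the frontmatter.]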
Anatomy of a Good Example
Context
Purpose: Set the scene - what happened before the user's message
Good contexts:
Context: User just implemented a new authentication feature
Context: User has created a PR and wants it reviewed
Context: User is debugging a test failure
Context: After writing several functions without documentation
Bad contexts:
Context: User needs help (too vague)
Context: Normal usage (not specific)
User Message
Purpose: Show the exact phrasing that should trigger the agent
Good user messages:
user: "I've added the OAuth flow, can you check it?"
user: "Review PR #123"
user: "Why is this test failing?"
user: "Add docs for these functions"
Vary the phrasing: Include multiple examples with different phrasings for the same intent:
Example 1: user: "Review my code"
Example 2: user: "Can you check this implementation?"
Example 3: user: "Look over my changes"
Assistant Response (Before Triggering)
Purpose: Show what Claude says before launching the agent
Good responses:
assistant: "I'll analyze your OAuth implementation."
assistant: "Let me review that PR for you."
assistant: "I'll investigate the test failure."
Proactive example:
assistant: "Great! Now let me review the code quality."
<commentary>
Code was just written, proactively trigger review agent.
</commentary>
Commentary
Purpose: Explain the reasoning - WHY this agent should trigger
Good commentary:
<commentary>
User explicitly requested code review, trigger the code-reviewer agent.
</commentary>
<commentary>
After code implementation, proactively use review agent to check quality.
</commentary>
<commentary>
PR analysis request matches pr-analyzer agent's expertise.
</commentary>
Include decision logic:
<commentary>
User just wrote tests (Edit tool used on test files). The test-analyzer agent
should check test quality and coverage before continuing.
</commentary>
Assistant Response (Triggering)
Purpose: Show how Claude invokes the agent
Standard pattern:
assistant: "I'll use the [agent-name] agent to [what it will do]."
Examples:
assistant: "I'll use the code-reviewer agent to analyze the changes."
assistant: "Let me use the test-generator agent to create comprehensive tests."
assistant: "I'll use the security-analyzer agent to check for vulnerabilities."
Example Types
Type 1: Explicit Request
User directly asks for what the agent does:
<example>
Context: User explicitly requests security analysis
user: "Check my code for security vulnerabilities"
assistant: "I'll use the security-analyzer agent to perform a thorough security review."
<commentary>
Direct security analysis request triggers the security-analyzer agent.
</commentary>
</example>
Type 2: Proactive Triggering
Agent triggers after relevant work without explicit request:
<example>
Context: User just implemented database queries
user: "I've added the user search functionality"
assistant: "Great! Let me review it for security."
<commentary>
Database code written. Proactively trigger security-analyzer to check
for SQL injection and other database security issues.
</commentary>
assistant: "I'll use the security-analyzer agent to check the database code."
</example>
Type 3: Implicit Request
User implies need without stating it directly:
<example>
Context: User mentions code is hard to understand
user: "This code is confusing and hard to follow"
assistant: "I can help improve the code clarity."
<commentary>
User indicates complexity issues. Trigger code-simplifier agent to refactor
for better readability.
</commentary>
assistant: "I'll use the code-simplifier agent to suggest improvements."
</example>
Type 4: Tool Usage Pattern
Agent triggers based on prior tool usage:
<example>
Context: User made multiple edits to test files
user: "I've updated all the tests"
assistant: "Great! Let me verify test quality."
<commentary>
Multiple Edit tools used on test files. Proactively trigger test-quality-analyzer
to ensure tests follow best practices.
</commentary>
assistant: "I'll use the test-quality-analyzer agent to review the tests."
</example>
Multiple Examples Strategy
Cover Different Phrasings
<example>
user: "Review my code"
[...]
</example>
<example>
user: "Can you check my implementation?"
[...]
</example>
<example>
user: "Look over these changes"
[...]
</example>
Cover Proactive and Reactive
<example>
Context: User explicitly requests review
user: "Review my code for issues"
[...]
</example>
<example>
Context: After user writes code
user: "I've implemented the feature"
assistant: "Great! Now let me review it."
<commentary>
Code written, proactively review.
</commentary>
[...]
</example>
Cover Edge Cases
<example>
Context: Typical usage
user: "Check my PR"
[...]
</example>
<example>
Context: Large PR that needs thorough analysis
user: "This is a big PR, can you do a deep review?"
[...]
<commentary>
Large PR mentioned, use thorough analysis mode.
</commentary>
</example>
Common Mistakes
❌ Missing Context
<example>
user: "Review my code"
assistant: "I'll use the code-reviewer agent."
</example>
Why bad: No context about what led to this request.
✅ With Context
<example>
Context: User just finished implementing authentication
user: "Review my code"
assistant: "I'll use the code-reviewer agent to analyze the auth implementation."
<commentary>
Auth code written, review for security and best practices.
</commentary>
</example>
❌ No Commentary
<example>
Context: User requests review
user: "Check my changes"
assistant: "I'll use the reviewer agent."
</example>
Why bad: Doesn't explain WHY the agent triggers.
✅ With Commentary
<example>
Context: User requests review after making changes
user: "Check my changes"
assistant: "I'll use the code-reviewer agent to analyze the modifications."
<commentary>
Code review request triggers agent to check quality, security, and adherence to standards.
</commentary>
</example>
❌ Agent Responds Directly
<example>
user: "Review my code"
assistant: "I found the following issues: [lists issues]"
</example>
Why bad: Shows the agent's output instead of the triggering step.
✅ Shows Triggering
<example>
user: "Review my code"
assistant: "I'll use the code-reviewer agent to perform the review."
<commentary>
Review request triggers the agent.
</commentary>
</example>
How Many Examples?
Minimum: 2 Examples
Cover at least:
- Explicit request
- One variation or proactive trigger
Recommended: 3-4 Examples
Cover:
- Explicit request (direct ask)
- Implicit request (user implies need)
- Proactive trigger (after relevant work)
- Edge case or specific scenario
Maximum: 6 Examples
More than 6 makes the description too long. Focus on the most important scenarios.
Template Library
Code Review Agent
<example>
Context: User just implemented a new feature
user: "I've added the user registration feature"
assistant: "Great! Let me review the code quality."
<commentary>
Code written, proactively trigger code-reviewer agent.
</commentary>
assistant: "I'll use the code-reviewer agent to analyze the implementation."
</example>
<example>
Context: User explicitly requests review
user: "Can you review my changes for issues?"
assistant: "I'll use the code-reviewer agent to perform a thorough review."
<commentary>
Explicit review request triggers the agent.
</commentary>
</example>
Test Generation Agent
<example>
Context: User implemented functions without tests
user: "I've added the data processing functions"
assistant: "Great! Let me generate tests for these."
<commentary>
New code without tests, proactively trigger test-generator agent.
</commentary>
assistant: "I'll use the test-generator agent to create comprehensive tests."
</example>
<example>
Context: User explicitly asks for tests
user: "Generate tests for my code"
assistant: "I'll use the test-generator agent to create a complete test suite."
<commentary>
Direct test generation request triggers the agent.
</commentary>
</example>
Documentation Agent
<example>
Context: User added new API endpoints
user: "I've implemented the REST API endpoints"
assistant: "I'll document these endpoints."
<commentary>
New API code, proactively generate documentation.
</commentary>
assistant: "I'll use the api-docs-writer agent to create comprehensive documentation."
</example>
<example>
Context: User requests documentation
user: "Write docs for this API"
assistant: "I'll use the api-docs-writer agent to generate clear documentation."
<commentary>
Documentation request triggers the agent.
</commentary>
</example>
Validation Agent
<example>
Context: User ready to commit changes
user: "I'm ready to commit this code"
assistant: "Let me validate it first."
<commentary>
Before committing, proactively validate with the code-validator agent.
</commentary>
assistant: "I'll use the code-validator agent to check for issues."
</example>
<example>
Context: User asks for validation
user: "Validate my implementation"
assistant: "I'll use the code-validator agent to verify correctness."
<commentary>
Explicit validation request triggers the agent.
</commentary>
</example>
Debugging Triggering Issues
Agent Not Triggering
Check that:
- Examples include relevant keywords from user message
- Context matches actual usage scenarios
- Commentary explains triggering logic clearly
- Assistant shows use of Agent tool in examples
Fix: Add more examples covering different phrasings.
Agent Triggers Too Often
Check whether:
- Examples are too broad or generic
- Triggering conditions overlap with other agents
- Commentary doesn't distinguish when NOT to use
Fix: Make examples more specific and add negative examples (one possible shape is sketched below).
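A negative example pairs a realistic user message with commentary telling Claude NOT to launch the agent. A possible shape, with a hypothetical agent name:

<example>
Context: User asks a conceptual question about testing
user: "What's the difference between unit and integration tests?"
assistant: "[Answers the question directly]"
<commentary>
This is a conceptual question, not a request to analyze existing tests.
Do NOT trigger the test-quality-analyzer agent; respond directly instead.
</commentary>
</example>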
Agent Triggers in Wrong Scenarios
Check whether:
- Examples don't match actual intended use
- Commentary suggests inappropriate triggering
Fix: Revise examples to show only correct triggering scenarios.
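The structural checks above can be partially mechanized. Below is a rough Python sketch of a linter for example blocks; it is a hypothetical helper, not one of the plugin's shipped scripts, and it only verifies that each <example> contains the expected parts and that the overall count falls in the recommended range:

import re
import sys

# Parts every triggering example should contain (see the checklists above).
REQUIRED_PARTS = ["Context:", "user:", "assistant:", "<commentary>"]

def lint_examples(text: str) -> list[str]:
    """Return a list of structural problems found in <example> blocks."""
    problems = []
    blocks = re.findall(r"<example>(.*?)</example>", text, re.DOTALL)
    if len(blocks) < 2:
        problems.append(f"only {len(blocks)} example(s); at least 2 are recommended")
    if len(blocks) > 6:
        problems.append(f"{len(blocks)} examples; more than 6 bloats the description")
    for i, block in enumerate(blocks, 1):
        for part in REQUIRED_PARTS:
            if part not in block:
                problems.append(f"example {i} is missing '{part}'")
    return problems

if __name__ == "__main__":
    # Usage: python lint_examples.py path/to/agent.md
    text = open(sys.argv[1], encoding="utf-8").read()
    for line in lint_examples(text) or ["no structural problems found"]:
        print(line)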
Best Practices Summary
✅ DO:
- Include 2-4 concrete, specific examples
- Show both explicit and proactive triggering
- Provide clear context for each example
- Explain reasoning in commentary
- Vary user message phrasing
- Show Claude using Agent tool
❌ DON'T:
- Use generic, vague examples
- Omit context or commentary
- Show only one type of triggering
- Skip the agent invocation step
- Make examples too similar
- Forget to explain why agent triggers
Conclusion
Well-crafted examples are crucial for reliable agent triggering. Invest time in creating diverse, specific examples that clearly demonstrate when and why the agent should be used.