
Conversation

@mattpodwysocki
Contributor

Overview

Adds MCP prompts feature to capture geospatial workflow expertise in reusable templates. Prompts guide AI agents through multi-step tasks with best practices built-in, complementing the tool description improvements in #78.

What are MCP Prompts?

Prompts are pre-built workflow templates that:

  • Guide agents through multi-step tasks (e.g., geocode → search → visualize)
  • Capture domain expertise and best practices
  • Provide consistent, user-friendly output formatting
  • Can be semantically matched to user intents (with RAG)

Think of them as "recipes" for common geospatial workflows.

Infrastructure Added

Base Classes

  • BasePrompt - Abstract base class with:
    • Argument validation
    • Metadata generation for MCP protocol
    • Message generation with filled-in arguments
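The base class described above might look roughly like the following sketch. This is illustrative TypeScript under assumed names and signatures (including the made-up `EchoPrompt` subclass), not the PR's actual code:

```typescript
// Illustrative sketch of a BasePrompt-style abstract class; names and
// signatures are assumptions, not the actual implementation from this PR.

export interface PromptArgument {
  name: string;
  description: string;
  required: boolean;
}

export interface PromptMessage {
  role: "user" | "assistant";
  content: { type: "text"; text: string };
}

export abstract class BasePrompt {
  abstract readonly name: string;
  abstract readonly description: string;
  abstract readonly arguments: PromptArgument[];

  // Metadata in the shape a prompts/list response expects.
  getMetadata() {
    return {
      name: this.name,
      description: this.description,
      arguments: this.arguments,
    };
  }

  // Throws if a required argument is missing.
  validateArguments(args: Record<string, string>): void {
    for (const arg of this.arguments) {
      if (arg.required && !(arg.name in args)) {
        throw new Error(`Missing required argument: ${arg.name}`);
      }
    }
  }

  // Subclasses render their workflow template with the filled-in arguments.
  abstract getMessages(args: Record<string, string>): PromptMessage[];
}

// Made-up subclass purely to show the contract in action.
export class EchoPrompt extends BasePrompt {
  readonly name = "echo";
  readonly description = "Echo a message back";
  readonly arguments = [
    { name: "message", description: "Text to echo", required: true },
  ];

  getMessages(args: Record<string, string>): PromptMessage[] {
    this.validateArguments(args);
    return [{ role: "user", content: { type: "text", text: args.message } }];
  }
}
```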

Prompt Registry

  • promptRegistry.ts - Central registry for all prompts
  • getAllPrompts() - Get all available prompts
  • getPromptByName(name) - Get specific prompt
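A registry with this surface can be very small. The sketch below is illustrative (the `PromptLike` shape and `registerPrompt` helper are assumptions, not the PR's exact `promptRegistry.ts`):

```typescript
// Illustrative sketch of a promptRegistry module; PromptLike stands in for
// the real BasePrompt type.
interface PromptLike {
  name: string;
  description: string;
}

const registry = new Map<string, PromptLike>();

// Enforces the unique-name invariant at registration time.
export function registerPrompt(prompt: PromptLike): void {
  if (registry.has(prompt.name)) {
    throw new Error(`Duplicate prompt name: ${prompt.name}`);
  }
  registry.set(prompt.name, prompt);
}

export function getAllPrompts(): PromptLike[] {
  return [...registry.values()];
}

export function getPromptByName(name: string): PromptLike | undefined {
  return registry.get(name);
}
```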

Server Integration

  • Added prompts capability to MCP server
  • Registered ListPrompts request handler
  • Registered GetPrompt request handler
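The real server registers these handlers through the MCP SDK (`setRequestHandler` with `ListPromptsRequestSchema` / `GetPromptRequestSchema`); the plain dispatcher below only models the same routing so the shape of the two responses is visible. All names here are illustrative:

```typescript
// Hedged sketch of the prompts-capability routing; not the actual MCP SDK
// handler registration from this PR.
type PromptEntry = {
  name: string;
  description: string;
  render: (args: Record<string, string>) => string;
};

const prompts: PromptEntry[] = [
  {
    name: "find-places-nearby",
    description: "Search for places near a location with map visualization",
    render: (args) => `Find ${args.category ?? "places"} near ${args.location}.`,
  },
];

export function handleRequest(method: string, params?: any): any {
  switch (method) {
    case "prompts/list":
      // Matches the { prompts: [{ name, description, ... }] } response shape.
      return {
        prompts: prompts.map(({ name, description }) => ({ name, description })),
      };
    case "prompts/get": {
      const prompt = prompts.find((p) => p.name === params.name);
      if (!prompt) throw new Error(`Unknown prompt: ${params.name}`);
      // Returns the filled-in template as MCP prompt messages.
      return {
        messages: [
          {
            role: "user",
            content: { type: "text", text: prompt.render(params.arguments ?? {}) },
          },
        ],
      };
    }
    default:
      throw new Error(`Unsupported method: ${method}`);
  }
}
```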

Prompts Added (3)

1. find-places-nearby

Search for places near a location with map visualization.

Workflow: Geocode location → Category search → Display results on map

Arguments:

  • location (required) - Address, place name, or coordinates
  • category (optional) - Type of place (e.g., "coffee shops", "restaurants")
  • radius (optional) - Search radius in meters (default: 1000)

Example queries:

  • "Find coffee shops near downtown Seattle"
  • "Show me restaurants near 123 Main St"
  • "What museums are near the Eiffel Tower?"

2. get-directions

Turn-by-turn directions with route visualization.

Workflow: Geocode locations → Get directions → Display route on map

Arguments:

  • from (required) - Starting location
  • to (required) - Destination location
  • mode (optional) - Travel mode: driving, walking, or cycling (default: driving)

Example queries:

  • "Get directions from my office to the airport"
  • "How do I drive from Seattle to Portland?"
  • "Walking directions from here to the museum"

3. show-reachable-areas

Isochrone visualization for accessibility analysis.

Workflow: Geocode location → Calculate isochrone → Display coverage areas

Arguments:

  • location (required) - Center point location
  • time (optional) - Travel time in minutes (comma-separated for multiple: "10,20,30")
  • mode (optional) - Travel mode: driving, walking, or cycling (default: driving)

Example queries:

  • "Show me areas within 30 minutes of downtown"
  • "What's reachable in 15 minutes by walking from here?"
  • "Delivery coverage areas for our restaurant"

Benefits

For AI Agents

  • ✅ Consistent multi-step workflows
  • ✅ Right tools used in the right order
  • ✅ Comprehensive, user-friendly outputs
  • ✅ Reduced errors from following proven patterns

For RAG-based Systems

  • ✅ Prompts can be semantically matched to user intents
  • ✅ Pre-built templates reduce error rates
  • ✅ Domain expertise captured in reusable workflows
  • ✅ Consistent output formatting across similar tasks

Example RAG Flow

User: "Find coffee shops near downtown"
  ↓
RAG semantic match → find-places-nearby prompt
  ↓
Agent follows prompt workflow:
  1. Geocode "downtown"
  2. Search category "coffee shops"
  3. Display on map with results
  ↓
Consistent, high-quality output
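The matching step above can be sketched as below. A real RAG system would embed the query and the prompt descriptions and compare vectors; this toy version uses plain keyword overlap only to illustrate the selection logic, and the description strings are made up:

```typescript
// Toy stand-in for RAG semantic matching: score each prompt by keyword
// overlap with the user query and pick the best.
const promptDescriptions: Record<string, string> = {
  "find-places-nearby": "search places near a location coffee shops restaurants map",
  "get-directions": "turn by turn directions route driving walking cycling",
  "show-reachable-areas": "isochrone reachable areas travel time coverage",
};

export function matchPrompt(query: string): string {
  const words = new Set(query.toLowerCase().split(/\W+/));
  let best = "";
  let bestScore = -1;
  for (const [name, desc] of Object.entries(promptDescriptions)) {
    // Count description keywords that also appear in the query.
    const score = desc.split(/\s+/).filter((w) => words.has(w)).length;
    if (score > bestScore) {
      best = name;
      bestScore = score;
    }
  }
  return best;
}
```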

How Clients Use Prompts

List available prompts

const response = await client.request({
  method: "prompts/list"
});
// Returns: { prompts: [{ name, description, arguments }, ...] }

Get a prompt with arguments

const response = await client.request({
  method: "prompts/get",
  params: {
    name: "find-places-nearby",
    arguments: {
      location: "downtown Seattle",
      category: "coffee shops"
    }
  }
});
// Returns filled-in prompt ready for agent to execute

Testing

  • All tests passing: 365/365
  • Build successful: TypeScript compilation completed
  • No breaking changes: prompts are an additive-only feature

Relationship to #78

This PR complements the tool description improvements in #78.

Together they provide:

  • Better tool selection (via improved descriptions)
  • Better tool orchestration (via workflow prompts)

Future Work

Potential additional prompts:

  • optimize-route - Multi-stop route optimization
  • compare-locations - Travel time comparison between locations
  • analyze-accessibility - Accessibility analysis for a service area
  • map-visualization - Create custom map with multiple layers

🤖 Generated with Claude Code

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

@mattpodwysocki mattpodwysocki requested a review from a team as a code owner December 17, 2025 05:19
@mattpodwysocki
Contributor Author

✅ Tests Added

Added comprehensive unit tests for the prompts infrastructure (27 tests total).

Test Coverage

1. BasePrompt Tests (17 tests)

Core validation and infrastructure:

  • ✅ Metadata generation (MCP protocol compliance)
  • ✅ Argument validation (required vs optional)
  • ✅ Error handling for missing required arguments
  • ✅ Message generation with various argument combinations
  • ✅ Edge cases (no arguments, multiple required args)

2. Prompt Registry Tests (10 tests)

Registration and lookup:

  • ✅ getAllPrompts returns all registered prompts
  • ✅ getPromptByName lookup (valid and invalid names)
  • ✅ Unique naming validation
  • ✅ Kebab-case naming convention enforcement
  • ✅ Per-prompt metadata validation

Test Philosophy

Tests we added (high ROI):

  • ✅ Core infrastructure (BasePrompt validation)
  • ✅ Registration system (registry functions)
  • ✅ MCP protocol compliance (metadata structure)

Tests we skipped (low ROI):

  • ❌ End-to-end workflow tests (requires mocking all tools)
  • ❌ Message content parsing (too brittle, prompts change often)
  • ❌ All argument permutations (overkill for simple templates)

Results

npm test -- test/prompts/

✓ test/prompts/BasePrompt.test.ts (17 tests)
✓ test/prompts/promptRegistry.test.ts (10 tests)

Test Files  2 passed (2)
     Tests  27 passed (27)

The tests focus on preventing regressions in core infrastructure while avoiding brittle tests that would break with normal prompt template changes.

@mattpodwysocki mattpodwysocki merged commit 640849b into main Dec 17, 2025
5 checks passed