
Observability with OpenTelemetry

Learn how to enable and set up OpenTelemetry for Gemini CLI.

Key Benefits

  • 🔍 Usage Analytics: Understand interaction patterns and feature adoption across your team
  • ⚡ Performance Monitoring: Track response times, token consumption, and resource utilization
  • 🐛 Real-time Debugging: Identify bottlenecks, failures, and error patterns as they occur
  • 📊 Workflow Optimization: Make informed decisions to improve configurations and processes
  • 🏢 Enterprise Governance: Monitor usage across teams, track costs, ensure compliance, and integrate with existing monitoring infrastructure

OpenTelemetry Integration

Built on OpenTelemetry — the vendor-neutral, industry-standard observability framework — Gemini CLI's observability system provides:

  • Universal Compatibility: Export to any OpenTelemetry backend (Google Cloud, Jaeger, Prometheus, Datadog, etc.)
  • Standardized Data: Use consistent formats and collection methods across your toolchain
  • Future-Proof Integration: Connect with existing and future observability infrastructure
  • No Vendor Lock-in: Switch between backends without changing your instrumentation

Configuration

All telemetry behavior is controlled through your .gemini/settings.json file and can be overridden with CLI flags:

| Setting | Values | Default | CLI Override | Description |
| --- | --- | --- | --- | --- |
| enabled | true/false | false | --telemetry / --no-telemetry | Enable or disable telemetry |
| target | "gcp"/"local" | "local" | --telemetry-target <local\|gcp> | Where to send telemetry data |
| otlpEndpoint | URL string | http://localhost:4317 | --telemetry-otlp-endpoint <URL> | OTLP collector endpoint |
| otlpProtocol | "grpc"/"http" | "grpc" | --telemetry-otlp-protocol <grpc\|http> | OTLP transport protocol |
| outfile | file path | - | --telemetry-outfile <path> | Save telemetry to a file (requires otlpEndpoint: "") |
| logPrompts | true/false | true | --telemetry-log-prompts / --no-telemetry-log-prompts | Include prompts in telemetry logs |
| useCollector | true/false | false | - | Use an external OTLP collector (advanced) |

For detailed information about all configuration options, see the Configuration Guide.
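To make the defaults in the table concrete, here is an illustrative Python sketch, not the CLI's actual code, that fills in unspecified telemetry settings with the documented defaults. The resolve_telemetry_settings helper is hypothetical:

```python
# Illustrative sketch only: merges a user's telemetry settings with the
# defaults documented in the table above. Not the CLI's real logic.
DEFAULTS = {
    "enabled": False,
    "target": "local",
    "otlpEndpoint": "http://localhost:4317",
    "otlpProtocol": "grpc",
    "outfile": None,
    "logPrompts": True,
    "useCollector": False,
}

def resolve_telemetry_settings(user_settings: dict) -> dict:
    """Fill unspecified telemetry settings with their documented defaults."""
    resolved = {**DEFAULTS, **user_settings.get("telemetry", {})}
    # File output only takes effect when the OTLP endpoint is cleared.
    if resolved["outfile"] and resolved["otlpEndpoint"]:
        raise ValueError('outfile requires otlpEndpoint: ""')
    return resolved

settings = {"telemetry": {"enabled": True, "target": "gcp"}}
print(resolve_telemetry_settings(settings)["otlpProtocol"])  # grpc
```

Note the constraint the table states for outfile: it only applies when otlpEndpoint is set to an empty string.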

Google Cloud Telemetry

Prerequisites

Before using either method below, complete these steps:

  1. Set your Google Cloud project ID:

    • For telemetry in a separate project from inference:
      export OTLP_GOOGLE_CLOUD_PROJECT="your-telemetry-project-id"
    • For telemetry in the same project as inference:
      export GOOGLE_CLOUD_PROJECT="your-project-id"
  2. Authenticate with Google Cloud:

    • If using a user account:
      gcloud auth application-default login
    • If using a service account:
      export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account.json"
  3. Make sure your account or service account has these IAM roles:

    • Cloud Trace Agent
    • Monitoring Metric Writer
    • Logs Writer
  4. Enable the required Google Cloud APIs (if not already enabled):

    gcloud services enable \
      cloudtrace.googleapis.com \
      monitoring.googleapis.com \
      logging.googleapis.com \
      --project="$OTLP_GOOGLE_CLOUD_PROJECT"

Direct Export (Recommended)

Sends telemetry directly to Google Cloud services. No collector needed.

  1. Enable telemetry in your .gemini/settings.json:
    {
      "telemetry": {
        "enabled": true,
        "target": "gcp"
      }
    }
  2. Run Gemini CLI and send prompts.
  3. View logs and metrics in the Google Cloud Console under Cloud Logging, Cloud Monitoring, and Cloud Trace.

Collector-Based Export (Advanced)

For custom processing, filtering, or routing, use an OpenTelemetry collector to forward data to Google Cloud.

  1. Configure your .gemini/settings.json:
    {
      "telemetry": {
        "enabled": true,
        "target": "gcp",
        "useCollector": true
      }
    }
  2. Run the automation script:
    npm run telemetry -- --target=gcp
    This will:
    • Start a local OTEL collector that forwards to Google Cloud
    • Configure your workspace
    • Provide links to view traces, metrics, and logs in Google Cloud Console
    • Save collector logs to ~/.gemini/tmp/<projectHash>/otel/collector-gcp.log
    • Stop collector on exit (e.g. Ctrl+C)
  3. Run Gemini CLI and send prompts.
  4. View logs and metrics in the Google Cloud Console under Cloud Logging, Cloud Monitoring, and Cloud Trace.

Local Telemetry

For local development and debugging, you can capture telemetry data locally:

File-based Output (Recommended)

  1. Enable telemetry in your .gemini/settings.json:
    {
      "telemetry": {
        "enabled": true,
        "target": "local",
        "otlpEndpoint": "",
        "outfile": ".gemini/telemetry.log"
      }
    }
  2. Run Gemini CLI and send prompts.
  3. View logs and metrics in the specified file (e.g., .gemini/telemetry.log).
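If you want to summarize a captured telemetry file, a short script can tally events, assuming one JSON record per line. The exact outfile format may vary between CLI versions, so treat this as a sketch; count_events is a hypothetical helper:

```python
import json
from collections import Counter

def count_events(path: str) -> Counter:
    """Count telemetry log records by event name.

    Assumes one JSON object per line with an "event.name" key; the real
    outfile format may differ, so adapt this to what you actually see.
    """
    counts = Counter()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip non-JSON framing or banner lines
            counts[record.get("event.name", "unknown")] += 1
    return counts
```

For example, count_events(".gemini/telemetry.log") would show how many gemini_cli.user_prompt and gemini_cli.tool_call records a session produced.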

Collector-Based Export (Advanced)

  1. Run the automation script:
    npm run telemetry -- --target=local
    This will:
    • Download and start Jaeger and OTEL collector
    • Configure your workspace for local telemetry
    • Provide a Jaeger UI at http://localhost:16686
    • Save logs/metrics to ~/.gemini/tmp/<projectHash>/otel/collector.log
    • Stop collector on exit (e.g. Ctrl+C)
  2. Run Gemini CLI and send prompts.
  3. View traces at http://localhost:16686 and logs/metrics in the collector log file.

Logs and Metrics

This section describes the structure of the logs and metrics generated by Gemini CLI.

  • A sessionId is included as a common attribute on all logs and metrics.

Logs

Logs are timestamped records of specific events. The following events are logged for Gemini CLI:

  • gemini_cli.config: This event occurs once at startup with the CLI's configuration.

    • Attributes:
      • model (string)
      • embedding_model (string)
      • sandbox_enabled (boolean)
      • core_tools_enabled (string)
      • approval_mode (string)
      • api_key_enabled (boolean)
      • vertex_ai_enabled (boolean)
      • code_assist_enabled (boolean)
      • log_prompts_enabled (boolean)
      • file_filtering_respect_git_ignore (boolean)
      • debug_mode (boolean)
      • mcp_servers (string)
      • output_format (string: "text" or "json")
  • gemini_cli.user_prompt: This event occurs when a user submits a prompt.

    • Attributes:
      • prompt_length (int)
      • prompt_id (string)
      • prompt (string, this attribute is excluded if log_prompts_enabled is configured to be false)
      • auth_type (string)
  • gemini_cli.tool_call: This event occurs for each function call.

    • Attributes:
      • function_name
      • function_args
      • duration_ms
      • success (boolean)
      • decision (string: "accept", "reject", "auto_accept", or "modify", if applicable)
      • error (if applicable)
      • error_type (if applicable)
      • content_length (int, if applicable)
      • metadata (if applicable, dictionary of string -> any)
  • gemini_cli.file_operation: This event occurs for each file operation.

    • Attributes:
      • tool_name (string)
      • operation (string: "create", "read", "update")
      • lines (int, if applicable)
      • mimetype (string, if applicable)
      • extension (string, if applicable)
      • programming_language (string, if applicable)
      • diff_stat (json string, if applicable): A JSON string with the following members:
        • ai_added_lines (int)
        • ai_removed_lines (int)
        • user_added_lines (int)
        • user_removed_lines (int)
  • gemini_cli.api_request: This event occurs when making a request to the Gemini API.

    • Attributes:
      • model
      • request_text (if applicable)
  • gemini_cli.api_error: This event occurs if the API request fails.

    • Attributes:
      • model
      • error
      • error_type
      • status_code
      • duration_ms
      • auth_type
  • gemini_cli.api_response: This event occurs upon receiving a response from the Gemini API.

    • Attributes:
      • model
      • status_code
      • duration_ms
      • error (optional)
      • input_token_count
      • output_token_count
      • cached_content_token_count
      • thoughts_token_count
      • tool_token_count
      • response_text (if applicable)
      • auth_type
  • gemini_cli.tool_output_truncated: This event occurs when the output of a tool call is too large and gets truncated.

    • Attributes:
      • tool_name (string)
      • original_content_length (int)
      • truncated_content_length (int)
      • threshold (int)
      • lines (int)
      • prompt_id (string)
  • gemini_cli.malformed_json_response: This event occurs when a generateJson response from the Gemini API cannot be parsed as valid JSON.

    • Attributes:
      • model
  • gemini_cli.flash_fallback: This event occurs when Gemini CLI falls back to a Flash model.

    • Attributes:
      • auth_type
  • gemini_cli.slash_command: This event occurs when a user executes a slash command.

    • Attributes:
      • command (string)
      • subcommand (string, if applicable)

Metrics

Metrics are numerical measurements of behavior over time. The following metrics are collected for Gemini CLI:

  • gemini_cli.session.count (Counter, Int): Incremented once per CLI startup.

  • gemini_cli.tool.call.count (Counter, Int): Counts tool calls.

    • Attributes:
      • function_name
      • success (boolean)
      • decision (string: "accept", "reject", or "modify", if applicable)
      • tool_type (string: "mcp", or "native", if applicable)
  • gemini_cli.tool.call.latency (Histogram, ms): Measures tool call latency.

    • Attributes:
      • function_name
      • decision (string: "accept", "reject", or "modify", if applicable)
  • gemini_cli.api.request.count (Counter, Int): Counts all API requests.

    • Attributes:
      • model
      • status_code
      • error_type (if applicable)
  • gemini_cli.api.request.latency (Histogram, ms): Measures API request latency.

    • Attributes:
      • model
  • gemini_cli.token.usage (Counter, Int): Counts the number of tokens used.

    • Attributes:
      • model
      • type (string: "input", "output", "thought", "cache", or "tool")
  • gemini_cli.file.operation.count (Counter, Int): Counts file operations.

    • Attributes:
      • operation (string: "create", "read", "update"): The type of file operation.
      • lines (Int, if applicable): Number of lines in the file.
      • mimetype (string, if applicable): Mimetype of the file.
      • extension (string, if applicable): File extension of the file.
      • model_added_lines (Int, if applicable): Number of lines added/changed by the model.
      • model_removed_lines (Int, if applicable): Number of lines removed/changed by the model.
      • user_added_lines (Int, if applicable): Number of lines added/changed by user in AI proposed changes.
      • user_removed_lines (Int, if applicable): Number of lines removed/changed by user in AI proposed changes.
      • programming_language (string, if applicable): The programming language of the file.
  • gemini_cli.chat_compression (Counter, Int): Counts chat compression operations.

    • Attributes:
      • tokens_before (Int): Number of tokens in context prior to compression
      • tokens_after (Int): Number of tokens in context after compression
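For instance, once gemini_cli.token.usage counter points have been exported from your backend, they can be totalled by their type attribute ("input", "output", "thought", "cache", or "tool"). total_tokens_by_type is a hypothetical helper operating on simplified point dicts, not a client for any particular backend:

```python
from collections import Counter

def total_tokens_by_type(points: list) -> Counter:
    """Sum gemini_cli.token.usage counter points by their "type" attribute.

    Each point is assumed to be a dict with "type" and "value" keys,
    standing in for whatever shape your metrics backend returns.
    """
    totals = Counter()
    for p in points:
        totals[p["type"]] += p["value"]
    return totals
```

Splitting totals further by the model attribute would give a per-model token spend, which is often the first number an enterprise deployment wants to track.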