
Codename Sammy

Overview

Codename Sammy is a Node.js application that implements an AI agent based on a basic version of the Ralph Wiggum loop (as in the Claude Code plugin implementation as of Jan 20th, 2026). It uses Ollama to generate code segments or commands that build, modify, test, and run code for a given project task. The agent operates in a loop: generate a segment, execute it as Node.js code (using child_process for shell commands or other languages), handle any errors, and check whether the project is complete.

Key features:

  • Stateful progress tracking with logs.
  • Customizable Ollama model and host.
  • Adjustable context length (default: 42000).
  • Strict output formatting from the LLM to avoid extraneous text.
  • A REPL.
  • MCP support.
  • Advanced summary technology.

The Basic Logic Flowchart (Where it all started)

[Flowchart image: the agent's basic loop]

Installation

  1. Ensure Node.js is installed (v14+ recommended).

  2. Install dependencies:

    npm install ollama
  3. Set up Ollama:

    • Run Ollama on your specified host (e.g., http://localhost:11434).
    • Pull the desired model, e.g., ollama pull ministral-3:8b.
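
To confirm that the host is reachable and the model has been pulled before launching the agent, a quick check with the ollama npm package could look like the sketch below (the host URL and model name are placeholders):

// check-ollama.mjs — a small connectivity check (a sketch, not part of this repository).
import { Ollama } from 'ollama';

const client = new Ollama({ host: 'http://localhost:11434' });

// List the models available on the Ollama server.
const { models } = await client.list();
const names = models.map((m) => m.name);
console.log('Available models:', names.join(', '));

// Warn if the model we intend to use has not been pulled yet.
if (!names.some((n) => n.startsWith('ministral-3'))) {
  console.error('Model not found locally. Run: ollama pull ministral-3:8b');
  process.exit(1);
}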

Usage

Run the script with your project task as a CLI argument or a Markdown file path.

Basic command:

node index.js "Your project task here"

Options

  • --model <model-name>: Specify the Ollama model (default: ministral-3:8b).
  • --host <url>: Specify the Ollama host (default: http://localhost:11434).
  • --context-length <number>: Set the context length for Ollama (default: 42000).
  • --mcp <package>: Load an MCP server package for tool use (see the examples below).
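
Presumably these flags are forwarded to the ollama client roughly as follows. This is a sketch of the mapping, not code taken from index.js; in particular, it assumes the context length is passed as Ollama's num_ctx option:

// options-sketch.mjs — how the CLI options could map onto the ollama client (illustrative only).
import { Ollama } from 'ollama';

const model = 'ministral-3:8b';          // --model
const host = 'http://localhost:11434';   // --host
const contextLength = 42000;             // --context-length

const client = new Ollama({ host });

const response = await client.chat({
  model,
  messages: [{ role: 'user', content: 'Say hello.' }],
  options: { num_ctx: contextLength },   // context window size
});
console.log(response.message.content);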

Input Formats

  • Direct instructions: Pass as string arguments.
  • Markdown spec file: Provide the file path; it will be read as the main task.
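
One plausible way to distinguish the two input forms (not necessarily how index.js does it) is to treat the argument as a spec file when it points to an existing Markdown file:

// task-input-sketch.mjs — resolving the task from a Markdown file or a plain string (assumed logic).
import { existsSync, readFileSync } from 'fs';

function resolveTask(arg) {
  // If the argument is a path to an existing .md file, read its contents as the task.
  if (arg.endsWith('.md') && existsSync(arg)) {
    return readFileSync(arg, 'utf8');
  }
  // Otherwise, the argument itself is the task text.
  return arg;
}

console.log(resolveTask(process.argv[2] ?? 'spec.md'));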

How It Works

The app follows this process:

  1. Parse options and initial prompt.
  2. Enter a loop:
    • Generate the next code segment with Ollama (the system prompt enforces raw Node.js code output).
    • Execute the segment as Node.js via a temporary file.
    • On error, append to the error log and retry.
    • On success, append to the progress log and ask Ollama whether the project is complete.
  3. Exit when the model outputs "PROJECT_DONE" or the completion check returns "yes".

For non-Node.js tasks (e.g., Python), the generated code uses Node's child_process to invoke the appropriate interpreter or shell command.
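
In outline, that loop might look roughly like the sketch below. Only the generate → execute → log → check shape is taken from the description above; the prompts, helper names, and defaults are illustrative, not copied from index.js.

// loop-sketch.mjs — an illustrative reconstruction of the agent loop, not the actual index.js.
import { writeFileSync } from 'fs';
import { execFileSync } from 'child_process';
import { tmpdir } from 'os';
import { join } from 'path';
import { Ollama } from 'ollama';

const client = new Ollama({ host: 'http://localhost:11434' });
const model = 'ministral-3:8b';
let progressLog = '';
let errorLog = '';

async function ask(system, user) {
  const res = await client.chat({
    model,
    messages: [{ role: 'system', content: system }, { role: 'user', content: user }],
    options: { num_ctx: 42000 },
  });
  return res.message.content.trim();
}

async function run(task) {
  for (;;) {
    // 1. Ask for the next raw Node.js segment (the system prompt forbids extra text).
    const segment = await ask(
      'Reply with raw Node.js code only. Reply with PROJECT_DONE when the task is finished.',
      'Task: ' + task + '\nProgress so far:\n' + progressLog + '\nRecent errors:\n' + errorLog
    );
    if (segment.includes('PROJECT_DONE')) break;

    // 2. Execute the segment as Node.js via a temp file.
    const file = join(tmpdir(), 'segment-' + Date.now() + '.js');
    writeFileSync(file, segment);
    try {
      const out = execFileSync('node', [file], { encoding: 'utf8' });
      progressLog += '\n--- segment ok ---\n' + segment + '\n' + out;
    } catch (err) {
      // 3. On failure, record the error so the next iteration can correct it.
      errorLog += '\n--- segment failed ---\n' + segment + '\n' + err.message;
      continue;
    }

    // 4. Ask whether the overall task is now complete.
    const done = await ask(
      'Answer strictly yes or no.',
      'Task: ' + task + '\nProgress:\n' + progressLog + '\nIs the project complete?'
    );
    if (done.toLowerCase().startsWith('yes')) break;
  }
}

await run(process.argv[2] ?? "Echo out 'Hello, World!'.");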

Examples

Example with options

node index.js --model ministral-3:8b --host http://192.168.50.135:11434 --context-length 50000 "Echo out 'Hello, World!'."

Example with options and tools

node index.js --model ministral-3:8b --host http://192.168.50.135:11434 --context-length 50000 "Calculate the 10th Fibonacci number using Python." --mcp '@modelcontextprotocol/server-everything'

Example with options, tools, and the REPL

node index.js --model ministral-3:8b --host http://192.168.50.135:11434 --context-length 50000 --mcp '@modelcontextprotocol/server-everything'

Troubleshooting

  • If the model returns formatted output (e.g., Markdown code fences) instead of raw code, adjust the system prompt or try a different model; a post-processing workaround is sketched below.
  • For long projects, increase the context length to avoid truncation.
  • Errors are logged to the console; the agent self-corrects over subsequent loop iterations.
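
If a model keeps wrapping its output in Markdown code fences despite the system prompt, a small post-processing step like the following can serve as a workaround (a sketch; index.js is not known to do this):

// strip-fence-sketch.mjs — remove a surrounding ``` fence (with optional language tag) from LLM output.
function stripCodeFence(text) {
  const match = text.trim().match(/^```[\w-]*\n([\s\S]*?)\n?```$/);
  return match ? match[1] : text.trim();
}

console.log(stripCodeFence('```js\nconsole.log("hi");\n```')); // prints: console.log("hi");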

License

SPL-R5
