Systematic workflows for AI-assisted development - Task-oriented framework with quality gates

Agentic Code

Your AI (LLM), guided by built-in workflows. Describe what you want, and it follows a professional development process.


Demo: Building a Slack bot with Agentic Code

AI builds a Slack bot with tests & docs - in 30s

What You Get

You: "Build a Slack bot with Gemini API"
AI:  ✓ Reads AGENTS.md
     ✓ Analyzes requirements
     ✓ Plans architecture
     ✓ Writes tests first
     ✓ Implements with best practices
     ✓ Verifies everything works

Works out of the box - no configuration or learning curve required.

Using Claude Code with TypeScript?
Check out AI Coding Project Boilerplate - a specialized alternative optimized for that specific stack.

Quick Start (30 seconds)

npx github:shinpr/agentic-code my-project && cd my-project
# Ready to go

That's it. Works with any AI tool - Codex, Cursor, Aider, or anything AGENTS.md compatible.

Why This Exists

Every AI coding tool has the same problems:

  • Forgets your project structure after 10 messages
  • Deletes tests when adding features
  • Ignores architectural decisions
  • Skips quality checks

We built the solution into the framework. AGENTS.md guides your AI through professional workflows automatically.

What Makes It Different

🎯 Zero Configuration

Pre-built workflows that work without setup.

🌐 Universal Compatibility

Works with any programming language and any AI tool that reads AGENTS.md.

✅ Test-First by Default

Generates test skeletons before writing implementation code.

📈 Smart Scaling

  • Simple task → Direct execution
  • Complex feature → Full workflow with approvals

How It Actually Works

  1. AGENTS.md tells your AI the process - Like a README but for AI agents
  2. Progressive rule loading - Only loads what's needed, when needed
  3. Quality gates - Automatic checkpoints ensure consistent output
  4. You stay in control - Approval points for major decisions
.agents/
├── tasks/                   # What to build
│   ├── task-analysis.md     # Entry point - AI starts here
│   └── ...                  # Design, test, implement, QA tasks
├── workflows/               # How to build it
└── rules/                   # Quality standards

Real Examples

Simple Task

You: "Add API endpoint for user search"
# AI: Reads existing code → Plans changes → Tests → Implements → Done

Complex Feature

You: "Build user authentication system"
# AI: Requirements → Design doc → Your approval → Test skeletons →
#     Implementation → Quality checks → Done

Installation Options

For New Projects

# Create project
npx github:shinpr/agentic-code my-project

# Optional: Add language-specific rules
npx github:shinpr/agentic-code my-project --lang=typescript

For Existing Projects

# Copy the framework files
cp -r path/to/agentic-code/AGENTS.md .
cp -r path/to/agentic-code/.agents .

# Set up language rules (choose one)
cd .agents/rules/language
ln -s general/rules.md rules.md
ln -s general/testing.md testing.md
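To confirm a link resolves, readlink shows its target. The sketch below is self-contained: it recreates the same layout in a scratch directory so you can try it safely (in your real project, run only the ln and readlink lines from .agents/rules/language):

```shell
# Recreate the rules layout in a scratch directory, link, and verify.
tmp=$(mktemp -d)
mkdir -p "$tmp/.agents/rules/language/general"
echo "# general rules" > "$tmp/.agents/rules/language/general/rules.md"

cd "$tmp/.agents/rules/language"
ln -s general/rules.md rules.md   # same command as in the setup above

readlink rules.md                 # prints the link target: general/rules.md
[ -e rules.md ] && echo "link resolves (ok)"
```

A broken link (target missing) passes the -L test but fails -e, which is the usual symptom when the symlink was created from the wrong directory.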

Common Questions

Q: Can I use this with other AI coding tools besides Codex?
Yes! This framework works with any AGENTS.md-compatible tool like Cursor, Aider, and other LLM-assisted development environments.

Q: What programming languages are supported?
The framework is language-agnostic and works with any programming language through general development principles. For TypeScript projects, you can optionally use --lang=typescript to enable enhanced TypeScript-specific rules.

Q: Do I need to learn a new syntax?
No. Describe what you want in plain language; the framework handles the rest.

Q: What if my AI doesn't support AGENTS.md?
Most modern coding agents do. If your tool doesn't load AGENTS.md automatically, point it at the file (or paste its contents) as your first instruction.

Q: Can I customize the workflows?
Yes, everything in .agents/ is customizable. The defaults are production-ready, but you can adapt them to your team's process.

Q: What about my existing codebase?
It works with existing projects. Your AI analyzes the code and follows your established patterns.

The Technical Stuff

The framework has three pillars:

  1. Tasks - Define WHAT to build
  2. Workflows - Define HOW to build it
  3. Rules - Define quality STANDARDS
Advanced features for the curious:

Progressive Rule Loading

Rules load based on task analysis:

  • Small (1-2 files) → Direct execution with minimal rules
  • Medium/Large (3+ files) → Structured workflow with design docs
  • Each task definition specifies its required rules

Quality Gates

Automatic checkpoints ensure:

  • Tests pass before proceeding
  • Code meets standards
  • Documentation stays updated
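Conceptually, a gate is just a sequence of checks where the first failure halts progress. A minimal sketch of that idea (the run_gates helper and the placeholder checks are illustrative, not part of the framework; substitute your project's real commands):

```shell
# Minimal quality-gate runner: run each check in order, stop at the first failure.
run_gates() {
  for gate in "$@"; do
    if ! sh -c "$gate"; then
      echo "Gate failed: $gate" >&2
      return 1
    fi
  done
  echo "All quality gates passed"
}

# Placeholder checks; in a real project this might be:
#   run_gates "npm test" "npm run lint" "npx tsc --noEmit"
run_gates "true" "true"
```

The point is the short-circuit: later gates never run once one fails, which is what keeps broken code from reaching the next workflow step.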

Special Features

  • Metacognition - AI self-assessment and error recovery
  • Plan Injection - Enforces all required steps are in work plan
  • Test Generation - Test skeletons from acceptance criteria
  • 1-Commit Principle - Each task = one atomic commit
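For instance, the 1-Commit Principle means a finished task - implementation plus its tests - lands as one atomic commit rather than a trail of fixups. A hypothetical illustration in a throwaway repo (file names and the endpoint are made up):

```shell
# Hypothetical demo: one task -> exactly one commit containing code and tests.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

mkdir -p src tests
echo "export const search = () => []" > src/search.ts   # implementation
echo "// test skeleton" > tests/search.test.ts          # its tests

git add src tests
git commit -q -m "feat: add user search endpoint with tests"
git log --oneline   # a single commit for the whole task
```

Atomic commits keep the history reviewable and make a task trivially revertable with a single git revert.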

Reviewing Generated Outputs

Important: Always review AI-generated outputs in a separate session.

LLMs cannot reliably review their own outputs within the same context. When the AI generates code or documents, it carries the same assumptions and blind spots into any "self-review." This leads to missed issues that a fresh perspective would catch.

Why Separate Sessions Matter

Same Session                       New Session
------------                       -----------
Shares context and assumptions     Fresh perspective, no prior bias
May overlook own mistakes          Catches issues objectively
"Confirmation bias" in review      Applies standards independently

How to Use Review Tasks

After completing implementation or documentation, start a new session and request a review:

# For code review
You: "Review the implementation in src/auth/ against docs/design/auth-design.md"
# AI loads code-review task → Validates against Design Doc → Reports findings

# For document review
You: "Review docs/design/payment-design.md as a Design Doc"
# AI loads technical-document-review task → Checks structure and content → Reports gaps

# For test review
You: "Review the integration tests in tests/integration/auth.test.ts"
# AI loads integration-test-review task → Validates test quality → Reports issues

Available Review Tasks

Task                        Target                    What It Checks
----                        ------                    --------------
code-review                 Implementation files      Design Doc compliance, code quality, architecture
technical-document-review   Design Docs, ADRs, PRDs   Structure, content quality, failure scenarios
integration-test-review     Integration/E2E tests     Skeleton compliance, AAA structure, mock boundaries

Pro tip: Make reviews part of your workflow. After any significant generation, switch sessions and review before merging.

For Cursor Users: Isolated Context Reviews via MCP

Cursor users can run reviews in isolated contexts without switching sessions using sub-agents-mcp. When a review runs as a sub-agent, it executes in a completely separate context, achieving the same "fresh perspective" benefit as switching sessions, but without leaving your workflow.

Quick Setup:

Add to your MCP config (~/.cursor/mcp.json or .cursor/mcp.json):

{
  "mcpServers": {
    "sub-agents": {
      "command": "npx",
      "args": ["-y", "sub-agents-mcp"],
      "env": {
        "AGENTS_DIR": "/absolute/path/to/your/project/.agents/tasks",
        "AGENT_TYPE": "cursor"
      }
    }
  }
}

After restarting Cursor, task definitions become available as sub-agents:

You: "Use the code-review agent to review src/auth/ against docs/design/auth-design.md"

Start Building

npx github:shinpr/agentic-code my-awesome-project
cd my-awesome-project
# Tell your AI what to build

Consistent, professional AI-assisted development.


Contributing

Found a bug? Want to add language-specific rules? PRs welcome!

License

MIT - Use it however you want.


Built on the AGENTS.md standard - an open community specification for AI coding agents.

Ready to code properly with AI? npx github:shinpr/agentic-code my-project
