Your AI (LLM), guided by built-in workflows. Describe what you want, and it follows a professional development process.
AI builds a Slack bot with tests & docs in 30 seconds:
You: "Build a Slack bot with Gemini API"
AI: β Reads AGENTS.md
β Analyzes requirements
β Plans architecture
β Writes tests first
β Implements with best practices
β Verifies everything works
Works out of the box: no configuration or learning curve required.
Using Claude Code with TypeScript?
Check out AI Coding Project Boilerplate - a specialized alternative optimized for that specific stack.
```bash
npx github:shinpr/agentic-code my-project && cd my-project
# Ready to go
```

That's it. Works with any AI tool - Codex, Cursor, Aider, or anything AGENTS.md-compatible.
Every AI coding tool has the same problems:
- Forgets your project structure after 10 messages
- Deletes tests when adding features
- Ignores architectural decisions
- Skips quality checks
We built the solution into the framework. AGENTS.md guides your AI through professional workflows automatically.
Pre-built workflows that work without setup.
Works with any programming language and any AI tool that reads AGENTS.md.
Generates test skeletons before writing implementation code.
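For example, acceptance criteria for a user-search endpoint could yield skeletons like the following (a hypothetical TypeScript/Vitest sketch; the real output depends on your criteria and stack):

```typescript
// Illustrative skeleton derived from acceptance criteria - all names are hypothetical.
import { describe, it } from "vitest";

describe("GET /users/search", () => {
  // Each acceptance criterion becomes a pending test; the implementation
  // step later fills these in with Arrange-Act-Assert bodies.
  it.todo("returns users whose name matches the query");
  it.todo("returns an empty list when nothing matches");
  it.todo("rejects queries shorter than 2 characters");
});
```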
- Simple task → Direct execution
- Complex feature → Full workflow with approvals
- AGENTS.md tells your AI the process - Like a README but for AI agents
- Progressive rule loading - Only loads what's needed, when needed
- Quality gates - Automatic checkpoints ensure consistent output
- You stay in control - Approval points for major decisions
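To make this concrete, here is a hand-written sketch of the kind of instructions an AGENTS.md carries (illustrative only; the file the framework generates is more detailed):

```markdown
# AGENTS.md (illustrative excerpt)

Before writing any code:
1. Read .agents/tasks/task-analysis.md and classify the task size.
2. Load only the rule files that the task definition lists.
3. For complex features, produce a design doc and wait for approval.
4. Write test skeletons before implementation.
5. Run the quality checks before declaring the task done.
```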
```
.agents/
├── tasks/                 # What to build
│   ├── task-analysis.md   # Entry point - AI starts here
│   └── ...                # Design, test, implement, QA tasks
├── workflows/             # How to build it
└── rules/                 # Quality standards
```
You: "Add API endpoint for user search"
# AI: Reads existing code β Plans changes β Tests β Implements β DoneYou: "Build user authentication system"
# AI: Requirements β Design doc β Your approval β Test skeletons β
# Implementation β Quality checks β Done# Create project
npx github:shinpr/agentic-code my-project
# Optional: Add language-specific rules
npx github:shinpr/agentic-code my-project --lang=typescript# Copy the framework files
cp -r path/to/agentic-code/AGENTS.md .
cp -r path/to/agentic-code/.agents .
# Set up language rules (choose one)
cd .agents/rules/language
ln -s general/rules.md rules.md
ln -s general/testing.md testing.mdQ: Can I use this with other AI coding tools besides Codex?
Yes! This framework works with any AGENTS.md-compatible tool like Cursor, Aider, and other LLM-assisted development environments.
Q: What programming languages are supported?
The framework is language-agnostic and works with any programming language through general development principles. For TypeScript projects, you can optionally use --lang=typescript to enable enhanced TypeScript-specific rules.
Q: Do I need to learn a new syntax?
No. Describe what you want in plain language; the framework handles the rest.
Q: What if my AI doesn't support AGENTS.md?
Even if your tool has no native AGENTS.md support, you can usually point it at the file manually: ask your AI to read AGENTS.md at the start of the session and follow its instructions.
Q: Can I customize the workflows?
Yes, everything in .agents/ is customizable. The defaults are production-ready, but you can adapt them to your team's process.
Q: What about my existing codebase?
It works with existing projects. Your AI analyzes the code and follows your established patterns.
The framework has three pillars:
- Tasks - Define WHAT to build
- Workflows - Define HOW to build it
- Rules - Define quality STANDARDS
Advanced features for the curious...
Rules load based on task analysis:
- Small (1-2 files) → Direct execution with minimal rules
- Medium/Large (3+ files) → Structured workflow with design docs
- Each task definition specifies its required rules (see the sketch below)
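As an illustration (contents are hypothetical, not the shipped defaults), a task definition can declare the rules it needs:

```markdown
# .agents/tasks/implement.md (illustrative sketch)

## Required rules
- rules/language/rules.md
- rules/language/testing.md

## Steps
1. Re-read the approved design doc.
2. Fill in the test skeletons, then implement until they pass.
```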
Automatic checkpoints ensure:
- Tests pass before proceeding
- Code meets standards
- Documentation stays updated
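What a gate actually runs depends on your stack; for a TypeScript project it might boil down to commands like these (illustrative, not shipped by the framework):

```bash
# Illustrative quality gate for a TypeScript project - adapt to your stack
npm test          # tests pass before proceeding
npm run lint      # code meets standards
npm run build     # the project still compiles
```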
- Metacognition - AI self-assessment and error recovery
- Plan Injection - Enforces all required steps are in work plan
- Test Generation - Test skeletons from acceptance criteria
- 1-Commit Principle - Each task = one atomic commit
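For instance, the 1-Commit Principle means a task's implementation, tests, and doc updates land together (illustrative git usage; paths are hypothetical):

```bash
# One task = one atomic commit: code, tests, and docs together
git add src/auth/ tests/auth/ docs/design/auth-design.md
git commit -m "feat(auth): add user authentication"
```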
Important: Always review AI-generated outputs in a separate session.
LLMs cannot reliably review their own outputs within the same context. When the AI generates code or documents, it carries the same assumptions and blind spots into any "self-review." This leads to missed issues that a fresh perspective would catch.
| Same Session | New Session |
|---|---|
| Shares context and assumptions | Fresh perspective, no prior bias |
| May overlook own mistakes | Catches issues objectively |
| "Confirmation bias" in review | Applies standards independently |
After completing implementation or documentation, start a new session and request a review:
```
# For code review
You: "Review the implementation in src/auth/ against docs/design/auth-design.md"
# AI loads code-review task → Validates against Design Doc → Reports findings

# For document review
You: "Review docs/design/payment-design.md as a Design Doc"
# AI loads technical-document-review task → Checks structure and content → Reports gaps

# For test review
You: "Review the integration tests in tests/integration/auth.test.ts"
# AI loads integration-test-review task → Validates test quality → Reports issues
```

| Task | Target | What It Checks |
|---|---|---|
| `code-review` | Implementation files | Design Doc compliance, code quality, architecture |
| `technical-document-review` | Design Docs, ADRs, PRDs | Structure, content quality, failure scenarios |
| `integration-test-review` | Integration/E2E tests | Skeleton compliance, AAA structure, mock boundaries |
Pro tip: Make reviews part of your workflow. After any significant generation, switch sessions and review before merging.
Cursor users can run reviews in isolated contexts without switching sessions using sub-agents-mcp. When review runs as a sub-agent, it executes in a completely separate context, achieving the same "fresh perspective" benefit as switching sessions, but without leaving your workflow.
Quick Setup:
Add to your MCP config (~/.cursor/mcp.json or .cursor/mcp.json):
```json
{
  "mcpServers": {
    "sub-agents": {
      "command": "npx",
      "args": ["-y", "sub-agents-mcp"],
      "env": {
        "AGENTS_DIR": "/absolute/path/to/your/project/.agents/tasks",
        "AGENT_TYPE": "cursor"
      }
    }
  }
}
```

After restarting Cursor, task definitions become available as sub-agents:

```
You: "Use the code-review agent to review src/auth/ against docs/design/auth-design.md"
```

```bash
npx github:shinpr/agentic-code my-awesome-project
cd my-awesome-project

# Tell your AI what to build
```

Consistent, professional AI-assisted development.
Found a bug? Want to add language-specific rules? PRs welcome!
- Report issues
- Submit PRs
- Improve docs
MIT - Use it however you want.
Built on the AGENTS.md standard, an open community specification for AI coding agents.
Ready to code properly with AI? `npx github:shinpr/agentic-code my-project`
