An open-source, code-first Node.js toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.
The Agent Development Kit (ADK) is a flexible, modular framework for building and deploying AI agents. The Node.js implementation provides a TypeScript-based toolkit that's model-agnostic and designed to make agent development feel more like traditional software development.
- Event-driven architecture: All interactions are modeled as events that flow through the system
- Modular design: Pluggable components for flows, processors, tools, and services
- LLM-agnostic: Works with different language models (optimized for Gemini but extensible)
- Session management: Persistent conversation state and history
- Tool integration: Rich ecosystem for extending agent capabilities
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│     Runner      │───│      Agent      │───│      Flow       │
│ (Orchestrator)  │   │   (Business     │   │  (Interaction   │
│                 │   │    Logic)       │   │    Pattern)     │
└─────────────────┘   └─────────────────┘   └─────────────────┘
         │                     │                     │
         │                     │                     │
┌─────────────────┐   ┌─────────────────┐   ┌─────────────────┐
│    Services     │   │      Tools      │   │   Processors    │
│   (Session,     │   │  (Functions,    │   │   (Request/     │
│   Memory, etc.) │   │   APIs, etc.)   │   │   Response)     │
└─────────────────┘   └─────────────────┘   └─────────────────┘
The Runner is the main orchestrator that manages the entire agent execution lifecycle.
Key Responsibilities:
- Session management (create/retrieve sessions)
- Event streaming and persistence
- Agent factory resolution
- Error handling and cleanup
- Converting user input to events
Core Method:
async *runAgent(runConfig: RunConfig): AsyncGenerator<Event, RunOutput, undefined>
Abstract base class providing common agent functionality:
- Agent hierarchy management (parent/sub-agents)
- Invocation context creation
- Agent discovery (findAgent, sketched below)
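For example, a parent agent can resolve one of its registered sub-agents by name. The sketch below is illustrative only: it assumes findAgent walks the hierarchy and returns the matching agent or undefined (the exact signature may differ), and it uses LlmAgent, which is described in the next section.

```typescript
import { LlmAgent } from 'adk-nodejs';

// Illustrative only: assumes findAgent(name) returns the matching agent or undefined.
const billingAgent = new LlmAgent({
  name: 'billing',
  description: 'Answers billing questions',
  llmConfig: { modelName: 'gemini-2.0-flash' }
});

const rootAgent = new LlmAgent({
  name: 'root',
  description: 'Routes requests to sub-agents',
  subAgents: [billingAgent],
  llmConfig: { modelName: 'gemini-2.0-flash' }
});

// Walks the parent/sub-agent hierarchy and resolves the 'billing' agent.
const resolved = rootAgent.findAgent('billing');
```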
Concrete implementation for LLM-powered agents:
- Flow integration: Uses configurable flows for interaction patterns
- Tool integration: Supports toolsets for extended capabilities
- LLM configuration: Model selection, generation parameters, safety settings
- Event generation: Creates and yields events throughout execution
Key Features:
- Configurable flows (SingleFlow, AutoFlow)
- Before/after agent callbacks
- Agent transfer capabilities
- Error handling and recovery
Flows define interaction patterns between agents and LLMs.
SingleFlow handles single request-response cycles with tool support:
- Tool execution loop: Automatically handles function calls from LLM
- Event creation: Generates LLM_RESPONSE events for each interaction
- State management: Uses session state for coordination
- Max iterations: Prevents infinite loops (default: 5)
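Conceptually, the loop looks like the sketch below. This is not the SingleFlow source; the callLlm and executeTool callbacks are hypothetical stand-ins that only illustrate the request/tool-result cycle and the iteration cap.

```typescript
// Illustrative only: the general shape of a bounded tool-execution loop.
// The helper signatures here are hypothetical, not ADK APIs.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { text?: string; toolCall?: ToolCall };

async function toolLoopSketch(
  callLlm: (history: string[]) => Promise<ModelTurn>,   // hypothetical LLM call
  executeTool: (call: ToolCall) => Promise<string>,     // hypothetical tool runner
  userInput: string,
  maxIterations = 5                                      // cap mirrors SingleFlow's default
): Promise<string | undefined> {
  const history = [userInput];
  for (let i = 0; i < maxIterations; i++) {              // bounded: no infinite loops
    const turn = await callLlm(history);                 // one request/response cycle
    if (!turn.toolCall) return turn.text;                // plain answer: loop ends
    const result = await executeTool(turn.toolCall);     // tool call and result
    history.push(`tool ${turn.toolCall.name} returned: ${result}`); // feed result back
  }
  return undefined;                                      // gave up after maxIterations
}
```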
AutoFlow is a more complex flow with automatic planning and execution phases.
Modular components that process requests and responses:
Request Processors:
- BasicRequestProcessor: Basic request setup
- InstructionsRequestProcessor: System instructions handling
- ContentRequestProcessor: Content formatting
- FunctionRequestProcessor: Tool/function setup
- CodeExecutionRequestProcessor: Code execution setup
Response Processors:
- FunctionResponseProcessor: Tool execution and response handling
- CodeExecutionResponseProcessor: Code execution results
- AuthResponseProcessor: Authentication flow handling
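Custom processors can be slotted in alongside the built-ins (see the CustomFlow example later in this document for how processors are passed to a flow). The sketch below assumes ILlmRequestProcessor exposes a single processRequest(request, context) hook; that method name and shape are assumptions, not the documented interface.

```typescript
import { ILlmRequestProcessor, LlmRequest, InvocationContext } from 'adk-nodejs';

// Hypothetical custom processor. The processRequest() hook shown here is an
// assumption about ILlmRequestProcessor's shape, not the documented API.
class LoggingRequestProcessor implements ILlmRequestProcessor {
  async processRequest(request: LlmRequest, context: InvocationContext): Promise<LlmRequest> {
    console.log('outgoing LLM request', request); // observe every outgoing request
    return request;                               // pass it through unchanged
  }
}
```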
Events are the core communication mechanism in ADK:
interface Event {
readonly eventId: string;
readonly interactionId: string;
readonly sessionId: string;
readonly type: EventType;
readonly source: EventSource;
readonly timestamp: Date;
readonly data?: EventData;
// ... other fields
}
Event Types:
- MESSAGE: User/agent messages
- LLM_REQUEST / LLM_RESPONSE: LLM interactions
- TOOL_CALL / TOOL_RESULT: Tool executions
- INVOCATION_START / INVOCATION_END: Agent lifecycle
- ERROR: Error conditions
- AGENT_TRANSFER: Agent handoffs
Sessions maintain conversation state:
- Events: Complete interaction history
- State: Key-value store for temporary data
- Metadata: User ID, app name, timestamps
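As a rough illustration, a session accumulated by a run could be inspected like this. The getSession method name and the session fields accessed below are assumptions for illustration only; check the ISessionService typings for the actual API.

```typescript
import { InMemorySessionService } from 'adk-nodejs';

const sessionService = new InMemorySessionService();

// Hypothetical inspection helper: getSession and the fields below
// (events, state, userId, appName) are assumptions, not the documented API.
async function inspectSession(sessionId: string): Promise<void> {
  const session: any = await (sessionService as any).getSession(sessionId);
  if (!session) return;
  console.log('events recorded:', session.events?.length);        // complete interaction history
  console.log('temporary state:', session.state);                 // key-value scratch space
  console.log('owner:', session.userId, 'app:', session.appName); // metadata
}
```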
Tools extend agent capabilities:
- Function calling interface
- Before/after execution callbacks
- Context-aware execution
Services provide infrastructure:
- ISessionService: Session persistence
- IArtifactService: File/artifact storage
- IMemoryService: Long-term memory
- ICodeExecutor: Code execution sandbox
import {
Runner,
LlmAgent,
SingleFlow,
RunConfig,
InMemorySessionService,
InMemoryArtifactService,
InMemoryMemoryService,
EventType,
InvocationContext
} from 'adk-nodejs';
// 1. Create services
const sessionService = new InMemorySessionService();
const artifactService = new InMemoryArtifactService();
const memoryService = new InMemoryMemoryService();
// Note: LlmRegistry is a static class, no instantiation needed
// 2. Create an agent
const agent = new LlmAgent({
name: 'assistant',
description: 'A helpful AI assistant',
llmConfig: {
modelName: 'gemini-2.0-flash',
instructions: 'You are a helpful assistant.'
},
flow: new SingleFlow() // Optional, defaults to SingleFlow
});
// 3. Create agent factory
const agentFactory = async (agentName: string, runConfig: RunConfig, invocationContext: InvocationContext) => {
if (agentName === 'assistant') {
return agent;
}
return undefined;
};
// 4. Create runner
const runner = new Runner(
sessionService,
artifactService,
memoryService,
agentFactory
// LlmRegistry is static, no need to pass instance
);
// 5. Run the agent
async function runExample() {
const runConfig: RunConfig = {
agentName: 'assistant',
input: 'Hello, how can you help me?',
userId: 'user123',
defaultModelName: 'gemini-2.0-flash'
};
// Stream events as they occur
for await (const event of runner.runAgent(runConfig)) {
console.log(`Event: ${event.type}`, event.data);
}
}
runExample();
import { BaseToolset, ITool, ToolContext, FunctionDeclaration, AdkJsonSchemaType, LlmAgent } from 'adk-nodejs';
// Create a custom tool
class WeatherTool implements ITool {
name = 'get_weather';
description = 'Get current weather for a location';
async asFunctionDeclaration(context?: ToolContext): Promise<FunctionDeclaration> {
return {
name: this.name,
description: this.description,
parameters: {
type: AdkJsonSchemaType.OBJECT,
properties: {
location: {
type: AdkJsonSchemaType.STRING,
description: 'The location to get weather for'
}
},
required: ['location']
}
};
}
async execute(args: any, context: ToolContext): Promise<string> {
const location = args.location;
// Simulate weather API call
return `The weather in ${location} is sunny, 72Β°F`;
}
}
// Create toolset
const toolset = new BaseToolset({ name: 'WeatherToolset' });
toolset.addTool(new WeatherTool());
// Create agent with tools
const weatherAgent = new LlmAgent({
name: 'weather_assistant',
description: 'An assistant that can check weather',
toolset: toolset,
llmConfig: {
modelName: 'gemini-2.0-flash',
instructions: 'You can help users check weather. Use the get_weather tool when needed.'
}
});
// Create specialized agents
const greeterAgent = new LlmAgent({
name: 'greeter',
description: 'Handles greetings and introductions',
llmConfig: {
modelName: 'gemini-2.0-flash',
instruction: 'You are a friendly greeter. Keep responses brief and welcoming.'
}
});
const taskAgent = new LlmAgent({
name: 'task_executor',
description: 'Handles task execution and problem solving',
llmConfig: {
modelName: 'gemini-2.0-flash',
instruction: 'You are a task executor. Focus on solving problems efficiently.'
}
});
// Create coordinator agent
const coordinator = new LlmAgent({
name: 'coordinator',
description: 'Coordinates between different specialized agents',
subAgents: [greeterAgent, taskAgent],
llmConfig: {
modelName: 'gemini-2.0-flash',
instruction: `You coordinate between agents:
- Use 'greeter' for welcomes and introductions
- Use 'task_executor' for problem-solving tasks`
}
});
// Agent factory that resolves the right agent
const multiAgentFactory = async (agentName: string, runConfig: RunConfig) => {
switch (agentName) {
case 'coordinator': return coordinator;
case 'greeter': return greeterAgent;
case 'task_executor': return taskAgent;
default: return undefined;
}
};
import { BaseLlmFlow, ILlmRequestProcessor, ILlmResponseProcessor, BasicRequestProcessor, InstructionsRequestProcessor, FunctionResponseProcessor, LlmRequest, IBaseLlm, InvocationContext, Event, LlmAgent } from 'adk-nodejs';
class CustomFlow extends BaseLlmFlow {
constructor() {
super(
[new BasicRequestProcessor(), new InstructionsRequestProcessor()], // Request processors
[new FunctionResponseProcessor()], // Response processors
'CustomFlow',
'A custom flow with specific behavior'
);
}
async runLlmInteraction(
initialLlmRequest: LlmRequest,
llm: IBaseLlm,
context: InvocationContext
): Promise<Event> {
// Custom interaction logic
const processedRequest = await this.applyRequestProcessors(initialLlmRequest, context);
const response = await llm.generateContentAsync(processedRequest);
// Custom response handling
const result = await this.applyResponseProcessors(response, processedRequest, context);
return this.createEventFromLlmResponse(response, processedRequest, context);
}
}
// Use custom flow
const customAgent = new LlmAgent({
name: 'custom_agent',
description: 'Agent with custom flow',
flow: new CustomFlow()
});
async function runWithEventMonitoring() {
const runConfig: RunConfig = {
agentName: 'assistant',
input: 'Explain quantum computing',
userId: 'user123'
};
for await (const event of runner.runAgent(runConfig)) {
switch (event.type) {
case EventType.LLM_REQUEST:
console.log('LLM Request:', event.llmRequest?.contents);
break;
case EventType.LLM_RESPONSE:
console.log('LLM Response:', event.llmResponse?.candidates?.[0]?.content);
break;
case EventType.TOOL_CALL:
console.log('Tool Called:', event.functionCalls);
break;
case EventType.TOOL_RESULT:
console.log('Tool Result:', event.data);
break;
case EventType.ERROR:
console.error('Error:', event.data?.error);
break;
case EventType.MESSAGE:
console.log('Message:', event.data?.content);
break;
}
}
}
All interactions are modeled as events, enabling:
- Comprehensive logging and debugging
- Real-time monitoring
- Event replay and analysis
- Loose coupling between components
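For example, because every run is just a stream of events, a complete audit log falls out naturally. Reusing the runner and RunConfig type from the Quick Start:

```typescript
import { Event, RunConfig } from 'adk-nodejs';

// Capture every event emitted during a run so it can be persisted,
// replayed, or analysed offline. `runner` is the instance built in the Quick Start.
async function captureEventLog(runConfig: RunConfig): Promise<Event[]> {
  const log: Event[] = [];
  for await (const event of runner.runAgent(runConfig)) {
    log.push(event); // requests, responses, tool calls, errors: all in order
  }
  // e.g. JSON.stringify(log) and write it to disk for later analysis
  return log;
}
```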
Request and response processors provide:
- Modular functionality
- Easy customization
- Reusable components
- Clear separation of concerns
Agent factories enable:
- Dynamic agent resolution
- Dependency injection
- Testing flexibility
- Runtime configuration
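For example, a test can swap in a factory that resolves to a stub instead of a real LLM-backed agent, using the same factory signature shown in the Quick Start (the stub's shape here is purely illustrative):

```typescript
import { RunConfig, InvocationContext } from 'adk-nodejs';

// Minimal stand-in for a real agent; in practice this would be a mock that
// satisfies the same interface as LlmAgent (the shape here is illustrative).
const stubAssistantAgent: any = { name: 'assistant', description: 'stub for tests' };

// Same factory signature as the Quick Start, but resolving to the stub so
// Runner behaviour can be exercised without real LLM calls.
const testAgentFactory = async (
  agentName: string,
  runConfig: RunConfig,
  invocationContext: InvocationContext
) => (agentName === 'assistant' ? stubAssistantAgent : undefined);
```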
Async generators provide:
- Streaming event delivery
- Memory efficiency
- Real-time feedback
- Cancellation support
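Cancellation in particular needs no extra machinery: leaving the for await loop early (which implicitly calls the generator's return()) ends the run. Reusing the runner, RunConfig, and EventType from the earlier examples:

```typescript
// Stop the run as soon as an error event is seen. Exiting the for await loop
// early calls the generator's return() under the hood, so the run is cancelled cleanly.
async function runUntilFirstError(runConfig: RunConfig): Promise<void> {
  for await (const event of runner.runAgent(runConfig)) {
    console.log(event.type);
    if (event.type === EventType.ERROR) {
      break; // cancellation point
    }
  }
}
```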
The codebase includes comprehensive testing with Jest:
- Unit tests for individual components
- Integration tests for full workflows
- Mock implementations for external dependencies
- Event flow validation
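For example, an integration-style Jest test can drive the Runner end to end against the in-memory services from the Quick Start; the specific assertions below are illustrative rather than guaranteed by the framework:

```typescript
// Illustrative Jest test: drives the Runner built in the Quick Start (with the
// in-memory services) and asserts on the streamed events. The exact events a
// run produces depend on the agent, flow, and model, so adapt the assertions.
it('runs the assistant agent end to end', async () => {
  const yieldedEvents: Event[] = [];
  for await (const event of runner.runAgent({
    agentName: 'assistant',
    input: 'Hello',
    userId: 'test-user',
    defaultModelName: 'gemini-2.0-flash'
  })) {
    yieldedEvents.push(event);
  }

  expect(yieldedEvents.length).toBeGreaterThan(0);
  expect(yieldedEvents.some(e => e.type === EventType.MESSAGE)).toBe(true);
});
```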
Key Test Patterns:
// Event consumption helper
async function consumeRunnerOutput(
generator: AsyncGenerator<Event, RunOutput, undefined>
): Promise<{ yieldedEvents: Event[], finalOutput: RunOutput }> {
const yieldedEvents: Event[] = [];
let result = await generator.next();
while (!result.done) {
if (result.value) {
yieldedEvents.push(result.value);
}
result = await generator.next();
}
return { yieldedEvents, finalOutput: result.value as RunOutput };
}
The Node.js ADK provides a robust, flexible framework for building sophisticated AI agents. Its event-driven architecture, modular design, and comprehensive tooling make it suitable for everything from simple chatbots to complex multi-agent systems. The TypeScript implementation ensures type safety while maintaining the flexibility needed for diverse AI applications.
The framework's strength lies in its balance of structure and flexibility - providing sensible defaults while allowing deep customization at every level of the stack.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
This project is a derivative work of https://github.com/google/adk-python
Happy Agent Building!
