Pi Agent

Pi Agent is the open-source TypeScript monorepo toolkit that powers OpenClaw. Created by Mario Zechner (badlogic), it provides a full agent stack: a unified LLM API, a coding agent CLI, a terminal UI, a web UI, a Slack bot, and vLLM pod support.

GitHub: https://github.com/earendil-works/pi
Website: https://pi.dev
License: MIT
Language: TypeScript

# Install the coding agent CLI globally
npm install -g @pi-agent/cli

# Or install from source
git clone https://github.com/earendil-works/pi.git
cd pi
npm install
npm run build

# Update Pi Agent to the latest version
pi update

# Check current version
pi --version

Pi Agent is a monorepo with modular packages that can be used independently or together.

| Package | Description |
| --- | --- |
| pi-ai | Unified LLM API — streaming, completions, tool definitions, cost tracking |
| pi-agent-core | Agent loop with tool calling, message management, context overflow handling |
| pi-coding-agent | Built-in tools (read, write, edit, bash) + session persistence |
| pi-tui | Terminal UI library for interactive agent sessions |
| pi-web-ui | Web-based UI for agent interaction |
| pi-slack | Slack bot integration for team-based agent access |
| pi-vllm | vLLM pod support for self-hosted model inference |

# Launch the coding agent in the current directory
pi

# Start with a specific prompt
pi "Explain the architecture of this project"

# Run a one-shot task
pi --prompt "Add error handling to the API routes"

# Initialize a Pi Agent config in your project
pi init

# This creates a .pi/ directory with configuration files

The coding agent ships with four core tools out of the box.

| Tool | Description |
| --- | --- |
| read | Read file contents, supports partial reads with line ranges |
| write | Write or create files with full content |
| edit | Apply targeted string replacements to existing files |
| bash | Execute shell commands in the project context |

# The agent uses tools automatically based on your request
pi "Read the README and summarize the project"

# Tools are invoked by the agent loop — you describe the task, the agent picks the tools
pi "Fix the failing test in src/utils/parser.test.ts"

# Bash tool runs commands in the project directory
pi "Run the test suite and fix any failures"

The pi-ai package provides a unified API across all major LLM providers.

| Provider | Models | Configuration |
| --- | --- | --- |
| Anthropic | Claude 4, Sonnet 4, Haiku | ANTHROPIC_API_KEY |
| OpenAI | GPT-4o, o3, o4-mini | OPENAI_API_KEY |
| Google | Gemini 2.5 Pro, Flash | GOOGLE_API_KEY |
| AWS Bedrock | Claude, Titan, Llama | AWS credentials |
| Mistral | Mistral Large, Codestral | MISTRAL_API_KEY |
| Groq | Llama, Mixtral | GROQ_API_KEY |
| xAI | Grok | XAI_API_KEY |
| OpenRouter | Multi-provider routing | OPENROUTER_API_KEY |
| Ollama | Local models | OLLAMA_HOST (default: localhost:11434) |
| Azure OpenAI | GPT-4o, o3 | Azure credentials |

import { createClient } from "pi-ai";

// Create a client with a specific provider
const client = createClient({
  provider: "anthropic",
  model: "claude-sonnet-4-20250514",
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Simple completion
const response = await client.complete({
  messages: [{ role: "user", content: "Hello, world!" }],
});

// Streaming completion
const stream = await client.stream({
  messages: [{ role: "user", content: "Explain TypeScript generics" }],
});

for await (const chunk of stream) {
  process.stdout.write(chunk.text);
}

import { defineTool } from "pi-ai";

const searchTool = defineTool({
  name: "web_search",
  description: "Search the web for information",
  parameters: {
    query: { type: "string", description: "Search query" },
  },
  execute: async ({ query }) => {
    // Implementation
    return { results: [] };
  },
});

const response = await client.complete({
  messages: [{ role: "user", content: "Search for TypeScript 5.7 features" }],
  tools: [searchTool],
});

// pi-ai tracks token usage and cost per request
const response = await client.complete({
  messages: [{ role: "user", content: "Hello" }],
});

console.log(response.usage.inputTokens);   // Tokens consumed
console.log(response.usage.outputTokens);  // Tokens generated
console.log(response.usage.cost);          // Estimated cost in USD

The agent loop manages tool calling, message history, and context overflow.

import { createAgent } from "pi-agent-core";

const agent = createAgent({
  client,
  tools: [readTool, writeTool, editTool, bashTool],
  systemPrompt: "You are a helpful coding assistant.",
});

// Run the agent loop — handles multi-turn tool calling automatically
const result = await agent.run("Refactor the utils module for better testability");

// Spawn a subagent for a focused subtask
const subagent = agent.createSubagent({
  systemPrompt: "You are a test-writing specialist.",
  tools: [readTool, writeTool, bashTool],
});

const testResult = await subagent.run("Write unit tests for src/utils/parser.ts");

Pi Agent persists sessions so you can resume work across terminal sessions.

| Command | Description |
| --- | --- |
| `pi --resume` | Resume the most recent session |
| `pi --resume <session-id>` | Resume a specific session |
| `pi sessions list` | List all saved sessions |
| `pi sessions delete <id>` | Delete a specific session |
| `pi sessions clear` | Clear all saved sessions |
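
The persistence model is simple to picture: each session's message history survives on disk so it can be reloaded later. Below is a minimal sketch of a file-based session store with one JSON file per session. The directory layout, file format, and function names here are illustrative assumptions, not Pi Agent's actual on-disk schema:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

interface SessionMessage {
  role: "user" | "assistant";
  content: string;
}

// Hypothetical layout: one <id>.json file per session inside a sessions directory.
function saveSession(dir: string, id: string, messages: SessionMessage[]): void {
  fs.mkdirSync(dir, { recursive: true });
  fs.writeFileSync(path.join(dir, `${id}.json`), JSON.stringify(messages, null, 2));
}

function resumeSession(dir: string, id: string): SessionMessage[] {
  return JSON.parse(fs.readFileSync(path.join(dir, `${id}.json`), "utf8"));
}

function listSessions(dir: string): string[] {
  return fs.readdirSync(dir).map((f) => path.basename(f, ".json"));
}
```

Because each session is a standalone file, resuming a specific session only needs its id, which mirrors the `pi --resume <session-id>` workflow above.
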

When context grows too large, Pi Agent compacts the conversation by summarizing older messages while preserving recent context.

# Manual compaction trigger
pi compact

# Configure compaction threshold (tokens)
pi config set compaction.threshold 100000

# Compaction strategy: summarize older messages, keep recent ones intact
pi config set compaction.strategy summarize
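
The summarize strategy can be pictured as collapsing the older portion of the transcript into one compact summary while the recent tail stays verbatim. The sketch below only shows that shape: the token counts, the keepRecent cutoff, and the summary stub are illustrative (the real compactor produces the summary with the model rather than a placeholder):

```typescript
interface Msg {
  role: "user" | "assistant";
  content: string;
  tokens: number; // token count attributed to this message
}

// If the transcript exceeds the threshold, collapse everything except the
// last `keepRecent` messages into a single summary stub.
function compact(messages: Msg[], threshold: number, keepRecent = 4): Msg[] {
  const total = messages.reduce((sum, m) => sum + m.tokens, 0);
  if (total <= threshold || messages.length <= keepRecent) return messages;

  const older = messages.slice(0, -keepRecent);
  const recent = messages.slice(-keepRecent);
  const summary: Msg = {
    role: "assistant",
    content: `[summary of ${older.length} earlier messages]`,
    tokens: 50, // assume the summary itself is small
  };
  return [summary, ...recent];
}
```
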

Skills are reusable prompt templates and tool bundles distributed as installable pi packages.

# List available skills
pi skills list

# Install a skill package
pi skills install @pi-skills/code-review

# Use a skill in a session
pi --skill code-review "Review the changes in the last commit"

// skills/my-skill/index.ts
import { defineSkill } from "pi-agent-core";

export default defineSkill({
  name: "deploy-checker",
  description: "Verify deployment readiness",
  systemPrompt: `You are a deployment readiness checker. 
    Verify tests pass, linting is clean, and no TODOs remain.`,
  tools: [bashTool, readTool],
});

The pi-tui package provides the interactive terminal interface.

| Keybinding | Action |
| --- | --- |
| Enter | Send message |
| Shift+Enter | New line in input |
| Ctrl+C | Cancel current generation |
| Ctrl+D | Exit the session |
| Ctrl+L | Clear the screen |
| `/` | Open command palette |

| Command | Description |
| --- | --- |
| `/help` | Show available commands |
| `/model <name>` | Switch LLM model mid-session |
| `/system <prompt>` | Update the system prompt |
| `/clear` | Clear conversation history |
| `/cost` | Show token usage and cost for current session |
| `/compact` | Trigger context compaction |
| `/tools` | List available tools |

# Set default provider and model
pi config set provider anthropic
pi config set model claude-sonnet-4-20250514

# Set API keys
pi config set anthropic.apiKey sk-ant-...
pi config set openai.apiKey sk-...

# View current configuration
pi config list

Project-Level Configuration (.pi/config.json)

{
  "model": "claude-sonnet-4-20250514",
  "provider": "anthropic",
  "systemPrompt": "You are a senior TypeScript developer.",
  "tools": {
    "bash": {
      "allowedCommands": ["npm", "node", "git", "tsc"],
      "timeout": 30000
    }
  },
  "session": {
    "compaction": {
      "threshold": 100000,
      "strategy": "summarize"
    }
  }
}
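
Settings can come from three places: the global config set via `pi config set`, the project's `.pi/config.json`, and environment overrides such as PI_MODEL and PI_PROVIDER. A small sketch of a plausible merge, assuming environment beats project beats global (the exact precedence Pi Agent applies is an assumption here):

```typescript
interface PiConfig {
  provider?: string;
  model?: string;
  systemPrompt?: string;
}

// Later sources win: global < project < environment overrides.
function resolveConfig(
  globalCfg: PiConfig,
  projectCfg: PiConfig,
  env: Record<string, string | undefined>,
): PiConfig {
  return {
    ...globalCfg,
    ...projectCfg,
    ...(env.PI_PROVIDER ? { provider: env.PI_PROVIDER } : {}),
    ...(env.PI_MODEL ? { model: env.PI_MODEL } : {}),
  };
}
```
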

The pi-slack package provides a Slack bot that exposes agent capabilities to your team.

# Set up the Slack bot
export SLACK_BOT_TOKEN=xoxb-...
export SLACK_APP_TOKEN=xapp-...

# Start the Slack bot
pi slack start

# Configure allowed channels
pi slack config set channels "#dev,#engineering"

| Feature | Description |
| --- | --- |
| Threaded conversations | Each Slack thread is a separate agent session |
| File sharing | Upload and analyze files via Slack |
| Code execution | Run code snippets shared in Slack |
| Multi-model | Switch models per channel or thread |
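
Since each Slack thread is its own agent session, the bot's bookkeeping can be pictured as a map keyed by channel and thread timestamp. The sketch below shows that isolation; the key format and in-memory store are illustrative assumptions, not pi-slack's internals:

```typescript
// One message history per (channel, thread) pair.
const sessions = new Map<string, string[]>();

function sessionKey(channel: string, threadTs: string): string {
  return `${channel}:${threadTs}`;
}

// Append an incoming Slack message to its thread's session and return the history.
function handleSlackMessage(channel: string, threadTs: string, text: string): string[] {
  const key = sessionKey(channel, threadTs);
  const history = sessions.get(key) ?? [];
  history.push(text);
  sessions.set(key, history);
  return history;
}
```

Messages in different threads never share history, so two conversations in the same channel stay independent.
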

Run self-hosted models with vLLM integration for on-premise or GPU-constrained environments.

# Configure vLLM endpoint
pi config set vllm.endpoint http://localhost:8000
pi config set vllm.model codellama/CodeLlama-34b-Instruct-hf

# Use vLLM as the provider
pi --provider vllm "Explain this function"

# Start a vLLM pod (requires GPU)
pi vllm start --model codellama/CodeLlama-34b-Instruct-hf --gpu-memory-utilization 0.9

# Check pod status
pi vllm status

# Stop the pod
pi vllm stop
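
Under the hood, vLLM serves an OpenAI-compatible HTTP API, so the configured endpoint receives standard chat-completion requests. The sketch below only builds such a payload (pi issues the actual HTTP call when --provider vllm is set; the helper name is illustrative):

```typescript
interface ChatRequest {
  url: string;
  body: {
    model: string;
    messages: { role: string; content: string }[];
    stream: boolean;
  };
}

// Build an OpenAI-style chat completion request against a vLLM endpoint.
function buildVllmRequest(endpoint: string, model: string, prompt: string): ChatRequest {
  return {
    url: `${endpoint}/v1/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    },
  };
}
```
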

# Enable Language Server Protocol integration for richer code understanding
pi config set lsp.enabled true

# Supported languages: TypeScript, Python, Rust, Go, Java
pi config set lsp.languages "typescript,python"

# Enable browser automation tools
pi --tools browser "Navigate to localhost:3000 and check for console errors"

# Set a cost budget for the session
pi --budget 5.00 "Refactor the entire utils directory"

# Set a global daily budget
pi config set budget.daily 20.00
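
The budget cutoff amounts to accumulating the per-request cost that pi-ai reports (response.usage.cost) and refusing further requests once the limit is reached. A minimal sketch of that guard, purely illustrative of the logic pi applies when --budget is set:

```typescript
// Accumulates per-request cost and flags when the session budget is spent.
class BudgetTracker {
  private spentUsd = 0;

  constructor(private readonly budgetUsd: number) {}

  // Record a completed request's cost; returns the remaining budget in USD.
  record(costUsd: number): number {
    this.spentUsd += costUsd;
    return this.budgetUsd - this.spentUsd;
  }

  get exhausted(): boolean {
    return this.spentUsd >= this.budgetUsd;
  }
}
```
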

| Workflow | Command |
| --- | --- |
| Fix a bug | `pi "Fix the TypeError in src/api/handler.ts"` |
| Add a feature | `pi "Add pagination to the /users endpoint"` |
| Write tests | `pi "Write comprehensive tests for the auth module"` |
| Code review | `pi "Review the changes in the last 3 commits"` |
| Refactor | `pi "Refactor database queries to use the repository pattern"` |
| Documentation | `pi "Generate JSDoc comments for all exported functions"` |
| Debugging | `pi "The app crashes on startup — diagnose and fix"` |

| Variable | Description |
| --- | --- |
| ANTHROPIC_API_KEY | Anthropic API key for Claude models |
| OPENAI_API_KEY | OpenAI API key |
| GOOGLE_API_KEY | Google AI API key |
| GROQ_API_KEY | Groq API key |
| MISTRAL_API_KEY | Mistral API key |
| XAI_API_KEY | xAI API key for Grok |
| OPENROUTER_API_KEY | OpenRouter API key |
| OLLAMA_HOST | Ollama server URL (default: http://localhost:11434) |
| PI_HOME | Custom config directory (default: ~/.pi) |
| PI_MODEL | Override default model |
| PI_PROVIDER | Override default provider |