Google ADK (Agent Development Kit)
Google ADK (Agent Development Kit) is an open-source Python framework that simplifies building, evaluating, and deploying AI agents using Google’s Gemini models. It provides tool integration, multi-agent orchestration, built-in memory management, and direct Vertex AI deployment capabilities.
Installation
Install Google ADK from PyPI with Python 3.9 or higher:
pip install google-adk
Verify installation by checking the version:
adk --version
For development with the latest code, clone and install from source:
git clone https://github.com/google/adk.git
cd adk
pip install -e .
Install optional dependencies for specific features:
pip install google-adk[genai] # For Generative AI API
pip install google-adk[vertexai] # For Vertex AI integration
pip install google-adk[all] # All optional dependencies
Ensure you have Google Cloud credentials configured. Set your API key as an environment variable:
export GOOGLE_API_KEY="your-api-key-here"
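If you prefer to fail fast from Python rather than discover a missing key at the first model call, a small stdlib-only check works (a sketch using plain os.environ, no ADK import):

```python
import os

def require_api_key() -> str:
    """Return the configured API key, or raise a clear error if it is missing."""
    key = os.environ.get("GOOGLE_API_KEY")
    if not key:
        raise RuntimeError("GOOGLE_API_KEY is not set; export it before running the agent")
    return key
```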
Quick Start
Create a basic agent in a Python file named simple_agent.py:
from google.adk.agents import Agent
from google.adk.tools import tool
@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b
agent = Agent(
name="MathBot",
model="gemini-2.0-flash",
instruction="You are a helpful math assistant.",
tools=[add, multiply]
)
Run the agent in interactive web mode:
adk web simple_agent.py
This launches a browser-based interface at http://localhost:8000 where you can interact with the agent.
Run the agent with a single prompt:
adk run simple_agent.py "What is 5 plus 3?"
For non-interactive execution with output piping:
echo "Calculate 10 times 7" | adk run simple_agent.py
Agent Configuration
Create an Agent instance with the required parameters:
from google.adk.agents import Agent
agent = Agent(
name="DataAnalyzer",
model="gemini-2.5-pro",
instruction="""You are a data analysis expert.
Analyze the provided data and give insights.
Ask clarifying questions if needed.""",
tools=[my_tool1, my_tool2],
system_prompt="You follow instructions precisely.",
max_turns=10
)
| Parameter | Description |
|---|---|
| name | Unique agent identifier |
| model | Gemini model ID (gemini-2.0-flash, gemini-2.5-pro) |
| instruction | System instructions for agent behavior |
| tools | List of callable tool functions |
| system_prompt | Additional system-level constraints |
| max_turns | Maximum conversation turns before stopping |
| temperature | Sampling temperature (0.0-1.0) |
| top_p | Nucleus sampling parameter |
Available Gemini models for agents:
# Latest and recommended
model="gemini-2.0-flash" # Fast, multimodal
model="gemini-2.5-pro" # More capable, reasoning
model="gemini-exp-1119" # Experimental features
Configure with Vertex AI integration:
from google.adk.agents import Agent
import os
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"
agent = Agent(
name="CloudAgent",
model="gemini-2.0-flash",
instruction="Cloud-native agent",
tools=[],
project_id="my-gcp-project",
location="us-central1"
)
Tool Definition
Define tools as simple Python functions decorated with @tool:
from google.adk.tools import tool
@tool
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    # Implementation here
    return f"Weather in {city}: 72°F, Sunny"

@tool
def convert_currency(amount: float, from_currency: str, to_currency: str) -> float:
    """Convert between currencies."""
    # Exchange rate logic
    return amount * 1.1  # Simplified
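The simplified conversion above can be made a little more concrete with a static rate table; the rates below are illustrative placeholders, not live data, and the helper is kept framework-independent (no @tool decorator):

```python
# Illustrative rates relative to USD; a real tool would fetch live data.
RATES_TO_USD = {"USD": 1.0, "EUR": 1.1, "GBP": 1.3}

def convert(amount: float, from_currency: str, to_currency: str) -> float:
    """Convert between currencies by routing through USD with a static rate table."""
    usd = amount * RATES_TO_USD[from_currency]
    return round(usd / RATES_TO_USD[to_currency], 2)
```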
Use FunctionTool class for advanced configurations:
from google.adk.tools import FunctionTool
def calculate_discount(price: float, discount_percent: int) -> float:
    """Calculate discounted price."""
    return price * (1 - discount_percent / 100)
discount_tool = FunctionTool(
func=calculate_discount,
name="calculate_discount",
description="Calculates the final price after applying discount"
)
Built-in tools provided by ADK:
| Tool | Description | Usage |
|---|---|---|
| google_search | Search Google for current information | from google.adk.tools import google_search |
| code_execution | Execute Python code safely | from google.adk.tools import code_execution |
| web_scraper | Extract content from web pages | from google.adk.tools import web_scraper |
Enable built-in tools when creating an agent:
from google.adk.tools import google_search, code_execution
agent = Agent(
name="ResearchBot",
model="gemini-2.0-flash",
instruction="Research topics thoroughly.",
tools=[google_search, code_execution, my_custom_tool]
)
Create parameterized tools with type hints:
from typing import Optional
from google.adk.tools import tool
@tool
def search_documentation(query: str, max_results: Optional[int] = 5) -> list:
    """Search documentation with optional result limit."""
    limit = max_results if max_results is not None else 5
    # Search logic
    return [f"Result {i}" for i in range(limit)]
Multi-Agent Systems
Create sequential agents where one agent’s output feeds into the next:
from google.adk.agents import Agent, SequentialAgent
researcher = Agent(
name="Researcher",
model="gemini-2.0-flash",
instruction="Research and gather information",
tools=[google_search]
)
writer = Agent(
name="Writer",
model="gemini-2.0-flash",
instruction="Write clear articles based on research",
tools=[]
)
sequential_workflow = SequentialAgent(
agents=[researcher, writer],
name="ResearchWriteWorkflow"
)
Run sequential agents with a single prompt:
adk run sequential_workflow.py "Write about quantum computing"
Create parallel agents for concurrent execution:
from google.adk.agents import Agent, ParallelAgent
summary_agent = Agent(
name="Summarizer",
model="gemini-2.0-flash",
instruction="Summarize content concisely",
tools=[]
)
translator_agent = Agent(
name="Translator",
model="gemini-2.0-flash",
instruction="Translate to Spanish",
tools=[]
)
parallel_workflow = ParallelAgent(
agents=[summary_agent, translator_agent],
name="SummarizeAndTranslate"
)
Use LoopAgent for iterative refinement:
from google.adk.agents import Agent, LoopAgent
refiner = Agent(
name="ContentRefiner",
model="gemini-2.0-flash",
instruction="Improve text quality",
tools=[]
)
loop_workflow = LoopAgent(
agent=refiner,
max_iterations=3,
convergence_check=lambda x: "refined" in x.lower(),
name="IterativeRefinery"
)
Implement agent-as-tool pattern for nested delegation:
from google.adk.tools import FunctionTool
calculator = Agent(
name="Calculator",
model="gemini-2.0-flash",
instruction="Perform mathematical calculations",
tools=[add, multiply]
)
def delegate_to_calculator(query: str) -> str:
    """Delegate math questions to calculator agent."""
    result = calculator.run(query)
    return result
manager_agent = Agent(
name="Manager",
model="gemini-2.0-flash",
instruction="Route tasks to specialist agents",
tools=[FunctionTool(func=delegate_to_calculator)]
)
Memory and State
Access session state within tools:
from google.adk.tools import tool, ToolContext
@tool
def remember_fact(fact: str, context: ToolContext) -> str:
    """Store and recall facts."""
    if not hasattr(context.session_state, 'facts'):
        context.session_state.facts = []
    context.session_state.facts.append(fact)
    return f"Remembered: {fact}"

@tool
def list_facts(context: ToolContext) -> list:
    """List all remembered facts."""
    facts = getattr(context.session_state, 'facts', [])
    return facts
Access conversation history:
from google.adk.tools import tool, ToolContext
@tool
def get_conversation_context(context: ToolContext) -> str:
    """Get recent conversation messages."""
    history = context.conversation_history
    recent = history[-5:] if len(history) > 5 else history
    return "\n".join([f"{msg['role']}: {msg['content']}" for msg in recent])
Persist state across agent runs:
agent = Agent(
name="StatefulBot",
model="gemini-2.0-flash",
instruction="Remember context across conversations",
tools=[remember_fact, list_facts],
enable_session_persistence=True,
session_store_path="./agent_sessions"
)
Callbacks and Hooks
Use callbacks to monitor agent execution:
def before_model_call(agent_name: str, messages: list) -> None:
    """Called before the model processes messages."""
    print(f"Agent {agent_name} sending {len(messages)} messages to model")

def after_model_call(agent_name: str, response: str) -> None:
    """Called after the model returns a response."""
    print(f"Agent {agent_name} received response of length {len(response)}")
agent = Agent(
name="CallbackBot",
model="gemini-2.0-flash",
instruction="Execute with callbacks",
tools=[],
on_before_model_call=before_model_call,
on_after_model_call=after_model_call
)
Add tool execution callbacks:
def before_tool_call(tool_name: str, args: dict) -> None:
    """Called before a tool executes."""
    print(f"Calling tool: {tool_name} with args: {args}")

def after_tool_call(tool_name: str, result: str) -> None:
    """Called after a tool executes."""
    print(f"Tool {tool_name} returned: {result}")
agent = Agent(
name="ToolAwareBot",
model="gemini-2.0-flash",
instruction="Track tool usage",
tools=[my_tool],
on_before_tool_call=before_tool_call,
on_after_tool_call=after_tool_call
)
Implement error handling callbacks:
def on_error(error_type: str, error_message: str, context: dict) -> None:
    """Handle errors during execution."""
    print(f"Error ({error_type}): {error_message}")
    if error_type == "tool_execution":
        print(f"Tool {context['tool_name']} failed")
agent = Agent(
name="ErrorHandlingBot",
model="gemini-2.0-flash",
instruction="Handle errors gracefully",
tools=[],
on_error=on_error
)
Evaluation
Create evaluation datasets in JSON format:
[
  {
    "input": "What is 5 + 3?",
    "expected_output": "8",
    "tags": ["arithmetic"]
  },
  {
    "input": "Translate hello to Spanish",
    "expected_output": "hola",
    "tags": ["translation"]
  }
]
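Before handing a dataset file to adk eval, it can be worth sanity-checking it with the standard library; this framework-independent sketch just verifies that each case carries the fields shown above:

```python
import json

# Fields every evaluation case is expected to carry (per the dataset format above).
REQUIRED_FIELDS = {"input", "expected_output"}

def load_dataset(path: str) -> list[dict]:
    """Load an evaluation dataset and verify each case has the required fields."""
    with open(path) as f:
        cases = json.load(f)
    for i, case in enumerate(cases):
        missing = REQUIRED_FIELDS - case.keys()
        if missing:
            raise ValueError(f"Case {i} is missing fields: {sorted(missing)}")
    return cases
```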
Run evaluation against a test dataset:
adk eval my_agent.py --dataset test_cases.json --output results.json
Evaluate with custom metrics:
adk eval my_agent.py \
--dataset test_cases.json \
--metric exact_match \
--metric semantic_similarity \
--output eval_results.json
Create evaluation in Python:
from google.adk.evaluation import Evaluator
evaluator = Evaluator(agent=my_agent)
test_cases = [
{"input": "What is 2+2?", "expected": "4"},
{"input": "What is 10*5?", "expected": "50"}
]
results = evaluator.evaluate(test_cases)
print(f"Pass rate: {results['pass_rate']}")
print(f"Avg latency: {results['avg_latency']}ms")
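exact_match is the easiest metric to reason about. A hand-rolled version (illustrative only, not ADK's implementation) might normalize case and whitespace before comparing, and a pass rate is then the fraction of matching cases:

```python
def exact_match(predicted: str, expected: str) -> bool:
    """Case- and whitespace-insensitive exact match."""
    return predicted.strip().lower() == expected.strip().lower()

def pass_rate(results: list[tuple[str, str]]) -> float:
    """Fraction of (predicted, expected) pairs that match exactly."""
    if not results:
        return 0.0
    return sum(exact_match(p, e) for p, e in results) / len(results)
```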
Compare multiple agent configurations:
adk eval agent_v1.py agent_v2.py \
--dataset benchmark.json \
--compare \
--output comparison.html
Deployment
Deploy to Cloud Run using the CLI:
adk deploy my_agent.py \
--platform cloud-run \
--project my-gcp-project \
--region us-central1
Deploy to Vertex AI Agent Engine:
adk deploy my_agent.py \
--platform vertex-ai \
--project my-gcp-project \
--agent-name "ProductionAgent"
Create a Dockerfile for containerization:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["adk", "api_server", "my_agent.py", "--port", "8000"]
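The Dockerfile above copies a requirements.txt that is not shown. A minimal one would list the ADK itself plus anything your tools import, for example:

```
google-adk
python-dotenv
```

Pin exact versions in production so container rebuilds stay reproducible.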
Build and push container image:
docker build -t gcr.io/my-project/my-agent:latest .
docker push gcr.io/my-project/my-agent:latest
Deploy containerized agent to Cloud Run:
gcloud run deploy my-agent \
--image gcr.io/my-project/my-agent:latest \
--platform managed \
--region us-central1 \
--set-env-vars GOOGLE_API_KEY=$GOOGLE_API_KEY
Create a deployment configuration file (adk-config.yaml):
agent:
  name: "ProductionBot"
  model: "gemini-2.5-pro"
deployment:
  platform: "vertex-ai"
  project_id: "my-gcp-project"
  region: "us-central1"
  memory: "2Gi"
  cpu: "1"
scaling:
  min_instances: 1
  max_instances: 10
  target_utilization: 0.7
Deploy using configuration:
adk deploy my_agent.py --config adk-config.yaml
ADK CLI Commands
| Command | Description | Example |
|---|---|---|
| adk web | Launch interactive browser UI | adk web my_agent.py |
| adk run | Execute agent with a prompt | adk run my_agent.py "Your prompt" |
| adk eval | Evaluate agent against test dataset | adk eval my_agent.py --dataset tests.json |
| adk deploy | Deploy agent to cloud platform | adk deploy my_agent.py --platform cloud-run |
| adk api_server | Start HTTP API server | adk api_server my_agent.py --port 8000 |
| adk logs | View deployment logs | adk logs my-agent --platform cloud-run |
| adk list | List deployed agents | adk list --platform vertex-ai |
| adk delete | Remove a deployed agent | adk delete my-agent --platform cloud-run |
Get help for any command:
adk --help
adk run --help
adk deploy --help
Configuration
Set up environment variables in a .env file:
GOOGLE_API_KEY="your-api-key"
GOOGLE_GENAI_USE_VERTEXAI=true
GOOGLE_CLOUD_PROJECT="my-gcp-project"
GOOGLE_CLOUD_REGION="us-central1"
ADK_LOG_LEVEL=INFO
Load environment variables in your agent file:
import os
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv("GOOGLE_API_KEY")
project_id = os.getenv("GOOGLE_CLOUD_PROJECT")
Switch between Generative AI API and Vertex AI:
import os
# Use Generative AI API (default)
os.environ.pop("GOOGLE_GENAI_USE_VERTEXAI", None)
# Use Vertex AI
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "true"
Configure model parameters:
agent = Agent(
name="ConfiguredBot",
model="gemini-2.0-flash",
instruction="Configured agent",
tools=[],
temperature=0.7,
top_p=0.95,
top_k=40,
max_output_tokens=2048
)
Debugging and Logging
Enable verbose logging:
adk web my_agent.py --log-level DEBUG
Access the Dev UI for trace inspection:
adk web my_agent.py --dev-ui
Then visit http://localhost:8000/traces to inspect detailed execution traces.
Log tool execution details:
import logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("google.adk")
# Now all ADK operations will log detailed information
agent = Agent(
name="VerboseBot",
model="gemini-2.0-flash",
instruction="Log everything",
tools=[my_tool]
)
Inspect agent execution programmatically:
result = agent.run("Test prompt")
# Access execution metadata
print(f"Model: {result.model_used}")
print(f"Tokens used: {result.tokens_used}")
print(f"Tools called: {result.tools_called}")
print(f"Execution time: {result.execution_time_ms}ms")
View deployment logs:
adk logs my-deployed-agent --platform cloud-run --tail 50
adk logs my-deployed-agent --platform vertex-ai --since "1 hour ago"
Best Practices
Tool Design
- Keep tools focused and single-purpose
- Use clear, descriptive names and docstrings
- Include proper type hints for all parameters
- Return structured data when possible
- Handle errors gracefully within tools
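The last two points, structured returns and graceful error handling, can be combined in one small function; the @tool decorator is omitted here so the sketch stays framework-independent:

```python
def divide(a: float, b: float) -> dict:
    """Divide two numbers, returning a structured result instead of raising.

    A structured payload lets the model see and explain the failure rather
    than having the tool call abort the turn.
    """
    if b == 0:
        return {"ok": False, "error": "division by zero"}
    return {"ok": True, "result": a / b}
```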
Agent Configuration
- Use appropriate models for task complexity (flash for speed, pro for reasoning)
- Write clear, specific instructions that guide behavior
- Set max_turns to prevent infinite loops
- Test agents locally before deployment
Memory Management
- Regularly clear old conversation history to save memory
- Use session persistence carefully on high-traffic agents
- Monitor memory usage in deployed agents
- Archive conversation logs periodically
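Trimming old history, the first point above, needs nothing from the framework; a plain helper that keeps only the most recent messages is enough (message dicts follow the role/content shape used in the conversation-history example earlier):

```python
def trim_history(history: list[dict], max_messages: int = 20) -> list[dict]:
    """Keep only the most recent messages to bound memory use."""
    return history[-max_messages:] if len(history) > max_messages else history
```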
Evaluation and Testing
- Create diverse test datasets covering edge cases
- Run evaluations before deploying changes
- Compare model versions using evaluation metrics
- Track performance metrics over time
Deployment
- Use environment variables for all secrets and configuration
- Start with Cloud Run before moving to Vertex AI Agent Engine
- Monitor deployed agent logs and metrics
- Implement gradual rollout with canary deployments
- Set up alerts for error rates and latency
Security
- Never hardcode API keys in code
- Use service accounts for cloud deployments
- Validate all user inputs in tools
- Sanitize tool outputs before returning to users
- Implement rate limiting for production agents
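Validating user input inside a tool, the third point above, can start with a simple allowlist check before any real work is done (an illustrative sketch, not an ADK API):

```python
# Hypothetical allowlist; a real tool would load this from configuration.
ALLOWED_CITIES = {"london", "paris", "tokyo"}

def get_weather_safe(city: str) -> str:
    """Validate the city against an allowlist before doing any lookup."""
    normalized = city.strip().lower()
    if normalized not in ALLOWED_CITIES:
        return f"Unknown city: {city!r}"
    # Lookup logic would go here
    return f"Weather in {normalized}: 72°F, Sunny"
```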
Related Tools
- Vertex AI Agent Engine — Google Cloud’s managed service for deploying and scaling agents
- Google Generative AI API — REST API for Gemini models
- Cloud Run — Serverless container deployment platform
- LangChain — Python framework for LLM applications with agent support
- Anthropic Claude — Alternative LLM provider with agent capabilities
- OpenAI Assistants API — Similar multi-agent framework for GPT models