Google Agent Development Kit (ADK)

Google's open-source Python SDK for building production-ready agents — code-first, A2A-native, deploys to Vertex AI Agent Engine. v1.0.0 went GA in 2025; v1.18.0 is the current release (November 2025).

Google's official Python framework for building, evaluating, and deploying AI agents. Announced alongside the protocols/a2a protocol in April 2025. By November 2025 it had reached v1.18.0 and remains under active development. Designed to complement — not replace — existing frameworks: LangGraph agents can be wrapped as A2A agents and interoperate with ADK agents.

[Source: google/adk-python GitHub, Google Developers Blog, 2026-05-03]


Key Facts

  • GitHub: google/adk-python
  • Docs: https://google.github.io/adk-docs
  • PyPI: pip install google-adk
  • v1.0.0: first stable/GA release — all core interfaces moved to async
  • v1.18.0: current release as of November 5, 2025
  • A2A native: the reference SDK for building agents that speak the protocols/a2a protocol
  • Deploy to: Vertex AI Agent Engine (via google-cloud-aiplatform >= 1.95.0)
  • Default model: Gemini (any Gemini model via Vertex AI or Google AI Studio)

Core Pattern

from google.adk.agents import LlmAgent
from google.adk.tools import google_search

root_agent = LlmAgent(
    model="gemini-2.0-flash",
    name="research_agent",
    instruction="You are a helpful research assistant. Use google_search to find current information.",
    tools=[google_search]
)

Agents are Python objects. Tools are plain Python functions (passed directly or wrapped with FunctionTool) or built-in tools like google_search. The ADK handles the tool-calling (ReAct-style) loop automatically.
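The loop ADK runs under the hood can be sketched in plain Python. This is illustrative only — fake_model, TOOLS, and the message format are stand-ins, not ADK APIs:

```python
# Sketch of the ReAct-style tool loop a framework like ADK runs internally.
# fake_model and TOOLS are stand-ins, not ADK APIs.

def fake_model(messages):
    """Pretend LLM: requests a tool call first, then gives a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "google_search", "args": {"query": "quantum computing"}}
    return {"answer": "Here is a summary of the search results."}

TOOLS = {
    "google_search": lambda query: f"results for {query!r}",
}

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = fake_model(messages)
        if "answer" in reply:                           # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("no final answer within step budget")
```

The real loop differs in detail (streaming events, schema validation, retries), but the shape — model proposes a tool call, framework executes it, result is fed back until the model answers — is the same.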


Architecture

Agents

  • LlmAgent: the standard agent — wraps a Gemini model, tools, and instructions
  • SequentialAgent: runs sub-agents in sequence, passing state forward
  • ParallelAgent: runs sub-agents concurrently, merges results
  • LoopAgent: repeats a sub-agent until a termination condition is met
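The composition semantics of the workflow agents can be illustrated with plain callables over a shared state dict. This is a sketch of the control flow only — real workflow agents pass session state and emit events, and these helpers are not ADK classes:

```python
# Control-flow sketch of SequentialAgent and LoopAgent using plain functions
# over a shared state dict. Not ADK classes.

def sequential(*agents):
    def run(state):
        for agent in agents:      # each agent sees the previous agent's output
            state = agent(state)
        return state
    return run

def loop_until(agent, done, max_iterations=10):
    def run(state):
        for _ in range(max_iterations):
            state = agent(state)
            if done(state):       # termination condition, like LoopAgent
                break
        return state
    return run

draft = lambda s: {**s, "text": "draft"}
refine = lambda s: {**s, "revisions": s.get("revisions", 0) + 1}

pipeline = sequential(draft, loop_until(refine, lambda s: s["revisions"] >= 3))
```

ParallelAgent follows the same idea with concurrent execution (e.g. asyncio.gather over sub-agents) and a merge of the resulting states.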

Tools

from google.adk.tools import FunctionTool

def get_stock_price(ticker: str) -> dict:
    """Fetch the current stock price for a ticker symbol."""
    # ... API call
    return {"ticker": ticker, "price": 182.45, "currency": "USD"}

price_tool = FunctionTool(get_stock_price)

Type hints and docstrings generate the tool schema. The Gemini model sees the schema and decides when to call the tool.
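The schema derivation works roughly like this — a simplified sketch using the standard inspect and typing modules, not ADK's actual implementation:

```python
import inspect
import typing

def tool_schema(func):
    """Derive a minimal JSON-schema-like tool declaration from a Python
    function's signature and docstring. Simplified sketch, not ADK code."""
    hints = typing.get_type_hints(func)
    params = {
        name: {"type": hints.get(name, str).__name__}
        for name in inspect.signature(func).parameters
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func),
        "parameters": params,
    }

def get_stock_price(ticker: str) -> dict:
    """Fetch the current stock price for a ticker symbol."""
    return {"ticker": ticker, "price": 182.45, "currency": "USD"}

schema = tool_schema(get_stock_price)
# schema["parameters"] -> {"ticker": {"type": "str"}}
```

This is why the docstring and type hints matter: they are the only description of the tool the model ever sees.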

Sessions and Memory

ADK v1.0.0 made all session/artifact/memory service interfaces async:

from google.adk.sessions import InMemorySessionService

session_service = InMemorySessionService()
session = await session_service.create_session(
    app_name="my_agent",
    user_id="user-123"
)

For production, use VertexAiSessionService to persist sessions in Vertex AI.
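The async shape of the session interface can be mimicked with a small in-memory class — an illustration of the API shape only, not ADK's real InMemorySessionService:

```python
import asyncio
import uuid
from dataclasses import dataclass, field

@dataclass
class Session:
    id: str
    app_name: str
    user_id: str
    state: dict = field(default_factory=dict)  # per-session key/value state

class ToySessionService:
    """In-memory stand-in with the same async shape as ADK session services."""
    def __init__(self):
        self._sessions = {}

    async def create_session(self, app_name: str, user_id: str) -> Session:
        session = Session(id=uuid.uuid4().hex, app_name=app_name, user_id=user_id)
        self._sessions[session.id] = session
        return session

    async def get_session(self, session_id: str) -> Session:
        return self._sessions[session_id]

async def main():
    service = ToySessionService()
    session = await service.create_session(app_name="my_agent", user_id="user-123")
    return await service.get_session(session.id)

session = asyncio.run(main())
```

Swapping the in-memory implementation for a persistent one (as VertexAiSessionService does) changes storage, not the calling code — which is the point of the async service interface.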


Multi-Agent with A2A

ADK's primary multi-agent pattern uses the protocols/a2a protocol. Agents expose themselves as A2A services; other agents discover them via Agent Cards at /.well-known/agent.json.

from google.adk.a2a import A2AServer

# Make this agent available as an A2A service
server = A2AServer(agent=research_agent, port=8080)
await server.start()

A root agent can delegate to remote A2A agents:

from google.adk.a2a import A2AClient

remote_agent = A2AClient(url="http://research-agent:8080")
root_agent = LlmAgent(
    model="gemini-2.0-flash",
    name="orchestrator",
    tools=[remote_agent.as_tool()]
)

This allows networks of specialised agents running on different frameworks (LangGraph, CrewAI, AutoGen) to interoperate, as long as each implements A2A.
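Discovery works by serving a JSON Agent Card. The fields below follow the published A2A spec at a high level, but treat the exact field names as illustrative:

```python
import json

# Illustrative A2A Agent Card, served at /.well-known/agent.json.
# Field names sketch the public A2A spec; consult the spec for the exact schema.
agent_card = {
    "name": "research_agent",
    "description": "Research assistant backed by Google Search.",
    "url": "http://research-agent:8080",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "web_research",
            "name": "Web research",
            "description": "Answers questions using live web search.",
        }
    ],
}

card_json = json.dumps(agent_card, indent=2)
```

An orchestrator fetches this card to learn the remote agent's skills and endpoint before delegating tasks to it.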


Deployment: Vertex AI Agent Engine

import vertexai
from vertexai.preview import reasoning_engines

vertexai.init(project="my-project", location="us-central1")

app = reasoning_engines.ReasoningEngine.create(
    reasoning_engines.AdkApp(agent=root_agent),
    requirements=["google-adk>=1.0.0"],
)

# Query the deployed agent
response = app.query(input="What's the latest on quantum computing?")

Agent Engine provides managed hosting, session persistence, logging, and horizontal scaling.


vs LangGraph

Dimension             | Google ADK                                 | LangGraph
----------------------|--------------------------------------------|---------------------------------------
Graph model           | Implicit (SequentialAgent, ParallelAgent)  | Explicit graph (nodes, edges, state)
A2A protocol          | Native                                     | Via adapter
Default model         | Gemini                                     | Any (model-agnostic)
Deployment            | Vertex AI Agent Engine                     | LangGraph Cloud (or self-host)
Checkpointing         | Via Vertex AI sessions                     | Built-in, pluggable
Framework flexibility | ADK-native tools                           | Any LangChain/custom tool
Best for              | Google Cloud AI stacks, A2A interop        | Fine-grained stateful workflows, HITL

Connections

Open Questions

  • When does ADK beat LangGraph? (ADK wins when you need A2A interop or are all-in on Google Cloud; LangGraph wins when you need fine-grained graph control or HITL interrupts)
  • How does ADK Python 2.0 Beta (workflows + agent teams) change the architecture? [unverified — monitor release notes]
  • Does ADK support non-Gemini models for production workloads without LiteLLM?