ACPAgent lets you use any Agent Client Protocol server as the backend for an OpenHands conversation. Instead of calling an LLM directly, the agent spawns an ACP server subprocess and communicates with it over JSON-RPC. The server manages its own LLM, tools, and execution — your code just sends messages and collects responses.

Basic Usage

from openhands.sdk.agent import ACPAgent
from openhands.sdk.conversation import Conversation

# Point at any ACP-compatible server
agent = ACPAgent(acp_command=["npx", "-y", "claude-code-acp"])

conversation = Conversation(agent=agent, workspace="./my-project")
conversation.send_message("Explain the architecture of this project.")
conversation.run()

agent.close()
The acp_command is the argv list used to spawn the server process. The SDK communicates with the subprocess over JSON-RPC on stdin/stdout.
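To make the transport concrete, here is a minimal, self-contained sketch of JSON-RPC over a subprocess's stdin/stdout. The "server" below is a stand-in that simply echoes the request method back; it is illustrative only, not the SDK's implementation, and the `session/prompt` method name is used here just as an example payload:

```python
import json
import subprocess
import sys

# Stand-in "server": reads one JSON-RPC request line and echoes the method back.
# (Illustrative only -- a real ACP server implements the full protocol.)
SERVER = (
    "import sys, json\n"
    "req = json.loads(sys.stdin.readline())\n"
    "resp = {'jsonrpc': '2.0', 'id': req['id'],"
    " 'result': {'echo': req['method']}}\n"
    "sys.stdout.write(json.dumps(resp) + '\\n')\n"
    "sys.stdout.flush()\n"
)

# Spawn the subprocess, wiring up stdin/stdout pipes -- the same shape of
# transport ACPAgent uses for the acp_command process.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# One newline-delimited JSON-RPC request out, one response back.
request = {"jsonrpc": "2.0", "id": 1, "method": "session/prompt", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
response = json.loads(proc.stdout.readline())
proc.wait()
print(response["result"])  # -> {'echo': 'session/prompt'}
```

The real protocol carries richer messages (prompts, tool-call notifications, usage updates), but the framing is the same: one process, two pipes, JSON-RPC in both directions.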
Key difference from standard agents: With ACPAgent, you don’t need an LLM_API_KEY in your code. The ACP server handles its own LLM authentication and API calls. This is delegation — your code sends messages to the ACP server, which manages all LLM interactions internally.

What ACPAgent Does Not Support

Because the ACP server manages its own tools and context, these AgentBase features are not available on ACPAgent:
  • tools / include_default_tools — the server has its own tools
  • mcp_config — configure MCP on the server side
  • condenser — the server manages its own context window
  • critic — the server manages its own evaluation
  • agent_context — configure the server directly
Passing any of these raises NotImplementedError at initialization.
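The guard can be pictured as a simple init-time check. This is a hypothetical sketch, not the SDK's actual internals; only the parameter names come from the list above:

```python
# Hypothetical sketch of an init-time guard (not the SDK's actual code).
UNSUPPORTED = ("tools", "mcp_config", "condenser", "critic", "agent_context")

def reject_unsupported(kwargs: dict) -> None:
    """Raise if any server-managed feature is configured on the client side."""
    for name in UNSUPPORTED:
        if kwargs.get(name) is not None:
            raise NotImplementedError(
                f"{name!r} is managed by the ACP server; "
                "configure it there instead of on ACPAgent"
            )

reject_unsupported({"acp_command": ["npx", "-y", "claude-code-acp"]})  # fine
try:
    reject_unsupported({"condenser": object()})
except NotImplementedError as err:
    print(err)
```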

How It Works

  • Subprocess delegation: ACPAgent spawns the ACP server and communicates via JSON-RPC over stdin/stdout
  • Server-managed execution: The ACP server handles its own LLM calls, tools, and context — your code just sends messages
  • Auto-approval: Permission requests from the server are automatically granted, so ensure you trust the ACP server you’re running
  • Metrics collection: Token usage and costs from the server are captured into the agent’s LLM.metrics

Configuration

Server Command and Arguments

agent = ACPAgent(
    acp_command=["npx", "-y", "claude-code-acp"],
    acp_args=["--profile", "my-profile"],      # extra CLI args
    acp_env={"CLAUDE_API_KEY": "sk-..."},       # extra env vars
)
  • acp_command — Command to start the ACP server (required)
  • acp_args — Additional arguments appended to the command
  • acp_env — Additional environment variables for the server process

Metrics

Token usage and cost data are automatically captured from the ACP server’s responses. You can inspect them through the standard LLM.metrics interface:
metrics = agent.llm.metrics
print(f"Total cost: ${metrics.accumulated_cost:.6f}")

for usage in metrics.token_usages:
    print(f"  prompt={usage.prompt_tokens}  completion={usage.completion_tokens}")
Usage data comes from two ACP protocol sources:
  • PromptResponse.usage — per-turn token counts (input, output, cached, reasoning tokens)
  • UsageUpdate notifications — cumulative session cost and context window size
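The roll-up behavior can be modeled with a small stand-in. This is a simplified sketch, not the SDK's actual Metrics class: per-turn usage appends to token_usages, while cost accumulates across turns, and the dollar amounts are made up for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TokenUsage:
    """Per-turn token counts (simplified stand-in)."""
    prompt_tokens: int
    completion_tokens: int

@dataclass
class Metrics:
    """Accumulates cost and per-turn usage, mirroring the roll-up described above."""
    accumulated_cost: float = 0.0
    token_usages: list = field(default_factory=list)

    def record_turn(self, usage: TokenUsage, cost: float) -> None:
        self.token_usages.append(usage)
        self.accumulated_cost += cost

metrics = Metrics()
metrics.record_turn(TokenUsage(prompt_tokens=1200, completion_tokens=350), cost=0.0042)
metrics.record_turn(TokenUsage(prompt_tokens=900, completion_tokens=210), cost=0.0031)

total_prompt = sum(u.prompt_tokens for u in metrics.token_usages)
print(f"Total cost: ${metrics.accumulated_cost:.4f}, prompt tokens: {total_prompt}")
```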

Cleanup

Always call agent.close() when you are done to terminate the ACP server subprocess. A try/finally block is recommended:
agent = ACPAgent(acp_command=["npx", "-y", "claude-code-acp"])
try:
    conversation = Conversation(agent=agent, workspace=".")
    conversation.send_message("Hello!")
    conversation.run()
finally:
    agent.close()
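Because cleanup only depends on close() existing, contextlib.closing can express the same try/finally pattern. A stand-in class is used here so the sketch stays self-contained; with the real SDK installed you would pass an ACPAgent instance instead:

```python
from contextlib import closing

class FakeAgent:
    """Stand-in with a close() method, mirroring ACPAgent's cleanup contract."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

# closing() guarantees close() runs even if the body raises.
with closing(FakeAgent()) as agent:
    pass  # send messages, run the conversation, etc.

print(agent.closed)  # True
```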

Ready-to-run Example

This example is available on GitHub: examples/01_standalone_sdk/40_acp_agent_example.py
"""Example: Using ACPAgent with Claude Code ACP server.

This example shows how to use an ACP-compatible server (claude-code-acp)
as the agent backend instead of direct LLM calls.  It also demonstrates
``ask_agent()`` — a stateless side-question that forks the ACP session
and leaves the main conversation untouched.

Prerequisites:
    - Node.js / npx available
    - Claude Code CLI authenticated (or CLAUDE_API_KEY set)

Usage:
    uv run python examples/01_standalone_sdk/40_acp_agent_example.py
"""

import os

from openhands.sdk.agent import ACPAgent
from openhands.sdk.conversation import Conversation


agent = ACPAgent(acp_command=["npx", "-y", "@zed-industries/claude-code-acp"])

try:
    cwd = os.getcwd()
    conversation = Conversation(agent=agent, workspace=cwd)

    # --- Main conversation turn ---
    conversation.send_message(
        "List the Python source files under openhands-sdk/openhands/sdk/agent/, "
        "then read the __init__.py and summarize what agent classes are exported."
    )
    conversation.run()

    # --- ask_agent: stateless side-question via fork_session ---
    print("\n--- ask_agent ---")
    response = conversation.ask_agent(
        "Based on what you just saw, which agent class is the newest addition?"
    )
    print(f"ask_agent response: {response}")
finally:
    # Clean up the ACP server subprocess
    agent.close()

print("Done!")
This example does not use an LLM API key directly — the ACP server (Claude Code) handles authentication on its own.
Running the Example
# Ensure Claude Code CLI is authenticated first
# (or set CLAUDE_API_KEY in your environment)
cd software-agent-sdk
uv run python examples/01_standalone_sdk/40_acp_agent_example.py

Remote Runtime Example

This example shows how to run an ACPAgent in a remote sandboxed environment via the Runtime API, using APIRemoteWorkspace:
examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py
"""Example: ACPAgent with Remote Runtime via API.

This example demonstrates running an ACPAgent (Claude Code via ACP protocol)
in a remote sandboxed environment via Runtime API. It follows the same pattern
as 04_convo_with_api_sandboxed_server.py but uses ACPAgent instead of the
default LLM-based Agent.

Usage:
  uv run examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py

Requirements:
  - LLM_BASE_URL: LiteLLM proxy URL (routes Claude Code requests)
  - LLM_API_KEY: LiteLLM virtual API key
  - RUNTIME_API_KEY: API key for runtime API access
"""

import os
import time

from openhands.sdk import (
    Conversation,
    RemoteConversation,
    get_logger,
)
from openhands.sdk.agent import ACPAgent
from openhands.workspace import APIRemoteWorkspace


logger = get_logger(__name__)


# ACP agents (Claude Code) route through LiteLLM proxy
llm_base_url = os.getenv("LLM_BASE_URL")
llm_api_key = os.getenv("LLM_API_KEY")
assert llm_base_url and llm_api_key, "LLM_BASE_URL and LLM_API_KEY required"

# Set ANTHROPIC_* vars so Claude Code routes through LiteLLM
os.environ["ANTHROPIC_BASE_URL"] = llm_base_url
os.environ["ANTHROPIC_API_KEY"] = llm_api_key

runtime_api_key = os.getenv("RUNTIME_API_KEY")
assert runtime_api_key, "RUNTIME_API_KEY required"

# If GITHUB_SHA is set (e.g. running in CI of a PR), use that to ensure consistency
# Otherwise, use the latest image from main
server_image_sha = os.getenv("GITHUB_SHA") or "main"
server_image = f"ghcr.io/openhands/agent-server:{server_image_sha[:7]}-python-amd64"
logger.info(f"Using server image: {server_image}")

with APIRemoteWorkspace(
    runtime_api_url=os.getenv("RUNTIME_API_URL", "https://runtime.eval.all-hands.dev"),
    runtime_api_key=runtime_api_key,
    server_image=server_image,
    image_pull_policy="Always",
    target_type="binary",  # CI builds binary target images
    forward_env=["ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY"],
) as workspace:
    agent = ACPAgent(
        acp_command=["claude-agent-acp"],  # Pre-installed in Docker image
    )

    received_events: list = []
    last_event_time = {"ts": time.time()}

    def event_callback(event) -> None:
        received_events.append(event)
        last_event_time["ts"] = time.time()

    conversation = Conversation(
        agent=agent, workspace=workspace, callbacks=[event_callback]
    )
    assert isinstance(conversation, RemoteConversation)

    try:
        conversation.send_message(
            "List the files in /workspace and describe what you see."
        )
        conversation.run()

        while time.time() - last_event_time["ts"] < 2.0:
            time.sleep(0.1)

        # Report cost
        cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
        print(f"EXAMPLE_COST: {cost:.4f}")
    finally:
        conversation.close()
Running the Example
export LLM_BASE_URL="https://your-litellm-proxy.example.com"
export LLM_API_KEY="your-litellm-api-key"
export RUNTIME_API_KEY="your-runtime-api-key"
export RUNTIME_API_URL="https://runtime.eval.all-hands.dev"
cd software-agent-sdk
uv run python examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py

Next Steps