ACPAgent lets you use any Agent Client Protocol server as the backend for an OpenHands conversation. Instead of calling an LLM directly, the agent spawns an ACP server subprocess and communicates with it over JSON-RPC. The server manages its own LLM, tools, and execution — your code just sends messages and collects responses.
```python
from openhands.sdk.agent import ACPAgent
from openhands.sdk.conversation import Conversation

# Point at any ACP-compatible server
agent = ACPAgent(acp_command=["npx", "-y", "claude-code-acp"])

conversation = Conversation(agent=agent, workspace="./my-project")
conversation.send_message("Explain the architecture of this project.")
conversation.run()
agent.close()
```
The `acp_command` is the shell command used to spawn the server process; the SDK then communicates with it over JSON-RPC on the process's stdin/stdout.
Key difference from standard agents: With ACPAgent, you don’t need an LLM_API_KEY in your code. The ACP server handles its own LLM authentication and API calls. This is delegation — your code sends messages to the ACP server, which manages all LLM interactions internally.
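To make the delegation model concrete, here is a minimal sketch of what a JSON-RPC exchange over stdio looks like. The SDK handles all of this internally; the method name and framing shown here are illustrative assumptions, not the ACP specification verbatim.

```python
import json


def make_request(method: str, params: dict, request_id: int) -> bytes:
    """Frame a JSON-RPC 2.0 request as one newline-delimited JSON line,
    ready to write to the server subprocess's stdin."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    return (json.dumps(msg) + "\n").encode("utf-8")


def parse_message(line: bytes) -> dict:
    """Decode one newline-delimited JSON-RPC message read from the
    server subprocess's stdout."""
    return json.loads(line.decode("utf-8"))


# Hypothetical initialize-style handshake message
req = make_request("initialize", {"protocolVersion": 1}, request_id=1)
print(req)
```

The key point is that the payloads are plain JSON objects on a pipe: your process never sees an LLM API call, only protocol messages to and from the ACP server.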
"""Example: Using ACPAgent with Claude Code ACP server.This example shows how to use an ACP-compatible server (claude-code-acp)as the agent backend instead of direct LLM calls. It also demonstrates``ask_agent()`` — a stateless side-question that forks the ACP sessionand leaves the main conversation untouched.Prerequisites: - Node.js / npx available - Claude Code CLI authenticated (or CLAUDE_API_KEY set)Usage: uv run python examples/01_standalone_sdk/40_acp_agent_example.py"""import osfrom openhands.sdk.agent import ACPAgentfrom openhands.sdk.conversation import Conversationagent = ACPAgent(acp_command=["npx", "-y", "@zed-industries/claude-code-acp"])try: cwd = os.getcwd() conversation = Conversation(agent=agent, workspace=cwd) # --- Main conversation turn --- conversation.send_message( "List the Python source files under openhands-sdk/openhands/sdk/agent/, " "then read the __init__.py and summarize what agent classes are exported." ) conversation.run() # --- ask_agent: stateless side-question via fork_session --- print("\n--- ask_agent ---") response = conversation.ask_agent( "Based on what you just saw, which agent class is the newest addition?" ) print(f"ask_agent response: {response}")finally: # Clean up the ACP server subprocess agent.close()print("Done!")
This example does not use an LLM API key directly — the ACP server (Claude Code) handles authentication on its own.
Running the Example
```bash
# Ensure Claude Code CLI is authenticated first
# (or set CLAUDE_API_KEY in your environment)
cd software-agent-sdk
uv run python examples/01_standalone_sdk/40_acp_agent_example.py
```
"""Example: ACPAgent with Remote Runtime via API.This example demonstrates running an ACPAgent (Claude Code via ACP protocol)in a remote sandboxed environment via Runtime API. It follows the same patternas 04_convo_with_api_sandboxed_server.py but uses ACPAgent instead of thedefault LLM-based Agent.Usage: uv run examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.pyRequirements: - LLM_BASE_URL: LiteLLM proxy URL (routes Claude Code requests) - LLM_API_KEY: LiteLLM virtual API key - RUNTIME_API_KEY: API key for runtime API access"""import osimport timefrom openhands.sdk import ( Conversation, RemoteConversation, get_logger,)from openhands.sdk.agent import ACPAgentfrom openhands.workspace import APIRemoteWorkspacelogger = get_logger(__name__)# ACP agents (Claude Code) route through LiteLLM proxyllm_base_url = os.getenv("LLM_BASE_URL")llm_api_key = os.getenv("LLM_API_KEY")assert llm_base_url and llm_api_key, "LLM_BASE_URL and LLM_API_KEY required"# Set ANTHROPIC_* vars so Claude Code routes through LiteLLMos.environ["ANTHROPIC_BASE_URL"] = llm_base_urlos.environ["ANTHROPIC_API_KEY"] = llm_api_keyruntime_api_key = os.getenv("RUNTIME_API_KEY")assert runtime_api_key, "RUNTIME_API_KEY required"# If GITHUB_SHA is set (e.g. 
running in CI of a PR), use that to ensure consistency# Otherwise, use the latest image from mainserver_image_sha = os.getenv("GITHUB_SHA") or "main"server_image = f"ghcr.io/openhands/agent-server:{server_image_sha[:7]}-python-amd64"logger.info(f"Using server image: {server_image}")with APIRemoteWorkspace( runtime_api_url=os.getenv("RUNTIME_API_URL", "https://runtime.eval.all-hands.dev"), runtime_api_key=runtime_api_key, server_image=server_image, image_pull_policy="Always", target_type="binary", # CI builds binary target images forward_env=["ANTHROPIC_BASE_URL", "ANTHROPIC_API_KEY"],) as workspace: agent = ACPAgent( acp_command=["claude-agent-acp"], # Pre-installed in Docker image ) received_events: list = [] last_event_time = {"ts": time.time()} def event_callback(event) -> None: received_events.append(event) last_event_time["ts"] = time.time() conversation = Conversation( agent=agent, workspace=workspace, callbacks=[event_callback] ) assert isinstance(conversation, RemoteConversation) try: conversation.send_message( "List the files in /workspace and describe what you see." ) conversation.run() while time.time() - last_event_time["ts"] < 2.0: time.sleep(0.1) # Report cost cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost print(f"EXAMPLE_COST: {cost:.4f}") finally: conversation.close()
Running the Example
```bash
export LLM_BASE_URL="https://your-litellm-proxy.example.com"
export LLM_API_KEY="your-litellm-api-key"
export RUNTIME_API_KEY="your-runtime-api-key"
export RUNTIME_API_URL="https://runtime.eval.all-hands.dev"
cd software-agent-sdk
uv run python examples/02_remote_agent_server/09_acp_agent_with_remote_runtime.py
```