A ready-to-run example is available below.

The LLMProfileStore class provides a centralized mechanism for managing LLM configurations. Define a profile once, then reuse it everywhere: across scripts, sessions, and even machines.
## Benefits
- Persistence: Saves model parameters (API keys, temperature, max tokens, …) to a stable disk format.
- Reusability: Import a defined profile into any script or session with a single identifier.
- Portability: Simplifies the synchronization of model configurations across different machines or deployment environments.
## How It Works

### Create a Store
The store manages a directory of JSON profile files. By default it uses ~/.openhands/profiles, but you can point it anywhere.
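A minimal sketch, assuming LLMProfileStore is importable from the SDK package and that the constructor takes a directory path; the linked example below shows the exact signature.

```python
from openhands.sdk import LLMProfileStore  # import path is an assumption

# Default location: ~/.openhands/profiles
store = LLMProfileStore()

# Or keep profiles somewhere else entirely (constructor argument is illustrative).
store = LLMProfileStore("/srv/shared/llm-profiles")
```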
### Save a Profile

Got an LLM configured just right? Save it for later. API keys are excluded by default for security. Pass include_secrets=True to the save method if you wish to persist them; otherwise, they will be read from the environment at load time.
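A sketch of saving and reloading a profile, reusing the store from the previous snippet. The save method and its include_secrets keyword come from the description above; the load method name and the LLM constructor details are assumptions, so check the linked example for the exact API.

```python
import os

from pydantic import SecretStr
from openhands.sdk import LLM

# Model names follow the LiteLLM provider/model_name convention.
llm = LLM(
    model="anthropic/claude-sonnet-4-5-20250929",
    api_key=SecretStr(os.environ["LLM_API_KEY"]),
)

# Profile names must be simple filenames (no slashes, no leading dots).
# By default the API key is NOT written to disk; it is re-read from the
# environment when the profile is loaded.
store.save("sonnet-default", llm)

# Opt in to persisting secrets only if the profiles directory is trusted.
store.save("sonnet-with-key", llm, include_secrets=True)

# Later, in any other script or session:
llm = store.load("sonnet-default")  # method name is an assumption
```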
## Good to Know

Profile names must be simple filenames (no slashes, no dots at the start).

## Ready-to-run Example
This example is available on GitHub: examples/01_standalone_sdk/37_llm_profile_store.py
The model name should follow the LiteLLM convention:
provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o).
The LLM_API_KEY should be the API key for your chosen provider.

## Mid-Conversation Model Switching
You can use a saved profile to switch the active model on a running conversation between turns. This is useful when you want to start with one model, then switch to another for later user messages while keeping the same conversation history and combined usage metrics.

This example is available on GitHub: examples/01_standalone_sdk/44_model_switching_in_convo.py
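Below is a sketch of the flow, reusing the store from the earlier snippets and assuming the SDK's usual Agent/Conversation setup; store.load and set_active_llm are hypothetical stand-ins for the loading and switching calls described above, so consult the linked example for the actual API.

```python
import os

from pydantic import SecretStr
from openhands.sdk import LLM, Agent, Conversation

# Start the conversation on an initial model.
llm = LLM(
    model="anthropic/claude-sonnet-4-5-20250929",
    api_key=SecretStr(os.environ["LLM_API_KEY"]),
)
conversation = Conversation(agent=Agent(llm=llm, tools=[]))

conversation.send_message("Summarize the README in this repo.")
conversation.run()

# Between turns, load a saved profile and make it the active model.
openai_llm = store.load("gpt-4o-profile")  # profile name is illustrative
conversation.set_active_llm(openai_llm)    # hypothetical method name

# Later turns run on the new model; history and usage metrics carry over.
conversation.send_message("Now write unit tests for the parser module.")
conversation.run()
```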
The model name should follow the LiteLLM convention:
provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o).
The LLM_API_KEY should be the API key for your chosen provider.

## Next Steps
- LLM Registry - Manage multiple LLMs in memory at runtime
- LLM Routing - Automatically route to different models
- Exception Handling - Handle LLM errors gracefully

