1 change: 1 addition & 0 deletions docs.json
@@ -65,6 +65,7 @@
"group": "LLM Configuration",
"pages": [
"openhands/usage/llms/llms",
"openhands/usage/llms/supported-models",
{
"group": "Providers",
"pages": [
305 changes: 305 additions & 0 deletions openhands/usage/llms/supported-models.mdx
@@ -0,0 +1,305 @@
---
title: Supported Models
description: Complete list of all language models supported by OpenHands, including verified and unverified models from various providers.
---

<Note>
This documentation is automatically synchronized with the [OpenHands Software Agent SDK](https://github.com/OpenHands/software-agent-sdk) to ensure accuracy and completeness.
</Note>

OpenHands supports a wide range of language models through [LiteLLM](https://docs.litellm.ai/docs/providers), giving access to more than 100 providers and models. This page lists all supported models, grouped by verification status and provider.

## Model Categories

### Verified Models

These models have been thoroughly tested with OpenHands agents, are actively maintained, and are recommended for production use.

#### OpenHands Hosted Models

OpenHands provides hosted access to premium models through our API:

- `claude-sonnet-4-5-20250929` (recommended)
- `claude-haiku-4-5-20251001`
- `gpt-5-codex` (recommended)
- `gpt-5-2025-08-07` (recommended)
- `gpt-5-mini-2025-08-07`
- `claude-sonnet-4-20250514`
- `claude-opus-4-20250514`
- `claude-opus-4-1-20250805`
- `devstral-small-2507`
- `devstral-medium-2507`
- `o3`
- `o4-mini`
- `gemini-2.5-pro`
- `kimi-k2-0711-preview`
- `qwen3-coder-480b`

<Note>
For OpenHands hosted models, see the [OpenHands LLMs guide](/openhands/usage/llms/openhands-llms) for setup instructions.
</Note>

#### OpenAI Models

Verified OpenAI models that work well with OpenHands:

- `gpt-5-codex` (recommended for coding tasks)
- `gpt-5-2025-08-07` (recommended)
- `gpt-5-mini-2025-08-07`
- `o4-mini`
- `gpt-4o`
- `gpt-4o-mini`
- `gpt-4-32k`
- `gpt-4.1`
- `gpt-4.1-2025-04-14`
- `o1-mini`
- `o3`
- `codex-mini-latest`

#### Anthropic Models

Verified Anthropic Claude models:

- `claude-sonnet-4-5-20250929` (recommended)
- `claude-haiku-4-5-20251001`
- `claude-sonnet-4-20250514` (recommended)
- `claude-opus-4-20250514`
- `claude-opus-4-1-20250805`
- `claude-3-7-sonnet-20250219`
- `claude-3-sonnet-20240229`
- `claude-3-opus-20240229`
- `claude-3-haiku-20240307`
- `claude-3-5-haiku-20241022`
- `claude-3-5-sonnet-20241022`
- `claude-3-5-sonnet-20240620`

#### Mistral Models

Verified Mistral AI models optimized for coding:

- `devstral-small-2505`
- `devstral-small-2507` (recommended)
- `devstral-medium-2507` (recommended)

### Unverified Models

OpenHands supports hundreds of additional models through LiteLLM that haven't been specifically verified but may work well. These include models from:

- **Google**: Gemini Pro, Gemini Flash, PaLM models
- **AWS Bedrock**: Claude, Titan, Jurassic models
- **Azure OpenAI**: All OpenAI models via Azure
- **Cohere**: Command models
- **AI21**: Jurassic models
- **Hugging Face**: Open source models
- **Local providers**: Ollama, vLLM, SGLang, LM Studio
- **Other providers**: Groq, Together AI, Replicate, and more

<Warning>
Unverified models may have varying levels of compatibility with OpenHands. Performance and reliability may differ significantly from verified models.
</Warning>

## Model Features

Different models support different features that enhance the OpenHands experience:

### Reasoning Models

Models that support enhanced reasoning capabilities:

**Reasoning Effort Support:**
- OpenAI o-series: `o1`, `o3`, `o3-mini`, `o4-mini`
- OpenAI GPT-5 family: `gpt-5`, `gpt-5-mini`, `gpt-5-codex`
- Google Gemini: `gemini-2.5-flash`, `gemini-2.5-pro`

**Extended Thinking Support:**
- Anthropic Claude 4.5: `claude-sonnet-4-5`, `claude-haiku-4-5`
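
For models that accept a reasoning-effort setting, the effort level can typically be passed when constructing the `LLM`. The following is a minimal sketch; it assumes the SDK's `LLM` config exposes a `reasoning_effort` field (check the SDK reference for the exact field name and accepted values):

```python
from openhands.sdk import LLM
from pydantic import SecretStr

# Sketch: reasoning_effort is assumed to be an LLM config field;
# consult the SDK reference for the exact name and allowed values.
llm = LLM(
    model="o4-mini",
    api_key=SecretStr("your-openai-api-key"),
    reasoning_effort="high",  # e.g. "low" | "medium" | "high"
)
```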

### Vision Models

Many models support image input for visual reasoning tasks. Vision support is automatically detected through LiteLLM's model information system.

### Tool Calling

Most modern models support native tool calling, which is essential for OpenHands agents. The SDK automatically detects and configures tool calling capabilities.

### Prompt Caching

Models that support prompt caching for improved performance and cost efficiency:

- Anthropic Claude 3.5 and 4.x series
- Claude 3 Haiku and Opus (specific versions)
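
Vision, tool calling, and prompt caching support described above can be queried directly from LiteLLM's model metadata. A small sketch using LiteLLM's capability helpers (`supports_prompt_caching` is available in recent LiteLLM releases):

```python
import litellm

# Query LiteLLM's model metadata for feature support.
for model in ["gpt-4o", "anthropic/claude-3-5-sonnet-20241022"]:
    print(model)
    print("  vision:", litellm.supports_vision(model=model))
    print("  tool calling:", litellm.supports_function_calling(model=model))
    print("  prompt caching:", litellm.supports_prompt_caching(model=model))
```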

## Provider Configuration

### Using Verified Models

Verified models can be used directly by their model names:

```python
from openhands.sdk import LLM
from pydantic import SecretStr

# OpenHands hosted model
llm = LLM(
    model="claude-sonnet-4-5-20250929",
    api_key=SecretStr("your-openhands-api-key"),
    base_url="https://api.all-hands.dev/v1"
)

# OpenAI model
llm = LLM(
    model="gpt-5-codex",
    api_key=SecretStr("your-openai-api-key")
)

# Anthropic model
llm = LLM(
    model="anthropic/claude-sonnet-4-5-20250929",
    api_key=SecretStr("your-anthropic-api-key")
)
```

### Using Unverified Models

For unverified models, use the provider prefix format:

```python
from openhands.sdk import LLM
from pydantic import SecretStr

# Google Gemini
llm = LLM(
    model="gemini/gemini-pro",
    api_key=SecretStr("your-google-api-key")
)

# AWS Bedrock
llm = LLM(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    aws_access_key_id=SecretStr("your-access-key"),
    aws_secret_access_key=SecretStr("your-secret-key"),
    aws_region_name="us-east-1"
)

# Local model via Ollama
llm = LLM(
    model="ollama/llama2",
    base_url="http://localhost:11434"
)
```

## Model Selection Guidelines

### For Production Use

**Recommended models for production environments:**

1. **OpenHands Hosted**: `claude-sonnet-4-5-20250929` or `gpt-5-codex`
2. **Self-hosted**: `claude-sonnet-4-20250514` or `gpt-5-2025-08-07`
3. **Cost-effective**: `claude-haiku-4-5-20251001` or `gpt-5-mini-2025-08-07`

### For Development and Testing

**Good options for development:**

1. **Fast and affordable**: `gpt-4o-mini` or `claude-3-5-haiku-20241022`
2. **Local development**: `devstral-small-2507` via Ollama
3. **Experimentation**: Any unverified model that fits your use case

### For Specialized Tasks

**Coding-focused tasks:**
- `gpt-5-codex` (OpenAI)
- `devstral-medium-2507` (Mistral)
- `claude-sonnet-4-5-20250929` (Anthropic)

**Reasoning-heavy tasks:**
- `o3` or `o4-mini` (OpenAI)
- `gemini-2.5-pro` (Google)

**Vision tasks:**
- `gpt-4o` (OpenAI)
- `claude-sonnet-4-5-20250929` (Anthropic)
- `gemini-2.5-pro` (Google)

## Cost Considerations

Model costs vary significantly across providers and models. Consider these factors:

- **Input/Output token costs**: Larger models typically cost more per token
- **Prompt caching**: Can reduce costs for repeated prompts
- **Hosted vs. self-hosted**: OpenHands hosted models may offer better value
- **Usage patterns**: High-volume usage may benefit from dedicated instances

<Note>
For detailed pricing information, consult each provider's pricing documentation. OpenHands automatically tracks token usage and costs for all models.
</Note>

## Getting Model Lists Programmatically

You can retrieve supported models programmatically using the SDK:

```python
from openhands.sdk.llm.utils.verified_models import VERIFIED_MODELS
from openhands.sdk.llm.utils.unverified_models import get_unverified_models

# Get verified models by provider
print("Verified OpenAI models:", VERIFIED_MODELS["openai"])
print("Verified Anthropic models:", VERIFIED_MODELS["anthropic"])

# Get all unverified models
unverified = get_unverified_models()
print("Unverified models by provider:", unverified.keys())
```

## Provider-Specific Guides

For detailed setup instructions for specific providers, see:

- [OpenHands Hosted Models](/openhands/usage/llms/openhands-llms)
- [OpenAI](/openhands/usage/llms/openai-llms)
- [Anthropic (via OpenRouter)](/openhands/usage/llms/openrouter)
- [Azure OpenAI](/openhands/usage/llms/azure-llms)
- [Google Gemini](/openhands/usage/llms/google-llms)
- [Groq](/openhands/usage/llms/groq)
- [Local Models](/openhands/usage/llms/local-llms)
- [LiteLLM Proxy](/openhands/usage/llms/litellm-proxy)
- [Custom Configurations](/openhands/usage/llms/custom-llm-configs)

## Troubleshooting

### Model Not Found

If you encounter "model not found" errors:

1. Check the model name spelling and provider prefix
2. Verify your API credentials are correct
3. Ensure the model is available in your region/account
4. Try using the full provider/model format (e.g., `openai/gpt-4o`), as in the sketch below
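
As an example of point 4, the explicit provider prefix removes ambiguity about which backend serves the model (a sketch using the same placeholder-key style as the examples above):

```python
from openhands.sdk import LLM
from pydantic import SecretStr

# Full provider/model format: "openai/" prefix routes the request to OpenAI.
llm = LLM(
    model="openai/gpt-4o",
    api_key=SecretStr("your-openai-api-key")
)
```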

### Performance Issues

If a model performs poorly:

1. Try a verified model from the recommended list
2. Check if the model supports the features you need (tool calling, vision, etc.)
3. Adjust model parameters such as `temperature` and `max_output_tokens` (see the sketch after this list)
4. Consider using a more powerful model for complex tasks
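
For step 3, a minimal sketch of passing sampling and output limits when constructing the LLM; it assumes the config accepts `temperature` and `max_output_tokens`, as referenced above (check the SDK reference for the exact field names):

```python
from openhands.sdk import LLM
from pydantic import SecretStr

# Assumed config fields: temperature and max_output_tokens (see SDK reference).
llm = LLM(
    model="claude-sonnet-4-5-20250929",
    api_key=SecretStr("your-openhands-api-key"),
    base_url="https://api.all-hands.dev/v1",
    temperature=0.0,          # more deterministic output for agent tasks
    max_output_tokens=8192,   # cap response length to control cost
)
```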

### Cost Management

To manage costs effectively:

1. Use smaller models for simple tasks
2. Enable prompt caching where supported
3. Set appropriate token limits
4. Monitor usage through OpenHands telemetry
5. Consider OpenHands hosted models for predictable pricing

## Contributing

If you've successfully tested a model not in the verified list, please consider contributing:

1. Test the model thoroughly with various OpenHands tasks
2. Document any special configuration requirements
3. Submit a pull request to add it to the verified models list
4. Share your experience in the OpenHands community

For the most up-to-date model support information, always refer to the [Software Agent SDK repository](https://github.com/OpenHands/software-agent-sdk/tree/main/openhands-sdk/openhands/sdk/llm/utils).