
[BUG] LiteLLMModel does not properly use custom proxy server settings #661

@Razeral

Description


Checks

  • I have updated to the latest minor and patch version of Strands
  • I have checked the documentation and this is not expected behavior
  • I have searched the existing issues and there are no duplicates of my issue

Strands Version

1.4.0

Python Version

3.12.11

Operating System

macOS 15.5

Installation Method

pip

Steps to Reproduce

from strands import Agent
from strands.models.litellm import LiteLLMModel

# Create a LiteLLM model
litellm_model = LiteLLMModel(
    client_args={
        "api_key": "<key>",
    },
    model_id="gemini-2.5-pro",
    params={},
)

# Create an agent with default settings
agent = Agent(
    model=litellm_model,
    system_prompt="""
    You are a helpful and friendly assistant
    """,
    tools=[],
)

query = """
Hello, what are the latest 5 news? Please list any errors encountered as well as your thinking process.
"""
response = agent(query)

Expected Behavior

Response is obtained from the LLM via custom routing through the LiteLLM Proxy Server.

Actual Behavior

Using a custom LiteLLM proxy server does not work as intended unless additional parameters are specified both in LiteLLMModel and in the underlying litellm package.

LiteLLMModel connects to the default provider and ignores the custom proxy server settings, so the request fails when only the LiteLLM proxy API key (e.g. sk--xxxxx) is specified. For example, when trying to access Gemini models it looks for Google Application Default Credentials (ADC) instead of calling the proxy.
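
For illustration only (not from the original report), the same behavior can be reproduced with litellm directly: with just the proxy key and no api_base, litellm resolves "gemini-2.5-pro" to Google's Gemini provider and looks for Google credentials rather than calling the proxy. A minimal sketch with placeholder values:

import litellm

# Only the LiteLLM proxy key is supplied, no proxy base URL.
# litellm resolves "gemini-2.5-pro" to the Google Gemini provider and
# looks for Google credentials (ADC), so the call fails instead of
# being sent to the proxy.
response = litellm.completion(
    model="gemini-2.5-pro",
    api_key="<key>",
    messages=[{"role": "user", "content": "Hello"}],
)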


Additional Context

No response

Possible Solution

base_url needs to be specified in the LiteLLMModel client_args; this is not indicated in the docs.

# Create a LiteLLM model for OpenAI
litellm_model = LiteLLMModel(
    client_args={
        "api_key": "<key>",
        "base_url": "<Proxy_Server_URL>",
    },
    model_id="gemini-2.5-pro",
    params={},
)

In addition, a flag needs to be set on the underlying litellm package:

import litellm
litellm.use_litellm_proxy = True  # Enable LiteLLM proxy usage

With both settings in place, LiteLLMModel correctly routes requests through the custom LiteLLM Proxy Server.
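
Putting both workarounds together, a complete sketch (the API key and proxy URL are placeholders):

import litellm
from strands import Agent
from strands.models.litellm import LiteLLMModel

# Route litellm calls through the LiteLLM Proxy Server
litellm.use_litellm_proxy = True

litellm_model = LiteLLMModel(
    client_args={
        "api_key": "<key>",                # LiteLLM proxy key
        "base_url": "<Proxy_Server_URL>",  # e.g. http://localhost:4000
    },
    model_id="gemini-2.5-pro",
    params={},
)

agent = Agent(
    model=litellm_model,
    system_prompt="You are a helpful and friendly assistant",
    tools=[],
)

response = agent("Hello, what are the latest 5 news?")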

Related Issues

No response

Labels
area-provider (Related to model providers), bug (Something isn't working)
