
Commit 7ab894a

Merge branch 'main' into openrouter-support
2 parents 6867de4 + d709147

28 files changed (+1015, -148 lines)

.github/workflows/cicd.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -109,7 +109,7 @@ jobs:
           rm -rf spec/fixtures/vcr_cassettes
 
           echo "Running tests with real API calls..."
-          bundle exec rspec
+          env -u CI bundle exec rspec
         env:
           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
           ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```
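The `env -u CI` prefix runs RSpec with the `CI` environment variable unset. Since the job first deletes `spec/fixtures/vcr_cassettes`, the suite has to re-record every HTTP interaction against the live APIs using the keys in `env:`. A plausible reading is that VCR's record mode is keyed off `CI`; the configuration below is a hypothetical sketch of that pattern, not the repository's actual spec_helper.

```ruby
# Hypothetical spec/spec_helper.rb excerpt. With CI set, only existing
# cassettes are replayed; with CI unset (`env -u CI bundle exec rspec`),
# missing interactions are recorded against the real APIs.
require 'vcr'

VCR.configure do |config|
  config.cassette_library_dir = 'spec/fixtures/vcr_cassettes'
  config.hook_into :webmock
  config.default_cassette_options = {
    record: ENV['CI'] ? :none : :new_episodes
  }
end
```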

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -9,6 +9,7 @@
 /test/tmp/
 /test/version_tmp/
 /tmp/
+.rspec_status
 
 # Used by dotenv library to load environment variables.
 .env
```

.overcommit.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -8,7 +8,7 @@ PreCommit:
     enabled: true
 
   RSpec:
-    enabled: false
+    enabled: true
     command: ['bundle', 'exec', 'rspec']
    on_warn: fail
 
```
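Enabling the RSpec pre-commit hook makes overcommit run the spec suite before every commit, and `on_warn: fail` escalates any warnings the hook emits into failures that block the commit.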

.rspec_status

Lines changed: 0 additions & 50 deletions

This file was deleted (and is now ignored via the .gitignore change above).

CONTRIBUTING.md

Lines changed: 84 additions & 2 deletions

````diff
@@ -8,7 +8,7 @@ Here's how to get started:
 
 ```bash
 # Clone the repository
-git clone https://github.com/crmne/ruby_llm.git
+gh repo clone crmne/ruby_llm
 cd ruby_llm
 
 # Install dependencies
````
````diff
@@ -29,7 +29,9 @@ We recommend using GitHub CLI to simplify the workflow:
 # Create a new branch for your feature
 gh repo fork crmne/ruby_llm --clone
 cd ruby_llm
-git checkout -b my-new-feature
+
+# Find or make an issue for the feature on GitHub and then:
+gh issue develop 123 --checkout # Substitute 123 with the issue number
 
 # Make your changes and test them
 # ...
````
````diff
@@ -41,6 +43,86 @@ git commit
 gh pr create --web
 ```
 
+## Model Naming Convention & Provider Strategy
+
+When adding new providers to RubyLLM, please follow these guidelines:
+
+### Normalized Model IDs
+
+We use a consistent approach separating **what** (model) from **where** (provider):
+
+```ruby
+# Default way (from the native provider)
+chat = RubyLLM.chat(model: "claude-3-5-sonnet")
+
+# Same model via a different provider
+chat = RubyLLM.chat(model: "claude-3-5-sonnet", provider: :bedrock)
+```
+
+### Implementing a Provider
+
+If you're adding a new provider:
+
+1. **Use normalized model IDs** - Don't include provider prefixes in the model ID itself
+2. **Add provider mapping** - Map the normalized IDs to your provider's specific format internally
+3. **Preserve capabilities** - Ensure models accessed through your provider report the same capabilities as their native counterparts
+4. **Update models.json** - Include your provider's models in models.json
+5. **Update aliases.json** - Add entries to aliases.json for models accessible through your provider
+6. **Implement refresh mechanism** - Ensure your provider supports the `list_models` method for refreshing
+
+### Model Aliases
+
+For providers that use complex model identifiers (like Bedrock's `anthropic.claude-3-5-sonnet-20241022-v2:0:200k`), add mappings to the global aliases.json file:
+
+```json
+{
+  "claude-3-5-sonnet": {
+    "anthropic": "claude-3-5-sonnet-20241022",
+    "bedrock": "anthropic.claude-3-5-sonnet-20241022-v2:0:200k",
+    "openrouter": "anthropic/claude-3.5-sonnet"
+  },
+  "gpt-4o": {
+    "openai": "gpt-4o-2024-05-13",
+    "bedrock": "anthropic.gpt-4o-2024-05-13",
+    "openrouter": "openai/gpt-4o"
+  }
+}
+```
+
+If a model can't be found with the provided ID and provider, a `ModelNotFoundError` is raised with an informative message. Your implementation should make this error helpful by suggesting available alternatives.
+
+When the same model has multiple versions and context windows, e.g.
+
+```
+anthropic.claude-3-5-sonnet-20240620-v1:0
+anthropic.claude-3-5-sonnet-20240620-v1:0:18k
+anthropic.claude-3-5-sonnet-20240620-v1:0:200k
+anthropic.claude-3-5-sonnet-20240620-v1:0:51k
+anthropic.claude-3-5-sonnet-20241022-v2:0
+anthropic.claude-3-5-sonnet-20241022-v2:0:18k
+anthropic.claude-3-5-sonnet-20241022-v2:0:200k
+anthropic.claude-3-5-sonnet-20241022-v2:0:51k
+```
+
+we default all aliases to the biggest context window, and the main alias (without a date) to the latest version:
+
+```json
+"claude-3-5-sonnet": {
+  "anthropic": "claude-3-5-sonnet-20241022",
+  "bedrock": "anthropic.claude-3-5-sonnet-20241022-v2:0:200k",
+  "openrouter": "anthropic/claude-3.5-sonnet"
+},
+"claude-3-5-sonnet-20241022": {
+  "anthropic": "claude-3-5-sonnet-20241022",
+  "bedrock": "anthropic.claude-3-5-sonnet-20241022-v2:0:200k",
+  "openrouter": "anthropic/claude-3.5-sonnet"
+},
+"claude-3-5-sonnet-20240620": {
+  "anthropic": "claude-3-5-sonnet-20240620",
+  "bedrock": "anthropic.claude-3-5-sonnet-20240620-v1:0:200k"
+}
+```
+
 ## Running Tests
 
 Tests automatically use VCR to record and replay HTTP interactions, so you don't need real API keys for testing:
````
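To make step 6 of "Implementing a Provider" concrete, here is a minimal sketch of a provider module that lists models and normalizes IDs. Everything in it (the module layout, method names, and response handling) is an assumed illustration rather than the gem's actual internals; only the OpenRouter `/api/v1/models` endpoint shape (a `data` array with `id` and `context_length`) reflects that API's documented format.

```ruby
# Illustrative provider skeleton; all names are hypothetical.
module RubyLLM
  module Providers
    module OpenRouter
      # CONTRIBUTING.md asks each provider to support list_models so
      # the registry (models.json) can be refreshed.
      def self.list_models(connection)
        response = connection.get('/api/v1/models')
        response.body['data'].map do |model|
          {
            id: normalize_id(model['id']), # normalized, no provider prefix
            provider: 'openrouter',
            context_window: model['context_length']
          }
        end
      end

      # "anthropic/claude-3.5-sonnet" => "claude-3-5-sonnet"
      def self.normalize_id(provider_id)
        provider_id.split('/').last.tr('.', '-')
      end
    end
  end
end
```

The normalization shown matches the aliases.json entries above, where the OpenRouter ID `anthropic/claude-3.5-sonnet` maps to the normalized `claude-3-5-sonnet`.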

README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -12,7 +12,7 @@ A delightful Ruby way to work with AI. No configuration madness, no complex call
   <img src="https://upload.wikimedia.org/wikipedia/commons/e/ec/DeepSeek_logo.svg" alt="DeepSeek" height="40" width="120">
 </div>
 
-<a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg?dummy=unused" alt="Gem Version" /></a>
+<a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg" alt="Gem Version" /></a>
 <a href="https://github.com/testdouble/standard"><img src="https://img.shields.io/badge/code_style-standard-brightgreen.svg" alt="Ruby Style Guide" /></a>
 <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
 <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
```

codecov.yml

Lines changed: 8 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,8 @@
+coverage:
+  status:
+    project:
+      default:
+        informational: true
+    patch:
+      default:
+        informational: true
```
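With `informational: true`, Codecov's project and patch status checks report coverage results but never fail the build, so coverage stays visible on pull requests without blocking merges.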

docs/_config.yml

Lines changed: 12 additions & 0 deletions

```diff
@@ -40,6 +40,18 @@ color_scheme: light
 ga_tracking:
 ga_tracking_anonymize_ip: true
 
+# Callouts
+callouts:
+  new:
+    title: New
+    color: green
+  warning:
+    title: Warning
+    color: yellow
+  note:
+    title: Note
+    color: blue
+
 # Custom plugins (GitHub Pages allows these)
 plugins:
   - jekyll-remote-theme
```
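These callout definitions belong to the callout feature of what is presumably the just-the-docs theme (loaded via jekyll-remote-theme); the `{: .warning-title }` blocks added to docs/guides/chat.md and docs/guides/models.md below depend on the `warning` entry defined here.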

docs/guides/chat.md

Lines changed: 30 additions & 1 deletion

````diff
@@ -43,6 +43,33 @@ claude_chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
 chat.with_model('gemini-2.0-flash')
 ```
 
+{: .warning-title }
+> Coming in v1.1.0
+>
+> The following model aliases and provider selection features are available in the upcoming version.
+
+RubyLLM supports model aliases, so you don't need to remember specific version numbers:
+
+```ruby
+# Instead of this:
+chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
+
+# You can simply write:
+chat = RubyLLM.chat(model: 'claude-3-5-sonnet')
+```
+
+You can also specify a particular provider to use with a model:
+
+```ruby
+# Use a specific provider (when the same model is available from multiple providers)
+chat = RubyLLM.chat(model: 'claude-3-5-sonnet', provider: 'bedrock')
+
+# Or set the provider after initialization
+chat = RubyLLM.chat(model: 'gpt-4o')
+  .with_provider('azure')
+```
+
+See [Working with Models]({% link guides/models.md %}) for more details on model selection.
 ## System Prompts
 
 System prompts allow you to set specific instructions or context that guide the AI's behavior throughout the conversation. These prompts are not directly visible to the user but help shape the AI's responses:
````

````diff
@@ -67,6 +94,8 @@ chat.add_message role: :system, content: "Always format your code using proper R
 # - Creating specialized assistants
 ```
 
+If you want to set up system prompts with persistence, please refer to the [Rails integration guide]({% link guides/rails.md %}#using-system-messages).
+
 ## Multi-turn Conversations
 
 Chats maintain conversation history automatically:
````

````diff
@@ -275,4 +304,4 @@ Now that you understand chat basics, you might want to explore:
 
 - [Using Tools]({% link guides/tools.md %}) to let AI use your Ruby code
 - [Streaming Responses]({% link guides/streaming.md %}) for real-time interactions
-- [Rails Integration]({% link guides/rails.md %}) to persist conversations in your apps
+- [Rails Integration]({% link guides/rails.md %}) to persist conversations in your apps
````

The last hunk leaves the text unchanged; it evidently just adds the missing newline at the end of the file.
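Note that the two-line `.with_provider` example relies on Ruby's leading-dot continuation: a line beginning with `.` continues the previous expression, so `chat` receives the result of the whole chain.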

docs/guides/models.md

Lines changed: 30 additions & 0 deletions

````diff
@@ -63,6 +63,36 @@ google_models = RubyLLM.models.by_provider('gemini')
 deepseek_models = RubyLLM.models.by_provider('deepseek')
 ```
 
+## Using Model Aliases
+
+{: .warning-title }
+> Coming in v1.1.0
+>
+> This feature is available in the upcoming version but not in the latest release.
+
+RubyLLM provides convenient aliases for popular models, so you don't have to remember specific version numbers:
+
+```ruby
+# These are equivalent
+chat = RubyLLM.chat(model: 'claude-3-5-sonnet')
+chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
+
+# These are also equivalent
+chat = RubyLLM.chat(model: 'gpt-4o')
+chat = RubyLLM.chat(model: 'gpt-4o-2024-11-20')
+```
+
+You can also specify a different provider to use with a model:
+
+```ruby
+# Use a specific model via a different provider
+chat = RubyLLM.chat(model: 'claude-3-5-sonnet', provider: 'bedrock')
+
+# Or set the provider after initialization
+chat = RubyLLM.chat(model: 'gpt-4o')
+  .with_provider('azure')
+```
+
 ## Chaining Filters
 
 You can chain multiple filters to find exactly what you need:
````
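Connecting this back to the aliases.json format introduced in CONTRIBUTING.md above, alias resolution plausibly reduces to a hash lookup like the sketch below. The file path, method name, and fallback rules are assumptions for illustration, not the gem's actual implementation.

```ruby
require 'json'

# Hypothetical lookup over the aliases.json format shown in CONTRIBUTING.md.
ALIASES = JSON.parse(File.read('aliases.json'))

def resolve_model(model_id, provider = nil)
  entry = ALIASES[model_id]
  return model_id if entry.nil?              # already a concrete model ID
  return entry.values.first if provider.nil? # assume the native provider is listed first

  entry.fetch(provider.to_s) do
    # CONTRIBUTING.md: a ModelNotFoundError should suggest alternatives.
    raise ArgumentError, "#{model_id} is not available via #{provider}; " \
                         "known providers: #{entry.keys.join(', ')}"
  end
end

resolve_model('claude-3-5-sonnet', :bedrock)
# => "anthropic.claude-3-5-sonnet-20241022-v2:0:200k"
```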
