
Conversation

@tpaulshippy
Contributor

@tpaulshippy tpaulshippy commented Jun 9, 2025

What this does

Automatically opts into prompt caching in both the Anthropic and Bedrock providers for Claude models that support it, and reports prompt caching token counts for OpenAI and Gemini, which cache automatically.

Disabling prompt caching:

RubyLLM.configure do |config|
  config.cache_prompts = false # Disable prompt caching with Anthropic models
end

Caching just system prompts:

chat = RubyLLM.chat
chat.with_instructions("You are a helpful assistant.")
chat.ask("What is the capital of France?", cache: :system)

Caching just user prompts:

chat = RubyLLM.chat
chat.ask("What is the capital of France?", cache: :user)

Caching just tool definitions:

chat = RubyLLM.chat
chat.with_instructions("You are a helpful assistant.")
chat.with_tool(MyTool)
chat.ask("What is the capital of France?", cache: :tools)

Caching system prompts and tool definitions:

chat = RubyLLM.chat
chat.with_instructions("You are a helpful assistant.")
chat.with_tool(MyTool)
chat.ask("What is the capital of France?", cache: [:system, :tools])

Type of change

  • New feature

Scope check

  • I read the Contributing Guide
  • This aligns with RubyLLM's focus on LLM communication
  • This isn't application-specific logic that belongs in user code
  • This benefits most users, not just my specific use case

Quality check

  • I ran overcommit --install and all hooks pass
  • I tested my changes thoroughly
  • I updated documentation if needed
  • I didn't modify auto-generated files manually (models.json, aliases.json)

API changes

  • New public methods/classes

Related issues

Resolves #13

@tpaulshippy tpaulshippy changed the title from Prompt caching to Prompt caching for Claude Jun 9, 2025
@tpaulshippy tpaulshippy marked this pull request as ready for review June 9, 2025 21:44
@tpaulshippy
Contributor Author

@crmne As I don't have an Anthropic key, I'll need you to generate the VCR cartridges for that provider. Hoping everything just works, but let me know if not.

@crmne
Owner

crmne commented Jun 11, 2025

@tpaulshippy this would be great to have! Would you be willing to enable it on all providers?

I'll do a proper review when I can.

@tpaulshippy
Contributor Author

My five minutes of research indicates that at least OpenAI and Gemini take the approach of caching automatically for you based on the size and structure of your request. So the only support I think we'd really need for those two is to populate the cached token counts on the response messages, unless we want to try to support explicit caching on the Gemini API, which looks complex and less commonly needed.

Do you know of other providers that require payload changes for prompt caching?
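For reference, here is roughly where those two providers surface the cached-token counts in their raw usage payloads (a sketch based on the provider docs; the exact nesting should be double-checked against real responses):

# OpenAI chat completions usage (excerpt, parsed into a Ruby hash)
openai_usage = {
  "prompt_tokens" => 2006,
  "completion_tokens" => 300,
  "prompt_tokens_details" => { "cached_tokens" => 1920 }
}
openai_usage.dig("prompt_tokens_details", "cached_tokens") # => 1920

# Gemini generateContent usageMetadata (excerpt)
gemini_usage = {
  "promptTokenCount" => 2006,
  "candidatesTokenCount" => 300,
  "cachedContentTokenCount" => 1920
}
gemini_usage["cachedContentTokenCount"] # => 1920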

# Adds Anthropic's cache_control marker to a payload hash when caching is requested.
def with_cache_control(hash, cache: false)
  return hash unless cache

  hash.merge(cache_control: { type: 'ephemeral' })
end
Contributor Author


I'm realizing this might cause errors on older models that do not support caching. If it does, we could raise here or just let the API validation handle it. I'm torn on whether the complexity of a capabilities check is worth it, as these models are probably so rarely used.
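For illustration, a capability guard could look something like this (supports_caching? is a hypothetical flag that doesn't exist in the model registry today; it would have to be added to the capabilities data):

def with_cache_control(hash, cache: false, model: nil)
  return hash unless cache
  # Hypothetical capability flag; skip the cache marker on models that lack support.
  return hash if model && !model.supports_caching?

  hash.merge(cache_control: { type: 'ephemeral' })
end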

@tpaulshippy
Contributor Author

@crmne As I don't have an Anthropic key, I'll need you to generate the VCR cartridges for that provider. Hoping everything just works, but let me know if not.

Scratch that. I decided to stop being a cheapskate and just pay Anthropic their $5.

@tpaulshippy
Contributor Author

I'm looking to implement this in our project, and now I'm wondering whether it should be opt-out rather than opt-in. If you're using unique prompts every time, I guess caching them adds some cost, but my guess is that in most applications prompts will be repeated, especially system prompts.

Owner

@crmne crmne left a comment


Thank you for this feature @tpaulshippy; however, there are several improvements I'd like you to make before we merge this.

On top of the ones noted in the comments, the most important is that I'd like prompt caching implemented in all providers.

Plus, I have not fully checked the logic in providers/anthropic, but at first glance the patch seems a bit heavy-handed in the amount of changes needed. Were all the changes necessary, or could it be done in a simpler manner?

@crmne crmne added the enhancement label Jul 16, 2025
@tpaulshippy
Contributor Author

tpaulshippy commented Jul 16, 2025

I'd like to have prompt caching implemented in all providers.

Did you see this? Is the request to populate the cached token counts on the response messages for OpenAI and Gemini?

@crmne
Owner

crmne commented Jul 16, 2025

Did you see this? Is the request to populate the cached token counts on the response messages for OpenAI and Gemini?

Thank you for pointing that out; I had missed it. I think it would certainly be a nice addition to RubyLLM for all providers to offer nearly the same level of caching support.

@tpaulshippy
Contributor Author

Did you see this? Is the request to populate the cached token counts on the response messages for OpenAI and Gemini?

Thank you for pointing that out; I had missed it. I think it would certainly be a nice addition to RubyLLM for all providers to offer nearly the same level of caching support.

OK, we have a bit of a naming issue. Here are the property names we get from each provider:

Anthropic
cache_creation_input_tokens
cache_read_input_tokens

OpenAI
cached_tokens

Gemini
cached_content_token_count

My reading of the docs indicates that the OpenAI and Gemini values correspond pretty closely to Anthropic's cache_read_input_tokens.

What should we call these properties in the Message?

@crmne
Owner

crmne commented Jul 16, 2025

For the naming, let's go with:

  • cached_tokens - maps to the cache read values from all providers (the main property developers will use)
  • cache_creation_tokens - Anthropic-specific cache creation cost (nil for other providers)

This keeps it consistent with our existing input_tokens/output_tokens pattern while handling the provider differences cleanly.

Can you update the Message properties to use these names? Thanks Paul!
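A rough sketch of what that mapping could look like when normalizing each provider's usage block (the helper name and exact field nesting are illustrative assumptions, not the actual implementation):

def cache_token_counts(provider, usage)
  case provider
  when :anthropic
    { cached_tokens: usage['cache_read_input_tokens'],
      cache_creation_tokens: usage['cache_creation_input_tokens'] }
  when :openai
    { cached_tokens: usage.dig('prompt_tokens_details', 'cached_tokens'),
      cache_creation_tokens: nil }
  when :gemini
    { cached_tokens: usage['cachedContentTokenCount'],
      cache_creation_tokens: nil }
  end
end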

@tpaulshippy tpaulshippy requested a review from crmne September 22, 2025 15:37
@sosso

sosso commented Sep 24, 2025

One-shot prompt scenarios are our main use case, and the above would work great. Caching support is also a blocker on us making the jump to RubyLLM. Thanks, all!

@maximevaillancourt

maximevaillancourt commented Oct 3, 2025

Caching support is also a blocker on us making the jump to RubyLLM

No need to wait: use with_params in the meantime!

RubyLLM
  .chat(model: "claude-sonnet-4-20250514")
  .with_params(system: [{
    type: "text",
    text: "This is my very long system prompt that will get cached.",
    cache_control: { type: "ephemeral" },
  }])

@sosso

sosso commented Oct 3, 2025

Caching support is also a blocker on us making the jump to RubyLLM

No need to wait: use with_params in the meantime!

RubyLLM
  .chat(model: "claude-sonnet-4-20250514")
  .with_params(system: [{
    type: "text",
    text: "This is my very long system prompt that will get cached.",
    cache_control: { type: "ephemeral" },
  }])

Hm, when trying that, @maximevaillancourt, and later doing a .ask, my system prompt doesn't end up making it to OpenRouter. Are you using this approach successfully?

@maximevaillancourt

Are you using this approach successfully?

Yes, but it's worth noting that I'm using claude-sonnet-4-20250514 (the Anthropic one) directly, not an OpenRouter one; maybe that explains the difference in behaviour.

@sosso

sosso commented Oct 17, 2025

Hi @tpaulshippy @crmne -- anything we can do to help this along? Happy to help out if needed.

@codecov

codecov bot commented Oct 20, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 89.78%. Comparing base (c5c0027) to head (fe5c1e7).
⚠️ Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #234      +/-   ##
==========================================
+ Coverage   89.72%   89.78%   +0.06%     
==========================================
  Files          36       36              
  Lines        1761     1772      +11     
  Branches      481      487       +6     
==========================================
+ Hits         1580     1591      +11     
  Misses        181      181              

☔ View full report in Codecov by Sentry.

@sosso

sosso commented Oct 20, 2025

Thanks for picking this back up!

Have you played around much with the 1h TTL, @tpaulshippy? https://docs.claude.com/en/docs/build-with-claude/prompt-caching#1-hour-cache-duration

@tpaulshippy
Contributor Author

Have you played around much with the 1h TTL, @tpaulshippy?

No, I haven't. It didn't seem that useful for our scenarios. It would be a good addition to this library, though.

@crmne
Owner

crmne commented Oct 21, 2025

@tpaulshippy, I really appreciate the work you poured into this.

However, I've had a nagging gut feeling the whole time. The amount of churn here never felt proportionate to the feature. This ended up rewriting a good chunk of the library for what's ultimately an Anthropic quirk.

In the end it was impossible to review this in a way that steered toward what I had in mind without actually building it myself: once I went hands-on I discovered my own earlier suggestion about with_message_params (my initial preferred provider-agnostic way of dealing with this) couldn't work because Anthropic expects the cache metadata inside the content blocks.

The exploration led to Raw Content Blocks: 869a755 - raw messages that go straight to the LLM. This way Anthropic gets its caching hooks, we can support any weird provider-specific quirk of the message contents, and we keep the core clean and provider-agnostic.

I've shipped docs (https://rubyllm.com/chat/#raw-content-blocks), updated the Rails integration, and added an update generator for 1.9.

Thanks again for your work and enjoy Raw Content Blocks!
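For anyone landing here later, a rough sketch of the idea, assuming the raw content class takes an array of Anthropic-style content blocks (the exact constructor isn't shown in this thread, so check the linked docs for the real API):

# Hypothetical usage of a raw Anthropic content block to attach cache_control;
# see https://rubyllm.com/chat/#raw-content-blocks for the actual API.
raw = RubyLLM::Providers::Anthropic::Content.new([
  {
    type: "text",
    text: "This is my very long system prompt that will get cached.",
    cache_control: { type: "ephemeral" }
  }
])

chat = RubyLLM.chat(model: "claude-sonnet-4-20250514")
chat.with_instructions(raw) # assumption: raw content is accepted wherever a string is
chat.ask("What is the capital of France?")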

@crmne crmne closed this Oct 21, 2025
@sosso

sosso commented Oct 21, 2025

This new change is working almost perfectly for us, @crmne! One callout: we're using the OpenRouter provider (but primarily Anthropic models within that), and while message caching with raw blocks (using RubyLLM::Providers::Anthropic::Content in an OpenRouter chat) is working great, the tool with_params pattern (in the tool subclasses) is not caching definitions. I think it's because of the subclass hierarchy.

Also, Anthropic is a bit strange in that their docs have you cache the last tool: https://docs.claude.com/en/docs/build-with-claude/prompt-caching#prompt-caching-examples (Caching tool definitions).

@crmne
Owner

crmne commented Oct 21, 2025

As mentioned in the docs, Tool's with_params is only implemented in Anthropic, but I can quickly add it to the other providers!

@tpaulshippy
Contributor Author

Maybe we should even enable that by default (and add a configuration toggle).

I really liked this idea. Any chance we could get it? Cache the last system message, last user message, and last tool by default?

@crmne
Owner

crmne commented Oct 21, 2025

Maybe we should even enable that by default (and add a configuration toggle).

I really liked this idea. Any chance we could get it? Cache the last system message, last user message, and last tool by default?

This would mean changing the whole thing again and re-adding a lot of your code, only for a bit of magic around a provider quirk. Hard pass. This belongs in your app.

Also, that comment precedes the whole investigation I did.

@tpaulshippy
Contributor Author

OK, fair enough. I bring it up because one of the strengths of this library is the ability to switch between providers and models seamlessly. Since OpenAI and Gemini cache by default, setting up Anthropic to do the same would be nice.

@sosso

sosso commented Oct 21, 2025

I think the difference is that Gemini and OpenAI don't charge the user extra for the cache writes, while Anthropic does.

@tpaulshippy
Contributor Author

That is true. But in most use cases they charge even more if you don't cache at all. Thus, this PR.

@tpaulshippy
Contributor Author

Even if it were opt-in, a one-line way to properly turn on caching for Anthropic, without having to track which tool will be your last, would be nice.

@crmne
Owner

crmne commented Oct 22, 2025

@sosso done! 9916f01
