Prompt-caching – auto-injects Anthropic cache breakpoints (90% token savings)
67 points by ermis 10 hours ago | 27 comments

numlocked 8 hours ago
As per its own FAQ, this plugin is out of date and doesn't actually do anything incremental re: caching:

> "Hasn't Anthropic's new auto-caching feature solved this?"

> Largely, yes — Anthropic's automatic caching (passing "cache_control": {"type": "ephemeral"} at the top level) handles breakpoint placement automatically now. This plugin predates that feature and originally filled that gap.

reply
orphea 8 hours ago
I don't understand, and I'm curious: why does a dead-on-arrival open source tool need a separate domain?

  Domain Name: prompt-caching.ai
  Updated Date: 2026-03-12T20:31:44Z
  Creation Date: 2026-03-12T20:27:35Z
  Registry Expiry Date: 2028-03-12T20:27:35Z
reply
imjonse 6 hours ago
It's more likely the other way around: the .ai domain, with a fairly generic and maybe future-proof name, needed a quick vibecoded project so it wouldn't be empty at launch.
reply
derrida 8 hours ago
Is it perhaps because this is for Claude Code, but there are other tools that use Anthropic's API, like custom agents? (Some of which I prefer over Claude Code, e.g. sketch.dev, now called shelley at exe.dev.)
reply
stingraycharles 8 hours ago
No, because this doesn't actually "fix" any existing code. It's only useful for helping an LLM modify your code to put the caching parameters in the right place, but it doesn't have the correct API for that.
reply
thepasch 6 hours ago
I’m pretty sure whoever made this didn’t read the website they asked their LLM to generate for them.
reply
somesnm 9 hours ago
Hasn't this been largely solved by auto-caching introduced recently by Anthropic, where you pass "cache_control": {"type": "ephemeral"} in your request and it puts breakpoints automatically? https://platform.claude.com/docs/en/build-with-claude/prompt...
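If so, the opt-in the docs describe would look roughly like this, built as a plain request dict (a sketch based on the wording above; the exact field name and placement are an assumption, and the model name is a placeholder):

```python
# Sketch of the auto-caching opt-in as described above: one top-level
# cache_control flag instead of per-block breakpoints. Field placement
# and model name are assumptions, not confirmed API shapes.
request = {
    "model": "claude-sonnet-4-5",          # placeholder model name
    "max_tokens": 1024,
    "cache_control": {"type": "ephemeral"},  # top level: breakpoints placed automatically
    "system": "A long, stable system prompt that rarely changes...",
    "messages": [{"role": "user", "content": "Summarize this repo."}],
}
```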
reply
philipp-gayret 9 hours ago
Looking at my own usage with Claude Code out of the box, nothing special set up around caching: for this month, according to ccusage, I have 0.2M input tokens, 0.6M output, 10M cache create, and 311M cache read, for 322M total. Seems to me that it caches quite heavily out of the box, but if I can trim my usage somehow with these kinds of tools, I'd love to know.
reply
stingraycharles 8 hours ago
This is not about caching for tools that others built; it's solely for modifying code you're writing yourself that will use Anthropic's API endpoints.
reply
stingraycharles 9 hours ago
Yes, it has; this is a non-problem, and even if it were a problem, an MCP server would most definitely be one of the worst ways to fix it.
reply
gostsamo 8 hours ago
It is answered in the FAQ.
reply
ermis 9 hours ago
[flagged]
reply
stingraycharles 9 hours ago
Please don’t use AI for writing comments.

Also, what this adds is mostly overhead at the wrong level of abstraction, not visibility.

reply
katspaugh 9 hours ago
> This plugin is built for developers building their own applications with the Anthropic API.

> Important note for Claude Code users: Claude Code already handles prompt caching automatically for its own API calls — system prompts, tool definitions, and conversation history are cached out of the box.

Source: their GitHub

reply
jasonlotito 7 hours ago
Does anyone actually read anymore?

From the FAQ:

You're right, and it's a fair question. Claude Code does handle prompt caching automatically for its own API calls — system prompts, tool definitions, and conversation history are cached out of the box. You don't need this plugin for that.

This plugin is for a different layer: when you build your own apps or agents with the Anthropic SDK. Raw SDK calls don't get automatic caching unless you place cache_control breakpoints yourself. This plugin does that automatically, plus gives you visibility into what's being cached, hit rates, and real savings — which Claude Code doesn't expose.

> Claude Code already handles prompt caching automatically for its own API calls

Claude Code is an app. The API layer is different.
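At the SDK layer the FAQ is talking about, placing a breakpoint yourself is essentially one line on the last stable block. A minimal sketch using plain dicts (no network call; the helper name is mine, but the `cache_control` block shape matches Anthropic's documented Messages API):

```python
def add_cache_breakpoint(system_blocks):
    """Mark the last system block as a cache breakpoint so the stable
    prefix (everything up to and including it) can be reused across calls."""
    if system_blocks:
        system_blocks[-1]["cache_control"] = {"type": "ephemeral"}
    return system_blocks

system = add_cache_breakpoint([
    {"type": "text", "text": "Long, stable system prompt..."},
    {"type": "text", "text": "Tool usage guidelines..."},
])
# Everything up to the marked block is now eligible for caching.
```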

When did people start thinking that the Claude Code app and the API are the same thing?

Are these just all confused vibe coders?

reply
user34283 7 hours ago
Is this a joke?

The first thing on the page is "Automatic prompt caching for Claude Code."

Why should one expect this to actually be "Automatic prompt caching for new apps you develop with Claude Code"?

It appears to be hard to explain what this plugin does, and the authors did a terrible job; they did not even try.

reply
jasonlotito 3 hours ago
> Is this a joke?

Yes, your comment is a joke. I agree.

reply
adi_pradhan 8 hours ago
This is applicable only to the API, from what I understand, since Claude Code already caches quite aggressively (try npx ccusage).

Also the anthropic API did already introduce prompt-caching https://platform.claude.com/docs/en/build-with-claude/prompt...

What is new here?

reply
mijoharas 9 hours ago
I don't understand, Claude code already has automatic prompt caching built in.[0] How does this change things?

[0] https://code.claude.com/docs/en/costs

reply
fschuett 8 hours ago
Slightly off-topic, but I recently tested some tooling and it turns out Opus was far cheaper than Sonnet for my workload, because it produces way fewer output tokens, and those are what's expensive. Sonnet was also much slower than Opus (I did 9 runs comparing Haiku, Sonnet, and Opus on the same problem). I used to think "oh, Sonnet is more lightweight and cheaper than Opus"; no, that's actually just marketing.
reply
CGamesPlay 8 hours ago
Claude subscriptions (strangely) have a Sonnet-specific limit that is lower than the general model limit. Using Sonnet counts against both limits; using Opus counts only against the general limit. So the subscriptions discourage Sonnet use as well.
reply
Slav_fixflex 8 hours ago
Interesting – I've been using Claude heavily for building projects without writing code myself. Token costs add up fast, anything that reduces that is welcome. Has anyone tested this in production workflows?
reply
primaprashant 5 hours ago
I've found RTK CLI proxy [1] quite useful for reducing token usage

[1]: https://github.com/rtk-ai/rtk/

reply
joemazerino 6 hours ago
Firing off a cache write costs 1.25x input tokens IIRC, meaning non-repeatable tasks will cost more in the long run.
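Back-of-envelope, using Anthropic's published multipliers on the base input price (roughly 1.25x for a 5-minute cache write and 0.1x for a cache read; treat the exact numbers as subject to change):

```python
# Cost multipliers relative to the base input-token price.
WRITE, READ, BASE = 1.25, 0.10, 1.00

def cost(prefix_tokens, uses):
    """Return (cost with caching, cost without) for a prefix sent `uses` times."""
    cached = WRITE * prefix_tokens + READ * prefix_tokens * (uses - 1)
    uncached = BASE * prefix_tokens * uses
    return cached, uncached

cost(100_000, 1)  # → (125000.0, 100000.0): one-shot task, caching loses
cost(100_000, 2)  # → (135000.0, 200000.0): a single reuse already pays off
```

So a cache write only hurts if the prefix is never read again; one hit within the TTL more than recovers the 25% write premium.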
reply
spiderfarmer 9 hours ago
Will this work for Cowork as well?
reply
stingraycharles 9 hours ago
This is not at all an MCP server you want to use with a regular tool, as this is about low-level context window management. TBH it's really trivial to do this yourself, and I have no idea why OP decided to make an MCP server for it, as MCP is completely useless for this.

As a matter of fact, I think this is not a problem at all, as Anthropic makes it extremely easy to cache stuff: you just set your preferred cache level on the last message, and Anthropic will automatically cache everything up to it under the hood. Every distinct message is another cache point; e.g., they first compute the hash of all messages, and if that's not found, the hash of all messages minus one, etc.
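A toy sketch of that longest-prefix lookup (illustrative only; Anthropic's actual cache keying is internal and not public):

```python
import hashlib
import json

cache = {}  # prefix-hash -> cached state (stand-in for the server-side cache)

def prefix_key(messages, n):
    """Hash the first n messages as a stable cache key."""
    return hashlib.sha256(json.dumps(messages[:n]).encode()).hexdigest()

def longest_cached_prefix(messages):
    """Try the full conversation first, then drop one message at a time."""
    for n in range(len(messages), 0, -1):
        if prefix_key(messages, n) in cache:
            return n
    return 0

msgs = [{"role": "user", "content": "hi"},
        {"role": "assistant", "content": "hello"}]
cache[prefix_key(msgs, 1)] = "cached-state"  # only the first message is cached
assert longest_cached_prefix(msgs) == 1
```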

It’s really a non problem.

reply
ermis 8 hours ago
No. Claude.ai is a consumer product — you have no access to the API layer underneath it. cache_control is an API-level feature only. This plugin works exclusively when you're making direct Anthropic API calls, either through the SDK in your own code or through MCP-compatible clients like Claude Code, Cursor, Windsurf, etc.
reply
stingraycharles 8 hours ago
How would this work when you're making Anthropic API calls? Wouldn't an LLM have to invoke this MCP tool (which is done via a tool call, i.e., an answer from the LLM) before the request is even sent to Anthropic?

I am so confused why you chose an MCP server to solve this; wouldn't a regular API at least have some merit in how it could be used (in that it doesn't require an LLM to invoke it)?

reply
ermis 10 hours ago
[dead]
reply
Felixbot 7 hours ago
[flagged]
reply