Compare this with instances from Hetzner or Contabo or the likes. They are 35+ times cheaper.
This means that, just to break even with Hetzner/Contabo/others, my total usage on Cloudflare Sandbox across an entire month can't exceed even a single day of non-stop usage.
Continuously running means you're doing it wrong / it's a bad fit. But yes the ratio is somewhat extreme.
You can get a bare metal AX162 from Hetzner for 200 EUR/mo, with 48 cores and 256GB of RAM. With 4:1 virtual:physical oversubscription, you could run 192 guests on such a machine, yielding a cost of 200/192 = 1.04 EUR/mo and giving each guest a bit over 1GiB of RAM. Interestingly, that's not groundbreakingly cheaper than just getting one of Hetzner's virtual machines!
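Spelled out (a back-of-envelope sketch assuming one vCPU per guest and the RAM split evenly across guests):

    // Back-of-envelope math for the AX162 example above (specs/price as stated).
    const monthlyEur = 200;       // server price per month
    const physicalCores = 48;
    const ramGiB = 256;
    const oversubscription = 4;   // 4 virtual CPUs per physical core

    const guests = physicalCores * oversubscription;    // 192 single-vCPU guests
    const eurPerGuest = monthlyEur / guests;             // ~1.04 EUR/mo
    const ramPerGuest = ramGiB / guests;                 // ~1.33 GiB per guest

    console.log(guests, eurPerGuest.toFixed(2), ramPerGuest.toFixed(2));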
It was a core differentiator to never* have to worry about egress with them.
*: unless it's so large that it borders on abuse or requires a larger plan
We rolled our own that does pretty much the same thing, and perhaps more, since our solution can also mount persistent storage that can be carried between multiple runners. It does take 1-5 seconds to boot the environment (Firecracker VMs). If this sandbox is faster, I'll ask the team to consider it for fast startup.
This is also very similar to Vercel's sandbox thing. The same technology?
What I don't like about this approach is the GitHub repo bootstrap setup. Is it more convenient than Docker images pushed to some registry? Perhaps. But Docker benefits from having all the artefacts prebuilt in advance, which in our case is quite a lot.
I'd say 1-5 secs is fast. Curious to know which use cases require faster boot-up and suffer from this latency today?
Last week I was on a call with a customer. They were running OpenAI side-by-side with our solution. I was pleased that we managed to fulfil the request in under a minute while OpenAI took 4.5 minutes.
The LLM is not the biggest contributor to latency in my opinion.
We boot VMs (using Firecracker) at ~20-50ms.
Obviously, depending on the base image/overlay/etc., your system might need to pull resources, making the boot network-bound, but based on what you've said it seems you should be able to make your system much faster!
My focus is:
- simple SDK primitives for code execution, file ops, and git checkout (no boilerplate stacks)
- transparent pricing (per-second compute with monthly caps, no surprise egress)
- sandboxes priced 50-60% less than competitors
Would love to get your feedback on https://usesandbox.dev/ (just finalized the main pieces today).
The docs claim they persist the filesystem even when they move the container to an idle state, but it's unclear exactly what that means - https://github.com/cloudflare/sandbox-sdk/issues/102
The part that's unclear to me is how billing works for the disk of a sleeping sandbox, because container disks are ephemeral and don't survive sleep[2], yet the sandbox pricing points you to containers, which says "Charges stop after the container instance goes to sleep".
https://developers.cloudflare.com/sandbox/concepts/sandboxes...
https://developers.cloudflare.com/sandbox/concepts/sandboxes...
[2] https://developers.cloudflare.com/containers/faq/#is-disk-pe...
Memory: $0.0000025 per additional GiB-second
vCPU: $0.000020 per additional vCPU-second
Disk: $0.00000007 per additional GB-second
The smaller instance types have very low processing power because they only get a fraction of a vCPU. But if you calculate the monthly cost, it comes to:
Memory: $6.48 per GB
vCPU: $51.84 per vCPU (!!!)
Disk: $0.18 per GB
These prices are more expensive than the already expensive prices of the big cloud providers. For example a t2d-standard-2 on GCP with 2 vCPUs and 8GB with 16GB storage would cost $63.28 per month while the standard-3 instance on CF would cost a whopping $51.84 + $103.68 + $2.90 = $158.42, about 2.5x the price.
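For anyone checking the arithmetic, a quick sketch of how the monthly figures follow from the quoted per-second rates (assuming a 30-day month and an always-on instance; the 2 vCPU / 8 GiB / 16 GB shape is illustrative, not Cloudflare's exact instance spec):

    // Converting the per-second rates quoted above into monthly prices.
    const SECONDS_PER_MONTH = 30 * 24 * 3600; // 2,592,000

    const rates = {
      memoryPerGiBSecond: 0.0000025,
      vcpuPerSecond: 0.000020,
      diskPerGBSecond: 0.00000007,
    };

    const monthly = {
      memoryPerGiB: rates.memoryPerGiBSecond * SECONDS_PER_MONTH, // ~$6.48
      vcpu: rates.vcpuPerSecond * SECONDS_PER_MONTH,              // ~$51.84
      diskPerGB: rates.diskPerGBSecond * SECONDS_PER_MONTH,       // ~$0.18
    };

    // An always-on instance with 2 vCPUs, 8 GiB RAM and 16 GB disk:
    const alwaysOn = 2 * monthly.vcpu + 8 * monthly.memoryPerGiB + 16 * monthly.diskPerGB;
    console.log(monthly, alwaysOn.toFixed(2)); // ~158.40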
Cloudflare Containers also don't have persistent storage and are by design intended to shut down when not in use, but then I could also go for a spot VM on GCP, which would bring the price down to $9.27 (less than 6% of the CF container cost), and I'd get persistent storage plus a ton of other features on top.
What am I missing?
I could easily spin up a Firecracker VM on-demand and put it behind an API. It boots up in under 200 milliseconds, and I get to control it however I wish. All costs are under my control too.
I compared the costs with instances purchased from Hetzner or Contabo here: https://news.ycombinator.com/item?id=45613653
Bottom line: by doing this small amount of work myself, I can cut costs by roughly 35x.
For a guide, just follow their official docs. I went through them again today, literally copy-pasting shell commands one after the other, and voila: Firecracker was running and booting a full-fledged Ubuntu VM.
It was so damn fast that when it started, I thought my terminal had crashed because its prompt changed. But nope. It was just that fast: even while literally looking at it, I couldn't catch the moment it actually booted.
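For a sense of what "behind an API" looks like: once a `firecracker --api-sock /tmp/fc.sock` process is running, booting a microVM is just a handful of PUTs against its local socket. A rough Node/TypeScript sketch (kernel and rootfs paths are placeholders for whatever you built while following the docs):

    import http from "node:http";

    // Talk to a locally running `firecracker --api-sock /tmp/fc.sock` process
    // over its HTTP API on the unix socket.
    function fcPut(path: string, body: unknown): Promise<void> {
      return new Promise((resolve, reject) => {
        const req = http.request(
          {
            socketPath: "/tmp/fc.sock",
            path,
            method: "PUT",
            headers: { "Content-Type": "application/json" },
          },
          (res) => {
            res.resume(); // drain and discard the response body
            if (res.statusCode && res.statusCode < 300) resolve();
            else reject(new Error(`${path} -> HTTP ${res.statusCode}`));
          }
        );
        req.on("error", reject);
        req.end(JSON.stringify(body));
      });
    }

    async function bootMicroVM() {
      await fcPut("/boot-source", {
        kernel_image_path: "./vmlinux",            // placeholder: your kernel image
        boot_args: "console=ttyS0 reboot=k panic=1 pci=off",
      });
      await fcPut("/drives/rootfs", {
        drive_id: "rootfs",
        path_on_host: "./ubuntu-rootfs.ext4",      // placeholder: your rootfs
        is_root_device: true,
        is_read_only: false,
      });
      await fcPut("/machine-config", { vcpu_count: 1, mem_size_mib: 512 });
      await fcPut("/actions", { action_type: "InstanceStart" });
    }

    bootMicroVM().catch(console.error);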
By the way, two open-source projects already exist:
1. NodeJS: https://github.com/apocas/firecrackerode
2. Python: https://github.com/Okeso/python-firecracker
Given the huge price difference, you can keep spare spot VMs on GCP idle and warm all the time and still be an order of magnitude cheaper. You get more features and flexibility with these, and you can discard them at will; they are not charged per month. Pricing granularity on GCP is per second (with a 1-minute minimum), and you can fire up Firecracker VMs within milliseconds, as another commenter pointed out.
Cloudflare Sandboxes have less functionality at a significantly higher price. The tradeoff is simplicity: they are focused on a specific use case for which they don't need additional configuration or tooling. The downside is that they can't do everything a proper VM can.
It's a fair tradeoff, but I'd argue the price difference is very much out of balance. Then again, it seems to be a feature primarily aimed at AI companies, and there is infinite VC money to burn at the moment.
This is an on-demand managed container service with a convenient API, logging, global placement in 300+ locations, ...
AWS Lambda is probably a closer product match (sans the autoscaling).
Depending on what you do, Sandbox could be roughly on par with Lambda, or considerably cheaper.
The 1TB of included egress alone would be around $90 on AWS (at roughly $0.09/GB).
Of course on lambda you pay per request. But you also apparently pay for Cloudflare Worker requests with Sandbox...
I reckon ... it's complicated.
Honestly, the more I think about it, for my own sanity I'd rather use Hetzner (or similar) for the Golang/binary-related backend and CF Workers with SvelteKit for the frontend.
That way we'd get the best of both worlds and could probably glue things together with protobuf or something. I know people don't like managing two codebases, but SvelteKit is a pleasure to work with and can be learnt by anybody in 3-4 weeks (maybe a bit more for Golang). I might look more into CF Containers/GCP or whatever, but my heart wants Hetzner for the backend with Golang if need be, while extracting as much juice as I can from CF Workers with SvelteKit in the meantime.
Thoughts on my stack?
Instead of having to code this up in TypeScript, is there an MCP server or API endpoint I can use?
Basically, I want to connect an MCP server to an agent and tell it that it can run TypeScript code in order to solve a problem or verify something.
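In case it helps, here's roughly what I have in mind, as a minimal sketch using the @modelcontextprotocol/sdk TypeScript package (the tool name, the local tsx runner, and the timeout are all placeholders; in practice the handler would forward the code to whatever sandbox backend you actually trust):

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { execFile } from "node:child_process";
    import { mkdtemp, writeFile } from "node:fs/promises";
    import { tmpdir } from "node:os";
    import path from "node:path";
    import { z } from "zod";

    // One tool: the agent hands over a TypeScript snippet, we run it and return the output.
    const server = new McpServer({ name: "ts-runner", version: "0.1.0" });

    server.tool("run_typescript", { code: z.string() }, async ({ code }) => {
      const dir = await mkdtemp(path.join(tmpdir(), "ts-run-"));
      const file = path.join(dir, "snippet.ts");
      await writeFile(file, code);

      // Placeholder execution: run locally via tsx with a timeout.
      // A real version would ship the code to an isolated sandbox instead.
      const output = await new Promise<string>((resolve) => {
        execFile("npx", ["tsx", file], { timeout: 30_000 }, (_err, stdout, stderr) =>
          resolve(stdout + stderr)
        );
      });

      return { content: [{ type: "text", text: output }] };
    });

    await server.connect(new StdioServerTransport());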
It's the same SDK stuff from earlier this year right? https://developers.cloudflare.com/changelog/2025-06-24-annou...
Love the evocative animations and it manages to still be readable and well organized.
As far as I can tell it's all or nothing right now:
I want to run untrusted code (from users or LLMs) in these containers, and I'd like to prevent someone malicious from using my container to launch attacks against other sites. As such, I'd like to be able to allow-list just specific network endpoints. Maybe I'm OK with the container talking to an API I provide, but not to the world at large. Or perhaps I'm OK with it fetching data from npm and PyPI, but I don't want it to be able to access anything else (a common pattern these days, e.g. Claude's Code Interpreter does this.)
If these aren't enabled for containers / sandboxes yet, I bet they will be soon
It was announced as part of the code mode blog post:
https://blog.cloudflare.com/code-mode/
API docs: https://developers.cloudflare.com/workers/runtime-apis/bindi...
1. https://github.com/ossillate-inc/packj/blob/main/packj/sandb...
Networking as a whole can easily be controlled by the OS or any intermediate layer. For controlling access to specific sites, you need to either filter at the DNS level, which can be trivially bypassed, or bake something into the application binary itself. But if you are running untrusted code and giving that code access to a raw TCP channel, then it is effectively impossible to restrict what it can or cannot access.
Then inject HTTP_PROXY and HTTPS_PROXY environment variables so tools running in the sandbox know what to use.
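A minimal sketch of what that egress proxy could look like (the allow-listed hostnames are just examples; you'd still block all other outbound traffic at the network layer so the proxy is the only way out):

    import http from "node:http";
    import net from "node:net";

    // Example allow-list; everything else gets a 403 / dropped tunnel.
    const ALLOWED = new Set(["registry.npmjs.org", "pypi.org", "files.pythonhosted.org"]);

    const proxy = http.createServer((req, res) => {
      // Plain-HTTP proxy requests carry an absolute URL in req.url.
      let url: URL;
      try {
        url = new URL(req.url ?? "");
      } catch {
        res.writeHead(400).end("bad request");
        return;
      }
      if (!ALLOWED.has(url.hostname)) {
        res.writeHead(403).end("blocked");
        return;
      }
      const upstream = http.request(url, { method: req.method, headers: req.headers }, (r) => {
        res.writeHead(r.statusCode ?? 502, r.headers);
        r.pipe(res);
      });
      req.pipe(upstream);
    });

    // HTTPS goes through CONNECT; we only see "host:port", not the full URL.
    proxy.on("connect", (req, clientSocket, head) => {
      const [host, port] = (req.url ?? "").split(":");
      if (!ALLOWED.has(host)) {
        clientSocket.end("HTTP/1.1 403 Forbidden\r\n\r\n");
        return;
      }
      const serverSocket = net.connect(Number(port) || 443, host, () => {
        clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");
        serverSocket.write(head);
        serverSocket.pipe(clientSocket);
        clientSocket.pipe(serverSocket);
      });
      serverSocket.on("error", () => clientSocket.destroy());
    });

    proxy.listen(8888);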
Little Snitch does this pretty well: https://www.obdev.at/products/littlesnitch/index.html
My uneducated question, why not BPF? It's the actual original use case. Declare a filter rule (using any DSL you like), enforce it within the sandbox, move processing to the "real" firewall/kernel where applicable, etc.
(Certainly this would prevent things like package manager installations, etc... but if you're in a use case where you really want to sandbox things, you wouldn't want people to have e.g. NPM access as I'm sure there are ways to use that for exfiltration/C&C!)
Do you mean they force you to use their DNS? What about DoH? What about just skipping domain lookup entirely and using a raw IP address?