There are no "leaked" keys if Google hasn't been calling them a secret.
They should ideally prevent all keys created before Gemini from accessing Gemini. It would be funny (though not surprising) if their leaked-key "discovery" has false positives and starts blocking keys from Gemini.
This is going to break so many applications. No wonder they don't want to admit this is a problem. This is, like, a whole-number-percentage-of-Gemini-traffic level of fuck-up.
Jesus, and the keys leak cached context and Gemini uploads. This might be the worst security vulnerability Google has ever pushed to prod.
The problem here is that people create an API key for use X, then enable Gemini on the same project to do something else, not realizing that the old key now allows access to Gemini as well.
Takeaway: GCP projects are free and provide strong security boundaries, so use them liberally and never reuse them for anything public-facing.
Also, for APIs with quotas you have to be careful not to spread a single logical application across multiple GCP projects, since those quotas are tracked per project, not per account. It is definitely not Google's intent that you multiply your quota by having one GCP project per service within a single logical application.
You can do what you're describing but it's not the model Google is expecting you to use, and you shouldn't have to do that.
It seems what happened here is that some extremely overzealous PM, probably fueled by Google's insane push to maximize Gemini's usage, decided that the Gemini API on GCP should be default enabled to make it easier for people to deploy, either being unaware or intentionally overlooking the obvious security implications of doing so. It's a huge mistake.
It sent me to a url: https://console.cloud.google.com/google/maps-apis/onboard;fl...
which auto-generated an API key for me to paste into things ASAP.
---
Get Started on Google Maps Platform You're all set to develop! Here's the API key you would need for your implementation. API key can be referenced in the Credentials section.
At $DAYJOB, we had a (not very special) special arrangement with GCP, and I never heard of anyone who was unable to create a project in our company's orgs [0].
Given how Google never, ever wants to have a human do customer support, I expect a robot will quickly auto-approve requests for "number of projects" quota increases. I know that's how it worked at work.
[0] ...with the exception of errors caused by GCP flakiness and other malfunction, of course.
So many organizations have the IAM "Project creator" role assigned to everyone at the org level. I think it's even a default.
I can somewhat follow this line of thinking, it’s pretty intentional and clear what you’re doing when you flip on APIs in the Google cloud site.
But I can’t wrap my mind around what the API key even is here. All the Google cloud stuff I’ve done the last couple of years involves a lot of security stuff and permissions (namely, using Gemini, of all things. The irony…).
Somewhat infamously, there’s a separate Gemini API specifically to get the easy API key based experience. I don’t understand how the concept of an easy API key leaked into Google Cloud, especially if it is coupled to Gemini access. Why not use that to make the easy dev experience? This must be some sort of overlooked fuckup. You’d either ship this and API keys for Gemini, or neither. Doing it and not using it for an easier dev experience is a head scratcher.
Apps Script creates projects as well, but Maps just generates API keys in the current project.
To this day I am unable to access the models they say I should be able to.
I still get 2.5 only, despite enabling previews in the google cloud config etc etc.
The access seems to randomly turn on and off and swaps depending on the auth used (Oauth, api-key, etc)
The entire gemini-cli repo looks like it is full of slop, with 1000 devs trying to be the first to pump every issue into Claude and claim some sort of clout.
It is an absolute shit show and not a good look.
Of course, I bring this up because they could just version their API keys, completely solving this problem and preventing future ones like it.
Versioning data formats is wrongthink over there, so I’m guessing they just… won’t.
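A versioned scheme wouldn't even need to be complicated. Here's a minimal sketch of the idea - key names, format, and scope tables are all hypothetical, nothing like Google's actual key design - where a key permanently carries the scope generation it was minted under, so services that launch later are denied by default:

```python
# Hypothetical versioned API keys: each key encodes the scope "generation"
# it was minted under. Services introduced in a later generation are denied
# by default, so an old key can never silently gain access to a new service.

# Which services existed when each key version was current (illustrative).
SCOPES_BY_VERSION = {
    1: {"maps", "geocoding"},           # pre-Gemini keys
    2: {"maps", "geocoding", "gemini"}, # keys minted after Gemini launched
}

def mint_key(version: int, project: str) -> str:
    # Encode the version into the key itself, e.g. "v1-myproj-abc123".
    return f"v{version}-{project}-abc123"

def key_allows(key: str, service: str) -> bool:
    version = int(key.split("-", 1)[0].lstrip("v"))
    return service in SCOPES_BY_VERSION[version]

old_key = mint_key(1, "myproj")
new_key = mint_key(2, "myproj")

assert key_allows(old_key, "maps")
assert not key_allows(old_key, "gemini")  # old keys never gain Gemini
assert key_allows(new_key, "gemini")
```

Enabling Gemini on a project would then mean minting new v2 keys, rather than retroactively widening every old one.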
How did this get past any kind of security review at all? It’s like using usernames as passwords.
When Gemini came around, rather than that service being disabled by default for those keys, Gemini was enabled, allowing exploiters to easily utilize these keys (Ex. a "public" key stored in an APK file)
The problem described here is that developer X creates an API key intended for Maps or something, developer Y turns on Gemini, and now X's key can access Gemini without either X or Y realizing that this is the case.
The solution is to not reuse GCP projects for multiple purposes, especially in prod.
You are also wrong in saying there are no projects that could reasonably have a safe api key made unsafe by this exploit.
One example: a service that uses Firebase Auth must publish the key (Google's docs recommend it). Later, you add gen AI to that service, managing access using IAM/service accounts (the proper way). You've now elevated the Firebase Auth key to be a Gemini key. Really, undeniably poor from Google.
[Edit: It's likely that you intended to reply to this comment: https://news.ycombinator.com/item?id=47163147 ]
It shouldn't be enabled by default on either one.
Of course, Google is full of smart anti-fraud experts, they just handle 80% of this shit on the back-end, so they don't care about the front-end pain.
That said, I’d actually argue there’s an evolutionary explanation behind this where at a certain size, and more importantly complexity, an oversight like this becomes even more likely, not less.
I think this was much less likely to happen without the needless obfuscation. If the only purpose is to identify what project the data is for, and you're trusting the client to report that value, and counseling the client to use that value in a way that trivially exposes it to everyone... what is the point of making it look like cryptic garbage? Just use the account signup name or something, and don't call it a "key" in your query parameters. Keys are supposed to unlock stuff. A name tag is not a key.
An oversimplified version is this: there are two critical components to the mid/late-phase tech megacorp strategy. You need to protect the core money-printing product at all costs first and sustain that fiercely over a long period (a decade or more), then use any and all profits to find/fund the next cash cow, looking for optionality. While doing that, grow the market or consume a larger share of it. Google benefited mainly from the latter two, all while the internet blew up globally, funneling even more money into the machine.
It’s no secret that nearly every Google product that wasn’t Search lost them money. They were searching for the next big thing. They were likely among the first to see AI as exactly that, but moved too slowly to commercialize it. Likely because of bureaucracy risk, and perhaps also some sense of altruism in knowing the cataclysmic impacts AI could have. There have been plenty of former Google employees confirming this.
They also used to do things just to be cool, but those days have been long gone since Larry Page tapped out (and probably a few years before that - about a decade now). Since then they’ve so completely lost sight of what made them successful that nobody even knows their vision or identity as a company today. These things don’t correlate to market cap, but they do silently lead to stagnation.
Their brand protects them from quite a lot but it’s not invincible.
Imagine for a moment that there is no oversight. Every intern can ship prod code with their own homemade crypto.
How do you, in a retail business, agree to accept credentials that anyone can mint for free?
I mean obviously it happened. But… this doesn’t even seem like a compliance mistake. It’s a business-level mistake.
This resonates so well and I love it. I'm stealing this
Things get stupid for sure. But I have never once seen “hey let’s do away with access controls for high-COGS services”.
There's usually a small handful of people that care more than they should, keeping the company afloat, but it's despite the company's policies, not because of them.
Isn't that squarely at odds with Google's supposed AI prowess? Is the rot really so severe that their advances in AI (including things they've yet to make public) are insufficient to overcome it? Or are the capabilities of Gemini and AI systems in general being oversold?
I'm pretty sure that if anyone had asked Gemini "Is it a good idea to retroactively opt old API keys into new services?" it would say it's a bad idea. The problem is that no one asked.
Which is what makes this so notable. Did the security review not catch this, or did they choose to launch anyways because it was too hard to fix and speed was of the essence?
But there's a second insight that seems tough for a security review to catch. You have to realize that even though you can't do anything obviously malicious with the API, there is a billing problem.
I’m very careful with Google and co since they’re so intent on infinite scaling access to your wallet
(Or at least didn't at the time I've tried to use it. That may have changed, but we don't know when the GP tried it either.)
If a company like Google, with its ability to attract the best of the best, cannot handle the complexity of security and safety with SaaS/PaaS products, at what point do we say that perhaps this sector needs much more oversight?
Do you have a link?
It’s pretty much a daily occurrence in all three of the big cloud subs that people still learning get wiped out because the clouds refuse to provide appropriate safeguards
The extensive experience with Enterprise Authentication that the decades of use of Active Directory has given Microsoft may mean that their SSO and Enterprise Authentication stuff is the best out of those on offer. I wouldn't know about that... I just made (and destroyed) VMs and was often driven to frustration whenever Azure failed to reliably perform that simple task.
Then I saw the disclosure at the end and didn't get the sense that the flaw was fixed, so then I was still thinking... Is it responsible for them to be sharing this?
I'm glad that they did, because I can audit my own projects, but a bad actor may also be glad that they did.
The fact that we're hearing this first from a third-party and not from Google themselves is extremely problematic.
A slew of recommendations, one of them being:
Disable dormant keys: audit your active keys and decommission any that show no activity over the last 30 days.
(Although I don't think this even addresses the underlying issue)
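For what the recommendation is worth, it's mechanically simple once you have last-use timestamps for each key (from request logs, monitoring, or wherever you can get them). A minimal illustrative sketch, with all key names invented:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the "disable dormant keys" recommendation: given each key's
# last-seen-use timestamp, flag anything idle for 30+ days for review.

def dormant_keys(last_used: dict[str, datetime], now: datetime,
                 max_idle_days: int = 30) -> list[str]:
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(k for k, t in last_used.items() if t < cutoff)

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
keys = {
    "maps-prod": now - timedelta(days=2),       # still in use
    "old-prototype": now - timedelta(days=120), # dormant, decommission
}
print(dormant_keys(keys, now))  # ['old-prototype']
```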
SSNs were a good potential identifier, until the people that needed security cheaped out and started using SSNs as a bad implementation of security. Now they're bad at both purposes!
1. You never know how much a single API request will cost, or did cost, with the Gemini API
2. It takes anywhere between 12-24 hours to tell you how much they will charge you for past aggregate requests
3. No simple way to set limits on payment anywhere in google cloud
4. Either they are charging for the batch api before even returning a result, or their "minimal" thinking mode is burning through 15k tokens for a simple image description task with <200 output tokens. I have no way of knowing which of the two it is. The tokens in the UI are not adding up to the costs, so I can only assume its the first.
5. Incomplete batch requests can't be retrieved if they expire, despite being charged.
6. A truly labyrinthine ui experience that makes modern gacha game developers blush
All I have learned here is to never, ever use a google product.
Distributed “shared nothing” API handling should make usage available to accounting, and the API handling orchestrator should have a hook that allows accounting to revoke or flag a key.
This gets the accounting transactions and key availability management out of the request handling.
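A toy sketch of that split, with everything in-memory and all names made up: the request path only consults a cheap revocation set, usage events go onto a queue, and a separate accounting pass aggregates spend and revokes keys over their cap.

```python
from collections import defaultdict
from queue import Queue

# Toy sketch of decoupling billing from request handling: handlers never
# compute spend synchronously; they emit usage events and check a
# revocation set. Accounting runs out of band and flips the revocation bit.

usage_events: Queue = Queue()
revoked: set[str] = set()
caps = {"key-A": 10.0, "key-B": 10.0}  # dollar caps per key (illustrative)

def handle_request(key: str, cost: float) -> bool:
    if key in revoked:                 # cheap, synchronous check
        return False
    usage_events.put((key, cost))      # billing happens asynchronously
    return True

def run_accounting() -> None:
    totals = defaultdict(float)
    while not usage_events.empty():
        key, cost = usage_events.get()
        totals[key] += cost
    for key, total in totals.items():
        if total > caps.get(key, 0.0):
            revoked.add(key)           # the hook back into key availability

assert handle_request("key-A", 8.0)
assert handle_request("key-A", 8.0)      # still allowed: accounting lags
run_accounting()
assert not handle_request("key-A", 1.0)  # now revoked
assert handle_request("key-B", 1.0)
```

The trade-off is visible in the middle: between accounting runs a key can overshoot its cap, which is exactly the "you won't have a realtime view of the costs" point made elsewhere in this thread.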
https://docs.cloud.google.com/billing/docs/how-to/budgets
They are still not a spending cap of course.
But the fact that permissions are not hardened at time of creation is bonkers to me.
Google Maps has one, even. And Stripe.
I like that. Easy to tell if you should keep the key a secret or not.
The only purpose of the keys Maps/Stripe encourage you to publicly put into your website is to guarantee it is talking to _your_ Google/Stripe account not someone else's. Obviously once you put them in your client they are of zero value in helping Google/Stripe identify you. The fact that Google allows you to use the same type of key they also use elsewhere to identify _you_ not _them_ was always incredibly bad design. Google already have the 'Project ID' which would have been the best thing to use.
Malpractice/I can't believe they're just rolling forward
You can't maliciously embed it in a site you control to either steal map usage or run up their bill, because other people's web browsers will send the correct Referer header.
That means you could use a botnet or similar to request it with a script. But if you are botnetting, Google will detect you very quickly.
Re the Referer header: that seems an odd way for Google to do it - surely they would have fixed that by now? I guess it's not a huge problem, as attackers would have to proxy traffic or something to obscure the Referer headers sent by real clients. Any links on how people exploit this?
Something that can be abused is if the key also has other Maps APIs enabled, like Places API, Routes API or Static APIs especially for scraping because those produce valuable info beyond just embedding a map.
The only suggestions I have are:
- If you want to totally hide the key, proxy all the requests through some server.
- Restrict the key to your website.
- Don't enable any API that you don't use, if you only use the Maps Javascript API to embed a map then don't enable any other Maps API for that key.
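On top of those, if you want to audit whether a key you own has quietly gained Gemini access, one way is to probe the Generative Language API's model-listing endpoint, which to my understanding accepts a `?key=` query parameter (verify the path against current docs; only probe keys you own):

```python
import urllib.error
import urllib.request

# Sketch of auditing your own key: a 200 from the Generative Language
# ListModels endpoint means the key can reach Gemini. The endpoint path
# reflects my understanding of the public API and should be verified.

def gemini_probe_url(api_key: str) -> str:
    return ("https://generativelanguage.googleapis.com/v1beta/models"
            f"?key={api_key}")

def key_has_gemini_access(api_key: str) -> bool:
    try:
        with urllib.request.urlopen(gemini_probe_url(api_key),
                                    timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # 400/403 etc.: the key can't reach Gemini

if __name__ == "__main__":
    print(key_has_gemini_access("AIza...your-key-here"))
```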
The only suggestion I see there from a quick skim that would avoid the above is for customers to set up a Google Maps proxy server for every usage, which adds security and hides the key. That is a completely impractical suggestion for the majority of users of embedded Google Maps.
It feels like something that would happen if you outsourced planning to an LLM.
Can't you just run up a huge bill for a developer by spamming requests with their key? I don't see how this wasn't always an issue?
Not perfect protection of course - an attacker could spam requests with all the right headers if they wanted to - but it removes one of the big motivations for copying someone else's API key.
[1] https://docs.cloud.google.com/api-keys/docs/add-restrictions...
Even if you have a key that you use for maps (not secret) someone could add the generative AI scope to it and make it now necessarily secret (even though it’s probably already publicly available)?
Changing the semantics of existing non-key keys, making them actual keys, is horrendous.
This whole Gemini roll-out has me reminded of the Google+ days, when they thought they were going to die if they didn't do social.
Making AI utilization appear to go up is the only thing that matters right now if you're in the boardroom at one of these companies. Whether or not that utilization was actually intended by the customer is entirely irrelevant. From here, the only remaining concern is mitigating legal issues which google seems to be immune to.
There's a long stretch from over optimizing a UI to something that is very clearly an error like what has happened here.
It is entirely believable to me that a company like Google would do the same with AI use numbers. I suspect that all these AI use factors in corporate performance reviews are about the same thing.
This could be a standard oversight too, I find Google’s documentation on this stuff to be Byzantine.
This destroys Google's right to pursue an unpaid "AI" bill as a debt.
It would be more interesting if they scanned GitHub code instead. The number terrified me, though I am not sure how many of those are live.
I mean, I get that authentication to the service is performed via other means, but what's the use of the key then?
I'm guessing it's just a matter of binding service invocations to the GCP Project to be billed, by first making sure that the authenticated principal has rights on that project, in order to protect from exfiltration. That would still be a strange use case for what gets called an "API key".
The problem that you, and many people are having in this thread, is that you are typing "API key" but, in your head, you're thinking "private API key". API keys can be secret or public, and many services have matching pairs of secret and public keys (Stripe, Chargify, etc. etc. etc.)
- 6 weeks ago Google said they would fix it
- 3 weeks ago Google said they were working on it
...but we're publishing the info anyway, so everyone can go nuts with it.
Indeed, the key doesn't change. The new capability comes from the new code.
It would not be a re-evaluation of risk, because this is a new project. The evaluation of risk is supposed to come at the moment when the new capability is implemented, and consciously tied to an existing key type, which was previously advertised as non-secret.
When you enable the Gemini API (Generative Language API) on a Google Cloud project, existing API keys in that project (including the ones sitting in public JavaScript on your website) can silently gain access to sensitive Gemini endpoints. No warning. No confirmation dialog. No email notification.
Specifically, the last bit - “No warning. No confirmation dialog. No email notification.” - immediately smells like LLM-generated text to me. Punchy repetition in a set of 3. If you scroll through TikTok or Instagram you can see the same exact pattern in a lot of LLM-generated descriptions.
That said, some specific things that feel very AI-y are the mostly short, equally-sized paragraphs with occasional punchy one-sentence paragraphs interspersed between them; the use of bold when listing things (and the number of two-element lists); there are a couple of "it's not X, it's Y"-style statements; one paragraph ends with an "they say it's X, but it's actually Y" construct; and even the phrasing of some of the headings.
None of these are necessarily individually tells of AI writing (and I suspect if you look through my own comments and blog posts on various sites, you'd find me using many of the same constructs, because they're all either effective rhetorically, or make the text clearer and easier to understand). But there's something about the concentration of them here that feels like AI - the uncanny valley feeling.
I would put money on this post at least having gone through AI review, if not having been generated by AI from human-written notes. I understand why people do that, but I also think it's a shame that some of the individual colour of people's writing is disappearing from these sorts of blog posts.
It’s not uncommon, as basic writing advice, to use sets of three for emphasis. That isn’t a signifier of LLM generation, in my opinion.
“The rule of three is a writing principle which suggests that a trio of entities such as events or characters is more satisfying, effective, or humorous than other numbers, hence also more memorable, because it combines both brevity and rhythm with the smallest amount of information needed to create a pattern.”
It’s how I was taught to write, but I understand that my personal experience can’t be generalized to make sweeping statements.
Do you have data that suggests it’s uncommon in human-authored blog posts and more common in LLM-generated text?
I don't think that's exactly it.
Speaking of LLM-writing in general, it seems to greatly overuse certain types of constructions or use them in uncommon contexts. So that probably isn't so much using the rule of threes, but overusing the rule of threes in certain specific ways in certain specific contexts.
"this sounds like AI"
"professional writers use this technique"
"they can't be a professional writer, they're using AI"

I use groupings of 3 and try to make things punchy myself sometimes, especially when I'm writing something intended to sway others. I think the problem with this article is the way it feels like the perfect average of corporate writing. It's sort of like the "written by committee" feel that incredibly generic pop music often has.
When I write things, I often go back and edit and reword parts. Like the brushstrokes in an oil painting, the flow of thought varies between paragraphs and even sentences. LLMs only generate things from left to right (or vice versa in RTL languages, I presume). I think that gives LLM generated text a "smooth" texture that really stands out to anyone who reads a lot.
The biggest factor is simply how long you've been using LLMs to generate text, how often, how much. It's like how an experienced UI designer can instantly tell that something is off by a single pixel off upon first seeing a UI, whereas if you gave me $200 to find it within 10 minutes I might well fail.
HN Note: Yes the rule of threes is broader than just this particular pattern here, but in my opinion this common writing and communication pattern is a specific example of the rule of threes.
Punchy repetition in a set of 3. Yes. LLMs are able to capably mimic the common patterns that how to write books have suggested for the last 100 years as ways to make your writing more "impactful" and attention-grabbing. So are humans. They learned it from watching us.
I am a little bit worked up on this as I have felt insulted a couple times at having something I've written been accused of being by an LLM, in that case it was because I had written something from the viewpoint of a depressed and tired character and someone thought it had to be an LLM because they seemed detached from humanity! Success!
I too would like to be able to reliably detect when something has been written by an LLM so I can discount it out of hand, but frankly many of the attempts I see people make to detect these things seem poorly reasoned and actively detrimental.
People have learned in classes and from reading how to improve their writing. LLMs have learned from ingesting our output. If something matches a common writing 101 tip it is just as likely to be reasonably competent as it is to be non-human. The solution to escape being labelled an LLM is not to become less competent as a writer.
I have been overly verbose here, as I am somewhat worked up and angry and it is too late in the morning to go back to sleep but really too early to be awake. I know verbosity is also a symptom of being an LLM, but not giving a damn is a symptom of humanity.
>LLMs are able to capably mimic the common patterns that how to write books have suggested for the last 100 years as ways to make your writing more "impactful" and attention-grabbing. So are humans. They learned it from watching us.
Don't forget that LLMs (at least the "instruct" versions) undergo substantial post-training to align them with the authors' objectives, so they are not a 100% pure reflection of the distribution seen on the internet. For example, it's common for LLMs to respond with "You're absolutely right!" to every second message, which isn't what humans usually do. It's a result of some kind of RLHF: human labelers liked to hear that they're right, so they preferred answers containing such phrases, and those responses became amplified. People recognize LLM-generated writing because LLMs' pattern distribution is different from the actual pattern distribution found in articles written by humans.
No, I'm not being sarcastic. People have given up the em-dash, which is an official punctuation mark you use in proper writing. And it's all downhill from there.
The issues of style are annoying, but I find it much worse to wade through these 3000 word posts which are far longer than they need to be just because they're so damn cheap to compose.
Yes. And it's only a matter of time that the model companies start to try to train in that "human sloppiness." After all, a lot of their customers want machines that can pass for humans.
> No, I'm not being sarcastic. People have given up the em-dash, which is an official punctuation mark you use in proper writing. And it's all downhill from there.
I wouldn't be surprised if the internet language of people devolves into a weird constantly-changing mish-mash of slang and linguistic fads. Basically an arms race where people constantly innovate in order to stay distinct from the latest models.
But the end result of that would be probably fragmentation, isolation, and a kind of dark ages. Different communities would have different slang, and that slang would change so fast that old text would quickly become hard to understand.
> What You Should Do Right Now
> Bonus: Scan with TruffleHog.
> TruffleHog will verify whether discovered keys are live and have Gemini access, so you'll know exactly which keys are exposed and active, not just which ones match a regular expression.
I don't know exactly, but I'm sure. The cadence, the clarity, the bolding, the italics, it's all just crisp and clean structured and actionable in a way that a meandering human would not distill it down to.
AI output is not varied like real human writing. This is a very distinctive narrowing of style.
Like what happens to YouTube videos that go through the compression algorithm 20 times.
With the AI feedback loop being so fast and tight for some tasks, the focus moves on to delivery than learning. There is no incentive, space or time for learning.
My motto is - If it wasn’t worth writing, it won’t be worth reading.
A good example of writing where I’d recommend using LLM’s is product documentation. You pass the diff, the description of the task, and the context (existing documentation) with a prompt ”Update the documentation…”.
Documentation is important but it’s not prose. However, writing a comment on hacker news is.
While writing this I suddenly realized that marketers and writers probably do a better job at recognizing it than developers and engineers, so maybe all hope isn't lost.
For those who want to know the tells: overall cadence and frequency of patterns - especially infrequency of patterns - are the biggest ones. And that means we can't actually give you the best tells, because they're more about what is absent than what is present. What's absent is any sentence pattern that falls outside the LLM's go-tos. Anything human-written has a mix of both; LLM-written text just entirely lacks the off-pattern sentences. Humans do use the LLM-preferred patterns, but not for every single sentence. But anyway, here we go.
> Transparently, the initial triage was frustrating; the report was dismissed as "Intended Behavior”. But after providing concrete evidence from Google's own infrastructure, the GCP VDP team took the issue seriously.
^ Fun fact - The ";" would've originally been an em-dash but was either rewritten or a rule was included for this.
> Then Gemini arrived.
^ Dramatic short sentences, a pattern with magnitudes-higher LLM frequency than human frequency, but one that hasn't reached the public consciousness yet, a la "not just X but Y".
> No warning. No confirmation dialog. No email notification.
^ Another such pattern. Not just because it's three of them, but also because of the content and repetition. Humans rarely write like that because it again sounds overly dramatic. It's something you see in fiction rather than a technical writeup. In a thriller.
> Retroactive Privilege Expansion. You created a Maps key three years ago and embedded it in your website's source code, exactly as Google instructed. Last month, a developer on your team enabled the Gemini API for an internal prototype. Your public Maps key is now a Gemini credential. Anyone who scrapes it can access your uploaded files, cached content, and rack up your AI bill. Nobody told you.
This style of scenario writing is another one.
> Nobody told you.
Absolute drama queen.
>The UI shows a warning about "unauthorized use," but the architectural default is wide open.
Again.
> The attacker never touches your infrastructure. They just scrape a key from a public webpage.
Again.
> These aren't just hobbyist side projects. The victims included major financial institutions, security companies, global recruiting firms, and, notably, Google itself.
..
> A key that was deployed years ago for a completely benign purpose had silently gained full access to a sensitive API without any developer intervention.
Surprised it hasn't gained consciousness by now. Maybe that's a future plot point.
Here's a great example to train your skills on, because it's rare in that the ratio of "human : straight from LLM" increased gradually as the article goes on: https://www.wallstreetraider.com/story.html
It started at heavy human editing (or just human-written), but less and less towards the end.
The author confirmed this when it was pointed out, FWIW [0].
Someone is complaining that
> it's all just crisp and clean structured and actionable in a way that a meandering human would not distill it down to.
but this is a security report ... people intentionally write such things carefully and crisply with multiple edits and reviews.
The problem with AI slop (to me) is more that the technical content is not good or is entirely the product of the LLM. At that point, there's no point in me reading it, I can just prompt the question if I'm interested.
This is original research which wasn't public before, so the value is still there and I didn't think whichever combination of a human and LLM that generated it did a bad job.
From TFA:
> Last month, a developer on your team enabled the Gemini API for an internal prototype.
> The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet.
Benign, deployed openly without any access restrictions whatsoever, billing tokens can be used to bill for a service under the account it is enabled for. That's the intended behavior, literally. Maps API keys are used to give your users access to Google Maps on your credit card.
What's the problem here? Yes, the defaults could have been stricter, but it costs nothing to create separate internal projects that don't have good-for-billing access keys floating around the open internet. People moved fast, deployed LLM-generated code, broke things, and then blame everyone else but themselves?
Google guidelines say "API keys" (a huge misnomer for something that is more accurately described as a project ID) are not secrets. The idea of creating an internal project goes against what the guidelines suggest. The "API keys" are customer facing identifiers.
(A legal angle might be the Unfair Contract Terms Directive in the EU, though plenty of individual countries have their own laws that may apply, to my understanding. A quite equivalent situation was "bill shock" for mobile phone users, where people went on vacation and arrived home to an outrageously high roaming bill that they didn't understand they had incurred. This is also limited today in the EU: by law, the service must be stopped after a certain charge is incurred.)
I still had to pay it or else I wouldn't have been able to use my account.
Would that have been so bad? The world might be a better place if people stopping pouring money into that cesspit.
By continue to use their services, you're encouraging the anti-consumer tactics you're complaining about.
On that note, I'll just mention that I had discovered over the last while that when you prepay $10 into your Anthropic account, either directly, or via the newer "Extra usage" in subscription plans, and then use Claude Code, they will repeatedly overbill you, putting you into a negative balance. I actually complained and they told me that they allow the "final query" to complete rather than cutting it off mid-process, which is of course silly, because Claude Code is typically used for long sessions, where the benefit of being cut off 52% into the task rather than 51% into it is essentially meaningless.
I ended up paying for these so far, but would hope that someone with more free time sues them on it.
This means that billing happens asynchronously: you may use queues, you may do batching, etc., but you won't have a realtime view of the costs.
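A minimal sketch of why batched billing loses the realtime view, using an in-process queue as a stand-in for the provider's billing pipeline (the costs and batch cadence are invented for illustration):

```python
import queue

# Billing events are enqueued at request time but only tallied later, in batches.
billing_events: "queue.Queue[float]" = queue.Queue()
recorded_spend = 0.0

def issue_request(cost_usd: float) -> None:
    # The request is served immediately; only a billing event is enqueued.
    billing_events.put(cost_usd)

def drain_billing_batch() -> float:
    # The billing job runs later and sums whatever has accumulated.
    total = 0.0
    while not billing_events.empty():
        total += billing_events.get()
    return total

issue_request(0.50)
issue_request(0.25)
# The user has already incurred $0.75, but recorded_spend is still $0.00
# until the next batch runs -- that lag is the missing "realtime view".
recorded_spend += drain_billing_batch()
```

Between batches, any balance check sees stale numbers, which is exactly the window in which a prepaid balance can go negative.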
Well, that makes sense in principle, but they obviously do have some billing check that prevents me from making additional requests after that "final query". And they definitely have some check to prevent me from overutilizing my quota when I have an active monthly subscription. So whatever it is that they need to do, when I prepay $x, I'm not ok with them charging me more than that (or I would have prepaid more). It's up to them to figure this out and/or absorb the costs.
No they don't actually! They try to get close, but it's not guaranteed (for example, make that "final query" to two different regions concurrently).
Now, they could stand up a separate system with a guaranteed fixed cost, but few people want that and the cost would be higher, so it wouldn't make the money back.
You can do it on your end though: run every request sequentially through a service and track your own usage, stopping when reaching your limit.
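That client-side approach can be sketched as a small gate that serializes requests and refuses any that would push cumulative spend past your prepaid limit. The class name, limit, and per-request cost estimates are all illustrative; real numbers would come from your provider's pricing:

```python
from dataclasses import dataclass

@dataclass
class UsageGate:
    """Client-side spend cap: run requests through this sequentially
    and stop once the prepaid limit would be exceeded."""
    limit_usd: float
    spent_usd: float = 0.0

    def try_charge(self, estimated_cost_usd: float) -> bool:
        # Refuse the request if it could exceed the prepaid limit.
        if self.spent_usd + estimated_cost_usd > self.limit_usd:
            return False
        self.spent_usd += estimated_cost_usd
        return True

gate = UsageGate(limit_usd=10.00)
gate.try_charge(6.00)   # fits, accepted
gate.try_charge(5.00)   # would exceed $10, refused
gate.try_charge(4.00)   # exactly reaches the cap, accepted
```

The catch is the same one the provider faces: you have to *estimate* each request's cost up front, and a long-running agent session makes that estimate rough, so a conservative per-request ceiling is the safer choice.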
If you have a cap and then your thing hits the front page and suddenly has 10,000% more legitimate traffic than usual, traffic you actually want, those visitors are going to get an error page instead of your site. If there is no cap, you're going to get a large bill. People hate both of those outcomes and will complain regardless of which one actually happens.
The main thing Google is screwing up here is not giving you the choice between them.
This is one of the reasons people have suggested using a different provider for backups.
Google will probably have me go through five bots and if, by some kind of miracle, I manage to have a human on the phone, they will probably explain to me that I should have read the third paragraph of the fourth page of the self service doc and it's obviously my fault.
But have you considered it from the company's POV? Charging whatever you like, where it's always the customer's fault, is a pretty sweet deal. Up next in the innovation pipeline: charging customers extra fees for something or other. It'll be great!
But not in the causal sense of the word; in the legal sense that the company didn't follow the legally required baseline of acting with due diligence.
In general, companies are required to act with diligence. This is also where, e.g., punitive damages come in: they create an incentive for companies to act with diligence, or else they may have to pay far above the actual damages done.
This is also why, in some countries, the executives tied to negligent decisions, up to the CEO, can be held _personally_ liable. (Though mostly in cases of negligence where people were physically harmed or died, and mostly as an alternative approach to keeping companies diligent, i.e. instead of punitive damages.)
The main problem is that in many cases companies wriggle their way out of it with a mixture of make-pretend diligence, lawyerly nonsense dragging things out, and early settlements.
Not illegal, but it should make enforcing payment illegal.
If you put in "surely" and people think it's quite wrong then they might downvote. It's not personal.
laughs in European
Your attorney can push for whatever illegal thing they can think of, it doesn't mean you will get it.
It is not illegal to include legal fees in damages.
The default "American rule" is that each party pays their own legal fees, unless there is a relevant fee shifting rule.