And looking at her main website https://www.citationneeded.news/ there is a tip jar, but it doesn't accept crypto. I was expecting her to take at least the major coins like ADA, ETH, and BTC, but she's consistent with her views.
The joke is, LLM Horrors is anti-LLM, and Web3 is Going Just Great is anti-Web3. The equivalent for Tesla would be putting an ICE inside their Model 2 if they didn't believe in EVs.
> Conclusion: Always set billing caps and alerts on cloud API keys.
Sadly, that's way easier said than done in the case of GCP. It's been a real reason for me to avoid GCP deployments for LLM use-cases in smaller projects.
I remember looking into this a while back, assuming it would be a sane feature to expect. But for some reason it's surprisingly non-trivial to set budgets in GCP, especially if the only thing you want is a Gemini API key with finite spending.
IIRC your options are either quota (rate) limits, which are extremely granular (like, per region per model), meaning you need to both set tons of values and understand which quotas to relax; or some bubblegum-and-duct-tape solution where you build an event-driven pipeline in your own project to react to cost increases.
I understand that exact budgets are hard to enforce in real-time, especially for their more complex infra offerings.
However, (1) even if it's not exactly real-time but instead enforced every hour, that's already going to go a long way; and (2) PAYG LLM usage is billed fairly linearly by the number of tokens you use, so if there were an easy way to set a dollar amount and have it expressed as quotas, that would already get you part of the way there.
Anyway, the current state of GCP budgeting makes me avoid it for production usage until I'm ready to commit significant effort to hardening it. For small projects, the free-tier tokens are a safe bet, but their extremely low rate limits make them rarely a good fit.
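Point (2) above can be made concrete: with flat per-token pricing, a dollar cap translates directly into a token quota. A minimal sketch, where the price is a placeholder rather than a real Gemini rate:

```python
# Sketch: converting a dollar budget into a token quota, assuming
# linear pay-as-you-go pricing. The price below is a placeholder,
# not an actual Gemini rate.
def tokens_for_budget(budget_usd, price_per_million_tokens_usd):
    """Return how many tokens a dollar budget buys at a flat rate."""
    return int(budget_usd / price_per_million_tokens_usd * 1_000_000)

# e.g. a $20/month cap at a hypothetical $0.50 per million tokens:
quota = tokens_for_budget(20, 0.50)  # 40,000,000 tokens
```

That token count is what you'd then want to express as a quota, which is exactly the piece GCP doesn't make easy.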
Thankfully Google has some basic protection for this. I accidentally committed my Google API token as part of some OTEL trace JSON file, and within a few minutes my key was automatically locked by Google and marked as leaked (with a link pointing to exactly where it happened).
"Some basic protection": it wasn't always like this. A few years back you could easily get API keys for any web service by searching certain keywords on GitHub, and that included all Google APIs, but since the Microsoft acquisition it's not as simple anymore.
It is used to exchange goods and services without the consent of the owner. Kind of like picking up a wallet full of cash off the ground (with or without identification).
I'd guess they are selling access to other people somehow. Like it used to be the case that a stolen phone would rack up enormous overseas call charges until it was reported and disabled.
If your goal is to just burn as much money as possible, as fast as possible, simply spamming expensive image/video generation requests would probably do the trick, if the key's rate limits are high enough.
There's also a practice that primarily seems to occur in China where stolen keys are resold via proxy services. A single key can provide access to thousands of users, racking up costs very fast (again, assuming the rate limits are high enough).
I understand that automatically stopping cloud resources beyond a certain spend is problematic and challenging in many ways, e.g. do you just destroy provisioned compute, storage, and data?
But for those stupid API keys the corporations have zero excuse not to have configurable limits with a sensible default.
One caveat to alerts (and automatically acting on alerts) is that there are delays[0] between costs being incurred and alerts firing. I can't find a Google source for what the delay is, but one source online says it could be "24 hours [to] a few days."[1]
This has been a major reason why I reach for OpenAI models before Gemini, but also why I'd rather use services like RunPod for training jobs. For a small bootstrapped company like mine, it feels terrifyingly easy to rack up a company-ending AI bill.
The cloud companies try to limit these accidents by cranking your quotas down to nothing, but this also means that my small company can't just spin up an 8xH100 node without major ceremony, and I have routinely been denied the GPU quotas I needed for projects.
Accidentally leaving that kind of node on for the 24 hours that it might take to get an alert would rack up a $2,000+ bill, compared to $500 on RunPod, which will also stop the instance when you run out of money.
I've loved working with major cloud providers at growing VC-funded startups that have credits, TAMs and bigger budgets for errors. But hyperscalers are fairly difficult for a pre-scale bootstrapped business, and arguably not designed or optimized for it.
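The back-of-the-envelope math behind those numbers is simple; the hourly rates below are assumptions for illustration, not quoted prices:

```python
# Rough cost comparison for an 8xH100 node left running for 24 hours.
# Both hourly rates are assumptions, not quoted prices.
HOURS = 24
gcp_8xh100_per_hour = 88.0     # assumed hyperscaler on-demand rate
runpod_8xh100_per_hour = 21.0  # assumed 8 GPUs at roughly $2.60/GPU/hr

gcp_cost = gcp_8xh100_per_hour * HOURS        # over $2,000
runpod_cost = runpod_8xh100_per_hour * HOURS  # around $500
```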
There is a way to trigger a script when a budget is hit, but they don't make it easy. You set up a billing notification that triggers a script, which can disable resources (like APIs) automatically.
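A minimal sketch of that pattern, assuming a budget configured to publish to a Pub/Sub topic and a Cloud Function subscribed to it (the project ID is a placeholder, and detaching billing will disrupt every billable service in the project):

```python
# Sketch of the budget-notification kill switch: a Cloud Function
# subscribed to a budget's Pub/Sub topic that detaches the project's
# billing account once actual spend reaches the budgeted amount.
# PROJECT_ID is a placeholder; detaching billing stops (and can
# disrupt) all billable services in the project.
import base64
import json

PROJECT_ID = "my-project"  # placeholder

def over_budget(notification):
    """True if actual cost has reached the budgeted amount."""
    return notification["costAmount"] >= notification["budgetAmount"]

def handle_budget_event(event, context):
    # Budget notifications arrive as base64-encoded JSON in the
    # Pub/Sub message body, with costAmount/budgetAmount fields.
    notification = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    if not over_budget(notification):
        return
    # Imported here so the cost-check logic above stays dependency-free.
    from googleapiclient import discovery
    billing = discovery.build("cloudbilling", "v1")
    billing.projects().updateBillingInfo(
        name=f"projects/{PROJECT_ID}",
        body={"billingAccountName": ""},  # empty name detaches billing
    ).execute()
```

Note this inherits the alert-delay problem discussed elsewhere in the thread: it caps damage, it doesn't guarantee the exact budget.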
Those budget alerts usually aren't instant, though; they only fire when the cloud gets around to reconciling your usage, hours or even days after the damage is done. It's better than nothing, but with runaway spending you can still blow way past your limit.
There is not any practical way to do this effectively.
There are several, rather tedious and incomplete, hacks that you can apply to attempt to prevent billable actions after limits are hit.
But to be frank - they're cop-outs for a real spending cap.
You'd hope these companies would address this themselves - but it's not profitable for them to resolve (it's somewhat involved and requires them to allow people to pay them less)... So my strong vote is to make the contracts that allow this sort of "un-cappable" spending for automated actions void in court.
It is worth noting that both products have had "student" tiers or similar that had fixed credit limits with a hard cliff.
So they've already implemented hard limits. Not offering them is a business decision, NOT a technical one; they're essentially hiding functionality they have.
Make of that what you will. Anyone justifying it should be met with skepticism.
Soft limits would be ideal (x/day with a maximum peak of y/minute), but hey, that's literally negative value to them (work to code, CPU time to implement, less income from "mistakes").
I've heard that Google keeps Google Drive data around for up to two years if your subscription expired and your account is over quota. They could certainly do the same with other cloud storage.
If I reduce my gdrive subscription they don’t simply delete what I have over the new (lower) limit. There is a grace period and it’s standard practice. Why should it be any different in this case?
There is, and it would cause an outage while still not achieving the supposed goal of not going over budget. You don't want to be killing your customer's production over potential misconfigurations/forgotten budgets. Especially when you'd continue to bill them for the storage and other static things like IPs.
It's so much easier for them to have support waive accidental overages.
My understanding is that AWS budget actions also operate on a delay. I love using AWS at work but I'm never giving it my personal credit card as long as I can't turn off auto-billing.
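For what it's worth, wiring up at least the alerting side on AWS is straightforward with the Budgets API. A sketch of the request payload via boto3, where the account ID, amount, and email are placeholders (and, per the above, the alert still lags actual usage):

```python
# Sketch: a monthly cost budget with an email alert at 80% of actual
# spend, built for the AWS Budgets create_budget call. The account ID,
# limit, and email below are placeholders.
def budget_request(account_id, name, limit_usd, email, threshold_pct=80.0):
    """Build the create_budget payload for a monthly cost budget."""
    return dict(
        AccountId=account_id,
        Budget={
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": threshold_pct,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }],
    )

if __name__ == "__main__":
    import boto3  # requires AWS credentials to actually run
    req = budget_request("123456789012", "monthly-cap", 50, "me@example.com")
    boto3.client("budgets").create_budget(**req)
```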
https://github.com/coollabsio/llmhorrors.com/blob/main/CLAUD...
The whole website seems to be focused on promoting the author and their projects more than sharing the information. Just link to the original.
https://www.reddit.com/r/googlecloud/comments/1reqtvi/82000_...
Posted to HN twice recently.
https://news.ycombinator.com/item?id=47231708
https://news.ycombinator.com/item?id=47184182
https://www.web3isgoinggreat.com/
It's Google's blunder that they allowed public tokens to be used for paid functionality.
As far as I saw you can only set up billing alerts, no hard limit.
[0] https://docs.cloud.google.com/billing/docs/how-to/disable-bi... [1] https://support.terra.bio/hc/en-us/articles/360057589931-How...
https://docs.cloud.google.com/billing/docs/how-to/control-us...
Google Cloud does make it easy to set up soft budget alerts via email, though, which is something I had to use a third-party service for with AWS.
There is a free tier, but it varies per service and in any case won't limit anything; it works as if it just gives you some credit to offset the costs.
[0] https://www.geeksforgeeks.org/cloud-computing/aws-educate-st...
They also offered (may still offer) the same thing with AWS Academy.
'By the way old chap, you have gone over your storage limit. Do you want to buy more or delete some stuff?'
Why does my AWS counselor sound British. Am I in eu-west-2?
You can set up a Cloud Function to monitor billing and automatically disable billing for a project if it exceeds your limits, though.
[1] https://news.ycombinator.com/item?id=47156925