Besides the PDF processing value add, Cloudinary effectively acts like S3 here, serving assets directly to the web client. Like S3, it has support for signed/expiring URLs. However, Fiverr opted to use public URLs, not signed ones, for sensitive client-worker communication.
Moreover, it seems like they may be serving public HTML somewhere that links to these files. As a result, hundreds are in Google search results, many containing PII.
Example query: site:fiverr-res.cloudinary.com form 1040
In fact, Fiverr actively buys Google Ads for keywords like "form 1234 filing" despite knowing that it does not adequately secure the resulting work product, causing the preparer to violate the GLBA/FTC Safeguards Rule.
Responsible Disclosure Note -- 40 days have passed since this was reported to the designated vulnerability email (security@fiverr.com). The security team did not reply. Therefore, this is being made public, as it doesn't seem eligible for CVE/CERT processing (it is not really a code vulnerability), and I don't know anyone else who would care about it.
> “Fiverr does not proactively expose users’ private information. The content in question was shared by users in the normal course of marketplace activity to showcase work samples, under agreements and approvals between buyers and sellers. This type of content requires the buyer’s explicit consent before it can be uploaded. As always, any request to remove content is handled promptly by our team."
https://sqmagazine.co.uk/fiverr-security-flaw-private-docume...
It sounds like they are trying to claim the users involved published the links and that's why they are on Google? But how could anyone believe that multiple users intentionally published their SSN?
Re the takedown, I'm also guessing it's Cloudinary doing it. Maybe HTTP Referer based?
ChatGPT recently had a similar case with the sharing feature on conversations leading to publicly indexed convos. That incident would have also matched the implied definition of sharing here.
Each result from the query site:fiverr-res.cloudinary.com form 1040 returns 404
Utterly inexcusable that this is still up after so many hours.
Turns out that during a firewall hardware migration years ago, several units' firewalls were switched to audit mode (not enforcing rules). So an entire institute (health research!) had its whole subnet public with zero firewalls, exposing both the server OS and the iDRAC interfaces. Per Dell, iDRAC isn't even supposed to be on the same VLAN, let alone on the internet.
To top it off, after filing some tickets based on Shodan findings (admittedly not all as serious, e.g. MFP web UIs on the internet), I got pushback from the firewall team for causing units to submit too many change requests.
I also got in trouble with our Qualys analyst for undermining his work because he hadn't gotten to that unit's annual review yet, even though I didn't even have a Qualys login. (And even if I had found it there, since when do we wait for annual reviews to fix something like that?)
It took at least three weeks internally to get it fixed, and by that I mean only the iDRAC IP was blocked, with the server itself still wide open.
And that's only because I mentioned it to my manager (awesome guy, and not formally responsible for firewall rules) after an unrelated no-firewall host incident came through and he authorized an emergency rule.
They also have an ISO 27001 certificate (they try to claim a bunch of AWS's certs by proxy on their security page, which is ironic, as they say AWS stores most of their data while apparently all the uploads are on Cloudinary).
Then they would install WordPress plugins to make the site worse and claim even more "work" was needed.
I documented the entire thing, including my own credentials, and sent it off to Fiverr. Fiverr's response was everything was fine and there was nothing they could do about it, even though it was obvious fraud.
Google never did anything about it either, nor did Shopify.
Given how they handled such a minor situation like that... I guess it shouldn't be surprising they're just asleep at the switch for a major one like this.
Sure, and now they could have their credentials revoked, potentially be held legally liable, and never find work in this field again, which would prevent them from cocking up another company this way
Wouldn't change a thing, other than add another hassle you have to pay for to do your job.
This is the result of carelessness, not someone who didn't know that private data should be private because they weren't certified.
Would the certification require someone to take an official certification test for the framework used?
And therefore we’re only allowed to use frameworks which have certification tests available?
If you want to write some new software, do you have to generate a certification for it and get that approved so people are allowed to use it?
Sounds like a great way to force us all to use Big Company approved software because they’re the only ones with pockets deep enough to play all of the certification games
If I had my way, the certification process starts at the bottom of the stack, ie. you should be expected to have a functional knowledge of assembly instructions, memory management, registers, the call stack, and build up from there. Not that we need to write assembly on a daily basis, but all of the abstractions are built on top of that, and you cannot realistically engineer secure software if you don't understand what is being abstracted away. If you do understand the things being abstracted away, you have the fundamentals necessary to do good work with any programming language or framework. Throw in another certification starting from networking fundamentals if your job involves that. 30 years ago, most professional programmers had this level of understanding as table stakes, so we can hardly say it's an unrealistic burden that's impossible to meet.
Would it be a higher barrier to entry that massively cuts the size of the field working on sensitive software and slows software development down, yes. That is exactly what we need. There was a time when people built bridges that collapsed, then we implemented standards and expected engineers to do real work to make sure that didn't happen. Is that work expensive and expertise-intensive, yes, do bridges still collapse, only very rarely. We are witnessing software bridge collapses on a weekly basis, which should be seen as completely unacceptable. The harm is less obvious than when everyone on a bridge dies, but I do think that routinely leaking millions of people's sensitive data is causing serious harm and likely does lead to people dying in second-order effects.
That said, there are perhaps some factors you are overlooking which matter.
The first is that no amount of certification solves the actual problem (which is that security mistakes are made, often in new and novel ways.)
Secondly, the amount of software needed (and produced) is immense. Bridges require engineers, but the demand for new bridges is tiny. The demand for new software is enormous, and the current rate of production requires many more people than could ever be certified.
In other words, say you only allowed comp-sci graduates with a proper 4 year degree, covering assembly upwards etc. The supply of programmers would drop to what colleges could produce. Which is not nearly enough.
The analogy also falls down a bit on penalty-for-failure: a collapsed bridge kills people, while bugs in my notepad app might lead to information leaks. That's not the same thing.
In truth, for at least the last 35 years, the number of unqualified developers has exceeded the number of qualified ones by orders of magnitude. And there still seems to be no limit to software demand.
Finally, there have been no studies I am aware of suggesting that security flaws are introduced more frequently by non-comp-sci grads than by comp-sci grads. Anecdotally, I don't see that distinction myself. (From my observation, security outcomes correlate with the degree to which the individual considers security to be important.)
And, of course, security issues are not limited to programmers- management has a role to play as well. Should they be certified too?
So, I'm not convinced that your suggestion, however desirable, would solve the problem. And since it's clearly unimplementable in the real world it's a moot argument anyway.
I also wouldn't specifically associate this with college degrees. In fact I think universities are doing a shockingly bad job of producing functional software developers. But, on the other hand, you don't need a university to produce a good programmer. Software development is possibly the most open, information-available discipline in the world. Self-motivated learners can absolutely become competent on their own. The certification should be merit-based, and provide a clear path to learning the material the certification is based on. Many people will go through the effort to educate themselves and learn the required skills, especially if certified software engineers are in high demand and command a higher salary.
Regarding the penalty-for-failure, as I said, the harm is not as immediately apparent as when people die in a bridge collapse. But leaking sensitive information still leads to people dying, even if the connection is not as direct. Doxxing and blackmail frequently lead to suicide, and there are other damages that could lead to a butterfly effect culminating in a higher death rate, or, even if not death, tangible harm. This leak contained birth certificates, IDs, passports, tax documentation, passwords, all kinds of information that could be used to ruin someone's life with identity fraud. There is also, of course, some software in the world that is directly safety-critical, much of the software used in the health field for instance, which is also currently being written by the lowest bidder in many cases.
Regarding management, they don't need a certification but rather consequences for their actions. Currently the incentive structure is such that management is rewarded for cutting costs and is never punished for harming customers. Fiverr, for instance, should be facing an investigation that threatens to shut down the business given that not only did this happen in the first place, and not only did they ignore it for 40 days, but even after it went public the sensitive files were still accessible for 12+ hours (notably, after they were definitely made aware of it, given reports in this thread of people receiving replies from Fiverr about it). Maybe throw in some criminal liability for the people most responsible for a situation this horrible. Management would tighten up real quick.
I don't agree that this is unimplementable in the real world at all. If anything it's a complete abnormality that software development is the way it is, when most other skilled professions are licensed and regulated.
This is how airline pilot certificates work. And in that career, certification actually works. It's not a miracle or unexplainable.
> And therefore we’re only allowed to use frameworks which have certification tests available?
When it's safety-critical, yes, absolutely. A service that handles sensitive PII, such as the one whose "engineers" should be prosecuted for this incident, is definitionally safety-critical.
If you're afraid in that world you'd be unable to work, maybe you deserve to be.
I worked at a company where a customer called in, confused, because when they googled our company, as they did every day to log in to their portal, they found that driver's licenses we stored were available on the public internet.
The devs literally didn't know about insecure direct object references and thought obfuscating IDs was enough. They didn't know how robots.txt worked, didn't know about Google's webmaster tooling, didn't know about sitemaps. They were just the cheapest labor the company could find who could do the thing.
This is a huge portion of outsourced labor in my experience, not because they are worse overseas in any respect, but because the people looking for cheap labor were always looking for the cheapest labor and had no idea how that applied to the actual technical work of running their business.
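The missing piece in that story, insecure direct object references, boils down to checking authorization on every fetch rather than trusting that an ID is hard to guess. A minimal sketch, with a hypothetical in-memory store and made-up names:

```python
# Hypothetical in-memory store; a real system would use a database.
DOCUMENTS = {
    "dl-4821": {"owner": "alice", "data": "drivers-license scan"},
}

def fetch_document(doc_id, requesting_user):
    """Check ownership on every fetch: knowing (or guessing) an ID is not authorization."""
    doc = DOCUMENTS.get(doc_id)
    # Return the same "not found" answer for missing and forbidden documents,
    # so outsiders can't probe which IDs exist.
    if doc is None or doc["owner"] != requesting_user:
        return None
    return doc["data"]
```

The ownership check is the whole fix; obfuscated or random IDs alone just make enumeration slower, not impossible.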
Thats the problem right there. The company doesn't care. No amount of personal certifications is going to fix that.
It MUST be on the companies. They should be fined out of existence for such breaches, and they would quickly change their tune.
Looks like this is a great opportunity for an object lesson. Let’s see how it goes…
As far as certification stuff…
Civil engineering has had licensing forever. That’s because Bad Things Happen, when they make mistakes.
I do think that it would be a good idea to score/certify critical infrastructure stuff. That might involve certification of the people that make it, but it should certainly involve penalties for the people responsible. That might include the authors, but it should probably also include the folks that decide to use the bad code.
I know that ISO 9000 is an attempt to address this kind of thing. In my opinion, it's kind of a mess. I've worked in ISO 9000 shops, and it's not much fun. The thing you learn, pretty quickly, is how to end-run the process: it's so heavy that it basically stops all forward progress. It doesn't have to, but it often does.
Mistakes get made. If you design carefully, these mistakes won’t cause real damage.
I just figured out that an app I wrote, that’s been out for two years, has an embarrassing bug (mea culpa). I’ll get it fixed today.
Because I’m pretty careful, it doesn’t affect stuff like user privacy. It just introduces performance overhead, in one operation, so the fix will mean that the app will suddenly speed up.
I’m not sure that certification would have solved it. My security mindset is why user privacy wasn’t affected, and that comes from experience.
> Good judgment comes from experience. Experience comes from bad judgement.
1. Get paid more (as fewer fake "engineers" are available for the responsibility).
2. Push back harder (or at least document in detail) on malpractice during development. Manager did not listen to your warnings? Document it and when shit hits the fan, the manager gets the stick instead of you.
Hitting companies with monetary fines does not work. Hitting the employees with jail time will make sure they don't sign off on dangerous or known-problematic systems.
Manager not listening? Remind them they will face a trial if the issue does surface.
What!? So, when you can't switch jobs because the market is bad, or for any other reason, your choices are: 1) quit and lose the income (which you can't afford) or 2) sign off on whatever and accept the risk of jail time?
If you are certified, chances are you will have lots of choices to work.
Software devs have been insanely privileged, for the last couple of decades. That seems to be changing.
That's exactly what certification or licensure does; it imposes financial, civil, and criminal penalties for malpractice.
The liability of incurring penalties quickly outweighs the benefit of arbitraging costs with an unqualified practitioner.
If everyone knows that messing up security gets you in real trouble and the company loses real money, and it happens all the time, and it's not just "Facebook fined $x million for doing shady stuff", then I think the industry will adapt.
Like when GDPR got released and no matter if I thought we are or are not handling PII, I had to read up and double-check my assumptions just because it was being talked about all over the place and it would be embarrassing to be caught with your pants down when you didn't actually intend to do a shady thing.
They don't care. Either it's never enough to make them care, or the company can just go bankrupt and you go do something else.
If you or your manager has the threat of jail in the back of their mind, it's no longer just someone else's money being lost, it's personal.
> If everyone knows that messing up security gets you in real trouble and the company loses real money
There are already huge fines on paper for this, but the fines are never enough. It's always factored into the "cost of doing business". Also, it's still someone else's money; why would an engineer care?
Please show me a GDPR fine that hit hard enough to scare companies into not fucking up? Evidently here it was not enough for Fiverr.
Edit: Just to provide an example, Takata airbags have been recalled massively (if you don't know why, look it up) but the company is now bankrupted and who is footing the bill? Their customers.
You cannot impose a fine on them, as the company is bankrupt (now, but it was always the plan). They deliberately sold dangerous airbags, and now what can you do so it doesn't happen again? Fine them some more? Or maybe throw a few execs in jail, because they knew of the problem and continued as usual.
That only gives those in power another way to push people into toeing the line. There's enough corporate authoritarianism these days as it is already. Give Stallman's "Right to Read" a read. His dystopia is exactly where we're going to be headed quickly if we keep demanding someone to "do something".
"The optimal amount of fraud is nonzero."
"Those who give up freedom for security deserve neither."
> Jobs with access to/control over millions of people's data should require some kind of genuine software engineering certification
FAANG, Fortune 500, etc., almost universally go out of their way to violate user freedom in pursuit of profit. Regulation is practically the only way to force megacorps to respect users' rights and improve their security, as evidenced by right-to-repair, surveillance/privacy, and so on.
And none of that has anything to do with users' individual rights to create, run, and modify their own software.
(Yes, regulatory capture exists, no, it doesn't mean all regulation is bad.)
Companies do that because they want to attract a certain kind of customer and have enough spare manpower and money to go through it all year long.
…or they want to hold very sensitive data that requires *proven* processes, training, and skills.
My firm has several of these, and we have to keep a full compliance team and *always* have some auditor on site.
No one does it just because.
Plumbers. Electricians. Lawyers. Doctors. Hell, I have to get a license to run my own business.
Why shouldn't software come with a branch for licenses if you're working with sensitive data?
The plumber siliconed all the shower valves to the fiberglass walls without screwing them to a backplate.
Unsurprisingly the builder is now out of business.
Still get a few a week, but at least it’s public and amusing.
We are in the age of AI-slop AI-everything AI-break-it AI-fix-it.
Software companies are competing with each other on how low they can push the quality and still get away with it.
There's no reward or incentive for paying attention to the details or the quality. In fact you will get penalised for it.
I wonder if somewhere like Wired/Ars Technica/404media might pick this up?
This is too funny
https://fiverr-res.cloudinary.com/image/upload/f_pdf,q_auto/...
[0] https://www.reddit.com/r/Fiverr/comments/1slzoey/other_atten...
Might also want to add El Reg [1] to the list.
1: https://www.theregister.com/
https://www.fiverr.com/.well-known/security.txt only has "Contact: security@fiverr.com" and in their help pages they say "Fiverr operates a Bug Bounty program in collaboration with BugCrowd. If you discover a vulnerability, please reach out to security@fiverr.com to receive information about how to participate in our program."
Last year Fiverr started to push AI to the detriment of their freelancers, as well as a new "success score" metric, without ever specifying how these metrics are calculated, making it very hard for freelancers to do anything about it. This caused many accounts to "lose value" and thus rank lower in searches, causing a drop in income.
I've reported this on the Fiverr Freelancer Forums, let's see how long my post stays up...
Wouldn't this make some other accounts rank higher in searches, then? I mean, it couldn't have been a problem that affected absolutely everyone, so for someone it must have been a positive change.
It's very hard to improve a metric when you don't know which criteria affect it. I've reached out to Fiverr regarding this, and they never bothered to tell anyone what impacted your Success Score. "Just do better", they told me.
When I found out years down the track that I paid like $1000 for a “premium experience” to be offered 6 or so stencils like this, I was pretty furious. Luckily, I picked none of them, and made the artist draw it exactly as I later described.
The other also runs an insurance company (Lemonade) and just posted his drink to celebrate their 1B customers.
I never used their platform but tried a couple jobs on Upwork and drove Uber for 1000 trips. It is absolutely enraging how the CEO class lives day to day like they are some sort of "visionary" for taking a cut of other people's work while taking zero responsibility for even their own app's quality.
At one point the Uber app still told you to call a phone number for some support paths that had a recording telling you to use the app instead. Companies have systematically cut any kind of support, testing, and apparently security.
This also ties in nicely with the Delve debacle about how perfunctory those security certifications are.
This complaint was sent to Google, probably because the cloudinary.com URL appeared in their search results.
It's doubtful anyone at Fiverr was made aware of this - unless Google typically forwards these complaints to the actual host of the offending URL. Even then, it would go to Cloudinary who would in turn need to notify their client. Many hops with plenty of "someone else's problem" barriers for the message to overcome.
"You’re the second person to flag this issue to us
Please note that our records show no contact with Fiverr security regarding this matter ~40 days ago unlike the poster claims. We are currently working to resolve the situation"
(technically, I guess that doesn't prove anything other than it is in my Sent folder? it has a message ID but I guess only the purelymail admin could confirm that)
In any event, this should never have required an outside reminder. The indexing issue may be something non-obvious, but the core decision not to use signed/expiring URLs is nothing less than good old security by obscurity.
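For reference, signed/expiring URLs are not exotic; the core is a few lines of HMAC. This is a generic sketch, not Cloudinary's actual scheme (their docs cover authenticated delivery); the secret and paths are made up:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical key, never shipped to clients

def sign_url(path, ttl_seconds=3600, now=None):
    """Return the path plus an expiry timestamp and an HMAC over both."""
    expires = (int(time.time()) if now is None else now) + ttl_seconds
    payload = f"{path}?expires={expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&sig={sig}"

def verify_url(url, now=None):
    """Reject tampered or expired URLs; a leaked or indexed link simply goes stale."""
    payload, _, sig = url.rpartition("&sig=")
    if not payload:
        return False
    try:
        expires = int(payload.rpartition("expires=")[2])
    except ValueError:
        return False
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    current = int(time.time()) if now is None else now
    return hmac.compare_digest(sig, expected) and current < expires
```

With something like this in place, a URL that ends up in a search index or a shared chat stops working after the TTL, instead of serving someone's tax return forever.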
Basically, they aren't set up for anyone to actually contact them and expect a resolution.
For sure their internal metrics are all green and solved tickets are on the rise.
I don't think it even comes down to "lying". It's possible that they genuinely believe they didn't receive contact, but given that they are verifiably completely and totally incompetent and have no right to be employed in their current role, they've earned exactly zero benefit of doubt.
“To be clear, this is not a cyber incident. Fiverr does not proactively expose users’ private information. The content in question was shared by users in the normal course of marketplace activity to showcase work samples, under agreements and approvals between buyers and sellers. This type of content requires the buyer’s consent before it can be uploaded. As always, any request to remove content is handled promptly by our team.”
I’m sure there’s a good reason for that. I do it in a server that I publish for general use, but won’t do it for the server that I control, as I make sure that one reads headers.
Some PHP servers (and, I suspect, other APIs) ignore auth headers, so you need to set general-purpose frameworks and servers to use GET arguments, but that’s a security issue, for exactly the reason you state: too easy to leak logins. If you use headers, then copying and pasting URLs won’t leak logins.
In any case, the token should be timed, but that’s a fairly weak precaution.
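To make the leak path concrete, here's a minimal illustration (endpoint and token are made up, and the request is only constructed, never sent) of why credentials belong in a header rather than in the URL:

```python
import urllib.request

API = "https://api.example.com/v1/files/123"  # hypothetical endpoint
TOKEN = "s3cr3t-token"                         # hypothetical credential

# Leaky: the token is part of the URL, so it ends up in server logs,
# browser history, Referer headers, and anything a user copy-pastes.
leaky_url = f"{API}?token={TOKEN}"

# Safer: the token travels in the Authorization header, which is not part
# of the shareable link and is normally scrubbed from access logs.
req = urllib.request.Request(API, headers={"Authorization": f"Bearer {TOKEN}"})
```

The shareable string (`req.full_url`) never contains the token, which is exactly the property a copy-pasted link needs.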
Once the leak is plugged, I would hope that Fiverr gets absolutely raked over the coals, this is egregious.
People should always have their credit frozen no matter what. It's free and only takes a few minutes to unfreeze when you need to apply for credit
Yes it's a little bit inconvenient but so is suddenly having a car, insurance, and several iPhones in your name when somebody steals your identity...
https://missouriindependent.com/2021/10/14/missouri-governor...
I know this is all Fiverr's fault for allegedly missing the responsible disclosure, but is this really the ideal way for us to discuss it, with these particular examples? I ask not to spare Fiverr, but I would be so mad if I were the first result in OP's query, or if my personal info were linked directly...
Sometimes users email him links to them
I stopped doing that after one guy said “why shouldn’t I use @dang when you’ll just send an email for me”
If you want dang to see your comment and reply (and remember it’s dang/tomhow now), email a link to your comment to the mods using the footer contact link along with a note
https://fiverr-res.cloudinary.com/image/upload/f_pdf,q_auto/...
It'd be like putting up an advert and then trying to sue anyone who sees it.
Surely the entire point of the court system is to determine who, if anyone, is at fault.
In this case, leaving a paper trail of having accessed unauthorized confidential information looks a lot like being in the wrong, so the potential hassle is a lot higher. You can argue it's not unauthorized after all, and you'll likely win, but you may need to expend time and energy arguing in the first place. And it could be significant.
Edit: In addition, (a) accidentally opening a confidential document (oops, close immediately) and (b) taking a screenshot could be different legally (NAL, yada yada); doing the latter could make it a lot harder to defend yourself.
This is bad.
(Fiverr itself uses Bugcrowd but is private, having to first email their SOC as I did.)
"To be clear, this is not a cyber incident. Fiverr does not proactively expose users' private information. The content in question was shared by users in the normal course of marketplace activity to showcase work samples, under agreements and approvals between buyers and sellers. This type of content requires the buyer's consent before it can be uploaded. As always, any request to remove content is handled promptly by our team."
https://x.com/fiverr/status/2044389801495773339?s=20
The best to you.
People are asking for AWS help and giving root passwords to random contractors. A lot of people are asking for CPA letters for loans and help with tax problems, but their budget is under $100. And outright fraud posts are often seen, asking people to open bank accounts or otherwise bypass KYC.
Upwork now has an AI feature to help write job posts, so you constantly see things like "If you want to attract freelancers like X, I can change it." Now the job posts are all written like corporate ones, talking about being "highly experienced in X" but paying almost nothing. Half the time the clients don't even know the words in their own post. And Upwork charges every time someone applies to a job, then more to boost to the top of the list, because every job supposedly gets 30+ applications.
"I want to build a website" gets converted into:
"I'm looking for someone with 5+ years of 'hands on' React experience"
Just forwarded this post to a few members of Congress.
> "I wake up happy, dancing all in the mirror and shit. My confidence is so high I’m practicing how to accept a job I haven’t even applied for yet."
I found the author on Amazon and the book still hasn't been released
this is sad
Also, a version of this appears to be currently sold on Amazon for $15 USD.
Just insane
https://cyberinsider.com/fiverr-exposes-sensitive-data-via-p...
https://cybernews.com/security/fiverr-leak-exposes-user-ids-...
Responsible Disclosure Note -- 40 days have passed since this was reported to the designated vulnerability email (security@fiverr.com). The security team did not reply.
Would be interesting if someone with an account can check if they are visible to intended users or not, and if so, if their mitigation is robust (signed URLs?).
Edit: I'm beginning to wonder if they might be locked out of their own site at this point. How hard could it be to just shut down the asset server until they get it sorted?
I'm not taking sides either way, but if you are of the all-in-on-AI persuasion, as they are, shouldn't this be the ideal use case? It absolutely could have handled adding URL signing.
You really can't make this shit up: https://www.linkedin.com/feed/update/urn:li:activity:7445526...
The real question is: will Fiverr be the first company to truly crash and burn from an "AI-first" approach? Go LLM, go mayhem!
No. Nobody will care.
When I reported an issue and got no response, I sat on it for 6 years, reported it again, and they took the whole site down without reaching out to me. I never quite got it, but if people are doing this, it makes sense not to acknowledge any report and just play deaf.
Have bad actors already found it? Who knows?
So if Fiverr isn’t going to fix it then the next best thing is to warn people.
I did include "bug bounty" in the email subject since they claimed to have a private program. Other than that, no mention of any kind of compensation. It probably doesn't even have any kind of resume value since it's not an actual code flaw/CVE, just an "unlocked door."
This is not how Google works.
Today, a photo file might be hosted at:
But it used to be a little closer to:
And no auth required, URL only!
So? That’s indeed how Google works.
Google does not work how OP describes it.
I’ve investigated similar incidents in the past on other platforms, it was always user error causing links to be public.
“Fiverr might be hosting public HTML somewhere” seems like an entirely reasonable alternative phrasing of “these links must be linked from somewhere [that Google can crawl]”, at least to someone who is only superficially familiar with how search works.
The distinction you imply is obvious is not, and your point is thus rather confusing to someone who is not you.
Even still, Fiverr could very well have GDPR/CCPA/etc liability as the host of these files, because they related to its services, it's not just a generic file host.
It’s bizarre UX if you link a file to someone and the link doesn’t work.
You need links to pages either from your own website or backlinks from other websites. Alternatively if the page is in your sitemap then Google will typically pick it up or you can manually submit it for indexing. For important pages you would typically want internal links, backlinks, and have it in your sitemap.
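As an illustration, a minimal sitemap.xml along those lines (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/important-page</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
</urlset>
```

Submitting it in Search Console, or referencing it from robots.txt with a `Sitemap:` line, tells Google where to find it, though internal links and backlinks still carry more weight for discovery and ranking.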