I found a Vulnerability. They found a Lawyer
471 points by toomuchtodo 12 hours ago | 201 comments

janalsncm 10 hours ago
Three thoughts from someone with no expertise.

1) If you make legal disclosure too hard, the only way you will find out is via criminals.

2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.

3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII is secure. I’m not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all users. That would change the incentives so that the most basic vulnerabilities are gone, and software engineers become more economical than lawyers.

reply
witnessme 54 seconds ago
Agree with the points. Cybersec audits are mandatory for insurance companies in most countries. This list needs to be expanded.
reply
godelski 7 hours ago
In other industries there are professional engineers, people who have legal accountability. I wonder if the CS world will move that way, especially with AI, since those engineers are the ones who sign things off.

For people unfamiliar: most engineers aren't professional engineers. Your average engineer is still held to more legal standards than a software engineer, and they are legally obligated to push back against management when they think there's danger or an ethics violation, though that's a high bar and very few ever get in legal trouble, only the most egregious cases. Professional engineers are the ones who check all the plans and the inspections. They're more like a supervisor, someone who can look at the whole picture. They get paid a lot more for their work, but they're also essential to making sure things are safe. They also end up having a lot of power/authority, though at the cost of liability. Think of how in the military a doctor can overrule all others (I'm sure you've seen this in a movie). Your average military doctor or nurse can't do that, but the senior ones can, though it's rare and very circumstantial.

reply
the_hoffa 7 hours ago
You'd be surprised how many SEs would love for this to happen. The biggest reason, as you said, is being able to push back.

Having worked in low-level embedded systems that could be considered "system critical", it's a horrible feeling knowing what's in that code and having no actual recourse other than quitting (which I have done on a few occasions because I did not want to be tied to that disaster waiting to happen).

I actually started a legal framework and got some basic bills together (mostly wording) and presented this to many of my colleagues, all agreed it was needed and loved it, and a few lawyers said the bill/framework was sound .. even had some carve-outs for "mom-n-pops" and some other "obvious" things (like allowing for a transition into it).

Why didn't I push it through? 2 reasons:

1.) I'd likely be blackballed (if not outright killed) because "the powers that be" (e.g. large corp's in software) would absolutely -hate- this ... having actual accountability AND having to pay higher wages.

2.) Doing what I wanted would require federal intervention, and the climate has not been ripe for new regulations, let alone governing bodies, in well over a decade.

Hell, I even tried to get my PE in Software, but right as I was going to start the process, the PE for Software was removed from my state (and isn't likely to ever come back).

I 100% agree we should even have a PE for Software, but it's not likely to happen any time soon because Software without accountability and regulation makes WAY too much money ... :(

reply
godelski 6 hours ago

  > You'd be surprised how many SE's would love for this to happen
I'm one of them, and for exactly the reason you say.

I worked as a physical engineer previously and I think the existence of PEs changes the nature of the game. I felt much more empowered to "talk back" to my boss and question them. It was natural to do that and even encouraged. If something is wrong, everyone wants to know. Better to accept disruption, and even deal with naive young engineers, than to harm someone. It is also worth doing because it makes those engineers learn faster and it makes the products improve faster (insights can come from anywhere).

Part of the reason I don't associate my name with my account is so that I can talk more freely. I absolutely love software (and yes, even AI, despite what some might think given my comments) but I do really dislike how much deception there is in our industry.

I do think it is on us as employees to steer the ship. If we don't think about what we're building and the consequences of it, then our ship is beholden to the tides, not us. It is up to us to make the world a better place. It is up to us to make sure that our ship is headed towards utopia rather than dystopia (even if both are more of an idea than reality). I'd argue that if it were up to the tides then we'd end up crashing into the rocks. It's much easier to avoid that if we're managing the ship routinely than in a panic when we're headed in that direction.

I think software has the capacity to make the world a far better place, and that we can both do good and make money at the same time. But I also think the system naturally will disempower us. When we fight against the tides things are naturally harder and may even look like we're moving slower. But I think we often confuse speed and velocity, frankly, because direction is difficult to understand or predict. Still, it is best that we try and not just abdicate those decisions.

The world is complex, so when things work they are in an unstable equilibrium, which means small perturbations knock us off. Like one ship getting stuck shutting down a global economy. So it takes a million people and a billion tiny actions to make things go right and stay right (easier to stay than fix). But many of the problems we hate and are frustrated by are more stable states: things like how wealth pools up, gathered by only a few, and how power does the same. Obviously my feelings extend beyond software engineering, but my belief is that if we want the world to be a better place it takes all of us. The more who are willing to do something, the easier it gets.

I'd also argue that most people don't need to do anything that difficult. The benefit and detriment of a complex machine is that small actions have larger consequences. Just because you're a small cog doesn't mean you have no power. You don't need to be a big cog to change the world, although you're unlikely to get recognition.

reply
Avicebron 36 minutes ago
I also come from a more "traditional engineering" background, with PEs and a heavier sense of responsibility/ethics(?). I definitely think that's where it's going, although in my somewhat biased opinion, that's why the bar for traditional engineering in terms of students and expected skill and intuition was much higher than with CS/CE, which means the get rich quick scheme nature of it might go away.
reply
BobbyTables2 3 hours ago
I don’t think the current cost structure of software development would support a professional engineer signing their name on releases or the required skill level of the others to enable such …

We’d actually have to respect software development as an important task and not a cost to be minimized and outsourced.

reply
user3939382 60 minutes ago
We check the output of engineers; that's what infra audits and certs are for. We basically tell industry: if you want to waste your money on poor engineers whose output doesn’t certify, go ahead.

You could do that with civil engineering: anyone gets to design bridges, and once the bridge is done, we inspect it. Sorry, X isn't redundant, your engineering is bad, tear it down.

reply
psadauskas 8 hours ago
Regarding your 2), in other industries and engineering professions, the architect (or civil engineer, or electrical engineer) who signed off carries insurance, and often is licensed by the state.

I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet, but I often wonder if we should require some sort of certification and insurance for large businesses sites that handle personal info or money. There'd be a Certified Professional Software Engineer that has to sign off on it, and thus maybe has the clout to push back on being forced to implement whatever dumb idea an MBA has to drive engagement or short-term sales.

Maybe. It's not like it's worked very well lately for Boeing or Volkswagen.

reply
godelski 7 hours ago

  > I absolutely do not want to gatekeep beginners from being able to publish their work on the open internet
FWIW there is no barrier like that for your physical engineers. Even though, as you note, professional engineers exist. Most engineers aren't professional engineers though, and that's why the barrier doesn't exist. We can probably follow a similar framing. I mean it is already more common for licensing to be attached to even random software and that's not true for the engineer's equivalents.
reply
Onavo 8 hours ago
Oh there have been many cases where software engineers who are not professional engineers with the engineering mafia designation get sidelined by authorities for lacking standing. We absolutely should get rid of the engineering mafias and unions.

https://ij.org/press-release/oregon-engineer-makes-history-w...

reply
henryfjordan 8 hours ago
It's kinda wild that you don't need to be a professional engineer to store PII. The GDPR and other frameworks for PII usually do have a minimum size (in # of users) before they apply, which would help hobbyists. The same could apply for the licensure requirement.

But also maybe hobbyists don't have any business storing PII at scale just like they have no business building public bridges or commercial aircraft.

reply
knollimar 6 hours ago
I'm wary of centralizing the powers of the web like that.
reply
Xelbair 5 hours ago
The web is already mostly centralized, and corporations, which should be scrutinized in the way they handle security, PII, and overall software issues, are without oversight.

It is also a matter of respect towards professionals. If a civil engineer says that something is illegal/dangerous/unfeasible, their word is taken into account and not dismissed - unlike in, broadly speaking, IT.

reply
patrakov 41 seconds ago
The question is who defines security.

I, as a self-proclaimed dictator of my empire, require all chat applications developed or deployed in my empire to send copies of all chat messages to the National Archive for backup in a readable form. I appoint Professional Software Engineers to inspect and certify apps to actually do that. Distribution of non-certified applications to the public or other forms of their deployment is prohibited and is punishable by jail time.

Sounds familiar?

reply
knollimar 5 hours ago
I just don't feel we want the overhead on software. I'm in an industry with PEs and I have beef with the way it works for physical things.

PII isn't nearly as big a deal as a life, tbh. I'd rather not gatekeep PII handling behind degrees. I want more accountability, but PEs for software seem ill-suited for the problem. Principally, software is ever evolving and distributed; a building or bridge is mostly done.

A PR is not evaluated in a vacuum

reply
closewith 4 hours ago
GDPR doesn't have any minimum size before applying. There's a household exemption for personal use, but if you have one external user, you're regulated.
reply
ash_091 58 minutes ago
I generally agree with you, but:

> If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper

To match this metaphor to TFA, the architect has to break in to someone else's apartment to prove there's a flaw. IANAL but I'm not positive that "I'm an architect and I noticed a crack in my apartment, so I immediately broke in to the apartments of three neighbours to see if they also had cracks" would be much of a defence against a trespass/B&E charge.

reply
otikik 44 minutes ago
Nah, this is more like “I put a probe camera in the crack and I ended up seeing my neighbor’s living room for a second”.
reply
Onavo 8 hours ago
There are jurisdictions (and cultures) where truth is not an absolute defence against defamation. In other words, it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet. The nail that sticks out gets hammered down.

Given that this is Malta in particular, the author probably wants to avoid going there for a bit. It's a country full of organized crime and corruption where people like him would end up with convenient accidents.

reply
godelski 7 hours ago

  > it's one thing to disclose the issue to the authorities, it's another to go to the press and trumpet it on the internet.
At least in the US there is a path of escalation. Usually if you have first contacted those who have authority over you, then you're fine. There are exceptions in both directions: cases where you aren't fine, and cases where you can skip that step. Government work is different. For example, Snowden probably doesn't get whistleblower protection because he didn't first leak to Congress. It's arguable, though, but also IANAL.
reply
cryptonector 2 hours ago
Hey TFA, other people have gone to prison for finding monotonic user/account IDs and _testing_ their hunch to see if it's true. See, doing that puts you at great risk of violating the CFAA. Basically, the moment you knew they were allocating account IDs monotonically and with a default password was the moment you had a vulnerability that you could report without fear of prosecution, but the moment you tested that vulnerability is the moment you may have broken the law.

Writing about it is essentially confessing. You need a lawyer, and a good one. And you need to read about these things.

reply
Hnrobert42 9 hours ago
I use a different email address for every service. About 15 years ago, I began getting spam at my diversalertnetwork email address. I emailed DAN to tell them they'd been breached. They responded with an email telling me how to change my password.

I guess I should feel lucky they didn't try to have me criminally prosecuted.

reply
ipaddr 8 hours ago
That could be a hack or something the company sold to a third party.
reply
kwanbix 7 hours ago
Same with me. I started to get spam at the email address I used for a Portuguese airline. They didn't even respond.
reply
stevage 10 hours ago
Since the author is apparently afraid to name the organisation in question, it seems the legal threats have worked perfectly.
reply
pavel_lishin 10 hours ago
Or maybe in the diving community, "Maltese insurance company for divers" is about as subtle as "Bird-themed social network with blue checkmarks".
reply
frederikvs 10 hours ago
I'm a diver, DAN is the only company I can name that specialises in diving insurance.

Huh, apparently they're registered in Malta, what a coincidence...

reply
bpavuk 9 hours ago
checks out with both Perplexity[0] and top Google results

[0]: https://www.perplexity.ai/search/maltese-scuba-diving-insura...

reply
saxelsen 10 hours ago
There's pretty much only one global insurer affiliated with dive schools, so this is spot on
reply
bpavuk 9 hours ago
well, it is. A quick search revealed the name of a certain big player, although there are some other local companies whose policies can be extended to "extreme sports"

https://www.reddit.com/r/scuba/comments/1r9fn7u/apparently_a...

reply
kube-system 7 hours ago
Bluesky?
reply
duckmysick 7 hours ago
That's a butterfly.
reply
honeybadger1 5 hours ago
[flagged]
reply
dghlsakjg 52 minutes ago
There is precisely one large, internationally well known company that offers dive insurance and is based in Malta.

They left more than enough clues to figure out that this is DAN (Divers Alert Network) Europe.

Ironically, this will garner far more attention and focus on them than if they had disclosed this quietly without threats.

reply
tuhgdetzhh 10 hours ago
If you follow the jurisdictional trail in the post, the field narrows quickly. The author describes a major international diving insurer, an instructor driven student registration workflow, GDPR applicability, and explicit involvement of CSIRT Malta under the Maltese National Coordinated Vulnerability Disclosure Policy. That combination is highly specific.

There are only a few globally relevant diving insurers. DAN America is US based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.

reply
da_chicken 8 hours ago
Maybe.

Or maybe they took what they know to sell to the black hats.

reply
nomel 7 hours ago
This is legal, correct?
reply
wildzzz 9 hours ago
[flagged]
reply
vaylian 11 hours ago
> Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.

Why sign anything at all? The company was obviously not interested in cooperation, but in domination.

reply
chuckadams 8 hours ago
Getting them to agree to your terms pretty much nullifies their domination strategy, and in fact becomes legally binding on them.
reply
dwedge 11 hours ago
[flagged]
reply
gchamonlive 11 hours ago
Because you are hijacking a thread. Wanna trash the site's design? You should open a top-level thread instead.
reply
magicalhippo 11 hours ago
> Wanna trash the site's design, you should open a top level thread instead.

Or better, don't[1]:

Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

[1]: https://news.ycombinator.com/newsguidelines.html

reply
gchamonlive 10 hours ago
Exactly, thanks
reply
dwedge 10 hours ago
Being impossible to read is not common
reply
magicalhippo 10 hours ago
Get a better browser I'd say. Firefox Reader mode makes short work of such sites, including the submission. I use it very often, so I can enjoy the content rather than get frustrated over styling issues.
reply
dwedge 10 hours ago
Ah then I deserve it. I didn't notice from the app I was using that it wasn't all the way to the left
reply
MBCook 11 hours ago
Your response didn’t have anything to do with the parent comment. And I’m on a phone (iOS) and had no issue reading it, for the record.
reply
capitainenemo 11 hours ago
As well as contrast issues, it could also be that there was a JavaScript error on their end (or they don't whitelist sites for JS by default). This is unfortunately one of those sites that renders a completely blank page unless you use reader mode, enable JS, or disable CSS.

If it was a random JS error, well, that reminds me of: https://www.kryogenix.org/code/browser/everyonehasjs.html

reply
0sdi 11 hours ago
Is this Divers Alert Network (DAN) Europe, and its insurance subsidiary, IDA Insurance Limited?
reply
locusofself 10 hours ago
Another commenter basically deduced this
reply
n_u 8 hours ago
> The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It's so common it has a name - the chilling effect.

Governments and companies talk a big game about how important cybersecurity is. I'd like to see some legislation to prevent companies and governments [1] behaving with unwarranted hostility to security researchers who are helping them.

[1] https://news.ycombinator.com/item?id=46814614

reply
nilslindemann 8 hours ago
AFAIK, what this dude did – running a script which tries every password and actually accessing personal data of other people – is illegal in Germany. The reasoning is: just because the door of a car which is not yours is open, you have no right to sit inside and start the motor, even if you just want to honk the horn to inform the guy that he has left the door open.

https://www.nilsbecker.de/rechtliche-grauzonen-fuer-ethische...

reply
zaptheimpaler 5 hours ago
Maybe the law should be changed then. The companies that have this level of disregard for security in 2026 are not going to change without either a good samaritan or a data breach.
reply
tokenless 4 hours ago
He didn't have to crack the site. He could have reported up to that point.

We need a change in law, but more to do with fining security breaches or requiring certification to run a site above X number of users.

reply
DANmode 3 hours ago
Showing up without a PoC complicates things.
reply
tokenless 2 hours ago
You can lead a horse to water, as they say.
reply
SpicyLemonZest 49 minutes ago
I understand why the author thought that way, but showing up with private data that the company is obligated to protect complicates things quite a lot more.

I've dealt with security issues a number of times over my career, and I'm genuinely unsure what my legal obligations would be in response to an email like this. He says the company has committed "multiple GDPR violations"; is there something I need to say in response to preserve any defenses the company may have or minimize the fines? What must I do to ensure that he does eventually delete the customer data? If I work with him before the data is deleted, or engage in joint debugging that gives him the opportunity to exfiltrate additional data, is there a risk that I could be liable for failing to protect the data from him?

There's really no option when getting an email like this other than immediately escalating to your lawyers and having them handle all further communication.

reply
habinero 17 minutes ago
It's illegal in the US, too. This is an incredibly stupid thing to do. You never, ever test on other people's accounts. Once you know about the vulnerability, you stop and report it.

Knowing the front door is unlocked does not mean you can go inside.

reply
tokenless 4 hours ago
I agree. You have to know when to stop.

No expert, but I assume anything you do that is good-faith usage of the site is OK. And take screenshots and report the potential problem. But making a Python script to pull down data once you know? That is like getting into that car.

A real-life example of what would be fine: you walk past a bank at midnight when it is unstaffed and the doors are open, so you have access to the lobby (and it isn't just the night ATM area). You call the police on the non-emergency number and let them know.

reply
DANmode 8 hours ago
Hopefully no criminals turn up to do the illegal thing.
reply
lucb1e 6 hours ago
You don't need to retrieve other people's data to demonstrate the vulnerability.

It's readily evident that people have an account with a default password on the site for some amount of time, and some of them indefinitely. You know what data is in the account (as the person who creates the accounts) and you know the IDs are incremental. You can do the login request and never use the retrieved access/session token (or use a HEAD request to avoid getting body data but still see the 200 OK for the login) if you want to beat the dead horse of "there exist users who don't configure a strong password when not required to". OP evidenced that they went beyond that and saw at least the date of birth of a user on there by saying "I found underage students on your site" in the email to the organization
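
The minimal-footprint check described there can be sketched in Python. The portal behaviour below is a toy stand-in running in-process (the paths and parameters are invented for illustration); the point is that a HEAD request lets you observe the 200 OK for a default-password login without any record data ever crossing the wire:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Toy stand-in for a portal whose /login accepts a default password.
# Everything here (paths, query parameters) is invented for illustration.
class Portal(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # Headers only: the server never writes a body for a HEAD request
        self.send_response(200 if self.path.startswith("/login") else 404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), Portal)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The researcher's check: see the 200 OK without retrieving any record
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/login?user=1001")
resp = conn.getresponse()
status, body = resp.status, resp.read()  # read() is empty for HEAD
conn.close()
server.shutdown()
```

The status code proves the weak login exists, while `body` stays empty, which is exactly the "minimum set of actions" distinction being drawn.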

If laws don't make it illegal to do this kind of thing, how would you differentiate between the white hat and the black hat? The former can choose to do the minimum set of actions necessary to verify and report the weakness, while the latter writes code to dump the whole database. That's a choice

To be fair, not everyone is aware that this line exists. It's common to prove the vulnerability, and this code does that as well. It's also sometimes extra work (setting a custom request method, say) to limit what the script retrieves, and just not the default kind of code you're used to writing for your study/job. Going too far happens easily in that sense. So the rules are to be taken leniently, and the circumstances and subsequent actions of the hacker matter. But I can see why the German rules are this way, and the Dutch ones are similar, for example.

reply
DANmode 4 hours ago
> You don't need to retrieve other people's data to demonstrate the vulnerability.

If you’re reporting to a nontechnical team…which sometimes you are…sometimes you do?

reply
lucb1e 3 hours ago
If the nontechnical team is refusing to forward it to whoever maintains the system, they apparently see no problem and you could disclose it to a journalist or the public. Or you could try it via the national CERT route, have them talk to this organization and tell them it's real. In some cases you could send a proof of concept exploit that you say you haven't run, but they can, to verify the bug. You can choose to retrieve only your own record, or that of someone who gave consent. You can ask the organization "since you think the vulnerability is not real, do you mind if I retrieve 1 record for the sole purpose of sending you this data and prove it is real?"

In jurisdictions like the one I'm most familiar with, it's official national policy not to prosecute when you did the minimum necessary. In a case where you're otherwise stuck, it's entirely reasonable to retrieve 1 record for the sake of a screenshot and preventing a bigger data leak. You could also consider doctoring a screenshot based on your own data: by the time they figure out the screenshot is fake, it has landed on a technical person's desk who can see that the vulnerability is real.

Lots of steps to go until it's necessary to dump the database as OP did, but I'll agree it can sometimes (never happened to me) be necessary to access at least one other person's data, and more frequently that it will happen by accident

reply
habinero 9 minutes ago
Absolutely not. That's not your concern nor your problem.

They're perfectly capable of hiring incident response experts, and companies commonly have cyber insurance that'll pay for it.

"Demonstrating" is dumb and means you turn an ordinary disclosure into personal liability for you.

Blabbing about it on the internet is just the idiot cherry on the stupid cake.

reply
andrelaszlo 7 hours ago
Last year I found a vulnerability in a large annual event's ticket system, allowing me to download tickets from other users.

I had bought a ticket, which arrived as a link by email. The URL was something like example.com/tickets/[string]

The string was just the order number in base 64. The order number was, of course, sequential.
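
The enumerability is easy to verify: base64 is an encoding, not secrecy. A quick sketch (the order number and URL scheme are illustrative, not the real event's values):

```python
import base64
import secrets

# The ticket link's "random" string, reconstructed: just an order number in base64
token = base64.b64encode(b"1048576").decode()

# Anyone holding one ticket can decode it and step to the neighbouring orders
order = int(base64.b64decode(token))
next_token = base64.b64encode(str(order + 1).encode()).decode()

# What the link should carry instead: an unguessable random token
# stored server-side alongside the order
safe_token = secrets.token_urlsafe(32)
```

With a random token there is nothing to increment, so possession of one ticket URL tells you nothing about any other.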

I emailed the organizer and the company that built the order system. They immediately fixed it... Just kidding. It's still wide open and I didn't hear anything from them.

I'm waiting for this year's edition. Maybe they'll have fixed it.

reply
atlgator 7 hours ago
Incrementing user IDs and a default password for everyone — so the real vulnerability was assuming the company had any security to disclose to in the first place.

At this point 'responsible disclosure' just means 'giving a company a head start on hiring a lawyer before you go public.'

reply
undebuggable 10 hours ago
> the portal used incrementing numeric user IDs

> every account was provisioned with a static default password

Hehehe. I failed countless job interviews for mistakes much less serious than that. Yet someone gets the job while making worse mistakes, and there are plenty of such systems in production handling real people's data.

reply
tracker1 9 hours ago
Literally found the same issue in a password system, on top of passwords being clear text in the database... cleared all passwords, expanded the db field to hold a longer hash (the pw field was like 12 chars), set up a "recover password" feature, and emailed all users before End of Day.

My own suggestion to anyone reading this: version your password hashing mechanics so you can upgrade hashing methods as needed in the future. I usually use "v{version}.{salt}.{hash}", where the salt and hash fields are base64 strings. I could use multiple db fields for the same, but would rather not... I could also use JSON or some other wrapper, but feel the dot-separated base64 is good enough.

I have had instances where hashing was indeed upgraded later, and a password was (re)hashed at login with the new encoding if the version changed... after a given time-frame, we'd notify users and wipe old passwords to force the recovery process.

FWIW, I really wish there were better guides for moderately good implementations of login/auth systems out there. Too many applications of things like SSO, etc. just become a morass of complexity that isn't always necessary. I did write a nice system for a former employer that is somewhat widely deployed... I tried to get permission to open-source it, but couldn't get buy-in over "security concerns" (the irony). Maybe someday I'll make another one.

reply
alright2565 7 hours ago
If you need to version your password hashes, then you are likely doing them incorrectly and not using a proper computationally-hard hashing algorithm.

For example, with unsuitable algorithms like sha256, you get this, which doesn't have a version field:

    import hashlib; print(f"MD5:      {hashlib.md5(b'password').hexdigest()}")
    print(f"SHA-256:  {hashlib.sha256(b'password').hexdigest()}")


    MD5:      5f4dcc3b5aa765d61d8327deb882cf99
    SHA-256:  5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
But if you use a proper password hash, then your hashing library will automatically take care of versioning your hash, and you can just treat it as an opaque blob:

    import argon2; print(f"Argon2:   {argon2.PasswordHasher().hash('password')}")
    import bcrypt; print(f"bcrypt:   {bcrypt.hashpw(b'password', bcrypt.gensalt()).decode()}")
    from passlib.hash import scrypt; print(f"scrypt:   {scrypt.hash('password')}")


    Argon2:   $argon2id$v=19$m=65536,t=3,p=4$LZ/H9PWV2UV3YTgF3Ixrig$aXEtfkmdCMXX46a0ZiE0XjKABfJSgCHA4HmtlJzautU
    bcrypt:   $2b$12$xqsibRw1wikgk9qhce0CGO9G7k7j2nfpxCmmasmUoGX4Rt0B5umuG
    scrypt:   $scrypt$ln=16,r=8,p=1$/V8rpRTCmDOGcA5hjPFeCw$6N1e9QmxuwqbPJb4NjpGib5FxxILGoXmUX90lCXKXD4
This isn't a new thing, and as far as I'm aware, it's derived from the old apache htpasswd format (although no one else uses the leading colon)

    $ htpasswd -bnBC 10 "" password
    :$2y$10$Bh67PQAd4rqAkbFraTKZ/egfHdN392tyQ3I1U6VnjZhLoQLD3YzRe
reply
codys 3 hours ago
It's not a leading colon: It is a colon separator between the username and password, and the command used has the username as an empty string.
reply
chuckadams 8 hours ago
Several web frameworks, including Rails, Laravel, and Symfony, will automatically upgrade password hashes if the algorithm or work factor has changed since the password was last hashed.
reply
makr17 9 hours ago
Years ago I worked for a company that bought another company. Our QA folks were asked to give their site a once-over. What they found is still the butt of jokes in my circle of friends/former coworkers.

* account ids are numeric, and incrementing

* included in the URL after login, e.g. ?account=123456

* no authentication on requests after login

So anybody moderately curious can just increment to account_id=123457 to access another account. And then try 123458. And then enumerate the space to see if there is anything interesting... :face-palm: :cold-sweat:
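
The missing piece in a design like that is a server-side ownership check: the account id from the URL has to be compared against the authenticated session, never trusted on its own. A minimal sketch (the names, tokens, and data are invented stand-ins for real session and account storage):

```python
# Invented stand-ins for session storage and the accounts table
SESSIONS = {"tok-abc": 123456}  # session token -> authenticated account id
ACCOUNTS = {123456: {"name": "Alice"}, 123457: {"name": "Bob"}}

def get_account(session_token: str, requested_id: int) -> dict:
    owner = SESSIONS.get(session_token)
    if owner is None:
        raise PermissionError("not logged in")
    if owner != requested_id:
        # The check the acquired site skipped: ?account=... must match the session
        raise PermissionError("not your account")
    return ACCOUNTS[requested_id]
```

With that check in place, incrementing the id in the URL fails at the server instead of returning a neighbour's account.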

reply
josephg 8 hours ago
I did some work ~15 years ago for a consulting company. The company pushed their own custom open-source CMS into most projects - built on top of MongoDB and written by the CEO. He’s a lovely guy and a good coder, but he’s totally self-taught at programming and has blind spots a mile wide. And he hates having his blind spots pointed out. He came back from a React conference once thinking the React team invented functional programming.

A friend at the company started poking around in the CMS. Turns out the login system worked by giving the user a cookie with the mongodb document id for the user they’re logged in as. Not signed or anything. Just the document id in plain text. Document IDs are (or at least were) mostly sequential, so you could just enumerate document IDs in your cookie to log in as anyone.

The CEO told us it wasn’t actually a security vulnerability, then insisted we didn’t need to assign a CVE or tell any of our customers and users. He didn’t want to fix the code. When pushed, he wanted to slip a fix into the next version under cover of night and not tell anyone - preferably hidden in a big commit with lots of other stuff.

It’s become a joke between us too. He gives self-taught programmers a bad rep. These days, whenever I hear a product was architected by someone who’s self-taught, I always check how the login system works. It’s often enlightening.

reply
paxys 10 hours ago
When you are acting in good faith and the person/organization on the other end isn't, you aren't having a productive discussion or negotiation, just wasting your own time.

The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for them and their online security.

reply
czbond 10 hours ago
Agree - yet security researchers and our wider community also need to recognize that vulnerabilities are foreign to most non-technical users.

Cold approach vulnerability reports to non-technical organizations quite frankly scare them. It might be like someone you've never met telling you the door on your back bedroom balcony can be opened with a dummy key, and they know because they tried it.

Such organizations don't know what to do. They're scared, thinking maybe someone also took financial information, etc. Internal strife and lots of discussion usually occur, with lots of wild speculation (as the norm), before any communication back occurs.

It just isn't the same as what security-forward organizations do, so it often comes as a surprise to engineers when a "good deed" seems to be taken as malice.

reply
jcynix 8 hours ago
> Such organizations don't know what to do.

Maybe they should simply use some common sense? If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.

If they would want to extort you, they would possibly do so early on. And maybe encrypt some data as a "proof of concept" ...

But some organizations seem to think that their lawyers will remedy every failure and that's enough.

reply
lucb1e 6 hours ago
> If someone could and would steal valuables, it seems highly unlikely that he/she/it would notify you before doing it.

after* doing it. Though I agree with your general point

Note the parts in the email to the organization where OP (1) mentions they found underage students among the unsecured accounts and (2) attaches a script that dumps the database, ready to go¹. It takes very little to see in access logs that they accessed records that they weren't authorized to, which makes it hard to distinguish their actions from malicious ones

I do agree that if the org had done a cursory web search, they'd have found that everything OP did (besides dumping more than one record from the database) is standard practice and that responsible disclosure is an established practice that criminals obviously wouldn't use. That OP subsequently agrees to sign a removal agreement, besides the lack of any extortion, is a further sign of good faith which the org should have taken them up on

¹ though very inefficiently, but the data protection officer that they were in touch with (note: not a lawyer) wouldn't know that and the IT person that advises them might not feel the need to mention it

reply
bpavuk 9 hours ago
cynical. worst part? best one can do in this situation. can't imagine how I could continue any further interaction with such organization.
reply
kube-system 7 hours ago
I suspect that the direction of these situations often depends on how your initial email is routed internally in these organizations. If they go to a lawyer first, you will get someone who tries to fix things with the application of the law. If it goes to an engineer first, you will get someone who tries to fix it with an application of engineering. If it were me, I would have avoided involving third party regulators in the initial contact at least.
reply
themanmaran 5 hours ago
> If it were me, I would have avoided involving third party regulators in the initial contact at least.

I'm surprised to see this take mentioned only once in this thread. I think people here aren't aware of the sheer amount of fraud in the "bug bounty" space. As soon as you have a public product, you get at least one of these attempts per week: someone trying to shake you down over a "vulnerability" they'll only disclose after you pay them. Typically you just report them as spam and move on.

But if I got one that had some credible evidence of them reporting me to a government agency already, I'd immediately get a lawyer to send a cease and desist.

It seems like OP was trying to be a by the book law abiding citizen, but the sheer amount of fraud in this space makes it really hard to tell the difference from a cold email.

reply
lucb1e 6 hours ago
Yes, this routing is common. German energy company recommended by a climate organization had a somewhat similar vulnerability and no security contact, so I call them up and.. mhm, yes, okay, is that l-e-g-a-l-@-company-dot-de? You don't want me to just send it to the IT department that can fix it? Okay I see, they will put it through, yes, thank you, bye for now!

Was a bit of a "oh god what am I getting into again" moment (also considering I don't speak legal-level German), but I knew they had nothing to stand on if they did file a complaint or court case so I followed through and they just thanked me for the report in the end and fixed it reasonably promptly. No stickers or maybe a discount as a customer, but oh well, no lawsuit either :)

reply
Tempest1981 5 hours ago
In the early internet days, you could email root@company.com about a website bug, and somebody might reply.
reply
estebarb 10 hours ago
If this was in Costa Rica, the appropriate way was to contact PRODHAB about the leak of personal information and the Costa Rica CSIRT ( csirt@micitt.go.cr ).

Here all databases with personal information must be registered there and data must be secure.

reply
Aurornis 8 hours ago
> If this was in Costa Rica, the appropriate way was to contact PRODHAB about the leak of personal information and the Costa Rica CSIRT ( csirt@micitt.go.cr ).

They did. It's in the article. Search for 'CSIRT'. It's one of the key points of the story.

reply
xvxvx 11 hours ago
I’ve worked in I.T. for nearly 3 decades, and I’m still astounded by the disconnect between security best practices, often with serious legal muscle behind them, and the reality of how companies operate.

I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training and experience tells me one thing: identify, notify, fix. Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.

Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?

By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

reply
dspillett 9 hours ago
> I came across a pretty serious security concern at my company this week. The ramifications are alarming. […] Then when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.

I was in a very similar position some years ago. After a couple of rounds of “finish X for sale Y then we'll prioritise those issues”, which I was young and scared enough to let happen, and pulling on heartstrings (“if we don't get this sale some people will have to go, we risk that to [redacted] and her new kids, can we?”) I just started fixing the problems and ignoring other tasks. I only got away with the insubordination because there were things I was the bus-count-of-one on at the time and when they tried to butter me up with the promise of some training courses, I had taken & passed some of those exams and had the rest booked in (the look of “good <deity>, he got an escape plan and is close to acting on it” on the manager's face during that conversation was wonderful!).

The really worrying thing about that period is that a client had a pen-test done on their instance of the app, and it passed. I don't know how, but I know I'd never trust that penetration testing company (they have long since gone out of business, I can't think why).

reply
tracker1 9 hours ago
I wish I could recall the name of a pen test company I worked with when I wrote my auth system... They were pretty great and found several serious issues.

At least compared to our internal digital security group, who couldn't fathom that "your test is wrong for how this app is configured; that path leads to a different app and default behavior" meant a canned test for a PHP exploit wasn't actually a failure. The app wasn't PHP; it was an SPA that always delivered the same default page unless on an /auth/* route.

After that my response became: show me an actual exploit with an actual data leak and I'll update my code instead of your test.

reply
xvxvx 6 hours ago
An older company I worked for went out of their way to find a pen tester that would basically rubberstamp everything and give them a pass. I actually uncovered major issues with the software during that process, to the point where it was unusable. Major components were severely out of date and open to attack. Other parts didn't even work as advertised. I didn't stick around much longer.
reply
calvinmorrison 11 hours ago
> By even flagging the issue and the potential fallout, I’ve put my career at risk.

Simple as. Not your company? Not your problem. Notify, move on.

reply
dspillett 9 hours ago
I read that post as him talking about their company, in the sense of the company they were working for. If that was the case, then an exploit of an unfixed security issue could very much affect them either just as part of the company if the fallout is enough to massively harm business, or specifically if they had not properly documented their concerns so “we didn't know” could be the excuse from above and they could be blamed for not adequately communicating the problem.

For an external company “not your company, not your problem” for security issues is not a good moral position IMO. “I can't risk the fallout in my direction that I'm pretty sure will result from this” is more understandable because of how often you see whistle-blowers getting black-listed, but I'd still have a major battle with the pernickety prick that is my conscience¹ and it would likely win out in the end.

[1] oh, the things I could do if it wasn't for conscience and empathy :)

reply
Aurornis 8 hours ago
Their website says they're a freelance cloud architect.

The article doesn't say exactly, but if they used their company e-mail account to send the e-mail it's difficult to argue it wasn't related to their business.

They also put "I am offering" language in their e-mail which I'm sure triggered the lawyers into interpreting this a different way. Not a choice of words I would recommend using in a case like this.

reply
refulgentis 11 hours ago
> These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.

I had a bit of a feral journey into tech, poor upbringing => self taught college dropout waiting tables => founded iPad point of sale startup in 2011 => sold it => Google in 2016 to 2023

It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.

Put more concretely, couple vignettes:

- Someone with ~5 years experience saying approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think they're an organization-wide announcement that you're coming for them, and someone higher ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."

- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.

reply
bubblewand 10 hours ago
I've seen into some moderately high levels of "prestigious" business and government circles and I've yet to find any level at which everyone suddenly becomes as competent and sharp as I'd have expected them to be, as a child and young adult (before I saw what I've seen and learned that the norm is morons and liars running everything and operating terrifically dysfunctional organizations... everywhere, apparently, regardless how high up the hierarchy you go). And actually, not only is there no step at which they suddenly become so, people don't even seem to gradually tend to brighter or generally better, on average, as you move "upward"... at all! Or perhaps only weakly so.

Whatever the selection process is for gestures broadly at everything, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.

reply
refulgentis 8 hours ago
Ty for sharing this, I don’t talk about it often, and never in professional circles. There’s a lot of emotions and uncertainty attached to it. It’s very comforting to see someone else describe it as it is to me without being just straightforwardly misanthropic.
reply
xvxvx 11 hours ago
I would get fired at Google within seconds then. I’m more than happy to shine a light on bullshit like that.
reply
dspillett 9 hours ago
> A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.

Maybe not when it is as much as 20 seconds, but an old manager of mine would save fixing something like that for a “quick win” at some later time! He would even have artificial delays put in, enough to be noticeable and perhaps reported but not enough to be massively inconvenient, so we could take them out during the UAT process - it didn't change what the client finally got, but it seemed to work especially if they thought they'd forced us to spend time on performance issues (those talking to us at the client side could report this back up their chain as a win).

reply
pixl97 8 hours ago
There is a term for this but I can't remember what it's called.

Effectively you put in on purpose bugs for an inspector to find so they don't dig too deep for difficult to solve problems.

reply
smcin 7 hours ago
'canary', 'review canary' or something.
reply
macintux 4 hours ago
There's a related (apocryphal?) story from Interplay about adding a duck to animations so that the producer would ask for it to be removed, to make him happy, while leaving the rest alone.

https://bwiggs.com/notebook/queens-duck/

reply
smcin 2 hours ago
Yeah, that one too.
reply
b8 3 hours ago
Sounds like they were bluffing and trying to coerce the researcher into signing an NDA. I wouldn't have signed, and they wouldn't have reach in the US, and presumably not in Germany where the researcher is based. Also, I'm glad the affected vendor isn't DAN.
reply
kazinator 11 hours ago
> vulnerability in the member portal of a major diving insurer

What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.

reply
cptskippy 10 hours ago
What makes you think they don't retain them in-house?
reply
kazinator 7 hours ago
What makes you think you don't need speed dial in-house? ;)
reply
tracker1 8 hours ago
Depends on the usage... in-house counsel may open up various liabilities of their own, depending on how things present.
reply
snowhale 9 hours ago
the NDA demand with a same-day deadline is such a classic move. makes it clear they were more worried about reputation than fixing anything.
reply
pixl97 8 hours ago
Reply: "sorry, before reaching out to you I already notified a major media organization with a 90 day release notice"
reply
lucb1e 6 hours ago
In case someone takes this as actual advice, I think this comment is best accompanied with a warning that this gets them to call a lawyer for sure ^^'

(OP mentions a lawyer in the title, but the post only speaks of a data protection officer, which is a very different role and doesn't even represent the organization's interests but, instead, the users', at least under GDPR where I'm from)

reply
jbreckmckye 8 hours ago
Typical shakedown tactic. I used to have a boss who would issue these ridiculous emails with lines like "you agree to respond within 24 hours else you forfeit (blah blah blah)"
reply
viccis 11 hours ago
This is somewhat related, but I know of a fairly popular iOS application for iPads that stores passwords either in plaintext or encrypted (not as digests) because they will email it to you if you click Forgot Password. You also cannot change it. I have no experience with Apple development standards, so I thought I'd ask here if anyone knows whether this is something that should be reported to Apple, if Apple will do anything, or if it's even in violation of any standards?
reply
greggsy 11 hours ago
If anything it’s just a violation of industry expectations. You as a consumer just don’t need to use the product.
reply
tracker1 9 hours ago
FWIW, some types of applications may be better served with encryption over hashing for password access. Email being one of them, given the varying ways to authenticate, it gets pretty funky to support. This is why in things like O365 you have a separate password issued for use with legacy email apps.
reply
tokyobreakfast 10 hours ago
>whether this is something that should be reported to Apple, if Apple will do anything

Lmao Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off" then never contact you again. Ask me how I know. To their credit, I suspected they ran it through useless rudimentary automated checks which passed and they were back in business like a day later.

If your expectation is they will do something about shitty coding practices half the App Store would be banned.

reply
jopsen 10 hours ago
> Apple will not do anything for actual malware when reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off"

Ask while you are in an EU country, request appeal and initiate Out-of-court dispute resolution.

Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)

reply
wizzwizz4 9 hours ago
I used to say "submit it to Plain Text Offenders: https://plaintextoffenders.com/", but the site appears defunct since… 2012‽ How time flies…
reply
projektfu 11 hours ago
Another comment says the situation was fake. I don't know, but to avoid running afoul of the authorities, it's possible to document this without actually accessing user data without permission. In the US, the Computer Fraud and Abuse Act and various state laws are written extremely broadly and were written at a time when most access was either direct dial-up or internal. The meaning of abuse can be twisted to mean rewriting a URL to access the next user, or inputting a user ID that is not authorized to you.

Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize the potential risks of the vulnerability without exposing yourself to accusation of wrongdoing.

For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".

reply
alphazard 11 hours ago
Instead of understanding all of this, and when it does or does not apply, it's probably better to disclose vulnerabilities anonymously over Tor. It's not worth the hassle of being forced to hire a lawyer, just to be a white hat.
reply
cptskippy 10 hours ago
Part of the motivation of reporting is clout and reputation. That sounds harsh or critical but for some folks their reputation directly impacts their livelihood. Sure the data controller doesn't care, but if you want to get hired or invited to conferences then the clout matters.
reply
esafak 8 hours ago
You could use public-key encryption in your reports to reveal your identity to parties of your choosing.
reply
general1465 9 hours ago
One way to improve cybersecurity is to let cybercriminals loose like predators hunting prey. Companies need to feel fear that any vulnerability in their systems will be weaponized against them. Only then will they appreciate an email telling them about a security issue that hasn't been exploited yet.
reply
_kst_ 4 hours ago
Like re-introducing wolves into Yellowstone.
reply
hbrav 9 hours ago
This is extremely disappointing. The insurer in question has a very good reputation within the dive community for acting in good faith and for providing medical information free of charge to non-members.

This sounds like a cultural mismatch with their lawyers. Which is ironic, since the lawyers in question probably thought of themselves as being risk-averse and doing everything possible to protect the organisation's reputation.

reply
dekhn 8 hours ago
I find often that conversations between lawyers and engineers are just two very different minded people talking past each other. I'm an engineer, and once I spent more time understanding lawyers, what they do, and how they do it, my ability to get them to do something increased tremendously. It's like programming in an extremely quirky programming language running on a very broken system that requires a ton of money to stay up.
reply
smcin 7 hours ago
Could you post on HN on that? Would be worth reading.

And are you only talking about cybersecurity disclosure, liability, patent applications... And the scenario when you're both working for the same party, or opposing parties?

reply
dekhn 7 hours ago
I'm talking about any situation where a principled person who is technically correct gets a threatening letter from a lawyer instead of a thank you.

If you read enough lawyer messages (they show up on HN all the time) you will see they follow a pattern of looking tough, and increasingly threatening posture. But often, the laws they cite aren't applicable, and wouldn't hold up in court or public opinion.

reply
lucb1e 5 hours ago
> they follow a pattern of looking tough, and increasingly threatening posture. But often, the laws they cite aren't applicable, and wouldn't hold up in court

And it takes years to prove that and be judged as not guilty, or if guilty (as OP would likely be for dumping the database), that the punishment should be nil due to the demonstrated good faith even if it technically violated a law

Wouldn't you say the threats are to be taken seriously in cases like OP's?

reply
dekhn 5 hours ago
No.
reply
BlueGreenMagick 8 hours ago
I'm curious to hear your take on the situation in the article.

Based on your experience, do you think there are specific ways the author could have communicated differently to elicit a better response from the lawyers?

reply
dekhn 7 hours ago
It would take a bit of time to re-read the entire chain and come up with highly specific ways. The way I read the exchange, the lawyer basically wants the programmer to shut up and not disclose the vulnerability, and is using threatening legal language. While the programmer sees themself as a responsible person doing the company a favor in a principled way.

Some things I can see. I think the way the programmer worded this sounds adversarial; I wouldn't have written it that way, but ultimately, there is nothing wrong with it: "I am offering a window of 30 days from today the 28th of April 2025 for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure."

When the lawyer sent the NDA with extra steps: the programmer could have chosen to hire a lawyer at this point to get advice. Or they could ignore this entirely (with the risk that the lawyer may sue him?), or proceed to negotiate terms, which the programmer did (offering a different document to sign).

IIUC, at that point, the lawyer went away and it's likely they will never contact this guy again, unless he discloses their name publicly and trashes their security, at which point the lawyer might sue for defamation, etc.

Anyway, my take is that as soon as the programmer got a lawyer email reply (instead of the "CTO thanking him for responsible disclosure"), he should have talked to his own lawyer for advice. When I have situations similar to this, I use the lawyer as a sounding board. i ask questions like "What is the lawyer trying to get me to do here?" and "Why are they threatening me instead of thanking me", and "What would happen if I respond in this way".

Depending on what I learned from my lawyer I can take a number actions. For example, completely ignoring the company lawyer might be a good course of action. The company doesn't want to bring somebody to court then have everybody read in a newspaper that the company had shitty security. Or writing a carefully written threatening letter- "if you sue me, I'll countersue, and in discovery, you will look bad and lose". Or- and this is one of my favorite tricks, rewriting the document to what I wanted, signing that, sending it back to them. Again, for all of those, I'd talk to a lawyer and listen to their perspective carefully.

reply
lucb1e 5 hours ago
> which the programmer did (offering a different document to sign). \n\n IIUC, at that point, the lawyer went away

The article says that the organization refused the counter-offer and doubled down instead

> he should have talked to his own lawyer for advice

Costing how much? Next I'll need a lawyer for telling the supermarket that their alarm system code was being overlooked by someone from the bushes

It's not bad legal advice and I won't discourage anyone from talking to a lawyer, but it makes things way more costly than they need be. There's a thousand cases like this already online to be found if you want to know how to handle this type of response

Sounds very usa-esque (or perhaps unusually wealthy) to retain a lawyer as "sounding board"

reply
lucb1e 5 hours ago
> This sounds like a cultural mismatch with their lawyers.

Note that the post never mentions lawyers, only the title. It sounds to me like chatgpt came up with two dozen titles and OP thought this was the most dramatic one. In the post, they mention it was a data protection officer who replied. This person has the user's interests as their goal and works for the organization only insofar as that they handle GDPR-related matters, including complaints. If I'm reading it right, they're supposed to be somewhat impartial per recital 97 of the GDPR: "data protection officers [...] should be in a position to perform their duties and tasks in an independent manner"

reply
socketcluster 7 hours ago
I found a vulnerability recently in a major online platform through HackerOne which could allow an attacker to cheaply DoS the service. I wrote up a detailed report (by hand) showing exactly how to reproduce and even explained exactly how a specially crafted request to a critical service took 10 seconds to get a response (just with a very simple, easy to reproduce example)... I then explained exactly how this vector could be scaled up to a DDoS...

They acknowledged it as a legitimate issue and marked my issue as 'useful info' but refused to pay me anything; they said that they would only pay if I physically demonstrate that it leads to a disruption of service; basically baiting me into doing something illegal! It was obvious from my description that this attack could easily be scaled up. I wasn't prepared to literally bring down the service to make my point. They didn't even offer the lowest tier of $200.

So bad. AI slop code is taking over the industry, vulnerabilities are popping up all over the place, so much so that companies are refusing to pay out bounties to humans. It's like neglect is being rewarded and diligence is being punished.

Then you read about how small the bug bounties are, even for established security researchers. It doesn't seem like a great industry. HackerOne seems like a honeypot to waste hackers' time. They reward a tiny number of hackers with big payouts to create PR to waste as many hackers' time as possible. Probably setting them up and collecting dirt on them behind the scenes. That's what it feels like at least.

reply
lucb1e 6 hours ago
This is sort of my issue with bug bounty programs: it can easily start to feel like extortion when a 'good samaritan' demands money. But they promised it to you by having a bug bounty program, then denied it. You feel rightfully cheated when the bug is legitimate, and doubly so when they acknowledge it. But demanding the money feels weird as well.

I try to go into these things with zero expectations. Having a mediating party involved from the start is a bit like OP immediately CC'ing the CERT: extra legal steps in the disclosure process. Mediating parties are usually a pain to work with, and if it's deemed "out of scope" then they typically refuse to even notify the vulnerable party (or acknowledge to you that it hasn't been disclosed). I don't want a pay day, I just want them to fix their damn bug, but there's no way to report it besides through this middle person. Literally every time I've had to use a reporting procedure (like HackerOne) has resulted in tone-deaf responses from the company or complete gatekeeping. All of those bugs exist to this day. Every time I can email a human directly, it gets fixed, and in some occasions they send a thank-you like some swag and chocolates, a t-shirt, something

Based on what I hear in the community, my HackerOne experiences have been outliers, but it might still be more effective (if you're not looking to collect bounty money) to talk to organizations directly where possible and avoid the ones that use HackerOne or another mediation party

reply
unyttigfjelltol 6 hours ago
Contacting the authorities led the company to hire lawyers, specifically for communication with the data protection authority.

The lever lawyers have to "make it go away" is "the law says so." They're not going to beg for mercy, they're not going to invite you to coffee, no "bug bounty." From their perspective, if they arm-wrestle the researcher into an NDA, they've retroactively patched the only known breach.

Perhaps it’s not prosocial or best practice, but you can clearly see how this went down from the company's perspective: a subject organization with a tenuous grasp of cybersecurity concepts.

reply
zaptheimpaler 5 hours ago
I think we should stop making excuses for shitty practices. I can understand why they might do it; I can also see there are much better ways to deal with this situation.
reply
MrQuincle 9 hours ago
There should exist a vulnerability disclosure intermediary. It could function as a barrier to protect the scientist/researcher/enthusiast and do everything by the book for the different countries.
reply
guessmyname 8 hours ago
MSRC (Microsoft Security Response Center) — https://msrc.microsoft.com/

They’ll close a report as “no action” if the issue isn’t related to Microsoft products. That said, in my experience they’ve been a reasonable intermediary for a few incidents I’ve reported involving government websites, especially where Microsoft software was part of the stack in some way.

For example, I’ve reported issues in multiple countries where national ID numbers are sequential. Private companies like insurers, pension funds, and banks use those IDs to look up records, but some of them didn’t verify that the JSON Web Token (JWT) used for the session actually belonged to the person whose national ID was being queried. In practice, that meant an attacker could enumerate IDs and access other citizens’ financial and personal data.
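Conceptually, the missing server-side check is tiny. A hedged sketch in Python (all names here are invented for illustration; real services would verify the token's signature with a JWT library before trusting its claims):

```python
# Sketch of the authorization check the vulnerable services skipped:
# a valid session token must only grant access to its own subject's record.

def lookup_record(jwt_claims: dict, requested_national_id: str, db: dict) -> dict:
    """Return a record only if the token's subject matches the ID being queried."""
    # The broken services accepted *any* authenticated session token and
    # served whatever sequential national ID was requested. This one
    # comparison is what turns that into proper object-level authorization.
    if jwt_claims.get("national_id") != requested_national_id:
        raise PermissionError("token subject does not match requested record")
    return db[requested_national_id]


# Toy data standing in for the real lookup backend.
db = {"100001": {"name": "Alice"}, "100002": {"name": "Bob"}}
claims = {"national_id": "100001"}  # pretend this came from a signature-verified JWT

print(lookup_record(claims, "100001", db))  # own record: allowed

try:
    lookup_record(claims, "100002", db)     # neighboring sequential ID: blocked
except PermissionError as exc:
    print("blocked:", exc)
```

Without that comparison, sequential IDs make enumeration trivial: a `for`-loop over integers walks the entire citizen database with a single valid session.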

Reporting something like that directly to a government agency can be intimidating, so I reported it to Microsoft instead, since these organizations often use Azure AD B2C for customer authentication. The vulnerability itself wasn’t in Microsoft’s products, but MSRC’s reactive engineers still took ownership of triage and helped route it to the right contacts in those agencies through their existing partnerships.

reply
lucb1e 5 hours ago
National CERTs usually take up this role. I presume OP could have anonymously disclosed to the Maltese CERT, whom they already CC'd, though you'd have to check with them specifically to see if they offer that. Hackerspaces also often do this, especially if you're a member but probably also if not and they have faith that your actions were legal (best case, you can demonstrate exactly what you did, like by showing the script you ran, as OP could)
reply
esafak 9 hours ago
Who compensates them for the risk?
reply
lucb1e 5 hours ago
What risk? It sounds to me like the worst they could get is a subpoena to produce the identity of the reporter

Besides, it's usually governmental organizations that do this sort of thing

reply
esafak 4 hours ago
The risk of lawsuits like the ones threatened to be filed against this researcher.
reply
lucb1e 3 hours ago
They can also sue the pope but I don't think the pope finds that a risk worth considering either when they didn't do any hacking, legal or otherwise. How would an organization get sued for hacking when they didn't do any hacking and are merely passing on a message?
reply
esafak 5 minutes ago
They would call it abetting. It's not as if the site doesn't know what it's disclosing.
reply
pixl97 9 hours ago
That's why you just sell it on the black market and let it be the intermediary.
reply
nickorlow 8 hours ago
The free market at work!
reply
Buttons840 10 hours ago
I've said before that we need strong legal protections for white-hat and even grey-hat security researchers or hackers. As long as they report what they have found and follow certain rules, they need to be protected from any prosecution or legal consequences. We need to give them the benefit of the doubt.

The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.

Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.

As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.

reply
krisoft 7 hours ago
> we need strong legal protections for white-hat and even grey-hat security researchers or hackers.

I have a radical idea which goes even further: we should have legally mandated bug bounties. A law which says that if someone makes a proper disclosure of an actual exploitable security problem, then your company has to pay out. Ideally we could scale the payout based on the importance of the infrastructure in question. Vulnerabilities with little lasting consequence would pay little. Serious vulnerabilities with the potential for society-wide physical harm could pay out a few percent of the yearly revenue of the given company. For example, hacking the high score in a game would pay only a little, while a vulnerability which can collapse the electric grid or remotely command a car would pay a king’s ransom. Enough to incentivise a cottage industry to find problems. Hopefully resulting in a situation where the companies in question find it more profitable to find and fix the problems themselves.

I’m sure there is potential for a lot of unintended consequences. For example, I’m not sure how we could handle insider threats. On one hand, insider threats are real and companies should be protecting against them as best they can. On the other hand, it would be perverse to force companies to pay developers for vulnerabilities the developers themselves intentionally created.

reply
kopirgan 4 hours ago
Wow, this reads more like the US. Didn't know Malta is so lawyered up.
reply
zx8080 6 hours ago
Share the portal name! We want to know the ~f...~ ”heroes”!
reply
josefritzishere 10 hours ago
I find these tales of lawyerly threats completely validate the hacker's actions. They reported the bug to spur the company to resolve it. The company's reaction all but confirms that reporting it to them directly would not have been productive. Their management lacks good stewardship. They are not thinking about their responsibility to their customers and employees.
reply
desireco42 11 hours ago
I think the problem is the process. Each country should have a reporting authority and it should be the one to deal with security issues.

So you never report to the actual organization but to the security organization, like you did. And they would be better equipped to deal with this, maybe also validate how serious the issue is, and assign a reward as well.

So you, the researcher, report your finding and can't be sued or bullied by the organization that is at fault in the first place.

reply
PaulKeeble 10 hours ago
If the government wasn't so famous for also locking up people who reported security issues, I might agree, but boy, they are actually worse.

Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.

The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.

reply
ikmckenz 11 hours ago
That’s almost what we already have with the CVE system, just without the legal protections. You report the vulnerability to the NSA, let them have their fun with it, then a fix is coordinated to be released much further down the line. Personally I don’t think it’s the best idea in the world, and entrenching it further seems like a net negative.
reply
ylk 9 hours ago
This is not how CVEs work at all. You can be pretty vague when registering one. In fact, registrations are usually annoyingly vague, and some companies are known for copy-pasting random text into the fields, which completely leads you astray when trying to patch-diff.

Additionally, MITRE doesn’t coordinate a release date with you. They can be slow to respond sometimes, but in the end you just tell them to set the CVE to public at some date and they’ll do it. You’re also free to publish information on the vulnerability before MITRE has assigned a CVE.

reply
desireco42 10 hours ago
Yeah, something like that, nothing too heavy, just so individuals don't have to deal with evil corps themselves
reply
janalsncm 10 hours ago
Does it have to be a government? Why not a third party non-profit? The white hat gets shielded, and the non-profit has credible lawyers which makes suing them harder than individuals.

The idea is to make it easier to fix the vulnerability than to sue to shut people up.

For credit assignment, the person could direct people to the non profit’s website which would confirm discovery by CVE without exposing too many details that would allow the company to come after the individual.

This business of going to the company directly and hoping they don’t sue you is bananas in my opinion.

reply
iamnothere 7 hours ago
This would only work if governments and companies cared about fixing issues.

Also, it would prevent researchers from gaining public credit and reputation for their work. This seems to be a big motivator for many.

reply
cptskippy 10 hours ago
Maintaining cybersecurity insurance is a big deal in the US; I don't know about Europe. So vulnerability disclosure is problematic for data controllers because it threatens their insurance and premiums. Today much of enterprise security is attestation-based, and vulnerability disclosure potentially exposes companies to insurance fraud. If they stated that they maintained certain levels of security, and a disclosure demonstrably proves they do not, that is grounds for dropping a policy or even a lawsuit to reclaim paid funds.

So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.

reply
lucb1e 5 hours ago
It's not generally good financial advice to pay the overhead of an insurance company for costs you can easily cover yourself (likewise, phone insurance, appliance warranty extensions, etc. won't make your device last longer, and the insurer knows better than you what premium covers the average repair cost plus a profit margin). If you have a decent understanding of where the line is between vulnerability disclosure and criminal activity, fronting any court fees and a little bit of lawyer time (iff you can afford these out of pocket) until you're acquitted should be the better route, assuming anyone ever takes you to court
reply
pixl97 8 hours ago
Heh, which insurance company you use should be public information, and bug finders should report to them.
reply
FurryEnjoyer 10 hours ago
Malta has been mentioned? As a person living here, I can say that the workflow of the government here is bad. Same as in every other place, I guess.

By the way, I have a story about when I accidentally hacked an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.

I believe in every single system like that it's fairly possible to find a vulnerability. Nobody cares about them, and the people who make those systems don't have enough skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the advent of AI: since it has zero understanding of what it's actually doing, it will make mistakes that cause more data leaks.

Even if you don't consider yourself an evil person, would you stay the same knowing about a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook" way.

reply
lucb1e 5 hours ago
Being more careful is an option, or owning up to it and saying "hey I just did this and noticed this thing unexpectedly happened, apparently you have an XSS here" (or whatever it was). In most cases, the organization you're reporting to is happy about this up-front information, and in the exceptional situation where someone decides to take it to court, there's a clear paper trail (backed up by access and email logs) of what actions were taken and why, making it obvious you did nothing wrong
reply
dboreham 8 hours ago
Messenger shooting is a common tactic with psychopaths.
reply
refulgentis 11 hours ago
Wish they named them. Usually I don't recommend it. But the combination of:

A) in EU; GDPR will trump whatever BS they want to try
B) no confirmation affected users were notified
C) aggro threats
D) nonsensical threats, sourced to a Data Privacy Officer w/ seemingly 0 scruples and little experience

Due to B), there's a strong responsibility rationale.

Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.

reply
mzi 11 hours ago
DAN Europe has a flow as discussed in the article, and both the foundation and the regulated insurance branch are registered in Malta.
reply
Nextgrid 11 hours ago
EU GDPR has very little enforcement. So while the regulation in theory prevents that, in practice you can just ignore it. If you're lucky a token fine comes up years down the line.
reply
newzino 7 hours ago
The same-day deadline on the NDA is the tell. If they had a real legal position, they wouldn't need a signature before close of business. That's a pressure tactic designed to work on someone who doesn't know any better. The fact that he pushed back and nothing happened confirms it was a bluff.
reply
clarabennett26 10 hours ago
[dead]
reply
aicodereview42 10 hours ago
[dead]
reply
durzo22 9 hours ago
[flagged]
reply
cynicalsecurity 10 hours ago
[flagged]
reply
kspacewalk2 10 hours ago
Not sure what the name of your complex is, maybe groveling deference to legalese? Whatever it is, I'm sure I would have applied it to your entire country of origin if I knew where you're from, and if I were developmentally around the age of twelve.

He did everything exactly by the book and in the end was even nice enough to not publish the company's name, despite the legal threat being bullshit and him being entirely in the right.

reply
anonymous908213 11 hours ago
[flagged]
reply
circuit10 11 hours ago
How do you know? Some of the text has a slightly LLM-ish flavour to it (e.g. the numbered lists) but other than that I don’t see any solid evidence of that

Edit: I looked into it a bit and things seem to check out; this person has scuba diving certifications on their LinkedIn and the site seems real and high-effort. While I also don’t have solid proof that it’s not AI-generated either, making accusations like this based on no evidence doesn’t seem good at all

reply
thenewnewguy 11 hours ago
Not them but the formatting screams LLM to me. Random "bolding" (rendered on this website as blue text) of phrases, the heading layout, the lists at the end (bullet point followed by bolded text), common repeats of LLM-isms like "A. Not B". None of these alone prove it but combined they provide strong evidence.

You can also see the format and pacing differs greatly from posts on their blog made before LLMs were mainstream, e.g. https://dixken.de/blog/monitoring-dremel-digilab-3d45

While I wouldn't go so far as to say the post is entirely made up (it's possible the underlying story is true) - I would say that it's very likely that OP used an LLM to edit/write the post.

reply
nsteel 10 hours ago
Hang on, they used a computer to help them create the post content?! Outrageous.
reply
jibal 4 hours ago
In addition to being irrelevant, these accusations aren't competent.
reply
gchamonlive 11 hours ago
HN's comment section new favourite sport, trying to guess if an article was generated by LLM. It's completely pointless. Why not focus on what's being said instead?
reply
SunshineTheCat 10 hours ago
I thought the same thing. With the rate LLMs are improving, it's not going to be too much longer before no one can tell.

I also enjoy all the "vibes" people list out for why they can tell, as though there was any rhyme or reason to what they're saying. Models change and adapt daily so the "heading structure" or "numbered list" ideas become outdated as you're typing them.

reply
anonymous908213 11 hours ago
[flagged]
reply
gchamonlive 10 hours ago
> This is an LLM-generated article, for anyone who might wish to save the "15 min read" labelled at the top. Recounts an entirely plausible but possibly completely made up narrative of incompetent IT, and contains no real substance.

Nothing in the original message refers to it being clickbait, the core complaint is the LLM-like tone and the lack of substance, which you also just threw it there without references ironically.

> What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?

It's alright as long as it's not based on faith or guesswork.

reply
anonymous908213 10 hours ago
It is not based on guesswork. For whatever it's worth, I have gotten 7 LLM accounts banned from HN in the past week based on accurately detecting and reporting them to moderation[1]. Many of these accounts had between dozens and 100 upvotes, some with posts rated to the top of their threads that escaped detection by others. I have not once misidentified and reported an account that was genuinely human. I am aware that other people have poorly-tuned heuristics and make false accusations, but it is possible to build the skill to detect LLM output reliably, and I have done so. In the end, it is up to you whether you believe me, but I am simply trying to offer a warning for people who dislike reading generated material, nothing more.

[1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.

reply
gchamonlive 10 hours ago
Congrats, and thanks for your work, but you should be aware that HN comments are completely different from articles. What makes you think the skills/automations required to identify LLM generated HN comments will work seamlessly with submitted articles? You have to do a statistical analysis of this, otherwise it's just guesswork.

You also have to take into account that the medium is the message[1]. In a nutshell, the more people read LLM generated posts and interact with chatbots, the higher the influence of LLM style in their writing -- the whole "delve" comes to mind, and double dashes. So even if you have a machine that correctly identified LLM generated posts, you can't be sure it'll keep working.

[1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf

reply
famouswaffles 10 hours ago
Those are a lot of words to say you guessed. And the banning comment is nice, I guess, but pretty meaningless. Does moderation really always report back to you when you make such an accusation? Who's to even say all the banned accounts were LLMs? You know what would happen if I got banned because someone accused me of being an LLM? Nothing. I'd take it as a sign to do other things.

Let's say you are the LLM-detecting genius you paint yourself to be. Well, guess what? You're human and you're going to make mistakes, if you haven't made a bunch of them already. So if you have nothing better to add to a post than this guess, you probably shouldn't say anything at all. Like you said, it's not even against the rules.

reply
jibal 3 hours ago
This looks like complete fabrication by an AI agent.
reply
anonymous908213 3 hours ago
I get it already, you fucking idiots want HN to become Moltbook. Have it your way.
reply
dolebirchwood 11 hours ago
> contains no real substance.

The same could be said of the accusation being levied here.

reply
kazinator 11 hours ago
What is the evidence that the content is entirely LLM generated, rather just LLM-assisted writing of a genuine story?
reply
tolerance 9 hours ago
You know I had a thoughtful comment written in response to this that wouldn’t post because your comment got flagged to death when I tried to submit it!

Your firebrand attitude is doing a disservice to everyone who takes vibe hunting vibecraft seriously!

The intended audience doesn’t even care that this is LLM-assisted writing. Whether the narrative is affected by AI is secondary to the technical details. This is technical documentation communicated through a narrative, not a personal narrative about someone’s experience with a technical problem. There’s a difference!

What are you in this for?!

reply
BizarroLand 11 hours ago
Proof?
reply
toomuchtodo 11 hours ago
Can you share how you confirmed this is LLM generated? I review vulnerability reports submitted by the general public and it seems very plausible based on my experience (as someone who both reviews reports and has submitted them), hence why I submitted it. I am also very allergic to AI slop and did not get the slop vibe, nor would I knowingly submit slop posts.

I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.

(my experience is roughly a decade in cybersecurity and risk management, ymmv)

reply
anonymous908213 11 hours ago
The headers alone are a huge giveaway. It spams repetitive sensational writing tropes like "No X. No Y. No Z." and "X. Not Y" numerous times. Incoherent usage of bold type all throughout the article. Lack of any actually verifiable concrete details. The giant list of bullet points at the end that reads exactly like helpful LLM guidance. Many signals throughout the entire piece, but I don't have time to do a deep dive. It's fine if you don't believe me; I'm not suggesting the article be removed. Just giving a heads-up for people who prefer not to read generated articles.

Regarding your allergy, my best guess is that it is generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing that LLMs excel at. I do agree it is very plausible.

reply
p0w3n3d 11 hours ago
I wonder if there's any probabilistic analyser that could confirm that the article is generated, or show which parts might have been generated
reply
roywiggins 10 hours ago
Pangram[0] thinks the closing part is AI generated but the opening paragraphs are human. Certainly the closing paragraphs have a bit of an LLM flavor (a header titled "The Pattern", eg)

[0] https://www.pangram.com

reply
anonymous908213 10 hours ago
There are no automated AI detectors that work. False positives and false negatives are both common, and the false positives particularly render them incredibly dangerous to use. Just like LLMs have not actually replaced competent engineers working on real software despite all the hysteria about them doing so, they also can't automate detection, and it is possible to build up stronger heuristics as a human. I am fully confident and would place a large sum of money on this article being LLM-generated if we could verify the bet, but we can't, so you'll just have to take my word for it, or not.
reply
refulgentis 11 hours ago
I'm very sensitive to this but disagree vehemently.

I saw one or two sigils (ex. a little eager to jump to lists)

It certainly has real substance and detail.

It's not, like, generic LinkedIn post quality.

You could tl;dr it to "autoincrementing user ids and a default password set = vulnerability, and the company responded poorly." and react as "Jeez, what a waste of time, I've heard 1000 of these stories."

I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.

But, it fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby Dick is a book about some guy who wants revenge, Hamlet is about a king who dies.

Additionally, I don't think what people will interpret from what you wrote is what you meant, necessarily. Note the other reply at this time, you're so confident and dismissive that they assume you're indicating the article should be removed from HN.

reply
a3w 11 hours ago
[flagged]
reply
tverbeure 7 hours ago
> No ..., no ..., no .... Just ...

Am I the only one who can't stand this AI slop pattern?

reply
silisili 7 hours ago
Between that and 'Read that again' my heart kinda sank as I went. When if ever will this awful trend end?
reply
lucb1e 5 hours ago
It's one thing for your blog post to be full of faux writing style, but that letter to the organization too... oof. I wouldn't enjoy receiving that from someone who attached a script that dumps all users from my database, when the email as well as my access logs confirm they ran it
reply
anal_reactor 6 hours ago
Unless the company has a bug-bounty program, never ever tell them about vulnerabilities. You'll get ignored at best and have legal issues at worst. Instead, sell them on the black market. Or better yet, just give them away for free if you don't care about money. That's how companies will eventually learn to at least have an official vulnerability disclosure policy.
reply
nubg 7 hours ago
> No exploits, no buffer overflows, no zero-days. Just a login form, a number, and a default password that was set for each student on creation.

ai;dr

This is AI slop.

Use your own words!

I would rather read the original prompt!

reply
lucb1e 6 hours ago
Also in the email to the organization. It sounds as condescending ("let me dumb this down to key points for you") to the receiver of the email as, well, LLMs are. Bit off-putting, and the story itself is also common to the point of being trite. Heck, nothing even ended up happening in this case: no lawyer is mentioned outside of the title, no police complaint was filed, no civil case started, just the three emails saying he should agree to not talk about this. Scary as those demands can be (I have been at the butt end of such things as well, and every time I wish I had used Tor instead of a CIOT-traceable IP address as soon as my "huh, that's odd system behavior" senses went off. Responsible disclosure just gives you grey hairs in the 10% of cases that respond like this, even if so far 0% actually filed a police complaint or court case)
reply
kmoser 4 hours ago
Presuming nobody had found this exploit previously, it actually is a zero-day.
reply
kazinator 11 hours ago
Why does someone with a .de website insure their diving through a company based in Malta?

Based on this interaction, you have to wonder what it's like to file a claim with them.

reply
som 10 hours ago
Divers Alert Network, which is probably the most well-known dive membership (and insurance) org out there, has its European arm registered in Malta.
reply
vablings 10 hours ago
Absolutely horrible according to DIVE TALK

https://www.youtube.com/watch?v=O7NsjpiPK7o

The insurance company would not cover a decompression chamber for someone who had severe decompression sickness, a life-threatening condition that requires immediate remediation.

The idea that you possibly have neurological DCS and must argue on the phone with an insurance rep about whether you need to be life-flighted to the nearest chamber is just... mind-blowing.

reply
ImPostingOnHN 10 hours ago
It is probably among the standard forms required to participate in a diving class/excursion for travelers from other countries; and, Malta was probably chosen as the official HQ for legal or liability shelter reasons.
reply
f30e3dfed1c9 3 hours ago
Not clear to me why the author thinks he's the good guy in this scenario. His letter to the company might as well read "I am a busybody who downloaded private information about a person who is not me from your web site, ENTIRELY WITHOUT AUTHORIZATION from that person. Here, let me show it to you."

Why does he think he's entitled to do this? I get that his intentions are more or less good but don't see that as much excuse. What did he expect them to say? "Oh thank you wise and wonderful full-time Linux Platform Engineer"?

I appreciate that the web site in question seems to have absolutely pathetic security practices. Good reason not to do business with them. Not a good reason to do something that, in many jurisdictions at least, sounds like it constitutes a crime.

reply