I'm the first person to admit the EU has a democratic deficit, but MEPs are directly elected by EU citizens and they chose this in a democratic process. The companies are certainly making a choice with this blog post.
The EU Commission reported that the false positive rate was 13-20%.
German police reported that 50% of all reports were wrong.
The system is rubbish, and MEPs were quite open about wanting it to go away.
However, the "13-20%" that you're quoting is a dishonest propaganda number itself. It's the false positive rate that a single small company (Yubo) reported. The reported false positive rates of other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers.
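To put those percentages in perspective, even the lowest reported error rate translates into a lot of wrongly flagged people once scanning happens at scale. A rough back-of-the-envelope sketch in Python (the report volume below is an assumed round number for illustration, not a figure from the report):

    # Illustrative only: the yearly report volume is an assumption,
    # not a number taken from the Commission's report.
    reports_filed = 1_000_000          # hypothetical automated reports per year
    false_positive_rate = 0.0032       # 0.32%, the lowest rate cited above

    wrongly_flagged = reports_filed * false_positive_rate
    print(f"{wrongly_flagged:,.0f} reports about innocent people")
    # -> 3,200 false reports per year, at the *best* reported error rate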
Just to be clear: the report itself is full of uncertainty, convenient half-truths and false causality. For example, it relies entirely on the Big Tech platforms themselves to count false positives, i.e. cases where a moderation decision was later reversed. Microsoft apparently even claims that no user ever appealed against a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, even though it is in direct conflict with fundamental rights and is required to be proportional to its goals.
The section about "children identified" is also a complete mess: most countries can't even report the most basic data, and it isn't clear whether mass surveillance contributed anything to new cases at all. But somehow the report still concludes that "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems entirely baseless.
[1] https://www.europarl.europa.eu/RegData/docs_autres_instituti...
"We can now finally say with certainty that Chat Control 1.0 will end on April 3 without replacement. The European Parliament has sent a clear signal: it is time to put an end to this ineffective and disproportionate derogation from privacy rules. Under the pretext of protecting children, millions of private messages from innocent citizens were being scanned for years without delivering adequate results. This system simply did not work and had no place in a democratic society."
It doesn’t have to be unanimous on HN. It wasn’t even unanimous in the EUP.
But it was legal and democratic. And the discussion in Parliament explicitly included the fact that the companies will either have to stop or find a different legal basis.
The companies in this blog post are effectively admitting they are making a choice to go against the law.
As there should be.
The big tech companies have done that every time the EU passes some consumer protections, and have been spanked in court several times for the disingenuousness.
A) actually being paid in the end, and
B) high enough to be of any concern to the company.
- In 2021 the European Parliament voted in favor of a temporary regulation that allowed companies to voluntarily scan private communications (permitted, not required). Let's call it Chat Control 1.0. They chose to enact this because US companies were already scanning private messages in violation of the ePrivacy Directive, which had come into force the previous year. Instead of enforcing that directive, they chose to (temporarily) legalize the scanning of private messages while preparing more permanent legislation.
- In 2024 Chat Control 1.0 was extended for another 2 years. An amendment was adopted that explicitly noted that after this time "[the regulation] shall lapse permanently".
- From 2022 to 2025 the European Commission (together with member states) proposed mandatory scanning, later updated with a proposal for client-side scanning (defeating end-to-end encryption), AI classification of image and text content, age verification, and a lot of other invasive measures. This is what is known as Chat Control 2.0. The European Parliament voted against this proposal again and again.
- In 2025/2026 the European Commission finally (temporarily) backed down from Chat Control 2.0 and instead proposed to extend Chat Control 1.0 for another 2 years, but has completely failed to negotiate with parliament to adopt a text that explicitly puts fundamental rights up front, something that a majority of the European Parliament had asked for since 2021.
- In response to this, the Civil Liberties Committee of the European Parliament tabled amendments [1] that explicitly limit the regulation to its subject matter and prevent it from being used to weaken end-to-end encryption. Many of these amendments were adopted.
- Consequently, many conservative members of the European Parliament voted down the entire extension of the regulation. They apparently felt it was better to let the regulation expire so that they would gain more negotiating power to adopt a version of the regulation that has fewer safeguards or contains measures like those in Chat Control 2.0.
[1] https://www.europarl.europa.eu/doceo/document/LIBE-AM-784377...
"Reaffirming our commitment to mass surveillance"
That's more like it.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
Shouldn't this big liability be pushing the big tech firms to do so?
BS. It's for control and censorship and data harvesting.
Meta alone spent $2 billion lobbying for age-restriction laws, which they tried to hide by funneling it through third parties. We don't know how much the other tech giants spent.
Also, hash matching is so easily bypassed that you can be sure they really want to add some "AI" detector as well.
That's not how that works, last I checked. AIUI it's much more fuzzy. It has to be: being scum doesn't automatically make you an idiot, and a single bit change would make plain old cryptographic hashes entirely useless.
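To illustrate the difference with a toy sketch (not how any vendor's matcher actually works): flip one bit and a cryptographic hash becomes unrecognisable, whereas a perceptual-style hash barely moves, which is exactly why scanners need the fuzzy kind and why collisions become a real concern.

    import hashlib

    # 1) Cryptographic hash: a single flipped bit changes the digest completely.
    data = bytearray(b"some image bytes")
    print(hashlib.sha256(bytes(data)).hexdigest()[:16])
    data[0] ^= 0x01  # flip one bit
    print(hashlib.sha256(bytes(data)).hexdigest()[:16])  # totally different

    # 2) Toy "perceptual" hash: threshold pixel values against their mean.
    #    A small change to the image moves at most a few bits.
    def toy_phash(pixels):
        avg = sum(pixels) / len(pixels)
        return [1 if p > avg else 0 for p in pixels]

    pixels = [10, 200, 30, 180, 90, 120, 60, 240] * 8   # pretend 8x8 image
    tweaked = pixels[:]
    tweaked[0] += 5                                      # tiny perturbation

    a, b = toy_phash(pixels), toy_phash(tweaked)
    hamming = sum(x != y for x, y in zip(a, b))
    print(f"perceptual-style hashes differ in {hamming} of {len(a)} bits")

Real matchers (PhotoDNA and the like) are far more sophisticated than this, but that robustness to small changes is also what makes deliberate collisions possible.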
Insert your favourite dystopia to see where that ends up and how companies benefit from it.
Except for that pesky detail of hash collisions
"We tried to build an even deeper panopticon to enslave you. Drats, you and your Democratic process. We thought we'd pulled the wool over your eyes claiming it was for the kids. We'll get you next time you peons. It's just a matter of time."
Fuck you.
While I want parents to be able to protect kids in a sensible manner, selling out everyone else in civilization and our core values isn't a price we should ever consider paying in so-called democratic societies.
FTFY
Source? Specifically that they paid "large sums" after it came out they were child sex traffickers? Otherwise you can't (or shouldn't) expect companies to be doing private investigations prior to donating.
The problem right now is that they can be held liable for distributing CSAM content on their services and, since April 3, they can also be fined if they try to detect that content. It's an impossible situation.
Now, I'm not claiming that these companies always have noble intentions. But there's nothing nefarious here -- they just want regulatory certainty: do X, Y, and Z and you won't be fined or sued.
Implementing end-to-end encryption on relevant communication services could mitigate many risks that come with hosting user content.
It would protect users from Big Tech spying and still allow affected users to report if something sketchy is going on. Best of both worlds.
In any case, it would be a good start.
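A minimal sketch of that split, assuming the third-party `cryptography` package and a pre-shared key standing in for a real end-to-end key exchange (the reporting hook is hypothetical, not any existing service's API):

    # pip install cryptography -- Fernet stands in here for a real E2EE scheme
    from cryptography.fernet import Fernet

    # Assumed: the key is established between the two endpoints only;
    # the service operator never sees it.
    shared_key = Fernet.generate_key()
    sender, recipient = Fernet(shared_key), Fernet(shared_key)

    ciphertext = sender.encrypt(b"private message")
    # What the platform stores and relays: opaque ciphertext, nothing to scan.

    # The recipient can still decrypt locally and, if the content is abusive,
    # choose to report the plaintext to the platform or the authorities.
    plaintext = recipient.decrypt(ciphertext)

    def report_abuse(msg: bytes) -> None:
        # hypothetical user-initiated reporting hook, illustrative only
        print("user report:", msg.decode())

    report_abuse(plaintext)

The point is that moderation signals come from the endpoints that can actually read the message, not from a provider scanning everyone's traffic.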