TikTok will not introduce end-to-end encryption, saying it makes users less safe
397 points by 1659447091 22 hours ago | 381 comments

Traster 14 hours ago
I think this is... fine? Am I just totally naive? I think it's fine to say "You don't really have privacy on this app" - as long as there are relatively good options for apps that do have privacy (and I think there are). TikTok is really a public-by-default type of social media; there's not much idea of mutual following or closed groups. So sure, you don't have privacy on TikTok; if you want it you can move to Snapchat or Signal or whatever platform of your choice.

Like, it's literally a platform that was run under the watchful eye of the CCP, and now the US version is some kleptocratic nightmare, so I just don't see the point in expecting some sort of principled stance out of them.

In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform. If you're going to embrace 'privacy' I do think it's on you to also then put additional resources into tackling the downsides of that.

reply
londons_explore 14 hours ago
Tiktok has private messaging, and it is used by hundreds of millions of people.

IMO no consumer service should have private 1:1 messaging without e2e. Either only do public messaging (i.e. like a forum), or implement e2e.

reply
RobotToaster 12 hours ago
Tiktok has direct messages, they don't even call them private.

It's better that they're honest about this, nobody should believe for a second that WhatsApp or FB messages are truly E2EE.

DM on social media shouldn't be used for anything remotely private. It's a convenience feature, nothing more.

reply
dvngnt_ 4 hours ago
> nobody should believe for a second that WhatsApp or FB messages are truly E2EE.

Meta still tracks analytics which isn't good for privacy, but I'm not aware of any news of them or 3rd parties reading messages without consent of one of the 1st parties? Signal is probably much better though

reply
chimeracoder 2 hours ago
> Meta still tracks analytics which isn't good for privacy, but I'm not aware of any news of them or 3rd parties reading messages without consent of one of the 1st parties? Signal is probably much better though

Correct. WhatsApp uses the Signal protocol, and there is zero evidence of them reading message contents except with the consent of one of the users involved (such as a user reporting a message for moderation purposes).

(And before anyone takes issue with that last qualifier, consent from at least one party is the bar for secure communications on any platform, Signal included. If you don't trust the person you are communicating with, no amount of encryption will protect you).

Discovering a backdoor in WhatsApp for Facebook/Meta to read messages would be a career-defining finding for a security researcher, so it's not like this is some topic nobody has ever thought to investigate.

reply
pixl97 4 hours ago
>I'm not aware of any news of them

Yet. Until they say "We delete these messages after X time and they are gone gone, and we're not reading them," assume they are reading them, or will read them and the information just hasn't gotten out yet.

I mean we keep finding more and more cases where companies like FB and Google were reading messages years ago and it wasn't till now we found out.

reply
nindalf 3 hours ago
> We delete these messages after X time

They never had the plaintext of the messages in the first place, so they don't need to delete them. That's what end-to-end encrypted means.

reply
hogwasher 55 minutes ago
Whether Facebook/Meta can read the plain text of the messages or not depends on whether that encryption is "zero knowledge" or not, aka: does Facebook generate and retain the private encryption key, or does it stay on the users' devices only, never visible to Facebook or stored on Facebook servers?

In the former case, Facebook can decrypt the messages at will, and the e2ee only protects against hackers, not Facebook itself, nor against law enforcement, since if Facebook has the decryption key they can be legally compelled to hand it over (and probably would voluntarily, going by their history).

reply
throw0101c 12 hours ago
> Tiktok has direct messages, they don't even call them private.

It may not be called that, but what are users expecting? Some folks may later be surprised when a warrant gets issued (e.g., from a divorce judge).

reply
giancarlostoro 12 hours ago
If you are a grown adult and don't do research on “messaging apps” (which TikTok is not), then that's really on you.
reply
foobarchu 7 hours ago
This viewpoint isn't a slippery slope, it's a runaway train.

"You moved into a neighborhood with lead pipes? That's on you, should have done more research" "Your vitamins contained undisclosed allergens? You're an adult, and it didn't say it DIDN'T contain those" "Passwords stolen because your provider stored them in plaintext? They never claimed to store them securely, so it's really on you"

reply
AlexandrB 2 hours ago
Legislating that everyone must always be safe regardless of what app they use is a one-way ticket to walled gardens for everything. This kind of safety is the rationale behind things like secure boot, Apple's App Store, and remote attestation.

Also consider what this means for open source. No hobbyist can ship an IM app if they don't go all the way and E2E encrypt (and security audit) the damn thing. The barriers to entry this creates are huge and very beneficial to the already powerful, since they can afford to deal with this stuff from day one.

reply
sleepybrett 5 hours ago
this isn't anything new, however. No messaging has been actually private since forever, that's why encryption was invented. To keep secrets and to pass those secrets in a way that can be observed without revealing the secret.

Telephones can be tapped; people sold special boxes that would encrypt/decrypt that audio before passing it to the phone or to the ear. Mail can be opened, covertly or not. AIM was in the clear (I think at one point fully in the clear, later probably in the clear as far as the AOL servers were concerned)...

Unless the app/method is directly lying to users about being e2ee, it's not a slippery slope, it's the status quo. Now, there are some apps out there that I think are lying. They claim they are 'encrypted' but fail to clarify that it's only private on the wire, like the AIM story: the message is encrypted while it flies to the 'switchboard', where it's plain text, and then it's wrapped in encryption again on the wire to send it to the recipient.

The claim here that actually makes me chuckle is somehow trying to paint e2ee as 'unsafe' for users.

reply
oarsinsync 11 hours ago
If you are a grown adult and don't do research on "<insert any topic that could have a material negative impact on your life, but that is not currently on your radar as being a topic that could have a material negative impact on your life>" then that's really on you.

Unfortunately, this doesn't scale.

reply
hogwasher 45 minutes ago
It definitely ignores that many people don't have time. If someone is working over 40 hours per week, plus maybe doing unpaid labor taking care of kids or elders, where are people supposed to find the time and energy to brush up on a million different topics they don't even know they might not know enough about? Especially if they might also have medical issues, or hobbies, or want to have any time at all to relax.

Obviously, one way to improve the situation would be to make sure people are paid fairly and not overworked and have access to good and affordable or free childcare and elder-care and medical care, but corporations don't want that either. If anything, they're incentivised to disempower workers and keep them uninformed, and to get as much time out of them as they can for as little money as possible.

reply
wizardforhire 11 hours ago
Well it does scale… just not in the way that is good for democracy.
reply
red-iron-pine 7 hours ago
80% of the population does not and will never do that level of deep dive on apps

same discussion for any form of technology be it TVs or changing their car's oil

the deliberate app-store-ification of all things computer is also designed to keep people from asking those questions -- just download it and install, pleb.

it's why the Zoomers can't email attachments or change file types: all of the computers they grew up with were designed so they never had to understand what happens under the hood.

reply
johnisgood 4 hours ago
And I think because of all the handholding we are left worse off.
reply
hogwasher 38 minutes ago
Most people couldn't tell you how their car works, at least not enough to fix it. Is that handholding, too?

People can't be knowledgeable about everything. There's just too much information in the world, and too many different skills that could be learned, and not enough time.

A carpenter can rely on power tools without understanding fully how the tools work, and it's fine, as long as the tools are made to safe standards and the user understands basic safety instructions (e.g. wear protective eyewear).

To me, making sure that apps don't screw with people, even if they don't understand how the apps work, is roughly the equivalent of making sure power drills are made safely so they don't explode in peoples' hands.

reply
s3p 2 hours ago
Way to dunk on OP, I guess, but nobody is playing semantics here; it's just a question of whether people think this is a messaging channel with one intended recipient.
reply
eloisant 5 hours ago
Honestly I'm tired of every app trying to become the everything app.

Now TikTok wants to be a messaging app. Snapchat has a short video feed just like TikTok. WhatsApp only has a text feed, how long until they also add a video feed?

reply
vedaba 5 hours ago
Meta already has video feeds in facebook and instagram though, I imagine they wouldn’t want to detract users from those
reply
throwaway290 12 hours ago
> nobody should believe for a second that WhatsApp or FB messages are truly E2EE

That's interesting. You think all the firms that audited WhatsApp and the Signal protocol used by WhatsApp, and all the programmers who worked there for decades and could see a lie and leak it if it were true, are all crooks? Valid opinion I guess, but I wouldn't call it "nobody should believe for a second".

(curious you didn't mention Telegram, it is actually marketed as secure and e2e and it has completely gimped "secret chats" that are off by default and used by like almost nobody.)

reply
giancarlostoro 12 hours ago
I forget if it's WhatsApp that technically lets you sync chats in unencrypted form to iCloud, which is the “loophole” around this, though you can lock down your iCloud even tighter. Not sure if Apple can do much if you fully lock down your iCloud, and not sure if this has been legally tested? It's not a very advertised feature, it's just a setting.
reply
ianburrell 5 hours ago
iCloud backups are encrypted, and can be end-to-end encrypted.

Also, backups have nothing to do with the messages being end-to-end encrypted. Like if you don't use a passcode on the phone, the messages are still encrypted.

reply
oarsinsync 11 hours ago
WhatsApp iPhone syncs to iCloud unencrypted by default[1].

iMessage also syncs to iCloud unencrypted by default[2].

[1] Depends on you paying for iCloud storage, so that you have space for a full phone backup to occur.

[2] Might be "free" with "iMessage in iCloud", an option to enable separately.

reply
throwaway290 11 hours ago
> WhatsApp iPhone syncs to iCloud unencrypted by default[1].

Not true. You must choose to enable it or not when you set up new phone. On mine it does not back up

reply
monooso 9 hours ago
If you must "choose to enable" encryption, that implies it's off by default. If so, GP's statement is accurate.
reply
simsla 9 hours ago
Choose to enable backups.
reply
throwaway290 7 hours ago
No, I mean you must select yes or no. You can't use WhatsApp until you make a choice yourself.
reply
gzread 11 hours ago
The Android version syncs all your chat logs to Google Drive without encryption by default. That's the backdoor.
reply
throwaway290 11 hours ago
Right now it got a switch to enable e2e for backups, but yeah I think default backup is probably a workaround...
reply
max-privatevoid 6 hours ago
I'll believe it when it's FOSS
reply
throwaway290 4 hours ago
You mean you will read all the code with dependencies and compile it yourself to make sure? ;) Good for you, but good luck creating a popular e2e messenger then.
reply
trashb 13 hours ago
In my experience most forums have private messaging.

Additionally I think it is fine to say "we don't support e2ee". I prefer honesty to a bad (leaky) e2ee implementation, at least the user can make an informed choice.

reply
Ekaros 13 hours ago
I agree. At least the take of "Yes, messages are stored on our servers" is honest. And if they are accessed by anything other than a limited subpoena, that is a policy or legal issue.
reply
cucumber3732842 11 hours ago
>In my experience most forums have private messaging.

Yeah, but it's kind of accepted that the forum owner could read it all if they so chose. Maybe this is a holdover from the old days when forums arose and encryption was nowhere near the default.

reply
Bender 8 hours ago
Adding that private self hosted forums can permit uploads of encrypted files, encrypted with a pre-shared secret or a secret shared over a private self hosted Mumble voice chat server.
reply
DoneWithAllThat 10 hours ago
And yet virtually all consumer services with 1:1 messaging lack e2e. This is a bit of a quixotic position to take.
reply
tuwtuwtuwtuw 12 hours ago
The email protocols would like to have a chat with you.
reply
kgwxd 12 hours ago
You can bring your own encryption to that, and bring your own client to automate it.
reply
em-bee 10 hours ago
you can encrypt the content but not the metadata, not even the subject unless you use a customized client that encodes it (like deltachat which doesn't use a subject at all), but then you still have your email address exposed.

for all intents and purposes email is not e2ee.
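The point about exposed metadata is visible directly in what a PGP-style encrypted email looks like on the wire. A minimal stdlib sketch (the addresses and subject are made-up placeholders, and the body is just a stand-in for real ciphertext):

```python
from email.message import EmailMessage

# Even when the body is encrypted (PGP-style), the headers travel in
# the clear. Addresses and subject here are made-up placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Re: weekend plans"  # visible to every relay on the path
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n...ciphertext...\n-----END PGP MESSAGE-----"
)

raw = msg.as_string()
assert "alice@example.org" in raw   # sender exposed to every hop
assert "Re: weekend plans" in raw   # subject exposed too
```

Every SMTP relay between the two mailboxes sees exactly that `raw` string, which is why encrypting only the body falls short of e2ee in the full sense.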

reply
Bender 8 hours ago
Email encryption for most people is sufficient even if the metadata is exposed. One can simply state in their email "Bing Bing Bong" or "Why did you not put the trash out?", which might mean to the recipient: "check the second SFTP server" or "let the cat outside" or "jump on my private Mumble chat server" or "get on my private self-hosted IRC server". The email need not even be encrypted for that matter.

The intended payload can be in an header-less encrypted file on a throw-away SFTP server in the tmpfs ram disk.

reply
tuwtuwtuwtuw 3 hours ago
So it's end to end encrypted except that third parties can see who you communicated with and when? Sure.
reply
Bender 3 hours ago
Exactly.
reply
unethical_ban 3 hours ago
I have never considered metadata a part of the term E2EE. It has always been about the message contents.

I understand that metadata is valuable information for spies/governments and that encrypting or hiding it is valuable for privacy. But if you use that definition, there are almost no E2EE protocols on the planet in use.

First and foremost, any protocol that uses Apple or Google push notifications is giving metadata to those organizations. Even WhatsApp, iMessage, Signal, and Telegram private messages all leak metadata, though the contents of messages are hidden from the provider.

reply
beeflet 3 hours ago
yeah bro genius, that sounds like a totally actionable thing people will do all the time with email. Be sure to drink your ovaltine
reply
Bender 3 hours ago
yeah bro genius

I know, right? I admit that is mostly for people on Linux desktops. People on smart phones are 100% monitored regardless of encryption or fake E2EE that platforms pinky promise is really E2EE like Signal. Shame on Moxie, he knows better.

Ovaltine has a crapload of sugar. Don't drink that horse piss.

reply
tuwtuwtuwtuw 3 hours ago
I can bring my own encryption to tiktok as well. Has roughly the same usability and usage.
reply
sleepybrett 5 hours ago
you can bring your own encryption to ANY messaging platform, doesn't mean it will be easy to use. e2ee just really makes it handy so that users don't need to preshare any keys.
reply
smugglerFlynn 6 hours ago
> as long as there are relatively good options of apps that do have privacy (and I think there are)

Once you have enormous network effect like TikTok has, you don't really have any free selection of alternative apps. You are free to use one, but you will be the only sad user over there.

Regulations are needed that would force large platforms like TikTok and Instagram to enable federation, opening them up to actual competition. This way platforms would be able to compete on monetisation and usability, instead of competing on locking in their precious users more strictly.

reply
acheron 3 hours ago
“Will we ever end the MySpace monopoly?”

> MySpace is well on the way to becoming what economists call a "natural monopoly". Users have invested so much social capital in putting up data about themselves it is not worth their changing sites, especially since every new user that MySpace attracts adds to its value as a network of interacting people.

> "In social networking, there is a huge advantage to have scale. You can find almost anyone on MySpace and the more time that has been invested in the site, the more locked in people are".

https://www.theguardian.com/technology/2007/feb/08/business....

reply
hogwasher 28 minutes ago
Sure, but then everyone moved to Facebook. The monopolist changed, but not the monopolistic market and the lack of consumer choice.

And nobody gained privacy in the process (I rather think everyone lost even more of it).

The situation currently permits only a tiny number of winning companies at a time, and the userbase is locked in even as the site becomes wildly unpopular, until some threshold of discontent is reached, and then everyone moves, and then that new site also enshittifies and the cycle repeats.

Federation is a mechanism whereby people would be able to actually choose providers as individuals and at any time, instead of having to wait years for a critical mass of upset people to build up and leave [current most popular social media site], and instead of being forced to go to [new most popular social media site].

reply
pixl97 4 hours ago
>Regulations are needed

Lolololol. No, not regulations. Regulators. With the people we currently have voted into office in the US the only regulations we are going to get are ones saying Sam and Peter must look at everything you do all the time.

Until we stop voting for more authoritarianism, expect ever increasing amounts of authoritarianism.

reply
hogwasher 27 minutes ago
I think it was clear what they meant.
reply
beeflet 3 hours ago
federation would never work. How would it work here? Either you are forcing tiktok to give pageviews to federations of spam, or you are letting tiktok decide which federations to work with, which essentially results in no federation.
reply
mihaaly 2 hours ago
I am fine with TikTok remaining one of those 'we watch what you are doing' platforms. Those who do not care can have that if they wish; I do not mind.

But bullshitting that it makes users more safe, that is ... bullshit! Worse than that, it distorts public opinion, intentionally fooling the gullible.

reply
jmull 11 hours ago
It might be fine if they presented an honest choice.

They are lying straight off though... police and safety team don't read messages only "if they needed to" to keep people safe. They do so for a large variety of other reasons, such as suppressing political dissent and asserting domination and control.

I don't think we can expect most people to understand TikTok's BS here either. I notice even a skeptic like you is uncritically echoing the dubious conflation of privacy and CSAM.

reply
hobs 10 hours ago
Anyone who doubts the requirement for e2e messaging should not be considered a skeptic, they are fully buying into whatever narrative LEO would like you to believe.
reply
dheera 4 hours ago
Fine with me too. I think many other apps (WhatsApp, FB, etc.) are using E2EE for PR purposes and are not actually good implementations of E2EE.

Good implementations of E2EE:

1. Generate the key pairs on device, and the private key is never seen by the server nor accessible via any server push triggered code.

2. If an encrypted form of the private key is sent to the server for convenience, it needs to be encrypted with a password with enough bits of entropy to prevent people who have access to the server from being able to brute force decode it.

3. Have an open-source implementation of the client app facilitating verifiability of (1) and (2)

4. Permit the users to self-compile and use the open-source implementation

If a company isn't willing to do this, I'd rather they not call it E2EE and dupe the public into thinking they're safe from bad actors.
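Point 2 is easy to put rough numbers on. A back-of-the-envelope sketch of brute-force cost for a server-stored, password-encrypted private key (the guess rate is an assumed figure for a well-resourced offline attacker, not a measured one):

```python
import math

# Expected time to search half the keyspace for a password-encrypted
# private key an attacker has copied from the server. The guess rate
# is an assumption, not a measured number.
def crack_time_years(entropy_bits: float, guesses_per_second: float) -> float:
    """Expected brute-force time in years (half the keyspace on average)."""
    seconds = (2 ** entropy_bits) / 2 / guesses_per_second
    return seconds / (365 * 24 * 3600)

pin_bits = math.log2(10 ** 4)        # 4-digit PIN: ~13.3 bits
diceware_bits = 6 * math.log2(7776)  # 6 Diceware words: ~77.5 bits

print(f"4-digit PIN:   {crack_time_years(pin_bits, 1e10):.1e} years")
print(f"6-word phrase: {crack_time_years(diceware_bits, 1e10):.1e} years")
```

Under these assumptions the PIN falls in well under a second while the passphrase takes hundreds of thousands of years, which is the gap dheera's "enough bits of entropy" requirement is pointing at (a slow KDF like scrypt or Argon2 shrinks the attacker's guess rate further).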

reply
keybored 2 hours ago
That it’s fine because it’s the CCP (commies see all) is a new one.

It’s at best subpar for the same reasons as if it was the usual Silicon Valley spyware.

I could leave well enough alone. But why? Because there are choices? There are five other brands of cereal that do not have 25% sugar? I’d rather be a negative nancy towards these on-purpose addictive, privacy-leaking attention pimp apps.

reply
khalic 13 hours ago
No, saying that e2e encryption makes users _less_ safe is completely dishonest, nothing is fine about this.

The logic of "anything is better than before" is also fallacious.

reply
roncesvalles 13 hours ago
Depends on your definition of "safe". Imagine an adult DMs a nude photo to a minor (or other kinds of predation).

If it's E2EE, no one except the sender and receiver know about this conversation. You want an MITM in this case to detect/block such things or at least keep record of what's going on for a subpoena.

I agree that every messaging platform in the world shouldn't be MITM'd, but every messaging platform doesn't need to be E2EE'd either.

reply
shakna 13 hours ago
The receiver has a proven and signed bundle, that they can upload to the abuse report. So the evidence has even stronger weight. They can already decrypt the message, they can still report it.
reply
michaelmior 13 hours ago
Yes, but this leaves reporting by the minor as the only way to identify this behavior. I'm not saying I trust TikTok to only do good things with access to DMs, but I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted.

I'm not saying no E2E messaging apps should exist, but maybe it doesn't need to for minors in social media apps. However, an alternative could be allowing the sharing of the encryption key with a parent so that there is the ability for someone to monitor messages.

reply
hogwasher 13 minutes ago
Are you suggesting all messaged photos should be scanned, and potentially viewed by humans, in case it depicts a nude minor? Because no matter how you do that, that would result in false positives, and either unfair auto-bans and erroneous reports to law enforcement (so no human views the images), or human employees viewing other adults' consensual nudes that were meant to be private. Or it would result in adult employees viewing nudes sent from one minor to another minor, which would also be a major breach of those minors' privacy.

There is a program whereby police can generate hashes based on CSAM images, and then those hashes can be automatically compared against the hashes of uploaded photos on websites, so as to identify known CSAM images without any investigator having to actually view the CSAM and further infringe on the victim's privacy. But that only works vs. already known images, and can be done automatically whenever an image is uploaded, prior to encryption. The encryption doesn't prevent it.
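The exact-match flow described above can be sketched in a few lines. Everything here (the hash set, the sample bytes, the function name) is a made-up placeholder, and real deployments use authority-maintained lists and perceptual hashes like PhotoDNA rather than SHA-256, so that re-encoded or resized copies still match:

```python
import hashlib

# Hypothetical exact-match check run on an upload before encryption.
# The service only learns whether the digest is in the list; it never
# needs the plaintext of non-matching files.
KNOWN_HASHES = {
    # Placeholder entries standing in for an authority-maintained list.
    hashlib.sha256(b"known-bad-example").hexdigest(),
}

def matches_known_hash(file_bytes: bytes) -> bool:
    """True iff the file's digest appears in the known-hash set."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_hash(b"known-bad-example"))       # True
print(matches_known_hash(b"ordinary holiday photo"))  # False
```

As the comment says, this only catches already-known images; novel material never matches, which is the stated limitation of the approach.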

Point being, disallowing encryption sacrifices a lot, while potentially not even being that useful for catching child abusers in practice.

I'm sure some offenders could be caught this way, but it would also cause so many problems itself.

reply
danlitt 12 hours ago
> I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted

Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant? People are wary of this sort of thing not because they think law enforcement is more effective when it is constrained, but because how easily crimes can be prosecuted is only one dimension of safety.

> However, an alternative could be allowing the sharing of the encryption key with a parent

Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?

reply
InsomniacL 12 hours ago
> Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant?

Police can access your home with a warrant.

Police cannot access your E2EE DMs with a warrant.

reply
danlitt 6 hours ago
Not answering my question!

> Police cannot access your E2EE DMs with a warrant.

They can and do, regularly. What they can't do is prevent you from deleting your DMs if you know you're under investigation and likely to be caught. But refusing to give up encryption keys and suspiciously empty chat histories with a valid warrant is very good evidence of a crime in itself.

They also can't prevent you from flushing drugs down the toilet, but somehow people are still convicted for drug-related crimes all the time. So - yes, obviously, the police could prosecute more crimes if we gave up this protection. That's how limitations on police power work.

reply
Tadpole9181 2 hours ago
> But refusing to give up encryption keys and supiciously empty chat histories with a valid warrant is very good evidence of a crime in itself.

Uh, it absolutely isn't? WTF dystopian idea is this?

reply
allreduce 11 hours ago
And they shouldn't be able to. Police accessing DMs is more like "listening to every conversation you ever had in your house (and outside)" than "entering your house".
reply
cucumber3732842 11 hours ago
>Police cannot access your E2EE DMs with a warrant.

Well, they kind of can if they nab your cell phone or other device that has a valid access token.

I think it's kind of analogous to the police getting at one's safe. You might have removed the contents before they got there but that's your prerogative.

I think this results in acceptable tradeoffs.

reply
gzread 11 hours ago
Yes, that is a fair argument and most countries allow the use of surveillance cameras in public for this reason.
reply
gzread 11 hours ago
SimpleX handles this by sending the decryption keys when the receiver reports the message.
reply
khalic 13 hours ago
Keeping children safe and prosecuting are two different concepts, only vaguely related. So no, being able to track pdfs doesn't make children safer. What keeps them safe is teaching them safe communication habits and keeping them away from things like TikTok.

We shouldn't make the world a worse place for every one because some parents can't take care of their children.

reply
cucumber3732842 11 hours ago
>Keeping children safe and prosecuting are too different concepts, only vaguely related.

See also: That time the FBI took over a CSAM site and kept it running so they could nab a bunch of users.

reply
Ajedi32 9 hours ago
Not necessarily saying what they did was right, but I think there's a strong utilitarian argument to be made that what they did in that case was, in fact, the best way to keep children safe.

What's more dangerous? CSAM on the internet? Or actual child predators running loose?

reply
cucumber3732842 9 hours ago
That stuff spreads and re-spreads just like anything else people download off the internet. There's a pretty strong argument for shutting it down right away. IIRC most users were outside jurisdiction.
reply
integralid 8 hours ago
Even if one more person was prosecuted it was worth it. If you shut down an illegal website a new one will show up a month later, with the same people involved, and you achieved nothing.
reply
roughly 6 hours ago
What was the rate of child exploitation in the GDR?
reply
kgwxd 12 hours ago
Ugh. The kids aren't even safe from the people making, and enforcing laws. This argument should be long over for anyone with eyes or ears.
reply
philipallstar 13 hours ago
Imagine Hamas are your government and want to figure out who's gay. You don't want a MITM in case they can do this.

Pick your definition of safe.

reply
trashb 13 hours ago
In that case don't use Tiktok dm's to discuss your sexuality. I think it is strange that people feel like they have to be able to talk on sensitive topics over every interface they can get their hands on.

Similarly, with "traditional" media you may not want to have such a private conversation on a radio broadcast. Perhaps you would rather have it on the phone or over snail mail, as there is more of an expectation of privacy on those media.

reply
roughly 6 hours ago
Right, but it currently isn't a sensitive topic - homosexuality is, as of 2026, broadly legal in the United States. That's a relatively new state of affairs, historically speaking, and one which Afghanistan shared as recently as 2021.
reply
philipallstar 10 hours ago
I'm commenting in the context of the conversation, not in a vacuum. You could just as (in fact, much more) easily say that children shouldn't be on apps with private messaging enabled. That would help a lot more, and then we could keep e2ee.
reply
danlitt 12 hours ago
> there is more of an expectation of privacy on those medium

What does the "p" in "pm" stand for?

reply
trashb 12 hours ago
excuse me, I confused "Private messages" (pm) for "Direct messages" (dm).

I will update above

reply
danlitt 7 hours ago
I don't think you confused anything, except for the terminology the platform uses. There is an obvious expectation of privacy when sending direct messages!
reply
sleepybrett 5 hours ago
Hasn't been true ANYTIME IN HISTORY. Hell it was well understood even by children that no conversation you had on the telephone was truly private. That's why cyphers were invented.
reply
gzread 11 hours ago
it stands for "not a public timeline post"
reply
danlitt 7 hours ago
It should be obvious from how contrived your wording is that nobody thinks of them this way.
reply
miki123211 12 hours ago
This is fine if you have TLS encryption and the platform is not local.

Sure, they can fabricate some evidence and get access to your messages, in which case, valid point.

reply
miki123211 12 hours ago
It makes certain users less safe in certain situations.

E2E makes political activists and anti-Chinese dissidents safer, at the cost of making children less safe. Whether this is a worthwhile tradeoff is a political, not technical, decision, but if we claim that there are any absolutes here, we just make sure that we'll never be taken seriously by anybody who matters.

reply
khalic 12 hours ago
Claiming e2e makes children less safe is flat out dishonest. And the irony of you criticising “absolutes” after trying to pass one is just delicious.
reply
gzread 11 hours ago
What are children at risk of, when E2EE is used?

What are children at risk of, when E2EE is not used?

reply
roughly 6 hours ago
> What are children at risk of, when E2EE is used?

Potential exposure to abusive adults.

> What are children at risk of, when E2EE is not used?

State-sanctioned violence.

reply
reactordev 10 hours ago
This is the argument they can’t have…
reply
fendy3002 12 hours ago
well having no e2e encryption is safer than having a half-baked e2e encryption that have backdoor and can be decrypted by the provider.

And as for TikTok's stance, I think they just don't want to get involved with the Chinese government over encryption (and give users a false sense of privacy).

reply
dfxm12 10 hours ago
Trying to gaslight the public into thinking end to end encryption makes users less safe is not fine.
reply
mrexcess 9 hours ago
>I think it's fine to say "You don't really have privacy on this app"

Disagree. To analogize why: privacy isn't heated seats, *it's seat belts*. Comfort features and preferences are fine to tailor to your customers and your business model. Jaguar targets a different market than Ford, and that's just fine.

Safety features should be non-negotiable for all. Both Jaguar and Ford drivers merit the utmost protection against injury in crashes. Likewise, all applications that offer user messaging functionality should offer non-defective, non-harmful versions of it. To do that, e2e privacy is absolutely necessary.

>I just don't see the point in expecting some sort of principled stance out of them.

This is the defeatism that adds momentum to a downhill trajectory. Exactly the opposite approach arrests the slide - users expecting their applications and providers to behave in principled ways, and punishing those who do not, are what keeps principles alive. Failing to expect lawful and upright behavior out of those you depend on, be they political leaders or software solutions providers, guarantees that tomorrow's behavior will be less lawful and upright than yesterday's. Stop writing these people a pass for this horrible behavior, and start holding them unreasonably accountable for it, then we'll see behavior start to change in the direction that we mostly all agree that it needs to.

The most effective protests against internet censorship came from massive grass roots movements, with users drawing a line in the sand that they will not tolerate further impositions on their freedom.

>In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform.

The irony is so manifest of billions of people having their privacy stripped by politicians and business elites in the name of protecting our children, while those politicians and business elites conspire en masse to prey on and sex-traffic our children. If these forces actually took those concerns seriously, rather than seeing them as an opportunity to push ulterior motives, they'd be eating each other alive right now. Half of DC, half of Hollywood, and at least a tenth of most major college administrations would ALL be in the dock.

reply
Traster 9 hours ago
Tesla doesn't have parking sensors. They're a safety feature. There are lots of safety features in cars that are optional; we've got an entire rating system for the safety of cars.

We're talking about an app that's controlled by the CCP, I do expect them to take a principled stance - stances like Taiwan is a part of China and you can't be openly critical of the leader of the party. They don't have the same principles as you. You can force them to put in E2EE, but you can't force them to be honest about it or competent about it. I would rather know what we're getting than to push them to lie.

This is the same thing as the OpenAI/Anthropic thing. You've got Anthropic taking a principled stance and paying a price for it, and you've got OpenAI claiming to take the same stance, but somehow agreeing to the terms of the DoW. Do you think it's more likely that Anthropic carelessly caused themselves massive trouble, or do you think OpenAI is claiming to have got concessions that clearly won't work in practice? I think it's naive to think the former.

reply
mrexcess 7 hours ago
>We're talking about an app that's controlled by the CCP, I do expect them to take a principled stance

In the area of large scale internet service providers, who do you expect to take a principled stance, and why do you expect them to take it?

If the answer is, "nobody", then why keep singling out China? And if the answer isn't "nobody", then how do we apply the same pressures and principles to TikTok and other platforms that offer messaging?

This isn't some abstract concern. We know that WESTERN journalists, activists, and others have been murdered in acts of transnational repression that either began or were focused and abetted by communications surveillance aimed toward political dissidence. It seems incredibly naive to believe that current Western political and military leadership could ever be dissuaded from taking effective action (and such surveillance and repression campaigns certainly are effective) by moral qualms unsupported by strong checks and balances of accountability. In other words - this sort of repression most likely continues happening to journalists, activists, human rights lawyers, and other political dissidents, in our society, today. Enabled by the refusal of our service providers to protect us, their users.

It seems incredibly naive - civilization threateningly so - to write a pass to anyone, let alone Larry Ellison, for opting to deliberately expose "his" users to this risk. Nothing is OK about this dereliction of responsibility towards them.

reply
xeckr 19 hours ago
Brilliant. They're repackaging the argument governments have long made about E2EE being dangerous to children.
reply
debazel 18 hours ago
Children are just too effective a tool when building a surveillance state. We should have banned children from owning open computers a long time ago, just like we do with alcohol, driving licenses, etc.

Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally, we could provide service points to unlock devices when their owners turn 18, to avoid e-waste as well.

This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.

reply
azinman2 17 hours ago
Would be a nightmare to implement and achieve the goal, but I have to say I think it’s more right than wrong. All of the data is very clear about the harms.

China has restrictions for social media and screen time for kids — how do they implement this?

reply
debazel 16 hours ago
I actually think this would be easier to implement than many of the current ID verification methods I've seen being pushed. We already have the infrastructure for selling age restricted goods, this is nothing new. Manufacturers that are unable to restrict their hardware in a "child" mode don't have to do anything and could simply continue selling to adults only.

It's obvious we're moving in a direction where we are going to get these restrictions in one way or another, and this is the only way I've come up with that doesn't come with serious privacy implications.

Most importantly, this solution would be simple for anyone to understand. You don't need to be a cryptography expert to understand there are child safe devices and then there are unrestricted devices for adults.

reply
vladms 15 hours ago
Would the parents comply though? Many of the restrictions work because most adults agree they're OK. For example with alcohol, children could drink as much as they want at home, if adults permitted it.

If most adults were convinced there is an issue, there are probably enough lock-down modes even nowadays; I'm not sure it is a "technical" problem.

reply
debazel 15 hours ago
I strongly believe that most would, actually. All parents I've talked to have had issues with parenting their children's online activity. They know there are harmful things they want to prevent them from accessing, but it is simply too hard to configure and set up existing tools for it. (Besides, every single friend they have doesn't have any restrictions, so it all seems pointless.)

I can also see large support for uploading ID to various services when talking about kids, but when you re-frame the question to adults, most seem to really dislike the idea immensely.

Sure there will be children with access to unrestricted devices, just like we had kids with porn mags hidden in a forest somewhere back in the day, or how that one sketchy guy was buying alcohol, etc. But I think this is an acceptable level of risk for whatever harm people want to prevent.

reply
taikon 15 hours ago
Definitely makes it easier for parents. It also normalizes screen time limits for kids. When none of your kids' friends have screen time limits, it's harder to enforce. When at least there's a few of them, it's easier to get buy-in from your kids.
reply
Ajedi32 9 hours ago
At that point it's on the parents. We can't stop parents from giving their kids alcohol or drugs either. (Not saying internet access is necessarily on the same level as that but you get the point.)
reply
oarsinsync 10 hours ago
> Would the parents comply though?

Consider that even with something as divisive as covid lockdowns and vaccines, the overwhelming majority of people complied with government instructions.

There are a minority of people currently refusing to vaccinate their children properly, and their fucking around is being found out with measles outbreaks in various countries.

Why would this be different? Why wouldn't it be a minority of parents permitting their children to drink, to smoke, to use unrestricted computing resources?

reply
expedition32 14 hours ago
Children are not the property of their parents- the government can and does take over parental responsibility.
reply
cimi_ 15 hours ago
I don’t understand how id-ing the buyer helps? What is the age restricted good here?

Are you saying that kids now buy their phones with pocket money without their parents knowing?

> It's obvious we're moving in a direction where we are going to get these restrictions in one way or another

It’s not obvious, it’s just sad. I still hope reason will prevail in this.

reply
cezart 15 hours ago
The age restricted good is an unrestricted computer.
reply
Kim_Bruning 13 hours ago
Oh, that actually seems ... bad. On the gripping hand... restricted in which way? I learned to program on the BBC B, for instance.

I keep thinking that computers that are actually made to be good for children should be a thing. Perhaps like "A Young lady's Illustrated Primer" ( https://en.wikipedia.org/wiki/The_Diamond_Age )

reply
gzread 11 hours ago
Did you buy your own BBC B though?
reply
gzread 11 hours ago
The new California law requires all operating systems to have a child mode.
reply
iamnothere 4 hours ago
Why on earth would we be looking to China as a template on how we should run free societies? Are you mad?
reply
azinman2 4 hours ago
Good ideas can come from anywhere. Shutting yourself off only does a disservice. You don’t need to replicate 100% of another society to recognize individual strengths.

https://www.technologyreview.com/2023/08/09/1077567/china-ch...

That describes something very similar to what the OP suggested.

reply
iamnothere 4 hours ago
Yeah, sounds like something from an authoritarian police state.

> Essentially, this is a cross-platform, cross-device, government-led parental control system that has been painstakingly planned out by Beijing.

> The rules are incredibly specific: kids under eight, for instance, can only use smart devices for 40 minutes every day and only consume content about “elementary education, hobbies and interests, and liberal arts education”; when they turn eight, they graduate to 60 minutes of screen time and “entertainment content with positive guidance.” Honestly, this newsletter would have to go on forever to explain all the specifics.

We don’t do this in free societies. Let the parents decide.

reply
babyshake 14 hours ago
It's a nightmare to some extent to prevent underage people from consuming alcohol, if you want to phrase it that way. But we don't try to ban stores from selling alcohol because of concerns that children will drink it. Instead we require the store to check ID.
reply
butterbomb 6 hours ago
> how do they implement this?

Centralized power and being unafraid to use authoritarian tactics. Also the general cultural ethos of the people.

reply
philipallstar 13 hours ago
> China has restrictions for social media and screen time for kids — how do they implement this?

China is much more socially conservative, and less likely to abandon their kids to the latest thing.

reply
k1musab1 16 hours ago
Passport /citizen ID linked to your WOW account, etc.
reply
TkTech 16 hours ago
Which has never worked. Korea had a system to prevent kids from gaming after midnight for something like 15 years. All it did was make Korean kids very good at memorizing their parents ID.
reply
hnfong 15 hours ago
In China they link the ID to a phone number (via mobile carriers) and the online services require you to authenticate using the phone (SMS etc.) Unless the kids are able to secretly access the parent's phone there's no low-effort way to work around the system.

I don't know about Korea but if memorizing an ID number works, then that's just a badly designed system.

I'm not sure what your argument is really, unless you're saying there's technically and absolutely no feasible way to securely verify the age of a person before allowing them to access an online service (even if you allow the government to be authoritarian)

reply
reactordev 10 hours ago
The point is, where there’s a will, there’s a way.
reply
em-bee 10 hours ago
when i signed up for mobile service or for internet service in china (i don't remember the specifics), i was given half a dozen sim cards for use in my family. so they were all tied to my or my wife's name, but used by anyone who needed one. i believe the in-laws got at least one or two, and my kids would have gotten one, had they been old enough to have their own phone. i don't know if there was any rule restricting who we could give those cards to.

the actual users of each simcard did not have to identify themselves. so at least then it wasn't about age controls, but it obviously would allow tracing the owner eventually.

reply
broken-kebab 14 hours ago
Maybe it does work exactly as intended. It gives parents more leverage to restrict their kids gaming but many parents just don't care. And it's ok I guess, the society probably needs some flexibility in raising the next gen.
reply
eru 16 hours ago
Parents are already allowed to restrict their children access to 'dangerous' things like open computers or knives.
reply
jaapz 15 hours ago
Parents are also allowed to restrict their children access to alcohol and cigarettes, but it seems a government ban on them buying those things works better
reply
iteria 4 hours ago
Alcohol is totally legal for a child to drink in my state as long as consumed privately. It's only illegal for them to buy. My parents gave me alcohol all the time in order to teach me about it and the result was that I didn't really drink when I turned 21 or have any urge to sneak it.

That's exactly how I'm doing technology. I sign my kid up for kid accounts. And I apply parental controls.

reply
AnthonyMouse 14 hours ago
Given the ease with which kids who want them can get any of those things in schools, it's not clear that the government ban is actually doing anything of significance or that the reduction in usage isn't more a result of convincing people that those things are actually bad for them so they choose not to partake despite the continued widespread availability.

Notice that consumption of those things is also down for adults even though adults are not banned from getting them.

reply
broken-kebab 14 hours ago
Doesn't seem to be a universal truth to me. As a teenager I had rather easy access to both cigarettes and alcohol in spite of the usual age restrictions legally imposed. I didn't care what the gov't thought about it. I did care about what my parents would do if I got caught drunk, though. That was my real barrier.
reply
reactordev 10 hours ago
There was always some guy outside willing to make $10 to buy what you needed.
reply
khalic 10 hours ago
I’m sorry in what world is age restriction effective at keeping teens away from alcohol? Are you from the 60s?
reply
thaumasiotes 15 hours ago
I don't think debazel was saying that children should have been banned from owning computers for the benefit of the children. He was saying that children should have been banned from owning computers so that the government would have no excuse to regulate what's allowed on computers.
reply
bougainvilley 11 hours ago
So we agree that governments are only using the safety of children as a pretext to extend their control over people's lives; otherwise there would be better solutions to protect children from the harms of the internet.
reply
greybcg 16 hours ago
At the same time, I remember growing up in the internet's wild west and bad encounters weren't an issue for me because of the golden rule I was taught from the start: you don't give your personal information and you don't interact with complete strangers. Learning to navigate the web instead of being in a walled garden was helpful in many ways.

The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on this information? If investigative resources are already spread thin, it's not like they're gonna catch more criminals by gathering more there. Repelling questionable individuals off the platform with lots of transparency -is- an effective way, but just a specific tool for a symptom.

I think a part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery and warding off of social outliers in a way that doesn't put all responsibility on opaque algos or corporations.

A part of their e2e keys could be shared using an intentionally obtuse way, like mailing an item or a physical "friend code". That way parents and vetted friends can have their privacy. You don't need to tie an ID to someone's person to get positive confirmation of someone's poor behaviour. If someone crosses the line, then parents can see it and escalate. In addition, what would happen to a child with abusive parents who can then arbitrarily restrict and deny a child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?

reply
debazel 15 hours ago
> I think a part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially the real problem is discovery and warding off of social outliers in a way that doesnt out all responsibility on opaque algos or corporations.

This is actually a key consideration for the proposed implementation. The biggest issue for parents when restricting their children's online activity is that they simply don't understand the tools available for it.

By having a "child mode" iPhone, parents don't have to know any of that. They simply buy the iPhone Kids for their children and then get a plain iPhone for themselves.

If these restrictions were to actually be enforced by law as well, then it would make it very easy for teachers and other guardians to check if a device is appropriate for the child using it.

reply
novok 15 hours ago
From what I've seen, the bad effects don't necessarily come just from free access to the internet, but from the fact that everyone around them in their social group has a video camera that can covertly record; they're all immature children, so you can't slip up once without getting kid-cancelled, and they end up doing a collective dissociative freeze response in a self-imposed, emergent panopticon as a result.

So if the teen phone turned into a restricted "call mom" device with no cameras, obvious neon-yellow fuck-you coloring, and a restricted set of apps, and police took away a full phone much like they take away cigs and beer, it might be enough to break the critical mass that creates this issue. They can have dedicated cameras for video club, use the family computer, have an Xbox or Switch, and have whatever tech experience millennials had: the last generation without exponential increases in anxiety, depression and sexlessness.

It's the covert camera + internet that's the key issue.

reply
keybored 52 minutes ago
The most important principle in the modern age is the freedom to prey on wallets. You can’t give parents tools to conveniently restrict what their children do. Impressionable minds ought to live in a lord of the flies state where they are bombarded with stuff to nag to their parents about and give them FOMO about what their friends have that they don’t have.

That’s why children must be free.

reply
MrToadMan 15 hours ago
Locking down children’s devices doesn’t stop adults sharing illegal content with other adults though, so there would still be pressure to monitor communications between adults.
reply
psychoslave 15 hours ago
At some point, laws become an ineffective tool for preventing malevolent people from acting in detrimental manners, no matter what they state. But the prejudices of wicked states will continue to hit the general public ever harder as ever more drastic laws, lacking any balance, are enacted.
reply
hsbauauvhabzb 15 hours ago
I don’t think they’re doing that on TikTok
reply
larodi 13 hours ago
Indeed, way past time. Though no CEO would admit publicly what addiction to attention/social media, gaming, and general screen use causes in children. Of course this should've been regulated similarly to alcohol, but billions would dry up, and it's much easier to witch-hunt marijuana and illegal raves, right?
reply
jjmarr 17 hours ago
> Instead children would own special devices that are locked down and tagged with a "underage" flag when interacting with online services, while adults could continue as normal.

California is mandating OSes provide ages to app stores, and HN lost their mind because it's a ban on Linux.

reply
consp 16 hours ago
> California is mandating OSes provide ages to app stores,

They forgot to put in the provision which exempts apps that do not need an age rating? As in: everything OS-related.

Sounds like a good way to get rid of snap at least since that is where all the commercial bloat is located. Last time I did a fresh Debian install I do not remember installing any app from the os repository which would require age restrictions (afaik).

reply
jjmarr 16 hours ago
> They forgot to put in the provision which exempts apps which do not need an age rating? As in: everything os related.

That's correct. You need to provide your age to install grep.

reply
chillfox 15 hours ago
This honestly sounds like the best proposed solution I have heard.
reply
fhd2 15 hours ago
Agreed. Putting the burden on parents is quite something:

1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but then creeps abuse that all the time. Even if you trust them to not do anything "wrong", it's a lot to put on their shoulders.

2. If you want to put restrictions in place, even if you're an expert, the tools out there are pretty wonky. You can set up a child protection DNS, but most home routers don't make it easy (or even allow you) to set a different DNS server. And that's not particularly hard to circumvent. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I've got the problem on my plate that I need to migrate one of my kids' accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts, it seems the button to add the device is just missing? Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup and configuration you have to do for a reasonable setup is just nuts.

3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.

Some countries are banning social media - sure, that's one thing. But there's a _lot_ of weird places on the internet, kids will find something else. I for one would appreciate dedicated devices or modes for kids < 18. Would solve all this stuff in a heartbeat.

reply
mwigdahl 10 hours ago
After struggling with this problem for a while, we started using Qustodio. It's not perfect by any means, but it's the most broadly effective and usable tool for parental control I've found. Loads better than the confusing iOS native screen time tools.
reply
Razengan 5 hours ago
> while adults could continue as normal.

After providing their identities to prove they are adults, and having all their activities tracked wherever they go and whatever they do.

The first 18 years aren't freedom either, just the system prepping you for what's ahead.

reply
pinkmuffinere 17 hours ago
> We should have banned children

I see you Mr Quaker Oats

reply
brainzap 11 hours ago
apple SDK already can return underage/adult
reply
tayo42 18 hours ago
I can't tell if this is sarcasm or not
reply
pants2 17 hours ago
TikTok has a drug-like effect on the brain. Multiple studies show a clear link between excessive TikTok engagement and increased levels of anxiety, depression, and stress. Maybe it is time we regulate it like a drug?
reply
voidUpdate 15 hours ago
Is that because of engaging with tiktok, or because of the content on tiktok? If the app was exclusively pictures of kittens and nice flowers you saw on your commute, would it have a detrimental effect?
reply
Shitty-kitty 16 hours ago
What do you mean exactly, tax it as a vice?!
reply
keybored 50 minutes ago
It’s about 15 years ahead of its time. Too enlightened for most.
reply
travisgriggs 18 hours ago
Hyperbole of some sort. I think it works on both the positive and negative side of the axis too.
reply
nandomrumber 17 hours ago
I’ll have a packet of cigarettes, a fifth of vodka, and an unrestricted personal electro device.

ID please.

Seems entirely reasonable.

Possibly entirely ineffective, but then again I don't often see children walking around with a bottle of booze.

reply
true_religion 17 hours ago
This is how the internet is run in countries where you need ID to connect to services. It’s not at all dystopian.
reply
haritha-j 15 hours ago
I don't understand why all the child-safety systems require age verification. Why not have a single setting on a smartphone that sends a "child" flag to every app or website, which then reacts accordingly? As long as you ensure that the browser can't be changed or modified, it should be fine.
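A minimal sketch of that idea, assuming a hypothetical header name ("Sec-Child-Flag" is invented here for illustration; no such standard header exists) that the OS would attach to every outgoing request, with the service branching on it:

```python
# Sketch of the "child flag" proposal above. The header name is
# hypothetical -- any real scheme would need a standardized name
# and a way to keep the OS-level setting tamper-proof.
def handle_request(headers: dict) -> str:
    """Serve a restricted experience when the device flags a minor."""
    if headers.get("Sec-Child-Flag") == "1":
        return "restricted experience"
    return "full experience"

# A child-mode device sends the flag; an adult device sends nothing.
assert handle_request({"Sec-Child-Flag": "1"}) == "restricted experience"
assert handle_request({}) == "full experience"
```

The appeal of this design is that the ID check happens once, at the device level, instead of every service collecting identity documents.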
reply
gzread 11 hours ago
The California law works this way, and it doesn't even have to require the browser can't be modified
reply
kgwxd 12 hours ago
Then adults could lie about their age and benefit from the data protection laws only granted to children for some reason.
reply
pino83 14 hours ago
Does it matter? It's just some arbitrary company. They have the freedom to decide those things however they want, right? The customer can then decide whether or not to switch.
reply
AdamN 14 hours ago
It matters because if it works and people continue using the platform then other providers will follow and the only remaining E2EE providers will be niche.
reply
hk__2 14 hours ago
Switch to what exactly?
reply
pino83 13 hours ago
If there is nothing else, then you as a customer screwed up earlier, right? And then maybe the entire strategy/philosophy needs to be reviewed?!

Or, in other words: if there is no alternative, that is your own fault. Either deal with it, or find ways to undo your mistakes.

reply
gzread 11 hours ago
Alternatives to private messaging on TikTok?

Uh, Signal. SimpleX. Session. XMPP/OTR. PGP.

Discussing things on TikTok, that the government must not know about, seems a bad idea.

reply
pino83 4 hours ago
Exactly. And once those ones are established: Why not have all discussions there; not just the ones where you explicitly want to hide something (for whatever dubious or legit reason).
reply
threatofrain 16 hours ago
Ultimately your neighbors must buy the argument. The reason why this argument wins is not because framing is so tricky, but because it connects with the values of your neighbors. Trying to convince people that these aren't actually their values is swimming upriver.
reply
Tepix 15 hours ago
The solution is simple: Take away the argument by blocking children's access to social media. Win-Win.
reply
medi8r 15 hours ago
TikTok is the government, more or less.
reply
ThoAppelsin 17 hours ago
DMs are akin to private conversations in real life. Thus, every DM feature should entail E2EE.

It’s ok for a platform to not feature private conversations. They should just have no DM feature at all, then; make all messages publicly visible.

Private conversations are indeed not for all ages. Parents should be able to grant access to that on individual basis.

reply
kreco 13 hours ago
> They should just have no DM feature at all, then; make all messages publicly visible.

This makes no sense.

I can discuss something in a bar which is not a very private conversation, and I wouldn't care if someone else heard what I was saying. But I also don't want someone to record it and post it on the internet to be seen by the whole world.

Privacy is not just a boolean you toggle somewhere.

reply
bougainvilley 11 hours ago
I suppose they mean that apps shouldn't brand their non-E2EE chat features as private or personal, since that is the default assumption users make when interacting in one-to-one chat.
reply
93po 7 hours ago
In a bar you're not speaking directly into a microphone that is permanently saving everything you say for later instant access by every government and advertising agency that wants to prosecute you or invade your privacy to sell you something
reply
bdamm 16 hours ago
Ah, but you see, soon TikTok will allow parents to spy on their children's DMs, and parents will love this.
reply
gzread 11 hours ago
Isn't that something we asked for? We keep asking for parents to parent their children instead of getting age verification laws, and that is what that looks like.
reply
Galanwe 13 hours ago
I fail to see the link between private conversations/DM and E2EE.

To quote a comment I made some time ago:

- You can call your service e2e encrypted even if every client has the same key bundled into the binary, and rotate it from time to time when it's reversed.

- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.

- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.

E2EE only prevents naive middlemen from reading your messages.
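The second failure mode in that list (a server that stores and pushes client keys) can be sketched with a toy Diffie-Hellman exchange. This is purely illustrative: the group parameters below are tiny and completely insecure, and real systems use X25519 or similarly large groups; the point is only who holds the private keys.

```python
# Toy Diffie-Hellman key agreement (insecure parameters, illustration only).
P, G = 23, 5  # tiny prime and generator; real systems use X25519 or large groups

def keypair(secret: int) -> tuple[int, int]:
    """Return (private, public) for a chosen secret exponent."""
    return secret, pow(G, secret, P)

alice_priv, alice_pub = keypair(6)
bob_priv, bob_pub = keypair(15)

# Each side combines its own private key with the other's public key.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)
assert alice_shared == bob_shared  # genuine E2EE: server only relayed public keys

# Failure mode: if the server "stores and pushes client keys", it holds
# alice_priv too, and can derive the exact same session secret itself.
server_copy_of_priv = alice_priv
server_shared = pow(bob_pub, server_copy_of_priv, P)
assert server_shared == alice_shared  # "E2EE" in name only
```

Which of the two situations you are in is invisible from the outside, which is exactly why the marketing label "E2EE" alone guarantees so little.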

reply
Ekaros 13 hours ago
Fundamentally, actual E2EE is a complicated problem, and probably not very user friendly. It is full of technical trade-offs, and mistakes are very common. Or it leads to situations people do not want: if you lose your phone or it breaks, how do you get your history back? What if you also forgot your password? Or it was stored in a local password manager?

It is a phrase that sounds good. But actually doing it effectively, in a way that the average user understands and can use with minimal effort, is very hard.

reply
bstsb 12 hours ago
no you couldn't. that wouldn't be considered end-to-end encrypted in any modern sense
reply
Galanwe 11 hours ago
What I described is essentially how the vast majority of E2EE messaging platforms work. And I say that having worked for one of them.
reply
quotemstr 5 hours ago
> DMs are akin to private conversations in real life

There are parents out there who would record and AI-analyze every single private conversation their kids have if only the technology enabled it.

reply
Ekaros 16 hours ago
You could have a reasonable legal system where privacy is guaranteed. But you do not need end-to-end encryption for that to be a thing. It really is an orthogonal issue.
reply
theblazehen 14 hours ago
Sure, however kids these days often can't socialize irl - should kids be isolated from friends because they're unable to have any private conversations at all?

During times in which I was unable to socialize irl (eg school holidays), and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.

reply
ranyume 19 hours ago
This might be off-topic but on-topic about child safety... but I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that nowadays most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They already have so much data on their users and optimize their algorithms for those groups in an opaque way.

So yeah, age verification should be taken down, as well as the datamining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.

reply
nandomrumber 17 hours ago
> people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.

Hogwash.

Where are these mythical people who aren’t concerned with both?

reply
ranyume 46 minutes ago
> Where are these mythical people who aren’t concerned with both?

People don't care about "what companies serve them". They only care if the children see sexual content (or things considered deviant). Once sexual and deviant content is filtered, they're happy to give away their children's development to the company's algos.

In effect, the people don't want to concern themselves with what their children consume, unless they're outraged by things normally taboo in their age group. Besides, if everyone is in it "it's not that wrong". They seek reactive entertainment rather than proactive engagement in their children's development.

reply
jbstack 14 hours ago
> Where are these mythical people who aren’t concerned with both?

They're called politicians.

reply
LoganDark 19 hours ago
Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.
reply
ranyume 18 hours ago
> Monitoring children's DMs is the responsibility of the parents, not megacorps

Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibility for the effects they cause in society". Instead of them taking responsibility, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).

reply
acuozzo 18 hours ago
> But what responsibilities do megacorps have? Right now, everyone seems to avoid this question

Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.

reply
da_chicken 18 hours ago
So there should be a human operator manually gatekeeping every individual request to connect with another endpoint?

It's a good thing those human operators couldn't listen in to whichever conversation they wanted.

reply
acuozzo 16 hours ago
Human operators were not required of The Bell Telephone Company by law. Bell switched to mechanical switching stations as soon as doing so was economically advantageous.

(Reconsider my post. I'm arguing for no regulation.)

reply
lmz 13 hours ago
Sure. And "lawful access" intercept capabilities are also required of telcos.
reply
ranyume 18 hours ago
I'd say that at minimum social networks need to be required to show how their algorithms work and allow users control over their data. Users must be able to know why content was served to them. Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.

Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.

reply
drnick1 9 hours ago
> Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests

I think this is the real issue. We should free ourselves from "social networks" such as Tiktok, Facebook, Instagram and others. Even with direct messages truly E2EE, they create countless other privacy problems. They enable surveillance of people at scale and should be completely shunned for that reason alone.

reply
acuozzo 18 hours ago
> social networks need to be required to show how their algorithm works

Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?

Would it not be an undue burden to necessitate the release of the weights every time they change?

Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.

Wouldn't enforcing algorithmic interpretability additionally be an undue burden?

> They must be able to know why a content was served to them.

What if the authors of the code are unable to tell you why?

reply
BlueTemplar 15 hours ago
The use of black boxes like neural networks is already effectively illegal in some jurisdictions for this very reason.
reply
techpression 16 hours ago
I don’t remember reading about ads in phone calls, nor the complete mapping of customers’ behavior for use in contexts other than the phone call.

The apples-to-oranges in this comparison is probably top five on HN ever.

reply
iso1631 12 hours ago
Whatever was required of the New York Times and nothing more.

If the NYT publishes an advert or editorial, it's held accountable for the contents.

reply
j16sdiz 18 hours ago
> But what responsibilities do megacorps have?

Fake and scam ads.

They literally profit from those ads. When an ad distributes malware or runs a scam, they don't take any responsibility.

reply
LoganDark 17 hours ago
> But what responsibilities do megacorps have?

They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.

reply
gzread 11 hours ago
The simplest way that can work is for the child account to be linked to a parent account, and the parent account can see the child account's DMs.
reply
prmoustache 10 hours ago
I also think children do/should have a right to privacy and their parents do not have to know everything.

Kids should be able to write a journal or talk to friends with total trust that this information will not reach their parents.

reply
KaiserPro 15 hours ago
> Monitoring children's DMs is the responsibility of the parents, not megacorps.

Yup, but the tools provided make that easy or hard.

But putting that emotive bit to one side, megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.

It's a bigger issue than encryption; it's editorial choice.

reply
baq 17 hours ago
Mega corps should be compelled to and rewarded for allowing parents to monitor their children’s dms.
reply
DANmode 15 hours ago
> maybe an employer on a work-provided device.

The children yearn for the mines(?).

reply
iso1631 12 hours ago
I'm all for helping parents to do this. Any site requiring age verification should indicate this via an HTTP header or whatever, and the browser I allow my child to use should respect that, and the parental controls should be easy for me to engage with.

Many parental controls are massive pains to get working. Apple does fairly well (although I don't get a parental pin number to unlock the phone, which is normally fine as my child will tell me, but in some circumstances it wouldn't be), but does require the parent to be on the apple ecosystem too.

EA and Microsoft however are terrible, especially as it's likely the child will be playing fortnite/minecraft and the parent won't have ever touched it. I think with minecraft we had to make something like 5 or 6 accounts across three different sites to allow online minecraft play from a nintendo switch.

reply
duped 19 hours ago
Parents shouldn't give their child access to a device that allows DMs.

That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.

reply
greygoo222 18 hours ago
Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.
reply
iso1631 10 hours ago
Control over who they can talk to (if needed), certainly monitoring of both who they talk to and in many situations what the contents are

At some point between the age of 0 and 18 the child has to be fully ready for an independent world. A cliff edge is a terrible idea, allowing 3 year olds unmonitored uncontrolled conversations with strangers is a terrible idea, but not allowing 15 year olds to talk to their friends is a terrible idea.

reply
Dban1 17 hours ago
I thought it was common knowledge to just set your birthdate to 1970 or something
reply
input_sh 15 hours ago
You can make it a nice round 2000 these days.
reply
Nursie 17 hours ago
> Age verification should be banned

Why?

> They already got so much data on their users

There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
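A toy sketch of the idea (Python, stdlib only; real verifiable-credential schemes use asymmetric signatures and sometimes zero-knowledge proofs, so HMAC with an issuer secret is just a stand-in here): a trusted issuer attests to a single derived claim, and the site never sees a birthdate or name.

```python
import hmac, hashlib, json

ISSUER_KEY = b"issuer-private-signing-key"  # hypothetical; held by the ID issuer

def issue_age_credential(user_is_over_18: bool) -> dict:
    claim = json.dumps({"over_18": user_is_over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}  # contains no name, DOB, or ID number

def verify(credential: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, credential["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["sig"])
            and json.loads(credential["claim"])["over_18"] is True)

cred = issue_age_credential(True)
assert verify(cred)                      # site learns only "is old enough"
assert "birthdate" not in cred["claim"]  # nothing else is disclosed
```

The point is the shape of the data flow: the only bit the service receives is "is old enough", attested by a party it trusts.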

reply
shakna 17 hours ago
Age verification obliterates anonymity on the internet. If everything you do _can_ be tracked by the government, it _will_ be.

That allows for more effective propaganda and electoral control, and sets fire to the concept of a government _representing_ anyone.

reply
Nursie 16 hours ago
> Age verification obliterates anonymity on the internet.

How so?

Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.

reply
shakna 14 hours ago
Last time my government tried that, they failed. [0]

You need to 100% trust those verification services. And considering their success rate [1], you shouldn't.

[0] https://thinkingcybersecurity.com/DigitalID/

[1] https://discord.com/press-releases/update-on-security-incide...

reply
Nursie 14 hours ago
> You need to 100% trust those verification services.

First link - mitigation: use a well supported standard like OIDC, not a home-cooked scheme. Duh.

Second link - this is part of the problem such schemes as verifiable credentials are designed to address, random third parties collecting ID they don't need.

Yes, any system needs to be executed well. Neither of these really display that.

reply
shakna 14 hours ago
If _the government_ can't be trusted not to use a dumbass scheme, then no, it isn't a duh moment. You don't exactly get to dictate how the government implements it!

The point is that systems today, aren't really well executed. So it is unreasonable to expect them to be well executed.

If you can't trust people not to build the bomb well - then don't let them build a bomb.

reply
Nursie 13 hours ago
> You don't exactly get to dictate how the government implements it!

Who was talking about the government implementing it? I wasn't.

And also "This has been done poorly in the past so we should never attempt to do it again, better" seems an odd way to go about things. There are well put together schemes by international standards bodies in this area now. Neither of the above links followed them.

reply
shakna 13 hours ago
If neither follow them, why do you have such faith that anybody would...?
reply
Nursie 12 hours ago
I mean, your example of the ATO there isn't even an age verification thing, it's a defective clone of OIDC, so by that logic we should ban all SSO or identity delegation solutions?

Because we don't believe anyone will ever use the standards in this area, despite loads of companies and government bodies actually using OIDC already?

I'm not really sure what you're driving at.

reply
shakna 12 hours ago
> I mean, your example of the ATO there isn't even an age verification thing, it's a defective clone of OIDC, so by that logic we should ban all SSO or identity delegation solutions?

MyGovID _is_ an age verifier. Sorry. The successor after the rebrand is called myID [0], and is advertised as:

> myID is a secure way to prove who you are online.

---

> I'm not really sure what you're driving at.

Clearly. You seem to think that because it might one day be done correctly, by one group, the rest of the world is safe. However, over in this reality, we have fuck ups by governments and private corporations, who are the people the rest of the world actually deals with.

You cannot enforce these real groups, to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws. Because it will always be done the wrong way, by someone.

[0] https://www.myid.gov.au/

reply
Nursie 11 hours ago
> The successor after the rebrand, is called myID [0], and advertised as:

It's an identity scheme and SSO solution for accessing government services. As said at [0] in the "What is myID" section.

I sincerely hope that they're using something standard and well tested like OIDC behind the scenes this time, because otherwise it's ripe for another fuckup like the one you linked. If it is also used for age verification that appears to be secondary.

> You cannot enforce these real groups, to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws. Because it will always be done the wrong way, by someone.

So we need to stop the Australian government from ever using an SSO/identity solution again because it can't be trusted to do it properly, having messed up in the past, and the rest of us have had to live with the consequences. And as they aren't the only ones to have messed up, companies do it all the time too, we should also ban all identity and SSO solutions (because that's what we're talking about in this thread, banning of age verification, not mandating it).

I don't think you get to call out age validation as a uniquely hard problem that cannot possibly be made safe, but allow other identity-style services a pass. There are many areas in which we (through the government) can and do mandate good practice, both by government and private entities.

[0] https://my.gov.au/en/about/help/digital-id

reply
shakna 3 hours ago
You should probably stop pretending you know what myID is, and what it does.

It's a sovereign identity verification service that is not limited to above-PL2 verifications. There are age-only accredited entities in the registry.

It's one of the approved verification tools for the Online Safety Act 2021. It was renamed as part of the passage of the law. You're just not forced to use it for verification.

And yes, it does it poorly, and does not follow a standard. It's using Vanguard's PAS behind the scenes [1], with extra ServiceNow tacked on. Until they rearchitect the entire damn thing.

So... As I might have doxxed myself a little just now... No, uploading identity documents is never a safe process. It's a king's hoard of treasure before nations that never sleep.

Name a provider, and there will be a breach, and it will continue to affect the victims most of their lives.

[1] https://www.sec.gov/enforcement-litigation/administrative-pr...

reply
afiori 15 hours ago
because most implementations are not going to be like that.
reply
Nursie 15 hours ago
In the context of "Age verification should be banned" though, we're already talking about legislative intervention. If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.

Perhaps what we're really saying is "Ban age verification that collects lots of personal information".

Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)

Which I think is a good thing, and should be strengthened further.

(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)

reply
AnthonyMouse 14 hours ago
> If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.

There is a problem with schemes like that.

The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.

The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.

Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.

reply
Nursie 14 hours ago
> There is a problem with schemes like that.

/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.

Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".

Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.

There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.

reply
Almondsetat 16 hours ago
Ok, and? Presenting your ID at a number of IRL establishments also heavily reduces anonymity
reply
gschizas 16 hours ago
The difference is that IRL establishments don't sell off that data to anyone else, nor do they have the ability to collate that data with data from other establishments to make a profile of you.

(at least not yet)

reply
fragmede 3 hours ago
You think the nightclub that scans your driver's license magstripe isn't selling your data off, when they could be making money from it? Between PatronScan, Intellicheck, Scantek, and TokenWorks: yeah, a dingy bar where it's a dude visually checking isn't it, but a nightclub with a quick swipe totally is.
reply
shakna 14 hours ago
But to get that ID from the bottleo, you need to hold them at gunpoint.

To get it from Discord you need to sneeze.

The internet has scale and availability, that physical locations do not.

reply
pjc50 14 hours ago
The problem with this discussion is that this is a wonk solution for wonkish times. You're trying to thread the needle between various reasonable compromises. Ironically due to social media, that is simply not how politics and lawmaking works any more. Instead it's an emotionally driven fight between various different sorts of moral panic, and the only option is to get people more mad about surveillance than "think of the children".

You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.

(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)

reply
echelon 17 hours ago
It's a slippery slope.

This is the next two steps into 1984.

Once you start mandating this, there's no going back.

The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)

reply
sham1 17 hours ago
The Party doesn't care about the Proles, only the members of the Outer Party.

I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).

I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.

reply
fragmede 2 hours ago
Unfortunately, having totally missed the point, they still get the same number of votes as you do.
reply
Nursie 16 hours ago
> It's a slippery slope.

Is it? I thought that was a logical fallacy?

> This is the next two steps into 1984.

How so?

> Once you start mandating this, there's no going back. > The next generation will start associating wrongthink with government IDs.

Could you provide some more details on why you think this? For a start I talked about a scheme in which you don't hand over ID.

reply
consp 16 hours ago
A slippery slope can be a valid argument if you provide the actual reasoning for it; as I was taught, it can be used as deductive argumentation (though that does not say much). On its own it is a fallacy.

I don't see how verifiable credentials with zero-knowledge proofs provide that, however.

reply
drawfloat 16 hours ago
Read another book.
reply
computerex 18 hours ago
TikTok is a front for government surveillance, so it's not really surprising that this is their position.
reply
emulatedmedia 11 hours ago
All social media should be considered a front for government surveillance
reply
dlev_pika 4 hours ago
Particularly true as oligarchs co-opt gov
reply
trashb 12 hours ago
The government are able to access your conversations, data and connections with e2ee in place already. I don't see how not having e2ee would have an effect on that ability in any way.
reply
Cider9986 6 hours ago
Please provide proof for these claims.
reply
dlev_pika 4 hours ago
L337 Hax0rs, of course
reply
theideaofcoffee 4 hours ago
You actually choose to believe that these trillion-dollar tech monsters, run by some of the most despicable people on the planet, are being forthright when they claim they have no ability to do this on behalf of a government request? For something that isn't open source, can't be audited, and can be changed at the next upgrade without any oversight? I find it much more likely that they can, and that informs my normie use, mostly.
reply
rustyhancock 14 hours ago
Now it's a franchise too!

Once it gets big enough in your location you buy it for that sweet sweet intel.

reply
MetaWhirledPeas 6 hours ago
"makes users less safe"

They don't believe that. It makes it more difficult to deal with governments, is all. Big Brother needs your messages from time to time, and TikTok is not willing to risk getting shut down to argue against that. We can't have pesky principles getting in the way of money.

reply
gorgoiler 14 hours ago
I don’t really understand how we are supposed to believe in e2ee in closed proprietary apps. Even if some trusted auditor confirms they have plumbed in libsignal correctly, we have no way of knowing that their rendering code is free of content scanning hooks.

We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.

(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)

reply
upofadown 11 hours ago
The unfortunate fact about E2EE messaging is that it is hard to do. Even if you do have reproducible builds, the user is likely to make some critical mistake. What proportion of, say, Signal users actually compare any "safety numbers" for example? There is no reason to worry about software integrity if the system is already insecure due to poor usability.

Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?
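For anyone unfamiliar, here is roughly what comparing "safety numbers" actually checks (a toy Python sketch; Signal's real derivation differs in detail, and the key bytes here are made up): both clients hash the two public identity keys and show the digits, and if the digits match out-of-band, no server substituted a key.

```python
import hashlib

def safety_number(pub_a: bytes, pub_b: bytes) -> str:
    # Sort so both sides compute the same value regardless of direction.
    digest = hashlib.sha256(b"".join(sorted([pub_a, pub_b]))).digest()
    # Render as groups of digits for humans to read aloud.
    num = int.from_bytes(digest[:20], "big") % 10**30
    s = f"{num:030d}"
    return " ".join(s[i:i + 5] for i in range(0, 30, 5))

alice_key, bob_key = b"alice-identity-pub", b"bob-identity-pub"
assert safety_number(alice_key, bob_key) == safety_number(bob_key, alice_key)
# A MITM key substitution changes the number on exactly one side:
assert safety_number(alice_key, b"mitm-pub") != safety_number(alice_key, bob_key)
```

The security of the whole scheme rests on users actually performing this comparison, which is exactly the usability gap.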

reply
dijit 13 hours ago
I've been making this argument for a long time, and it's never popular.

People want to believe in E2EE; it's almost like religion at this point.

Protecting people is treated as synonymous with E2EE, even if you can't verify it and it can potentially be broken.

I was even more controversial and singled out Signal as an example: https://blog.dijit.sh/i-don-t-trust-signal/

reply
theideaofcoffee 4 hours ago
Same, my default MO is assuming 'e2ee' is broken and unsafe by default. Anything that I truly don't want sent over the wire would be said in person, in the dark, in a root cellar, underwater. Not that I've ever been in the position to relay juicy info like that. Hyperbole, I know, but my trust begins at zero.
reply
trashb 12 hours ago
With e2ee, please remember that it is important to define who the ends are.

Perhaps your e2ee is only securing your data in transit, if their servers are considered the other end.

Also, one thing people seem to misunderstand is that for most applications the conversation itself is not very interesting; the metadata (who to whom, when, how many messages, etc.) is 100x more valuable.
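A toy illustration (made-up data) of how much the metadata alone reveals, even when every message body is encrypted:

```python
from collections import Counter

# (sender, recipient, hour) tuples: all a server still sees under E2EE.
metadata = [
    ("alice", "bob", 23), ("alice", "bob", 23), ("bob", "alice", 23),
    ("alice", "carol", 9), ("carol", "dave", 14),
]

# Message volume per unordered pair reconstructs the social graph.
pair_volume = Counter(frozenset((s, r)) for s, r, _ in metadata)
closest = pair_volume.most_common(1)[0]
assert closest == (frozenset({"alice", "bob"}), 3)  # strongest tie, no content read
```

Scale that to years of traffic and you get relationships, schedules, and social circles without decrypting a single message.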

reply
AlienRobot 3 hours ago
We don't even know if the passwords aren't stored in plain text.
reply
ronsor 20 hours ago
Why would you use TikTok for private communications anyway? It's mostly a public short video sharing platform.
reply
halapro 19 hours ago
It's the kids' social network, you're just old.
reply
wiseowise 16 hours ago
> you just have intact brain

Fixed a bit.

reply
LambdaComplex 15 hours ago
As much as I want to agree with you, the people who like TikTok make up a significant portion of the population, and their opinions do matter, arguably more than yours due to sheer numbers.

Smugly dismissing them doesn't do you any favors, except making you feel good about yourself for a few seconds.

reply
wiseowise 12 hours ago
You’d be surprised how many people don’t give a shit about TikTok. It’s just another blip in history like Facebook, Instagram, Vine, MySpace and others before them.
reply
tedd4u 7 hours ago
Regarding "why care." It's where a shockingly large portion of voters and adults get their "news."

• 43% of US 18-29 year olds regularly get news on TikTok

• Half of US adults get news on TikTok; 1 in 5 US adults "regularly" do so

• This is 2 points less than Twitter and two points more than Facebook

Data from Pew Research (Sep 2025): https://www.pewresearch.org/short-reads/2025/09/25/1-in-5-am...

reply
gzread 11 hours ago
All of those were extremely influential and half of them had enough power to select a president.
reply
asveikau 19 hours ago
The way it starts is you pass videos back and forth with a friend. Then you find yourself chatting in the same app.

I'm mindful that it's less secure than other apps, but for a lot of chats it doesn't matter.

reply
g947o 19 hours ago
Says someone who has never sent a message to a friend over DM on TikTok.
reply
Barbing 18 hours ago
Hopefully
reply
knallfrosch 18 hours ago
Exactly.
reply
navigate8310 15 hours ago
Thankfully
reply
adventured 20 hours ago
You say that like the typical 18-year-old has any idea what they're doing when it comes to proper encryption and communication safety. That is never going to be the case.

It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.

reply
Barbing 18 hours ago
>never going to be the case.

And in a perfect world essentially shouldn’t have to be, at least inside expensive walled garden app stores.

reply
zadikian 18 hours ago
They might understand e2ee but not care.
reply
m00dy 18 hours ago
it's more than that.
reply
swiftcoder 13 hours ago
> the controversial privacy feature used by nearly all its rivals

"controversial" according to who? The NSA / GCHQ?

reply
a13o 13 hours ago
Listed in the article are the National Society for the Prevention of Cruelty to Children and the Internet Watch Foundation, which monitors and removes child sexual abuse material from the internet.

The recent Meta lawsuits also mention opposition from the National Center for Missing and Exploited Children and Meta's own executives: Monika Bickert (head of content policy) and Antigone Davis (global head of safety). Both executives mention the danger end-to-end encryption poses to children when attached to a social media graph.

https://www.reuters.com/legal/government/meta-executive-warn...

reply
swiftcoder 7 hours ago
> Both executives mention the danger end-to-end encryption poses to children when attached to a social media graph

So the fact that we welded a messaging platform onto a global-child-discovery-service is bad? Sure. Not encrypting that messaging platform is sort of closing the barn door after the horse has gone walkabout

reply
oscaracso 6 hours ago
It is a considerably larger threat for anonymous strangers to be able to establish private lines of communication with children than for them to know that Lisa Simpson (8) lives in Springfield and attends Springfield Elementary. In terms of discovery, most people are already aware that children can be found in school.
reply
swiftcoder 5 hours ago
I don't see how you arrive at that conclusion? The risk in being able to connect to a random victim somewhere in the world appears to be strictly less than being able to target a specific victim in your local geographical area to whom you could gain physical access

Hence why nobody up in arms (in either direction) about e2e encryption for Chatroulette

reply
Ajedi32 8 hours ago
Good to see this called out. The HN echo chamber has this really terrible habit of attributing any disagreement with the prevailing opinion here to big, shadowy forces with evil motives (billionaires, corporations, three letter agencies, politicians, etc) instead of facing the reality that sometimes well meaning people just have different values and priorities than us. Very rarely does that narrative get challenged directly.
reply
beaker52 4 hours ago
Someone in the UK government is furiously writing this down.
reply
hexage1814 15 hours ago
It doesn't matter. Web-based cryptography is always snake oil

https://web.archive.org/web/https://www.devever.net/~hl/webc...

reply
szmarczak 15 hours ago
> if the server operator was malicious, they could just push different client-side JavaScript

Same as with OS updates, browser updates, dependencies used by the OS, dependencies used by the browser. Also you can run malicious software such as keyloggers and you're compromised.

That argument doesn't mean E2E (even web based) is snake oil. Browsers just give you more points of failure.

reply
mr_mitm 12 hours ago
The difference is: in web based cryptography, you get the cipher text and the code to decrypt it from the same source. Hijacking OS updates is arguably much harder than hijacking one particular web server, and there is pretty much no effective defense against malicious OS updates.
reply
szmarczak 12 hours ago
I know the difference. It doesn't make E2E useless.
reply
afiori 15 hours ago
Agree, but a significant point missed in the article is that of data vulnerability. With E2EE the company db is useless to an external attacker.

For some companies (eg facebook, google, tiktok) i would be mostly worried about the company itself being untrustworthy. For others I would be mostly worried about the company being vulnerable.

reply
trashb 12 hours ago
> with E2EE the company db is useless to an external attacker.

Depends on who is defined as the other end, it may be that the company db is the other end.

reply
tuxracer 15 hours ago
It's a native app what are you talking about
reply
ftigis 14 hours ago
> It is worth noting that this law also applies to non-web applications where the service provider supposedly being secured against is also the client software distributor; thus, the “end-to-end encryption” offered by Whatsapp and Signal, amongst other proprietary services, is equally bogus. (Both Whatsapp and Signal ban use of third party clients, and enforce this policy.)
reply
bougainvilley 10 hours ago
The distinction the article draws for web apps is that you receive a fresh bundle of code every time you open or use the app, as opposed to, say, the operating system or desktop apps, which are updated less frequently. (Native) mobile apps are like web apps in that they release updates almost every day.
reply
cdrnsf 5 hours ago
TikTok and other social media apps' business models are antithetical to privacy.
reply
dlev_pika 4 hours ago
Their whole model is predicated on the lack of privacy, so it's crazy to expect anything else.
reply
zzo38computer 4 hours ago
In my opinion, the end-to-end encryption should be done by separate software from the communication program itself, although there is more to security than programming the computer correctly (such as securely agreeing on the keys and ciphers in person).
reply
matricaria 17 hours ago
Since when is E2EE controversial? Not using E2EE should be controversial.
reply
kristianc 15 hours ago
It's never been controversial; it's the BBC doing its usual job of laundering the arguments the establishment wants you to hear for domestic consumption.
reply
mysterium 14 hours ago
The thing is, it _is_ controversial. At least amongst the general public.

Obviously not in somewhere like Hacker News where there’s a clear consensus, but if you asked a random sample of the UK population “should law enforcement be allowed to compel tech companies to hand over all DMs of confirmed paedophiles?”, I’d bet very good money the majority would say “yes”.

The notion that “Big Tech” can absolve themselves of the responsibility to help law enforcement find child abusers by saying “it’s all encrypted, not my problem”, does not sit well with a large sector of the population.

Whether it’s good or bad is an ultimately political question, and both sides of the debate tend to talk past each other on this topic, but it’s undeniably a controversial point within the broader population.

reply
kristianc 12 hours ago
Sure, but it comes down to framing.

If you asked 'Would you support weakening encryption in messaging apps if it helped catch some criminals, even though it could make it easier for hackers to read your messages and steal your passwords, bank details, or personal photos?' I'd bet a large proportion of the general population would say no.

But that side never gets explored, or there's an assumption that there's some way of only letting the good guys access the information.

reply
gzread 11 hours ago
We are technologists here. There is no technology that can determine if somebody is a pedophile. We can't make a system that exposes the data of pedophiles but is secure for everyone else. We think it has to be all or nothing.

But other people are not technologists. Lawyers think the law is robust enough to determine if someone is a pedophile and only issue warrants for pedophile's data and simultaneously punish anyone who leaks the data of non-pedophiles. Most of the public also believe the police and the law can do that.

When the law is set up to do that, it always gets abused eventually, after a period of not getting abused. The public gets outraged, the responsible person gets a slap on the wrist, and the abuse is normalized. In other words, the lawyers are wrong and it doesn't work - by our standards. That doesn't stop them from thinking it does. Our definition of "you can't do that" is "it's impossible to do that." Their definition of "you can't do that" is "you can do that, but if the police find out, you will go to jail."

reply
kristianc 10 hours ago
In an ideal world it would work something like that. In reality in the UK the pattern more often is:

1. New power introduced after crisis or scandal, justified as exceptional and targeted

2. Enforcement is patchy or politically difficult. Police either lack resources or big tech platforms don't want to or can't play ball

3. A failure or abuse case becomes public and is reported in the chattering-class and tabloid press

4. Response is not "use existing powers better" but expand powers, broaden scope, lower initial 'targeted' thresholds

5. Cycle repeats

Issue is compounded because you have politicians who will either not understand things or pretend not to understand things.

reply
dlev_pika 5 hours ago
lol

It makes sense - they extract every possible bit of personal information from your device - why would they make you believe they care about your privacy?

You want to communicate privately? TikTok is not the place, and that’s ok. shrugs

reply
maxdo 4 hours ago
People seriously discuss privacy in a Chinese app. With all respect, their government will not allow you even a hint of privacy
reply
pothamk 18 hours ago
The core tension here isn’t really about encryption itself, it’s about moderation models.

Most large platforms rely heavily on server-side visibility for abuse detection, spam filtering, recommendation systems, and safety tooling. End-to-end encryption removes that visibility by design. Once a platform is built around centralized analysis of user content, adding strong E2EE later isn’t just a feature toggle — it conflicts with large parts of the existing architecture.

reply
maest 20 hours ago
Do you feel safer knowing DMs are not encrypted?
reply
sethops1 20 hours ago
Nobody should feel safe using the TikTok client, period.
reply
mullingitover 19 hours ago
Not just the TikTok client, anything made by Oracle is risky.
reply
tartoran 20 hours ago
Neither Instagram/Facebook's Messenger/WhatsApp.
reply
tamimio 19 hours ago
And signal
reply
derwiki 19 hours ago
What do you use for messaging?
reply
modernpacifist 19 hours ago
Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers, where keys have been sent ahead of time using USPS, because no one would be so rude as to read someone else's mail.
reply
fsflover 13 hours ago
Matrix.
reply
tamimio 19 hours ago
I have been using simpleX for some time now.
reply
gzread 11 hours ago
Are you aware of the creator's political beliefs and the E2EE leak baked into the app?
reply
stephbook 18 hours ago
Do you take "yes" for an answer?

It really depends on whether you think your government is more dangerous than, say, suicide trends, grooming, scamming.

I know the answer is pretty easy for US citizens to answer right now.

reply
gradientsrneat 6 hours ago
A middle ground would be to implement E2EE but have messages signed (and ideally organized in a Merkle tree), so that if a DM is reported there's cryptographic proof that the accounts sent the messages.
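A minimal sketch of the Merkle-tree half of that idea, assuming SHA-256 hashes over (already-signed) message bytes; the signature scheme itself is elided, and all names here are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root hash."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd counts
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes needed to re-derive the root from one leaf."""
    proof, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i ^ 1
        proof.append((level[sibling], sibling < i))  # (hash, sibling-is-left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    acc = leaf
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

# A reported DM plus its proof lets a moderator check membership
# against a published root, without the server storing plaintext.
messages = [b"hi", b"meet at 5", b"ok", b"bye"]
leaves = [h(m) for m in messages]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 1)
assert verify(leaves[1], proof, root)
```

This is roughly the construction Certificate Transparency logs use; the open design question is who publishes the roots and how the signing keys are bound to accounts.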
reply
krickelkrackel 15 hours ago
Just like door locks are making the world less free!
reply
lucasfin000 6 hours ago
I don't think the argument is really about child safety. If it were, TikTok would also be working on fixing its algorithm, which can steer minors toward harmful content and is a far larger documented vector than encrypted DMs. This is about preserving access.
reply
matesz 18 hours ago
Fun fact - there is a big correlation between World Wars and compulsory education. Of course governments and big corporations "care" about children. Of course!
reply
sheept 20 hours ago
I feel like this makes sense for a platform that targets teens. Plus, I wouldn't trust TikTok to implement E2E encryption properly—who knows what they've snuck into their client.
reply
ranyume 19 hours ago
What kind of application is not targeted at both teens and adults?

Youtube, twitter, bluesky, whatsapp? Every app with a social aspect will be used by teens. And no, tiktok is not "only for teens" or "specially targeted at teens", nowadays everyone uses it and creates content on it.

reply
RajT88 19 hours ago
Came here to post this.

If you run (say) a restaurant, you get big spikes in business from TikTok videos in ways you don't get from Facebook or Instagram or others.

TikTok is the platform everyone is on right now.

reply
somenameforme 19 hours ago
I think it's very safe to assume that no major US based platform has 'real' E2E encryption. They're almost certainly all a part of PRISM by now, and it'd contradict their obligations to enable government surveillance. So the only thing that's different is not lying about it. Though I expect the other platforms are, like when denying they were part of PRISM, telling half truths and just being intentionally misleading. 'We provide complete E2E encryption [using deterministically generated keys which can be recreated on demand].'
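To illustrate the parent's speculation only (this is not a claim about any real platform): if a client's "random" key were derived deterministically from a seed the operator escrows, the operator could regenerate it at will. A sketch using a simplified one-block HKDF in the style of RFC 5869; the seed and labels are hypothetical:

```python
import hashlib
import hmac

def hkdf_sha256(seed: bytes, info: bytes, length: int = 32) -> bytes:
    """Simplified HKDF (RFC 5869): extract then a single expand block."""
    prk = hmac.new(b"\x00" * 32, seed, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

# Hypothetical escrowed master secret known to the operator.
server_seed = b"operator-escrowed master secret"

# The device and the operator derive the *same* "E2EE" key on demand,
# so the encryption is end-to-end in name only.
user_key_on_device = hkdf_sha256(server_seed, b"user:alice|device:1")
user_key_regenerated = hkdf_sha256(server_seed, b"user:alice|device:1")
assert user_key_on_device == user_key_regenerated
```

The point of the sketch is that such a scheme is indistinguishable from real E2EE at the wire level, which is why key generation has to be auditable client-side to mean anything.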
reply
paulryanrogers 19 hours ago
Signal is open source
reply
Barbing 18 hours ago
Snowden endorsed last I heard? He doesn’t email of course.
reply
Schlagbohrer 13 hours ago
This BBC article is insanely written.
reply
blackqueeriroh 18 hours ago
There is no way to do E2EE on a traditional social media platform with user-generated content and comply with existing US law.

You can’t moderate an E2EE platform.

reply
lurk2 17 hours ago
All of Meta’s major properties (Messenger, Instagram, WhatsApp) support E2EE messaging.
reply
ntoskrnl_exe 17 hours ago
Pretty sure that for Meta the impossibility of moderating E2EE was the point. It's cheaper to shrug than to pay content moderators.
reply
rockskon 18 hours ago
Aside from the fact that you can get Metadata and that some communication frequently happens outside of E2EE - what US law do you believe mandates moderation? I'm curious.
reply
tbrockman 18 hours ago
What law do you believe supports your perspective?
reply
_el1s7 14 hours ago
That's good, people who need E2EE shouldn't use TikTok either way, there are plenty of other secure apps for that.

TikTok is a social media app, and it gets heavily abused as it is.

reply
zthrowaway 11 hours ago
Making users less safe from… letting us snoop on all your communications for “national security”.
reply
0xbrayo 16 hours ago
unrelated but I'm always surprised by the number of people who don't know that instagram dms are not encrypted by default.
reply
2OEH8eoCRo0 3 hours ago
What unsafe things are users most likely to encounter?
reply
1970-01-01 10 hours ago
I see it like this: taking in the totality of the danger, they're right. If the source (social network) and the destination (child brain) cannot be treated as trustworthy, then you must control the content for overall safety. If you could trust either end, then you could dismiss the argument. But you cannot trust children to be cognizant of abuse, and you already know social media literally reinvented abusive behaviors for the 21st century.

Do nothing and children will be harmed. Overreach by any amount and you have destroyed freedom. The only middle ground is weakly encrypted E2E comms: something that imposes a very high cost on the sniffer (an electric bill or SaaS fee) but can be broken with enough horsepower. Think about what a price of millions of dollars per character would do. Good luck codifying that insane compromise into a law.
reply
gnarlouse 15 hours ago
Maybe just don't use TikTok. Shocking that adults use a platform for children.
reply
lwansbrough 14 hours ago
The Chinese spyware app won’t do E2EE? I can’t believe what I’m reading.
reply
nicce 14 hours ago
Not so Chinese anymore. At least in the US.
reply
dev_l1x_be 15 hours ago
I take privacy suggestions from social media companies on a daily basis.
reply
SuperSandro2000 3 hours ago
hahaha, good one
reply
9864247888754 15 hours ago
And their target audience won't question it.
reply
insane_dreamer 6 hours ago
I'll never let my kids have a TikTok account anyway (once they're adults they can have one of course if they want to).
reply
edarchis 14 hours ago
> But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages.

Like they give a damn. I report accounts that explicitly sell fake credit cards, citing laws that make it illegal and 95% of the time "we checked and there is no violation here, we know that you're not happy but don't give a crap".

So the argument of security is utter bullshit and they just want to snoop.

reply
bas 18 hours ago
Fascinating. What a time to be alive.
reply
hd4 15 hours ago
I hate the BBC so much - "controversial privacy tech" "E2EE ... the best way to protect conversations from .. even repressive authorities" "End-to-end encryption has been criticised by governments, police forces"

They're saying this at the same time as they're clutching pearls over Iran's repression of protestors. Typical of the ethical consistency I would expect from them.

reply
tw04 19 hours ago
Reminder, Larry “citizens shouldn’t get any privacy” Ellison now owns tik tok. If you’re still using it or have friends and family using it you should stop immediately. It WILL eventually be used against you if this regime gets its way.

https://digitaldemocracynow.org/2025/03/22/the-troubling-imp...

reply
dylan604 17 hours ago
As if. If people haven't stopped using TikTok with all of the other reasons for stopping, then because Ellison is damn sure not going to move the needle.
reply
aprilthird2021 16 hours ago
What were the other reasons for stopping?
reply
chinathrow 15 hours ago
Curbing addiction?
reply
iso1631 12 hours ago
The actual headline is currently

> TikTok won't protect DMs with controversial privacy tech, saying it would put users at risk

Not sure if this was changed since first posting. I don't mind updates, but unless it's redaction for legal purposes (which should then itself be clearly mentioned), the BBC should provide a public changelog like Wikipedia.

reply
crest 13 hours ago
A Chinese company saying you don't need encryption. Why should anyone waste time debunking their bad faith "arguments"?
reply
camillomiller 15 hours ago
Doublespeak. War is peace.
reply
Tyrubias 19 hours ago
TikTok’s stance against end-to-end encryption is unsurprising but still concerning. TikTok is a source of information on many topics, such as the genocide in Gaza, which traditional media underreport and many governments try to suppress. The network effect of big social media platforms means many people will likely talk about these topics in TikTok DMs. No matter what legal controls TikTok claims to enforce, there is no substitute for technological barriers for preventing invasions of privacy and government overreach. This is yet another example where corporations and governments sacrifice people’s autonomy and privacy in the name of security.
reply
spaqin 19 hours ago
It's a pretty terrifying world we live in now, where an unencrypted addictive short-form video platform is considered a source of information more than news agencies or even community-managed forums.
reply
consp 18 hours ago
For older generations Facebook has the same problem. "On Facebook it said [propaganda item bla bla]" is something I hear with those generations.
reply
9864247888754 15 hours ago
Of course you are the target audience for disinformation spread via this propaganda platform.
reply
bbshfishe 18 hours ago
[dead]
reply
burnt-resistor 18 hours ago
It's the Max app for Americans, now with 900% more US and IL government spying.
reply
blueTiger33 13 hours ago
so we need no encryption?...at the end of the day we have nothing to hide right CIA,FBI? :D
reply
rdiddly 18 hours ago
"The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk."

And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied-upon by China. I'm never going to China.

reply
BLKNSLVR 18 hours ago
> I'm never going to China.

China will come to us.

Or should that be:

China will come to the US.

reply
andrewinardeer 18 hours ago
> "I'm never going to China."

Voluntarily.

reply
fragmede 17 hours ago
Yes. China gives a shit that user rdiddly, at 36 minutes before 00:55 UTC on March 4, 2026, said that China is spying, to the point that they are going to be abducted for it.
reply
quotemstr 5 hours ago
It's one thing to make a policy decision I disagree with. It's another to lie, blatantly, to my face about it. But what do you expect from people who bought TikTok specifically so they could add censorship and lied about it being some kind of national security issue?
reply
knodi 11 hours ago
Another step in towards the endgame, mass surveillance state.
reply
Madmallard 18 hours ago
clown emoji
reply
villgax 14 hours ago
This, according to many researchers, is the best case-study example of corporations gaslighting users into accepting surveillance by companies and governments alike.
reply
croes 18 hours ago
> Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritising 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite

Means they read every message

reply
kotaKat 12 hours ago
Larry needs his kids' menu.
reply
animitronix 10 hours ago
lol why are people still using this garbage?
reply
deafpolygon 16 hours ago
why are we still wringing our hands around this? we’ve already determined that tiktok is bad for our health.

because tiktok is addicting, and they know it…

reply
hoestomper 3 hours ago
[dead]
reply
hackersk 13 hours ago
[dead]
reply
Bud 19 hours ago
BBC calling encryption "controversial privacy tech" is deeply disappointing and dangerous.
reply
1shooner 19 hours ago
I wondered how it could be considered 'controversial', but they do quote at least a couple groups speaking against it. The NSPCC for instance, who incidentally also warned parents about a Harry Potter video game because their children might want to learn more about the game:

>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.

reply
ggm 19 hours ago
It is controversial.. amongst people who have concerns about private communications and society, from a regulatory and governance perspective.

It's uncontroversial amongst people who value their privacy.

The tension between the two camps (there are obviously nuances and this is a false dichotomy) is at a current peak. It's an ongoing controversy. It's a matter of public debate.

You might have liked it better if the angle had been "...which the government, controversially, wants to clamp down on" or something.

reply
stinkbeetle 18 hours ago
Calling something controversial is a favorite propaganda technique employed by "news" outlets. It's another form of selective reporting and framing. It carries negative connotations, and has really no objective standard by which it can be wrong since you'll always find somebody against any issue.

After you notice it, you'll notice it everywhere.

reply
trashb 12 hours ago
> It carries negative connotations

Interesting. I'm not a native English speaker, but in news articles I have always interpreted "controversial" as meaning "under discussion" (perhaps even around a 50/50 divide), hence why they are writing an article about it.

I feel it is the news outlet trying to justify why the topic is important to read about since most people reading it will interpret the issue at hand as having a "common" stance. Usually it is used in topics that are very binary, for or against.

reply
stinkbeetle 12 hours ago
It does have negative connotations. And it does get used by news corporations to influence opinion. I have rarely if ever seen them feel the need to explain why a topic they report is important or newsworthy, and just stating without evidence that something is controversial really doesn't either.

> Usually it is used in topics that are very binary, for or against.

It can be for those topics, but very rarely to describe the side of such topics with which they align.

reply
unethical_ban 19 hours ago
The UK government seems a lot more willing to embrace the panopticon in the name of protecting people from terrorists, child sex traffickers, human rights activists, Catholics, jaywalkers, you name it.
reply
bsza 12 hours ago
> We know just how risky end-to-end-encrypted platforms can be for children

As opposed to doomscrolling and brainrot, which are not risky to expose children to at all. /s

If TikTok cared about children in the slightest, they would not exist.

reply
dakolli 18 hours ago
[flagged]
reply
mobtrain 17 hours ago
Thanks for letting us know, 53 days old HN account!
reply
dakolli 13 hours ago
So what?
reply