Online astroturfing: A problem beyond disinformation (2022)
63 points by xyzal 4 hours ago | 30 comments
bpavuk 23 minutes ago
related: https://doublespeed.ai/ - basically astroturfing as a service.
their landing page stops short of saying that Doublespeed would be "a good fit for your political campaign." I'd rather fight an AI-powered drone than become a victim of a "Dead Internet-as-a-Service" startup. At least flying lawnmowers are honest.
ajkjk 3 hours ago
Strong agree. I feel like it somehow poisons the fabric of society when everything you interact with is fake, or even just has a good chance of being fake, regardless of the also-shitty fact that it is often trying to influence you.
apsurd 3 hours ago
Also, the fakeness doesn't even have to be malicious. Now every Tom, Dick, and Harry wants to create content. All the world's a stage; follower count go up.
pessimizer 2 hours ago
I held out hope that it would create an evolutionary pressure weeding out people who fall for foolish arguments, i.e. arguments without any sort of structure, which shouldn't be capable of convincing anyone of anything. But that's just wishful thinking. People fall for anything as long as it's flattering and it allows them to do what they want to do when they want to do it.
Every propagandistic argument is going to be like that for 80% of people, and 40% of people are going to be within that 80% about 99% of the time. They think the biggest issue of our time is how much people complain.
walterbell 2 hours ago
My browser highlights a few hundred accounts. For HN and other comment-oriented sites, local userscripts are supported by browser plugins, including on mobile Safari. These can highlight known usernames and implement blocklists. Most LLMs can generate such a userscript on demand for non-obfuscated sites, including a userid list for manual editing.
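A rough sketch of what such a userscript looks like (the usernames are placeholders, and the `hnuser`/`athing` class names are assumptions about HN's markup; verify against the live page before relying on them):

```javascript
// ==UserScript==
// @name   HN username highlighter (sketch)
// @match  https://news.ycombinator.com/*
// ==/UserScript==

// Hand-maintained lists; these usernames are placeholders.
const HIGHLIGHT = new Set(['knownaccount1', 'knownaccount2']);
const BLOCK = new Set(['knownspammer']);

// Pure classification step, kept separate from the DOM pass.
function classify(name) {
  if (BLOCK.has(name)) return 'block';
  if (HIGHLIGHT.has(name)) return 'highlight';
  return 'normal';
}

// DOM pass: HN marks user links with class "hnuser" and comment
// rows with "athing" (assumed here).
if (typeof document !== 'undefined') {
  for (const link of document.querySelectorAll('a.hnuser')) {
    const verdict = classify(link.textContent.trim());
    if (verdict === 'highlight') link.style.background = 'gold';
    if (verdict === 'block') {
      const row = link.closest('tr.athing');
      if (row) row.style.display = 'none';
    }
  }
}
```

Keeping the list in a plain `Set` makes the manual-edit workflow trivial: paste in userids as you spot them.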
Bridged7756 2 hours ago
This is notorious on platforms like Reddit, with people jumping in to suggest no-name products in response to questions. It doesn't help that Reddit allows private profiles, letting astroturfers get away with it.

Another case is LLM astroturfing: we're bombarded with doomerism and obituaries for programming. Some of these opinions are subtle, short comments, the most dangerous kind, because they jab at you little by little, though the most conspicuous ones are easy to identify.

And then there's political astroturfing. In my country smokescreens are the de facto tool, but the volume of high-quality edits and memes that came out about the Epstein files is suspicious, essentially cementing him as a "meme" rather than a monster who abused minors.
There's still a tiny window of opportunity for engineers to design technical safeguards, but eventually this problem will move past the realm of what's easily solvable, out of our hands, and into policymakers' hands. A big part of me feels like that window has already slammed shut.
It's hard to distinguish who's a bot, who's a narrative pusher and who's an enthusiast. Which is exactly what you'd want from an astroturfing campaign. There's a clear benefit: people in the industry are reading this, and in doing so they're granting mindshare.
There's one thing that could prevent inauthentic support campaigns: personal key signatures. But judging by how afraid people, especially in the US, need to be of their government surveilling them, this isn't going to catch on.
Isn't this exactly what you'd expect in a connected world? The best arguments from both sides proliferate, which is why "the same arguments and tropes are echoing through every thread".
I would expect a figurative war for human attention. With so much information being available, everyone would try to make people focus on what they want to communicate.
> The best arguments
Some of these tropes and arguments aren't really the best. There are a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.
> from both sides
The only real "side" is the one actively pushing for something. Everyone else isn't a camp - they're just random people.
This phenomenon appears to be incrementally coming for every single topic and public platform.
I literally ask it to look for something, and immediately afterwards (before reading the long-winded result) ask it whether the results were real or fabricated. That's just how the cost-benefit analysis works out, and I didn't learn to do this until many rounds of reading the results, getting suspicious of a few, doing web searches to verify them, not finding them, and coming back to ask if they were real.
"Sorry! It's absolutely fair that you called me out on that... It's important that you hold me to a high standard... You're absolutely right..."
I'm finding it valuable for compressing all of the docs in the world, so I don't have to look up what a function does or how to accomplish something in some framework or CLI. I find it capable of writing code if I move an inch at a time, build copious verbose debugging output that I feed back into it every time it screws up, and, when it goes into a stupid loop, just debug by hand rather than waste hours trying to get it to see something it doesn't want to see.
Need to double check what is available, though I feel like that angle could work.
I’ve been wondering also if a simple lie-and-deception-detection type system could be a useful angle. It’s complicated in practice, though human intuition arguably figured this out millennia ago: I can’t tell you how many times my body has picked up on someone’s toxic, negative vibe by feeling alone. I think we understand this better than we realize and could represent it in the computer space with analysis of signals and some follow-on questions. Hope I’m not too naive here.
[0] e.g. https://www.businessinsider.com/sam-altman-tools-for-humanit... and the feature piece at https://time.com/7288387/sam-altman-orb-tools-for-humanity/
[1] https://contentcredentials.org and https://c2pa.org
It's against the HN guidelines to insinuate that astroturfing happens on HN.
I was surveilled, experimented on, and followed by them for being American-Pakistani and speaking out against them from 2022-2023. It was a scary time, and I wish I were making this up. I sometimes wonder if they really are the good guys and I just got things backwards. I've also heard that when you're kidnapped and held in hostile territory long enough, you fall in love with your kidnappers.
Happy to share more details if anyone’s curious.
(It's interesting that conservatives saw it as a partisan cause.)