We are getting to this weird situation where instead of Alice sending a message to Bob, Alice sends the message to her AI, which sends it to Bob's AI, which then tries to recover Alice's original message.
To be fair, I don't think it is an AI problem; it's more a quirk of formal communication. The same happens with human secretaries. For example: I want my customer to pay me, and I want to be professional but not bother with the details, so I ask my secretary to write a well-written letter to my customer, with a proper bill and all that. My customer's secretary will then read the letter and tell his boss "hey, our supplier wants $xxx". I could have just called the boss directly and said "hey, it is $xxx", but that is rarely how it is done. Here, it is AI that is taking charge of the formalism, and I find it works really well for this, as it is essentially a translation task, which is what LLMs do best.
I am not discounting human secretaries here; they can do much more than write formal letters, but that's a part of their job that LLMs excel at.
Obviously you're not a golfer. Human secretaries don't have non-deterministic hallucinations and random critical omissions in their summaries, which I've witnessed firsthand with LLMs. More importantly, if they do, you have more deterministic mitigations with them than you do with LLMs, where there are no mitigations except praying that some new model in an unspecified future will magically be better at summaries.
The only way to stay sane when using these tools is to pretend that these things won't ever happen and just go about your business like the rest of the zombie workforce, because no one wants to stop the train and address the issue.
There is a reason why the subtitle of Dr. Strangelove is "How I Learned to Stop Worrying and Love the Bomb".
The funny thing is that I know my manager got this “working” within a week with Claude. I had to spend 2 weeks with 4 JIRA tasks, many commits for toy examples, and three reports.
I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.
And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.
Isn't it obvious? If I'd wanted to see an AI response to my question, I'd ask it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I were served a Big Mac at a sit-down restaurant I'd be properly upset.
I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media for "it's not X. It's Y"
Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're worthy of someone else's time and attention.
And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).
If the LLM output is concise and efficient I don’t actually care that it’s LLM output.
My problem is that much of the LLM prose feels like someone took their half-baked idea and asked the LLM to put a veneer of quality writing on top of it. Then you waste your time reading it to parse out the half-baked idea hiding among the wall of text.
If a person has a shitty idea that sounds good, they start writing about it. If they exercise some care in their writing, the act of writing itself is enough to make them realize that their idea is shitty.
By the way, it happens to me all the time! Even just on HN, I’ve bailed halfway through writing a comment because I realized that I didn’t know what I was talking about, lol.
But an LLM will gladly take that shitty idea and expand it into a very plausible article/message/post that seems reasonable if you don't think very critically about it. And it'll be done with such a seeming level of care that any human author would have been fact-checking themselves the whole time.
So it forces the reader to think even more critically, rather than letting our subconscious try to judge authenticity of the writer through the language they use.
For example, when someone says "my WiFi is broken" to mean that their computer is dead, we can quickly judge them as "not an expert at computers". But if they say "my M.2 drive has gone bad", we inherently assume they have some understanding. When the first person uses LLMs to write, they sound as informed as the second person, even if they are completely clueless and wrong.
What I'm asking and the response from an AI through an intermediary lose some context (the prompt); it's like the telephone game, where the data becomes more and more distorted. That's why people don't have an issue with their own AI-generated answers.
Another issue is that when I'm talking with someone and parsing through what they've said, I'm considering them as a person, taking all available context into account (some of this might happen unconsciously).
In any case I don't think there is an easy solution to the problem.
So it does not meet the bare minimum of addressing my ask; the premise of the ask hinges on a discussion with a real person.
But it doesn't? I'm more than capable of using Google and chatgpt myself. If I was looking for a machine generated answer to my question I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans that have subjective experiences that an LLM cannot.
Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.
So I have been Googling for "Reader X vs Reader Y review"(/comparison/etc) hoping to find Reddit comments or non-spam blog posts from people who actually own both to compare screen and battery life. I found a reddit thread comparing them directly and lo and behold the first comment is someone saying "I own both but honestly you could just ask ChatGPT for this". Fortunately a couple other people responded...
When I ask Gemini or ChatGPT, all I get is regurgitation of the tech specs (that are all mostly identical) plus summarized SEO spam reviews (that were probably written by another LLM based on those same tech specs) and it's totally unhelpful. So for this, I absolutely do NOT want an OpenClaw bot to respond as if they've physically used the devices and it would be actively enraging to learn a "helpful" comment "answering" the question was actually just an LLM impersonator.
The people copy-pasting slop almost never excerpt the relevant response. As a result, you get non-concise text you have to triple check. This is functionally useless to the point of being fine to skip.
It's also about the content. Generic slop I can get on demand from an LLM myself, vs a novel insight.
I don’t want a random person’s use of an AI to be slopped at me. I don’t know what they asked it, a lot of the words are made up, and I have to go through the effort of decoding it.
If I wanted an AI answer I would ask an AI. AI slop is made up. It’s like handing me a paste of google search results. It’s creating work for me.
They are achieving the exact opposite. I don't connect with the person who sends me slop. And they send me content that is a waste of my time and attention, because I have to vet it. Why would I trust someone - how can I ever connect with them - when the only thing I know about them is they take shortcuts?
In particular, I've been thinking a lot about educational content, and what I'd love to ask educational providers for is not AI-generated content, but rather carefully human-built curricula offered in a structured manner, which my own AI could then use to create dynamic content for me.
Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was.
Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points.
Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time.
The problem is that getting an AI to answer a question is trivial. If I wanted to know what an AI has to say about the topic, I would just ask it myself. Sending AI output has, as the author writes, the same connotation as sending an LMGTFY link. It does not provide me any value at all; I know how to write a question to an AI, just as I know how to use Google.
Which is irrelevant. TFA is talking about personal communication (and the examples are from a business setting).
And their concern is not the mere quality or lack thereof, but also its origin, and this is something new.
>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
No, many of us hate "our AI" content too, and wouldn't impose it on other people, the same way we wouldn't fling shit at them.
Did you even read the article? It is about person-to-person interactions. The three examples were:
* Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AI slop)
* Someone being asked for their expertise and responding (but it's generic and misfitting AI slop)
* Someone comes with a problem thesis looking for help (but it's generic and misfitting AI slop)
The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a google link.
The first one would be impossible because the person would have had to write the unhelpful response themselves, and they wouldn't have found the words at that length; you could ignore them or pick it apart easily. The last one would be impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message.
What kind of workplace hellscape do you work in where people posting low-effort bait on SLACK was the norm? The premise of this reply is entirely nonsensical.
The problem is the same as it has always been: figure out how to use your time and attention effectively.
Conversely, if your take is that there's no point being angry and we should just take it in stride, that just emboldens the producers of slop.
Because they are. It would be like if I bought some trinket off aliexpress and told you I made it by hand just for you. You wouldn't mind if you bought it yourself, but the fact that I lied about it to make it seem like I care is deceptive and immoral.
Sending someone AI-generated text without disclosing so is incredibly offensive. It says you don't care about wasting the receiver's time and don't care about honesty either.
I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.
(the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text)
(n)amow(?): (not) All my own work ?
But the slop-wall is even worse, as it wastes the questioner's time in figuring out that they're just getting slop. At least RTFM is efficient.
I ignore it. But if that isn’t an option, this sort of writing can help you convince someone in power around you it’s okay to ignore it.
Well, cat videos make people happy.
> The internet was not a bastion of high quality content or discourse pre-AI.
I have read thousands upon thousands of pages of AI-related discourse, watched hundreds of videos since 2022, maybe even a thousand now on it. NEVER at any point in time did people opine for the "high quality" internet of before. They opined for the imperfect HUMAN internet of before. We are now seeing once pristine, curated corners of the internet being infected with sloppypasta.
This is quite a broad brush to paint the internet with. It's like saying The Earth is not a bastion of warzones/peaceful places to live. That is HIGHLY dependent on location.
To "opine" is to give an opinion on something.
To "pine" for something is to wish for it, usually in a nostalgic sense.
I get how the two are related and can be confused, especially when you're talking about comments on the web. Just thought I'd clarify.
If that's what you're pining for, you're going to have to find a highly protected part of the internet that is walled off from untrusted actors. However, that's always been the solution, and AI doesn't change that.
For me it destroyed the company as an aligned group of people; at the C level, it's just a bazaar of drones throwing AI slop at each other.
That would give the responder the chance to modify the prompt and perhaps get a better answer from the LLM?
That results in a shorter and more concise message, and the original sender can choose to use the prompt you provided on their favourite LLM from the start.
The same AI, or different AIs, will give different answers to the same question, so it may be useful if you can provide a good summary of the different responses you got.
You could even pipe the final summary directly to your email/IM client and save yourself the copy-paste.
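Something like this, as a minimal sketch (assuming the OpenAI Python SDK and a Unix `mail` command; the model name and addresses are placeholders, and the point is that the prompt travels with the summary so the recipient can rerun or refine it):

```python
# A minimal sketch, assuming the OpenAI Python SDK and a Unix `mail` command;
# model name and addresses are placeholders. The key idea: send the prompt
# alongside the summary, so the recipient can rerun or refine it themselves.
import subprocess
from openai import OpenAI

prompt = "Summarize the tradeoffs between approach A and approach B in five bullets."

summary = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

body = f"Prompt I used:\n{prompt}\n\nSummary:\n{summary}"

# Pipe straight to the mail client instead of copy-pasting.
subprocess.run(
    ["mail", "-s", "A vs B summary", "colleague@example.com"],
    input=body,
    text=True,
    check=True,
)
```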
This is one role that I can't tell if it's completely useless in an AI powered world, or if that's basically what we all end up doing, reviewing and commenting on the work versus actually making it.
However, the essay and the guidelines were all human-written!
So I traced* through ninerealmlabs and ahgraber, and sure enough:
I used AI:
- to help build this website.
- to help generate examples of sloppypasta based on my original guidance
- to proofread and review the human-written copy to provide a critical review
- to improve my arguments and ensure clarity.
Kudos for being forthright.

---
* Turns out clicking "Open Source" bottom right gets there faster!
I'm possibly too jaded / cynical already...
They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...
https://github.com/ocaml/ocaml/pull/14369#issuecomment-35573...
How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?
I've never done that so far, because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.
Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"
You don’t. You keep these arguments handy for ignoring their output until it’s germane.
I've found success having sidebar conversations with the colleague (e.g., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior. It may also be useful to see if you can propose or contribute to a broader policy on appropriate AI use/contribution, and leverage that policy as justification for the conversation?
Address the pattern rather than the person? General team reviews or the like. As long as it's not tech leadership pressing for it...
The most interesting incident for me was having someone take our Discourse thread, paste it into an AI to validate their hurt feelings (it took a follow-up prompt to go full sycophancy), and then post back the response, which lambasted me. The mods handled that one before I was aware, but I then did the same thing, giving different prompts, and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.
The person being targeted just prompted the same AI with "Which user has thin skin" and instantly the AI turned on the other person. Then the moderators got involved and told the first guy to stop using AI as a genital pleaser.
Embrace the tension. Tension is human.
The other person already demonstrated a lack of professionalism by sharing unverified AI slop, so in case of conflict I wouldn't be surprised if they continued acting unprofessionally by spreading false rumors, unnecessarily escalating the situation to higher-ups, secretly sabotaging the project, etc.
People who previously couldn't put in the effort or quality are now vomiting tons of slop I'm meant to read and review.
PR descriptions. Documentation. Plans. Etc.
Walls of sprawling text, "relevant files", linked references, unhelpful factoids, subtle inconsistencies and incoherencies.
It's oppressive like 95% humidity on a warm day.
They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.
They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.
sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.
I'm starting to be reminded of Neal Stephenson's "Diamond Age". He described a future in which people walked around with a nearly invisible defensive army of nanobots surrounding them whose job it was to counter the offensive nanobot swarms of their enemies. Characters in this novel would go about their business while an unseen nanobot war took place in the air around them.
We're rapidly reaching the point where we will need AI to defend us from AI. i.e. We will soon need agents filtering all that we read and removing slop, just so we can preserve our time and attention for things that are human and real.
I am really surprised by the amount of backlash against this site for using LLM helpers in writing. There are many ways in which this can go wrong - and the article lists some of them - but it does not blindly rule out all LLM writing helpers.
What would be even more constructive would be an article listing the good ways of using LLMs.
What bullshit essentially misrepresents is neither the state of affairs to which it refers nor the beliefs of the speaker concerning that state of affairs. Those are what lies misrepresent, by virtue of being false. Since bullshit need not be false, it differs from lies in its misrepresentational intent. The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.
Also related: the Gish gallop.

> During a typical Gish gallop, the galloper confronts an opponent with a rapid series of specious arguments, half-truths, misrepresentations, and outright lies, making it impossible for the opponent to refute all of them within the format of the debate. Each point raised by the Gish galloper takes considerably longer to refute than to assert. The technique wastes an opponent's time and may cast doubt on the opponent's debating ability for an audience unfamiliar with the technique, especially if no independent fact-checking is involved, or if the audience has limited knowledge of the topics.

The Wikipedia page has some good counter-strategies: https://en.wikipedia.org/wiki/Gish_gallop
It is easy to do in social media because the context is global but in enterprises it is a bit harder.
Something like "flagged as very likely untrue by AI" is something I would really appreciate.
I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
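A minimal sketch of what such a flag could look like (assuming the OpenAI Python SDK; the model name, JSON shape, and 0.95 threshold are my own illustrative choices, not a proven design, and model self-reported confidence is not calibrated):

```python
# A rough sketch of a "flag only at very high confidence" checker, assuming
# the OpenAI Python SDK; model, threshold, and prompt wording are guesses.
import json
from openai import OpenAI

client = OpenAI()

def flag_if_likely_untrue(claim: str, threshold: float = 0.95) -> str | None:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Assess this claim. Reply as JSON with keys "
                '"untrue" (true/false) and "confidence" (0.0-1.0):\n' + claim
            ),
        }],
    )
    verdict = json.loads(response.choices[0].message.content)
    # Only surface a flag when the model both says "untrue" and is very sure.
    if verdict.get("untrue") and verdict.get("confidence", 0) >= threshold:
        return "flagged as very likely untrue by AI"
    return None  # stay silent in every ambiguous case
```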
It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.
When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and see if the information makes sense.
And it shows up the most with people who answer questions in domains they're not a 100% familiar with.
It includes 4 follow up actions and I automate check in messages to see how they are progressing with them.
- now the fun part: which AI did I use to write the above?
As for "Stop Sloppypasta", it doesn't feel like the content is AI-generated to me but it feels like the presentation of it is. I don't know whether that changes my opinion of the whole thing or just the presentation. As for the advice in it, it seems good, but it also seems a little bit brittle, because people can use an LLM session to review things generated in a different LLM session before sending with some success, and this will increase and therefore it's a moving target.
Instead, they'll use an LLM to send a slop response back.
Instant karma!
The author wastes time talking about this case, and even does it first before talking about the much worse case:
>"The sender shares AI output as their own work, with no indication a chatbot wrote it."
This is 100 times worse, and it is objective rather than subjective. If the author admits it's AI when confronted, it kills their reputation (and if they don't admit it and it turns out to be AI, it's fraud, a fireable offense).
Lumping these 2 categories of AI use together wastes breath and conflates the two; the message will not be clear at all.
What's worse, such a policy actually has the effect of increasing undisclosed AI use. This is a specific instance of a general rule: banning all AI usage increases unregulated AI usage. Everyone who prohibited employees from using AI in 2024 knows that what you get is undisclosed AI use, or content you are not sure is AI-written or not. If you give people a specific way to use AI, you can add features like auditability and supply chain control, and you can remove any outs for employees and users who do not comply with the policy.
How?
Admittedly, the paragraph is somewhat confusingly written. Also probably written by an LLM.
I'm 55 years old. "slop" is way older than your examples. Try a dictionary, eg: https://dictionary.cambridge.org/dictionary/english/slop
LLMs are tools. For me (wot had a C64 as one of my first computers) they are seriously close to magic but I understand what a "next token guesser" means.
I wish there was a remedy. I block or mute the person when I can.
I notice that your comment history is all rapid-fire three-paragraph LLM responses. You do appear knowledgeable and respond quickly, but I've just dumped 10 minutes of my life into your attention in order to verify, parse, and filter through your responses.
I can't tell whether you're a person who thought about something. Therefore, I can't tell whether, for example, https://news.ycombinator.com/item?id=47393311 is an analysis I should take seriously (as I might, if it were spoken from experience) or just Markov-chain, Reddit-trained hypothetical fluff.
How can we increase the friction to presumptively exclude you, but provide accommodation if, for example, you're more comfortable in your native language and using the LLM mainly to bring your English writing to a level consistent with your personal expertise?
> I notice that your comment history is all rapid-fire three-paragraph LLM responses
I looked after you said this, and those are all from today, in the last hour. And it is a stark change from their (very short) comment history. In particular, these two comments are extremely suspicious [0,1]. I think even if they're not LLM generated, it highlights something likely wrong, which paseante themselves states!
>> a long, detailed response in Slack implied the person had spent time thinking
There's 2 minutes between these comments, on different threads (I also noticed they did similar things in a few threads as I typed this out). While the timing is reasonable for the amount of words written, it does not seem adequate for reading the article and/or other comments. Personally, I find that kind of behavior rude, as it enshittifies the social space the rest of us are in [2].

[0] https://news.ycombinator.com/item?id=47392999
frankly I'm disappointed in the amount of responses this account is getting on its other comments. i thought this forum was a bit better than average at detecting artificial behaviour. perhaps the internet is already completely dead and i am merely picking thru its bones.
this seems reasonable to me, especially in this transition period where we're navigating ethical and respectful collaboration that involves AI. give people a little grace in this weird new world.
It doesn't match any of our internal product design and adds tons of extraneous features. When I brought this up with said PM, they basically responded that these inaccuracies should just be brought up in the sprint review and through "partnering" with the engineering team. AI etiquette is something we'll all have to learn in the coming years.
Cue a similar joke about salary negotiation, and the annual dance around goals and performance indicators. Is it really programmers who should be afraid to become redundant, when you think about it?
I should know better than making jokes about reality. It has already one-upped me too many times.
The second problem was always going to be there, even with human written tickets, but the problem really is that someone who relies on AI gets into the habit of treating the LLM as a more trustworthy colleague than anybody on the team, and mistakes start slipping in.
This is equally problematic for the engineers using AI to implement the features because they are no longer learning the quirks of the codebase and they are very quickly putting a hard ceiling on their career growth by virtue of not working with the team, not communicating that well, and not learning.
Apparently, asking "why it doesn't make any sense" wasn't !polite~
If I remember correctly, she came up with ~200 questions for a 2-page ticket. I helped write some of them, because for parts of the word salad you had to come up with the meaning first and then question that meaning.
You know what happened after she presented it? The ticket got rewritten as a job requirement, and now they're seeking some poor sod to make it make sense lol
One had to be very unqualified to even get through the interview for that job without asking questions about the job, I feel. Truly, an AI-generated job for anyone who is new to the field
I'm pretty sure it would have been okay to stop at 5-10 questions, because it was clear he couldn't answer any. But my friend is from a hateful branch, and so she went for the humiliation angle of asking for as much clarification as the ticket itself allowed
Talk about an AI induced productivity increase ...
I also do the extra step of eliminating things that are not needed, or we review this during backlog refinement.
Anyway.
People are starting to log support tickets using Copilot. It's easily recognisable: they just fire a Copilot-generated email into the Helldesk, which then means I have to pick through six paragraphs of scroll to find basic things like what's actually wrong and where. Apparently this is a great improvement for everyone over tickets that just say "John MacDonald's phone is crackling, extension number 2345", because that's somehow not informative enough for me to conf up a new one and throw it at the van driver to take to site next time he's passing, and then bring the broken one back for me to repair or scrap.
Progress, eh?
Currently it's a bit of a wild west, but eventually we'll need to figure out the correct set of rules of how to use AI.
The ticket was given to an LLM, the code written. Luckily the engineer working on it noticed the discrepancy at some point and was able to call it out.
Scrutinizing specs is always needed, no matter what.
The quick solution is to escalate the arms race, and start using AI to filter the AI slop, but I'm not sure that's a world I want to work in :)
So now, even figuring out that it was a careless or lazy job takes a lot more time, which drastically skews the economics in favor of the careless person.
Funny how that works out.
They write shit code, but can be prompted to highlight common failures in certain proposals.
For example, I am planning a gateway now, and ChatGPT correctly pointed out many common vulnerabilities that occur in such a product, all of which I knew but might not have remembered while coding, like request smuggling.
It missed a few, but that's okay too, because I have a more comprehensive list written down than I would have had if I rubber ducked with an actual rubber duck.
When I finally build this product, my product spec will have a list of warnings.
Has been for a long time unfortunately. AI didn't create this behaviour but certainly made it easier for the other side to do it.
> Review the slop with the person that submitted it.
Alternatively, mark them as "Needs Work" if you can. But yes, put the ball in their court by peppering them with questions. Maybe they will get the hint.