Ars Technica fires reporter after AI controversy involving fabricated quotes
562 points by danso 24 hours ago | 355 comments

AnonC 20 hours ago
Journalists and bloggers usually write about others' mess-ups and apologies, dissecting which apologies are authentic and which are non-apologies.

In this incident, Aurich Lawson of Ars Technica deleted the original article (which had LLM-hallucinated quotes) instead of updating it with a correction. He then published a vague non-apology, just like large companies and politicians usually do. And now we learn that this reporter was fired, and yet Ars Technica doesn't publish even a snippet of an article about it.

There's something to be said for the value of owning up to issues and being forthright about actions and consequences. In this age of indignation and fear of being perceived as weak or vulnerable due to honesty, I would've thought that Ars would be, or could have been, a beacon for how these things should be talked about.

It’s sad to see Ars Technica at this level.

reply
Hnrobert42 11 hours ago
I cannot disagree with you more strongly.

Ars did own up to its mistake both in writing and in firing the author. The author himself fell on his sword in detail on Bluesky.

Your only real complaint is that their published explanation wasn't subjectively good enough for you and that means it's sad to see them at this level?

reply
Aurornis 10 hours ago
> The author himself fell on his sword in detail on Bluesky.

Not exactly. He wrote a long excuse blaming being sick, sidestepping the issue that he was using AI tools to write for him and not making an effort to fact check.

Also Bluesky is not Ars Technica. It doesn’t matter what he posts on his own obscure social media page. We’re talking about the journalistic platform where he was given a wide audience.

> Your only real complaint is that their published explanation wasn't subjectively good enough for you and that means it's sad to see them at this level?

Why do you not think that’s a valid complaint? It appears they eventually did part ways, but Ars Technica has also been trying to lay as low as possible and avoid the topic in hopes that it will blow over.

reply
madamelic 10 hours ago
Maybe I don't understand journalism but this guy being a reporter, shouldn't he have had an editor reviewing his work before they hit publish? I understand trusting a senior reporter but I would think due to libel concerns, they would check people's quotes ESPECIALLY if the reporter was sick.

Honestly, it seems like journalism has been in its 'vibe code' era for a decade, where they just publish whatever, typos and all.

This was an institutional error, not an individual reporter's fault. We should also be asking why he was still contributing when he had a high fever. Why did his editors push him to publish his work? I will certainly write code and answer questions when I am sick when I am up to it but I would never push to main while sick.

reply
noboostforyou 9 hours ago
> Maybe I don't understand journalism but this guy being a reporter, shouldn't he have had an editor reviewing his work before they hit publish?

While the journalist is still responsible for their own actions, I agree with you that this being published in the first place is indicative of a deeper failure akin to - "if a junior dev accidentally deletes your production db on their first day that's on the company itself"

reply
Aurornis 8 hours ago
> failure akin to - "if a junior dev accidentally

This person was not a junior.

He chose to use the AI tools knowing that they hallucinate.

The comparisons to an untrained junior are illogical. This person was a long time reporter who knew better.

reply
newswasboring 8 hours ago
Even a senior dev being able to unilaterally delete your prod entirely should not be possible.

But I don't think the intention was to compare with junior devs; it's just a popular shorthand for "your process sucks".

reply
Aurornis 8 hours ago
> But I don't think the intention was to compare with junior devs

Junior was said specifically.

A better analogy would be if one of your staff engineers decided to connect OpenClaw to his workspace and it found a way to delete the production DB.

The author was an AI reporter. You can’t argue that he didn’t know what he was doing when he made these choices. Any comparisons involving junior devs are just dishonest.

reply
noboostforyou 8 hours ago
I was using a common "phrase" that highlights individual human error vs systemic failures

Since you are stuck on the semantics allow me to rephrase - "if a single developer is able to delete your entire production db, that's an org failure"

reply
mikkupikku 8 hours ago
Specifying a junior dev on his first day is a plainly deliberate rhetorical ploy to frame systemic blame as more legitimate than individual blame. If not, then why not make it a senior developer? Anybody can fuck something up, but we give special consideration to noobs who make noob mistakes, and that's what is being implicitly appealed to, illegitimately. This journalist wasn't a noob, and using ChatGPT to write his article was an error in judgement, not an honest mistake.
reply
newswasboring 4 hours ago
The original author clarified and you are still stuck here? Take a step back dude it's not that serious.
reply
mikkupikku 4 hours ago
"You disagree with me? Whoa dude, you need to relax and touch grass."
reply
newswasboring 8 hours ago
> Junior was said specifically.

Yes, but I think you are taking this phrase more literally than it's meant to be read.

reply
Aurornis 8 hours ago
I don’t think so. Junior was a key designator in the claim and words have meanings. It would have been easier to leave it out if they didn’t intend for it to contribute meaning.

I think this is turning into a motte-and-bailey argument, where the junior dev story is used to push the argument and then backpedaled when others identify the fallacy.

reply
afavour 7 hours ago
Sadly this is a reality of the money disappearing from the journalism industry. You're right, there absolutely should be fact checkers. A reporter absolutely shouldn't be filing while sick. And the big news orgs still do that. But I doubt Ars has the resources.
reply
ValentineC 7 hours ago
> But I doubt Ars has the resources.

Ars is owned by Conde Nast, which is owned by Advance Publications. Ars's parents could have funded all these to ensure journalistic integrity, but would rather squeeze their staff and make money off the brand goodwill and advertising.

reply
Aurornis 9 hours ago
The root offense wasn’t that this was published. The root problem is that the author submitted an LLM hallucination as a story. He should have faced consequences even if it had been caught.

> This was an institutional error, not an individual reporter's fault.

The person who caused the problem is at fault. It doesn’t help to do mental gymnastics to try to shift blame to a faceless institution. The author is at fault.

> We should also be asking why he was still contributing when he had a high fever. Why did his editors push him to publish his work?

I think you’re putting too much stock into the excuse. The author got caught doing one of the things you cannot do as a journalist: Publishing fake quotes. He was looking for any way to excuse it and make it not his fault so he could try to keep his job.

He made the choice. The consequences are his to bear. If it had been caught before publishing he still should have faced the consequences.

reply
lich_king 10 hours ago
It is not a job of the editor to assume that the author is lying to you.

> This was an institutional error, not an individual reporter's fault.

Ah yes, "the system made me use AI".

reply
barbazoo 10 hours ago
More akin to not having code reviews, in my opinion. If the process isn't there, you're just not picking up certain issues.
reply
iepathos 9 hours ago
If the Ars Technica editorial process requires assuming reporters don't fabricate quotes, then their process is inadequate. That's like a software company letting junior engineers release directly to production with just a spellcheck and no real process to catch errors. Major publications like The New Yorker, The Atlantic, etc. have a dedicated fact-checking department that is part of the process and needs to give the ok before any article is published. Why is their process so deficient by comparison? Why wasn't there any fact checking?
reply
Aurornis 9 hours ago
> That's like a software company letting junior engineers release directly to production

This person wasn’t a junior.

Editorial processes don’t actually check every single line of everything that is written. Journalists are trusted to report accurately. This person demonstrated they could not be trusted.

> Why wasn't there any fact checking?

Why do programmers ever let any bugs get to production if they have code review? Journalistic outlets do not fact check literally every line that is ever written before it goes to publication.

reply
mikkupikku 7 hours ago
I agree completely; the people acting like it's Ars' responsibility to assume every sentence from their journalists is a lie just aren't being realistic.

And even if Ars editors had caught the fabricated quote, what then? Obviously he should still be fired. Ars could probably benefit from better editors, but even so, this doesn't absolve the journalist of any of his own blame for being the one who introduced these fabrications in the first place.

reply
greedo 7 hours ago
But they generally (or at least they did when I was in the biz) fact check quotes. It only takes a few minutes to fire off an email.
reply
catlifeonmars 8 hours ago
The “system” should make it difficult to make mistakes.

But more importantly, why can’t both be at fault?

Having fact checkers review every article you publish is a very low bar (as in, you should not be in the business of publishing news if you can't do it effectively).

reply
starkparker 9 hours ago
As someone who worked as a newspaper copy editor for the first third of my career, "assume that the author is lying to you" was the entire job.

A lapse in that non-hypothetically left me responsible, and legally liable, in situations like this.

reply
madamelic 5 hours ago
> legally liable

I think this is the thing people are missing the most. Libel is an incredibly serious matter. Misstating a fact is a faux pas and a bad look, but misquoting someone, especially if the article is taken as a hit piece, can cost hundreds of thousands or millions.

reply
chrisjj 7 hours ago
> Ars did own up to its mistake both in writing

The Ars "own up" didn't even ID the article or the author.

reply
beloch 17 hours ago
"I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us, and this incident was isolated and is not representative of Ars‘ editorial standards."

----------

A reporter whose bailiwick is AI should have known that he needed to check any quotes an LLM spat out. The editorial staff should have been checking too, and this absolutely is representative of their standards if they weren't.

It would probably be worth checking to see if any other articles or employees have similarly disappeared.

reply
b112 16 hours ago
Editorial staff?

There was such a thing, in newspapers up until 2000. Then, as profits nosedived, these sorts of things largely disappeared.

Purely online entities have no way to pay for real editorial staff.

News has no money, compared to news of old. It's part of the reason 99% of modern news is just reporting other people's tweets or whatever.

I can't imagine many news companies having much money for court battles (to force disclosure of documents, or force declassification, or fighting to protect sources). Or spending months or years investigating a story.

Our news sources are poor, weak now.

reply
troyvit 10 hours ago
> Editorial staff?

> There was such a thing, in newspapers up until 2000. Then, as profits nosedived, these sorts of things largely disappeared.

In a lot of ways you're right, but our public radio station (cpr.org) has the largest newsroom in the state, and that newsroom makes up over a third of our staff. So yeah "news companies" don't have news rooms but that's because their business isn't news. It's funneling user data to their parent companies and getting people to click ads.

However, thanks to "listeners [and viewers, and surfers] like you," public media is still working its ass off to make a difference despite being cut loose from the government. It won't work unless you switch your perspective to local news (where most of the real information is anyway) and unless you donate.

Apologies for turning a comment into a mini fund-drive :)

reply
ThunderSizzle 14 hours ago
Agreed. Modern news is beyond lazy, and is not journalism by any means. Too many talking heads do nothing but sit behind a screen watching others for what to say next.

Granted, a few of the remaining newspapers I'm aware of run business awards (Best restaurant, etc), and the way to win is via wining and dining them, even though the paper claims it's based on people's votes.

That style of thinking - of entitlement - probably contributed to the lack of interest in both cable news and traditional web/paper outlets, as the younger generations started to see through it more.

reply
a4isms 10 hours ago
> A few of the remaining newspapers I'm aware of run business awards (Best restaurant, etc), and the way to win is via wining and dining them, even though the paper claims it's based on people's votes.

Is that how it works where you are? Because over here, the best way to win an award from a publication is to advertise in that publication. Advertise enough, and you'll also become their go-to when they need a quote about anything vaguely related to your restaurant or other business, and once a year or so they'll print some hagiographic article about the amazing things going on under your leadership.

reply
katzgrau 13 hours ago
I think you missed the point of the parent comment.

The money (from advertising) that used to go to news now goes elsewhere (Google and Meta).

It’s left very little in terms of resources for staff.

Think about what the quality of commercial software would be like if there wasn't enough money for QA and testers, and top-tier devs capped out at $180k with starting roles at $30k and $40k.

That’s the news industry right now. Poorer quality product.

reply
mikkupikku 13 hours ago
The money used to go to Hearst and co. The golden age of journalism is mostly a mirage.
reply
plufz 12 hours ago
I can’t talk for the US but here in Sweden most news media have fewer journalists today. Is that not the case in your country or in what way is it a mirage?
reply
mikkupikku 10 hours ago
Maybe it's different in Sweden, but when I read old American newspapers, from a hundred years ago, 90% of it is absurd slop that people would laugh out loud at today.
reply
plufz 8 hours ago
50 years then?
reply
WarmWash 10 hours ago
How many ars readers do you think don't use ad block?

Tech audiences are the worst to be advertisement dependent on.

reply
jodrellblank 11 hours ago
> is not journalism by any means

It literally is journal-ism.

Wikipedia: "Journalism is the production and distribution of reports on the interaction of events, facts, ideas, and people that are the "news of the day""

Britannica: "Journalism, the collection, preparation, and distribution of news and related commentary"

Stories from British Newspaper Archive[1]:

- June 1950 Cat in Tree in Sheffield - Sheffield Daily Telegraph

- July 1939 A cat which has sought refuge at the top of a tree on Somerlayton Road, Stockwell, defied all attempts to get it down. - Sunderland Daily Echo.

- June 1956 A cat was rescued from a 60ft. oak tree by Southgate firemen at Abbotshall Avenue, Southgate. - Wood Green weekly herald.

- October 1959 CAT UP TREE I was sorry to hear that your cat had been lost Frances, I hope he is none the worse for his experience up the tree, now. - Penrith Observer.

- July 1956 Cat in tree rescued. Worthing firemen rescued a cat - Worthing Herald.

- July 1955 RESCUED CAT IN TREE - Percy Kemp climbed 40ft up a tree to rescue a cat - Bradford Observer.

- November 1956 An emergency tender from the Eastbourne Fire Brigade went to the rescue of cat in a tree in Brassey-avenue, Hampden Park - Eastbourne Gazette.

- August 1953 Clifford Morton (25) climbed 120ft up a swaying fir tree to rescue a cat - Coventry Evening Telegraph.

- March 1950 Persian cat belonging to Mrs M. ___ ... heard meow-ing from a 40ft. tree in field nearby - Dundee Evening Telegraph.

- February 1950 CAT UP TREE A telescopic ladder. belonging to Birkenhead Fire Service was rushed three miles to Arrowe Park Road. Woodchurch. this afternoon. to rescue a cat which had climbed over 40 feet up a tree - Liverpool Echo

- October 1924 SHOTS AT CAT IN TREE .. It was stated that the boys saw a black Persian 'cat up a tree on the farm, and they fired at it - Daily Mirror

- July 1939 CAT IN TREE FOR TWO DAYS - Hartlepool Northern Daily Mail

- August 1962 CAT IN TREE RESCUED BY FIREMEN - Lincolnshire Free Press

- May 1956 The story of a stray cat, Mr. Budd and a 45ft, fir tree, was told at Wednesday's annual meeting of the Torquay and South-East Devon branch of the R.S.P.C.A. - Torquay Times

- etc. etc.

When was this imaginary wonderful time you're implying, when newspapers only spoke truth to power with mighty investigative reporting and weren't literally a journal of things people did and said in a local area (or on a certain topic)?

[1] https://www.britishnewspaperarchive.co.uk/search/results?bas... tree&retrievecountrycounts=false

reply
vintagedave 16 hours ago
Yes: in newsrooms, this is the editor's responsibility. I note the editor wasn't fired.
reply
crazygringo 11 hours ago
It's the editor's responsibility to set processes and standards to try to make sure this doesn't happen. If the rules exist but the reporter breaks them, then it's the reporter's fault and they get fired. As happened -- that's part of the process of maintaining standards. It's not the editor's fault. What exactly do you expect them to do? They can't fact-check and verify every single fact and quote in every article. They're not superhuman.
reply
vintagedave 7 hours ago
Why not? Copy-paste-google would be a sanity check for 99% of them.
reply
crazygringo 5 hours ago
Because they're busy doing the rest of their job? They don't have enough time for it, nor is it a good use of their time.

That's like asking why the CEO of a 20-person startup isn't reading every line of code for bugs. It's not the best use of their valuable time.

reply
mikkupikku 13 hours ago
It's the editors responsibility to make sure fabricated quotes don't get published, but it's also the journalist's responsibility to not paste fabricated quotes from a chatbot into their articles. The responsibility of the former doesn't negate the responsibility of the latter.

I can't just submit shit work all day long and then blame QA when some of it goes through. That's like a burglar saying it's the cops' fault that people got burgled.

reply
lynx97 13 hours ago
When more and more typos started to creep into news articles from our state-owned, national news feed and people started to notice, the explanation we got was basically that the frequency of news articles is supposedly so high that it is impossible to catch them. If news orgs can't even do enough proofreading to catch typos and grammatical errors, I highly doubt anyone is still doing editorial checking...
reply
gruez 11 hours ago
Isn't that the factchecker's job?
reply
chrisjj 7 hours ago
> beloch wrote:

"I inadvertently ended up with a paraphrased version of Shambaugh’s words rather than his actual words,” Edwards continued. He emphasized that the “text of the article was human-written by us"

... except the bit that wasn't.

Nomination for Weasel Words of the Year award.

reply
jrmg 18 hours ago
Is it normal/expected for a news organization to publish that they fired someone? I’m inclined to take the ‘don’t comment on personnel matters’ at face value.

They did report on the article quote sourcing debacle at the time - perhaps not as quickly as some would’ve liked, but within a couple of days.

reply
bayindirh 18 hours ago
Yes. Normally, and Ars is generally up to that standard, the editorial staff (or Editor in Chief) updates the article, adds a note about the correction, and further adds that the original author of the article is not working with Ars anymore.

It stays as a mark, immortalizing the error, but it's a better scar than deleting and acting like it never happened.

I also want to note that, this last incident response is not typical of the Ars I'm used to.

reply
nerdsniper 17 hours ago
> this last incident response is not typical of the Ars I'm used to.

They never really announced Peter Bright leaving Ars Technica either, though. At least not until much, much later.

reply
bayindirh 17 hours ago
That was a criminal case, though. The court process may have prevented them from talking about it to keep things fair.

I'm not a US citizen and IANAL, so YMMV.

reply
pavon 6 hours ago
It isn't just Dr Pizza. In recent history (perhaps since being bought by Conde Nast?), when staff left, stories from them simply stopped appearing, and questions about whether they had left or were on a break were met with crickets. The only confirmation came when the bio was changed and/or they announced they were hiring or had hired the person replacing them.

At least that is what I remember with Sam Machkovech, Ron Amadeo, Cyrus Farivar, Joe Mullin, Andrew Cunningham, Casey Johnston, and Jacqui Cheng. And the policy doesn't appear to be limited to people leaving on bad terms, since Andrew has since returned, and Cyrus occasionally contributes freelance articles. The last time I remember them announcing a departing staff member is when Ben Kuchera left.

reply
jrmg 6 hours ago
There was nothing at the article’s URL for a day or so after it was pulled (on a holiday weekend, FWIW), which I agree isn’t great. But there is, now, a page up at the article’s original URL:

https://arstechnica.com/ai/2026/02/after-a-routine-code-reje...

with a locked comment leading to the Editor’s statement:

https://arstechnica.com/staff/2026/02/editors-note-retractio...

I disagree with the idea that the misleading article text should remain up after a retraction.

reply
crazygringo 11 hours ago
I don't know what you're basing that on.

It seems entirely normal and standard to retract articles and publish a note elsewhere that it was retracted. In fact, it's common because if an article had one fabrication it might have others which you haven't discovered yet, so you don't want to keep it up.

Whether they want to announce that the journalist was fired is up to their discretion. But it's not necessary or even normal.

I don't know why you're talking about a "mark", a "scar", that "immortalizes". That's weird and frankly a little disturbing. The journalist got fired and the article got taken down and a note was made by the editor. That's accountability working as intended. I don't know why you want more than that.

reply
bayindirh 11 hours ago
First, I didn't want him to be fired, frankly. I have a comment telling exactly that when this thing happened.

Second, as a reader who has followed Ars for more than 10 (15? IDK) years, I've never seen them abruptly retract an article like this. Their modus operandi is to correct articles and own the corrections. This is what I've said all along (this is the third time in this comment thread).

We all have scars. From a fall, from a cut, physical, emotional, whatnot. You don't need to feel sad or get disturbed about it. A scar is life's way of making you remember something. If it's of your own making, it makes you remember what not to do. If it's someone else's making, it makes you remember an unfortunate event you came out of alive.

Owning your mistakes by correcting an article and marking it is greater accountability than saying "this never happened, nothing to see here, move along". I'll not comment further on the firing of the author. I don't have enough information, and I don't know them closely enough, to say anything more than that I wish he hadn't been fired.

reply
crazygringo 10 hours ago
But retracting an article is more serious than making a correction.

The accountability comes from the editor's note. It's there already. It's owned.

You're acting like this is some attempt to bury a mistake. It doesn't appear to be. It's what happens when you don't even have faith that the rest of the article is correct.

reply
donohoe 11 hours ago
No. That can happen but it’s not the only path. An article can be retracted. That said, it’s usually noted somewhere else.
reply
bayindirh 11 hours ago
You're right, but I was describing what Ars does 99.999% of the time. This is the only time I've seen Ars retract an article and bury it this deep.
reply
g947o 13 hours ago
If a news organization publishes an article welcoming someone onboard, they should also do that when someone is fired because of a scandal.

Of course, if someone leaves because of personal reasons or jumping ship, there is no reason to do that. But this is different.

reply
donohoe 11 hours ago
Sorta. Usually they would do a press release or a post on their company blog - not an article.

Aside: posting about a new hire is easy and carries no legal liability. Posting about a departure can be a tangled web.

I do agree that some note by Ars would be good here.

reply
Hnrobert42 11 hours ago
The complications of HR policy and law do not allow for your proposed solution.
reply
g947o 11 hours ago
I have no experience in that area, but it's hard to see how a plain, factual statement "This person is no longer with the company." can be problematic.
reply
pavon 6 hours ago
That statement by itself wouldn't warrant an article, and it would be difficult to include a statement like that in a larger article about the event, without implying more than that.
reply
IshKebab 18 hours ago
The BBC reports on itself quite well (maybe too much even). Here's an example:

https://www.bbc.co.uk/news/articles/cly51dzw86wo

I think they're an outlier, but still I was disappointed by Ars's response. They deleted the article and didn't detail what was wrong with it at all. Felt like a cover-up.

reply
d1sxeyes 15 hours ago
To be completely fair, BBC news is effectively a different organisation which has the BBC name. There's a fairly good overview of it here: https://www.bbc.com/sport/football/articles/c80l3074mgko
reply
kitd 14 hours ago
BBC News does have to report on itself from time to time. Here's its "live" feed from November on the Parliamentary Committee investigation into the Trump speech edit incident:

https://www.bbc.co.uk/news/live/cp34d5ly76lt

(edit: technically, it was Panorama. I'm not sure if that is part of the News remit or separate from it).

reply
buran77 15 hours ago
> They deleted the article

This was a big disappointment. I read the original article and the comment from the source highlighting the error, knew what was wrong with it, and still think it was the wrong move to just delete the article and all the original comments, and replace it with an editorial note.

This is a kind of cover-up. It's impossible to hide the issue but they went to great lengths to soften the optics and remove the damning content from the public record. They obscured the magnitude of the error. It looks like another "person uses AI and gets some details wrong".

What they did so far, the decisions that allowed the issue to occur in the first place (e.g. no editorial review before publishing), and the first reaction to deal with the incident (just destroying the content, article and comments) tell me everything I need to know about the journalistic principles at Ars Technica. It's a major loss of trust for me.

reply
Gagarin1917 18 hours ago
They’re at this level because the editors have always had low standards.

I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.

They’ve had this problem for years. They will publish anything that gets them clicks. They do not care if a writer makes things up. They do not care if their headlines are misleading - in fact, that’s the point. They clearly got into the job in order to influence and manipulate people.

They’re bad people, with terrible motivations, and unchecked power. They only walk back when something really really bad happens.

Never trust an Ars headline.

reply
3abiton 18 hours ago
> They’re at this level because the editors have always had low standards.

It's not just Ars Technica; I would go as far as saying the big majority. I work at the biggest alliance of public service media in the EU, and my role required me to interact with editors. I don't like painting with a broad brush, but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude. Probably the "public" aspect of the media, but I would argue it's the editorial aspect too. The rest of the staff are often very nice and down to earth.

reply
mikkupikku 14 hours ago
> but I have yet to meet a humble editor. They approach everything with an "I know better than anyone else" attitude.

They're like "UX experts" in software. One does UX for software, the other does UX for text. Same attitude problems, from the way you describe it. If the expert in something so subjectively judged is seen to be conceding anything, that might undermine their perceived expertise. Any push back is interpreted as somebody challenging their career.

reply
pixl97 11 hours ago
> Any push back is interpreted as somebody challenging their career.

I mean, yes, this happens quite a bit, especially with egotistical people.

But to play devils advocate they do have to deal with a massive fuckload of bullshit asymmetry where people dumber than rocks spew forth a never ending stream of stupid crap with the authority of an LLM.

reply
iugtmkbdfil834 17 hours ago
<< They approach everything with a "I know better than anyone else" attitude.

My charitable read is that if one has to interact with the public, one naturally develops an understanding of what is wrong with it.

reply
g947o 13 hours ago
Same for the Verge. Sometimes their headline or content contains factual errors. If you point it out in the comment, sometimes they do it properly and add a correction, other times they quietly fix it and delete your comment. So much for their free speech stance and editorial practice.
reply
adornKey 7 hours ago
A few years ago I liked Ars Technica, but then somehow I think quality went down the drain. Did something happen to them a few years ago? If they get rid of the crazy reporters and go AI only - maybe the quality will improve again to a readable level.
reply
jodrellblank 10 hours ago
> "always had low standards"

Always? Or since they were bought by Conde Nast in 2008?

reply
tw85 6 hours ago
It doesn't help that the background of most Ars' writers was some variant of "former IT pro", which is almost guaranteed to mean they're unqualified to write with nuance and depth about serious technical topics. So you have guys like Jon Brodkin pumping out total nonsense about the latest wireless communications breakthroughs (just one example I remember) while 99% of the audience has no clue and won't check them on it.
reply
bayindirh 15 hours ago
> I don’t know about you guys, but I feel like 50% of Ars headlines are completely misleading.

I believe they are doing A/B testing on these.

Ah yes, I remember correctly for once: https://arstechnica.com/civis/threads/why-do-front-page-arti...

TL;DR: they have been doing mandatory A/B testing since 2015.

reply
bombcar 12 hours ago
A/B headline testing is just scientific clickbait.
reply
oneeyedpigeon 11 hours ago
I disagree. You could A/B test two good, accurate, well-written headlines and stay clear of clickbait altogether. Sure, you're still optimising for the most popular, but "clickbait" doesn't just mean "well performing", there's also an implication of duplicity.

I have a modicum of experience here. I write for another online media company and, although we produce our own headlines, we are 'strongly encouraged' to write clickbait headlines, to the extent where we are asked to remove instances of specific product names (etc.) in order to be mysterious and not give the game away too early. (Yes, in case it wasn't clear, I hate this!)

reply
bombcar 3 hours ago
Sure, you can be above board (and perhaps they even try) but that recent “WiFi is broken wide open” headline that turned out to be something about device-to-device and not wargaming told me where their hearts lay (in being paid, understandably).
reply
bayindirh 12 hours ago
I didn't argue that it isn't?
reply
bombcar 12 hours ago
I was agreeing; anyone who admits to using A/B headline testing is admitting to being a clickbait factory.
reply
pixl97 11 hours ago
You ever ask the question why they would want to be a clickbait factory?

Because it pays the bills, unfortunately. Google has sucked up all the advertising dollars that used to pay for media and the rest of the world is now doing card tricks to earn scraps to pay the bills.

reply
simianwords 15 hours ago
Example?
reply
ChrisSD 15 hours ago
> Aurich Lawson of Ars Technica deleted the original article

That's a very "shoot the messenger" statement. While Aurich is the community "face" of Ars, I very much doubt he has the power to do anything like that.

reply
kergonath 17 hours ago
Ars has never commented on firing staff before, and it has happened on several occasions. You get the occasional article when someone joins, never when someone leaves. They should have published another article after all this, but I would not expect them to comment on staff.
reply
anakaine 17 hours ago
And I think thats a good thing. People screw up, and journalists are people. This person's punishment for their screw up was losing their job. They do not need to be dragged into a hit piece.

Ars can, and probably should if they have not already, publish a piece about hallucinations and use of AI in journalism, and own up to their own lack of appropriate controls and reflections. They do not need to drag the authors name into the write up. It can be self critical of themselves as a journalistic outlet.

reply
g947o 13 hours ago
Nobody needs to publish a hit piece.

Ars could have just said "After investigation, we reviewed our editorial process. The author of the article is no longer with the company." factually and objectively.

I can't see how this could possibly be a negative or harmful thing.

reply
smallerize 11 hours ago
There's no point trying to update an article with fake sources. If you can't trust the material, there's no story. I think pulling it was the right move here.
reply
tw85 6 hours ago
I wonder if it has something to do with an incident years ago in which one of Ars' senior reporters (Peter Bright) was arrested and convicted for child enticement. Ars eventually allowed one of their readers to write a forum article about it, but they didn't write one themselves at the time. Some people defended this course of non-action by saying it was the sensible thing to do because his colleagues could become witnesses in the trial.
reply
elAhmo 12 hours ago
> There’s something to be said about the value of owning up to issues and being forthright with actions and consequences.

Exactly! The situation happened and there's no going back, but they had a choice: be transparent about it, and I'm sure people would have appreciated that, maybe even netting them a positive rather than a negative. The choice they made is the complete opposite, and a sign that no one should trust them.

reply
smallerize 9 hours ago
Ars isn't winging it here, they are following Conde Nast HR processes. https://arstechnica.com/civis/threads/editor%E2%80%99s-note-... "I can confirm that the HR processes are intricate and complicated. I can confirm that we have union writers. I can confirm these things take time." -Aurich
reply
jmbwell 12 hours ago
I note that Ken Fisher did post an editor’s note, Benj did publicly own up to it, and all of this was mentioned in the article.
reply
hluska 11 hours ago
Republishing an article with corrected quotes is reserved for cases where an editorial team can trust the substance of an article. There is an error but that error doesn’t impact the amount of trust the editorial team has in the article.

A retraction is totally different. It means that an editorial team does not trust any of the underlying article. It’s the biggest stick in journalism and is only reserved for the absolute worst breaches of trust.

When you retract an article and then update the author’s bio to past tense, that’s as clear of a signal as you can ethically send. A publication with clout makes news and writes the first line of people’s obituaries while they’re still alive - a degree of tact, professionalism and newsworthiness comes into play.

reply
calyth2018 10 hours ago
Conde Nasty corrupts everything.
reply
dust-jacket 9 hours ago
I absolutely disagree.

This has been done very professionally. They pulled the article. They handled the personnel matter. They didn't try to pretend it hadn't happened.

Why are people here acting like retracting an article is an attempt to hide something? They literally replaced the whole text with a note from the editor saying "this article was bad".

reply
chrisjj 7 hours ago
> Why are people here acting like retracting an article is an attempt to hide something.

Because the retraction notice hid the article name and the author name?

reply
petterroea 19 hours ago
It seemed to me like very hasty self-defense: there's a lot of AI-slop hate out there, and Ars can't risk becoming known for slop when their readers are especially likely to be aware of the issue.

I don't think Ars felt they had a choice but to cut off the journalist who made the mistake, especially when it was regarding a very touchy subject: it's impossible for us readers to know whether this was a single lapse of judgement or a bad habit. Regardless, the communication should have been better.

reply
esperent 19 hours ago
All they had to do was write a clear and simple message saying that one of their staff was responsible and has been fired, and that they'll take steps to avoid this in the future.

Their actions so far just make me think they're panicking and found a scapegoat to blame it on, but they're not going to put any new checks in place so it'll just happen again.

reply
DetroitThrow 19 hours ago
It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.

I feel bad for the guy, but there's just no way I can imagine much better safeguards beyond editors paying closer attention to sourcing and hiring more reliable people.

reply
autoexec 18 hours ago
> It was against their policy to use AI in producing any part of the final article, and the writer was aware of that.

More than that: as a reporter covering AI, he should have been fully aware that AI frequently bullshits and lies. He should have known it was not reliable and that its output needs to be carefully verified by a human if you care at all about the accuracy or quality of what it gives you. His excuse that this was done in a fever-induced state of madness feels weak when it was his whole job to know that AI was not an appropriate tool for the task.

reply
Barbing 18 hours ago
>his whole job

Possibly akin to a roofer taking a shortcut up there, then taking a spill? You knew better but unfortunately let the fact that you could probably get away with it with zero impact decide for you.

IIRC the hallucinations here were essentially kicked off by user error. Or rather, let's say at least: a journalist using the best available technology should have been able to reduce the chance of an issue this big to near zero, even with language models in the loop and without human review.

(e.g. imagine Karpathy’s llm-council with extra harnessing/scripting, so even MORE expensive, but still. Or some RegEx!)

reply
true_religion 16 hours ago
Alternatively… there was no AI error, the reporter made up the quotes, and lied when they were challenged.
reply
Barbing 2 hours ago
Are you familiar with the reporter's work & reputation?
reply
bombcar 12 hours ago
The chance that the very first time AI was used it screwed up and was caught is pretty low.

It’s likely been used before but nobody got caught.

reply
tonyedgecombe 18 hours ago
You have to give them time to do the job properly as well. Companies will often pay lip service to standards then squeeze their staff so much those standards are impossible to attain.
reply
esperent 18 hours ago
Yes, those are exactly the kind of steps they would need to publicly commit to in order to retain trust. And yet, instead we get silence, no acceptance that some measure of responsibility falls on the editorial team here. So it's clear they just hope it'll blow over without them having to do anything, which is the opposite of what a trustworthy site would do.
reply
gertop 19 hours ago
AnonC doesn't seem to be upset that the journalist was fired. The disappointment comes from Ars trying to brush this entire situation away by deleting articles, comments, and making no statement on their website.
reply
petterroea 19 hours ago
My understanding is that AnonC is upset at Ars not taking the mature approach by allowing this to become a learning moment for the employee and using it to double down and confirm their stance on AI generated content. There's strength in maturity. But I am doing some reading between the lines, and I'm possibly reading a bit too much into "There’s something to be said about the value of owning up to issues"

Reminds me of a story I was told as an intern deploying infra changes to prod for the first time. Some guy had accidentally caused hours of downtime and was expecting to be fired, only for his boss to say "Those hours of downtime are the price we pay to train our staff (you) to be careful. If we fire you, we throw the investment out the window"

reply
bandrami 18 hours ago
"Make sure quotes in your article are things the subject actually said to you" is not something that should need a "learning moment".
reply
watwut 18 hours ago
Accidentally taking down production should not lead to firing. It should lead to improved process

Making up quotes for article, with technology or not, should lead to firing.

reply
oneeyedpigeon 11 hours ago
"should lead to firing..."

... and, also, improved processes. There should be no way an individual writer can damage the brand to this extent with absolutely no checks or oversight. This was just an error, but a bad actor could've put something far, far worse out there.

Even an automated quote-checker might have helped in this case.
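For illustration, a minimal sketch of what such a quote-checker could look like. Everything here is hypothetical (the function names, the regex threshold); a real checker would also need fuzzy matching to handle ellipses, bracketed edits, and punctuation differences:

```python
import re

def extract_quotes(draft: str) -> list[str]:
    """Pull out double-quoted passages of at least four words."""
    return [q for q in re.findall(r'"([^"]+)"', draft) if len(q.split()) >= 4]

def check_quotes(draft: str, transcripts: list[str]) -> list[str]:
    """Return quotes that appear in no transcript; these need human review."""
    corpus = " ".join(t.lower() for t in transcripts)
    return [q for q in extract_quotes(draft) if q.lower() not in corpus]

draft = 'The CEO said "we never trained on user data" and "this quote was invented by a model".'
transcripts = ["Interview, Tuesday: ... we never trained on user data ..."]
print(check_quotes(draft, transcripts))  # → ['this quote was invented by a model']
```

Such a check would not prove a quote is accurate, only flag quotes with no matching source on file, which is exactly the failure mode in this story.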

reply
jcgrillo 11 hours ago
Fact checking is a vital part of the editorial process and clearly that process failed here. Tech people often have a double standard when it comes to journalism--rules for thee but not for me. However the structure is fairly analogous, in that both professions ship under lots of time pressure where mistakes can be costly. I'm not sure, honestly, who is most at fault here or why only the reporter was terminated. But my comment above was to highlight that there shouldn't be a double standard--if you think a journalist should be fired for this kind of error it would be inconsistent to believe a software engineer shouldn't.
reply
lynx97 18 hours ago
There is a difference between an error and totally misunderstanding your actual task. I have absolutely no sympathy for journalists getting caught producing hallucinated articles. That's an absolute no-go, and should always result in that person being fired.
reply
jcgrillo 18 hours ago
Same goes for engineers reviewing vibeslop. If you let that shit through code review, and a customer impacting outage results, that should be instant termination. But it won't be, because as an engineer you are supposed to be held "blameless" right?
reply
pixl97 11 hours ago
Hence why software engineers aren't actual licensed professional engineers.
reply
mikkupikku 13 hours ago
I love vibe coding but you are absolutely right. We're at the stage where vibe coding is a fun way to produce sloppy software, and that's fine if the intended user is just yourself and you're fully informed about what you're getting into. But actually shipping vibe-coded slop to other people is wacky; anybody doing this needs to be manually reviewing every commit very carefully and needs to be prepared to accept personal responsibility for anything that slips by.
reply
jcgrillo 11 hours ago
The problem is that reviewing code for correctness is harder than writing correct code. So these things will always slip through review. I'm a little bit divided here whether we can (or should) blame a reviewer too harshly for letting broken code through review whether it's LLM or human generated.

I've worked on teams with a rubber stamp review culture where you're seen as a problem if you "slow things down" too much with thorough review. I've also worked on teams that see value in correctness and rigor. I've never worked on a team where a reviewer is putting their job on the line every time they click "Approve". And culturally, I'm not sure I'd want to.

That said I think it's pretty clear we need mechanisms that better hold engineers to account for signing off on things they shouldn't have. In some engineering domains you can lose your license for this kind of thing, and I feel like we need some analogous structure for the profession of software engineering.

reply
watwut 18 hours ago
The journalist's job was not to review AI slop. That is a rather crucial difference.
reply
radiohead89 12 hours ago
> It’s sad to see Ars Technica at this level.

They had to do this. You have to have journalistic integrity above all.

reply
14 18 hours ago
Where I work in healthcare, honesty and owning up are encouraged and, unless there is major negligence, not often punished. They just want to learn why the mistake happened and look for ways to prevent it going forward. My buddy said that at his company, if an accident happens, WorkSafe is not out to punish as long as people are forthcoming and honest. Again, they want to learn how to avoid it happening again. Punishment only scares others into hiding mistakes.

I think they missed a big opportunity: instead of firing the guy, sit him down and stress how not okay this was, that it harms their credibility, and that he needs to understand that and make a proper apology. They could make him do some education on ethical reporting responsibilities or whatever.

Then, like you say, not just hide the article but point out the mistakes and corrections: describe the mistake, explain that credible reporting is their priority, and note that the author will be given further education to avoid this happening again. They could also make new policies, like requiring that any article using AI for research must find a source for that information. This would build trust, not harm it, in my opinion.

reply
sumeno 12 hours ago
If a doctor intentionally did something that they obviously know is unethical, they would be fired too. This was not a "mistake"; it was a huge ethical violation.

This is more like writing your buddy a prescription for drugs to take recreationally

reply
vintagedave 16 hours ago
I agree. I'd add that the fact he appeared to be working while sick -- and that he pre-emptively and immediately publicly apologised -- means I think he already did behave as he should.

This makes me question Ars not him. Loss of credibility indeed.

reply
vpribish 19 hours ago
This has just happened - I'm giving Ars a bit more time to come out with a piece examining the situation. They're a pretty good operation, I think. But if they don't...
reply
jmbwell 11 hours ago
This. The truth is still putting on its shoes and all that
reply
rob_c 13 hours ago
> It’s sad to see Ars Technica at this level.

This was from a journalist _who_is_hired_as_an_expert_ on tooling that hallucinates (LLM (AI) chatbots), who then decided to implicitly trust said technology to write a "hit piece" (let's be honest, it was one).

In several territories that would fall under libel, and if untrue it is a major journalistic misstep and career-ending faux pas.

Why in any situation would their position now be defendable?

This is akin to a journalist covering ironmongery writing a "truth" piece on how "jet fuel can't melt steel beams" (if you don't get the reference, lucky you). It's outright unprofessional.

Blaming it on illness allows everyone to save face, but they were compos mentis enough to hit publish at the time. That itself carries a certain "I'm well enough to agree this is a good article" from said author.

reply
vintagedave 16 hours ago
I'm sad to see them fire him. I've seen far worse. I have always approached issues by asking for accountability and improvement, and frankly, he already showed both: he openly apologised. I was very happy with that; it demonstrated integrity, and I kept respecting him.

Even worse,

> I have been sick in bed with a high fever and unable to reliably address it (still am sick) [0]

In an earlier HN thread, I saw someone ask why Ars was requiring staff work while ill. If that's true, if he posted without verification while sick and under pressure, which is implied and plausible, firing looks doubly bad.

Ars has lost a lot of my trust in recent years, with articles seeming far worse. Just like you, I'm sorry to see the editorial position here.

[0] https://bsky.app/profile/virtuistic.bsky.social/post/3mey2mq...

reply
mikkupikku 13 hours ago
You're taking his fever dream excuse at face value, and I think you probably shouldn't. It reads like a lame excuse to deflect personal responsibility, a cynical face-saving tactic.

If the illness was genuine, can he document that he advised management of the fever and they told him to submit an article anyway? It's not his boss's job to stick a thermometer up his ass every morning.

reply
vintagedave 7 hours ago
> You're taking his fever dream excuse at face value

Being sick with a bad fever is awful, it's a nightmare, and I cannot imagine making good decisions at the time.

I do not know if he was ordered to work while sick, but there are often implicit expectations in workplaces and this was a time-sensitive article.

reply
asadotzler 6 hours ago
>I do not know if he was ordered to work while sick

You don't know if they were even sick at all. In fact, when someone gets fired for cause, it's quite common for them to lie about the circumstances. That you take their comments at full value seems kinda naive.

reply
seethishat 12 hours ago
I agree. In my experience, no one cares when you are sick. No one. Maybe your mom, but that's it. Using it as an excuse when you make a mistake is even worse. People value responsibility... "Sorry, my bad, won't happen again", not excuses.
reply
bmurphy1976 13 hours ago
He posted his not-very-impressive apology as images, not easily indexed text. I do think that was purposeful and manipulative, and it very much makes me question his motivation. If I'm missing the original posting in text form, I'd sure like to know so I can correct this perception.
reply
vintagedave 7 hours ago
In fairness, people often do this in order to have a full statement visible, not a portion, not spread over multiple posts, etc.

I'm not saying you're wrong. I just caution that jumping to 'purposeful' (re not easily indexed), 'manipulative' etc is a very strong leap.

I also found the post by searching for a quote from it, so I think images with text likely are indexed. I can't imagine in 2026 they wouldn't be.

reply
WarmWash 10 hours ago
Ars has probably the most rabid anti-AI audience of all the tech publications
reply
dust-jacket 9 hours ago
Nah, The Register is far more strongly anti-AI. Mention AI and systemd in the same article and watch them froth
reply
Paddyz 14 hours ago
[dead]
reply
xnx 12 hours ago
Ars is not journalism. It's scraped content to put between ads.
reply
mmmpetrichor 9 hours ago
I disagree strongly. I've found that they have more journalistic integrity than most mainstream news pages. Did you see the extremely strong post from the senior editor after this AI hallucination was discovered in a published story? And then the reporter was dismissed.
reply
zombot 12 hours ago
Yep, Ars Technica is off my reading list. They completely lost my trust.
reply
carabiner 17 hours ago
It's cuz Ars's roots are in being video game bloggers and graphics card reviewers, not legitimate journalists. They don't have a notion of professionalism or journalistic duty, only virality and juicy takes.
reply
vasco 18 hours ago
They're a random tech blog, the kind of website that is peak time-waste slop; why would they have any standards? Even the New York Times and the Washington Post put up wrong things all the time without corrections. People need to realize journalists are just ad sellers, not some beacon of truth. They are there to sell ads, the same way a YouTube video of a guy eating too much food in front of a camera is.

Journalism has devolved into content creation in the literal sense of the word, they are just there to put something inside the div with the id "content", to justify the ads around it.

reply
lukan 17 hours ago
"People need to realize journalists are just ad sellers, not some beacon of truth."

You just changed the meaning of "journalist". Sure, the job of some journalists would be better described as ad selling, but I'd rather call those people that and restrict the original term to actual journalists who actually care about truth. Because they still exist.

reply
vasco 17 hours ago
The three people at Reuters actually doing journalism are not in ANY way doing a similar job to the millions writing blog posts for Ars Technica-like publications. The latter are ad sellers indeed. And the majority of renowned publications also do little to no journalism.

It's as if we called web devs who learned JS on Udemy and just vibe code "computer scientists" and treated them as if they publish compiler research papers. It's just a completely different job.

reply
lukan 17 hours ago
Eric Berger at Ars, for instance, is someone I consider a journalist. Do you have proof that he systematically neglects truth in favor of ad selling?
reply
mikkupikku 13 hours ago
Berger is a real one. I'm surprised he's lasted so long at Ars Technica. I think eventually his objective reporting on SpaceX will get the Ars Technica reader base to demand his firing; Ars readers are very reddit-like. Team-minded, not interested in hearing dispassionate takes. Hearing Elon Musk criticized as a person while simultaneously seeing SpaceX described as a real and highly accomplished company gives reddit/Ars readers tonal whiplash; such people prefer simple narratives without nuance.

See also, in this very thread, somebody who thinks Berger has a strong pro-Musk bias because his reporting and books say that SpaceX are good at what they do.

reply
lukan 12 hours ago
"Ars readers are very reddit-like"

How can you know? I think you mean most reddit commenters are very reddit-like (nowadays I tend to agree). I read Ars from time to time, but I've never commented there. Still, when I read the comments, I don't get the impression that Berger is close to getting fired.

reply
mikkupikku 8 hours ago
It's a little thing called reading. I read Ars comments, I read reddit comments, and I judge them both to be a bunch of morons who are perpetually suspicious of nuance.
reply
lukan 7 hours ago
You did not seem to get my point: there are readers, but not all of them are commenters. So judging all readers because you perceive the commenters as mainly stupid is maybe missing data?
reply
mikkupikku 6 hours ago
You're right, I used the word readers when I was talking about the commenters. My apologies to Ars readers who don't comment, including myself.
reply
jodrellblank 10 hours ago
> How can you know?

They don't know; their whole comment is just empty insults about simpletons. If anything should get the derision that "slop" gets, it should be the thousands of comments like that which hit HN every day.

reply
doctorpangloss 19 hours ago
you're participating in a social media site where something like 20% of the articles have become, "I told Claude Code to do something and write this article about it." So put your money where your mouth is, if you think it's sad, if this is more than concern trolling, hit Ctrl+W.
reply
aizk 20 hours ago
I have a story with Benji.

Last year I went viral, and Benji was the first person to interview me. It was a really cool experience; we chatted via Twitter DMs, and he wrote a piece about my work. Overall he did a decent job.

Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response.

Then, TechCrunch wrote an article on our project.

I reached out to Benji again, saying "Hey, would you like to chat again, now that we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)

I thought that was rather strange, especially since we already had built up a relationship.

I don't really have a moral or lesson to this story, other than that journalism can be rather opaque sometimes.

Oh, one other tip for anyone reading this: if you ever get reached out to by journalists, communicate in writing, not over a phone call, so you can be VERY precise in your wording.

reply
aleph_minus_one 16 hours ago
> Then, 6 months later a separate project I was adjacent to was starting to pick up steam. I reached out to him asking if he wanted to cover us. No response. [...]

> I reached to Benji again saying "Hey would you like to chat again, now we have some coverage?" And he finally responded, but said he couldn't report on me because he had a directive that he could only report on things that didn't have any prior or pre-existing coverage (?)

> I thought that was rather strange, especially since we already had built up a relationship.

The US mentality might be different, but at least having grown up and living in Germany, such an annoying hustler who wants to use some journalist as a marketing influencer for his private project is a huge no-no. In other words: it is a very reasonable decision (perhaps even the only right one) for any journalist to fob off such a hustler.

reply
aizk 8 hours ago
If simply saying "Hi, would you be interested in covering this?" characterizes me as an annoying hustler, then you know what, I'll take it.
reply
aizk 8 hours ago
I'm not hustling enough to be honest.
reply
0xDEAFBEAD 12 hours ago
>The US mentality might be different, but at least having grown up and living in Germany, such an annoying hustler who wants to use some journalist as a marketing influencer for his private project is a huge no-no. In other words: it is a very reasonable decision (perhaps even the only right one) for any journalist to fob off such a hustler.

Yeah there seems to be a thing where in the US, what's seen as "selling yourself" or "putting your best foot forward" is considered excessive self-promotion / tall poppy behavior in other cultures.

reply
stuxnet79 5 hours ago
> Yeah there seems to be a thing where in the US, what's seen as "selling yourself" or "putting your best foot forward" is considered excessive self-promotion / tall poppy behavior in other cultures.

It is a uniquely US thing & is a common struggle for foreigners who are new to US corporate culture.

Can be especially tricky if you are a 3rd culture individual that has to manage relationships spanning different cultures in your daily life. You can't easily turn "hustler" mode off and on.

It is a huge faux pas in almost every non-western culture and can wreak havoc in your personal life.

reply
aleph_minus_one 11 hours ago
Slightly off topic:

Why is excessive self-promotion considered "putting your best foot forward"?

I understand that you need the money, so you do self-promotion. But this is clearly not "putting your best foot forward"; it's putting a bad foot forward (annoying other people with excessive self-promotion) because you need the money. In other words, what many Americans do is, by my understanding, the opposite of the life advice they give.

reply
0xDEAFBEAD 11 minutes ago
I could equally well ask why putting your best foot forward would be considered excessive self-promotion. Consider the example of contacting a journalist. Why would it be a huge no-no? Why can't the journalist just treat it as any other lead? Skim the email, if they're not interested, ignore or delete. That's not a significant burden. If they are interested, such emails actually help the journalist do their job, by providing ideas for stories.
reply
ambicapter 10 hours ago
You're coming off as clearly not understanding the other side here. Obviously "putting your best foot forward" is not simultaneously "annoy other people by excessive self-promotion" in the mind of a single person.

There are two different types of people, and they think of the same action in two different ways.

reply
albedoa 10 hours ago
That is the US mentality too outside of a small but persistent bubble of hustlers, supported by their symbiotic relationships with publications that need them just as much.
reply
NicuCalcea 13 hours ago
I'm a journalist. As a general rule, if someone approaches me with a pitch for a feature or investigation (not news piece) that was already published elsewhere, I'll turn it down. To be fair, I turn down all PR pitches, but there are journalists who don't but still want an exclusive.

It sometimes happens that you spend weeks or months working on a story, only to be scooped by another publication. It sucks, especially if you think your story is the better one, but unless you can pivot or add a substantial amount of new insight, it won't come out.

reply
areoform 20 hours ago
Sometimes people get busy and overwhelmed, but they don't know how to say no.
reply
epistasis 20 hours ago
I know a lot of people that don't get through their email every week, for example. Even saying no takes too much time, with the volume of communication required by daily work.
reply
abustamam 17 hours ago
Very few people email me except for endless newsletters that I accidentally signed up for. I try to unsubscribe from a few every day, but it seems never-ending.

In the event that you actually do end up emailing me, it's contingent on me actually checking my personal email, which I never do when I'm not working, and only sometimes do during work hours.

If it's you asking me a favor that I'm not in the mental space for, I'll mark the message unread as a reminder to get to it later.

Maybe I just have weird email habits, but I can get away with this because email is not a heavy part of my job.

That being said, one guy was pitching me on something several times a month for several months. I just recently responded to him and apologized because of x y z. He said don't worry and we had a fruitful conversation later.

So, follow-through is important!

reply
Barbing 18 hours ago
Their repeat emailers might win eventually!

Passing on some life advice to anyone who’d benefit, people are busy. Maybe they didn’t respond because you’re annoying?… no no, feel it out and text again a while later. Give them another shot, get to the top of their inbox or messages again.

After someone told me that I realized it’s true!

reply
Sammi 15 hours ago
This is an experience I've had with reporters multiple times. They don't like to write about the same thing twice.
reply
xnx 15 hours ago
My hunch is Ars will copy/reword/repost articles from real news sources (basically free for Ars) or do its own reporting for exclusive stories (costs reporters some time). No reason for Ars to spend reporter time on something they can copy.
reply
lovich 19 hours ago
[flagged]
reply
grantith 18 hours ago
They have a website, a twitter handle, and a GitHub profile with their real name.
reply
guerython 10 hours ago
This is exactly why every AI citation we publish goes through a blocker. We dump the AI transcript plus the generated case numbers into a little script that hits the official court database and only passes through citations that return the same case id, party names, and paragraph text. If the extra lookup fails, the citation has to be marked as a hallucination, logged in the docket, and a human has to go re-verify with the actual law reports before we file anything. Treat the LLM like a drafting helper, not an authority, and make the human verification the gate that moves the draft from “AI promised” to “judicially safe.” We also keep a micro audit trail so if a clerk says “the AI gave me this,” we can replay how the prompt went and which citation check failed. What guard rails have other people put in front of AI-written judgements?
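For anyone wanting the shape of such a gate, here's a minimal Python sketch. It's illustrative only: the `Citation` fields and the `lookup` callable stand in for whatever your real court-database client returns; none of these names come from an actual API.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    case_id: str
    party_names: str
    paragraph_text: str

def verify_citation(ai_cite, lookup):
    """Pass only if the official record matches every field the AI promised."""
    official = lookup(ai_cite.case_id)  # query the authoritative database
    if official is None:
        return False  # case id doesn't exist at all: hallucination
    return (official.party_names == ai_cite.party_names
            and ai_cite.paragraph_text in official.paragraph_text)

def gate(citations, lookup, log):
    """Split AI-supplied citations into verified vs. flagged-for-human-review."""
    verified, flagged = [], []
    for c in citations:
        (verified if verify_citation(c, lookup) else flagged).append(c)
    for c in flagged:
        log(f"citation check failed, re-verify by hand: {c.case_id}")
    return verified, flagged
```

The point is the structure, not the code: the flagged queue is the only path from "AI promised" to "filed", and the log doubles as the audit trail.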
reply
bsimpson 6 hours ago
I recently saw an interview with Anders Hejlsberg of TypeScript (and a long pedigree before that). The interviewer asked him about the role of AI in his work. I believe the context was porting TypeScript's tooling to Go.

His trick is to use AI to build the tools that do the work, not to ask it to do the work itself. If you say "hey Mr. AI, please port this code to Go," it'll give you back a big bag of code that you have no insight into. If it hallucinated something, you wouldn't know without auditing the whole massive codebase.

If instead you let AI build a small tool to aid the work, your audit surface is much smaller - you just need to make sure the helper tool is correct. It can then operate deterministically over the much larger codebase.

reply
gracelynewhouse 10 hours ago
The individual firing is a distraction from the structural issue. Newsrooms have been cutting editorial staff for a decade, which means the verification layers that would have caught this — fact-checkers, copy editors, senior editors doing source verification — largely don't exist anymore. Then they adopt AI tools that increase throughput without increasing oversight capacity, and act surprised when fabrication slips through.

This is a classic systems failure: you remove the safety mechanisms, add a new source of risk, and punish the individual operator. It's the same pattern you see in industrial accidents. The Swiss cheese model applies — every editorial layer that got cut was a slice of cheese being removed.

The more interesting policy question is whether publications should be required to disclose AI tool usage in their editorial process, similar to how financial publications disclose conflicts of interest. The FTC has signaled interest in AI-generated content transparency but hasn't issued concrete guidance for journalism yet.

reply
breput 19 hours ago
As much as I respect the site and gladly financially support it, this is ultimately a failure on Ars Technica and its editors. If there are any.

If this were just some random blogger, then yes the blame is totally theirs. But this was published under the Ars Technica masthead and there should have been someone or something double checking the veracity of the contents.

That said, there are a number of Ars Technica contributors that are among the best in their fields: Eric Berger, Dan Goodin, Beth Mole, Stephen Clark, and Andrew Cunningham amongst many, so one f'up shouldn't really impugn the entire organization.

reply
DamnableNook 16 hours ago
Eric Berger has a strong pro-Musk bias (having literally written a fawning book about him). To him, Musk can do no wrong, it seems.

I also dislike Dan Goodin’s reporting. He tries to talk the talk, but nearly every article he writes has some tell that he doesn’t really understand the thing he’s reporting on. Which is fine if he was relying on third-party expertise and quoting that, but he tries to make it sound like he has the expertise and it just comes up short. I feel like he’s a good example of that old fallacy that you think the news is correct about everything, until they report about something you know.

For me, Ashley Belanger is the best reporter they have. She might not have the subject matter expertise some of the others there claim, but she has the best journalism of anybody there. Lots of direct sources, well written, and the right level of depth. I honestly feel like I’m reading a different (and better) publication when I read her articles. More than once, I’ve had to scroll up to see if the article I’m reading was one of Ars’ licensed outside pieces, as the quality bar was higher than I’m used to, only to find her name.

Beth Mole is a close second. She has subject matter expertise, good journalism, and loves to slip in some humor or justified “get a load of this idiot” comments.

reply
infotainment 16 hours ago
I'd say if one has any interest in writing objectively about space technology, one will likely end up being perceived as having a "pro-Musk bias".

Elon himself is indeed questionable, but you really can't argue with his space-related achievements. Even other eccentric billionaires like Bezos haven't come close.

reply
sbarre 11 hours ago
Perhaps we should be attributing the "space-related achievements" to Musk's companies and employees, and not to him directly, or at least not solely?
reply
asadotzler 6 hours ago
Your comment makes it pretty clear you've not read him much. He regularly credits SpaceX with specific accomplishments and rarely brings Musk into the topic unless it's about setting direction, etc.
reply
latexr 13 hours ago
> that old fallacy that you think the news is correct about everything, until they report about something you know.

Gell-Mann Amnesia.

https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...

The description is slightly backwards. The problem is you continue to trust the news after seeing how wrong they are about something on which you’re an expert.

reply
senko 15 hours ago
Berger wrote 2 books about SpaceX (not Musk), and he definitely does not have a pro-Musk bias.

He's careful not to opine on Musk's other dealings, which is fair. As someone who wants to know more about SpaceX, I don't want to read yet more about Tesla, or Twitter, or Trump, or Epstein.

Personally, one of the authors I most like to read on ArsTechnica (though he writes rarely nowadays).

CarTechnica though... yuck. Also, Ouellette reliably picks movies and TV shows I will absolutely hate, so I guess good S/N there?

Mole's coverage is great if you're into Cronenberg-but-in-real-life.

reply
cubefox 15 hours ago
I think it's pretty widely agreed in the space flight community that Eric Berger is currently the best space flight reporter in the world. He has lots of insider sources. Several times he correctly predicted things years in advance. Most recently the Artemis III change to a LEO mission.
reply
latexr 13 hours ago
> He's careful not to opine on Musk's other dealings, which is fair. As someone who wants to know more about SpaceX, I don't want to read yet more about Tesla, or Twitter, or Trump, or Epstein.

But all of those matter, and are not isolated. When the leader of an organisation is distracted by the other organisations they control, it matters. It also matters when they are repeatedly wrong about their predictions, even if on another organisation, because it helps you calibrate expectations.

https://en.wikipedia.org/wiki/List_of_predictions_for_autono...

reply
senko 12 hours ago
> But all of those matter, and are not isolated.

And have been done to death elsewhere.

Meanwhile, Berger produces balanced, informed, interesting, and informative coverage of space tech (in general, not just SpaceX).

reply
latexr 12 hours ago
Fair. I misunderstood your original post as “not wanting to read/care at all about”, but you did say “yet more”. Thank you for clarifying.
reply
icegreentea2 8 hours ago
Berger is clearly guarded and measured when he talks about Musk and SpaceX. Given the configuration of the space industry and the reality that Berger clearly needs access to make his living, I think Berger has provided generally even handed coverage of SpaceX, Musk and Musk's antics.

For example, his article on the SpaceX/xAI merger ends with a section called "Has SpaceX lost its way?", and framing that clearly indicates "this is what Musk says". https://arstechnica.com/ai/2026/02/spacex-acquires-xai-plans...

In his article on SpaceX's "pivot from Mars to Moon", he describes Musk as "In the last 25 years, Musk has gone from an obscure, modestly wealthy person to the richest human being ever, from a political moderate to chief supporter of Donald Trump; from a respected entrepreneur to, well, to a lot of things to a lot of people: world’s greatest industrialist/supervillain/savant/grifter-fraudster.". https://arstechnica.com/space/2026/02/has-elon-musk-given-up...

You can read a AMA he did on the SpaceX subreddit last year for more of his thoughts: https://www.reddit.com/r/spacex/comments/1fnq02q/eric_berger...

I think Berger got a lot of flack for not really commenting on Musk and DOGE during his articles last year, and I think it's fair to criticize him for that choice, but I don't think it's really a "Musk can do no wrong" position.

In other words, I think you really need to read between the lines for Berger re Musk.

reply
akdor1154 15 hours ago
Yeah Goodin's stuff is often slop.. Probably human slop but slop nonetheless.
reply
AceJohnny2 18 hours ago
> That said, there are a number of Ars Technica contributors that are among the best in their fields

I miss Maggie Koerth & Jon Stokes

reply
Y-bar 16 hours ago
Yes, dearly missed. As is John Siracusa’s Mac OS reviews.
reply
clint 7 hours ago
Most of these people left because the Ars readership is insanely toxic as evidenced in this thread.
reply
Y-bar 7 hours ago
Could be. I am currently scrolling the comments on the new Apple displays and the gatekeeping "Only rEaL ProoOOOs should have any achual use for frame rates over 60. Ur just lowly a gamer. Shoo!" attitude is through the roof there.
reply
PunchyHamster 14 hours ago
I think you have far too much faith in the process for the big media sites
reply
spppedury 19 hours ago
[dead]
reply
geerlingguy 20 hours ago
Context from earlier discussion of the article being pulled: https://news.ycombinator.com/item?id=47009949
reply
dang 20 hours ago
Thanks! and indeed - here's the sequence (in the usual reverse order). If there are missing threads we can add them...

An AI Agent Published a Hit Piece on Me – The Operator Came Forward - https://news.ycombinator.com/item?id=47083145 - Feb 2026 (501 comments)

OpenClaw is dangerous - https://news.ycombinator.com/item?id=47064470 - Feb 2026 (93 comments)

An AI Agent Published a Hit Piece on Me – Forensics and More Fallout - https://news.ycombinator.com/item?id=47051956 - Feb 2026 (82 comments)

Editor's Note: Retraction of article containing fabricated quotations - https://news.ycombinator.com/item?id=47026071 - Feb 2026 (205 comments)

An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (624 comments)

AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (30 comments)

The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)

An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (951 comments)

AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (750 comments)

reply
themeiguoren 8 hours ago
There’s also “ An AI Agent Published a Hit Piece on Me – The Operator Came Forward”

https://news.ycombinator.com/item?id=47083145

reply
dang 3 hours ago
Great catch. Added. Thanks!
reply
swyx 18 hours ago
also how the heck do you pull all these related things, do you have a semantic/agentic search bot by now or is this all just from your head?
reply
dang 18 hours ago
It should be a semantic search bot and maybe will be in the future, but for now I rely on the method described at https://news.ycombinator.com/item?id=45546715 and the links back from there.
reply
swyx 58 minutes ago
i vaguely recall about 2 years ago some RAG startup or other did do one for you. i mean this is nothing for a startup and they could use the endorsement ha
reply
suzzer99 18 hours ago
> It should be a semantic search bot and maybe will be in the future

No. We only get the dystopian AI features, not the useful ones.

reply
vpribish 19 hours ago
dang, we appreciate all you do. thanks
reply
dang 2 hours ago
that's nice of you to say! I must add that tomhow is holding so much of this effort that I can't imagine what it would be like solo
reply
DiskoHexyl 14 hours ago
It's an open secret that even the larger news outlets mandate LLM use. They buy subscriptions and have guidelines on how to mask the output (so that it reads less AI'ed), how to fact-check the links and the quotes, etc. The authors who aren't willing to jump on this particular train are quickly let go for "performance" reasons.

The expectation is to produce more with much less (staff), the pipeline is heavily optimized for clicks, and every single headline is A/B tested. Ars isn't alone in churning out poorly reviewed clickbait (and then not owning their mistakes).

reply
mikkupikku 14 hours ago
Is there any evidence that Ars Technica management induced this journalist to use AI, or are you just claiming it's an "open secret" without knowing anything about this specific incident? Without any details it sounds like the latter, maybe motivated by a reflex to blame management whenever workers blunder. Unless there's evidence that actually points at Ars Technica management, dismissing the journalist's professional responsibilities with vague rumors doesn't seem appropriate.
reply
DiskoHexyl 14 hours ago
I didn't state that Ars Technica specifically mandate LLM use for their authors. What I did state about them is that their editorial standards are lacking, and they tend to produce a lot of clickbait.

IMO the industry is in crisis

reply
oneeyedpigeon 11 hours ago
> It's an open secret that even the larger news outlets mandate LLM use.

Given the context, if you didn't intend for this to imply that Ars mandates LLM use, you should probably rewrite it.

reply
amatecha 3 hours ago
I definitely interpreted his original post as suggesting that Ars also mandates LLM use, even if the words didn't say that explicitly. "even the larger news outlets" implies "in addition to the one we're already talking about"
reply
Dumblydorr 12 hours ago
How is it an open secret? Is there evidence for this? I still see typos in some articles so it feels like humans are in the loop, maybe that’s their AI masking?
reply
donohoe 11 hours ago
I would love to hear more about this “open secret” - especially the guidance on how to “mask the output” etc. because as someone who works in news/media it’s news to me.
reply
rahimnathwani 20 hours ago
The headline says Ars fired the reporter, but AFAICT the article doesn't include any facts that indicate this. All we know is that he no longer works there, and that Ars refused to provide any additional information.
reply
gwd 16 hours ago
> the article doesn't include any facts that indicate this.

It does include two facts:

1. That the reporter's bio on the webpage changed "...is a reporter at Ars" to "...was a reporter at Ars". On the one hand, that's pretty thin sauce. On the other hand, that's not exactly the sort of change that gets made randomly.

2. They reached out to the various people involved, and although nobody has confirmed it, it's also the case that nobody has denied it.

reply
bell-cot 15 hours ago
IANAL, but those facts could support "fired", or "resigned", or "short-term contract not renewed", or probably other stuff.
reply
dust-jacket 9 hours ago
I mean, being fired and resigning once it became clear you'd be fired are really the same thing.

We're not actually entitled to know the exact details of someone's job ending. They worked there. Now they don't. That much is the bit we're entitled to.

reply
asadotzler 6 hours ago
For public misconduct like this, we should get to know if he was fired (or asked to resign) as opposed to his making the independent decision to find work elsewhere or retire or whatever. We should get to know if he left because the company wanted him gone or because he wanted to be gone.
reply
Kwpolska 18 hours ago
Neither side has issued a statement about what happened, but Benj’s Bluesky post does not read like a post of someone who would have resigned due to this.
reply
flerchin 10 hours ago
The commentariat senses blood in the water and will criticize Ars Technica no matter how they respond here. It seems fine. The author really paid the price. I trust Ars to be extra vigilant to this going forward.
reply
raincole 20 hours ago
I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it went from 'practically useless' to 'the actual google search' in less than two years.

I really don't know where the internet is heading and how any content site can survive.

reply
SchemaLoad 20 hours ago
It's because the AI overview is most of the time directly summarising the search results rather than synthesizing an answer from internal model knowledge, which is why it can hyperlink the sources for the facts now. Even a very dumb lightweight model can extract relevant text from articles.

I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.

reply
raincole 19 hours ago
> I just can't see how this is sustainable since they are stealing from the sources who are now getting defunded.

Yeah, that's why I said I don't know where the internet is heading.

reply
jrmg 18 hours ago
You can see the fall in real time - half the sources are also dubious AI slop now and that number’s only growing :-/
reply
Gigachad 18 hours ago
At work the conversation is that simultaneously everyone is using LLMs now, yet we receive virtually no traffic through them. The LLMs scrape our data, provide an answer to the user, and we see nothing from it.
reply
jrmg 18 hours ago
I have the same worry about LLMs in general - I know that ‘model collapse’ seems to be an unfashionable idea, but when the internet’s just full of garbage (soon?…), what are we going to train these things on?
reply
tehjoker 10 hours ago
They moved away from raw text and are now working with verifiable synthetic data (e.g. math, games, code) to improve general reasoning.
reply
Barbing 17 hours ago
How often are they scraping?

Also generally wondering… Do labs view scraping as legally safer than trying to cache the Internet? I figure it’s easy to mark certain content as all but evergreen (can do a quick secondary check for possible new news).

Maybe caching everything is too expensive?

reply
palmotea 18 hours ago
> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links. It's scary that it went from 'practically useless' to 'the actual google search' in less than two years.

It says things I know to be false fairly regularly. I don't keep a log or anything, but it's left an impression that it's far from reliable.

reply
suzzer99 17 hours ago
Today I searched something and almost pasted the output into an internet forum discussion I was having. But I decided to check the wikipedia source just to make sure. The AI summary was not quoted directly from wikipedia, and it got some major aspects wrong in its summary. Lesson learned.
reply
jcranmer 10 hours ago
For my anecdote, I don't frequently deign to look at the overview at all... but every time I have, it has been completely and totally wrong. There's probably some selection bias going on in when I choose to try looking at it again, but still notable that the frequency is that high.
reply
paxys 12 hours ago
It is simply summarizing the top few search results. If they are false then the summary will be false.
reply
sjsdaiuasgdia 9 hours ago
That's not always the cause of the wrong info. I've had a few situations where I was asking pretty specific questions that absolutely have publicly documented authoritative answers, and the first several search results were either the authoritative sources themselves or things that reference the authoritative sources.

The AI answer got the actual questions completely wrong. The questions involved vehicle registration laws in a specific state. The questions included the name of the state. The AI's answer seemed to be giving information based on other states.

All of the first page of search results were specific to the state I asked for, so if it was just summarizing those you wouldn't think it would give answers that would only exist on entirely different pages.

reply
pseudalopex 19 hours ago
> I have to admit, nowadays Google AI Overview's accuracy is so good that I often don't check the links.

You would know how?

The links contradict or do not support the overviews often in my experience.

reply
mncharity 5 hours ago
> nowadays Google AI Overview's accuracy is so good

I felt similarly yesterday. This morning AIMode fabricated for me a diverse science education publishing and research effort around using generative AI to teach rough-quantitative reasoning. My face-nibbling canary was a linked cite to a book "Orders of Magnitude"... a sci-fi horror novella about space marines. Would be nice if the outlined work actually existed. There were some nice ideas. I look forward to it. The education stuff, not the space marines.

reply
deathanatos 19 hours ago
You should be checking the links more often, IMO. I've seen it respond a number of times with content that is not supported by the citations.

While trying to find an example by going back through my history though, the search "linux shebang argument splitting" comes back from the AI with:

> On Linux and most Unix-like systems, the shebang line (e.g., #!/bin/bash ...) does not perform argument splitting by default. The entire string after the interpreter path is passed as a single argument to the interpreter.

(that's correct) …followed by:

> To pass multiple arguments portably on modern systems, the env command with the -S (split string) option is the standard solution.

(`env -S` isn't portable. IDK if a subset of it is portable, or not. I tend to avoid it, as it is just too complex, but let's call "is portable" an opinion.)

(edited out a bit about the splitting on Linux; I think I had a different output earlier saying it would split the args into "-S" and "the rest", but this one was fine.)

> Note: The -S option is a modern extension and may not be available

But this contradicts the "standard solution" claim above… so which is it?
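For what it's worth, the underlying behavior is easy to check yourself without trusting the overview. A small sketch below simulates what the kernel does (it hands the interpreter everything after the path as one single string) and shows `env -S` splitting it back into words. Assumption: a GNU coreutils or FreeBSD `env`, which is exactly the portability caveat in question.

```python
import subprocess

# For the shebang line  #!/usr/bin/env -S printf %s\n one two
# the kernel starts env with ONE argument: "-S printf %s\n one two".
# GNU/FreeBSD env's -S splits that string into words (turning \n into a
# newline), so printf receives "one" and "two" as separate arguments.
# A plain POSIX env has no -S, hence the portability concern.
out = subprocess.run(
    ["env", "-S printf %s\\n one two"],  # one argv element, as the kernel would pass it
    capture_output=True, text=True, check=True,
).stdout
print(out)
```

Without `-S`, env would try to run a program literally named `printf %s\n one two`, which is the classic single-argument shebang failure mode.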

reply
g947o 13 hours ago
We must be using a different Google.com.

Sometimes I use a completely meaningless combination of keywords by mistake, and AI Overview will happily make up a story telling me what I am looking for.

reply
lucaspfeifer 19 hours ago
It is scary but also exciting. As long as there are humans making informed decisions, there will be demand for quality sources of information. But to keep up with AI, content sites will need to raise their standards. Less intrusive ads, less superficial stuff, more in-depth articles with complex yet easily navigable structure, with layers of citations, diagrams, data, and impeccable accuracy. News articles with the technical depth of today's dissertations.
reply
techpression 19 hours ago
For AI to steal and summarize without attribution. The sites you talk about exist today but are dying because of AI.
reply
dirkc 18 hours ago
Well, I hope you take this story as a caution that you shouldn't do that in any way that can seriously compromise your career/health/finances.
reply
krige 19 hours ago
I have seen it be utterly wrong so many times recently I'm considering permanently hiding it. For instance, googling for "Amiga twin stick games" it listed a number of old, top-down, very much single axis games like Alien Breed as examples.
reply
Kwpolska 18 hours ago
Try searching for something niche. You'll get a confidently wrong and often condescending answer.
reply
abustamam 17 hours ago
The ai summary has been wrong so many times for me. Not that I ever trusted it anyway.

I think content sites will need to rely on supporters (à la Patreon or Substack). It's shitty, but it's what the internet has come to.

reply
Hnrobert42 11 hours ago
Of what relevance is this to this discussion?
reply
maccard 17 hours ago
Really? I’ve noticed that the AI overview is full of glaring issues repeatedly. It’s akin to trusting the first Reddit post that is found by Google.
reply
archagon 19 hours ago
Uh, really? In my experience, at least a quarter of the info it gives me is usually manufactured or incorrect in some critical way.

In fact, if you switch to "Pro" mode, it frequently says the complete opposite of what it claimed in "Fast" mode while still being ~10-20% wrong. (Not to say it's not useful — there's no better way to aggregate and synthesize obscure information — but it should definitely not be relied on as a source of anything other than links for detailed followup.)

reply
croes 19 hours ago
It will cycle.

Without the content site the AI overview will become useless

reply
ajkjk 18 hours ago
I know people love to hate on the AI overviews, and I'm a person who generally hates both Google and AI. But I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I have to click into sites for the information, it's some SEO spam-ridden garbage site. So I am very glad to not have to interact with those anymore.

Of course Google gets little credit for this since it was their own malfeasance that led to all the SEO spam anyway (and the horrible expertsexchange-quality tech information, and stupid recipe sites that put life stories first)... but at least now there is backpressure against some of the spammy crap.

I am also convinced that the people here reporting that the overviews are always wrong are... basically lying? Or more likely applying some serious negative bias to the pattern they're reporting. The overviews are wrong sometimes, yes, but surely it is like 10% of the time, not always. Probably they're biased because they're generally mad at Google, or at AI being shoved in their face in general, and I get that... but you don't make the case against Google/AI stronger by misrepresenting it; it is a stronger argument if it's accurate and resonates with everyone's experiences.

reply
autoexec 18 hours ago
> I see them as basically good and ideal. After all, most of the time I am googling something trivial, like a simple fact. And for the last decade, when I have to click into sites for the information, it's some SEO spam-ridden garbage site.

What good is it if the overviews lie some percentage of the time (your own guess is 10%) and you have to search to verify that they aren't making shit up anyway? Also, those SEO spam-ridden garbage sites Google feeds you whenever you bother to look past the undependable AI summaries are mostly written by AI these days and prone to the same lying, which only makes fact-checking Google's auto-bullshitter even harder.

reply
ajkjk 8 hours ago
a lot of times the thing I'm searching for is something I kinda know and just want to see verified, or which, as soon as I see it, I'll know if it's right or not. So... some good?
reply
raincole 18 hours ago
> I am also convinced that the people here reporting that the overviews are always wrong are... basically lying?

https://en.wikipedia.org/wiki/Availability_heuristic

No one remembers when AI Overview gets the answer right (it's expected to do so after all) but everyone has their favorite examples of "oh stupid AI."

reply
Terr_ 16 hours ago
That's incomplete, because another "nobody remembers" is when the hallucination differs from reality, but the reader doesn't promptly detect the problem and remember where they got it from.

Think about the urban legends in the style of "the average person eats X spiders per year." It's extremely unlikely that Rumor Patient Zero is in a position to realize it's wrong, or that they will inform the next person that it came from an LLM summary.

reply
aidenn0 20 hours ago
I don't know that this is what happened here, but any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job, and from the outside, it looks like journalism has a push to do more with less.
reply
Aurornis 8 hours ago
> any time there is a push to do more with less, you end up rewarding people who take shortcuts over those who do a proper job

You don’t need an institutional push to make people take shortcuts.

Many people will take shortcuts to work less if they think they won’t get caught. You don’t even need external pressure.

reply
arcadianalpaca 17 hours ago
That's basically the problem. If the shortcut produces something passable 95% of the time and nobody is checking, it just looks like you're faster. Journalism just has a more public failure mode than most fields.
reply
jmward01 18 hours ago
You will never get the internet to agree on how incident x should have been handled. The world right now is racing to figure out AI and its place; just when you think you understand, the ground shifts. It is clear that in the future this exact use of AI will be expected and will work, on average, way better than a person. I know a lot of people probably have an emotional 'no it won't!' reaction and disagree with me here, but there have been so many 'no it won't! never!' moments in the last two years that I can't imagine this won't also be one.

With that in mind, I don't think it is reasonable to fire this journalist. They used a tool too soon, but it is really hard to figure out what is too soon right now. This should have been a moment of reflection for their newsroom (and probably some private conversations), but it turned into a firing, which I think is too much. Did the newsroom gain from that? Will it prevent them from doing it again? Did it fix the original mistake? I don't think the answer is 'yes' to any of these questions. A good retraction, an apology, a statement on how they are changing and will review new technology entering the newsroom in the future: those help.
reply
Gigachad 18 hours ago
The problem is accountability. If your name is on the article, this is your work. If you publish an article with fabricated quotes, it’s your fault regardless of if an AI tool was used or not since you hit the button at the end to sign off on it.
reply
jmward01 18 hours ago
I care about the future. I care that actions taken help improve the future. If someone makes a mistake, the question shouldn't ever be 'how do we punish them' but instead 'what actions can best improve the future'. Sometimes that does mean firing a person: if the effort to fix their behavior is more than the expected gain, then that is an option to consider (though not the only thing to consider).

In this case, though, I think there is likely more to it. What were their policies? Have they been pushing their journalists to accept more AI tools? Even without pushing AI tools, have they been implying that speed is more important than accuracy? Was this truly JUST this journalist's mistake, or are there culture elements missing in the newsroom?

I would expect the head of that newsroom to have a detailed rationale for why firing this person was the right choice: how it helps them move forward and improve, and why it isn't just an attempt to deflect blame from internal culture problems. As is, this looks like a case of 'the internet got mad. Do something to make them happy'.
reply
yesterdayjones 6 hours ago
These comments are full of people who are not and have never been journalists talking about editorial processes they don't understand. I was a journalist before AI (and have since pivoted to software development), so I feel the need to correct a mistaken assumption.

1. An editor's role is not to make sure every quote is accurate to what the source actually said or wrote. Editors are not babysitters. It's a higher-level position that is supposed to ensure that the article makes sense; that the sources are sufficient in number and are credible; that the story covers the assigned topic; and so on. Reporters are given a large degree of freedom and are expected to have sufficient education/training and ethical grounding* to do their jobs independently. What seems to have happened here is a lapse in judgement while the reporter was sick and on a deadline. Pressure's a bitch.

2. This is certainly more of an opinion, but AI tools have zero place in any profession that relies on information integrity (law, journalism). The reporter shouldn't have used it, and the editors and editorial processes are not at fault, especially when Ars already forbids AI-generated content.

* I'm aware people think reporters are liars with agendas. You don't need to say it.

reply
JumpCrisscross 21 hours ago
“Edwards also stressed that his colleague Kyle Orland, the site’s senior gaming editor who co-bylined the retracted story, had ‘no role in this error.’”

Has Orland issued a real apology? He bylined a piece containing fraudulent quotes.

reply
schiffern 21 hours ago
"I always have and always will abide by that rule to the best of my knowledge at the time a story is published."

Nothing suspicious about heavy use of qualifiers in a non-apology blanket denial. Where's the Polymarket for whether this guy has a job next month?

https://www.404media.co/ars-technica-pulls-article-with-ai-f...

reply
JumpCrisscross 20 hours ago
> whether this guy has a job next month?

That’s a problem. If he really hasn’t apologized, neither he nor Ars have recognized there is a problem, which means it will happen again.

reply
slg 20 hours ago
Is there something to the story that I'm missing? Why does Orland need to apologize? Edwards fabricated the quotes via AI and seemingly presented them to Orland as authentic. Orland had no reason to suspect the quotes weren't real until after publishing.

When journalists are working on a shared byline, they don't each do the same research in order to fact-check each other. There is inherently a level of trust required for collaborating like this and Edwards violated that trust.

You can say this is a failure by the editorial process for not including fact checking, but that is an organizational issue with Ars, it's not the fault of Orland for failing to duplicate the work that he believed his coauthor did.

reply
Marsymars 19 hours ago
Yeah, consider the same thing in other domains. Say you're doing code review, and the PR author is a coworker you've worked with for years, and they include a comment with a link to some canonical documentation along with a verbatim quote from that doc explaining usage of something in the PR. If the quote and the usage both make sense in context, I'm not going to be habitually clicking through to the docs to verify that the quote isn't actually fabricated.
reply
JumpCrisscross 17 hours ago
> Why does Orland need to apologize? Edwards fabricated the quotes

He's on the byline and he's an editor.

> they don't each do the same research in order to fact-check each other. There is inherently a level of trust

If we're going to excuse this, what does the byline mean? He trusted the wrong person. It would be like if a source lied to him. Not the end of the world. But absolutely credibility destroying if instead of an apology you get a word salad.

> You can say this is a failure by the editorial process

Orland is also an editor. (Senior gaming editor [1].)

[1] https://arstechnica.com/author/kyle-orland/

reply
slg 7 hours ago
Having a byline on a piece is not an indication he edited the piece; in fact, it's an indication he didn't edit it. The byline simply indicates that he was one of two people responsible for writing the piece. He obviously didn't write every line, or there wouldn't be a second byline.

There is also a huge difference between trusting a coworker and falling for a lie of a source. Journalists deal with sources with a certain level of skepticism that just isn't productive or conducive to being a good coworker. Have you ever dealt with a coworker who didn't trust people to do their jobs? It's incredibly offputting.

I'll also point out that I said blame the "editorial process", which isn't the same as blaming an individual editor. This type of basic fact checking is either funded by the business or it isn't. This is far more likely an absence of fact-checking altogether than a failure of an individual, and that funding decision is very unlikely to be made by the "senior gaming editor" (and it should be noted this wasn't even a gaming story).

There seems to be a disconnect between the way journalism generally works and your expectations for how it works. I believe Orland got duped by behaving the way most journalists would in a system that is less able to catch issues like this due to general industry cutbacks.

reply
fp64 19 hours ago
Sad state of things. He did it because he was sick? That's close to claiming his dog ate the original quotes, so he had to make some up.

Well, Ars Technica has been on my ignore list for quite some time, and this further solidifies its place there.

reply
smallerize 10 hours ago
Isn't Ars agreeing with you here though? It's not excusable and they fired him even after the apology.
reply
SideburnsOfDoom 17 hours ago
I think there's a potentially different story here. He felt he had no option but to work, even though he was so sick that he failed at the job. What's up with that? How insecure and pressured is his employment?

If it's not true, then the error is on him. But it seems plausible to me as an outside observer of US employment and healthcare customs, and of the precarity of journalism nowadays. It is a sad state of things, in that it could be more a systemic than an individual failure.

reply
fp64 15 hours ago
Circumstances can help explain unethical behavior, but never excuse it.
reply
SideburnsOfDoom 15 hours ago
Systems can make such failures inevitable. The language of "blame" vs. "excuse" is not the most relevant.
reply
fp64 14 hours ago
Are you saying unethical behavior is not a choice but forced by the system? That it would be unreasonable to expect people to behave ethically in situations where the system is set up in a way that does not reward ethical behavior? That lying and cheating can always be excused because, if people didn't lie and cheat, they would endanger their societal status?
reply
SideburnsOfDoom 14 hours ago
No. That's a wild gallop off in a pointless direction using the same irrelevant language.
reply
lich_king 20 hours ago
I clicked through the author's earlier stories when this first made waves. I obviously had no proof, but I was pretty certain he'd been using LLMs to generate stories for a good while.

When Ars released a statement saying this was an isolated incident, my reaction was "they probably didn't look too hard". I suspect they did, in the end?

reply
Marsymars 19 hours ago
In defense of that, his writing style was basically the same long before LLMs.
reply
nsxwolf 20 hours ago
Sad if true. I used to really enjoy reading his freelance articles in various publications pre-AI.
reply
arizen 13 hours ago
"Don't believe everything you read on the Internet"

-Isaac Newton

reply
nijave 12 hours ago
The irony is Ars readers tend to be fairly anti LLM in the comments of every article
reply
renegade-otter 12 hours ago
Man, the IRONY hurts. I love Ars for what they do and their comment section.

The readers there are borderline militant about AI's more problematic uses. This could have gone only one way.

reply
plutokras 12 hours ago
The comment section under any AI-related article is unbelievably negative, and the headlines feed the fire too. Ashley Belanger's articles are the most notorious examples, like "Bombshell report exposes how Meta relied on scam ad profits to fund AI". The actual content has nothing to do with AI; Meta simply spends on AI infrastructure from profits made elsewhere...

I still read and subscribe to Ars, as they are the least bad source for day-to-day tech news, but the quality is dropping.

reply
sl0pmaestro 20 hours ago
Happy to see some accountability here, although it's unclear why the co-author who stamped their name on that article was retained. Maybe they just stamped their name to meet an article quota. In any case, this follow-up action makes me take Ars Technica's standards a bit more seriously.
reply
maplethorpe 13 hours ago
> Edwards said that he was sick at the time, and “while working from bed with a fever and very little sleep,” he “unintentionally made a serious journalistic error” as he attempted to use an “experimental Claude Code-based AI tool”

I'm skeptical. I hate to be the one to say it, but I don't think this would have happened if he was using Claude 4.6 Opus.

reply
WesolyKubeczek 13 hours ago
It's another way of saying "dog ate my homework".
reply
thefringthing 7 hours ago
I bought a handmade arcade-style SNES controller from this guy one time. He should go back to making those, I guess.
reply
klustregrif 14 hours ago
This reads like “I was sick and my dog accidentally used AI to write my homework”

If the content is human written and you check your sources there is no way for AI to “accidentally” seep in. Sure you can use an AI tool to find links to places you should check and you can then go and verify sources. That’s obviously not what happened.

reply
bragr 20 hours ago
The headline is a bit sensational, considering all we know from the reporting is that he isn't working there anymore. Likely fired, sure, but not known for a fact.
reply
0xbadcafebee 18 hours ago
I guess Blameless Postmortems haven't arrived in journalism yet.

Pretty weird that journalism as a business still revolves around "we hired a guy to write a thing, and he's perfect. oh wait, he's not perfect? it was all his fault. we've hired a new perfect guy, so everything's good now." My dudes... there are many ways you can vet information before publishing it. I get that the business is all about "being first", but that also seems to imply "being the first to be wrong".

I feel bad for the reporters. People seem to be piling onto them like they're supposed to be superhuman, but actually they're normal people under intense pressure. People fail, it's human. But when an organization fails, it's a failure of many people, not one.

reply
kuschku 16 hours ago
> I guess Blameless Postmortems haven't arrived in journalism yet.

Not anymore. Back in the days of print newspapers, a dozen people read an article before it was printed: editorial staff, fact checkers, legal review, layout, and the printers. If something slipped through, which was much rarer at the time, they'd also print a retraction.

Most of that stopped when newspapers and the blogosphere basically merged into one ad-funded business.

reply
sumeno 12 hours ago
This is more akin to "looking into your ex's private data at work" than "made a mistake that caused a production outage"
reply
orwin 16 hours ago
They have. Some print papers even have a dedicated space in the early pages (2-3) for corrections and retractions.
reply
rsynnott 16 hours ago
This isn’t a case of “made a mistake”/“did something incorrectly”, though. This is “knowingly broke the rules”. They had a policy against using our benevolent robot overlords to generate slop.

And fabricating quotes is pretty high up there in the list of things that journos should never, ever do.

reply
vadansky 20 hours ago
Good time to watch Shattered Glass.

Imagine what he could have gotten up to with LLMs.

reply
thomassmith65 14 hours ago
It's an excellent movie, regardless.

  "When this thing blows there isn't going to be a magazine anymore!"
https://youtube.com/watch?v=oj79mp2WEx0
reply
gigatexal 18 hours ago
This is good. They had to distance themselves from a journalist who would do such a thing. But this is more or less on the editor I think. So let’s see if they learn from this.
reply
cubefox 15 hours ago
I liked his articles about AI. They were generally quite good; he has an understanding of AI that most journalists don't. But using an LLM to write is deception.
reply
ModernMech 18 hours ago
I'm very bad with names and quotes, so sometimes I'll ask ChatGPT something like "what's that famous quote Brian Kernighan said about programming language names" and it will just make shit up, when really I was thinking about Donald Knuth. But according to ChatGPT, Kernighan famously said:

  “Everyone knows that Perl is designed to make easy things easy, and hard things possible, but nobody knows why it’s called Perl.”
Which of course returns 0 results on Google, as is customary for famous quotes.
reply
kolinko 16 hours ago
Which version? I just tested mine and it replied with an actual quote
reply
ModernMech 13 hours ago
The model hosted at chat.com gave me that. It told me what he said about debugging, but also included that made up quote as well which it attributed to him. But what it should have said was "Kernighan has no famous quotes about naming languages, you're probably thinking of the quote by Knuth about the importance of naming in language design."
reply
pluc 12 hours ago
> ...he attempted to use an “experimental Claude Code-based AI tool” to help him “extract relevant verbatim source material.” He said the tool wasn’t being used to generate the article, but was instead designed to “help list structured references” to put in an outline. When the tool failed to work, said Edwards, he decided to try and use ChatGPT to help him understand why."

This, right here. Coming from an "AI Expert", this is what we can expect the future to be. One AI isn't working? Let's ask the other AI why. I have no words for that reflex. It's beyond idiotic. It takes everything that's human about your reasoning and tosses it aside. What a dumb idea.

reply
zombot 12 hours ago
> It also comes at a moment in which many media bosses are pushing staff to find uses for AI

Which should be a red flag in and of itself. You don't need to "push" people to "find uses" for genuinely useful tools.

reply
ares623 17 hours ago
If a tool is not fit for purpose then it either gets fixed or gets discarded/replaced.

AI is not a tool and from the way things are going never will be. Humans are more tool-like in that sense. In this case the human was discarded, the AI remains.

reply
shadowgovt 19 hours ago
That was wise. It may have been an honest mistake, but it was a direct hit to his credibility that made not just him but the paper look sloppy, in an era when people are deeply concerned about journalistic pedigree.
reply
Imustaskforhelp 14 hours ago
I read the Bluesky thread linked in the article and the screenshots Benj Edwards posted there.

The comment I found most relevant is probably this (he wrote more, but I'm pasting what's relevant to my point):

> I have been sick with Covid all week and missed Mon and Tues due to this. On Friday, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh

... > I should have taken a sick day, because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words.

> Being sick and rushing to finish, I failed to verify the quotes in my outline against the original blog source before including them in my draft

The journalistic system has failed us so badly that in the news cycle we want things NOW. I think the Ars Technica post went viral on HN as well before the whole controversy, and none were the wiser until Sam commented about the false quotes.

The system prefers views, and to get views you have to do the work now. There is no room left for someone being sick, and I think this sort of thing extends to every job at times.

And instead of being a productive tool, AI can act as a noise generator. It writes enough noise that looks like signal and, tada, none are the wiser.

People think that pairing AI with a person will make their work 10x greater, but what actually happens is that the noise is raised 10x, and the work of finding signal in that noise increases 10x too. (I'm speaking about employment-related projects; in personal projects it may not matter whether there is 10x or 100x noise, as long as the tool does the thing you want.)

When AI systems are constrained, they can deny your API request at marginal loss. But when humans are constrained, an employer often can't accommodate them without taking real losses (whole days of leave), and I have heard that in some countries sick days are a joke. This could very well be cultural, since sick days are better implemented in Europe than in America (from what I hear).

I don't know about Benj, but some reporters are really paid peanuts. Remember the Pakistani newspaper that printed ChatGPT output verbatim, including text like "“If you want, I can also create an even snappier ‘front-page style’ version with punchy one-line stats and a bold, infographic-ready layout perfect for maximum reader impact. Do you want me to do that next?”" WITHIN the paper itself.

I believe humans should be treated with more dignity, so that they feel comfortable taking sick leave when they are sick... or we should just fix this culture of people chugging along while sick.

Until then, AI is bound to be used and will keep producing noise and spewing random stuff. Imagine you are a journalist, you are sick, and you feel like there's a magical tool that can do the job for you. You use it, and in those moments of sickness and IDGAF attitude, you push the article to main.

I personally don't believe this is going to be a single incident, at least not with this whole story playing out the way it did.

If any journalist is reading this: please take sick leave when you are sick. Readers appreciate your writing, and I hope you don't integrate AI tools so deeply into your workflow that the work starts being done by the AI. Even without AI, you are probably not working at your best mental capacity while sick, and readers are happy to wait if you add unique perspectives to the story, something I don't think is possible when you are ill. If an employer still pressures you, just share this message with them, haha, to show what the people want (and what brings them money long term).

I also hate how the culture has become one of finding the article that came out fastest after an event, because that promotes AI use more often than not. It feels like jackals appearing out of nowhere to grab whatever piece of a story they can, and that doesn't look great to me. (I know nothing about how such journalism works, so sorry if I'm wrong about anything; I usually am, but these are just my opinions on the whole thing.)

reply
skc 16 hours ago
Really disappointing. A lot of us have always considered Ars Technica to be the last of a dying breed of ultra serious, no-nonsense professionalism.

Obviously, we were rocked by the DrPizza scandal years ago...and now this.

Sobering.

reply
Barrin92 20 hours ago
People have said enough about the ethics of all of it, but what I found even sadder: the story made me curious enough to look at the actual piece he "investigated" with AI. It's this one (https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...). It's a bit more than 1,000 words, which takes the average American reader, never mind a senior journalist, about five minutes.

This whole story involved asking Claude to mine the text for quotes, which it refused to do because the text included harassment-related content, then asking ChatGPT to explain the refusal, and so on.

That entire ordeal probably generated more text from the chatbots than the few paragraphs of the blog post itself. That's why I think the "I'm sick" angle doesn't matter much. This is the same brainrot as people who post "grok what does this mean" under every twitter post. It's like a schoolchild who expends more energy cheating than it would take to just learn the material.

reply
HardwareLust 12 hours ago
"He added that, upon further review, the error appeared to be an “isolated incident.”

Yeah, no.

reply
WesolyKubeczek 13 hours ago
So this is another way how you can lose your job because of AI.
reply
the_af 12 hours ago
To me this is extremely relevant regardless of the specifics in this case. From TFA:

> It also comes at a moment in which many media bosses are pushing staff to find uses for AI — as are executives across most industries — even while clear guidelines around use of the technology that uphold editorial ethics remain elusive.

Anyone who's working with computers now knows this is true. We're being pushed relentlessly to use AI; in some cases (I've heard second hand), people are mandated to use AI, sometimes forbidden from crafting code manually, and are disciplined if they don't. Yet the guidelines are very unclear, as they must be, since if we're honest we're all treading new ground.

Being mandated to use AI at all costs, given very brittle and unclear guidance on how to use it, guidance that evolves weekly, while we're all fearful of losing our jobs: that makes for a recipe for disaster.

So yeah, this journalist should have called in sick and use better judgment when toying with AI tools, but still there's a wider problem and the responsibility for this craziness is also on the leadership of most companies and the investors pusing for this.

(None of this is an excuse for generating AI slop. I hate it and I don't need to be told any guidelines about not doing it. If you cannot be bothered to write the text, I cannot be bothered to read it.)

reply
Revanche1367 21 hours ago
So the original blogger got slandered by an LLM agent, then got slandered again by a human journalist who used an LLM agent to write the article about him getting slandered by an LLM agent? How ironic.

But, does that mean he got slandered twice by an LLM agent or once by an agent and once by a human? Or was he technically slandered 3 times? Twice by agents and a third time by the journalist? New questions for the new agentic society.

reply
sparky_z 20 hours ago
He was only slandered once, by the LLM agent. The Ars Technica article presented paraphrases falsely attributed as direct quotes, and was therefore factually incorrect reporting. But it was not defamatory by any reasonable standard. Slander isn't just a synonym for "lie".
reply
th0ma5 19 hours ago
[dead]
reply
Revanche1367 18 hours ago
[flagged]
reply
zarflax 20 hours ago
No, the journalist came in and slandered the LLM Twice and Jim Fell.
reply
gdulli 19 hours ago
"Who are you, and how did you lose your job?"

"I'm an AI reporter. And, I'm an AI reporter."

reply
amstan 20 hours ago
4 times, you forgot the owner of the bot that did the PR.
reply
Revanche1367 18 hours ago
Indeed, you’re right.
reply
lostmsu 8 hours ago
Wait, in what way was the original blogger slandered by an LLM agent?
reply
sparky_z 6 hours ago
It literally wrote a blog post [supposedly on its own initiative] trying to gin up outrage at an open source maintainer after he denied the LLM's pull request.

Here's the original write-up of the incident:

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...

reply
lostmsu 5 hours ago
And which part of that blog post is slander?
reply
add-sub-mul-div 21 hours ago
> senior AI reporter

A true "senior" AI reporter should be more skeptical of LLM output than anyone else.

reply
zmmmmm 21 hours ago
I think that's the nail in the coffin. Most others could say it was a giant whoopsie, but here it goes to the heart of their credibility. How could they continue to write authoritatively about AI, having done this?
reply
amarant 20 hours ago
I dunno. If AI doesn't write your articles, are you even an AI reporter?

Sorry, I never could resist a good dad joke

reply
internet2000 20 hours ago
[flagged]
reply
aaron695 22 hours ago
[dead]
reply
itvision 16 hours ago
[flagged]
reply
hnarn 16 hours ago
Calling Ars Technica "woke far left" is crazy; the U.S. really is lost to complete factionalist brainrot.
reply
itvision 15 hours ago
I'm not from the US and I'm not partisan; in fact, I find the two-party US system to be extremely backwards, illogical, and detrimental to the whole nation.
reply
AlexeyBelov 15 hours ago
True, but this user is Russian. But otherwise you're right, it's essentially the same brainrot.
reply
DaanDL 16 hours ago
I hope you will too escape from your echo chamber.
reply
itvision 15 hours ago
I love facts, reasoning, and logic, and I'm not known for being biased or opinionated, unlike what the Ars comments section has become, where unpopular points of view are downvoted to hell.

AI is mocked there even though the vast majority of Ars commenters have been using chatbots extensively for years. You know what that's called? Hypocrisy.

reply
arashsadrieh 11 hours ago
[flagged]
reply
protocolture 19 hours ago
>The Condé Nast-owned Ars Technica

I despise Conde Nast

reply
jmyeet 20 hours ago
The crazy part to me is that even here on HN there are people who still insist that LLMs don't fabricate things or otherwise lie.

I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.

Sure, there is such a thing as a naysayer, but there are also people who think all forms of valid criticism are just naysaying.

reply
protocolture 18 hours ago
>I wonder if these are the same people who 3-4 years ago were insisting putting 20 characters onto a blockchain (ie an NFT, which was just a URL) was the next multi-billion dollar business.

The NFT protocol doesn't really care what the payload is. NFT purveyors likewise didn't care what their payload was, as long as they could use the term "NFT".

NFTs are great for certain use cases (Crypto Kitties is still around, I believe), but there was never a single moment I considered that owning a weird ape jpeg, even if it were somehow properly owned by me, would be worth millions of dollars or whatever. It's like trying to sell a "TCP".

That said, future blockchain applications will probably still rely on NFTs in some fashion, just not the protocol-as-product weirdness we got for a few years there.

reply
weird-eye-issue 19 hours ago
I've never seen anyone here claim that AI never hallucinates or can't provide incorrect information.
reply
bigstrat2003 17 hours ago
I've absolutely seen commenters who claim that hallucinations are a thing of the past if you use the newest models. They're wrong, but they exist.
reply
lostmsu 8 hours ago
It's really easy to prove existence of a thing by showing the thing. Or, maybe, you hallucinated it?
reply
gertop 19 hours ago
I've not heard many people claim that LLMs don't hallucinate, however I have seen people (that I previously believed to be smart):

1. Believe LLMs outright even knowing they are frequently wrong

2. Claim that LLMs making shit up is caused by the user not prompting it correctly. I suppose in the same way that C is memory safe and only bad programmers make it not so.

reply
Gagarin1917 18 hours ago
Ars Technica's editors fabricate misleading headlines all the fucking time.

The editors are the ones ultimately responsible for what they publish, yet they're not taking responsibility.

reply
sl0pmaestro 20 hours ago
> while working from bed with a fever and very little sleep," he "unintentionally made a serious journalistic error" as he attempted to use an "experimental Claude Code-based AI tool" to help him

Oh right, being ill is what caused the error. I'd bet that if you start verifying this author's past content, you will find similar AI slop. Either that, or he has always been ill with very little sleep.

reply
jackyli02 20 hours ago
The role "reporter" deserves very little credence in AI now. The public might be better off if they get their information on AI from ChatGPT.
reply
3eb7988a1663 19 hours ago
The core story is literally about how AI made up facts. The solution is more of the same?
reply
neya 20 hours ago
[flagged]
reply
dang 20 hours ago
Would you please stop breaking the site guidelines? I just had to ask you this in a different context.

You may not owe your least favorite publications better, but you owe this community better if you're participating in it.

https://news.ycombinator.com/newsguidelines.html

reply
neya 18 hours ago
> I just had to ask you this in a different context.

Sorry, I just searched my comment history, maybe I missed it? Was it recent?

reply
kittikitti 20 hours ago
"Don't feed egregious comments by replying; flag them instead."

You probably wish everyone would post as bots do, without em—dashes of course.

reply
dang 19 hours ago
Sorry but I don't follow
reply
apparent 20 hours ago
Can you elaborate? Perhaps I haven't noticed that they push pro-sponsored content (what does this mean, exactly?). I do find their comment section to be pretty lousy, and very partisan. But the tech coverage always seemed fair enough. What am I missing?
reply
neya 18 hours ago
If you feed their articles into a Python script that flags biases, subtle upsells, and advertorials, you will see that a bunch of it is just promotional marketing for certain companies. They also almost never report the news, just opinions about it.
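(A minimal sketch of what such a heuristic check might look like; the phrase list, scoring, and threshold are entirely hypothetical, and a real bias or advertorial detector would need far more than keyword matching:)

```python
# Hypothetical sketch: flag promotional language in article text using
# a simple keyword heuristic. The phrase list and threshold below are
# made up for illustration only.

PROMO_PHRASES = [
    "best-in-class", "game-changing", "industry-leading",
    "must-have", "exclusive deal", "limited time",
]

def promo_score(text: str) -> float:
    """Return the fraction of known promotional phrases found in the text."""
    lowered = text.lower()
    hits = sum(1 for phrase in PROMO_PHRASES if phrase in lowered)
    return hits / len(PROMO_PHRASES)

def looks_promotional(text: str, threshold: float = 0.3) -> bool:
    """Crude yes/no flag: does the text exceed the promo-phrase threshold?"""
    return promo_score(text) >= threshold

ad_copy = "An exclusive deal on this game-changing, must-have gadget."
news_copy = "The company reported quarterly earnings on Tuesday."
```

Here `looks_promotional(ad_copy)` would flag the first snippet and pass the second; obviously this proves nothing about any specific outlet, it just illustrates the kind of script being described.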
reply
angelfangs 10 hours ago
In a post gamergate world dunno why people still rate Conde Nasty
reply
ab_testing 21 hours ago
So they fired the author after he had publicly apologized on Bluesky.
reply
somenameforme 21 hours ago
He was supposed to be their "Senior AI Reporter." Him including basically anything from LLMs, without verifying it, in articles not only demonstrates a complete lack of credibility as a writer, but also a complete lack of understanding of AI. Even if they might have personally wanted to keep him on, you just can't after something like this.
reply
bingaweek 21 hours ago
What is the connection between these two statements? Are we supposed to presume that someone who apologizes on Bluesky should never be fired? Or did you also read the article and thought this was important information?
reply
landl0rd 20 hours ago
The raison d’etre for the journalist, in AD 2026, is less to gather information than to verify it. The journalist who cannot be trusted is no journalist at all. He is a blogger.
reply
danso 21 hours ago
Why would apologizing for plagiarism and fabrication preclude you from facing sanctions for plagiarism and fabrication?
reply
skygazer 19 hours ago
Is it “plagiarism” to misattribute hallucinated quotes? Not that a whole lot of sloppy, unprofessional shortcuts weren’t taken, but plagiarism doesn’t seem like the right word, as quotes are almost definitionally not plagiarism. But maybe these were paraphrasings masquerading as quotes, so maybe that’s the difference.
reply
smallerize 10 hours ago
Maybe it's plagiarism because he did not attribute the LLM output to the LLM.
reply
danso 10 hours ago
Yeah, it's the lack of attribution that is key, even if it sounds like a trivial and ceremonial step. If a New York Times reporter writes "'Our investigation has completely stalled,' Kings County Sheriff Bob Jones told the Springfield Observer", I can infer that the NYT is reliant on local reporting for this story and may not have done original on-the-ground work themselves.

Imagine how flimsy Ars' story about a blog post would look if the story had correctly attributed the quotes (fabricated or not) to, "according to Claude AI's analysis of the blog post". The reader would have the right to wonder if the reporter had even read the blog post.

reply
danso 10 hours ago
Plagiarism hurts not only the original author (in this case, I don't think we have to worry about the LLM), but also the reporter's audience, who have an expectation that the writer's reporting and analysis are original and based on the writer's own research and observations. At the very least it's a theft of the reader's time; if I wanted an LLM's perspective on a topic, I'd generate it myself.

One of the things left unsaid in Edwards's apology [0] was whether he read the blog post that is the entire raison d'etre of his story. It's not like the story purported to do anything other than incorporate published blog posts. So in his overworked and sickened state, how did trying out an "experimental Claude Code-based AI tool" substantially save him time versus jotting notes while ostensibly reading the source material himself?

[0] https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p

reply
gdulli 19 hours ago
"Slop" and "hallucinate" have meanings outside of AI too, but it's easier to repurpose existing words than come up with a whole new lexicon for AI failure modes.
reply
netsharc 16 hours ago
Groan, redefining "plagiarism" to add "inventing quotes" is a stupidity too far for me.

Making up quotes and attributing them to people has happened before AI, journalists proper and pretend have done it too.

reply
emptyfile 10 hours ago
[dead]
reply
coldtea 21 hours ago
"Apologized on Bluesky" is absolutely no reason to keep them. The author did the absolute worst things a journalist can do (short of actual corruption) and is unfit for the job:

- He didn't care about his story,

- he didn't care to verify his story,

- he published made-up bullshit,

- he put words in a real person's mouth,

- and he didn't even care to write the thing himself.

Why keep him and pay him? What mentality does all of the above show? What respect, both self-respect and respect for the job?

If they wanted stories from an LLM, they could pay for a subscription to one directly.

Hope this sends a message to journalist hacks who offload their writing or research to an LLM.

reply
bwbwbwbw 21 hours ago
[dead]
reply
bigyabai 21 hours ago
Can you name any other way for Ars Technica to handle this situation without permanently soiling their reputation?
reply
Marsymars 19 hours ago
That's the thing. I feel kinda bad for Benj; I don't wish him ill, and maybe he keeps writing on his own site and/or other places. But I don't see any way he could have kept writing for Ars.
reply
bandrami 20 hours ago
That absolutely should be career-ending for a journalist, apology or no
reply
etchalon 7 hours ago
Not "career-ending" but definitely back a few paces.

This wasn't outright fabrication; it was a sloppy editorial workflow that resulted in hallucinations being published as fact (which is absolutely going to get more and more common unless newsrooms develop specific guards against it).

reply
bandrami 9 minutes ago
How is it not "outright fabrication"? Quotes the subject never said were attributed to him. That's fabrication.
reply