Ars Technica: Our newsroom AI policy
23 points by zdw 3 hours ago | 6 comments

vintagedave 20 minutes ago
> Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors...

This sounds like a direct callout to the incident earlier this year where an apparently sick staff member relied on an AI to reproduce quotes, and it did not do so accurately. Ars retracted the article and the staff member was fired.

I have felt very ethically uneasy about this because the person was ill. I emailed the Ars editorial team directly to express concern about labour conditions, and to note that it is the editorial team's responsibility to do things like check quotes.

Of course it is the journalist's responsibility: when you have a job, you do your job by policy (I wonder if this policy existed in writing at the time of the firing?), and it is part of the job to be accurate. But I am also a firm believer that responsibility is greater at higher levels. This sounds like a direct abrogation of journalistic standards by the Ars editorial team.

reply
legitster 28 minutes ago
AI is in danger of peeing in its own water source. It's unbelievably useful at imitating and generating content, but it needs enough original content to train on and scrape.

Google got one thing wrong and nearly destroyed the internet: people need an incentive to contribute content online, and that incentive should not be to game the system for advertising.

This in particular dawned on me when asking Claude for instructions on taking apart my dryer. There was literally only one webpage left on the internet with instructions for my particular dryer, and it was more or less unusable, riddled with rotten links and adware. Claude did its best but filled in the missing diagrams with hallucinations.

I was imagining whether LLMs could finally deliver the micropayments model people have always proposed for the internet: part of my monthly payment gets split among all of the sites the LLM scraped knowledge from, paid out the way Spotify pays artists.

It might not be a lot of money, but it would certainly be more than the pitiful ad revenue you get from posting content online right now. And if I want to upload corrected instructions for repairing this dryer I would have reason to.
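The payout scheme described above is essentially a pro-rata split of a subscription fee weighted by usage. A minimal sketch of that arithmetic, with purely illustrative site names, usage counts, and fee (none of these come from the comment):

```python
def split_payment(monthly_fee_cents, usage_counts):
    """Divide a monthly fee among sources in proportion to usage.

    usage_counts: dict mapping source -> number of times the LLM
    drew on that source this billing period.
    Returns dict of source -> payout in whole cents; any rounding
    remainder goes to the most-used source so the fee is fully paid out.
    """
    total = sum(usage_counts.values())
    if total == 0:
        return {src: 0 for src in usage_counts}
    payouts = {
        src: monthly_fee_cents * count // total
        for src, count in usage_counts.items()
    }
    remainder = monthly_fee_cents - sum(payouts.values())
    payouts[max(usage_counts, key=usage_counts.get)] += remainder
    return payouts

# Hypothetical usage: three sites consulted for a $20.00 monthly fee.
usage = {
    "dryer-repair.example/whirlpool-123": 3,
    "appliance-wiki.example/dryers": 6,
    "forum.example/fix-it": 1,
}
print(split_payment(2000, usage))
```

Integer cents and an explicit remainder step avoid the floating-point drift that would otherwise leak or over-pay fractions of a cent across millions of subscribers, which is roughly how streaming royalty pools handle the same problem.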

reply
ares623 7 minutes ago
> I was imagining whether LLMs could finally deliver the micropayments model people have always proposed for the internet: part of my monthly payment gets split among all of the sites the LLM scraped knowledge from, paid out the way Spotify pays artists.

As a software user I wish I could do the same for all the software I use.

reply
defrost 28 minutes ago
[1] AI-generated news is unhuman slop. Crikey is banning it (2024) - Crikey.com.au - https://www.crikey.com.au/2024/06/24/crikey-insider-artifici...

[2] Why Crikey retracted an article that we found out was written with AI help (2026) - https://www.crikey.com.au/2026/03/19/crikey-responds-to-ai-c...

  Yesterday, we published an article by a contributor who later confirmed they used AI in some aspects of its production.

  This goes against our editorial policies. As a result, we’ve taken down the story and the preceding three stories in the series.
[2] is an interesting follow-on from the policy set two years earlier [1], as the specific piece in question "used AI in some aspects of its production" but was largely a human-conceived, human-shaped, and human-written piece that was only "assisted" by AI.

The Australian Media Watch team looked at this tension closely and felt the retraction was unfair, pointing out that while slop is bad, assistance (subject to terms and conditions) can enhance a piece.

- Media Watch, likely geolocked to AU, might need a proxy - https://www.abc.net.au/mediawatch/episodes/ep-08/106487250

reply
ares623 33 minutes ago
Trust, reputation, and credibility will command (even more of) a premium.
reply
gnabgib 3 hours ago
Doesn't need Ars Technica added to the title
reply