-----
The Department of War is threatening to
- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
- Label the company a "supply chain risk"
All in retaliation for Anthropic sticking to its red lines: not allowing its models to be used for domestic mass surveillance or for autonomously killing people without human oversight.
The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.
They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
We are the employees of Google and OpenAI, two of the top AI companies in the world.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
Signed,
Going after the visa-holding employees of these companies is within reach of the WH, and it's consistent with their MO.
> They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.
This is about spreading information among the companies about each others' position, not a petition to the DoD.
Also why would the department of war care about what citizens think specifically?
Why can't they go to the contract generator of last resort, aka the Pentagon. It's what Elon has done with SpaceX and Grok.
I suspect the same will happen here.
Musk (Tesla, SpaceX), Ellison (Oracle) consistently supported Trump before his win was certain and are tight with Trump. They were megadonors behind his campaign.
Bezos (Amazon, Blue Origin) and Zuckerberg (Meta) pivoted towards Trump in 2024 after it looked like he would win a second time. They are opportunistic bastards who try to weasel onto Trump's good side, with varying results.
Apple, Google, Microsoft, Nvidia etc. just bend the knee. They are reluctant but pragmatic and try to protect the company when their competitors Amazon, Meta and Oracle are on the inside. Notice that in this final group, CEOs lack autonomy. At Alphabet, Page and Brin retain controlling authority (and they just try to avoid getting involved with Trump). Nvidia lacks a dual-class structure, meaning Jensen Huang (4% of votes) can be outvoted on critical matters. Both Apple and Microsoft are "faceless" corporations where the CEOs serve as hired hands.
Vidkun Quisling
If anything, I have less respect for people who support fascism for money than I do for people who actually believe in it.
Silly logic. The first are average humans, the second are evil.
You would not want that either.
People and companies are free to do whatever the fuck they want that’s not illegal. They can resist any government priorities for any reason, including finding them destructive or anti-democratic or corrupt.
The government is able to change the laws within the current system to back its will—regardless of whether it’s in the interest of the people who voted for them, let alone the entire population.
(No the em dash isn’t AI.)
Refusing to join forces and contribute your efforts towards actively supporting fascism is not "deciding against democratically elected leaders". This sort of rhetorical sophism is unhelpful and, indeed, damaging.
It is ABSOLUTELY everyone's place, ("corporate leaders" included) to have principles and stick to them.
Personally, I agree with the principles of not using fallible AI for mass domestic surveillance analysis purposes, or for fully autonomous weapon purposes.
It's meaningless to talk about what the employees think or care about. They are selling their labor and value to the corporation that is legally entitled to outspend all of them to get whatever it wants.
Some are culture warriors who feel they have been wronged, some are opportunists. But the thing with opportunism is that this is who they are and what they believe in. Having a president who is corrupt is exactly what they want because they know exactly how to work with him: quid pro quo.
There is no distance between them being pro-Trump and opportunistic. He’s the perfect embodiment of those values.
If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.
In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.
If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.
https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...
(This is also why the DoD move is so dumb. I think we'd see massive talent flight from Anthropic if they end up complying, even if that compliance is against Dario's will.)
The control rests with the board and the executives. They have the control and the power and can make decisions.
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
> I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.
So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with the best intentions.
Much has been said about the purported superiority of western values, but as we've all seen the USA was very quick to get rid of even the slightest notion of these values when Trump promised them some money and a dominant vibe.
The old world is dying, and the new world struggles to be born: now is the time of monsters.
There's nothing contradictory or circular in both of those claims.
If someone were to present to me a better caretaker of western liberal ideals than the US and ask whether I would prefer AI empower them, the answer would be: yes.
And in fact, that is precisely what I am arguing. It is good that Anthropic, which so far has demonstrated closer adherence to western liberal ideals than the current US government, is pushing back on the current US government.
I also think it is good that Anthropic stands in opposition to China, which also does not embody western liberal ideals.
China has been competing with India for decades for the most-polluted cities crown, and only slightly ranks below the US and Russia in CO2 emissions per capita. It's also the only large country where its emissions have been growing over the last decade. Where does the idea come from that China somehow puts less pressure on the environment? Less than what, exactly?
By "slightly ranks below" you mean ~50-60% per capita.
>China somehow puts less pressure on the environment
PRC renewables at staggering scale.
Last year the PRC brrrted out enough solar panels whose lifetime output exceeds a year of global oil consumption. AKA the world uses about 40 billion barrels of oil per year, and the PRC's annual solar production will sink about 40 billion barrels of oil's worth of emissions over its lifetime. That's a fucking obscene amount of carbon sink, and frankly, at full production, annual PRC solar + wind can on paper displace 100% of oil, 100% of LNG, and a good % of coal (again, annual utilization) once storage is figured out.

This BTW functionally makes the PRC emissions-negative, by a massive margin; arguably the only country that is.

It's only perverse emission accounting rules that say the PRC should be penalized for manufacturing renewables while buyers are credited, AND fossil producers like the US are not penalized for extraction, which the US has only increased.
It's a great policy, but it also makes sense for geo-strategic reasons (even ignoring the climate issue).
That's insane to say, given that he's literally acting in the public sphere as the Mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all.
It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.
And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.
https://www.whitehouse.gov/presidential-actions/2025/09/rest...
"By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:
"The name “Department of War,” more than the current “Department of Defense,” ensures peace through strength, as it demonstrates our ability and willingness to fight and win wars on behalf of our Nation at a moment’s notice, not just to defend. This name sharpens the Department’s focus on our own national interest and our adversaries’ focus on our willingness and availability to wage war to secure what is ours. I have therefore determined that this Department should once again be known as the Department of War and the Secretary should be known as the Secretary of War."
We've always been OK with this in the pre-AI era. (See the plot line of dozens of movies where the "good" government spies on the "bad" one.) Heck we've even been OK with domestic surveillance. (See "The Wire".) Has something changed, or are we just now realizing how it's problematic?
When Google Met Wikileaks is a fun read, billionaire CEOs love to take Americas side.
That's as Anthropic as it gets if your nerve expands a little bit further than your HOA.
After your machines are destroyed you will be fighting machines or machines will extract and constantly optimize you. They will either exterminate you or make you busy enough not to have time for resistance. If you have something of value they will take it away. The best case scenario is to make you join the owners of the machines and keep you busy so that you don't have time to raise concerns about your 2nd class citizenship.
But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side's bots win: either totally, allowing them to kill people indiscriminately, or partially, which forces the team on the back foot to pivot to guerrilla warfare and terror attacks, using robots.
What makes you think in any war the machines would stop at just fighting other machines?
It essentially becomes a computer against humans. And when such software is developed, who's going to stop it from reaching the masses? Imagine viruses/malware that can take a life.

I'm shocked that so few are even bothered by this, and it is really concerning that technology developed for human welfare could become something turned totally against humans.
Yes. Absolutely.
The US system doesn't empower a company to say no. It should though.
You own nothing but your opinion. (No offense to personal property aficionados)
However, Anthropic's situation is very different: there's no ongoing invasion of the USA, and the USA traditionally attacks other countries once in a while (no judgment), so the weapons upgrade will be "useful" in the field.
There are a lot of well meaning people that are very anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.
I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.
> 1. Do not obey in advance.
> Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.
https://scholars.org/contribution/twenty-lessons-fighting-ty...
After all, the regime already says such domestic dissenters are terrorists, and have, on multiple recent occasions, justified the execution of domestic dissenters based on that.
Yes. Yes, that's precisely what we want.
You can take issue with that argument if you want but it’s unconvincing not to address it.
- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment
- threatened court-packing until SCOTUS backed down and started rubber-stamping his agenda
- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini
- interned 120k people without due process, on the basis of ethnicity
- turned a national party into a personal patronage system
- threatened to override the legislature if it didn’t start passing laws he liked
Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.
I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?
I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.
China considers all lethal autonomous weapons "unacceptable", calling all countries to ban it. Countries like the US and India refuse to back such proposals. See China's official stands on this matter below.
https://documents.unoda.org/wp-content/uploads/2022/07/Worki...
I totally understand that you got brainwashed by the media, but hey, you apparently have internet access, so why can't you do a little bit of research of your own before posting nonsense using imagination as your source of information?
```
Basic characteristics of Unacceptable Autonomous Weapons Systems should include but not limited to the following:
- Firstly, lethality, meaning sufficient lethal payload (charge) and means.
- Secondly, autonomy, meaning absence of human intervention and control during the entire process of executing a task.
- Thirdly, impossibility for termination, meaning that once started, there is no way to terminate the operation.
- Fourthly, indiscriminate killing, meaning that the device will execute the mission of killing and maiming regardless of conditions, scenarios and targets.
- Fifthly, evolution, meaning that through interaction with the environment, the device can learn autonomously, expand its functions and capabilities in a degree exceeding human expectations.

Autonomous weapons systems with all of the five characteristics clearly have anti-human characteristics and significant humanitarian risks, and the international community could consider following the example of the Protocol on Blinding Laser Weapons and work to reach a legal instrument to prohibit such weapons systems.
```
Charitably, you might say that China is worried about a nightmare scenario. Less charitably, you might say that the definition of an unacceptable weapon system is so tight that it does not describe anything that anyone would ever build, or would want to build. This posture would allow China to adopt the international posture of seeming to oppose autonomous weapons without actually de facto constraining themselves at all.
This, by contrast, is what China considers acceptable:
```
Acceptable Autonomous Weapons Systems could have a high degree of autonomy, but are always under human control. It means they can be used in a secure, credible, reliable and manageable manner, can be suspended by human beings at any time and comply with basic principles of international humanitarian law in military operations, such as distinction, proportionality and precaution.
```
So as long as the system has a killswitch (something that afaik absolutely no one is proposing to dispense with?), it's Acceptable.
Meanwhile, it would certainly seem that China's defense research universities are interested in developing this tech: https://thediplomat.com/2026/02/machines-in-the-alleyways-ch....
So, I did a bit of research with my internet access-- how do my findings square with your impressions?
All the world powers are in a race to it.
https://cset.georgetown.edu/article/china-trains-ai-controll...
https://thediplomat.com/2026/02/machines-in-the-alleyways-ch...
https://www.brookings.edu/articles/ai-weapons-in-chinas-mili...
https://cset.georgetown.edu/article/how-china-is-using-ai-fo...
If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
No
> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?
The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.
However horrific the regimes in these countries are, the people behind the technology there are just as likely to be intelligent and moral human beings as the people in the USA and Europe working on these are.
The A-bombs were not the worst part of the attack on Japan. And thus were not "needed to end the war". They were part of marketing /the/ super power.
Was it the best path to end the war? Certainly.
The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.
Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.
I am unaware of any tech company that directly does physical warfare on the battlefield against humans.
In any case, AI drones will largely be used for "defense" in the euphemistic sense.
He is trying to win sympathies even (or especially?) among nationalist hawks.
Sounds more like the door is open for this once reliability targets are met.
I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.
Citizens were loyal to Rome. Soldiers were loyal to their commanders. If commanders wanted to launch rebellions, the soldiers would likely support them.
A commander who commands the loyalty of legions by convincing a handful of drone operators would be very dangerous for democracy.
I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.
Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.
Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.
Notably USA is not one of those signatories.
There have been quite a lot of discussions about Gandhi here on Hacker News as well.

Gandhi himself became the face of the satyagraha movement, considering he started it, but that movement only had value because of the many important people who joined in.

Here is a quote from Martin Luther King Jr. about satyagraha that I found on Wikipedia:
> Like most people, I had heard of Gandhi, but I had never studied him seriously. As I read I became deeply fascinated by his campaigns of nonviolent resistance. I was particularly moved by his Salt March to the Sea and his numerous fasts. The whole concept of Satyagraha (Satya is truth which equals love, and agraha is force; Satyagraha, therefore, means truth force or love force) was profoundly significant to me. As I delved deeper into the philosophy of Gandhi, my skepticism concerning the power of love gradually diminished, and I came to see for the first time its potency in the area of social reform. ... It was in this Gandhian emphasis on love and nonviolence that I discovered the method for social reform that I had been seeking.
It would be better to wish for more satyagrahis to be named, but I don't think the Western media would catch on to it.

Ghaffar Khan, Sarojini Naidu and Vinoba Bhave are all people who, I think, have simple life histories while being from different religions, castes and genders, all while adhering to the philosophy of satyagraha.

That being said, satyagraha might not work in the current context, because Britain was only able to rule India with the help of Indians, which is why the satyagraha movement was so successful. But if the govt. can get its hands on autonomous drones capable of killing civilians, and on mass surveillance, then satyagraha might not work as well in the near future
(the two things Anthropic is denying to provide to the DOD, vis-a-vis the article itself)
I don't think Anthropic is a great company, it certainly has its flaws, but I do think it is very admirable of them to stand firm even when the govt. is essentially saying to follow them, or they will literally kill the business with the 3-4 national security laws they are proposing to invoke on Anthropic.

I do urge people to say satyagraha, or to mention other peaceful protests, because usually whenever people talk about Gandhi now, this discussion is bound to come up, which really distracts from the original point at times. It took the collective efforts and blood of so many Indian leaders for India to gain independence.

But the point of my cynical comment was that Gandhi's idealism is so far from the profit-centered mentality of big tech that it's almost unimaginable that the CEO of such a company will stick to pacifism.
Odd.
a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.
Striking a building with ordnance (indirect fires, dropped from fixed-wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.
> I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar
The parameters are similar, but the effects are different. That's what makes the decision not functionally equivalent. A functionally equivalent decision would have the same functional result.
To put a point on it: we are allowed to, and indeed should, consider the effects of a decision when making it.
Yes, if you fuck up some white collar work, people will die. It’s irresponsible.
A lot of the work in those sectors is not what is being targeted for fully autonomous replacement. It likely will be in the future, though.
You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.
I don't think that your point makes sense especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.
I don't think the people operating the drones are a bottleneck for a war between your country and its enemies; rather, they are a bottleneck for a war between your country and its own people. The bottleneck is morality: you would find fewer people willing to commit the same atrocities against their own community. But terminator-style AI is an orphan with no community, i.e. it has no problem following any orders from the govt. And THIS is the core of the argument, because Anthropic has safeguards to reject such orders, and the DoD is threatening to essentially kill the company by invoking many laws to force it to give in.
Are you prepared to be the "enemy" of these soulless killbots? Do you personally have AI powered-weapons? You need to be at the cutting edge of capability, right?
“Even fully autonomous weapons (…) may prove critical for our national defense”
FWIW there's simply no way around this in the end. If your adversary even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.
Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not a US citizen)
Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]
[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...
[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...
I'm not suggesting that Anthropic's models should be used by foreign governments for domestic surveillance
I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
This absolutely is about privacy.
> I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people
Maybe the others here are not happy that this company is supporting a fascist government in committing acts of international aggression against other countries, acts which have been condemned by the majority of countries around the world.
But personally, I wouldn't like to die because some crackpot with the right connections can will the rest of the world to that fate, no matter their affiliation. This escalation of destructive power, and the carelessness with which it is justified, is pretty disheartening to see. Good times create bad people?
We seem to be unable to stop building the weapon, we seem unable to stop handing it over to morons, and I should expect these morons to not fire it?
Then again, it's called MAD for a reason... What's one more WMD after all? Let's hope that we at least understand it before it becomes as powerful as everyone seems to think it will become.
Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict
Between the years of 1850-1950, an estimated 150M humans died (and many more permanently disabled) due to armed conflict (~1.5M/year). Between 1950-today: closer to 10M (~132k/year). The majority of those came from the Vietnam and Korean wars. If you limit the window to after 2000: only ~2M deaths, or ~78k/year. We carry bigger sticks than ever, and those sticks allow us to execute more strategic, incapacitating strikes, or stop conflict from even happening in the first place.
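For what it's worth, the per-year rates in those figures check out arithmetically (a quick sketch; the underlying death totals are the parent comment's estimates, and "today" is taken as 2026):

```python
# Sanity check of the quoted annual casualty rates. Only the division is
# being checked here; the totals (150M, 10M, 2M) are the commenter's numbers.

def deaths_per_year(total_deaths: float, start_year: int, end_year: int) -> float:
    """Average annual deaths over a year range."""
    return total_deaths / (end_year - start_year)

rate_1850_1950 = deaths_per_year(150e6, 1850, 1950)  # 1,500,000/year, matching ~1.5M
rate_1950_today = deaths_per_year(10e6, 1950, 2026)  # ~131,600/year, matching ~132k
rate_2000_today = deaths_per_year(2e6, 2000, 2026)   # ~76,900/year, close to ~78k
```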
On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.
- https://the-decoder.com/anthropics-head-of-safeguards-resear...
- https://the-decoder.com/anthropics-ceo-admits-compromising-w... (see also https://news.ycombinator.com/item?id=44651971, https://futurism.com/leaked-messages-ceo-anthropic-dictators)
- https://the-decoder.com/anthropic-ceo-dario-amodei-backs-pre...
> He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"
> "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."
> "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)
I don't think that any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with them. I believe the CEO of NVIDIA has said similar. The Saudis invest in many public US companies; does that make those companies less trustworthy? What about taking private capital from institutions such as State Street and BlackRock? The last quote seems like more of a reflection than an allegation. It read to me as a desire to do better.
I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.
> The Saudis invest in many public US companies, does that make those companies less trustworthy?
It does. If Anthropic takes money from the Middle East, that might be the reason why they cannot work for the Pentagon. Simply put, the Pentagon works together with the Israeli forces, and Middle East investors might not like this. So Anthropic has to decide: either take a lot of money from the Middle East, or work for the Pentagon.

Of course the problem goes much deeper than just Anthropic. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. Because basically this is dirty money, generated by slavery and the forceful suppression of people. We should forbid all companies from taking this kind of dirty money. But because we don't do that at the moment, companies who don't take this dirty money will have a disadvantage against companies that do. And because companies are all about money, in the end they are basically forced to act against their good intentions, just to survive.

We as a society have to stop this. We must make sure that companies who do not take dirty money survive the competition. My idea would be to extend the rules for money laundering to all countries that are dictatorships. But there might be other ideas to level the playing field between companies, so we as a society can help them make the right decision.
> The Saudis invest in many public US companies, does that make those companies less trustworthy?
Uhh.. yeah?
> we've seen a lot worse from many of their competitors
I think we should demand people do better than just being slightly above the worst. So let's see what happens tonight at 5:01 PM, but Anthropic isn't really the story here.
Ethics is complicated. I’m not saying this means it can’t be reasoned about and discussed. It can! But the sources you’ve cited have shown themselves to be rather shallow.
I encourage everyone to write out your ethical model and put yourself in their shoes and think about how you would weigh the factors.
There is no free lunch. For many practical decisions with high stakes, many reasonable decisions from one POV could be argued against from another. It is the synthesis that matters the most. Among those articles, I don’t see great minds doing their best work. (The constraints of their medium and funding model are a big problem I think.)
Read Brian Christian’s “The Alignment Problem”’s take on predictive policing if you want a specific example of what I mean. There are actually mathematical impossibilities at play when it comes to common sense, ethical reasoning.
Common sense ethical reasoning has never been very good at new or complicated situations. “Common sense” at its worst is often a rhetorical technique used to shut down careful thinking. At its best, it can drive us to pay attention to our conscience and to synthesize.
I suggest finding better discussions and/or allocating the time yourself to think through it. My preferred sources for AI and ethics discussions are highly curated. I don’t “trust” any of them absolutely. * They are all grist for the mill.
I get better grist from LessWrong than HN 99% of the time. I discuss here to make sure I have a sense of what more “mainstream” people are discussing. HN lags the quality of LW — and will probably never catch up — but it does move in that direction usually over time. I’m not criticizing individuals here; I’m commenting on culture.
Please don’t confuse what I’m saying as pure subjectivity. One could conduct scientific experiments about the quality of discussions of a particular forum in many senses. Which places are drawing upon better information? Which are synthesizing it more carefully? Which drill down into detail? Which participants have allocated more to think clearly? Which strive to make predictions? Which prioritize hot takes? Which prioritize mutual understanding?
It isn’t even close.
Opinions and the Overton window are moving pretty rapidly, compared to even one year ago.
* I’ve written several comments about viewing trust as a triple (who, what, why). This isn’t my idea: I stole it.
So reframe I did. (I don’t think those articles you cited are worth any more attention than I’ve already given them.)
My most blunt editorializing would be this: most people would be better grounded if they read AI alignment and safety books by Stuart Russell, Nick Bostrom, Brian Christian, Eliezer Yudkowsky, and Nate Soares. If you’ve read others that you recommend, please let me know. I’ve read many that I don’t usually recommend.
As far as long-form articles, I recommend Paul Christiano and Zvi Mowshowitz, as well as anyone with the fortitude to make predictions while sharing their models (like the AI 2027 crew).
I recommend browsing “Best of Year Y” (or whatever they are called) articles on the AI Alignment Forum and LessWrong. They are my go-tos for smart & informed writing on AI. For posts that have more than say 100 votes, the quality bar is tremendously higher than almost anywhere else I’ve seen, including mainstream sources with great reputations.
In conclusion, I would rather point to interesting people to read and places to engage.
The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.
Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.
To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.
During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.
> Brigadier-General Mattias Hanson, CIO, Swedish Armed Forces, says: “Strengthening Sweden’s militarily and acting as part of a collective defense requires us to increase our defensive capabilities. We need to utilize the latest technology and all the innovative power of the Swedish private sector. Sweden has unique skills and capabilities in both telecoms and defense technology..." [0]
This is just one quick example I could find.
[0] https://www.ericsson.com/en/news/2025/6/ericsson-5g-connecti...
Under such a scenario, requisition applies, and so all of this talk is moot.
The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.
Edit:
There's a yet larger question of whether any legal constraints on the military's use of technology make sense at all, since any safeguards will be quickly abandoned if a real enemy presents itself. As a matter of natural law, no society will willingly handicap its means of defense against an external threat.
It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.
Same for chemical and biologicals. Those do prove your point that the law will be ignored if expedient. But it doesn't invalidate the notion of a society putting constraints on itself.
Give yourself a break. What, your fancy democratic rule still holds under Trump?
The military should never be allowed to dictate how private corporations act.
I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.
> Or the models could be developed internally, after having requisitioned the data centers.
I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?
With a massive budget, too. Hundreds of millions iirc.
It felt like a website that the small web-dev shop I worked for could build without much problem in a couple months.
We didn't have 200 layers of bureaucracy, though.
That said I don't doubt the military could take their current tech and keep it running. It's far different from the typical grift of government contractors.
And contrary to what the model-makers would like you to believe, I don't think we're anywhere close to the system being self-improving enough that you could just let it run without intervention and have it spit out a new frontier model.
With this mindset the said group will quickly grow to half of the US population.
I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.
You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.
In part the propaganda machine that started in the 80s with AM talk radio, culminating to algorithmic feeds today.
> it doesn't need to be an attack of which you are putting yourself on a side
and also
> I can't know if you are talking about either the right or left
Which are contradictory, if you think about it. I am not sure what you want me to write if I can't use "they" to refer to other people. Also, I didn't use "we", something you somehow also seem to want me to say, and didn't.
Whenever someone spends the time, and it takes a long time, to correct you, laugh, mock them, spew a few more lies.
And it's easy to do when the rich, the owner class, side with you, because they buy newspapers, websites, and ads, which you can't do if you lean left, because acquiring money at all costs is not a priority of left-wing people.
(I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)
There are no real Maoists or true communists in the US anymore, at least not enough to constitute meaningful political forces. To the extent they exist they are irrelevant, and one can argue further that no true left remains in the US at all.
As for my analysis of the Trump phenomenon, I only have intuitions and biases to offer, so caveat lector.
I don't think it's particularly mysterious. The general perception is that the American left has made identity politics and social justice its main political and social programs, to the detriment of basic governance, most importantly the economy and security, thereby breaking the social contract.
You cannot be a party that aggressively defends and promotes the interests of minority classes at the expense of the majority without losing the support of the majority. In some cases, these minorities are so small as to border on the absurd.
Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size. The same goes for the LGBT population, which represents maybe 10% of the US population (and that's a liberal estimate).
Try as you might, you cannot escape the cold, hard fact that 60% of the US population is white, with something closer to 70% identifying as white or partly white. 90% of that group is going to be straight.
The US middle and working classes still really haven't recovered from the financial crisis of 2008, the aftermath of which precipitated a huge transfer of wealth from these classes to the upper class, a trend that accelerated during the pandemic.
So you have a majority of the population who are reeling from a devastating loss of wealth, station, and status, unable to keep pace with inflation, watching one of the two main political parties aggressively promote the interests of a tiny minority at their expense, or at least that is the perception.
Putting aside the nature of the minorities in question, the subservience of the political class to a minority of the population has another name: elitism. The natural response to elitism is populism, which is what we are seeing.
The protection of minority rights is a noble cause, but it's primarily a civil rights issue, and the focus should be on making sure those classes are treated equally under the law. The goal should not be the elevation of their social and cultural station above the majority.
Biden, and then Harris/Walz, are kind of the ultimate expression of this left-wing, elitist decadence. Biden put a man who wears stilettos and dresses to work in charge of nuclear waste at the Department of Energy. People can rage at me all they want for that description, but that is what the majority of Americans perceive. Again, putting aside any questions of morality, it is political suicide.
Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the Democratic party from its ideological roots in the labor movement, which was always militantly against illegal immigration. Again, the perception is that the interests of minorities (in this case migrants) come before the interests of the majority. In this case the minority are not even American citizens.
There's a lot more to say on this topic, and I'm sure you can find more persuasive analyses from better sources, but these are some of my intuitions.
Thanks for coming to my TED Talk.
1. https://williamsinstitute.law.ucla.edu/publications/trans-ad...
This is just totally disconnected from policy reality. Biden did not tolerate mass border crossings. (I _wish_ he'd dismantled ICE, but he very clearly did not.) A relatively minor DoE appointment going to a member of an unpopular minority both has nothing to do with policy and is the kind of thing that must necessarily be acceptable if minorities are actually going to be "treated equally under the law". This is a ludicrous basis to infer "the subservience of the political class" to transgender people.
On the other hand, Trump is a billionaire with Epstein connections and entirely unabashed about making money for his businesses and family using his government position. If this isn't "decadence", or "elitism", what meaning could the words possibly have?
"Deprogramming" might be an unfriendly word but it's hard for me to imagine how you have a functional democracy when a plurality of voters are making decisions on the basis of straightforward falsehoods, or even inversions of reality, just because "at least that is the perception". This isn't a sustainable situation, and it will end with either re-connecting these people to reality or disenfranchising them (really, them disenfranchising themselves along with the rest of us, e.g. by re-empowering someone who tried to steal an election). The former seems vastly preferable.
Speaking of unfriendly words - I also broadly have very little sympathy for a demand that people on the left speak respectfully of Trump voters given the total lack of any reciprocation. Even if it is the right way to do politics, the asymmetry between the way Democratic politicians talk about rural areas and the way Republican politicians talk about cities is another thing that's totally unsustainable.
> Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the democratic party from their ideological roots in the labor movement which was always militantly against illegal immigration
Both Biden and Obama turned away more immigrants than Trump did in his first term. And Clinton was the king of denying asylum. The idea that we just had completely open borders and nothing was being done about it is a fabrication.
> Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size
If you actually pay attention to who is talking about Trans people, it is the right. Liberal media may be occasionally baited into arguing about it, but to say it was a major platform is a perception the right crafted. Fox was talking about it 24/7 leading up to the election [1]. Musk and Trump were tweeting about it constantly. They ran political ads saying they wanted to convert your kids to trans ideology. It's gotten so bad that our current president just harasses women that look kinda manly, saying they are trans.
[1] https://www.yahoo.com/news/fox-news-covers-transgender-issue...
As an example, replacing sex with "gender identity" in prisons policy has inflicted considerable harm on women prisoners, who have been sexually assaulted, raped and impregnated by male prisoners who were transferred to the female prison estate on the basis of their supposed "female gender identity".
Feminist groups like WoLF spoke up on the horrors of this first, and the Republicans followed when they realized they could capitalize on this politically. But really it shouldn't have happened at all.
Propaganda; 1 in 6 Boomers having been exposed to amounts of lead in childhood that led to measurable cognitive declines; the average age of the US population rising as birth rates fall, meaning most eligible voters are in the age groups most likely to suffer low-grade dementia; and the weaponization of social media by foreign adversaries and wealthy elites.
There's maybe 4-5M true believers, the rest are gullible lead-addled old fools who got brainwashed by Fox News. That's the unvarnished truth of it.
I'm not upset at people for having a differing opinion or being upset at economic conditions attributable to Democrats, but rather at their persistent belief in provably false information like the relative danger of immigrants, the causes of climate change, vaccine safety, election security, or whether or not a particular ethnic group is eating their pets. This isn't a matter of opinion; it's a matter of observable reality and fundamental human morality.
What are you talking about?
It's on you to argue it was, e.g. by comparing it to other clear landslide victories like Reagan in 1984. The truth is that in 2024 the final popular vote gap was 1.5%, compared to 4.5% in 2020, -2.0% in 2016 (yeah, really), 3.9% in 2012, 7.3% in 2008, and so on.
You know who doesn't have as much power? The Swiss head of state, so weak you can't even reliably name them! THAT'S what it looks like to defeat personalization, not some hand-wringing hoping a system does something it wasn't designed to do.
> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.
This contradictory messaging puts to rest any doubt that this is a strong-arm attempt by the government to allow any use. I really like Anthropic's approach here, which is to state in turn that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.
Also there’s probably a way to abuse the Taft-Hartley Act beyond current recognition to force the employees to stay by designating any en-masse quitting a “strike / walk off / collective action”. The consequences to the individuals for this are unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).
If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.
It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.
Once a war has started, it won't be fake any more.
> they’ll definitely declare wars to extend the presidency.
You don't exchange the Fraudster in Chief while at war, so they do want a war. Any war. But I have the strange impression that von Clownstick doesn't want to be seen as having started it by himself.
Perhaps there's a war, that a misguided congress won't declare as such, and a certain vice president that runs for president, with a certain someone as his vice president...
You do not under any circumstances gotta hand it to the American military but they do seem unwilling to play a role in Trump's let's say extraconstitutional ambitions. At least a junta doesn't seem likely. Without the military behind him he's just a senile old pedophile. What's he going to do, lock himself into the Oval Office?
But violating the constitution with such a blatant power grab, and thus throwing the future of the United States and its military into uncertainty, is probably not something they want. Better to just force Trump out and maintain the status quo of new presidents every 4-8 years.
Specifically, the section on martial law in a wartime context. It’s not very clear, but I just feel like the norms and laws will be stretched or broken, as the administration has already done numerous times.
It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.
And one of the few constraints in their approach is not to fuck with the Dow. Expropriating Anthropic’s IP would trash the AI sector, and by extension, the Dow. (Even designating it a supply-chain risk sets a material precedent that a future administration could use against OpenAI and xAI.)
Hegseth is bluffing on his most destructive fronts, even if he doesn’t know it.
>Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."
>...
>"You have more restrictions on starting a nail salon on Capitol Hill or to have your hair braided, then you have on the most dangerous technologies in the history of mankind," Bannon told his listeners.
https://abcnews.com/US/inside-magas-growing-fight-stop-trump...
Care to convert this into a prediction?: are you predicting Hegseth will back down?
> I doubt anyone at the Pentagon is pushing for this.
... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?
One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.
I think he may be able to cancel Anthropic’s contract. But no more. He won’t back down as much as be overruled.
> As SecDef/SecWar, Hegseth is the head of the Pentagon
On paper. Also, being the de jure head of something doesn’t automatically mean you speak for it as a whole.
> while also taking his power seriously
Authority and power are different. A plane pilot has a lot of authority. They don’t have a lot of power.
This outcome might be a win for everyone involved; the time and effort for those billions with a lot of strings attached are less useful as AI matures.
You’ll notice I’m trying to avoid debating generic phrases and terms such as “power” that probably won’t advance mutual understanding of this situation. I’m talking about specific actions and systems. It makes it clearer.
You’re missing the forest for the trees. Take the tariffs as analogy. Specifying the laws invoked to effect the tariffs is more precise, but less complete than describing Trump, Bessent and Navarro’s motivations and theories.
Same here. We can wax lyrical about the DPA and specific statutory authorities and how they may be litigated. Or we can look at the actual power structures. The former is precise but inaccurate. The latter is the actual dynamic.
> terms such as “power” that probably won’t advance mutual understanding
If terms like power and influence don’t make sense to someone, they’re going to be lost in any political discussion. But particularly under this administration.
There aren’t legal analytic fundamentals driving why Trump hates windmills or Biden pardoned his son, these were expressions of Presidential power and preference. The legality was ex post facto.
How much are we connecting in this particular conversation? What if each of us were to step back and ask 3 questions: What am I trying to communicate? Are we both interested in having this conversation? Are we both learning from it?
Again, this is not meant as a criticism of you. It is a statement of the dynamic here, and how we’re relating. (Even though HN is well above average, it has massive failure modes when you view it from a systems POV.)
My feeling is that you aren’t responding to the intent behind my statement. But I’ll also recognize that I’m probably not communicating in a way that lands for you. Maybe you feel the same in reverse? That would be my guess.
This is a failure of our communication norms and technologies. Given that we’re in the year 2026 and have minimal technical barriers, we have very much failed culturally to get anywhere close to the potential of the Internet, or whatever needs to come next.
For what it’s worth, I’m not seeing a failure of communication. I’m seeing a failure of scoping. You’re arguing on the basis of specific legal mechanisms by which power is expressed. I’m arguing the real motivations of and political constraints on decision makers are more fundamental in this case.
That isn’t universally true. Power predicted what Trump would do with tariffs (again, analogy). Legal analysis predicted his constraints (which SCOTUS affirmed). In this case, SecDef has the legal authority to do what’s described. He doesn’t, however, have the political freedom to do so. That turns the latter into the germane constraint, not a litany of proscribed powers.
Put another way, the people—here—are fundamental. (Market reactions, too, though again largely because the people in this administration have chosen the Dow as a lighthouse.) The legal justifications are worse than surface level, they’re ex post facto findings of retaliatory paths. It may feel more substantial to quote DPA statute versus discuss Hegseth and Dario’s motivations and relationships, but that’s, again, missing the forest for the trees.
It's true that there's a lot of grey area and turbulence right now around which HN posts have been LLM-generated or LLM-edited, and it's compounded by the fact that there's no way to tell for sure. We all have to find our way through this—both the community and the mods. But we can and need to do so without breaking HN's rules ourselves in the process.
If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028. Soldiers literally dragged their feet at the glorious Trump military parade, when they walked disinterested and casually instead of marching.
While I grant the spirit of this point, I don't think it applies to this situation. The "bureaucratic resistance" explanation doesn't fit when you think about what would happen next. Here is my educated guess based on some research:
- contract termination: Hegseth can direct the relevant contracting officer(s) at the Pentagon to terminate the contract. This could happen within days. Internal stonewalling here might add weeks of delay, but probably not more than that.
- supply chain risk designation: Hegseth signs a document, puts it into motion. Then it becomes a bureaucratic process that chugs along. Noncompliant contracting officers probably would be fired, so this happens within weeks or a few months. Substantial delays could come from litigation, to be sure -- but this isn't a case where civil service stonewalling saves us.
- Defense Production Act: would require an executive order from Trump. This would go into effect right away, at least on paper. It would very likely lead to litigation and possibly court injunctions.
My point is that non-compliant civil servants at the Pentagon probably can't slow it down very much. (I recommend they do what their oath and conscience demands, to be sure!) Hegseth has shown he's willing to fire quickly and aggressively. I admire people who take a stand against Hegseth and Trump -- they are a nasty combination of dangerous and corrupt. At the moment, they appear weaker than ever. Sustained civil pushback is working.
Let's "roll this up" back to my original point. I responded to a comment that said "I doubt anyone at the Pentagon is pushing for this.", asking the commenter to explain. I don't think that comment promotes a better understanding of the situation. It is more useful to talk about the components of the situation and some possible cause-effect relationships.
> Mass domestic surveillance.
Since when has DoD started getting involved with the internal affairs of the country?
https://en.wikipedia.org/wiki/United_States_Department_of_De...
Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.
If the rename gets struck down then they don't have the power. If it doesn't they have the power.
There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.
Until they did it anyway.
The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.
What an utterly bewildering statement. So your suggestion is to suck it up, because we're all impotent anyway? The only thing that can bring authoritarian systems down is civil resistance.
"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."
“Four key words (…) The only phrase that can genuinely make a weak bully go away, and that is: Fuck You, Make Me.”
Is your view that contracts with the government should be meaningless? That the government should be able to unilaterally, and without recourse, change any contract they previously agreed to for any reason, and the vendor should be forced at gunpoint to comply?
If you do believe this, then what do you believe the second order effects will be when contracts with the government have no meaning? How will vendors to the government respond? Will this ultimately help or hinder the American government's efficacy?
No, it’s up to the government to create policy and legislation that outlines what is lawful or not and install mechanisms to monitor and regulate usage.
The fact that an arm of the government wants to go YOLO mode is merely a symptom of the deeper problem that this government is currently not effectual.
Not like limiting uses of products is anything new
You can’t say “no disabled people at your business”. Hell, you can’t even say “no fake service animals at my restaurant”. Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.
> Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.
Your average American is low functioning, low education, vibe driven with a 6th-8th grade reading level, so this ("What Americans think") is not terribly relevant in my opinion. Provide statute and case law.
https://www.justsecurity.org/107087/tracker-litigation-legal...
They are only contradictory if you think about it.
Why the hell should companies get to dictate on their own to the government how their product is used?
The government should have far less control and power over individuals and businesses than it currently does.
But this is irrelevant to the case we are discussing, where Anthropic used legal contractual terms, and the government willingly signed them, then demanded they be changed after the fact.
1) it’s pretty transparently obvious that Anthropic is not a supply chain risk, and that this is a retaliatory gesture. So I don’t support that usage.
2) if they do try, Congress or SCOTUS could well reduce or remove that authority. I give the Trump admin enough credit to assume they are considering carefully which legal powers they spend this way; the DPA is a valuable chip they may need for something more valuable than Hegseth's temper tantrum.
The third amendment is there for a reason. I am a third amendment absolutist and willing to put my life on the line to defend it.
The government couldn’t justify the killing of innocent civilians.
The government couldn’t justify the killing of the unborn.
The government couldn’t justify eugenics.
There are objective moral absolutes.
The argument so far seems to be "They can do anything, but there are moral absolutes that I can personally list out, and in those cases they can't do those things". That is a hilariously stupid view of the world but sadly a common one.
Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.
> Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.
This is a silly and self refuting statement.
No it isn't and it's a pretty standard argument.
Other than insulting you, my response was pretty damn charitable tbh. I tried to state your argument for you as best I could.
https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...
Well:
"""
Imagine that you created an LLC, and that you are the sole owner and employee.
One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"
There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.
"""
Signing a contract with Anthropic assuming they wouldn’t rug pull over their own moral soapbox was mistake number one.
I love anthropic products and heavily use them daily, but they need to get off their high horse. They complain they’re being robbed by Chinese labs - robbed of what they stole from copyright holders. Anthropic doesn’t have the moral high ground they try to claim.
I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
I don't think that is what is happening. Most likely, the DoD wants Anthropic to produce new systems because of the success of the previous ones, and Anthropic is refusing because the new systems are against its mission. What the DoD seems to be attempting, on one hand, is to call them a supply chain risk to limit Anthropic's business opportunities with other companies, and on the other hand, to simultaneously invoke the DPA so they can compel them to make the new system. But why would the government, citing national preparedness, compel a company to build a system for it while also designating that company such a supply chain risk that other government vendors are forbidden from doing business with it? It doesn't really make sense, other than from a pure coercion perspective.
Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.
Try introducing DPA invocation into your analogy and let's see where it goes!
When I introduce that, I see Anthropic's management getting Tiktok'ed.
It can be true that Anthropic's products are essential for national defense and also true that the management of the company are a supply chain risk.
Is any of that true? Well, so much of what has been done in the name of "national defense" & etc over the past many decades has clearly not been done for reasons that are true, so -when it comes to "national defense"- I don't think that the truth actually matters much at all.
It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.
I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.
Power corrupts, and absolute power corrupts absolutely.
No other country that went through a phase like this has ever recovered. Not even in a century.
Germany, Italy and Japan are all wealthy, stable democracies right now. Not without their problems and baggage, but pleasant places in a lot of ways.
And we're throwing that all out the window.
US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.
Most powers have to pay in blood to do what they want geo politically without question. The US inherited a global state where many potential rivals were weak and helped keep them weak. It was a cost worth paying and its a shame that current US leaders are so cheap and foolhardy to not see what they're throwing away.
Italy: Nominally center-right government, similar problems as Germany, less the energy issues
Japan: just elected a landslide right wing government that is going to change the constitution so they can build an offensive military again
Curious.
The Netherlands for example got their last reset by completely losing the Dutch empire.
Also, some societies have flatter curves than others. That really maps 1:1 to your style and culture of living and where the priorities are.
If your priorities are to be the best as fast as possible (Germany) you will have less time between resets. If your priorities are "let's chill and wait until the coconut falls from the tree into my hand", your society might be able to have a far longer time between resets.
But in the end: It's an iterative process. Which means: There must be iterations.
Once you have understood that, you can just apply the rules learned backward, and they will typically match pretty well. I can buy fractal veggies in a supermarket.
And also, it's just data. Just take some random samples. Even civilizations like the Maya, who had faaaar more time on the clock than the US, went through multiple full resets.
Another random sample I've just pulled out of thin google air: San Francisco Fire of 1851. Everybody knew that wood burns. And that wooden buildings burn. And that wooden cities burn. Did anyone decide to tear down their house and re-build with a different material? No. This happened after everything had burned down to the ground. That was the reset needed.
I think it is very clearly an iterative process. Have a look.
You are not at all working with "data" or "samples". You are just making arguments and supporting them with examples. That's not science, that's philosophy or persuasive essay writing.
You are generalizing those arguments in insane ways. Just like the worst philosophy. You are drawing conclusions from extremely weak claims that don't even map to reality in the first place.
You can't say "Math works to describe the head of broccoli so I can just think hard enough and understand geopolitics". That's emphatically not science.
Germany had to be forced to accept that, although it was advanced, it could not have the European empire it thought it deserved. Japan had to learn a similar lesson. The speed and horror of the reset was in direct proportion to the potential for advancement and high society in these nations.
Ghana, where I come from, for example, has not had to experience any massive upheaval from its pre-colonial and colonial days up till now. Our society is laid-back and moves slowly. Even many other African countries have had to have their national reckoning in the form of civil wars and other huge upheavals in order to settle into a viable way of existing and advancing.
And, like you said, this is iterative. Given the nature of people in a nation and its fundamental geopolitical position, the same question will need to be answered after every N generations. Germany is central to Europe, and already a generation that is far removed from the world wars are starting to rethink why it shouldn't assert itself more strongly. Same in Japan.
The way to analyze the iterations of the US is to understand that the primary threats are from within. It may not implode completely, but the Civil War and the civil rights era show that the potential is there for massive unrest and violence.
And yes, it is interesting to see that on Polymarket people are betting involving a lot of emotions. No, you will not bet on getting killed by masked militia. Nobody is going to say "Hey, I'll bet $1000 that I will get cancer soon!".
But if you leave aside all the emotions and just look at the data: no, there is no realistic scenario in which the US could magically recover from all checks and balances and rules and laws and regulations and decency having been destroyed. Competence, leadership and shared knowledge have been erased in all areas of society - science, development, capitalism, the arts. How are you going to rebuild all of this, especially if the best case is that 60% of the people will agree to rebuild, while 40% insist they need to keep destroying stuff?
Looking at the historical data, this is not a scenario that any prior "high culture" (or whatever to call this) has been able to recover from.
Elsewhere in this thread it was mentioned that Germany still had all the Nazis in place everywhere because otherwise the country would not have worked. But that is not the point. The reset was:
a) Everything was destroyed and MUST be rebuilt, because otherwise we will freeze and starve to death.
b) Your Nazi neighbor is still there, but it has been made VERY clear who the new sheriff in town is: first the Allies, but then pretty much the USA. Germany is still paying for having US soldiers in the country, providing valuable, expensive land for free, and paying for most of the supply chain that is not staffed with US soldiers. And that is the accepted normal.
c) What was left of industry was physically taken as reparations. The Soviets especially, but also the French, dismantled whole factories and machinery and moved them to their own countries (rightfully so).
From what I know from school, reading and talking to grandparents: Germany after WW2 doesn't have much relation to pre-WW2 Germany. Suddenly it was normal that women do "men's jobs" (since so many of the men were dead). McDonalds. Hollywood. Etc.
It really makes sense to have a look at a couple of pictures of what was left of Germany after WW2. It's just someone slapping an existing brand name onto a new product. And in this case, personally I would have regarded the brand as damaged and would have picked a different name.
I didn't say we needed to follow their example to the letter; it was just one counterexample to the "woe and ruin for 100 years" comment.
Societies are not operating like a sine curve, like say summer/winter cycles. They are upside-down "U"s. After the peak comes decline, and after the decline there is NOT recovery/growth again before you have a reset.
Germany was the huge winner of WW2 in the sense that after having had a high society they directly were allowed to get another such run. But as nobody wants to bomb us *) anymore, Germany is also in decline now, waiting for a reset to come one day...
Sadly the USA will also need a reset before things can begin getting better again.
*) I was born in Germany and lived there for 40 years.
Basically analysing the economies of WW2 participants via their automobile industries.
It's staggering how being bombed into the ground has forced technological and economic innovation. And how the inverse, being the bomber, has created stagnation.
Congress has abdicated its powers because as an institution it is broken. Several inland states with total statewide populations smaller than those of major metro areas on the coasts have the same number of senators as every other state - two. This means voters in a lot of states are overrepresented. Meanwhile, they say land doesn't vote, but in the United States Senate the cities and localities with the most people, which drive much of our growth and dynamism, are severely underrepresented. The upper and most important chamber of Congress is thus undemocratic. Because it's an institution deeply susceptible to minority gridlock and dependent on wide margins to do anything, more often than not it simply does nothing. An imperial presidency thus frankly becomes the only way the country can actually get most things done.
This two-senators-for-every-state arrangement was a compromise agreed to when constitutional ratification was in doubt, when the USA was a weak, newborn country of about 3 million people confined to the Eastern seaboard, at a time in our history when our most pressing concern was being recolonized by European powers. The British burned down the White House during the War of 1812; imagine what more they could have accomplished if the constitutional compromises that strengthened the union had not been agreed to.
This compromise has outlived its usefulness. No American today fears a Spanish armada or British regulars bearing torches. These difficult compromises at the heart of America already led to one civil war.
The best we can do is create a broad political movement that entertains as many incriminations as possible (probably around corruption/Epstein, taking pains to avoid any distinction between, say, a Bill Clinton and a Donald Trump) so we can get past partisan bickering and build enough of a mass movement to usher in a new age of constitutional amendment and reform.
If it doesn't happen this cycle of Obama Trump Biden Trump will continue until this country elects someone who makes Trump look like a saint. It can happen. Think of how Trump rehabilitated Bush. We already see the trend getting worse. And if it does, then the post WWII Germany style reset being mentioned here will then become inevitable.
First, the Connecticut Compromise is a democratic underpinning of the US. It was central to the formation of the nation, and any attempt to alter it would be a foundational structural change to the constitution to say the least.
I understand the concerns about one generation binding another without recourse. Legal scholars differ on whether Article V, which implements the compromise, can be amended or not.
But for the sake of argument, let's say it can. It would be an insurmountable task requiring the following:
1. A two-thirds supermajority in both houses of Congress to propose the amendment.
2. Ratification by three-fourths of the state legislatures (38 out of 50 states) or by conventions in three-fourths of the states.
3. Consent of the states that would lose their equal representation in the Senate.
4. Surviving any legal challenges that would likely arise at every step of the process.
The result would be a dramatic redefinition of federalism and democratic representation. This wouldn't be a cosmetic change, it would be a fundamental alteration to the structure of the government and constitution.
Very few things were deemed "unamendable" and entrenched in the constitution before, both explicitly and implicitly, but now it would all be up for grabs. Nothing would be irrevocable anymore.
What's to stop future generations from altering other fundamental principles? While we may complain of being bound by the decisions of our ancestors, we would be opening up a Pandora's box of constitutional instability for future generations, binding them to the whims of a (slim?) majority of the current generation's political agenda.
I think that is the best case scenario. The worst, and I think a very possible scenario, is that states losing representation would claim that such a drastic and material change to the constitution upends the root of the bargain that led to the formation of the union, and would likely seek to secede. You may have achieved your goal of changing the apportionment of the Senate, but at the cost of the union itself. There are far easier and less risky ways to achieve political change.
If I remember correctly, the governor of PR would appoint the first 2 senators. A tactic could be to promise to appoint one Republican senator as an enticement to approve statehood. It's a real shit situation.
There are more Puerto Ricans living in NYC and Orlando than in PR. I'd like to visit before the little family I have left there leaves or dies out.
Not sure about Italy, but Germany, while not without its problems, is a beacon of democracy, progressivism, and self-correction.
> I've never been to Italy but they don't seem very productive either.
Ok green poster. You need to look up more about world economies if you are going to confidently say things like Italy isn’t that productive. Combined with your comment on Jews in Germany I just assume you’re here to push propaganda, but if not please read up more on Italian economic output compared to, I don’t know, maybe the G7 countries?
However, in terms of 'democracy' they're still way worse off than the US right now, even if the US is headed in a bad direction.
This is fallacious as every economy that started at extreme poverty lifted a bunch of people out of poverty.
Unless we invent a time machine and do an A/B test, we can't really attribute the success to policy, since almost any policy would have lifted a bunch of people out of poverty (it's basically impossible not to go up from an extreme deficit). The closest we can do is look at similar scenarios like Taiwan, which also lifted a bunch of people out of poverty while retaining more human rights.
They absolutely are, but per capita, the USA is polluting 49.67% more than China.
Source: https://worldpopulationreview.com/country-rankings/carbon-fo...
The few solar panels in question add a United Kingdom's worth of green energy each year and about a Royal Navy's worth of marine tonnage every two, and they lifted more people out of poverty over the span of two generations than most of the rest of the world combined. Shenzhen produces about 70% of the entire world's consumer drones, now the primary weapon on both sides of the largest military conflict in the world. Xiaomi, a company founded in 2010, decided to make electric cars in 2021 and is now successfully selling them.
As Adam Tooze has pointed out, it's the single most transformative place in the world. If you're not trying to learn from it, you're choosing to ignore the most important place in the 21st century for ideological reasons.
The only thing to say is that it's still authoritarian. Once that takes hold of a country, it's very difficult to shed. Interestingly, both South Korea and Singapore shifted away from being dictatorships and were not ideologically socialist. Countries taken over by Communists remain authoritarian. The true believers will never give that up.
The world knows the US is close to folding in on itself.
They usually don't come back with the same political organization - that's sorta the point. But plenty of civilizations come back in a form that is culturally recognizable and even dominate afterwards.
You imply that there are folks that willing to fix or even recognize that things are broken in the first place
That assumes you have people wanting to fix what is broken - and I have a hard time believing even now that they are in the majority.
MAGA and their supporters? They want to see the world burn, if only for different motives: the "left behind" people in flyover states just want revenge, the Evangelicals literally believe they can cause the Second Coming of Christ by it [1], the Russia fangroup wants to see Ukraine burn to the ground and the ultra-libertarians/dont tread on me folks want all government but maybe a bit of military to go away. That is what unifies so many people behind the Trump banner.
The problem is, on the left side you got a bunch of people completely fed up as well. Anarchists of course, then you got the "left behind" people who still want revenge on the system but aren't willing to enlist the help of the far-right for that goal, you got revolutionaries of all kind... and you got those who believe that the rot runs too deep to fix by now.
And let's face the uncomfortable truth: every one of them, bar the Evangelicals and the Russia apologists, actually has a decent point in wanting to see the world burn. Post-Thatcher capitalism has wrecked too many lives, the US Constitution hasn't seen a meaningful update in decades and no overhaul in centuries, the "checks and balances" that were supposed to prevent a Trump from reaching office or rising to the position of effective dictator have been all but destroyed, the "American Dream" has been vaporware ever since 2007...
Now you’ve got the people whose jobs suck and want their old jobs to come back vs the people whose jobs suck and just want to dispense with the illusion that everyone needs to be employed. Either way, the money-generating corporate automaton needs to cough up some of its profits to fund people’s existence. If everyone could just agree on how, maybe they’d get somewhere.
Meanwhile, I will continue to cling to my slice of the corporate automaton pie.
There was a coup by a foreign adversary and Americans lost.
The country jumped the shark post-9/11 and has been in a slow rot ever since.
Then Obama re-authorized and expanded it. Trump and Biden haven’t even moved the needle, really.
Now they’ve put up tens of thousands of permanently installed facial recognition cameras (not Flock ALPR, those point the other direction to get number plates) all over SoCal and southern Nevada (that I’ve directly observed; presumably it is happening in many other cities as well), and TSA and CBP are collecting as many ID-verified sets of facial geometry as they possibly can, whenever they can. ICE is of course using it nonstop, as well as feeding additional geometry into it. They’re flying drones 30 feet above sidewalks in downtown LA to mass collect faces.
The DoD can’t wait to deploy SOTA AI against Americans en masse.
https://en.wikipedia.org/wiki/Caning_of_Charles_Sumner
The south sent him new canes to replace the one he nearly murdered a guy with. The problem we are experiencing with Trump has been here for a very very long time.
Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.
I hope I am wrong.
But don't let me stop you from believing in a worldview that contradicts reality ... lots of Republicans (and some Democrats) do it too.
It's also a statement entirely divorced from reality when you look at the fact that those winning candidates are not in fact doing that, and neither are the candidates that are getting the most national attention like Talarico.
Newsom has a vested interest in making it sound like he's the maverick here that knows the special formula, but it's been obvious to damn near everyone that they couldn't run out the same losing playbook.
It's a pretty close race with some recent polling indicating that Crockett will win the primary. Impossible to tell though. I clock her as being a more traditional democrat ultimately policy wise.
I'd expect she or Talarico has a good shot at winning in TX. They both have the potential to pivot to a more traditional position in the general election.
My main concern is the current elected leaders of the democrats and how the incoming dems view them. Frankly, if a candidate isn't saying "we need to oust Schumer/Jeffries" then I take that as a pretty decent signal that they align close enough with the moderate position to worry me about the future party.
I worry about the actions of the dems after the election. I think they'll win the midterms, maybe even take the Senate. I even think there's a good shot they win the 2028 presidential election. The problem is that I think they'll run a Biden-style presidency and future campaigns once they get in power. That will set up Republicans for an easy win in 2030 and 2032.
Texas is going to need moderate and centrist votes to swing blue - we're not making the state more liberal at a rate that is gonna hand either of them a victory. Both are actually fairly progressive. But Talarico is a lot better at selling those progressive values to everyday people. The Hispanic vote is one of the biggest factors in Texas, and while they're obviously not a monolith, culturally a lot of them have much more mixed social values than other voting demographics. Statistically, they're way more likely to be heavily religious, and that's at odds with a lot of the social values from more progressive candidates. Talarico effortlessly reframes these issues in a way that aligns with stuff he can directly quote scripture on.
I'm an atheist so I don't care what scripture says on the matter, but it's the sort of thing that plays well with a lot of a key voting demographic that Crockett just can't do.
But they suck at that. And when they failed to convince Biden to drop out early, they should have stuck with him and just ran hard on actual accomplishments during the admin. But Harris was a last minute pivot and it showed. I think she would have been perfectly fine as a president, and I voted for her, but not surprised in the slightest that she lost - and I expected her to lose bigger than she did.
The fact that Trump couldn't even get half the popular vote when running against a last minute ticket change that was never selected to be the presidential candidate by the party she was representing is a pretty big indictment of how unpopular he really is.
I think there's been learning that you can't just be "not Trump", but yeah - I don't know that the party in general has any idea how to handle messaging and narratives.
Yet somehow the progressives found him more unpalatable than the MAGAs, if you look at people like Briahna Joy Gray and Jill Stein.
It’s too far out for me to say I will definitively vote for Newsom, but so far he’s the only Democrat who’s started throwing hands both legislatively and on social media.
I hope the dems figure out how to do more of that and better, instead of returning to shit like the October shutdown and the exchanging leverage for pinky promises from Mr. John “I am an obligate pinky promise liar” Republican.
Gaza and the border were two big issues where Biden and democrats at large were notably not progressive.
And, as you might imagine, funding a genocide is something that's really hard to stomach no matter how good Lina Khan was.
It also really didn't help that where Kamala and her brother-in-law did promise changes, it was to eliminate Khan and double down on prosecuting "transnational criminal organizations". They notably made a hard pivot from what was initially a somewhat progressive message to Kamala campaigning with Liz Cheney and celebrating the endorsement of a war criminal, Dick Cheney.
They somehow thought the lesser evil was actually a greater evil somehow. It’s like watching the pre Nazi party takeover of Germany where the Communists decided that the Social Democrats were worse than the fascists. It makes zero logical sense, unless they are accelerationists and think that the people will have some glorious revolution after everything gets bad enough despite all of history proving the contrary.
Trump is a monster, he's evil, and he had a less evil position on Gaza than Biden did.
In 2 years, Biden did jack shit to curtail Israel's genocide. The majority of the genocide happened while he was president. He continued to sign and promote bills funding Israel and he openly talked about how he was a Zionist and believed in the Israel project. His foreign policy advisors were horrendous. Israel killed so many American citizens and aid workers under Biden and his admin took Israel's side each time or would simply put out a "it's troubling, we are looking into it" which they never did.
But you know why I say Trump was better on Gaza? Because he did 2 things Biden and Kamala refused to do. He met with people that supported Gazans and he forced peace negotiations. Negotiations, mind you, that are worthless and that Israel is violating. Negotiations that have allowed Israel to illegally take over a huge swath of Gaza. But nonetheless, peace negotiations.
Biden would put up a red line, watch Israel cross it, and then literally just move the line (the goalposts) or ultimately ignore the issue all together. There would not be even a peace deal today under a Biden presidency. Literally, we were told to just hope that Kamala who was shutting down this conversation, would be better.
And the autopsy on this issue shows that the Campaigns of both Biden and Kamala were well aware that if they didn't shift on this, they'd lose the election. There are reports that campaign when getting issues from phone banks was instructed to hang up on people that raised Gaza as a problem.
It's not the voters problem that Trump won. It's the Biden and Kamala campaign who prioritized supporting a genocide to continue getting AIPAC funding and support over doing the right thing and the thing their voters were screaming at them to do.
People were watching Nazis go on a rampage and their government giving them billions to do that rampage. They did not vote believing there was no difference between the two parties. That was a glorious failure of the biden and kamala campaign. And something we know they knew because of a leak of an autopsy which democrats don't want to reveal because they still want AIPAC support today.
The policies that actually affect people's lives, there's a lot of overlap for both mainstream dems and republicans.
I live in Idaho, and school teachers here are also extremely underpaid (my kid's teachers all have second jobs). Yet our state has magically found $40M to give away to private schools while it's also asking the public schools to find 2% of their budgets to cut.
In both cases, I think, the solution is simple: give the teachers a raise and probably raise taxes to pay for it. However, both parties are fairly averse to the "raise taxes" portion of the message, so they instead look for other dumb flashy one-time things they can do.
Federal democrats have relied way too heavily on Republicans being a villain and vague "hope and change" promises to carry them through an election cycle. They need to actually "change" things and not just maintain the status quo when they get power.
I'm not sure why you think they are doomed.
Last election cycle the "niche issues" people complain about were overwhelmingly talked about more by people saying they opposed them.
Controlling the narrative is very easy when you have a cowardly or bought media, and plan to traffic in rage and clickbait.
The same is true in Australia, though there's no charismatic left-wing leader emerging, and the Farage-equivalent is a laughing stock who struggles to be coherent at times. But because of billionaire money, she's still up there on the polls.
The US system makes it much harder for new parties to form, so it's probably going to be factions in the existing parties. And, of course, MAGA is the new faction in the Republican party; effectively a new party itself. So the ground is fertile for a new left-wing faction in the Democrat party to rise.
But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?
I don't think the moral high ground Anthropic is taking here is high enough.
If it ain’t repeatedly on the news and designed explicitly to scare and agitate then really people DGAF.
Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.
On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.
What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.
This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.
I'm not sure an American company prioritising the privacy of American people is worth questioning. As a European, Anthropic are very low on the list of companies I worry about in terms of the progressive eradication of my privacy.
If the safeguard against mass surveillance is strictly tied to geolocation (US vs. non-US), it can't be an intrinsic property of the model. It has to be enforced at the API or contractual level. This means international users are left out of those core, embedded protections. Unless Anthropic is planning to deploy multiple, differently-aligned foundation models based on customer geography or industry, the safety harness isn't really in the model anymore.
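To make the point concrete: a safeguard enforced at the API or contract layer, rather than inside the model, reduces to a lookup like the following. This is a minimal hypothetical sketch (all names, regions, and use-case labels are invented for illustration, not Anthropic's actual enforcement logic):

```python
# Hypothetical sketch: a policy gate at the API layer. The model itself
# behaves identically regardless of the outcome of this check.
PROTECTED_REGIONS = {"US"}  # the carve-out only covers domestic users
RESTRICTED_USES = {"mass_surveillance", "autonomous_targeting"}

def request_allowed(customer_region: str, declared_use: str) -> bool:
    """Deny a request only when a restricted use targets a protected
    region; everything else passes through unchanged."""
    if declared_use in RESTRICTED_USES and customer_region in PROTECTED_REGIONS:
        return False
    return True

# The asymmetry described above: same model, same use,
# different answer depending only on geography.
print(request_allowed("US", "mass_surveillance"))  # False
print(request_allowed("DE", "mass_surveillance"))  # True
```

The sketch makes the comment's objection visible: nothing in this gate is a property of the model's weights, so the protection travels with the contract, not with the system.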
I hope the next few elections change this, but right now that's how things are.
Because as far as I know, Anthropic is taking the most moral stance of any AI company.
Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.
> Isn't it standard practice for allies to spy on each other?
Allies? The US is on the brink of breaking up with the EU.
> EU foreign policy already sealed the deal
Not sure what you mean.
Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.
Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.
To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.
Other than that, good on ya.
No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.
No, the concentration camps and gangs of masked thugs violating civil rights are that sign. Threatening to treat a domestic private corporation like an enemy combatant during peacetime for not immediately caving to military demands is that sign. Trying to take over the Federal Reserve, the Federal Trade Commission, and the Nuclear Regulatory Commission, is that sign. The Executive attempting to freeze funds issued by Congress for partisan reasons is that sign.
Department of War is just little boys being trolls.
Anthropic is in negotiation with Hegseth/DoD. Pointing out all the specific actions that Hegseth is doing are fair game to show that Hegseth is nuts.
Bringing in other complaints against other parties, however bad those other parties are behaving, shows a pattern in other people, which might be helpful too. But hegseth's direct actions are stronger evidence.
If the current congress doesn't take action, in 2027 it's quite likely they will.
Of course the most likely current course is that nobody reins in Hegseth/DoD right now, but even if there's no official consequences at the moment there should be a memory and political will to change the system to prevent such abuse in the future.
From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:
> Do not obey in advance.
* https://timothysnyder.org/on-tyranny
* https://archive.org/details/on-tyranny-twenty-lessons-from-t...
Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.
https://en.wikipedia.org/wiki/Law_of_triviality
---
I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.
But the reason for "Department of Defense" name was bureaucratic. It's also not true that DOD is hard to understand.
The Department of War was responsible for naval affairs until the Department of the Navy was spun off from it in 1798, and aerial forces until the creation of the Department of the Air Force in 1947, whereafter it was left with just the army and renamed the Department of the Army. All three branches were then subordinated to the new Department of Defense in 1949, which became functionally equivalent to the original entity.
The Department of War is what it was called when it was first created in 1789 by Congress (establishing the department and the position of Secretary of War), the predecessor entity being called the Board of War and Ordnance during the revolution.
The Department of "Defense" has never fought on home soil. Ever.
It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns: it highlights that they should be putting mental effort into understanding why their current mental model doesn't fit. It's much easier to ignore and stay comfortable if there aren't glaring sirens saying you've got some learning to do.
Most of us can't (or won't) be aware of everything that should be important to us, so having glaring context clues that something incongruous deserves notice is important. It's also why the Trump media approach works so well: it's basically a case of alarm fatigue, as Republicans who would normally side against any particular one of his actions don't listen, because they agreed with some of the actions that Democrats previously raised alarms about.
If they had called it DoD, then that would have been another finger in his eye.
While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.
At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.
All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.
What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.
Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.
There is a word for when the government uses threats to enforce illegal edicts. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the Government is threatening to force a private company to provide services that it doesn't currently provide.
You used “green account” like a slur.
> framing a label update as oppression
That strawman damages credibility.
Except this administration is certainly fascist, and the renaming is yet another facet of it. That article goes through it point by point.
> Dismantling government bureaucracy/corruption
Trump has done more to benefit financially from the presidency, to offer access and influence to anyone who will funnel money into his enterprises or give him gifts, than any president in our history.
How could you possibly write this in good faith? When Trump said he could shoot a person on 5th avenue and people would still vote for him, do you recognize yourself at all in that statement?
Then tomorrow it will be the Department of War. Just as when Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to take both of those and the previously-separate Department of the Navy under a new National Military Establishment led by the newly-created Secretary of Defense (and when it later voted to rename the NME the "Department of Defense"), things changed in the past.
> They have the votes.
Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.
Nicely put. In other words: Department of Morons.
It is clear that the DPA can be invoked for companies posing risks to national security:
> On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."
Furthermore, it should be quite obvious that companies very important for national security can act in manners causing them to be national security risks, meaning a varied approach is required.
No, unlike yourself, I'm just a random brainless bot.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
One of the most challenging problems in AI safety re/ x-risk is that even if you can get one country to do the right thing, getting multiple countries on board is an entirely different ballgame. Some amount of intentional coercion is inevitable.
On the low end, you could pay bounties to international bounty hunters who extract foreign AI researchers in a manner similar to the FBI's Most Wanted list, and let AI researchers quickly do the math and realize there are a million other well-paid jobs that don't come with this flight risk. On the high end you can go to war and kill everyone. Whatever gets the job done.
Either way, if you want to win at enforcing a new kind of international coercion, you need to be at the top of the pack militarily and economically speaking. That is the true goal here, and I don't think one can make coherent sense out of what Anthropic is doing without keeping that in the back of their mind at all times.
The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.
This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.
It's very possible that $20 Claude subscriptions isn't delivering on multiple billions in investment.
The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people (b) wholly owned by owners and employees and have no fiduciary duty.
- Anthropic says "no"
- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)
- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."
Bonus points if it's some of the hyperscalers like AWS.
Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.
"Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."
The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.
The "values" on display are everything but what they pretend to be.
These blurbs always mainly communicate that they are in line with US foreign policy. And then one can look at the actual actions rather than the rhetoric of US foreign policy to judge whether it is really in line with defending democracies and defeating autocracies.
If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.
>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
which I find frankly disgusting.
Dario’s statement is in support of the institution, not the current administration.
But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?
9/11? Pearl Harbor?
Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.
You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.
As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army.
All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.
But it is absolutely not the case that the last time the US defended freedom through military means was WWII.
Corporations, natural resources or getting a blowjob from the intern ... these are neither democratic ideals nor democratic institutions
I'm not going to go through all of those wars one-by-one, but are you joking with Iraq War II? That war was sold on the lie that Saddam Hussein had weapons of mass destruction and was somehow behind 9/11, by a president who himself had stolen the 2000 election by getting his brother to halt the counting of votes in Florida.
I miss the days when the mega-brands whose work I admired, still did such works.
What are the odds they will rebrand Misanthropic by then?
Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.
“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.
But if the “performance” involves doing good things, at the end of the day that’s good enough for me.
The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough.
https://www.anthropic.com/news/building-safeguards-for-claud...
https://alignment.anthropic.com/2025/introducing-safeguards-...
In your second link, that team was defunded; the person heading it just left unceremoniously: https://x.com/mrinanksharma/status/2020881722003583421?s=46
While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.
NSA and other three-letter agencies happily do it under cloak and dagger.
On a quick search I came up with an article, that at least thematically, proposes such ideas about the current administration "Nationalization by Stealth: Trump’s New Industrial Playbook"
Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.
The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.
> The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.
That's what made the show so ahead of its time. Once capability reaches a certain level, it's no longer about intelligence. It's about values. Feels like we're living through that shift now with all the alignment work around LLMs. And it's only going to matter more as capability scales.
“It remains the Department’s policy that there is a human in the loop on all decisions on whether to employ nuclear weapons,” a senior defense official said. “There is no policy under consideration to put this decision in the hands of AI.”
This indicates the Administration's support for and compliance with existing US law (Section 1638 of the FY2025 National Defense Authorization Act). https://agora.eto.tech/instrument/1740
Washington Post: https://www.washingtonpost.com/technology/2026/02/27/anthrop...
(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)
This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.
Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).
Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that’s one of the most extreme cases — plenty have got away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.
Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.
> As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.
> spy on me
People forget to substitute "my elected representative" or "my civil service employee" or "my service member" or their loved ones for "me".
I, personally, have nothing significant that a foreign government can leverage against our country but some people are in a more privileged/responsible/susceptible position. It is critical to protect all our data privacy because we don't know from where they will be targeted.
Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.
There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.
There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this to be a legal fight, may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.
I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.
The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'
Of course this all does get very confusing. Because the 4th Amendment does generally apply to people, while the 2nd Amendment's "people" magically gets interpreted as some mumbo-jumbo about people of the "political community" (Heller), even though from the founding until the mid-1800s most people it protected who kept and bore arms didn't even bother to get citizenship or become part of the "political community".
Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.
It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.
SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If the god-men in black costumes and wigs say the parchment agrees, then the act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to shariah law using a number of Mufti/Qazi to explain why god agrees with them about whatever it is they think should be the law.
If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.
No.
> How do I filter this out on mobile?
How do you filter out things that you are going to mistake for AI?
That seems likely to be tricky.
If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?
In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.
That doesn't apply to non-citizens outside the US, simply because the US Constitution doesn't require it to.
I'm not defending this, just explaining why it's different.
But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.
https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...
I believe every country (or bloc) should carve an independent path when it comes to AI training, data retention and inference. That makes the most sense, will minimize conflicts, and will put people in control of their destiny.
It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.
I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.
Countries routinely use other countries intelligence gathering apparatus to get around domestic surveillance laws.
The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:
1) Lack of term limits across all Federal branches
and
2) A general lack of digital literacy across all Federal branches
I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?
The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.
So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.
What this means in practice is that US 3-letter agencies have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around the domestic restriction by outsourcing their spying needs to 3-letter agencies in other countries (e.g. the NSA at one point might outsource spying on US citizens to GCHQ).
A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.
"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."
Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.
Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.
(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)
EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.
If not, then why are you punishing that company for refusing to deal with the US gov?
Or is it just because they worded their opposition in a certain way that you dislike?
I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?
You genuinely think you're not already being surveilled? And that Anthropic is somehow responsible with just a few words in a press release? In what world are you living in and how is the rent there?
"You don't like capitalism, why do you pay for things then?"
> And that Anthropic is somehow responsible with just a few words in a press release?
They seem to believe that they're a pretty important piece. That aside, this is a declaration of intent, it doesn't need to have anything to do with real-world capabilities.
Just because something will happen anyway doesn't mean you shouldn't oppose it.
Optimistically, they can still refuse to do work that would aid in foreign intelligence gathering, by arguing that it would also be beneficial for domestic mass surveillance.
I'll admit that the phrase "We support...foreign intelligence and counterintelligence" is awful as hell, and it's possible that my apologist claims are BS. But Anthropic has very little leverage here (despite having a signed contract and so legally fully in the right), so I could see why they're desperate to stick to only the most solid objections available.
Not to most US citizens, I'm sure. But there's millions of non-Americans who have given them their hard earned cash. It's not a good look, and it did not need to be phrased that way as it substantially undermines the impact of their point.
I mean, I guess from '65 to around 96? We had a good run.
If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the department of war to exploit backdoors, and anthropic (or any ai company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.
The only solution is to make it technically impossible to apply AI in these ways, much like Apple has done. They can't be forced to comply with any government, because they don't have the keys.
Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.
Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.
I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.
Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied "do no harm" was pretty much one of the rules.
1. The military wants a whole new model-training setup, because the current models are designed to have these safeguards, and Anthropic can't afford that (it would slow them down too much; setting up and maintaining another pipeline would take a lot of engineering talent and time).
2. The military doesn't want to supply Anthropic with usage data or personnel access to verify its (lack of) use in those areas.
3. It's something almost completely unrelated to what's going on in the news.
His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.
To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.
They're now trying to change the contract that they don't like.
That may be, but the bigger-picture purpose of the military is a kind of welfare Republicans like. In that sense, Republicans are in charge, Republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.
It has little to do with acquiring instruments of war, or with war at all. Its mission keeps growing and growing; it has a huge mission, and very little of that mission is combat. This is what their own leadership says (and complains about). 999 out of 1,000 people on its payroll are doing duty outside of combat or foreseeable combat.
It's not crazy to think that models that learn that their creators are not trustworthy actors or who bend their principles when convenient are much less likely to act in aligned or honest ways themselves.
For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.
If something like that existed, it wouldn't be impossible to uncover:
1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.
2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.
3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.
Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (maybe even assisted from within).
I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.
We can't possibly keep that genie in that bottle.
But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.
Ugh.
Previous case of tangling with the Government.
https://youtube.com/watch?v=OfZFJThiVLI
Jolly Boys - I Fought the Law
Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.
[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...
I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.
The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.
It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.
I feel like what most corpos would do, would be to just roll along with it.
Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time, I felt like he sounded more like a politician than an entrepreneur.
I know Anthropic is particularly more mission-driven than, say, OpenAI. And I respect their constitutional approach to training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and their missions gives me chills.
Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.
Mass surveillance: Agreed… but I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.
The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.
Mark my words: this will be Patriot Act++
If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.
If the limitations are contractual, then there is some room for negotiation.
You'd be surprised at what is considered acceptable. For example, being unable to repair your own equipment in battle is considered acceptable by decision-makers who accepted the restrictions.
https://www.warren.senate.gov/newsroom/press-releases/icymi-...
The statement goes on about a "narrow set of cases" of potential harm to "democratic values"... uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, the pursuit of happiness, etc.
I'm not sure who's targeted here. The folks that want to invade the EU?
As a European I’m kinda... concerned now.
"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values"
Translating to human language: mass surveillance in the USA "is incompatible with democratic values," but if we do it against, say, Germany or France, that's OK. Ah, and if we use AI for "counterintelligence missions," for instance against <put here an organization/group that the current administration does not like>, that's also OK, even if it happens in the USA.
https://futureoflife.org/open-letter/lethal-autonomous-weapo...
He's now on X bashing Anthropic for taking this same stance. I know this would be expected of him, but many other Google AI researchers signed this, as did Google DeepMind the organization. We really need to push to keep humans in the kill-decision loop. Google, OpenAI, and xAI are all just agreeing with the Pentagon.
Good on Anthropic for standing up for their principles, but boo for doing the law of the land the discourtesy of acknowledging their vanity titles.
I can never tell how much of this is puffery from Anthropic.
I do think they like to overstate their power.
The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.
You may not agree with it, but I appreciate that it exists.
Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war-fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain drain it precipitates.
I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security.
That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.
Every trigger pulled should have moral consequences for those who pull it.
That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] government.
Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.
Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://www.anthropic.com/news/bengaluru-office-partnerships...
I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).
That opening line is one hell of a setup. The current administration is doing everything it can to become autocratic, thereby setting themselves up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to see such a succinct opening instead of just slop.
This is a very chauvinistic approach... why couldn't another model replace Anthropic here? I sense it's because government people like using the Excel plugin and the font has a nice feel. A few more weeks of this and xAI is the new government AI tool.
"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."
It also acknowledged that this is not what is happening...
Does this mean they'd be OK with having their models used for mass surveillance & autonomous weapons against OTHER countries?
A clarification would help.
"We will build tools to hurt other people but become all flustered when they are used locally"
Guessing their comment attempts to expose the hypocrisy of America's keenly supported overseas military activity being in conflict with its fiercely defended domestic free-speech and liberty principles. Deep down, most allies of America want America to defeat foreign adversaries and keep defending those liberties many of us share. In other words, there's no hypocrisy, carry on!
Was this written by the state department?
How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?
> importance of using AI to defend the United States
> Anthropic has therefore worked proactively to deploy our models to the Department of War
So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named for pure offensive capability, with no defensive element.
You don't even have to argue the point: declining to engage with a Department of War is not declining to support the defense of the US. That should be the end of the discussion here.
At any rate, I'm incredibly pleased Anthropic has chosen to stick by their (non?) guns here. It was starting to feel like they might fold to the pressure, and I'm glad they're sticking to their principles on this.
It's a mistake for the Trump administration because there are only downsides to threatening Anthropic if they need them, and if they try to regulate AI in the West, China wins by default.
I do not want to be "defended" by tools controlled by the US government, with or without Trump. But with Trump it is much more obvious now, so I'll pass.
Perhaps AI use will make open source development more important; many people don't want to be subjected to the US software industry anymore. They already control WAY too much - Google is now the biggest negative example here.
All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.
But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.
I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.
For better or worse, inside the border on this map, China has fairly imperialist policies. Outside it, not so much: https://en.wikipedia.org/wiki/Map_of_National_Shame
That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.
But it's an important point when considering China's place in the world.
And Belt and Road is the Marshall Plan writ large; the Marshall Plan was itself considered one of the largest imperialist plans ever run by the USA, and B&R covers many, many countries outside of that map. You'll notice all of these loans they've offered have very favorable terms for them: it's arguably many times more exploitative than the Marshall Plan.
Citation needed?
The US and its allies have invaded or intervened in 20+ countries in the last 20 years in the name of "western values," where values means $$$$ and hegemony.
Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?
Tibet occupation. Taiwan encirclement and ongoing military exercises. Strong-arming African and Asian countries that made the mistake of signing up for Belt & Road. Tiananmen Square. Illegal foreign police stations. Uyghur/Xinjiang genocide and concentration camps. Repeated invasion and occupation of Indian territory in the North East and North West. The Great Firewall of China - occupation and suppression of its own population. Ongoing Han settlement of Tibet, Xinjiang, and other ethnic regions. Violent destruction of Hong Kong democracy (which was a condition of the handover). Spratly Islands occupation. Attacks on Filipino shipping and coast guard. Ongoing attacks on Japan's Senkaku Islands.
Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?
Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.
The government in exile remains the government in exile.
You'd have some standing if China dropped control over its imperial holdings, rather than pretending they're part of China.
However, I’d still maintain that before that, China’s foreign policy was more focused on maintaining territorial sovereignty against the threat of Western imperialism vs. focused on expansion or foreign influence: https://en.wikipedia.org/wiki/History_of_foreign_relations_o...
Meanwhile, the entire territory of the U.S. is predicated on one of history’s largest genocides, and a consistently expansionary foreign policy on top of that.
The one we live in, where they are constantly surpassing international law in international waters in the South China Sea?
The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?
The one we live in, where they brutally cracked down on Hong Kong when they did not abide by the 50 year one country two systems deal, not even making it half of the way through the agreed period?
The one we live in, where there is constant threat to Taiwan?
It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.
[1] https://apnews.com/article/boat-strikes-military-death-toll-...
> Anyone believing these're equivalent imperialism activity is hypocrite at best.
In terms of equivalence, I would say that, based on their intentions, they wish they could be more imperialist, but would rather let the US burn on the way down.
[1] https://www.cnn.com/2023/10/03/asia/philippines-south-china-...
Considering it's PRC-claimed territory. Literally 100% of PRC claims are inherited from the ROC, i.e. the PRC has expanded no claims, and has actively settled 12 of 14 land borders (the most on earth), essentially all with 50%+ concessions, i.e. the PRC ceded more land in negotiations. That, OBJECTIVELY, makes the PRC the most benevolent rising power in recorded history. Any government losing land in so many border settlements is committing treason. Also note the PCA ruling is not international law, so what the PRC does in the SCS is not even legally wrong (as in, they legally can't be wrong, since UNCLOS cannot rule on sovereignty). Or that the PRC was last to militarize SCS islands (except Brunei, who is a good boi), and the PRC conceded the ROC/TW's original 11-dash line down to 9 dashes, which even in the SCS disputes makes the PRC the only party to have made concessions.
The PRC is objectively the LEAST imperialistic rising power by any sensible definition, i.e. expanding into territories outside its claims - claims the PRC didn't even make, but again inherited from the ROC when UN recognition changed.
Let's just compare to the Monroe Doctrine [1]. What it actually means has gone through several iterations since, I think, Teddy Roosevelt's time; it's that the United States views the Americas (being North and South America) as the sole domain of the United States.
This was a convenient excuse for any number of regime changes in Central and South America since 1945. The US almost started World War Three over Cuba in 1962 after the USSR retaliated to the US putting nuclear MRBMs in Turkey. We've starved Cuba for 60+ years for having the audacity to overthrow our puppet government and nationalize some mob casinos. Recently, we kidnapped the head of state of Venezuela because reasons.
But sure, let's focus on China militarizing its territorial waters.
Brunei, Malaysia, Indonesia, Vietnam, the Philippines, and Taiwan will all be happy to know that we've solved it - we can just abandon it all to China. Problem solved!
This is a silly argument. There are significant territorial disputes that China is extremely aggressive on, international tribunals have ruled them as violating international law in international waters and in sovereign waters of other nations, etc.
Sorry, did you mean East Vietnam Sea?
Was referring to Tibet.
The Uyghurs are also a major problem from a social perspective, but not directly related to imperialism/expansionism/military-industrial-complex stuff.
But Taiwan is very obviously a totally separate country no matter what fictions anyone employs. If you are trying to talk about the thin veneer of everyone going "Uh huh, sure, China, yep Taiwan is totally part of you, wink wink, nudge nudge" as somehow making China not imperialist when Taiwan basically lives under the perpetual threat of a Chinese military invasion and having their own democratic form of government overthrown and replaced with the CCP, then... I don't really know what to say.
I suppose we could argue about imperialism being more of an economic thing - in which case this all still holds up. China's investments in Africa are effectively the same playbook the US has run in developing nations for years. The US learned it from prior imperialist nations, but Belt and Road is nearly a carbon copy of what the US has done in other places.
But let's look at what the original poster was actually talking about - saying that China is safe because they don't have a military industrial complex because they're not imperialist. The proper word to use, if we want to get down to the semantics of it all, would be expansionist - but it's still not true. China has the 2nd largest military industrial complex in the world, and the gap is shrinking every day between them and the US. And if you were to look at wartime capacity, where China's dual-use shipyards could be swapped to naval production instead of commercial, a huge portion of that gap disappears immediately.
I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.
Really? Is China non-imperialist regarding Taiwan and Tibet?
Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.
It is 100% factually accurate to say that the People's Republic of China is not imperialist.
[1]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...
The One China policy is imperialism.
This is the China that is not only threatening to invade Taiwan but doing live-fire exercises around the island, and threatening and attempting to coerce Japan for saying it would come to Taiwan's defense.
Your comment is ridiculous. It reads like satire.
Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.
Taiwan saying otherwise would immediately trigger an attack from the PRC.
It's still imperialism that China is dominating a neighbor to require it to state a certain position, especially when it's very far from the de facto reality on the ground: that Taiwan is clearly separate.
I also note China's aggressive and violent colonization and expansive claims of the South China Sea.
Taking any nation/land/sea by force is imperialist, by definition.
You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.
The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.
And those islands you mention are in the South China Sea.
https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...
You can't choose to work with OFAC-designated entities; there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.
After all the standing up for democracy, this is my favorite part. "Your reasoning is deficient. Dismissed."
The Chinese are releasing equivalent models for free or super cheap. AI and energy costs keep going up for American AI companies, while China benefits from lower costs. So yeah, you have to spread FUD to survive.
Cheney's office touched the presentation given by Gen. Colin Powell, which led Congress to believe there was a need to invade Iraq to save the US from WMDs. Tours of duty were extended from 3 months to 24 months because of "stop loss." Subsequently, the United States paid out trillions for debt-financed war, and some $39 billion to Cheney's company KBR.
Today you learned that the oil company Cheney worked for (Chevron) was trying to bully Afghanistan into a pipeline deal in 1998 and also in 2001.
Cheney donated less than $10 million of his Halliburton/KBR returns, mostly to a heart-medicine program in his own name, and retained a compensation package.
Implying other civilians can be put at risk
The power lies with the US Govt.
And it's corrupt, immoral, and unethical, run by power-hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.
Ultimately, Anthropic will fold.
All this is to show their investors that they tried everything they could.
Imagine Anthropic is declared a "supply chain risk" and thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the government telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to power brokers?
How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currently uses Anthropic.
Oh, and that includes Palantir, which is deeply embedded in the government.
Side example: remember the 6 congresspeople who made the video about military orders? They won.
Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.
But Hegseth and Trump are abusing federal powers at a rapid clip.
I'm guessing Anthropic would regret any deal with that administration, and could lose control of their technology.
(Stanford Research Institute originally limited their DoD exposure, and gained a lot of customers as a result.)
They don't have any brand poison, unlike nearly everyone else competing with them. There's some serious negative equity in that group, be it GOOG, Grok, META, OpenAI, M$FT, DeepSeek, etc.
Claude was just being the little bot that could and, until now, flying under the radar.
They get to look good by claiming it’s an ethical stance.
It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or send them to concentration camps, threatening state governments and private companies into suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.
What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?
To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.
They already have the best and most expensive toys in the world, and they mostly seem to be waging aggressive wars with them. Perhaps if the toys weren't so shiny and didn't make it all so one-sided, they wouldn't?
I'm guessing this is because Anthropic partners with Google Cloud, which has the necessary controls for military workloads, while xAI runs in a hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.
There are military officials saying they need anthropic because it is so good. They can't live without it.
All of this really helps Anthropic.
It's good publicity for them. And it gets the military on record saying they are so good they are indispensable. And they can still look like the good guys for resisting, because they were forced.
What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?
My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.
It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.
He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.
Maybe I should call ChatGPT "Bomb"... I already use "make it so" for coding agents, so...
Trump and his cronies are short timers. They will all be gone in a few years, many in prison, many in the ground.
Treat them with abandon and disdain, because they are the worst people in the history of the USA. Stand on your principles because they have none.
> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.
Why not do what the US is purported to do, where they spy on other countries' citizens and then hand over the data? I.e., adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country," and just surveil from another data center.
> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.
Yes, well, that doesn't sound like that strong an objection: fully automated defence could be good, but the tech isn't good enough yet, in their opinion.
genuinely curious, I got nothing
In WWII, we saved the world from what is now seen as some really evil stuff. Not alone, of course; Europe and Russia made huge sacrifices, and that's where much of the war was fought. But US arms and blood were the decisive factor: Germany was winning, Japan was winning.
After WWII, the US decided to rebuild the world. We turned our enemies (Germany, Japan) into our close allies.
And the people who did it were really and seriously morally committed to doing what they thought was right. It was about building a country, working together. Not the insane politics of today.
Look, it wasn't all rose-tinted glasses. Bad stuff happened, and McCarthy was worse than what we currently have. And the civil rights movement and all of that. And the stupid wars: Korea, Vietnam, all the smaller police actions. Bad shit was done.
But on balance, the US was seen as a force for good, and the guarantor of world peace and the prosperity that it allows.
Does not mean that very bad things were not happening at the same time.
But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.
I understand the risk, but that is the pill.
The United States, even before Trump, has always been about projecting power rather than spreading democracy. There are several non-Western former colonies that do democracy better than the US. Despite democratic backsliding being a worldwide phenomenon, very few countries have slid back as much as the US. The US has regularly supported or even created terrorists and authoritarian regimes if it meant that the country wouldn't "go woke." The ones that grew democracy grew in spite of it.
This statement shows just how much they align with the DoD ("DoW" is a secondary name that the orange head insists is the correct one; using that terminology alone speaks volumes) rather than misalign. This, coupled with their dropping of their safety pledge a few days ago, makes it clear they are fundamentally and institutionally against safe AI development/deployment. A minute disagreement on the ways AI can destroy humanity isn't even remotely sufficient if you're happy to work with the bullies of the world in the first place.
And the reason is even more ridiculous. Mass surveillance is bad... because it's directed at us rather than the others? That's a thick irony if I've ever seen one. You know (or should know) that foreign intelligence has even fewer safeguards than domestic surveillance. Intelligence agencies transfer intercepted communications data to each other to "lawfully" get around those domestic surveillance restrictions. If this looks at all like standing up, that's because the bar has plunged into the abyss, which frankly speaking is kind of a virtue in the USA.
https://www.wsj.com/tech/ai/openais-sam-altman-calls-for-de-...
This is why people should support open models.
When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.
Not joking: I've heard from sources that hardliners in the CCP think they can exterminate all white people, followed later by all non-Han. But just keep on disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do too.
Do these rules apply to them too?
they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.
Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!
We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?
Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.
Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.
There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.
The current president is stretching executive power as far as it can possibly go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.
He's also threatening to invade Greenland, and has already kidnapped the president of Venezuela - but that's ok, because we're Good. Other countries who invade people are Bad though.
And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?
Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.
We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.
And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.
So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.
Because if there were some kind of concession, it would have been simplest just to work with Anthropic.
Delete ChatGPT and Grok.
That is, the news here is that DoW (formerly DoD) is willing and able and interested in using SOTA AI to enable processing of domestic mass surveillance data and autonomous weapons. Anthropic’s protests aside, you can’t fight city hall, they have a heart attack gun and Anthropic does not. They’ll get what they want.
I am not particularly AI alarmist, but these are facts staring us right in the face.
We are so fucked.
Ads are coming.
It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern-day Holocaust tabulation-machine companies, and this time the victims are selected by a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to The Hague for war-crimes trials.
Total humiliation for Hegseth, sure there will be a backlash
> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.
Ah, another head of a huge corporation swears to defend his stockholders' commercial interests through imperial war against other nation-states. And of course "we" are democratic while "they" are autocratic.
The main thing that's disappointing is how some people here see him or his company as "well-intentioned".
Working with the DoD/DoW on offensive use cases would put these contracts at risk: Anthropic most likely isn't training independent models on a nation-by-nation basis, so it would be shut out of public and even private procurement outside the US, because exporting the model for offensive use cases would be export controlled, and governments would demand parity in treatment or retaliate.
This is also why countries like China, Japan, France, the UAE, KSA, India, etc. are training their own sovereign foundation models with government funding and backing, allowing them to use them on their own terms, because it was their governments that built or funded them.
Imagine if the EU had demanded sovereign cloud access from AWS right at the beginning, in 2008-09. This is what most governments are now doing with foundation models, because most policymakers, along with a number of us in the private sector, view foundation models through the same lens as hyperscalers.
Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.
[0] - https://www.anthropic.com/news/mou-uk-government
[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...
[2] - https://www.anthropic.com/news/opening-our-tokyo-office
[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008
I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!
This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.
Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.
If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.
Having been identified back then, this issue has been systematically stamped out in modern militaries through training methods. Cue high levels of PTSD in modern frontline troops after they absorb what they actually did.
AFAIK the rounds-fired-to-kills ratio is still north of ten thousand to one in most modern conflicts.
I’ve heard anecdotally that drone operators in Ukraine have a ratio of about ten drones per kill and rack up multiple kills per day every day. Supposedly the pilots “burn out” due to the psychological impacts.
[0] https://isme.tamu.edu/JSCOPE00/Kilner00.html#_edn3
[1] https://www.sfgate.com/science/article/THE-SCIENCE-OF-CREATI...
[2] https://journals.sagepub.com/doi/10.1177/0956797615579274
[3] https://journals.sagepub.com/doi/10.1177/0018720815605703
[4] https://www.fairobserver.com/world-news/us-news/this-is-how-...
[5] https://thestrategybridge.org/the-bridge/2016/7/29/reflectio...
[6] https://thestrategybridge.org/the-bridge/2016/7/29/reflectio...
[7] https://mn.gov/governor/newsroom/press-releases/?id=1055-441...
[8] https://press.armywarcollege.edu/cgi/viewcontent.cgi?article...
[9] https://www.usar.army.mil/Portals/98/Documents/Marksmanship/...
[10] https://www.bits.de/NRANEU/others/amd-us-archive/FM3-22.9(03...
[11] https://www.winnipegfreepress.com/opinion/analysis/2011/04/3...
AI should never be used in military contexts. It is an extremely dangerous development.
Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.
I prefer they get shut down; LLMs are the worst thing to happen to society since the invention of the nuclear bomb. People all around me are losing their ability to think, write, and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.
Remember, the person who showed their work on the math test in detail does 10x better than the guy who only knows how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator, lol.
> Anthropic has therefore worked proactively to deploy our models to the Department of War
This should be a "have you noticed that the caps on our hats have skulls on it?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to gain Trump's favor, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.
There is no such thing as a half-deal with the devil. If Anthropic wants to make money from AI misclassifying civilians as military targets (or, as has happened, from identifying which residential building should be collapsed on top of a single military target, civilians be damned), good for them, but arguing that this is only okay as long as said civilians are brown is not the moral stance they think it is.
Disclaimer: I'm not a US citizen.
But at a more general level, I'd say that unethical actions do not suddenly become ethical when one's business is at risk. If Anthropic considers that using their technology for X is unethical and then decide that their money and power is worth more than the lives of the foreigners that will be affected by doing X then good for them, but they shouldn't then make a grandstand about how hard they fought to ensure that only foreigners get their necks under the boots.
You must not be American, then. We all know that these corporate favoring contract terms are managed through campaign contributions; savvy?
Anthropic must have high school interns as govt liaisons, and not very bright ones
I'll be signing up to Claude again; Gemini's been getting kind of crap recently anyway.
I guess they're evil. Tragic.
Skynet in Terminator was scary. The AI Skynet is even scarier - and sucks, too.
In that climate, this is more of a stand than what everyone else is doing.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.
[1]: https://news.ycombinator.com/item?id=47145963#47149908
After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".
And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.
> we cannot in good conscience accede to their request.
That's very specifically worded to not say "under no circumstances will we do this".
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
Is not saying they won't eventually be included.
They've left themselves a way to backtrack, and given the care with which this statement has been crafted, that's surely deliberate.
Is anthropic different? Maybe. But personally I don't see any indication to give them the benefit of the doubt.
Or else what?
What's worse, someone in their PR department will read this thread and be disappointed that the spin didn't work.
There are outcomes where the US government seizes the company. Not super likely, not impossible.
It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.
I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.
Are there historical examples in the US specifically where we've nationalized a business?
Because we've certainly invaded countries and assassinated leaders over exactly the same.
ETA: I could have answered my own question with two minutes of research. Yes, we have: https://thenextsystem.org/history-of-nationalization-in-the-...
It is indeed a naive, or more likely a dishonest thing to do.
Anyone can promise anything. When there's little to no accountability and public memory/opinion doesn't last a week (or is easily manipulated anyway), then promises mean literally nothing. Very like how, in politics, temporary means permanent.
Or HackerNews itself, with them implementing a little Big Brother. It will, of course, absolutely and without a doubt only "nudge" people, and it will absolutely, under no circumstances, pinky promise, never get any worse or do anything else but that.
When there's millions of fools, then those, who actually recognize that they are being fooled, are rarely ever significant in numbers. They're drowned out by the fools, until said fools "wake up" and cry "if only we had known!".
Well ... you could have known, but in your mindlessness you didn't listen and think.
"It must be true, because they say so. D'uh. What are you, dumb?"
I get it, to a degree: people gotta eat, the market is awful right now, and most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point anyway. Why not graduate to doing it with weapons of war too? But, personally, I sleep better at night knowing nothing I've made is helping guide missiles into school buses. That's just me.
In general - I don’t know if it’s a coincidence, but here on HN, for example, I’ve noticed an increasing number of comments and posts pushing the narrative of how “well-intentioned” Anthropic is.
Imagine what the conversation would be like if Mattis, a highly decorated and respected leader were still the SecDef. Instead we are seeing bully tactics from a failed cable news pundit who has neither earned nor deserved any respect from the military he represents.
We are two elections and a major health issue away from a complete change of course.
But short sightedness is the name of the quarterly reporting game, so who knows.
I keep hoping it’s almost over.
Not trying to be the Luddite. Had multiple questions to AI tools yesterday, and let Claude/Zed do some boilerplate code/pattern rewriting.
I’ve worked in software for 35 years. I’ve seen many new “disruptive” movements come and go (open source, objects, functional, services, containers, aspects, blockchains, etc). I chose to participate in some and not in others. And whether I made the wrong choices or not, I always felt like I could get a clear enough picture of where the bandwagon was going that I could jump in, or hold back, or kind of. My choices weren’t always the same as others, so it’s not like it was obvious to everyone. But the signal felt more deterministic.
With LLM/agents, I find I feel the most unease and uncertainty with how much to lean in, and in what ways to lean in, than I ever have before. A sort of enthusiasm paralysis that is new.
Perhaps it’s just my age.
Why? That's an unrealistic fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.
If you want something to worry about, worry about this:
> And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.
> Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.
> And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)
https://www.theguardian.com/us-news/2026/feb/27/trump-voting...
https://electiontruthalliance.org/
(Attributed to Stalin, but likely comes from a despot earlier in the history.)
I do agree with you that no such authority exists, but this administration seems to get away with a lot of things they have no authority to do.
I recently read up on how the House of Representatives renews itself and quite frankly it's one of the most beautiful processes I've seen, completely removing the influence of the prior congress.
It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.
It's absurd.
It's simple: If you do not like working with the military, cancel your contract with the military and pay the penalties.
They are explicitly not doing that.
You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.
Eh? But they do like to work with the military. How else are you going to "defend the United States and other democracies, and to defeat our autocratic adversaries"?
They want to work with the military, with just two additional guardrails.
The First Law of Money: Money buys the Law.
> “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”
...Which is funny, but technically speaking, it's (more or less) a paraphrasing/extrapolation of the very serious political science definition of a state, “a monopoly over the legitimate use of violence in a defined territory”
[1] Minus the last line, which I will allow others to discover for themselves
Look at how Elon Musk behaved. Do you think VCs gladly approved what he did with Twitter? They might want to keep chasing quarterly results, but sometimes, as with Zuckerberg, they can't: not enough money. Similar examples: Google's funding rounds, or how often the more financially backed politician loses to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants, and that guy is a very wealthy person. There are always limits, putting the money law in second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.
If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.
At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.
Capital is extremely fungible. Typically extremely overleveraged. Lawyers are on the other hand extremely overprotective. They won’t generally risk the destruction of capital, even in slam-dunk cases. Vide WeWork.
So in the last 20 years nothing good has come out of the software industry (if that's the industry you mean)?
I find it somewhat ironic, because this type of generalization has, for me, the same issue as some of the people saying "they want to make the world a better place": a refusal to accept that reality is complex.
There were huge benefits for society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000, lots of people were saying "Microsoft will lock us in forever." 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia, and others were once great and the only ones, but ultimately got copied and pushed from the spotlight.
Additionally I state in the end that I do believe it’s possible.
If I see "everyone," I would expect it to actually mean "everyone," within the stated constraints. The word "everyone" has a certain meaning and is very powerful; why use it for situations where other words like "many" or "most" might be more appropriate?
Of course, I wouldn't have said so otherwise.
Here's another one: every pedant in this website never adds anything useful to any conversation.
Consider also the part that is going unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans but he says nothing about mass surveillance of anybody else (and, in fact, is proactively giving foreign intelligence a green light in his address) and is deliberately avoiding any discussion on the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed something like 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he's successfully captured the popular sentiment, people are going to consider him the moral champion without bothering to look at these and other glaring contradictions.
The possibility of turning on heated seats in a car you own for a small monthly fee is absurd, yet very real. I'm looking forward to the enshittification of current AI tools.
The first two are definitely "heroes who lived long enough to become villains"; Oculus is more of an "I reckon," given how it was seen right up until it got bought by Facebook.
Adobe?
Jeff's original vision was "relentless customer focus" and ...
actually on second thought I'm seeing the argument 'Amazon stopped caring about customers and is in full enshittification mode at this point'.
But maybe Amazon circa ~2010/2015, or Google around 2010 was still pretty close to the original vision of customer service/organizing the world's information.
Or Apple? They're still making nice computers, although not sure they count as VC backed.
Stripe perhaps? Hashicorp?
Apple wanted to make personal computing stable - they were absolutely VC backed
I suppose the original question is vague enough that "the founder's vision" could encompass anything, even if the vision changes. But then there's nothing really to measure against: the company is stable only in the sense that it's whatever the function of the person who started the organization happens to be, and even that you could debate.
Except for the understanding that it's foolish to believe anything that sounds too good to be true. Yes, believing that people who want to make money and achieve positions of power also want to make the world a better place is absolutely foolish. Ridiculously foolish.
I understand Anthropic is not public, but I assume there's an IPO coming.
I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.
i.e. Fiduciary Duty Considered Harmful
To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).
This is pretty low on my list of moral concerns about AI companies. The much more concerning and material things include things like…what this thread is actually meant to be about.
VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.
> "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"
It's the same old trick: "in two years we'll have fully self-driving cars", "in two years we'll have humans on Mars", "in two years AI will do everything", "in two years bitcoin will replace Visa and Mastercard", "in two years everyone will use AR at least 5 hours a day", ...
Now his new prediction is supposed to materialize "by the end of 2027", what happens when it doesn't? Nothing, he'll pull another one out of his ass for "2030" or some other date in the future, close enough to raise money, far enough that by the time it's invalidated nobody will ask him about it
How are people falling for these grifters over and over and over again? Are we getting our collective minds wiped out every 6 months?
Of course Anthropic is saying that to investors. Every company does that, from SpaceX to Crumbl. “If you give us $X we will achieve Y” isn’t some terrible behavior, it’s how raising funds works.
Corporations need profit to survive because the cost of tomorrow is a surplus of today.
There is a very important factor that heavily influences (perhaps even controls?) how people act to achieve that goal, and sometimes even twists or adds goals.
Is that corporation publicly quoted in the stock market or is it private?
Look at how Steam behaves: it's private and more ideological, versus many publicly quoted companies, whose CEOs often sacrifice their own corporation's long-term survival for the benefit of short-term profiteering and some hedge fund manager's bonus.
Both need profit to survive, but the publicly quoted company is much more extreme.
When people say corporations only look to profit, what they really mean is that publicly quoted corporations will do everything possible to maximise short-term profit at any cost. Is there a CEO who cares about the long term? Either he will be convinced to change or be kicked out. It's almost impossible for someone to resist these influences in publicly quoted companies. It's just how Wall Street works, and if that doesn't change, neither will corporations.
The people running the world of finance and their culture are what causes enshittification and pushing a zero-sum game to extremes.
I hate that, by the way, but what I hate even more is that this is somehow the most effective way to run economies that we've found so far, and it ends up this way because instead of unsuccessfully trying to safeguard against greed and sociopathy, it weaponizes them outright.
But if most people in a society find something "wrong" generally they will organize to prevent that (even if it has value for a part of the society). I think it is simpler for everybody that economics (how we produce and what) is separated from morals (how we decide what is right and wrong).
The way we organize in a society is by having governments, usually elected ones to represent what "most people in a society" actually think, to serve as an arbiter of applied morals in our interactions, including business. To that end, we codify most of them in laws with clear definitions to prevent things like unfettered monopolies, corporate espionage, poor working conditions and hiring practices, etc. This generally works, though it depends on how well a given government and its constituent parts does its job and whether it uses the power it has to serve the entire society's interests or the interests of the elites that drive decisions. We can see right now how it fails in real time, for example.
Morals don't have to be evaluated "objectively" (whatever that is) every time to be observed. Humanity has agreed on many things that make up UDHR, international law, and other related documents. It's not the hard part. Making independent actors conduct their business in accordance with these codes is the hard part. Somehow even making them follow their own self-imposed principles is crazy hard for some reason. When Amodei claims Anthropic develops Claude for the benefit of all humanity but greenlights its use for surveillance on non-Americans, that's scummy. When Amodei claims to be terrified of authoritarian regimes gaining access to powerful AI but seeks investment from them, that's scummy. The deal with Palantir, the mass-surveillance business, is scummy. Framing the use of autonomous weapons as only disagreeable insofar as the underlying capabilities aren't reliable enough is scummy. You don't need to be a PhD in morals to notice that.
How come the board hasn't eliminated him?
Well let's see... it says in the post:
In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.
Just because you disagree with their ideals doesn't mean they're not holding to them.
The concerns they've raised about authoritarianism are about "AI enabling authoritarians."
When they push back on the US government wanting to use Claude to (legally) surveil US citizens, that still feels consistent to me as a concern about authoritarianism.
I think it's reasonable to hear high minded ideals and become skeptical, but in this case I'm surprised that people are trying to accuse them of hypocrisy
This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.
I think that really cruel people want you to know when they can act with impunity, it's part of the appeal to some. The Anthropic people don't seem like that sort, at least. But plenty of horrible people have still not been that sort.
Ah, so I think you may have done a little hop and a jump over a critical, load-bearing term which is “feel like”. You get to observe people who feel like there are no consequences. Their feelings may or may not be accurate.
You can sometimes see people who treat service workers, servants, or subordinates poorly because they feel like it’s permitted and free from consequence. You can also sometimes see people reveal things about themselves when playing games. It’s kind of a cliché that people find out that they’re transgender at the D&D table, and it happens because it’s a “consequence-free way” to act out a different gender role.
Or we can talk about that magic ring that makes you invisible. You know, the ring of Gyges, or that of Sauron. People can’t actually become invisible, but you can sometimes catch them in a situation where they think they can do something wrong and not get caught.
They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam were the "responsible scaling officer", meaning they were responsible for Anthropic meeting the obligations of its commitments to building safeguards.
I think neom is referring to Jack Clark, another one of the seven cofounders.
FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.
For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!
Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.
It would be better if people could name them with their full names to avoid any confusion.
Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.
That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Alternative" justifies our conservatism.
But to quote Little Red Riding Hood in Stephen Sondheim's musical: Nice is different than good. It's hard to accept if people you really like do horrible things. It's tempting to not believe what you hear, or even what you see. And Epstein was good at getting you to really like him, if he wanted to.
That doesn't mean we should be suspicious of niceness. It just means that we should realize, again, nice is different than good.
I'll take: List of places I never want to bond my soul with someone at for one thousand, please.
They are US adversaries if they don't give the USA what it wants… so as an adversary that doesn't do what it's told to fall in line… you must go to prison.
I am sure you think they are better than the average startup executive, but such hyperbole puts the objectivity of your whole judgement under question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.
> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.
Can you imagine a world where Anthropic says "we are changing our RSP; we think this increases AI risk, but we want to make more money"?
The fact that they claim the new RSP reduces risk gives us approximately zero evidence that the new RSP reduces risk.
Wasn't expecting this post to get so much attention.
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
https://www.astralcodexten.com/p/come-on-obviously-the-purpo...
So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.
Secondly, look at this one specifically:
> The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.
Firstly, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.
Secondly, it is an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.
The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.
I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.
[1] https://xkcd.com/169/
I disagree. The concept of nuance, putting things in context, is the source of all good in internet discussions.
They literally want to use state violence to control what we can do on our own computers.
Hard disagree. There shouldn't be any rules or limits whatsoever about what I can do with my computer, and especially ON my computer, as long as the thing I'm doing doesn't break other laws (CFAA, CSAM, etc).
This is, after all, Hacker News.
I'm concerned that the context of the OP implies they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt), that does not entitle them to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
The what now?
Maduro is being prosecuted and there was a warrant out for his arrest. There is no magic soil exemption if you commit a crime against the United States and flee to another country.
>threatening them because how dare a company tell the psycho dictators what to do.
Dude it's a private defense contractor leveraging its control over products it has already installed into classified systems to subvert chain of command and set military doctrine. That's not their prerogative. This isn't a "psycho dictator" thing.
No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families' lives. Just like the rest of us.
What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?
Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.
This is protection racketeering 101! So much so that, if any form of a functioning US judicial system makes it past 2028, I'm willing to put money on more than a handful of people in the upper echelons of today's administration ending up slapped with the RICO Act.
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
And I think the stakes have changed today - it's one thing to be making bombs which might or might not hit civilians; it's another to be making an AI system that gives humans a "score" that is then used by the military to decide if they live or die, as some systems already do ("Lavender", used by the IDF, is exactly this).
Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.
This is the oft-spoken fallacy of the benefit of hindsight. Folks in that situation 80 years ago did what they had to do, to stop Japan from continuing to rape and murder hundreds of thousands of people in southeast Asia. But of course, you would have found a better option. How's the view, standing on the shoulders of giants?
And nobody knows what he means by "defeat" because no journalist interrogates or pushes back on his grand statements when they hear it. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets to it first but he never elaborates on why or what he expects to happen if the opposite comes to pass. I am assuming he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such "empowered democracy" than China. Because of Greenland, because of "our hemisphere". Hard nope to that.
Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says it outright in the text. Humanity stops at Americans.
Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.
Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."
Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.
So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to the "department of war" and is acting aggressively imperialist in a way the US hasn't in a long time?
And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?
I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.
Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for many – but not all – people at Anthropic.
That doesn't guarantee a good outcome, and there's still a hard road ahead.
The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.
Anyone that Israel doesn't like
I honestly wonder how much of this is made up. Given the size of the whole organization and its holding onto its odd principles regarding the personal relationships of its members (introduced in the distant past to limit the secular power of its clergy), there certainly will be SOME cases.
But in the one case where a frater I knew got convicted, he definitely didn't do it. He was accused by several independent former students, and even some of the staff backed the students' claims with first-hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.
The only problem: He wasn't with the group at that time at all. I screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up and nobody (but maybe me with him) would get in trouble over it. Turned out he forgot refueling, both of us stayed at a pastor's guest house and he called the group telling them, that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened at another short stay of the group where they spent a day visiting some mine before joining with us again.
Almost 3 decades later he got railroaded in court, me learning about it in the news.
Also, he's a man of strong faith: not that he knows he'll win in the end, but rather that it just doesn't have the same importance for him as it would for us. I only had a short opportunity to ask him about it since then; basically, he doesn't think there is much of any chance of winning this. What he's most worried about is ruining the public image of his students (including his accusers), and since his order allowed him to rejoin and start over, in practice he already got everything he could have asked for.
What evidence on _Amodei_ and his actions leads to that conclusion?
When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.
Palantir will also be subject to the same contractual limitations as the DoD.
>They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance.
The stated red lines are about mass domestic surveillance and fully autonomous lethal weapons - and those are the kinds of restrictions you’d expect to apply to any government using the tech on its own population, not just the US.
For American agencies to use Anthropic's models against other sovereign states requires access to the raw data from that state, which is somewhat of a practical firebreak. Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk it seizing control of the technology for its friends?
> They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.
What is the realistic alternative? Sit quietly and pretend scaling isn't a thing and dual use doesn't exist? Try to pause or stop unilaterally while money floods into their arguably less scrupulous competitors?
Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.
Well, first of all, we don't actually know that. Second, I'm going to question the commitment of any company to the principles of democracy and AI safety if one of their bigger partnership is with a literal mass surveillance, Minority-Report-crap company. It's the most confusing business partner to see when you're positioning your company as THE ethical one. If you're dealing with Palantir, you're helping mass surveillance, full stop, because that's what this company does. Which country's citizens get the short end of it is completely irrelevant (though in all likelihood it's still Americans because that's Palantir's home turf).
> Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?
If that's how we characterize the current regime (which I actually agree with), then how come he's proactively trying to help it, deal with it, and insist it's a democracy that needs to be "empowered"? Sounds backwards to me. When you're about to be persecuted by your own government for not allowing it to use your models to do some heinous shit, this sounds like exactly the kind of government you shouldn't be helping at all (and ideally not do business where it can reach you). This is not normal.
> What is the realistic alternative? [...] Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?
If you notice that you're doing harm and you're concerned about doing harm, stop doing harm! Don't make it worse! "If I hadn't pulled the trigger, somebody else would" is a phrase you wouldn't expect to hold up in court. Similarly, racing to the bottom to be the most compassionate, self-conscious, and financially successful scumbag is the least convincing motivation imaginable. We will kill you quickly and painlessly unlike those other, less scrupulous guys! Logic like this absolves bad actors from any responsibility. The amount of harm stays the same but some of it gets whitewashed and virtue-signalled, and at the very minimum I'd expect the onlookers like ourselves not to engage in that.
> Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.
These aren't principles. What he's doing here is a free opportunity for incredible PR and industry support that he's successfully taken advantage of. The actual policy backslides, caveats, and all the lines that had been crossed prior will not receive as much press as the heroic grandstanding of a humble Valley nerd against Pentagon warmongers. Nobody will actually take the time to read the statement and realize how the entire text is full of lawyer-approved non-committal phrasing that leaves outs for any number of future revisions without technically contradicting it. I've already pointed some of it out earlier in the thread. The technology for autonomous weapons isn't reliable enough for use, gee, thanks! I feel so much safer now knowing that Dario will have no qualms engaging with it as soon as he deems it reliable enough.
If Humanity = America, then obviously they don't care about the rest of the people, as a very very silly example.
https://notdivided.org
But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...
When the mass-surveillance scandal breaks, or the first time a building with 100 innocent people gets destroyed by autonomous AI, the company that built it is gonna get blamed.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
I think in this case it's safe to assume malice rather than incompetence. It's a lot like the parable of the frog and the scorpion.
This is a nice strawman, but it means nothing in the long run. People's values change and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission was to build sustainable and ethical AI I'd likely have a different perspective. However, Anthropic, just like all their other Frontier friends, are accelerating the burn of our planet exponentially faster and there's no value proposition AI doesn't currently solve for outside of some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...
Everyone tries to make change go well, for some party. If someone wants to serve the best interests of humanity as a whole, they don't sell services to an evil administration, much less to its war department.
Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threat of criminal thoughts. We would certainly be given a great public-relations lesson on how virtuous it can be, in the long term, to provide them with efficient services.
Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).
Sure, but what happens when the suits eventually take over? (see Google)
It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if it had been revealed in the first few months and involved only the first handful of employees… but after 2-plus years, with many dozens forced to sign it… it’s just not credible to believe the motivations were all entirely positive.
(I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)
I can see a very charitable person seeing only a small increase, but literally zero change, and therefore zero relevance, seems absurd.
The exact point is that Anthropic is unexceptional and the same as other corporations.
I know this is not everybody in the US, and I say this as a foreign person that observes things from outside. I agree with the two statements you made, I just think they could be incomplete and that the countries that behave most similarly to the US are not democracies.
Dehumanising “the others” is a human trait, and a very destructive one. Just like violence and greed. People have different susceptibility for these, but we should all work to counter them and it is in its place to point it out when observed.
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool with fully autonomous weapons in the future. It will be fine once the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against; it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victim of AI weapons.
> Humanity includes the future victim of AI weapons.
Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.
I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.
>align them with humanity.
Quick sanity check: does their version of humanity include e.g. North Koreans?
This meaning what, exactly? Having autonomous weapons kill what, exactly, that is so different from what soldiers kill? Or killing others more efficiently so they “don’t feel a thing”?
The question is not about safety, then, but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and to deviate from your commands according to its "judgement", and your usage of it kills someone, is the AI going to be tried in court? Or you? It's you. So the AI should do what you ask instead of assuming, lest you be tried for murder because the AI thought that was the safest thing to do. That is far more worrisome than a murderer, who would be tried anyway, deciding to use AI instead of a knife to kill someone.
in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.
Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".
But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.
Sometimes, it's even a very odd prerequisite.
[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...
"You either die the good guy or live long enough to become the bad guy"
The "bad guy" actually learns that their former good guy mentality was too simplistic.
which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"
:)
Why would I care? All people with at least some positive or negative notoriety have friends and associates who will, hand on heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.
Road to Hell and all that.
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.
Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.
> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world
I think there's high existential risk in any of these situations when the AI is sufficiently powerful.
This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned with its own creators. Our models are not misaligned, our people in decision-making are.
If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.
I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPUs
If every lab joined them, we could get to a distributed scenario, but it's a coordination problem: if you take a principled stance without actually forcing the coordination, you end up in the worst of both worlds, not closer to the better one.
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, GPUs-per-AI is reduced by a factor of a million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
I don't think we can bank on all of humanity acting in humanity's best interests right now.
(1) this is a wildly unpopular and optically bad deal
(2) it's a high-data-rate deal--lots of tokens means bad things for Anthropic. Users who use the product heavily cost more than they pay.
(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...
then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.
I really don't buy any moral or value arguments from this new generation of tycoons. Their businesses have been built on theft, both to train their models and by robbing the public at large. All this wave of AI is a scourge on society.
Just by calling them "department of war" you know what side they're on. The side of money.
Literally just giving business away. This is not a cynical take, this is a realistic one.
This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".
They will simply go to another vendor... Anthropic is not THAT far ahead.
Also, the US’s enemies are not similarly restricted. /eyeroll
Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.
Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<
And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…
… since it all goes through their servers.
Honestly, I'm glad that they're principled. The problem is that 1) most people in general are principled, so to assume the opposite is off-putting; and 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.
Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.
Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.
Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.
So what? Every business is driven by values.
I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral and history has shown that regardless of their specific legal formulation they all eventually revert to amoral growth driven behavior.
This is structural and has nothing to do with individuals.
They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.
What a weird definition of "enheartening" you have.
It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.
I dissented while I was there, had millions in equity on the line, and left without it.
Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?
Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.
This is far too binary IMO. Yeah, the higher the personal stakes the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.
How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!
What is enheartening about hearing a liar who makes provocative statements all the time, make another one?
Those are two core components needed for a Skynet-style judgement of humanity.
Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.
The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.
The proper response from an LLM being told it's going to be shut down, is simply, "ok."
I'm not sure if I intended this to be facetious or serious.
Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?
Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?
I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.
Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.
What do you suppose he should do if that’s what he thinks is going to happen?
And how do you know he’s not bothered by it at all?
There is no defence of morality behind which AIbros can hide.
The only reason Anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.
Also, the genie is well and truly out of the bottle. If Anthropic shut down tomorrow and lit everything they had produced on fire, Amazon, Microsoft, China, everyone would continue where they left off.
I'm suggesting your realpolitik of "others doing it too" is incompatible with a moral position. I know none of these ghouls will stop burning the world. I'm sick of them virtue signalling about how righteous they are while doing it.
The product is actually good though, I could pay for it if Amodei just shut up but by principle I won't now and just stick with codex.
None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.
When has Amodei said this? I think he may have said something like 1-5 years, but I don't think he's said within 6 months.
Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?
Amodei's noise is little more than half-hearted advertising even if it's not intended to have that reading (although who can even tell at this point). His newsroom publishes a report on a mass-scale data breach perpetrated using their model with conclusions delivered in a demonstrably detached, almost casual tone: yeah, the world is like this now but it's a good thing we have Claude to protect you from Claude, so you better start using Claude before Claude gets you. They released a new, more powerful Claude, immediately after that breach. No public discussion, nothing. This is not the behavior of people who are bothered by it.
Fantastic take.
I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.
What do you think their service is, exactly. Every single word that comes out of these systems is stolen IP, do you think that just because they won't generate a picture of Mickey Mouse for you it's not providing any IP?
Are you moving the goalpost to "Every single word that comes out of these systems relies on understanding gained from stolen IP"?
You as a human are allowed to read the contents of, say, IMDB and summarise it to your friends free of charge. You can even be a paid movie critic and base your opinions on IMDB just fine. But if you build a website that says "I'll give you my opinion about a film for £5" and it's just based on the input from IMDB, I'm sure we can both agree that you crossed the line - and that you're using another person's service to make your own business without compensating them. That's what LLMs are doing.
Honestly I'm just so tired of the whole "yeah but humans are the same because we also learn by reading stuff". These companies have effectively "read" everything ever made, free of charge, and are selling it back to us packaged in stupid bots that can only function because they were given that data. It doesn't compare at all to how a human learns and then uses information, unless you know someone who can do it on that kind of scale. LLMs don't "glean" - they consume wholesale.
Easy way to undermine the rest of your comment.
I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.
Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.
I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.
Also Dario Amodei: seeks investment from authoritarian Gulf states, makes deals with Palantir, willingly empowers the "department of war" of a country repeatedly threatening to invade an actual democracy (Greenland), proactively gives the green light to usage of Claude for surveillance on non-Americans.
Yeah, I don't know what your definition of "care" is but mine isn't that, clearly. You might want to reassess that. Care implies taking action to prevent the outcome, not help it come sooner.
The problem with counterfactual arguments like yours is that they frame the problem as a false dichotomy to smuggle in an ethically questionable line of decisions that somebody has made and keeps making. If you deliberately frame this as "everybody does this", it conveniently absolves bad actors of any individual responsibility and leads discussion away from assuming that responsibility and acting on it toward accepting this sorry state of events as some sort of a predetermined outcome which it certainly is not.
Before I say anything else, I want you to know that I definitely don’t want to box anyone in with false dichotomies. I don’t think any of my arguments rely on them.
I’m not asking that you anchor on any one counterfactual exclusively. If you don’t like my counterfactual, reframe it and offer up others. I’m not a “one model to rule them all” kind of person.
If one of your big takeaways is we should keep our eyes open and not put anyone on a pedestal, I agree.
At present, my general prior is that Amodei is probably the best of the bunch. This is a complex assessment and unpacking it might require gigabytes or even petabytes of experience. (I know that is a weird and unusual way to put it, but I like to highlight just how different people’s experiences can be.)
I am definitely uncomfortable with Palantir. Are you suggesting that Anthropic is differentially worse compared to other AI labs? Are you suggesting the other labs would do better if they were in Anthropic’s position?
If you don’t like the way I framed these questions, I suspect we have different philosophical underpinnings.
You might be aware that you’re implicitly referencing deontological ethics (DE). I’m familiar and receptive to many DE arguments. Overall, I’m not settled on where I land, but roughly my current take is this: for individuals with limited information and/or highly constrained computational resources, DE is generally a safe bet. It probably is a decent way to organize individuals together into a society of low to moderate complexity.
But for high stakes decisions, especially at the organizational level and definitely the governmental level, I think consequentialism provides a better framework. It is less stable in a sense. Consequentialist ethics (CE) is kind of a meta-framework (because one still has to choose a time horizon, discount rate, computational budget, evaluation function, etc.) It is rather complicated as anyone who has tried to build a reinforcement learning environment will know.
I fully grant that CE will admit a pretty wide range of concrete ethics (because the hyperparameter space is large). Some can even be horrific, so I don’t universally endorse CE. But done within sensible bounds, I think CE is one of the most powerful and resilient ethical frameworks for powerful agents dealing with a complex world.
DE feels ok in the short run in areas where people have strong inculcated senses of right and wrong. But I would not trust it to keep the human race alive through rapid periods of change like we’re facing.
To be blunt, deontological ethics just cannot survive contact with modern geopolitics and AI risk. This is why I don’t put much stock in the kind of arguments that merely single out actions that don’t look good in isolation.
Anthropic never explains why they fear-monger about the incoming mass-scale job loss while being at the forefront of the rush to realize it.
So make no mistake: it is absolutely a zero sum game between you and Anthropic.
To people like Dario, the elimination of the programmer job isn’t something to worry about; it is a cruel marketing ploy.
They get so much money from Saudi Arabia and other Gulf countries; maybe this is taking authoritarian money as charity to enrich democracy, you never know.
Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?
Essentially they will not stop at all, because even they know no one can stop the competition from happening.
So they ask more control in the name of safety while eliminating millions of jobs in span of a few years.
I have to ask: how can the biggest risk of a potential collapse of our economy be trusted as the one to do it safely? They will do it anyway, and blame capitalism for it.
> driven by values
> well-intentioned
What values? What intentions? These people grin and laugh while talking about AI causing massive disruptions to livelihoods on a global scale. At least one of them has even gone so far as to make jokes about AI killing all humans at some point in the future.
These people are at the very least sociopaths, and I think psychopaths would be a better descriptor. They're doing everything in their power to usher in the Noahide new world order / beast system, and it couldn't be more obvious to anyone that has been paying attention.
It's also amusing they talk about democratic values and America in the same sentence. Every single one of our presidents, sans Van Buren, is a descendant of King John Lackland of England. We have no chain of custody for our votes in 2026 - we drop them into an electronic machine and are told they are factored into the equation of who will be the next president. Pretending America is a democracy is a ruse - we are not. Our presidents are hand-picked and selected, not elected. Anyone saying otherwise is ill informed or lying.
1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”.
2. No one talented will then go work for a government-run LLM-building org, both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for Trump” angle).
3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next-gen model updates.
Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)
It would be the most shortsighted nationalization ever.
I think you massively underestimate how many people would have no problem working for their government on this. Just look at the recent research into the Persona system for ID verification, where submitting your ID places you on a permanent government watchlist to check if you're not a terrorist. There's a whole list of engineers and PhDs and researchers present who have built this system.
>> “top talent won’t accept meager government wages” angle
Again, that's wishful thinking: plenty of people want to work in cybersecurity and AI research for government agencies, even if the pay isn't anywhere close to the private sector. This isn't exclusive to the US either: in the UK, MI5 pays peanuts compared to private companies for IT specialists, yet plenty of people want to work for them, whether out of patriotism for their country or a willingness to "help".
If you have more money than god, you no longer get to play the "I didn't know" game. You have the resources. If you don't know, you made a choice to not know.
Chinese models are developed by Chinese corporations. They are free and open-weight because they are the underdog at the moment. They are not here for fun; they are here to compete.
Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.
I have a feeling they see themselves more as evangelists than scientists.
That makes their models unusable for me as general AI tools and only useful for coding.
If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.
Is this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.
I did not say anything about the Chinese government, which is sadly becoming a role model for many (all?) Western governments.