It doesn't match.
OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims this is untrue.
Now, given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.
From what I can see, OpenAI’s terms basically say “need to comply with the law”, which provides them with plenty of wiggle room with executive orders and whatnot.
And:
1. there is no law currently prohibiting autonomous weapons platforms
2. the Pentagon can create policies overnight allowing all kinds of stuff
So yeah, OpenAI is going to make a lot of money from actually doing what the military asks of them.
My understanding is that Anthropic requested visibility into, and a say in, how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Each side's proposal was unacceptable to the other.
They sold a service to a customer, contractually subject to terms they both agreed upon. How do people keep missing this? The government changed its mind after agreeing to the restrictions and tried to alter the deal with Anthropic ex post facto.
“The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who gave a Pac supporting Trump $25m in conjunction with his wife.
https://www.theguardian.com/technology/2026/mar/04/sam-altma...
Another reason is that Sam Altman has been willing to "play ball", such as by providing high-profile (though meaningless) announcements that Trump likes to tout as successes. For example:
> "The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble.
> More than a year after the announcement, the joint venture between OpenAI, Oracle, and Softbank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the "shelved idea."
https://the-decoder.com/stargates-500-billion-ai-infrastruct...
Just to nitpick, Palantir isn't doing surveillance the way Flock is. They do data integration under contract for governments, the way IBM does. Some data pipelines include law-enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction IMO.
It’s the same with Facebook selling user data. Neither selling your data, like the carriers do, nor selling the ability to target you with your data, like Facebook does, is very nice. But legally they are separate things that need to be regulated differently. As is the case with Flock and Palantir.
https://gizmodo.com/palantir-ceo-uses-slur-to-describe-peopl...
https://www.reuters.com/world/europe/palantir-ceo-defends-su...
There will always be another IT company willing to do integrations even if Palantir dies. Software isn’t going away.
I'm also a little unsure what you're saying here. Are you saying that it's futile to rely on corporate leaders to commit to ethical acts, as there's always someone else who will debase themselves to make money? I think that solely relying on the state to regulate itself with respect to civil liberties is a fast path to despotism. The well-regulated state was always a partnership between ordinary people bravely standing up for their rights and the norms of the rules and laws that made it socially acceptable to do so.
If I'm grasping you correctly, I think you're right; however, this points to the rottenness of our culture's way of organizing labor: optimizing for the shareholder over everyone else leads to some really awful effects.
https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
Though, I guess IBM did get away with lots of stuff that... Actually, did any supply companies in the WWII German war machine actually get in trouble for war crimes, or did they just go after officers and the people actually working in the camps?
The company selling punchcards that were used for logistics was apparently fine. What about the people making the gas canisters, or supplying plumbing fixtures? The plumbers? Where's the line?
Wondering, since this is increasingly becoming a current events question instead of an academic concern.
I'm under no illusion that all the perpetrators of war crimes were held accountable but it's not a bad model.
> The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system...
> As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people.
Good thing IBM's data integration was never used for ill!
Take it out on the database purveyors, not Palantir.
I'm saying that when Anthropic says "our deal with Palantir doesn't cross our red line", we should give Anthropic the benefit of the doubt and believe that they have gotten an assurance from Palantir that it wouldn't be used domestically. I'm NOT saying we should give Palantir the benefit of the doubt.
I wasn't commenting on "is giving AI to Palantir a good idea" (I don't think it is), I was commenting on "should we conclude that Anthropic is being dishonest because they claimed they have red lines but work with Palantir" (I think it's unclear, but there's a plausible explanation in which they're not being dishonest, but possibly naive, so give them the benefit of the doubt).
> “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.
Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...
Anthropic is a Public Benefit Corporation chartered in Delaware, with an expressed commitment to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."
So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.
So why were they ever working with the military in the first place, if that's the case? In case you didn't glean it from OpenAI: it doesn't matter. Everyone is greedy and will jump ship for the money if Anthropic doesn't get it for them.
Seriously, you're on HN, you can't possibly be that many degrees removed from someone at the company.
In any case it's absolutely not "just marketing", it suffuses their whole culture, and it is genuine.
Perhaps you think the law shouldn't allow such a contract; that's a valid position. But that's not what the law currently says.
Is that more clear?
The current administration has been caught flouting court orders in dozens of cases, to the point that courts are no longer even granting them the assumption that they’re operating in good faith.
I can think of a million good reasons not to give these people the tools to implement automated totalitarianism. Your proposal that they simply refuse service to the government entirely would be ideal.
If you don't question people in positions of power they will just do whatever they want. Democracy is sustained by action, not by acquiescence.
And with the lawlessness of this administration, I would make it a point to hold them accountable. I'm not going to let them do mass surveillance when they decide to change the law.
Are you naive, or just ignoring what is going on?
The technology isn’t suitable for the purposes the regime wants.
Personally, I would like Western democratic powers to have the most advanced technology, but you may disagree.
I've worked in government outside of the Federal level. The government has a moral and often legal incentive to do inefficient things for the simple reason that the work they do needs to be safe, controlled and deterministic.
Every US state maintains a birth registry, a death registry, and a DMV. But firewalls ensure that no live links exist between these and other programs. It's inefficient, but it avoids many hazards and conflicts in regulatory or legal compliance. For example, income tax information is secret and cannot be shared outside of the tax-processing scenario. Police investigatory data should not be linked to your unemployment claim. Fundamentally, those are examples of why the stuff that Palantir is doing is problematic.
With military applications, it's even more fraught, and human life is in peril by design. It's important for a professional army like the US Army that strict discipline and rules of engagement are followed. Soldiers may find themselves in situations where people are shooting at them, and they are ordered to take no action.
AI is not capable of functioning in that environment.
My point is that these are complex issues, and we are in a political environment where people seeking simple answers are looking at technology like AI to disconnect them from accountability. There's a nuance there, and a reason why Anthropic is willing to partner with Palantir for their work, but hesitant to power drones that are dropping Hellfire missiles on people.
Are you really saying that if Anthropic sells a limited version of their product to Palantir at a certain price, the government should be able to demand access to an unlimited version of Anthropic's product for free because they are a customer of Palantir?
That would effectively mean the government gets an unlimited license to all IP of companies that do business with government suppliers... that would be terrible.
Anthropic has a contract for how their service is to be used, the government committed itself to following the contract by signing. Then it violated the contract.
Basically the government committed fraud by signing a contract that it clearly intended to violate. Then they tried to bully Anthropic into not doing anything about their breach of contract.
It’s mobster behavior. You’re saying Anthropic should just not sell services if it’s going to enforce the terms of service. You have it backwards: the government shouldn’t enter into contracts that it intends to violate.
They've done lots wrong and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)
Anthropic claim that superintelligence is coming, that unaligned AI is an existential threat to humanity, and they are the only ones responsible enough to control it.
If that's your world view, why would you be willing to accept someone's word that they'll only Do Good Things with it? And not just "someone", someone with access to the world's most powerful nuclear arsenal? A contract is meaningless if the world gets obliterated in nuclear war.
So I don't blame Anthropic for getting into bed with the military, and getting out when it got bad for them. A lot of military suppliers are facing a similar dilemma, I suspect. The army runs on its stomach, and I do not envy the people delivering pizzas to the Pentagon, knowing what room those pizzas are consumed in.
Also, doing that might have bad second order effects with bad ethical implications.
For example, when Musk decided to pull the plug on a bunch of starlink terminals, he (intentionally and knowingly) blocked a US-funded attack that would have sunk a big chunk of the Russian navy, which certainly prolonged the Ukraine war. That was clearly an act of treason (illegal).
Anyway, just turning off Claude could kill a bunch of civilians in the region or something. It depends on how deeply it's integrated into military logistics at this point.
Anyway, your point certainly holds for OpenAI:
They walked into a "use ChatGPT for war crimes, and illegal domestic surveillance / 'law enforcement'" deal with open eyes, and pretty obviously lied about it while the deal was being signed. I don't see any ethical nuance that would even partially excuse their actions.
Would buy their stock; would sell OpenAI, maybe, if they were public. Maybe instead of MSFT and AMZN I'd have bought those.
Edit: Also, openly calling OpenAI employees "gullible" and "Twitter morons" seems suboptimal if you'd like that talent to work for you at some point.
They might not, if they think everybody who stayed after Sam Altman was reinstated may be excellent technically speaking yet not have the culture they want, which seems to be the case judging by all the recent communication.
“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes.... It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees. Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees.”
In retrospect this quote comes across as way more foreboding given what we've learned about the scale of his ambitions and his willingness to lie and bend reality to gain power.
Dario on the other hand seems to have an integrity that's particularly rare in this era. I hope he remains strong in the face of the regime.
Anthropic actually partnered up with Palantir. They are not the saints you think they are, either.
We should stop worshipping people and companies and stop putting them on pedestals. Just because one party is at fault, doesn't mean the other is automatically innocent. These are all for-profit companies at play here.
https://investors.palantir.com/news-details/2024/Anthropic-a...
> Broadly, I am supportive of arming democracies with the tools needed to defeat autocracies in the age of AI—I simply don’t think there is any other way. But we cannot ignore the potential for abuse of these technologies by democratic governments themselves. Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies. Thus, we should arm democracies with AI, but we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.
Basically, he's afraid that not arming the government with AI puts it at a disadvantage vs. other governments he trusts less. Plus, if Anthropic is in the loop that gives them the chance to steer the direction of things a bit (what they were kicked out for doing).
It's not the purest ethical argument, but I also would not say that there is a clearly correct answer.
To be brutally honest, to me it just sounds like a very elaborate way to say "trust me, bro".
It wasn't; there's been non-stop talk here for days about how Anthropic is a step above, better than the rest, the "only good AI" company. Enough already. It is a marketing tactic they are taking in opposition to OpenAI.
The contract was explicit: it was for defence purposes, with a company known for spying activities. So obviously spying was involved; they weren't just going to generate cat videos with it.
Again, nobody is innocent here.
And now you've got people on here saying, well actually Palantir ain't so bad, you see! They're just one tool in the chain, basically just boring data integration, like IBM!
The mental gymnastics is difficult to keep up with.
I read that quote and see no positive interpretation. It was always a negative description.
I think maybe this community could use a bit more natural skepticism of hierarchy.
His ascendancy only came when he was given an ultra-powerful position by an ultra-powerful man.
Someone told me in another comment that it's possibly bot activity. I suspect so too, because in a tech forum like HN, a top-voted comment can shift the entire focus/narrative of any given issue. I know there are a lot of mods on here to prevent this sort of thing, but given how good LLMs have gotten, I wonder if we are at a point where humans can even discern cases where there's a mix of human and AI involvement in online activity (such as commenting).
The entire point of the forum is to talk about rich "idea people" and the businesses they start to get richer.
(Or, if the maximally cynical perspective is correct and 'principles' always actually means 'a company culture and public image that depends on the appearance of having principles, and which requires costly signals of principledness to maintain' -- well, why on earth shouldn't we favour the ones who have that property over the ones who are nakedly unprincipled, and the ones who have a paper-thin veneer that doesn't meaningfully affect their behaviour? It would be stupid to throw away the one bit of leverage we have to make powerful people behave better than they otherwise would.)
> Graham was immediately impressed by Altman, later recalling that meeting the 19-year-old felt like what it must have been like to talk to Bill Gates at the same age. He noted Altman's intense "force of will" from their early interactions.
Is there a Gates-like "presence" or a "force of will" displayed in his public interviews?
it's not a comment on his ethics or morality
Paul Graham was a pudgy mediocrity clever enough to capitalize on nerds' obsession with Lisp, and leverage it into f-you money. Game recognized game in the shape of Sam Altman.
lol
Which of these two CEOs wants to have an unelected spot in the decision loop of our government?
Once I dug into this story, I realized that only one of these companies was attempting a real power grab. Maybe the EAs are doomed to try coup after coup and lose every time.
The SCR part is excessive, though, especially if it's interpreted broadly. Altman gets credit for sticking up for Anthropic on that point, but not much credit, because it's so obvious that it's overkill.
dario comes across like a guy who has never even been in a fight and can't believe a fight is even real.
there is something very dangerous about a person who believes that they are "good" and then believes that in fact their version of good is superior to the government, and they should ignore the government which ostensibly represents the people, while building a technology that will make millions of white collar jobs go away (democrat voters) and revolutionise violence (dod/dow - republican voters)
imagine if IBM had decided in the 1960s they were going to start telling NASA/DOD how to use their mainframes, saying the US gov couldn't have an IBM if they were going to use it in vietnam etc
that said, i use claude
Barely represents the people. Especially not on the issue of domestic mass surveillance and fully autonomous killing machines. Or the war in Vietnam.
absurd yes but same principle. companies have to be subject to government especially in technologies that enable or manage violence. this is because the role of the government is to collectively manage and allocate violence in the manner the people desire
I don’t know what you’re describing, but it’s not how the US works.
Companies aren’t extensions of the state; they’re private actors that have to follow the law. If Congress wants something prohibited, it passes a law. Otherwise firms are free to choose who they do business with.
companies and the people who work for them are subject to the state via the law and regulations. if they violate the law, the state will use violence to enforce the law, with a government entity called law enforcement and law enforcement officers.
if new technologies are invented, like the internet, missiles, nuclear power, and so on, which represent an ability to manage and allocate violence, or remove the state ability to control violence, the government needs to reassert their monopoly on that violence and take control of it. without this monopoly, how will they collect taxes and enforce the law?
without the monopoly on violence the government is little more than an idea
Sadly this place is full of noise and people who don’t get the big picture - leading to the down voting of posts and continual drowning out of stuff closer to the truth by noise and hysteria.
This one is unusual in that the government started bailing out the AI companies last year. Usually, it waits until the bubble pops, and then starts the bail outs.
That's standard operating procedure for Trump though.
He did the same thing in 2016-19 with the zero interest rate policy + tax cuts even though the economy was strong. Any macroeconomics book (or NPR station during those years) will tell you that doing that creates short-term economic growth, but sets the next administration up for [hyper-]inflation.
Of course, that happened, and those same books go on to say "and, usually, because inflation takes a bit to kick in, the next president will be blamed. This is why we have an independent Fed".
So, this time around, he's trying to pull the same crap by dismantling the Fed, and, until then, lean hard into deficit spending to keep unemployment low. Last year, money went to data centers, and domestic paramilitary actions and prison build-outs. This year, we have those things and a new pointless forever war.
However, it's not working the same way as it did last time. He's done so much other collateral damage that we're in a "boomcession" where the economic indicators become untethered from reality. So, they show growth, but people's quality of life, spending power, job security, and so on all decrease.
For example, a piece of the GDP is "how much does your bank screw you per year on your checking account?". This is treated like discretionary spending, and it's gone up from a few hundred a year to over $2000 in 2025. That increase counts as economic growth, instead of institutionalized theft.
Medical spending increases drove all the US's GDP growth last quarter. The quarter before that, it was spending on AI datacenters that's backed by junk loans and federal dollars.
Anyway, I don't have an answer for your question better than "bubble", but the current economic cycle is not what you described. It is a "boomcession". As far as I can tell, it's a new class of economic disaster, at least in the US.
What did you think the "Capital" in capitalism referred to? It doesn't refer to you and me
I've now moved to Claude and it's much better, actually. If, like me, you hate their fonts (Anthropic Sans), select System fonts in the Claude preferences, and you can use this snippet in Safari's Settings -> Advanced -> Stylesheet to make everything your default system font:
/* Scope to Claude's UI only; !important lets it win over the site's own font rules */
[data-theme=claude] * { font-family: system-ui, sans-serif !important; }
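One caveat on that snippet: Safari applies a custom stylesheet to every site you visit, so the [data-theme=claude] guard is what keeps the override from leaking onto other pages. I'm assuming that's the attribute Claude's web app actually sets on its root element; worth verifying in the inspector before relying on it.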
The guy can lie with a perfectly straight face. He's the kind of person who tells another lie just to cover the last one, and then another to cover that.
Meanwhile he keeps making everyone more and more dependent on him, so by the time people finally realize what's going on, they can't afford to push him out.
He doesn't seem to care if the DoW uses his AI for international spying.
That's one more reason why Europe needs sovereign tech.
Posted here: https://news.ycombinator.com/item?id=47195085
They are not the exception, and are just as bloodlessly, shamelessly publicity hungry as any other tech co, if not more so. No surprise based on their conduct up until this fake event.
Also if you've ever actually chatted with anyone at the company you'd know that they are not all the same and Anthropic genuinely does stand apart here.
I encourage you to do the same.
Claude Desktop is better anyway -- and, as we have seen, Anthropic is a more ethical company.
It's difficult to get someone to understand something when their paycheck depends on their not understanding it.
Could you point me to one other $300B+ company that would be willing to do this?
https://news.ycombinator.com/item?id=47145963
Just trying to make sure folks aren't getting ahead of themselves, without having put some custom thought into it.
If you want to put them on a pedestal for reasons that make sense to you, all good.
If others are encouraged to form their own opinions by taking some pause for thought, then all the better.
If Anthropic still end up on the pedestal, it must be for the right reasons, as opposed to 'just because they're not the currently discussed villain'.
Does the administration really believe these AIs are like digital humans?
Those who know better please correct me. My current understanding of Palantir (and other surveillance tech companies like Peregrine) is:
1. They facilitate the sale of data to law enforcement, enabling the government to circumvent Fourth Amendment protections.
2. They ingest data from across government agencies through Foundry and fuse it into unified profiles, which the government can use to surveil and pressure citizens without probable cause or a warrant (a toy sketch of this fusion step follows below).
ICE also uses a Palantir tool called ELITE to build deportation target lists.
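To make the fusion concern concrete, here's a toy sketch of the join step. Everything here is hypothetical (made-up records and field names, emphatically not Palantir's actual schema or API); the point is just that records that are innocuous in isolation become a warrantless profile once linked on shared identifiers:

    # Toy illustration only; hypothetical data and field names.
    dmv = {"P123": {"name": "Jane Doe", "plate": "8ABC123"}}

    plate_reads = [  # license-plate-reader hits, keyed by plate
        {"plate": "8ABC123", "location": "clinic parking lot", "ts": "2026-03-01T09:14"},
    ]

    broker_gps = [  # bulk-purchased location data, keyed by person ID
        {"person_id": "P123", "lat": 34.05, "lon": -118.24, "ts": "2026-03-01T20:02"},
    ]

    def fuse(person_id: str) -> dict:
        """Join records from separate sources on shared keys into one profile."""
        profile = dict(dmv[person_id])
        profile["sightings"] = [r for r in plate_reads if r["plate"] == profile["plate"]]
        profile["movements"] = [r for r in broker_gps if r["person_id"] == person_id]
        return profile

    print(fuse("P123"))  # a unified movement profile, no warrant in sight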
EDIT: Downvoting my comment without any proper rebuttal or clarification is pretty silly.
I do agree with your point that Amodei is playing a game, though. Whether he's winning the bigger picture or not is unclear. His red lines are already so watered down; e.g., domestic surveillance is not OK, but international? Totally fine.
https://en.wikipedia.org/wiki/NSA_warrantless_surveillance_(...
I suspect the 2007 in the title refers to the fact that bills were passed to ban this stuff in 2007, which is when the PRISM program (also illegal domestic surveillance) got started.
(The title makes it sound like warrantless surveillance lasted from 2001-2007, but I think it means the article only covers that date range.)
He has my respect for that
Neither knows how to solve the alignment problem, while market pressures are making them race towards capabilities (long horizon, continual learning) that will have disastrous consequences.
I've long thought that OpenAI was a corrupt bunch.
Except for embeddings (which I plan to move off of soon), I have closed my OpenAI accounts. I don't like them.
I don't know how reliable that source is. In any case, here's the text from that link, for posterity:
"I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everything sees it for what it is. Although there is a lot we don’t know about the contract they signed with DoW (and that maybe they don’t even know as well — it could be highly unclear), we do know the following:
Sam’s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that how their contract works is that the model is made available without any legal restrictions ("all lawful use") but that there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications.
"Safety layer" could also mean something that partners such as Palantir tried to offer us during these negotiations, which is that they on their end offered us some kind of classifier or machine learning system, or software layer, that claims to allow some applications and not others. There is also some suggestion of OpenAI employees ("FDE’s") looking over the usage of the model to prevent bad applications.
Our general sense is that these kinds of approaches, while they don’t have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater. The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn’t "know" if there’s a human in the loop in the broad situation it is in (for autonomous weapons), and doesn’t know the provenance of the data it is analyzing (so doesn’t know if this is US domestic data vs foreign, doesn’t know if it’s enterprise data given by customers with consent or data bought in sketchier ways, etc).
The kind of "safety layer" stuff that Palantir offered us (and presumably offered OpenAI) is even worse: our sense was that it was almost entirely safety theater, and that Palantir assumed that our problem was "you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that’s the service we provide".
Finally, the idea of having Anthropic/OpenAI employees monitor the deployments is something that came up in discussion within Anthropic a few months ago when we were expanding our classified AUP of our own accord. We were very clear that this is possible only in a small fraction of cases, that we will do it as much as we can, but that it’s not a safeguard people should rely on and isn’t easy to do in the classified world. We do, by the way, try to do this as much as possible, there’s no difference between our approach and OpenAI’s approach here.
So overall what I’m saying here is that the approaches OAI is taking mostly do not work: the main reason OAI accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses. They don’t have zero efficacy, and we’re doing many of them as well, but they are nowhere near sufficient for purpose. It is simultaneously the case that the DoW did not treat OpenAI and us the same here.
We actually attempted to include some of the same safeguards as OAI in our contract, in addition to the AUP which we considered the more important thing, and DoW rejected them with us. We have evidence of this in the email chain of the contract negotiations (I’m writing this with a lot to do, but I might get someone to follow up with the actual language). Thus, it is false that "OpenAI’s terms were offered to us and we rejected them", at the same time that it is also false that OpenAI’s terms meaningfully protect them against domestic mass surveillance and fully autonomous weapons.
Finally, there is some suggestion in Sam/OpenAI’s language that the red lines we are talking about, fully autonomous weapons and domestic mass surveillance, are already illegal and so an AUP about these is unnecessary. This mirrors and seems coordinated with DoW’s messaging. It is however completely false. As we explained in our statement yesterday, the DoW does have domestic surveillance authorities, that are not of great concern in a pre-AI world but take on a different meaning in a post-AI world.
For example, it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more.
Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about "analysis of bulk acquired data", which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious. On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect. It is currently Pentagon policy (set during the Biden admin) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint.
A lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them.
I think these facts suggest a pattern of behavior that I've seen often from Sam Altman, and that I want to make sure people are equipped to recognize:
He started out this morning by saying he shares Anthropic’s redlines, in order to appear to support us, get some of the credit, and not be attacked when they take over the contract. He also presented himself as someone who wants to "set the same contract for everyone in the industry" — e.g. he’s presenting himself as a peacemaker and dealmaker.
Behind the scenes, he’s working with the DoW to sign a contract with them, to replace us the instant we are designated a supply chain risk. But he has to do this in a way that doesn’t make it seem like he gave up on the red lines and sold out when we wouldn’t. He is able to superficially appear to do this, because (1.) he can sign up for all the safety theater that Anthropic rejected, and that the DoW and partners are willing to collude in presenting as compelling to his employees, and (2.) the DoW is also willing to accept some terms from him that they were not willing to accept from us. Both of these things make it possible for OAI to get a deal when we could not.
The real reasons DoW and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot), we haven’t given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we’ve told the truth about a number of AI policy issues (like job displacement), and we’ve actually held our red lines with integrity rather than colluding with them to produce "safety theater" for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc, assumed was the problem we were trying to solve).
Sam is now (with the help of DoW) trying to spin this as we were unreasonable, we didn’t engage in a good way, we were less flexible, etc. I want people to recognize this as the gaslighting it is.
Vague justifications like "person X was hard to work with" are often used to hide real reasons that look really bad, like the reasons I gave above about political donations, political loyalty, and safety theater. It’s important that everyone understand this and push back on this narrative at least in private, when talking to OpenAI employees.
Thus, Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing.
I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!). It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.
Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees."
https://www.reddit.com/r/Anthropic/comments/1rl1ula/dario_tr...
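For what it's worth, the memo's context-blindness point is easy to demonstrate with a toy example. The filter below is entirely hypothetical and far cruder than anything a lab would actually deploy, but the structural problem is the same: the gate sees only the request text, never the provenance of the data behind it.

    # Hypothetical "safety layer" sketch: a gate over the request text alone.
    def safety_layer(prompt: str) -> bool:
        banned = ("domestic surveillance", "us citizens")
        return not any(term in prompt.lower() for term in banned)

    task = "Cluster these GPS traces and flag anomalous movement patterns."
    # The traces could be foreign signals intelligence or bulk-purchased
    # US data; the prompt is identical either way, so the gate passes both.
    print(safety_layer(task))  # True, regardless of where the data came from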
> However, only an act of Congress can legally and formally change the department's name and secretary's title, so "Department of Defense" and "secretary of defense" remain legally official.
https://en.wikipedia.org/wiki/United_States_Department_of_De...
HypocrAIsy...
The government asks if they can rent your car. I hope we agree that you don’t have to say yes. (Specific exceptions exist for places of lodging, etc.)
Anthropic is exercising their right to say no in the same way.
Of course a company should have the freedom to choose not to do business with the government. I just think that automatically assuming the worst intentions of the government is not as productive as setting up a good-enough legal framework to limit the government's power.
In a world where LLMs produce very convincing but subtly wrong output, this makes me uncomfortable. I get that warfare without AI is in the past now, but war and rules of engagement and AI output etc etc etc all seem fuzzy enough that this is not yet a good call even if you agree with the end goals.
I'm sorry, you've just literally described a "killer robot" in more words.
Autonomous transformer-based munitions will basically be used for "here is a photo of a face, find and kill this human", and loitering munitions will take their time analyzing video and then decide to identify and attack a target on their own.
EDIT: Or worse: "identify suspicious humans and kill them"
It's not fully autonomous ice cream machines, it's fully autonomous _weapons_. Are you stupid or are you dumb? I don't think you're asking an honest question.
For that matter, explain why the Pentagon would balk at not spying on every American.
In a way, I admire Dario’s stance and having the backbone to stand up to a government that is so happy to punish, legally or illegally, those that disagree with them. I certainly wouldn’t have the bravery (or stupidity) in his position — which frankly makes me happy that he’s running Anthropic and not someone like me…
Maybe it’s not much, and they probably won’t care, but taking no action here is the same as being complicit.
The dead internet is alive and well.
~93 employees signed the notdivided.org petition. Some OAI employees could be reading this comment right now.
Let's be real, OpenAI backstabbed Anthropic. Even Dario has essentially just said it now.
(Shameless plug?) I created an Ask HN about it: "Ask HN: What will OpenAI employees do now who have signed the notdivided.org petition" [0], and not a single person from OAI responded when I just wanted to discuss :/ And hey, that's fine, I don't mind, but please don't mind me either when I re-raise this topic.
From a comment by tedsanders (OAI employee) in the HN thread about OAI [please don't harass anybody]:
> I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.
Ted, if you are reading this: I truly felt like you were right. I was still skeptical, because part of me felt like it didn't make sense, and well, it didn't. But I had trusted ya, and I thought that you had far greater insight than us, but now I am not sure...
Sir, I have no ill will towards you, but I just want to know: you have gone silent after this comment and one other about GPT 5.3 instant, as far as I can see. You did say in the first that you would go out on a limb with a public comment, so please don't mind me if I ask questions in public about that comment.
The question is: But what now? Do you see now why you should quit?
That being said, I still respect you, Ted, for at least trying to say it in a community; you had no reason to, but you took the risk. I genuinely hope you realize that this question is coming from a place of concern. OpenAI employees like you were also deceived by OpenAI/Sam Altman, in a way even more so than us. You had no monetary reason, I suppose, to go ahead and say it, but you did, based on your understanding at the time. And I respect that, because it shows me that maybe, just maybe, OAI employees aren't driven by money alone, as people would like to point out.
If this is what an OAI employee is saying, weren't they deceived too? Weren't they humiliated in public by being proven wrong, losing trust and credibility within a community?
The comments just turn to "well, money speaks". I agree, but does money speak so loudly that you cannot hear your peers and your own community?
I still hold the fringe belief that OAI employees have some say in all of this. 98 employees (the number who signed notdivided.org) leaving has a thousandfold greater impact than 98 people not using OAI. You have power, and with it comes responsibility.
I just want a discussion with OpenAI employees in general, especially those who signed NotDivided.org or who are part of this Hacker News community, like Ted. What do YOU make of this whole situation?
If historians ever write about this situation, a lot of it would read closer to "I was just following orders" than not. Sadly, this is not hyperbole anymore, because what we are talking about is the creation of autonomous killing machines that can kill anyone without any human in the loop.
People from the future are also going to ask us, the general public, why we didn't hold the people doing this work accountable, in much the same way we ask about the past.
Once again, I mean to bring no hate towards anyone. Make peace, not war. I just want to think that the world will be a better place for my future children and their generation, and I would like to hope that this comment can be meaningful towards that.
Have as nice a day as one can in a situation like this. A lot of what I say or do here is the same thing I asked of the people of the past when reading history in my classes: why didn't you do X or Y? Why didn't the public say anything? Why was it silent? But we are going to be history too, and someone is going to ask us why we were silent. I want my answer to be "I tried" rather than "I don't know". I sort of wanted to learn something from history.
Sincerely: we (the public) want a discussion with OpenAI employees about this. Please don't be silent; silence will be interpreted by future generations as agreement. Please speak. Tell us what you all are doing.
A lot of the time it feels like I am shouting into the void on these matters, though; these messages just don't reach the right people, and that feeling sucks, because at some point I am going to get tired of shouting into the void too.
If anyone also has contacts with OAI employees, please ask them such questions and share us the responses if possible. I just want some answers, that's all.
[0]: Ask HN: What will OpenAI employees do now who have signed notdividedorg petition: https://news.ycombinator.com/item?id=47231498
I want to be very clear on the messaging that is coming from OpenAI, and the mendacious nature of it. This is an example of who they really are, and I want to make sure everything [sic] sees it for what it is. Although there is a lot we don't know about the contract they signed with DoW [shorthand for the Department of Defense] (and that maybe they don't even know as well — it could be highly unclear), we do know the following:
Sam [Altman]'s description and the DoW description give the strong impression (although we would have to see the actual contract to be certain) that how their contract works is that the model is made available without any legal restrictions ("all lawful use") but that there is a "safety layer", which I think amounts to model refusals, that prevents the model from completing certain tasks or engaging in certain applications.
"Safety layer" could also mean something that partners such as Palantir [Anthropic's business partner for serving U.S. agency customers] tried to offer us during these negotiations, which is that they on their end offered us some kind of classifier or machine learning system, or software layer, that claims to allow some applications and not others. There is also some suggestion of OpenAI employees ("FDEs" [shorthand for forward deployed engineers]) looking over the usage of the model to prevent bad applications.
Our general sense is that these kinds of approaches, while they don't have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater. The basic issue is that whether a model is conducting applications like mass surveillance or fully autonomous weapons depends substantially on wider context: a model doesn't "know" if there's a human in the loop in the broad situation it is in (for autonomous weapons), and doesn't know the provenance of the data it is analyzing (so doesn't know if this is US domestic data vs foreign, doesn't know if it's enterprise data given by customers with consent or data bought in sketchier ways, etc).
We also know — those in safeguards know painfully well — that refusals aren't reliable and jailbreaks are common, often as easy as just misinforming the model about the data it is analyzing. An important distinction here that makes it much harder than the safeguards problem is that while it's relatively easy to, for example, determine if a model is being used to conduct cyberattacks from inputs and outputs, it's very hard to determine the nature and context of the cyberattacks, which is the kind of distinction needed here. Depending on the details this task can be difficult or impossible.
The kind of "safety layer" stuff that Palantir offered us (and presumably offered OpenAI) is even worse: our sense was that it was almost entirely safety theater, and that Palantir assumed that our problem was "you have some unhappy employees, you need to offer them something that placates them or makes what is happening invisible to them, and that's the service we provide".
Finally, the idea of having Anthropic/OpenAI employees monitor the deployments is something that came up in discussion within Anthropic a few months ago when we were expanding our classified AUP [acceptable use policy] of our own accord. We were very clear that this is possible only in a small fraction of cases, that we will do it as much as we can, but that it's not a safeguard people should rely on and isn't easy to do in the classified world. We do, by the way, try to do this as much as possible, there's no difference between our approach and OpenAI's approach here.
So overall what I'm saying here is that the approaches OAI [shorthand for OpenAI] is taking mostly do not work: the main reason OAI accepted them and we did not is that they cared about placating employees, and we actually cared about preventing abuses. They don't have zero efficacy, and we're doing many of them as well, but they are nowhere near sufficient for purpose. It is simultaneously the case that the DoW did not treat OpenAI and us the same here.
We actually attempted to include some of the same safeguards as OAI in our contract, in addition to the AUP which we considered the more important thing, and DoW rejected them with us. We have evidence of this in the email chain of the contract negotiations (I'm writing this with a lot to do, but I might get someone to follow up with the actual language). Thus, it is false that "OpenAI's terms were offered to us and we rejected them", at the same time that it is also false that OpenAI's terms meaningfully protect them against domestic mass surveillance and fully autonomous weapons.
Finally, there is some suggestion in Sam/OpenAI's language that the red lines we are talking about, fully autonomous weapons and domestic mass surveillance, are already illegal and so an AUP about these is unnecessary. This mirrors and seems coordinated with DoW's messaging. It is however completely false. As we explained in our statement yesterday, the DoW does have domestic surveillance authorities, that are not of great concern in a pre-AI world but take on a different meaning in a post-AI world.
For example, it is legal for DoW to buy a bunch of private data on US citizens from vendors who have obtained that data in some legal way (often involving hidden consents to sell to third parties) and then analyze it at scale with AI to build profiles of citizens, their loyalties, movement patterns in physical space (the data they can get includes GPS data, etc), and much more.
Notably, near the end of the negotiation the DoW offered to accept our current terms if we deleted a specific phrase about "analysis of bulk acquired data", which was the single line in the contract that exactly matched this scenario we were most worried about. We found that very suspicious. On autonomous weapons, the DoW claims that "human in the loop is the law", but they are incorrect. It is currently Pentagon policy (set during the Biden admin[istration]) that a human has to be in the loop of firing a weapon. But that policy can be changed unilaterally by Pete Hegseth, which is exactly what we are worried about. So it is not, for all intents and purposes, a real constraint.
A lot of OpenAI and DoW messaging just straight up lies about these issues or tries to confuse them.
I think these facts suggest a pattern of behavior that I've seen often from Sam Altman, and that I want to make sure people are equipped to recognize:
He started out this morning by saying he shares Anthropic's redlines, in order to appear to support us, get some of the credit, and not be attacked when they take over the contract. He also presented himself as someone who wants to "set the same contract for everyone in the industry" — e.g. he's presenting himself as a peacemaker and dealmaker.
Behind the scenes, he's working with the DoW to sign a contract with them, to replace us the instant we are designated a supply chain risk. But he has to do this in a way that doesn't make it seem like he gave up on the red lines and sold out when we wouldn't. He is able to superficially appear to do this, because (1) he can sign up for all the safety theater that Anthropic rejected, and that the DoW and partners are willing to collude in presenting as compelling to his employees, and (2) the DoW is also willing to accept some terms from him that they were not willing to accept from us. Both of these things make it possible for OAI to get a deal when we could not.
The real reasons DoW and the Trump admin do not like us is that we haven't donated to Trump (while OpenAI/Greg [Brockman, OpenAI's president] have donated a lot), we haven't given dictator-style praise to Trump (while Sam has), we have supported AI regulation which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce "safety theater" for the benefit of employees (which, I absolutely swear to you, is what literally everyone at DoW, Palantir, our political consultants, etc, assumed was the problem we were trying to solve).
Sam is now (with the help of DoW) trying to spin this as we were unreasonable, we didn't engage in a good way, we were less flexible, etc. I want people to recognize this as the gaslighting it is.
Vague justifications like "person X was hard to work with" are often used to hide real reasons that look really bad, like the reasons I gave above about political donations, political loyalty, and safety theater. It's important that everyone understand this and push back on this narrative at least in private, when talking to OpenAI employees.
Thus, Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this: he is trying to make it more possible for the admin to punish us by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing.
I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI's deal with DoW as sketchy or suspicious, and see us as the heroes (we're #2 in the App Store now!). [Anthropic's Claude chatbot later rose to no. 1 on one of Apple's App Store download rankings.] It is working on some Twitter morons, which doesn't matter, but my main worry is how to make sure it doesn't work on OpenAI employees.
Due to selection effects, they're sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees.
Not the first time, not the last time, add it to the list of shit he's done that should put him in a little cell for the rest of his life.
I’m sure Anthropic has signed up enough new revenue this week, in response to this debacle, to cover it. Where they’re actually screwed is if the gov follows through and declares Anthropic a supply chain risk.
1. Stargate seemed to require a dedicated press conference by the President to achieve funding targets. Why risk that level of politicization if it didn't?
2. Greg Brockman donated $25mil to a Trump MAGA Super PAC last year. Why risk so much political backlash for a low-leverage return of $200m on $25m spent?
3. During WW2, military spending shot from 2% to 40% of GDP. The administration is requesting a $1.5T military budget for FY2027, up from $0.8T for FY2025. They have made clear in the past 2 months that they plan to use it and are not stopping anytime soon.
If you believe "software eats the world", it is reasonable to expect the share of total military spend captured by software companies to increase dramatically over the next decade. $100B (roughly a 10% capture) is a reasonable possibility for the domestic military AI TAM in FY2027 if the spending increase is approved (so far, Republicans have not broken rank with the administration on any meaningful policy).
If US military actions continue to accelerate, other countries will also ratchet up military spend, largely on nuclear arsenals and AI drones (France has already announced an increase in its arsenal). This further increases the addressable TAM; a rough back-of-envelope follows below.
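For concreteness, here is the arithmetic using only the figures stated above; the budget numbers are the comment's own and the 10% capture is an assumption, not official data:

    # Rough TAM arithmetic under the comment's stated assumptions.
    budget_fy2025 = 0.8e12   # enacted FY2025 budget, per the comment
    budget_fy2027 = 1.5e12   # requested FY2027 budget, per the comment
    software_share = 0.10    # assumed capture by software/AI vendors

    tam_full = budget_fy2027 * software_share                   # ~$150B of the full budget
    tam_new = (budget_fy2027 - budget_fy2025) * software_share  # ~$70B of the increase alone

    print(f"${tam_full / 1e9:.0f}B vs ${tam_new / 1e9:.0f}B")
    # The ~$100B figure above sits between these two readings of a "10% capture".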
Given the competition and lack of moat in the consumer/enterprise markets, I am not sure there is a viable path for OpenAI to cover its losses and fund its infrastructure ambitions without becoming the preferred AI vendor for a rapidly increasing military budget. The devices bet seems to be the most practical alternative, but there is far more competition there, both domestically (Apple, Google, Motorola) and globally (Xiaomi, Samsung, Huawei), than there is for military AI.
Having run an unprofitable P&L for a decade, I can confidently state that a healthy balance sheet is the only way to maintain and defend one's core values and principles. As the "alignment" folks in the AI industry are likely to learn, the road to hell (aka a heavily militarized world) is oft paved with the best intentions.
> As the "alignment" folks in the AI industry are likely to learn
I will push back here. Dario & co are not starry-eyed naive idealists as implied. This is a calculated decision to maximize their goal (safe AGI/ASI.)
You have the right philosophy on the balance sheet side of things, but what you're missing is that researchers are more valuable than any military spend or any datacenter.
It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.
There is no substitute for sheer IQ:
- You can't buy it (god knows Zuck has tried, and failed to earn their respect).
- You can't build it (yet.)
- And collaboration amongst less intelligent people does not reliably achieve the requisite "Eureka" realizations.
Had Anthropic gone forth with the DoD contract, they would have lost this top crowd, crippling the firm. On the other hand, by rejecting the contract, Anthropic's recruiting just got much easier (and OAI's much harder).
Generally, the defense crowd have a somewhat inflated sense of self-worth. Yes, there's a lot of money, but very few highly intelligent people want to work for them. (Almost no top talent wants to work for Palantir, despite the pay.) So, naturally:
- If OpenAI becomes a glorified military contractor, they will bleed talent.
- Top talent's low trust in the government means Manhattan Project-style collaborations are dead in the water.
As such, AGI will likely emerge from a private enterprise effort that is not heavily militarized.
Finally, the Anthropic restrictions will last, what, 2.5 more years? They are being locked out of a narrow subset of use cases (DoD contract work only; vendors can still use it for all other work, and Hegseth's reading of the SCR designation is incorrect) and have farmed massive reputation gains with both top talent and the next administration.
I don’t know the answers to these questions, but if the answer is “yes” to at least 1 or 2, then I think the equation flips quite a bit. This is what I’m seeing in the world right now, and it’s disconcerting:
1. Ukraine and Russia have been in a war that has dragged on far longer than most people would have guessed. This has created a divide in political allegiance within the United States and Europe.
2. We captured the leader of Venezuela. Cuba is now scared they are next.
3. We just bombed Iran and killed their supreme leader.
4. China and the US are, of course, in a massive economic race for world power supremacy. The tensions have been steadily rising, and they are now feeling the pressure of oil exports from Iran grinding to a halt.
5. For the past couple of days, Macron has been trying to quell tensions between Israel and Lebanon.
I really hope we are not headed into war. I hope the fact that we all have nukes and rely on each other's supply chains deters one. But man, does it feel like the odds are increasing in favor of one, and man, does that seem to throw a wrench in this whole thing with Anthropic vs. OpenAI.
To be accurate: by all reporting, Israel killed Iran's leadership.
Yes, likely enabled by US intelligence, but the one who pulls the trigger does matter.
The one who pulled the trigger is irrelevant here, because both have pulled the trigger hundreds or thousands of times in the past few days, dividing up targets between them for the joint operation.
I'm aware that internet forums like to play fast and loose with insinuations, but facts are facts.
It sounds like you think this means something?
Obviously it doesn't when we're talking about an administration that openly breaks laws, much less EOs, and issues whatever EOs they want saying whatever they want, even in violation of previous EOs. There aren't even any repercussions to the president "violating an EO".
So, the pedantry here is irrelevant. The two parties are on the same team, working towards the same goal, doing the same things, divvying up the list of targets to strike.
reminder that trump has been flirting with just continuing in power (2028 hats and talks about a third term) and is responsible for trying a coup last time he lost.
personally I think there's a possibility where he'll just declare martial law and stay in power at the end of his term.
This is a massive cope imo. The reason that the AI industry is so incestuous is just because there are only a handful of frontier labs with the compute/capital to run large training clusters.
Most of the improvements that we’ve seen in the past 3 years are due to significantly better hardware and software, just boring and straightforward engineering work, not brilliant model architecture improvements. We are running transformers from 2017. The brilliant researchers at the frontier labs have not produced a successor architecture in nearly a decade of trying. That’s not what winning on research looks like.
Have there been some step-change improvements? Sure. But by far the biggest improvement can be attributed to training bigger models on more badass hardware, and hardware availability to serve it cheaply. To act like the DoD isn’t going to be able to stand up pytorch or vllm and get a decent result is hilarious: the reason you use slurm and MPI and openshmem is because national labs and DoD were using it first. NCCL is just gpu accelerated scope-reduced MPI. nvshmem is just gpu accelerated scope-reduced openshmem.
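To make the mapping concrete, here's a minimal sketch (illustrative only; assumes mpi4py and a CUDA build of PyTorch, launched with the usual srun/torchrun environment variables) of the same allreduce collective in both worlds:

    # Classic HPC: CPU allreduce via MPI (mpi4py)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    local = np.array([float(comm.Get_rank())])
    total = np.empty_like(local)
    comm.Allreduce(local, total, op=MPI.SUM)   # sum across all ranks

    # DL training: same semantics, GPU-resident, via NCCL
    import torch
    import torch.distributed as dist

    dist.init_process_group(backend="nccl")    # reads RANK/WORLD_SIZE from env
    t = torch.full((1,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)   # same contract as MPI.Allreduce

Gradient synchronization in data-parallel training is exactly this call applied to the gradient buffers, which is why the stack feels so familiar to anyone coming from the national-lab world.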
If anything, DoD doesn’t have the inference throughput requirements that the unicorns have and might just be able to immediately outperform them by training a massive dense model without optimizing for time to first token or throughput. They don’t have to worry about if the $/1M tokens makes it economically feasible to serve, which is a primary consideration of the unicorns today when they’re choosing their parameter counts. They can just rate limit the endpoint and share it, with a 2 hour queue time.
The government invented HPC, it’s their world and you’re just playing in it.
> Generally, the defense crowd have a somewhat inflated sense of self-worth.
/eyeroll but nobody can do what you do!
The dense-model argument is self-defeating long term. Sparsity (MoE etc.) lets you build a smarter model at the same compute budget, so going dense because you can afford to waste FLOPs is how you fall behind, because you never come up with the step-function improvements needed.
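For anyone who hasn't seen the mechanics, here's a toy sketch of top-k routing (plain PyTorch, made-up sizes, not any particular lab's actual architecture): parameters scale with the number of experts, while per-token compute stays at roughly k/N of the full expert bank.

    import torch
    import torch.nn as nn

    class TopKMoE(nn.Module):
        # N experts, but each token runs through only k of them.
        def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts))

        def forward(self, x):                      # x: (tokens, d_model)
            weights, idx = self.router(x).topk(self.k, dim=-1)
            weights = weights.softmax(dim=-1)      # normalize over chosen experts
            out = torch.zeros_like(x)
            for slot in range(self.k):             # loop form for clarity, not speed
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

Eight experts with top-2 routing holds eight FFNs' worth of parameters while spending roughly two FFNs' worth of per-token FLOPs; that ratio is the budget arithmetic behind the sparsity point above.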
Sure, the DoD invented HPC, but it also invented the internet, and then the private sector made it actually useful.
So yeah, they bet a whole lot on “look at us, we have morals”.
Also, they got a huge PR win, and jumped to #1 on the Apple App Store. Consumer market share is going to decide which of the AI companies is the market leader, not fickle government contracts.
If you look at what generates cash, it's corp-to-corp. That's true across most industries. While some markets are consumer-dominated, LLMs have enormous business-facing revenue potential. The consumer market is a gnat in comparison.
As opposed to all those famous ethical battles where there's nothing in it for you to do the wrong thing?
Not a chance. The DoD has massive pockets which are INCREDIBLY SPREAD OUT. You can't overstate how spread out this money is. The DoD has maybe a 64-GPU cluster, and ALMOST NO ONE USES IT FOR DEEP MODEL TRAINING. Even contractors end up working with DGX boxes to do all their training.
As of 2023, I was doing the largest deep learning training runs of anyone I have known in the industry, and I've been in the industry for 20 years. The second-best groups behind mine were using local 4-GPU machines that they had to purchase on contract.
There's no way the DoD can train these models themselves, not even close. They are COMPLETELY DEPENDENT ON INDUSTRY. I was the PM for a DARPA program in 2023 and saw the SAME PROBLEM. They had no compute, or would rely on university compute if a program had a university partner. YOU HAVE NO IDEA HOW FAR BEHIND THE DOD IS IN THIS SPACE.
If you've spent even a small amount of time with LLMs, you'll know that these security measures are just window dressing.
i.e. he worries that OpenAI employees could also be gaslighted by Altman
Anthropic has the lowest attrition rate,
and just yesterday an OpenAI employee left and joined Anthropic.
I know most of you here don't quite have the imagination to see it. But feel free to screenshot my post and let's talk in a year ;)
OpenAI is the best fit for the USA's interests. Sam is smart enough to be politically flexible and to keep his mouth shut rather than close doors of opportunity.
Musk's views are the best fit for the world's interests, but he's spread really thin, and xAI is still subpar compared to OpenAI, Anthropic, and Google. He's also been playing it safe lately, trying to stay politically neutral after his stint with the Republicans.
I'm rooting for Anthropic given their product excellence, but it pains me that the other side of it is effective altruism, the politics of the Dems, and so on.
Anthropic might not sign up with DoD but they definitely still live in a glass house.
Also, it's extremely evident that we live in a post-truth world. Accusations of lying don't have any teeth anymore, especially in the post-law government of America.
Source: https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b1...
Just because you hate Altman doesn't mean everyone else does! Most people just know him as the guy who makes ChatGPT which most people like.
EDIT: Also, it doesn't help to brag about how this is actually good because now they are getting app downloads! People sympathize with victims of unfair situations. They don't like seeing people take advantage of those unfair situations, though. No one has ever found the welfare recipient who brags about their welfare sympathetic.
Which is intended to muddy the waters about Anthropic’s actual position vs OpenAI’s, and to portray himself as a conciliator (for the audience of DoD/Trump) who is still bound by equally strong ethics (as a fig leaf for OpenAI employees sympathetic to Anthropic). All to swoop in and land a big contract from the same people he is making a show of “supporting” in public.
I’d be pretty pissed too, tbh. Like, should he instead be thanking Sam effusively for being a manipulative slimeball acting entirely within his own self interest?
If as he says Sam’s comments are actually damaging Anthropic’s credibility/bargaining position with his public commentary then trying to change the popular narrative about what OpenAI/Sam are doing is a reasonable tactic.
As for your welfare analogy, I'm kinda struggling to see how it maps onto the participants in the current scenario, or what lesson it's intended to imply.
Saying "what he's saying is straight-up lies" is no more evidence-backed than Altman's claim that he asked the DoD to give Anthropic the same deal as OAI and to avoid the SCR designation.
And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."
He also claimed that they would build rules into the model the DoD would use, preventing misuse. Aka he claims OpenAI will quickly solve alignment and build it right in...I wouldn't hold my breath.
Probably because most don't want to end up in Russia?
It wasn't as if there weren't any other contractors like Snowden, but there were no other whistleblowers like Snowden.
And where did that leave him? In a country far away from his motherland, worried about his safety, called God knows what by the country back home, while most ordinary people don't even care.
Snowden didn't do it for the money, he did it for what he felt was right and that's so rare.
It's so sad that when I searched for Snowden on YouTube, the first thing I found was an ex-CIA agent claiming Snowden wasn't innocent because he had befriended Russia. But he only did that because the US would literally have killed him and made an example out of him for blowing the whistle on such large-scale mass surveillance.
“What kind of asshole reveals the fact we’re the assholes, then doesn’t let us kill him!” is one heck of a comment I found.
Also: we will charge the whistleblower with a capital crime, but we will not take any action against the act that was whistleblown in the first place (:
He was stranded in Russia because the US had cancelled his passport.
https://en.wikipedia.org/wiki/Evo_Morales_grounding_incident
"All lawful use" is a tautology with fascists, because by definition they cannot break laws.
Soviet Union - The show trials of the 1930s were conducted with full legal apparatus: confessions, judges, verdicts. Stalin's purges operated through legally constituted troikas. Entirely "lawful" by Soviet law.
East Germany (DDR) - The Stasi's surveillance and harassment programmes were codified in law. When the wall fell, many Stasi officers genuinely argued their conduct was legal under GDR statute: a defence that West German courts largely rejected.
Castro's Cuba - Mass executions after the revolution were conducted by legally constituted revolutionary tribunals. Castro explicitly defended this on legality grounds when challenged by foreign press in 1959.
Chavez/Maduro's Venezuela - Suppression of opposition media, jailing of political opponents was consistently defended as operating within Venezuelan law, which was progressively rewritten to make it so. Classic self-referential legality.
Mao's Cultural Revolution - The revolutionary committees had legal standing. Persecution of intellectuals and landlords proceeded through formal (if kangaroo) legal processes.
> if the comment you've posted responds meaningfully to the discussion at hand.
https://mirror.org/
If mirror dot org actually existed, you might want to look into it, because your long list of examples has exactly one related to Germany (and not 1930s Germany at that), and the rest have nothing to do with the political definition of "fascism"?
Your point about legality was valid, but you're undermining it with the sarcasm.
https://en.wikipedia.org/wiki/Ur-Fascism
https://www.rollingstone.com/politics/politics-news/trump-su...
DoD: I will make it legal.
Ignoring the definition, what would be required for individual alignment is exactly the same as for collective alignment. The only difference is the goals and who writes them; for the LLM, it amounts to somehow being forced to follow those rules no matter what.
> However, only an act of Congress can legally and formally change the department's name and secretary's title, so "Department of Defense" and "secretary of defense" remain legally official.
https://en.wikipedia.org/wiki/United_States_Department_of_De...
[1] https://privacy.openai.com/policies?modal=take-control
The bigger picture is that the DoW got what it wanted and it got it by threatening one company while the other did its bidding.
See PRISM.
https://www.wyden.senate.gov/issues/domestic-surveillance-re...
He may not be perfect on everything, but elect more people like him and it starts moving the needle. Or elect some more that are even more opposed to some of these things. It doesn't happen overnight. Change is difficult.
I agree, though notice that the GOP/MAGA have made, and continue to make, enormous changes. The difference is that they believe they can do it, while others sit around talking about hopelessness and powerlessness. The only difference is belief.
You're conceding that the name has already changed, without voting.
> It doesn't change if the government wants mass surveillance.
That can be prevented by Congress with enough political will.
Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?
I wonder how long the American public can maintain the self-delusion that elections are anything but theater for the naive, preserving the pretense that the public has any say in things that matter.
How much has the current administration asked the public about going to war with Iran?
Here is the 2026 Senate map [1]. Do you suggest any of them will flip over Iran? (I don’t. The folks who regularly vote simply don’t show any sign that this is a priority. Folks who stay at home grumbling don’t matter.)
[1] https://en.wikipedia.org/wiki/2026_United_States_Senate_elec...
He didn't win the primaries though. It would have amounted to something if he got enough votes.
2) Even if he had won the primaries, there is still no guarantee that it would have amounted to anything.
First, he might not have won the election (the mainstream media and the whole of the ruling elites were heavily against him). And even if he had won, he might not have been able to do much against the permanent state.
I still think the main cause of Trump's wins is the deep disillusionment of Democratic voters with Obama's failure (inability/unwillingness) to effect meaningful change.
Sadly, it is also factually correct (i.e. not delusional).
Which of my statements are you contesting?
From my point of view, your stance (play fairly, according to the rules set by your stronger opponent) is delusional. Note that the opponent is not 'republicans', but the whole ruling elites.
And no, I can't help you, I am not USian, just an outside observer. Sadly, due to its weight, whatever USA does, heavily influences everybody else as well.
No, it isn’t. Sanders’ supporters didn’t have the votes. That’s a fact.
If people believe in something, they should call their electeds and vote. The fact that a lot of people with a certain confluence of views (privacy, anti-war, et cetera) are too lazy to do either (regardless of post-hoc rationalization), but not self-aware enough to stop complaining about it, is delusional cynicism.
I said the leadership of the democratic party did dirty tricks to prevent him winning.
The mainstream media was also against him.
Not anywhere close to a level playing field.
Note that I am not against voting or calling your elected officials and all the related stuff. That is necessary, but sadly far from sufficient. If you think it is sufficient, you are delusional.
Your subsequent generalizations are lazy and unsubstantiated, in fact they fit the classical smear patterns established by the mainstream media.
But still, ultimately, turnout was turnout. Media saying mean things about your side isn't a real excuse; Trump has been saying the same for a decade.
> they fit the classical smear patterns established by the mainstream media
Of course they must. In the meantime, the issues I care about seem decently reflected (outside privacy and war, where I concede most Americans who share my views are lazy, delusional and nihilistic). I’ve even had the opportunity to help write some state and federal legislation. So I guess I should be okay with the lack of political competition.
https://en.wikipedia.org/wiki/2024_Democratic_Party_presiden...
Skill issue. Run your candidate. Convince people to vote for them.
> How much has the current administration asked the public about going to war with Iran?
THE ELECTIONS are how the public weighs in.
That's the second box only. There's also the soapbox (that you also referred to), the jury box and ultimately the ammo box.
But you are saying: You lost fair and square, wait 4 years to have any say in what is going on.
Re: THE ELECTIONS are how the public weighs in.
When the choice is between Tweedledee and Tweedledum, the public's choice is meaningless.
To say nothing of politicians outright shamelessly lying (e.g. Trump campaigning on 'no more wars').
Sorry I didn't invent the idea that there are federal elections every two years, I'm just telling you that you have to win them. Bonus points: this is also how you can change the election schedule or political system!
If you're saying both candidates were bad when one was Trump, and the other was Hillary, Kamala, or Joe, then you don't have very good judgement. I agree Trump lying about not starting a war was bad. Many of us have said for years that he is a terrible liar. Please help us.
Trump is monstrously bad (= forces the shit to hit the fan NOW); the Democratic alternatives were just 'normally' bad (= continue the same old crap, driving the shit ever closer to the fan while ignoring the looming disaster).
Uh, yeah? I voted for Biden/Harris.
And in any case, focusing almost exclusively on one race is part of the problem. Where I live, we also had a Dem primary for the house district, and a more electable candidate won - and then went on to win in the general. It was one of the very few red->blue flips in 2024.
Our former congresswoman, incidentally:
https://newrepublic.com/post/207234/trump-labor-secretary-ch...
Then there are all the races for school boards, city council, county commission and all those things that provide the base and the bench to build off of.
... But the government flooding cities with thousands of masked thugs with a license to do whatever they want... has so far been an entirely Republican thing.
There are more colours to the world than pure black and pure white. There are also a million shades of grey in between, and most of us have the ability to distinguish between them.
https://usa.gov/renounce-lose-citizenship
If you have so little faith in them that they won’t honour the privacy controls you should also delete your non-consumer account too.
Verification requires access to classified logs. These logs would attract the spies of the whole world. Even if these logs are in principle for "past actions", in practice past logs (for war games, for example) would compromise future strategy.
Since these manual audits are too risky, the only alternative is to hard-code limits into the AI. But are we ready to trust an AI to "judge" a mission and refuse to execute it during a crisis?
Anthropic wanted technical enforcement; the Pentagon wanted trust.
It’s a choice between two bad options: an unaccountable military and an unreliable AI kill switch. They are both very dangerous, just in different ways.
But besides Sam Altman, this whole episode has made me totally and completely lose all respect for Paul Graham. I used to really idolize pg, and I really used to like his essays, but over the years I've found his essays increasingly displayed a disturbing lack of introspection, like they'd always seem to say that starting a startup is the best thing anyone can do, and if you're not good at startups then you kind of suck.
But his continued support of Altman in this instance (see https://x.com/paulg/status/2027908286146875591, and the comment in that thread where he replies "yes") is just so extra disappointing and baffling. First, his big commendation for Altman is that he's doing an AMA? Give me an f'ing break. When someone is a great spin doctor, I'm not going to commend them for doing more spinning. It's like he has total blinders on and is unwilling to see how sama's actions in this instance are so disgusting and duplicitous. Maybe subconsciously he knows he's responsible for really launching sama into the public consciousness, so he's now just incapable of seeing the undeniably shitty things sama has done.
Oh well, I guess it's just another tech leader from the late 90s/early 00s who has just shown me he's kind of a shitty person like a lot of us.
“Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.”
Seems more like Sam Altman, who is known to get his way, than Dario.
When combined with a somewhat paradoxical large ego and occasionally fanciful reshaping of his own life story or exaggeration, it's entirely plausible (if not likely) that this was in reality a brief intrusive thought or a partially realized fantasy blown up into a catchy anecdote that better fit his self-image of being unable to control his typically human qualities of anger and envy.
If it was Sam Altman, we'd have heard the story from the guy he tried to poison, who instead of filing a police report thought it showed Sam was a real go-getter and offered him his first job on the spot as VP at the company he founded (later forced out by Sam replacing him as CEO, but still considers him a friend with no hard feelings).
As you suggest, it is easy to imagine Altman in the same hot seat. Never mind his sexual orientation, which the Republican theocrats will eventually use against him as surely as the knives came out for Ernst Röhm.
There were people working in government who successfully attacked Oppenheimer for personal and/or policy reasons, people who stood by, and people who unsuccessfully supported him, voted to clear him, or condemned the proceedings.
Oppenheimer still paid the price, and arguably, the risks to someone like him today are considerably higher, as the current administration isn't exactly like Eisenhower's.
Nevertheless it's reductionist, reifying sentimentality to talk about "the government" turning "viciously" on someone who "served them well" because they are defying its agenda. The government isn't a character in Game of Thrones. The responsibility lies with the specific individuals who attacked him, and those who stood by.
I'm sure that was of great comfort to Oppenheimer, as it will be to Altman and/or Amodei. "It's not you, it's us."
1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, which is a bad outcome, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks.
That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point:
2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this.
What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the app store. The benefits from this stance look to be materializing much more quickly than anyone in favour of his courage might have hoped.
They chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week.
Not adding up
Wikileaks and Assange got popular too. What happened to them?
The State Dept and CIA do exactly what Assange did. They pick and choose whom to target with leaks. They get away with it (mostly, even when exposed) because they are officially in power. Assange was not in power. If you take a moral position, do it when you have real power.
If the condition for getting real power is having no morals, this is hard to accomplish.
If we consider AIs to be "force multipliers," as we do with coding agents, it's easy to see how any AI company can harm the republic if the government it is serving is unethical and amoral.
If US & A really goes full-Huawei on Anthropic, they can't IPO. It's an existential crisis for them. I think they can survive in some form, somehow, because their model is really good, probably the best.
And in other times, I would think the US government had sufficient intellectual horsepower to not cut off its own dick, and the golden goose's head, over some idiotic morning-drinker road-rage type beef. But these are not other times. These are these times.
3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves: "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking.
It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding.
App Store rankings are meaningless. I have Claude, ChatGPT, and Gemini all in the top five, with an email app at #1 and a postal tracking app (for a very small provider) at #3.
Also, maybe I'm not seeing the message or connection here... That myth isn't really about who has power or not, right? It's kind of just a trite little "why you should do good even when no one is watching" thing. It just serves Socrates in his argument with Thrasymachus, and leads us into Book 2, where it really gets going with Glaucon and all that. This is from memory, so I might be a little off.
The story is asking: what's the source of morality? Who decides where the lines are? And it's not scientists. Science produces the Ring.
> According to the tradition, Gyges was a shepherd in the service of the king of Lydia; there was a great storm, and an earthquake made an opening in the earth at the place where he was feeding his flock. Amazed at the sight, he descended into the opening, where, among other marvels, he beheld a hollow brazen horse, having doors, at which he stooping and looking in saw a dead body of stature, as appeared to him, more than human, and having nothing on but a gold ring; this he took from the finger of the dead and reascended. Now the shepherds met together, according to custom, that they might send their monthly report about the flocks to the king; into their assembly he came having the ring on his finger, and as he was sitting among them he chanced to turn the collet of the ring inside his hand, when instantly he became invisible to the rest of the company and they began to speak of him as if he were no longer present. He was astonished at this, and again touching the ring he turned the collet outwards and reappeared; he made several trials of the ring, and always with the same result—when he turned the collet inwards he became invisible, when outwards he reappeared. Whereupon he contrived to be chosen one of the messengers who were sent to the court; where as soon as he arrived he seduced the queen, and with her help conspired against the king and slew him, and took the kingdom. Suppose now that there were two such magic rings, and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a God among men. Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point. And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust.
https://gutenberg.org/cache/epub/1497/pg1497.txt
This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement.
Secret FISA court decisions will say the use is lawful, but you’ll never get to read or challenge those decisions.
Just good ol' fashioned grifting mixed with a bit of government corruption.
This country has been boiling the frog of graft, grift, and corruption for too long.
I believe this understanding is correct. The issue many people have these days with the Dept. of War, and with most of the Trump admin, is that they have little respect for laws. They only follow the ones they like and openly ignore the ones that are inconvenient.
Dept of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable.
I recommend reading Yuval Noah Harari's Nexus for a deep discussion around this.
He makes the point that what makes this AI age much more dangerous for mass surveillance isn't just the collection of data, which has indeed been possible for a while, but the new ability to have AI sift through that enormous volume of information, an ability which until recently has not been possible in a meaningful way without a ton of manual work to support it.
Older attempts at mass control of a population already involved mass surveillance, even in a large amount of detail, but even when capturing in detail all citizens' activities, there were just not enough people around to be able to dig through that and analyze it. This has been somewhat true even with the help of computers, though computers have certainly already been making this easier.
But now you can just give all that data to an AI with your instructions, and it'll apply some sort of "judgement" on your behalf, completely autonomously, and even perform actions against those folks it finds, again autonomously, without needing to manually build a whole infrastructure to do that with manual rules. That's a very meaningful upgrade for someone wanting to control a population.
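Mechanically, the "army of analysts" collapses into a loop. A minimal sketch, where llm() is a purely hypothetical stand-in for any instruction-following model API (no real service, endpoint, or product implied):

    def llm(prompt: str) -> str:
        # Hypothetical stand-in: a real deployment would call a hosted
        # model here. Stubbed out so the sketch runs as-is.
        return "PASS"

    def sift(records, instruction):
        # Apply a natural-language "judgement" to every record, with no
        # hand-written rules and no human analysts in the loop.
        flagged = []
        for rec in records:
            verdict = llm(f"{instruction}\n\nRecord:\n{rec}\n\nAnswer FLAG or PASS.")
            if verdict.strip().upper().startswith("FLAG"):
                flagged.append(rec)
        return flagged

    # The instruction itself is the whole "infrastructure", e.g.:
    # sift(all_records, "Flag anyone critical of the government.")

The point isn't the ten lines of code; it's that the marginal cost of applying arbitrary judgement to every record drops to the price of inference.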
It's like saying kids having internet-connected devices with built-in cameras doesn't increase the probability of sexting because they could do the same with film cameras and a fax machine.
At the same time, it is expressly illegal in some circumstances; that was the whole core of the Snowden revelations. The NSA and CIA are expressly curtailed from doing that by law — there are cases where they may surveil citizens with a court order, but not "mass" surveillance. There are some restrictions on the military along those same lines.
Keywords: Executive Order 12333, FISA, National Security Act, Posse Comitatus Act
Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may be quite true, but there would still need to be more to the story for it to make sense.
Ex: DoW officials are stating that they were shocked that their vendor checked in on whether signed contractual safety terms were violated: They require a vendor who won't do such a check. But that opens up other confusing oversight questions, eg, instead of a backchannel check, would they have preferred straight to the IG? Or the IG more aggressively checking these things unasked so vendors don't? It's hard to imagine such an important and publicly visible negotiation being driven by internal regulatory politicking.
I wonder if there's a straighter line for all these things. Irrespective of whether folks like or dislike the administration, they love hardball negotiations and to make money. So as with most things in business and government, follow the money...
"Find all of the terrorists in this photo", "Which targets should I bomb first?"
Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. DoD required a specially trained product without limitations.
If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)?
Also, the core mission of the military is not "killing its adversaries through any means necessary". It is to defend state interests. Some people have a belief that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong. Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome. It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly for everyone, because nobody has any other reason to give you what you want.
The actual military is not evil. Your conception of it is.
> The actual military is not evil. Your conception of it is.
You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically elected government?
To give a different hypothetical example: should Microsoft be allowed to put terms in its Windows contracts with the government, stipulating that Windows cannot be used to create or enforce certain tax policies or regulations that Microsoft disagrees with? Windows is everywhere, and I'm sure pretty much every government process touches Windows at some point, so such a term would carry a lot of power.
I don't think "control or veto" is fair. Anthropic is not trying to prevent the US government from creating fully autonomous killbots based on inadequate technology. They are only using contract law to prevent their own stuff from being used in that way.
But that aside, my opinion is that to a first order approximation, yes a company should very much be able to have say in its contract negotiations with any party including the government. It's very similar to the draft. I don't believe a draft is ethical until the situation is extreme, and there ought to be tight controls on what it takes to declare the situation to be that extreme. At any other time, nobody should be forced to join the military and shoot people, and corporations (that are made of people) should not be forced to have their product used for shooting people.
A corporation is a legal fiction to describe a group of people. Some restrictions can be placed on corporations in exchange for the benefits that come from that legal fiction, but nothing that overrides the rights of its constituent people.
Governments are made of people too. Again, a subset of people are given some powers in order to better achieve the will of the people, but with tight controls on those powers to keep the divergence to a minimum. (Of course, people will always find the cracks and loopholes and break out of their constraints, but I'm talking about design not real-world implementation here.)
So to look at your hypothetical, first I'd say it's not very different from the question of whether an individual person should be forced to personally enforce tax policy. Normally, I'd say no. There are many situations where the government needs more say and authority in such things, but that must only be achieved via representatives of the people passing laws to allow such authority. Other than that, yes: I believe a company should be able to negotiate whatever contract terms it wants. In a democracy, we are not subjects of a controlling government; the government is an extension of us.
In practical terms, if Microsoft were to insist on that contract stipulation, the government would not agree to the contract and would award its business to someone else. If the government were especially out of control and/or unethical, it might punish Microsoft with regulations or declarations of supply chain risk or whatever, but that is clearly overstepping its bounds and ought to be considered illegal if it isn't already. The usual fallback would be that the people would throw the people perpetrating that out on their asses. That's the "democratically-elected part".
Obviously, Microsoft would be stupid to insist on such a thing in their contract, and its employees would probably lose all confidence in the corporate leadership. Most likely, they'd leave and start Muckrosaft next door that rapidly develops a similar product and sells it to the government under a reasonable contract.
Basically, I'm always going to start from people first, and use organizations and laws only in order to achieve the will of the people. The fact that the people are stupid does make that harder, but the whole point of democracy is that we'll work out the right balance over time.
> The threat of violence is much more powerful than actually committing violence.
While I agree with this statement, the threat only works if, from time to time, you apply violence to demonstrate your capability and willingness to actually do it. And the US is really good at actually being violent, so others don't even think about doing something against it; at least the majority of countries don't, anyway.
Now apply the same logic to the current Iran war.
Al Jazeera has some very good insights into this, and the gist of it is: the Iranian regime is in a fight for its life with nothing to lose. If they are degraded enough, a revolution will start in Iran and they will be killed by the people. Or by US/IL bombs - whichever comes first. There is no way they get out of this alive. They are trying to prolong the inevitable.
You are describing the Libya scenario, not a 'lived prosperously ever after'. There is no credible opposition in Iran to take up the mantle.
It doesn't have an established opposition because the current regime has a habit of killing anyone it doesn't like or who goes against the official line. Now there is a chance for an opposition to form.
With the US and Israel supporting the minorities (most likely offering them independence) in the hope of toppling the regime, while bombing mostly Persians, the most likely outcome (assuming they are actually able to force regime change, which is far from guaranteed) is fragmentation and general lawlessness.
Note that whoever inherits the regime would have to deal with the wholesale destruction of the country, a traumatized population, and hatred for those who bombed them and killed their relatives and children. Slavishly obeying the new foreign overlords will not be very popular. Have we not learned anything from Iraq and Afghanistan? How can you still believe the fairy tales about welcoming the liberators?
The wars are already total for the weaker sides; see Ukraine and Iran. That did not stop the stronger side from attacking.
You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Is that really what you want?
But yes, you are right, the world would be much simpler in that case: there would be no humans left. OK, maybe some hunter-gatherers.
Taken literally, it means genocide of the losers is an option the winning side has. It always has been.
Note that Genghis Khan's explicit plan when he conquered China was to wipe out the Chinese to make room for Mongols. He wasn't stopped from doing that; there was no constraint to block him.
But he was persuaded not to.
Whenever we say "the regime is hated by its people, it will collapse," we should ask "then why hasn't it collapsed already?" In Iran, the metropolitan areas are where you see opposition. That's also where people have cameras and where media orgs tend to be, so we get a warped picture of the opposition in Iran even before our own media's baggage. Meanwhile, the regime's power base is everywhere but the metropolitan cities, and there are a lot of clients who benefit from the regime. I think this might be worse than the sectarian violence that came out of the collapse of the Hussein regime, because the Sunni sect his base was built around was still a minority. This time it's the majority, and the people being fought are the Americans, the Israelis, and the Arabs, so their backs are against the wall; this is already a total war from their side.
If I say, no, then am I stopping the military?
I feel like it is reasonable that I can say "no, I don't want to sell you my apples."
I cannot for the life of me figure out why that means I am stopping the military from killing people. The US Military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it.
So in short it doesn't matter what the Pentagon thinks as Trump is the commander in chief and as far as I know the Pentagon has to follow his orders.
Evidence (the Commander in Chief calling the opposition terrorists, and celebrating their government executions, for example) indicates that reality indeed reflects the things you personally don't believe.
If the government can force any private company to work specially for the government, then the US is no better than the PRC.
Legit wartime measures can be a thing (that's why it's fucked up if a president can just start a war and then use that as an excuse for any wartime measures they like).
And for better or worse, it is actually good that it is like this. Otherwise, if Congress declares war on Iran or China or whatever, the whole country will be put on a war footing, companies will be directed to build whatever the Pentagon says it needs, drafts will be enforced and so on. And it would be pretty ugly.
What happened was different: a private company decided to enforce some terms, as it is entitled to do during peacetime, and it has been bullied in a way that is disgraceful precisely because this happened outside wartime and without using the existing laws around wartime powers.
What is the purpose of having laws in the first place if we accept that the government can rule by intimidation?
The USA was not the aggressor.
Fat chance of Congress declaring a war of aggression on a peaceful country.
However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract.
On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic's models. Misusing their stuff the way Trump has been (in violation of the contract) is a war crime, because it has already made major mistakes, targeted civilians, etc.
I think it’s also possible the DoW didn’t care about the conditions but just wanted a pretext to punish Anthropic, because Dario isn’t a Trump bootlicker like the rest of the SV CEOs.
And while this administration is brazen about this, it's not really a drastic change anywhere.
In fact most EU laws (GDPR, the AI regulation, Chat Control) declare directly, up front, that the state itself won't respect them. They very directly have one set of rules for states, government employees, and so on, and ANOTHER set of rules for everyone else. And they're incredibly brazen. For private individuals and companies it goes very far: it's essentially impossible to even know what does and does not violate the GDPR, and you can't ask the courts; that's not allowed. You also cannot use the courts to compel the government to do anything under these laws.
For governments, what's allowed goes incredibly far. Governments can declare any action legal under the GDPR, before or after the fact, without parliament's involvement. It does not matter if that action was done by the government itself, or if it's an action by a private company (so the government can use subcontractors for any violation of the GDPR).
This means that, for THE example given for GDPR protection, medical information, the law does the exact opposite of what it appears to do. Medical insurance in the EU is either state-owned or covered by exceptions, so all your medical information is available to medical insurers. And the police (e.g. to find you). And the tax office. And the courts. And medical institutions themselves (to deny transplants to smokers). And... And while doctors (and priests) used to be huge no-nos when it came to information gathering, that's no longer the case. If a doctor uses the state-required medical file, your medical information flows straight into a state database, immediately searchable by everyone the GDPR supposedly protects you against.