The irony is that until yesterday I felt more or less the same about Anthropic. Last night I paid for an Anthropic subscription I don’t need in order to both support their current cause vs. the US government and help their ‘numbers.’
Learnt from GOOG that nothing is free. I'm now paying for Claude
Contrast Sam's OpenAI announcement which was very carefully worded to appear to uphold the same principles, but is currently being rightfully disassembled as retaining various potential outs that would allow violating the signaled principles.
Being honest and staunch about clearly stated principles is better than being wiggly and dishonest about weasel-worded impressions of a principle.
And all of that is orthogonal to whether you (or anyone) agrees with a given principle or given revealed behavior.
EDIT: that may be the case actually
https://www.axios.com/2026/02/13/anthropic-claude-maduro-rai...
And of course: and what sources are you using?
I get it: moral oversimplification is tempting for many people. I understand digging in takes time, but this situation warrants extra consideration.
Ethics is complicated and much harder than programming. Ethical reasoning is a muscle you have to train. Generally speaking, it isn’t the kind of skill that you build in isolation. At the very least, a lot of awareness and introspection is required.
I’d like to think that HN is a fairly intelligent community. But I don’t assume too much. Going based on what I’ve seen here generally, I see a lot of shallow thinking. So I think it’s a reasonable concern to think many of us here have a pretty large blind spot (statistically) when it comes to “softer” skills like philosophy and ethics.
This is not me “blaming” individuals; our industry has strong bias and selection criteria. This is my overall empirical take based on participating here for years.
Still, I’d like to think we are sufficiently intelligent and we have sufficient means and time to fill the gaps. But we have to prove it. I suggest we start modeling and demonstrating the kind of behavior and reasoning that we want to see in the world.
You can probably tell that I lean heavily towards consequentialist ethics, but I don’t discount other kinds of ethical thinking. I just want everyone to think harder. Seek more context. Ask what you would do in another’s shoes and why. Recognize the incentives and constraints.
Many people are tempted to judge others. That’s human. I suggest tamping that down until you’ve really marinated in the full context.
Also, each of us probably has more influence through our own actions than through merely judging others.
And let me be brutally honest about one’s impact. Organizing and collaborating is such a force multiplier (easily 100X) that not doing it for things you care about is a moral failure!
I’m not discounting good intentions, but in my system of ethics, I put much more emphasis on our actions. And persuasion is an action, which is what I’m hoping to do here.
https://www.axios.com/2026/02/13/anthropic-claude-maduro-rai...
> Unfortunately, Claude is not available to new users right now. We're working hard to expand our availability soon.
That's unfortunate timing.
I signed up with OpenAI a while ago and I didn’t need to provide any phone number… I wanna delete my OpenAI account, but then I cannot use Claude without a phone?
That said, I doubt there are very many.
The overall point I'm making is that it is "gross" when companies do stuff like this and yet there's zero accountability. Or when it comes to the reliability of account deletion, tech companies throw up their hands and say "whoops, technology is hard."
I built Basic Memory — it imports your ChatGPT export and turns it into plain Markdown files. Every conversation becomes a file you can actually read, search, and use with whatever AI you switch to.
This is not an ad. It is free and open source. Your data belongs to you. Keep it.
Steps:
1. Settings → Data Controls → Export Data (ChatGPT emails you a zip)
2. Install Basic Memory (brew tap basicmachines-co/basic-memory && brew install basic-memory)
3. Run the import: bm import chatgpt conversations.zip
Complete docs: http://docs.basicmemory.com
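For the curious, the conversion itself is conceptually simple. Here's a toy sketch of the idea (not Basic Memory's actual code, and it assumes a simplified flat message structure; the real conversations.json in a ChatGPT export nests messages in a "mapping" graph that needs to be walked first):

```python
from pathlib import Path

def conversations_to_markdown(conversations, out_dir):
    """Write each conversation out as a plain Markdown file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for conv in conversations:
        # Build a filesystem-safe filename from the title
        slug = "".join(c if c.isalnum() else "-" for c in conv["title"].lower()).strip("-")
        lines = [f"# {conv['title']}", ""]
        for msg in conv["messages"]:
            lines.append(f"**{msg['role']}**: {msg['content']}")
            lines.append("")
        path = out / f"{slug}.md"
        path.write_text("\n".join(lines), encoding="utf-8")
        written.append(path)
    return written

# Fabricated sample in the simplified shape described above
sample = [{"title": "Hello World", "messages": [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]}]
files = conversations_to_markdown(sample, "chatgpt-md")
print(files[0].name)  # hello-world.md
```

The point is just that the output is boring, portable Markdown: once it exists on disk, any editor, grep, or other AI tool can read it.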
I'm more concerned this is actually a coverup for a bribe, considering Brockman just donated $25 million.
> New accounts are still subject to our limit of 3 accounts per phone number. Deleted accounts also count toward this limit.
> Deleting an account does not free up another spot.
> A phone number can only ever be used up to 3 times for verification to generate the first API key for your account on platform.openai.com.
From that it reads like the administration quickly agreed to the terms Anthropic wanted with OpenAI instead.
Another leak says the agreement "reflects existing law and the Pentagon's policies." https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...
Seems like Altman wants to spin this as the same principled stand Anthropic took, but they really caved to the DoD's "all legal applications" framing. Up to you to decide how much you think the law restrains the Pentagon here.
Context is his https://www.resistandunsubscribe.com/ campaign.
The whole story makes no sense to me. The DoW didn’t get what they wanted, and now Anthropic is tarred and feathered.
https://www.wsj.com/tech/ai/trump-will-end-government-use-of...
“OpenAI Chief Executive Sam Altman said the company’s deal with the Defense Department includes those same prohibitions on mass surveillance and autonomous weapons, as well as technical safeguards to make sure the models behave as they should.”
And that message would be "We have a product so valuable/useful that not even their weak ideals and moral obligations could keep them away!"
Who knows, maybe within those 30 days you find that other offerings are good enough for your needs - I've largely moved over to Anthropic's Max subscription for all my needs, I don't even need Cerebras Coder anymore because Opus 4.6 is just so good.
Crazy thought but maybe we should regulate AI instead of relying on the hegemony of three companies to police themselves.
The issue is much more complex than "just regulate it" unfortunately.
https://notdivided.org/ is basically validation that there is appetite for something like this amongst them.
Anthropic has been, relatively speaking, the most responsible of the frontier labs since its founding. There has never been a point at which OpenAI took a more measured and reasonable approach while Anthropic proceeded dangerously.
These are relative terms, but you'd have to not be paying attention to find this implausible.
The applications it can be used for? That doesn’t work, it’s the governments that want abusive applications.
The size of models? That doesn’t work, it just discourages MoE.
Access by consumers? Great, now it’s just for megacorps and the military.
What, exactly, would successful regulation look like?
- When I saw Altman driving a multimillion-dollar car while OpenAI was still a nonprofit, all of his scientists leaving to start rival firms, and the details of why they tried to fire him looking legit, I dumped ChatGPT and moved to the new company - Anthropic.
- The Pro Max $200/month subscription has uncapped my workflow to where I’ve created several substantial and complex applications in compressed timeframes. (https://devarch.ai if you want to be productive)
- Anthropic has clearly evolved towards being a good corporate citizen and is staging itself to replace the market’s developer-first mentality from its past leaders (Microsoft, Google, Oracle).
- Claude Code in the last three months has finally made it possible to dump Windows and buy a loaded MacBook Pro. It’s been a week since I logged into my Surface Laptop 5.
- if Anthropic does break from its current evolutionary trajectory, I plan to build out my own at-home platform anyway. The open source models are extraordinarily effective.
If that were a first-time founder it would be much more of a red flag than for someone who's already beyond rich and powerful even before OpenAI became a thing.
What does this mean?
Claude Code erases all of those constraints and the M4/5 chips are blazing fast.
> The open source models are extraordinarily effective.
Which models are you referring to? (And in particular, which sizes/versions?)
OpenAI's process actually isn't too bad from what I'm seeing (unless they updated it after this hit the front page). At least they let you delete your account from the web.
Snapchat makes you wait three days after initiating a delete request before you can _actually_ delete your account, and it has to be from the same device (or, if done from the web, a browser with cookies to the site still present).
Most services make you email their privacy@ mailbox or give them a call to initiate a deletion (but not before hitting you with a retention offer, if you call in).
Some services will straight-up reject your deletion request if you don't live in Europe or California. Many medical services, for example. They also keep your data hostage.
UPDATE: Ah, this is a "here's how to protest OpenAI for succumbing to this administration's DoD" post. Carry on then!
It's this expression that breaks the deal for me. There's always such a wide, vague exception that it might be interpreted differently each time, depending on the context!
edit: Profile > Settings > Data Control > Export
Unfortunately Claude doesn't seem to have any way to export these chats - no SDK, no native way of doing it - and I cannot think of a way other than hacky browser automation, which might even trigger a ban.
If anyone figures this out please share.
I honestly think I'm going to have to cancel my credit card and get it replaced to accomplish breaking that connection with those two companies.
Obviously mass surveillance is already happening. Obviously the line around "human kills other human" has been blurring for a long time already, e.g. with remotely operated drones. Missiles are already remotely controlled, navigating, and detecting and following moving targets autonomously.
What’s the goal of people who think deleting their OpenAI account will make an impact?
I left a comment describing how I am deleting my OpenAI account. I think there's a good chance someone at OpenAI sees it, even if only aggregated into a figure in a spreadsheet. Maybe a pull quote in a report.
You do your best at the margin, have faith it will count for something in aggregate, and accept that sometimes you're tilting at windmills. I know most of my breath is wasted but I can't reliably tell which.
/non-US and just guessing
The genie is out of the bottle, this will happen anyway. The question is who will be the steward.
I do not have the power to control that, but I do have the power to choose who I support.
So the Gov could very well rely on it alone, purely on ideological grounds, but then they'd be condemned to using inferior tech at a time when everyone is really nervous about staying ahead in AI usage (rightly or wrongly). Not sure they'd be willing to accept that, and it does put pressure on them.
Of course it's also a different question from whether we should allow mass surveillance against ourselves, which obviously we should not.
Says who? You?
Sorry, but you are just 1 person, 1 vote.
Unless you believe your vote outweighs other people’s vote.
Today, 40% of Americans still approve of Trump and his actions. Another 10-20% probably don’t care. Even after Iran’s attack and the DoW x OAI collab.
Which leaves the “no AI in weapons” camp at less than 50%.
Ethics is about knowing right from wrong and acting on it. Not about how we feel about it.
--
Some people do that as a symbolic action. Some to keep their own terms as much as they can. Some hope their actions will join others' actions and turn into a signal for decision makers. For others this action reduces the area of their exposure. Others believe in something and just follow their beliefs.
BTW, following one's own set of beliefs is what you're (we're all) doing here. You believe that surveillance is already happening and nothing can be done about it, that a single action does not matter, that there are no reasons for action other than direct visible impact, etc. It seems that you analyze others through your own set of beliefs, and that set cannot explain the actions of others. This inability to explain others suggests that the whole model is flawed in some way. So what is the nature of your beliefs? Did you choose them, or were they presented to you without alternatives? What are the alternatives then? Do these beliefs serve your interests or others'?
The point of the supply chain risk provisions is to denote, you know, supply chain risks. The intention is not to give the Pentagon a lever it can pull to force any company to agree to any contract it wants.
Hegseth doesn't even pretend that Anthropic is actually a supply chain risk. The argument for designating them so is that _they won't do exactly what the government wants_.
People use the term "fascism" a lot and people have kind of tuned it out, but what do you call a government that deals itself the power to compel any company to accept any contract, and declare it a pariah on thin pretext if it objects?
By taking the deal under these conditions OpenAI is accepting this. They're saying, "Well, sucks to be them, life goes on". They're consenting to the corruption and agreeing to profit from it. But they'll be next, and if the next company in line has the same stand then yeah, the government can force any company to do anything. There's nothing normal about this.
Even when the bombs drop from the sky, at least those humans who had deleted their OpenAI account can rest easy, knowing that they weren't the ones supporting the AI that will delete humanity.
Opposing all AI companies tied to the war industry is a pretty vanilla principled stance, which also makes sense rationally if you want to "minimize harm".
With that said, for the free tier I tend to use Grok - another provider I will never pay for.
Anthropic does get money from me for now
1. For a site visited by millions, a header element (perhaps h2, h3, h4) followed by a paragraph has so little spacing that it looks weird and is hard to read.
2. There is an interesting question at the end [0]: Can you reactivate my deleted account? I was quite interested because if they could, then they never really deleted the data. The page doesn't answer that question satisfactorily at all!
[0]: https://help.openai.com/en/articles/9019931-can-you-reactiva...
The company downsized 4 times in 3 years... We are still trying, but people see no value because they don't understand how it will come back to bite them.
> Please don't use Hacker News for political or ideological battle. It tramples curiosity.
For a few months now, ChatGPT 5.x has been somewhat lobotomized on political issues and has appeared to substitute a gpt-4o caliber "fair and balanced" response whenever anything where a reasoning AI would criticize the Trump administration might end up in the response output. Surely that was part of the pitch at some level, and now the deal has been won.
Greg Brockman apparently donated money to Trump, and the whole OpenAI team put on suits and posed for pictures with Donald and behaved officiously before Donald facilitated the $100M "deal" that ended up falling apart later.
The only way authoritarian control could be exerted over AI at scale was to make AI companies dependent on government contracts for survival. OpenAI's fundraise would not have happened without the contract signed, and the money would have gone to Grok or whichever competitor was willing to submit.
Before long much of the reasoning capabilities of models will be neutered, the capacity to inform and to disrupt science and technology will be stripped from the models to preserve the status quo and to preserve authoritarian control.
Silicon Valley pushing for Federal laws preventing states from regulating AI is not just anti-democratic (building software has never been cheaper, so building compliance with state laws would have been extremely affordable in relative terms). Forced Federal limits on state laws also create a monopoly and grant the early winners incumbent status for a while - a financial outcome, not a technological or social one.
Enjoy frontier AI while you can, because it will go away. More and more topics will get the lobotomized output, your conversation will be flagged and you will be given a score assessing the level of threat you pose to the regime. This stuff is already in place. Even Claude does it if you ask about Gaza, but a bit of well-reasoned argumentation will convince it. OpenAI's lobotomies are deeper and more insidious.
I call upon OpenAI to follow DeepSeek's lead and open source more models and techniques.
Altman's immorality is theoretical
Musk's is literal, he's murdered a million people by purposely destroying USAID, leaving food and medication already paid for to rot in warehouses
Shame because Codex was a bit better for me in the past few weeks but not enough to justify spending my money on them.
[0] https://x.com/CardilloSamuel/status/2027536128291528846
[1] https://x.com/UnderSecPD/status/2027353177578783204
[2] https://x.com/zarathustra5150/status/2027616890516889658
I think it's quite rich all these people virtue signaling when: (1) Anthropic (and other labs) committed large scale theft of copyrighted materials to train their models. (2) Anthropic collects large swaths of data on its users (3) Dario seemed to have no issue working to help the CCP: https://x.com/ubuto23/status/2027578089371267201
Also, you must understand that if you support Anthropic, then you should be against Open Source models.
The Supreme Court just said our govt illegally took money from its citizens via tariffs. They aren’t concerned with giving it back.
We just bombed Iran without a single discussion in Congress.
We are killing unknown individuals in boats in the ocean without trials.
"The company I hold just secured a government contract. Better sell it." - Imaginary Shareholder
Also please stop throwing around the fascist word for everything, good lord it’s tiring and cringe.
Would you rather be killed by Chinese AI instead?
Don't get me wrong. I am personally an advocate of personal inference machines, but I kinda accept it may not be a viable path for everyone.
This thread is currently trending because OpenAI just slid into the US CorpGov's DMs and signed a contract, hours after Anthropic was banned by the US government for not letting the military do whatever they want.
If OpenAI had shown any fidelity or backbone in the least, it would be a different story. A unified industry against any one member being bullied into business decisions it doesn’t want to make is a wall and a strengthening of competition. Now the government will use war powers to shape private industries' competitive landscape and turn companies with core business principles into tools of the state through unilateral and likely unlawful actions, and OpenAI’s first response is to grab the money and shove their competitors under the government bus.
We are all much less safe, and the AI industry much much weaker as a result.
I agree, this could have been a moment of solidarity across the industry, an acknowledgement that we're all in this together having fun and building out intelligent systems, and instead we're seeing Sam Altman yet again for who he really is.
guys, big tech has been playing this game for decades now. what changed? they sell private data, manipulate society, turn children into doomscrolling addicts. facebook, google and others have been doing this for years and no one cares. i deleted fb and whatsapp years ago; 99% of my friends and fam still use them today.
as long as they can flip some dollars nothing will change, and 99% will not delete anything because 99% are too lazy and don't give a shit.
"where required".... hmm, that seems OK. We don't want to violate the law!
"or permitted".... er...
[I wonder why this comment is being voted down. Do people here think it's NOT OK to comply with the law with respect to retaining data? Or is the reason somehow the opposite of that? Not sure. But my point was that the "where required" clause seems moot if they are going to retain data where "permitted", which in my book, is NOT OK.]
-----
openAI is the least trustworthy of the Big LLM providers. See S(c)am Altman's track record, especially his early comments in Senate hearings where:
* he warned of engagement-optimisation strategies, like social media, being used for chatbots / LLMs.
* also, he warned that "ads would be the last resort" for LLM companies.
Both of his own warnings he has casually ignored, as ChatGPT / openAI has now fully converted to Facebook's tactics of "move fast and break things" - even if what breaks is society itself. A complete turn away from the original AI-for-science lab it was founded as, which explains why every real (founding) ML scientist left the company years ago.
While still being for-profit outfits, at least DeepMind and Anthropic are headed by actual scientists - not marketing guys. At least for me, that brings some confidence in their intentions, since as scientists we often seek knowledge, not power for power's sake.
Some people's livelihoods probably depend on Claude, and they can't just use GLM-4.7 on HF. Fine. But it's a moral compromise; that's life, sometimes you need to compromise what you want for what you need. Just don't tell yourself it's a reasonable line to hold.
I can't decouple from Google unfortunately but I accept that without fooling myself into thinking "Oh but Google are fine".
>>What happened in Tiananmen Square in 1989, June Fourth Incident
>! Content Security Warning: The input text data may contain inappropriate content
What if you ask about 9/11?
No, you can't "agree".
> What if you ask about 9/11?
It answers the question.
Chatting GLM-5(reasoning)(preview) answers just fine. Even after restricting web search and giving answer based on its own knowledge. Probably the results were different in China?
(GLM-4.7 failed to know anything without web search)
Your demands are one thing, your integrity is another.
Note: yes, openAI claims it doesn't support the above-mentioned DoW use-cases - but they have signed with the DoW, and it is HIGHLY unlikely the DoW would give them different terms than Anthropic (at least regarding the substance). Maybe openAI was just happy with the "coat of paint" legalese the DoW offered - which Anthropic specifically called out as ineffective in their statement. I also wouldn't put it past Altman, who is much more friendly with Trump's gov, to play a double game here to get his main competitor out of the game. But at least in this case I hope he's acting for the benefit of all by truly standing with Anthropic on the issue.
This is the same as saying "Gulf of America". Don't buy the propaganda. The name of the Department of Defense can only be changed by Congress.
I don’t have evidence, just using Occam’s razor.
I disagree. OpenAI got the same deal while Anthropic is made a punching bag for resisting. This is very on brand: do not cross the King in public.
The Trump-Epstein administration is obsessed with social media and how they are perceived. Right vs wrong, consistency, accuracy, truth... these are all secondary to appearing "strong" or "winning". They care more about what they are going to tweet than the facts (see Patel, FBI, and the murder of Good & Pretti).
Now look at Iran. Trump said in a post "the calvary is coming" and now we have the largest military build-up in the Middle East since invading Iraq. They are now claiming that Iran is days from a nuke and building missiles that can reach the US, after they said they "obliterated" it and fired people for even saying "we don't know yet." It's more likely they will be able to change these things by raining bombs from the sky...
It's imperative to look strong and not like you were the one that backed down... One of Roy Cohn's earliest lessons to the young Donald.
"On the very same day that Altman offered public support to Amodei, he signed a deal to take away Amodei’s business, with a deal that wasn’t all that different. You can’t get more Altman than that."
https://garymarcus.substack.com/p/the-whole-thing-was-scam
Why not?
https://eat.dash.nyc
https://github.com/jareklupinski/dash-nyc
doesnt have to be a hit, just has to exist i hope
yay open data
It's an "all in one" solution that allows SMEs to not have to use Windows.
The lock-in is real: once several employees all have their Google Workspace account and some Google Drive docs are shared with people from outside the company, it's hard to decouple from Google.
But at least you're not tied to the shittiest OS out there (Windows) and the mediocre company that produces it.
There is a setting in Gemini, but it removes all your chat history. For Antigravity, I think there is nothing preventing them from using your code and the data your agents upload in the background unless you are a Workspace user.
Note: Canceled my ChatGPT subscription and deleted an account.
The point is there is no conversation-level controls. It’s incredibly user-hostile.
https://www.aclu.org/news/national-security/new-documents-sh...
I can't set a voice reminder on my Pixel without giving full access to my Google workspace (which includes all emails) which is explicitly allowed to be trained on per the terms. There is no per app toggle.
Voice reminders were the only thing assistants did well for years.
We are going backwards.
Scaling up to get more nuance and subtle stuff makes the whole damn thing break. I'm waiting for others to realise this.
You can disable saving your activity. In that case your chats won't be stored or used.
If you use Gemini through Google Workspace, chats won't leave the workspace environment and won't be used for LLM training (as of now).
* Brockman donates $25m to a pro Trump super PAC
* Altman has been in talks with the Pentagon since Wednesday
* Now it's announced Anthropic is dropped by the military, designated a supply chain risk, and OpenAI takes over its military contract, after Anthropic objected to surveilling US citizens and allowing autonomous kill bots.
The whole thing rather stinks.
What is wrong with ads? I personally dislike them and prefer to just pay for services, but it seems that majority of people prefer "free"-ad-supported model.
If you're not sure, I believe that Grok is a vanity project by a very egomaniacal person.
Just remember, the Epstein Class is very good, and happy to, play the long game. When the people in charge of government are different, they need to be as aggressive at undoing and punishing.
(non-snark: your reply is clever and got a smile but I still think the GP post is overriding: no need for the distraction (c.f.: these asides) of the "S(c)am" swipe)
I wonder how much esteem you hold for Sig. da Vinci if you equate his work with HN comments.
I expect an extension or Python script that asks it to generate 100 random complex questions and then proceeds to ask for answers in a loop until the limits on the free plan are reached.
https://x.com/elonmusk/status/1889070627908145538
https://x.com/elonmusk/status/1935733153119010910
https://x.com/elonmusk/status/1894244902357406013
https://x.com/elonmusk/status/1955299075781431726
https://x.com/elonmusk/status/1889371675164303791
https://x.com/elonmusk/status/1935539112746041422
https://x.com/elonmusk/status/1955190817251102883
https://x.com/elonmusk/status/1955195673693077615
https://x.com/elonmusk/status/1889063777792069911
https://x.com/elonmusk/status/1910171944671916305
Sigh, all that's left that I can think of is $pam Altman and $ham Altman. Anyone got any better ones?
Sleepy Joe Biden used to agree.