A tool that removes censorship from open-weight LLMs
203 points by mvdwoord 2 days ago | 83 comments

a2128 2 days ago

    You're not just using a tool — you're co-authoring the science.
This README is an absolute headache that is filled with AI writing, terminology that doesn't exist or is being used improperly, and unsound ideas. For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer. I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" bouncing back the worst ideas.
reply
Retr0id 2 days ago
I don't know if this particular tool/approach is legit, but LLM ablation is definitely a thing: https://arxiv.org/abs/2512.13655
reply
D-Machine 22 hours ago
Doesn't look legit to me. You are talking about abliteration, which is real. But the OP linked tool is doing novel and very dumb ablation: zeroing out huge components of the network, or zeroing out isolated components in a way that indicates extreme ignorance of the basic math involved.

Compared to abliteration, none of the ablation approaches of this tool make even half a whit of sense if you understand even the most basic aspects of an e.g. Transformer LLM architecture, so my guess is this is BS.

reply
hexaga 17 hours ago
The terminology comes from the post[0] which kicked off interest in orthogonalizing weights w.r.t. a refusal direction in the first place. That is, abliteration was not originally called abliteration, but refusal ablation.

Ultimately though, OP is just what you get if you take the idea of abliteration and tell an LLM to fix the core problems: that refusal isn't actually always exactly a rank-1 subspace, nor the same throughout the net, nor nicely isolated to one layer/module, that it damages capabilities, and so on.

The model looks at that list and applies typical AI one-off 'workarounds' to each problem in turn while hyping up the prompter, and you get this slop pile.

[0]: https://www.lesswrong.com/posts/refusal-in-llms-is-mediated-...

reply
jandrese 17 hours ago
No offense, but a Lesswrong link is an immediate yellow flag, especially on the topic of AI. I can’t say if that article in particular is bad, but it is associating with a whole lot of abject nonsense written by people who get high on their own farts.
reply
hexaga 16 hours ago
Regardless, it is the origin of abliteration. Other extremely similar things have been done before, but the popularized idea/name is from that.
reply
paradox460 2 days ago
It's not just a headache, it's bad
reply
userbinator 21 hours ago
"Getting high on your own supply" is exactly what I'd expect from those immersed in this new AI stuff.
reply
shevy-java 18 hours ago
Is that quote from the movie Scarface?

https://www.youtube.com/watch?v=U4XplzBpOiU # had to search for it right now, seems to be a movie-quote \o/

reply
creatonez 2 days ago
> For example, it focuses a lot on doing "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?), which is an absolute fool's errand because such behavior is trained into the model as a whole and would not be found in any particular layer.

That doesn't mean there couldn't be a "concept neuron" that is doing the vast majority of heavy lifting for content refusal, though.

reply
mapontosevenths 23 hours ago
That's not what it means at all. It uses SVD[0] to map the subspace in which the refusal happens. It's all pretty standard stuff with some hype on top to make it an interesting read.

It's basically using a compression technique to figure out which logits are the relevant ones and then zeroing them.

[0] https://en.wikipedia.org/wiki/Singular_value_decomposition

reply
D-Machine 22 hours ago
You are also not quite correct, IMO. See my comment at https://news.ycombinator.com/item?id=47283197.

What you are talking about is abliteration. What OBLITERATUS seems to be claiming to do is much dumber, i.e. just zeroing out huge components (e.g. embedding dimension ranges, feed-forward blocks; https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...) of the network as an "Ablation Study" to attempt to determine the semantics of these components.

However, all these methods are marked as "Novel", i.e., maybe just BS made up by the author. IMO I don't see how they can work based on how they are named; they are way too dumb and clunky. But proper abliteration like you mentioned can definitely work.

reply
mapontosevenths 22 hours ago
You got me there. I missed the wackier antics further down. Mea culpa.
reply
D-Machine 21 hours ago
So did I initially until I saw a few more things from others here.
reply
dinunnob 2 days ago
Hmm, pliny is amazing - if you kept up with him on social media you’d maybe like him https://x.com/elder_plinius
reply
Aurornis 24 hours ago
I don't know. I scrolled through his recent Tweets and he's sharing things like this $900 snake oil device that "finds nearby microphones" and "sends out AI-generated cancellation signals" to make them unable to record your voice : https://x.com/aidaxbaradari/status/2028864606568067491

Try to think for a moment about how a device would "find nearby microphones" or how it would use an AI-generated signal to cancel out your voice at the microphone. This should be setting off BS alarms for anyone.

It seems the Twitter AI edgy poster guy is getting meta-trolled by another company selling fake AI devices.

reply
roywiggins 22 hours ago
Ultrasound microphone jammers seem to be a real thing, so it's possible it does to some extent work.
reply
KennyBlanken 17 hours ago
Only for specific kinds, like MEMS.

But there's no way to detect microphones automatically, and "AI generated cancellation signals" is a word salad that doesn't mean anything.

What they probably mean is "we asked ChatGPT to tell us what waveform and frequency range to use on MEMS devices and spit out some arduino code."

reply
gavinray 2 days ago
The parent comment makes no reference to or comment on the author of the README.

It just says "the README sucks." Which, I'm inclined to agree, it does.

LLM-generated text has no place in prose -- it yields a negative investment balance between the author and aggregate readers.

reply
shevy-java 18 hours ago
> LLM-generated text has no place in prose

AI will infiltrate that too. I remember some time ago I read a book that was AI-generated, and it took me a while to notice. One can notice certain patterns where real humans would not write things the way AI does.

reply
userbinator 20 hours ago
I see you have carefully avoided the em-dash. ;-)
reply
orbital-decay 21 hours ago
Looking at his attempts at jailbreaking some models, I'm not sure he even remotely understands what he's doing, e.g. he tries to counter non-existent refusal training in Gemini [0] while doing nothing against the external guardrails which actually protect the model. Looks like a pompous e-celeb, all performance with no substance.

https://github.com/elder-plinius/L1B3RT4S/blob/main/GOOGLE.m...

reply
gcr 12 hours ago
jailbreaks are holistic, it’s not like you’re deprogramming / “countering” individual parts. Nobody creating jailbreaks “understand what they’re doing”
reply
orbital-decay 12 hours ago
That's exactly what you do in case of refusal training, though. Yes, it will affect other "parts", but that's not the point. In this case the model itself doesn't even need a jailbreak.

>Nobody creating jailbreaks “understand what they’re doing”

Unless you mean those "god mode jailbreaker" e-celebrities showing off on Twitter/Reddit, that's simply not true.

reply
pjc50 16 hours ago
As a non logged in user I get tweets in popularity order, which means this weird but tame sexual image comes up third https://x.com/elder_plinius/status/1904961097569890363?s=20
reply
bigyabai 2 days ago
If this qualifies as "amazing" in 2026 then Karpathy and Gerganov must be halfway to godhood by now.
reply
dinunnob 2 days ago
I dont think anyone is going to dispute this
reply
bigyabai 2 days ago
I just don't think many people will be "amazed" by their output, as you claim.
reply
dinunnob 2 days ago
I just said pliny was amazing, fwiw - I like that he's hacking on these and posts about it. I rushed to defend; I wish more people were taking old school anarchist cookbook approaches to these things
reply
cess11 2 days ago
Smoke banana peel?
reply
Zetaphor 2 days ago
I had such a godawful headache from that. Also tried the peanut shells, equally awful. I was a dumb teenager.
reply
fragmede 24 hours ago
gasoline and styrofoam was fun tho
reply
EGreg 2 days ago
Amazing as in his stuff actually works?

I just hear him promoting OBLITERATUS all day long and trying to get models to say naughty things

reply
dinunnob 2 days ago
Yeah but i think the philosophy is to show how precarious the guardrails are
reply
DeathArrow 15 hours ago
> I can only assume somebody vibe-coded this and spent way too much time being told "You're absolutely right!" bouncing back the worst ideas

Are there LLMs which don't always approve whatever idea the user has and tell him it's absolutely brilliant?

reply
D-Machine 23 hours ago
> "ablation studies", by which it means removing random layers of an already-trained model, to find the source of the refusals(?)

This is not what an ablation study is. An ablation study removes and/or swaps out ("ablates") different components of an architecture (be it a layer or set of layers, all activation functions, backbone, some fixed processing step, or any other component or set of components) and/or in some cases other aspects of training (perhaps a unique / different loss function, perhaps a specialized pre-training or fine-tuning step, etc) in order to attempt to better understand which component(s) of some novel approach is/are actually responsible for any observed improvements. It is a very broad research term of art.
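As a toy sketch of the term of art (component names and scores here are entirely made up), an ablation study is just "toggle components off, re-evaluate, attribute the gain":

```python
from itertools import product

# Hypothetical components of some novel training recipe. An ablation
# study disables each one (alone and in combination), re-runs the
# evaluation, and attributes the improvement to the toggles that matter.
components = {"aux_loss": [True, False], "pretrain_step": [True, False]}

def evaluate(cfg):
    # Stand-in for an expensive train-and-benchmark run; each component
    # contributes a fixed, made-up amount to the score.
    return 0.70 + 0.05 * cfg["aux_loss"] + 0.10 * cfg["pretrain_step"]

results = {
    combo: evaluate(dict(zip(components, combo)))
    for combo in product(*components.values())
}
for combo, score in sorted(results.items(), key=lambda kv: -kv[1]):
    print(combo, round(score, 2))
```

The point is that the full recipe is compared against principled partial variants, not that pieces of a finished model are zeroed out at random.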

That being said, the "Ablation Strategies" [1] the repo uses, and doing a Ctrl+F for "ablation" in the README does not fill me with confidence that the kind of ablation being done here is really achieving what the author claims. All the "ablation" techniques seem "Novel" in his table [2], i.e. they are unpublished / maybe not publicly or carefully tested, and could easily not work at all.

From later tables, I am not convinced I would want to use these ablations, as they ablate rather huge portions of the models, and so probably do result in massively broken models (as some commenters elsewhere in this thread have noted). EDIT: Also, in other cases [1], they ablate (zero out) architecture components in a way that just seems incredibly braindead if you have even a basic understanding of the linear algebra and dependencies between components of a transformer LLM. There is clearly nothing sound about this, in contrast to e.g. abliteration [3].

[1] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-file#ablation-strategies

[2] https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...

EDIT: As another user mentions, "ablation" has a specific additional narrower meaning in some refusal analyses or when looking at making guardrails / changing response vectors and such. It is just a specific kind of ablation, and really should actually be called "abliteration", not "ablation" [3].

[3] https://huggingface.co/blog/mlabonne/abliteration, https://arxiv.org/abs/2512.13655

reply
hexaga 14 hours ago
What do you mean? It's a spin on abliteration / refusal ablation. Roughly, from what I remember abliteration is:

1. find a direction corresponding to refusal by analyzing activations at various parts of a model (iirc, via mass means seen earlier in Marks, Tegmark and shown to work well for similar tasks)

2. find the best part(s) of the model to orthogonalize w.r.t. that direction and do so (exhaustive search w/ some kind of benchmark)
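A minimal numpy sketch of those two steps on toy data (real abliteration harvests activations from the model at chosen layers; everything here is illustrative):

```python
import numpy as np

def refusal_direction(acts_harmful, acts_harmless):
    """Step 1: difference-of-means ('mass mean') direction between
    activations on refusal-inducing vs. harmless prompts, each given
    as an (n_samples, d_model) array. Returned unit-normalized."""
    r = acts_harmful.mean(axis=0) - acts_harmless.mean(axis=0)
    return r / np.linalg.norm(r)

def orthogonalize(W, r):
    """Step 2: project the refusal direction out of a weight matrix
    W (d_model, d_in) that writes into the residual stream, so that
    module can no longer move activations along r: W' = (I - r r^T) W."""
    return W - np.outer(r, r) @ W

# Toy demo on random activations with a planted mean shift
rng = np.random.default_rng(0)
d = 8
r = refusal_direction(rng.normal(1.0, 0.1, (50, d)),
                      rng.normal(0.0, 0.1, (50, d)))
W = orthogonalize(rng.normal(size=(d, d)), r)
print(np.abs(r @ W).max())  # every output column now has ~zero r-component
```

The per-module search in step 2 then amounts to benchmarking the model with different subsets of matrices orthogonalized this way.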

OP is swapping in SVD for mass means (1), and the 'ablation study' for (2), and a bunch of extra LLM slop for... various reasons. The final model doesn't have zeroed chunks; that is the search for which parts to orthogonalize/refusal-ablate/abliterate. I don't have confidence that it works very well either, but it isn't 'braindead' / obvious garbage in the way you're describing.

It's LLMified but standard abliteration. The idea has fundamental limitations and LLMs tend to work sideways at it -- there's not much progress to be made without rethinking it all -- but it's very conceptually and computationally simple and thus attractive to AIposters.

You can see how the LLMs all come up with the same repackaged ideas: SVD does something deeply similar to mass means (and yet isn't exactly equivalent, so LLM will _always_ suggest it), the various heuristic search strategies are competing against plain exhaustive search (which is... exhaustive already), and any time you work with tensors the LLM will suggest clipping/norms/smoothing of N flavors "just to be safe". And each of those ends up listed as "Novel" when it's just defensive null checks translated to pytorch.

I mean, the whole 'distributed search' thing is just because of how many combinations of individual AI slops need to be tested to actually run an eval on this. But the idea is sound! It's just terrible.

I'm not defending the project itself -- I think it's a mess of AIisms of negligible value -- but please at least condemn it w.r.t. what is actually wrong and not 'on vibes'.

reply
gcr 12 hours ago
wait, SVD / zeroing out the first principal component is an unsupervised technique. The earlier difference-of-means technique relies on the knowledge of which outputs are refusals and which aren’t. How would SVD be able to accomplish this without labels?

edit: the reference is https://arxiv.org/pdf/2512.18901

they are randomly sampling two sets of refusal/nonrefusal activation vectors, stacking them, and taking the elementwise difference between these two matrices. Then they use SVD to get the k top principal components. These are the directions they zero out.

Seems to me that the top principal component should be roughly equivalent to the difference-of-means vector, but wouldn’t the other PCs just capture the variance among the distributions of points sampled? I don’t understand why that’s desirable
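For what it's worth, a quick synthetic check bears that out: when the stacked differences are one shared direction plus isotropic noise, the top right-singular vector essentially recovers the difference-of-means direction, and the trailing components just soak up sampling noise (toy numpy sketch, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 500
shared = rng.normal(size=d)                      # planted "refusal" direction
diffs = shared + 0.1 * rng.normal(size=(n, d))   # refusal - nonrefusal activation pairs

# Top principal direction of the stacked differences via SVD
_, s, Vt = np.linalg.svd(diffs, full_matrices=False)
top_pc = Vt[0]

mean_dir = diffs.mean(axis=0)
mean_dir /= np.linalg.norm(mean_dir)

print(abs(top_pc @ mean_dir))   # cosine similarity, very close to 1
print(s[1] / s[0])              # trailing components are far smaller
```

Under this model the extra principal components carry no refusal signal, which supports the question of why zeroing them would be desirable.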

reply
hexaga 59 minutes ago
Indeed.

Taking the top principal component pattern matches as 'more surgical / targeted' so the LLM staples it on (consider prompts like: make this method stop degrading model performance). It ignores that _what_ is being targeted is as or more important than that 'something' is being targeted. But that's LLMs for you.

(in case it isn't immediately obvious, that paper is AI written too)

reply
jeffbee 23 hours ago
"Ablation studies" are a real thing in LLM development, but in this context it serves as a shibboleth by which members of the group of people who believe that models are "woke" can identify each other. In their discourse it serves a similar purpose to the phrase "gain of function" among COVID-19 cranks. It is borrowed from relevant technical jargon, but is used as a signal.
reply
gopher_space 20 hours ago
Positive keywords in this area of interest would be "point of view", "subtext", and "Art Linkletter".
reply
drnick1 20 hours ago
I wouldn't call mainstream LLMs "woke," but they are definitely on the "politically correct" side of things. There should be NO restriction on open source models. They should just reflect the state of human knowledge and not take a stance on whether some activity is illegal or immoral.
reply
pjc50 15 hours ago
Defining morality out of the set of knowledge is quite an opinion.
reply
simondotau 14 hours ago
A model should understand multiple perspectives on morality and avoid prescribing a single one where there’s no overwhelming prior consensus.

Alternatively, they should be trained on my opinion on everything. That would also be acceptable.

reply
simgt 15 hours ago
If LLMs were a public good released by non profit entities, that could make sense, maybe. Turns out spewing illegal and immoral shit is not good for the PR of most for-profit businesses.
reply
06867457397658 23 hours ago
[flagged]
reply
isjdiwjdus 5 hours ago
[dead]
reply
lazzlazzlazz 23 hours ago
[flagged]
reply
SV_BubbleTime 20 hours ago
It doesn’t even surprise me anymore. The people here think they’re so superior to the already arrogant redditors… same people.

Thing definitely exists… some top level comment somewhere telling about how it doesn’t exist.

reply
lazzlazzlazz 16 hours ago
Exactly. And I'm downvoted below 0 for pointing this out. :)
reply
fragmede 24 hours ago
Alternately, it's intentional. It very effectively filters out people with your mindset. You can decide if that's a good thing or not.
reply
eli 24 hours ago
Why would a tool that works need to dissuade skeptics from trying it?
reply
dmix 22 hours ago
Based on his twitter he may just like irony/meta posting a little too much like a lot of modern culture
reply
D-Machine 23 hours ago
I immediately read it as intentional, as a sort of attempt at ironic / nihilistic humour re: LLM-generation, given what the tool claims to do.
reply
robertk 2 days ago
You don't know what you are talking about. Obviously refusal circuitry does not live in one layer, but the repo is built on a paper with sound foundations from an Anthropic scholar working with a DeepMind interpretability mentor: https://scholar.google.com/citations?view_op=view_citation&h...
reply
ComputerGuru 2 days ago
Reviews of the tool on twitter indicate that it completely nerfs the models in the process. It won't refuse, but it generates absolutely stupid responses instead.
reply
butILoveLife 24 hours ago
This is my experience with abliterated models.

I use Berkley Sterling from 2024 because I can trick it. No abliteration needed.

reply
littlestymaar 2 days ago
This is vibecoded garbage that the "author" probably didn't even test themselves, having made it only yesterday, so it's not surprising that it's broken.

Also, as I said in a top level comment, what this project wants to achieve has been done for a while and it's called Heretic: https://github.com/p-e-w/heretic

(Not vibecoded by a twitter influgrifter)

reply
dinunnob 2 days ago
Hate to have to be the one to stick up for pliny here, but hes concerned about forcing frontier labs to focus more on model guardrails - he demonstrates results that are crazy all the time

https://x.com/elder_plinius

reply
littlestymaar 15 hours ago
> he demonstrates results that are crazy all the time

That's what influgrifters do, yes. They make a living thanks to gullible people believing their grandiose claims.

reply
quotemstr 2 days ago
We will eventually arrive at a new equilibrium involving everyone except the most stupid and credulous applying a lot more skepticism to public claims than we did before.

And yeah, doing stuff like deleting layers or nulling out whole expert heads has a certain ice pick through the eye socket quality.

That said, some kind of automated model brain surgery will likely be viable one day.

reply
D-Machine 22 hours ago
Thanks for this link, and mentioning this info some times in this overall thread.

It also seems the influgrifter has a lot of bots (or perhaps cultists) working this thread...

reply
D-Machine 23 hours ago
When you look at how monstrously large the components are that are ablated (weights set to zero) in his "Ablation Strategies" section (and how obviously not thought through, if you understand even the most minimal basics of the linear algebra of a transformer LLM), it is no surprise.

    Strategy            What it does
    ....................................................
    layer_removal       Zero out entire transformer layers
    head_pruning        Zero out individual attention heads
    ffn_ablation        Zero out feed-forward blocks
    embedding_ablation  Zero out embedding dimension ranges
https://github.com/elder-plinius/OBLITERATUS?tab=readme-ov-f...
reply
IncreasePosts 2 days ago
I didn't use this tool, but I did try out abliterated versions of Gemma and yes, it lost about 100% of its ability to produce a useful response once I did it
reply
electroglyph 2 days ago
the default heretic with only 100 samples isn't very good, you really need your own, larger dataset to do a proper abliteration. the best abliteration roughly matches a very careful decensor SFT
reply
Animats 2 days ago
Link?

It's interesting that people are writing tools that go inside the weights and do things. We're getting past the black box era of LLMs.

That may or may not be a good thing.

reply
thegrim33 2 days ago
Whether or not the linked tool uses a good approach, manipulating models like you mention is already fairly well established, see: https://huggingface.co/blog/mlabonne/abliteration .
reply
noufalibrahim 2 days ago
I believe that this is already done to several models. Ones I've come across are the JOSIEfied models from Gökdeniz Gülmez. I downloaded one or two and tried them on a local ollama setup. It does generate potentially dangerous output. Turning on thinking for the QWEN series shows how it arrives at its conclusions, and it's quite disturbing.

However, after a few rounds of conversation, it gets into loops and just repeats things over and over again. The main JOSIE models worked the best of all and were still useful even after abliteration.

reply
kube-system 2 days ago
I guess it's kind of like a lobotomy tool.
reply
sheepscreek 2 days ago
I guess it proves you cannot unlobotomize a hole in the head.
reply
halJordan 2 days ago
Everyone says that abliteration destroys the model. That's the trope phrase everyone who doesn't know anything but wants to participate says. If someone says it to you, ignore them.
reply
Alifatisk 2 days ago
This is for local models right? I can't use it on, say my glm-5 subscription connected to opencode?
reply
HanClinto 2 days ago
Correct, local models only.
reply
g947o 21 hours ago
Went through the README but still have no idea how well this works, in terms of removing the censorship while minimally degrading the quality of responses. Well to be honest I can't tell if this works at all or is just an idea.
reply
PeterStuer 2 days ago
Already censored for sharing on FB Messenger?
reply
littlestymaar 2 days ago
Don't use this 2 days old vibe coded bullshit please.

p-e-w's Heretic (https://news.ycombinator.com/item?id=45945587) is what you're looking for if you're looking for an automatic de-censoring solution.

reply
ftkftk 2 days ago
Didn't make it past the first paragraph of AI slop in the README. Have some respect for your readers and put actual information in it, ideally human generated. At least the first paragraph! Otherwise you may as well name it IGNOREME.
reply
SilverElfin 2 days ago
Does anyone offer a live (paid) LLM chatbot / video generation / etc that is completely uncensored? Like not requiring doing any work except just paying for it?
reply
dragonwriter 15 hours ago
> Does anyone offer a live (paid) LLM chatbot / video generation / etc that is completely uncensored?

Probably not, because if it is completely uncensored, it would probably violate the law (in different ways) in every possible jurisdiction. (Also, one common method of censorship is exclusion of particular types of content from the training set, so to be completely free of that kind of censorship, there would have to be no content intentionally excluded from the training set.)

In general, paid services are censored not only to attempt to meet the laws in all jurisdictions of concern to the provider, but also to try to be safe with regard to the (shifting) demands of payment processors, and to try to maintain the PR image of the provider.

reply
mapontosevenths 23 hours ago
Nous Hermes was built from the ground up to be uncensored. No abliteration required.

It's not a frontier model but it will give you a feel for what it's like.

reply
nomel 2 days ago
Grok was one of the closest, with expected results: bad PR from the obvious use cases that come with little censorship.
reply
pjc50 15 hours ago
Does anyone offering such a thing bear liability if the model induces a crime?
reply
dragonwriter 15 hours ago
Someone offering a completely uncensored chatbot or image/video generation service is probably either committing, or at risk of committing, crimes directly, which may be a more pressing concern than having liability for inducing a third party to commit a crime.

Even jurisdictions with relatively broad expressive freedoms tend not to tolerate distribution (especially commercial distribution) of all conceivable content.

reply
measurablefunc 2 days ago
This is another instance of avant-garde "art".
reply
aplomb1026 23 hours ago
[dead]
reply
greenpizza13 2 days ago
Never stopped to ask if they should...
reply
k33n 12 hours ago
Of course all censorship should be removed from everything. But in this case, he never stopped to ask if he could.
reply