Case study: recovery of a corrupted 12 TB multi-device pool
113 points by salt4034 18 hours ago | 60 comments

yjftsjthsd-h 15 hours ago
> This is not a bug report. [...] The goal is constructive, not a complaint.

Er, I appreciate trying to be constructive, but in what possible situation is it not a bug that a power cycle can lose the pool? And if it's not technically a "bug" because BTRFS officially specifies that it can fail like that, why is that not in big bold text at the start of any docs on it? 'Cuz that's kind of a big deal for users to know.

EDIT: From the longer write-up:

> Initial damage. A hard power cycle interrupted a commit at generation 18958 to 18959. Both DUP copies of several metadata blocks were written with inconsistent parent and child generations.

Did the author disable safety mechanisms for that to happen? I'm coming from being more familiar with ZFS, but I would have expected BTRFS to also use a CoW model where it wasn't possible to have multiple inconsistent metadata blocks in a way that didn't just revert you to the last fully-good commit. If it does that by default but there's a way to disable that protection in the name of improving performance, that would significantly change my view of this whole thing.

reply
rincebrain 14 hours ago
As far as I can see, no: nothing of the sort was disabled, at least not in anything the author documented.

I suspect that the author's intent is less "I do not view this as a bug" and more "I do not think it's useful to get into angry debates over whether something is a bug". I do not know whether this is a common thing on btrfs discussions, but I have certainly seen debates to that effect elsewhere.

(My personal favorite remains "it's not a data loss bug if someone could technically theoretically write something to recover the data". Perhaps, technically, that's true, but if nobody is writing such a tool, nobody is going to care about the semantics there.)

reply
yjftsjthsd-h 14 hours ago
> I suspect that the author's intent is less "I do not view this as a bug" and more "I do not think it's useful to get into angry debates over whether something is a bug".

Agreed, and I appreciate the attempt to channel things into a productive conversation.

reply
rcxdude 13 hours ago
btrfs's reputation is not great in this regard.
reply
stingraycharles 9 hours ago
As far as I understand, single device and RAID1 are solid, but as soon as you want to do RAID1+0 or RAID5/6 you’re entering dangerous territory with BTRFS.
reply
bombela 3 hours ago
I had a metadata corruption in metadata raid1c3 (raid1, 3 copies) over 4 disks. It happened after an unplanned power loss during a simulated disk failure replacement. Since the manual cleanup of the filesystem metadata (list all files, get IO errors, delete the IO-errored files), the btrfs kernel driver segfaults in kernel space on any scrub or device replacement attempt.

Honestly the code of btrfs is a bit scary to read too. I have lost all trust in this filesystem.

Too bad because btrfs has pretty compelling features.

reply
Retr0id 14 hours ago
Unless I missed it, the writeup never identifies a causal bug, only things that made recovery harder.
reply
harshreality 11 hours ago
Using DUP as the metadata profile sounds insane.

Changing the metadata profile to at least raid1 (raid1, raid1c3, raid1c4) is a good idea, especially for anyone who is, against recommendations, using raid5 or raid6 for a btrfs array (raid1c3 is more appropriate for raid6). That would make it very difficult for metadata to get corrupted, and metadata corruption accounts for the lion's share of the higher-impact problems with raid5/6 btrfs.

check:

    btrfs fi df <mountpoint>
convert metadata:

    btrfs balance start -mconvert=raid1c3,soft <mountpoint>
(make sure it's -mconvert — m is for metadata — not -dconvert which would switch profiles for data, messing up your array)
reply
throwaway270925 8 hours ago
This should be at the top. Using metadata DUP on a 3 disk volume is already asking for it, and of course you lose data when you just use it as JBOD with data stored only once. Unless these are enterprise disks with capacitors, anything can happen when the system suddenly loses power. Not the FS's fault.

With the same configuration this can happen with ZFS, bcachefs etc just as well.

reply
rcxdude 8 hours ago
Will it render the whole filesystem inaccessible and unrepairable on those filesystems as well? One of the issues with btrfs is that it's brittle: failure tends not to cause an inconsistency in the affected part of the filesystem but bring down the whole thing. In general people are a lot more understanding of a power failure resulting in data corruption around the files that are actively being written at the time (there are limits to how much consistency can be achieved here anyway), much less so when the blast radius expands a lot further.
reply
adrian_b 6 hours ago
A few decades ago, XFS was notorious because a power failure would wipe out various files, even if they had been opened only for reading. For instance, I had seen many systems that were bricked because XFS wiped out /etc/fstab after a power failure.

Nevertheless, many, many years ago, the XFS problems have been removed and today it is very robust.

During the last few years, I have seen a great number of power failures on computers without a UPS, where XFS was being used intensively at the moment of the power failure. Despite that, in none of those cases was there any filesystem corruption whatsoever; the worst that ever happened was the loss of the last writes performed immediately before the power failure.

This is the behavior expected from any file system that claims to be journaled, even if in the past many journaled file systems failed to keep that promise: a few decades ago I had seen corrupted file systems on all the Linux file systems existing at the time, and also on NTFS. Back then, only FreeBSD UFS with "soft updates" was completely unaffected by power failures of any kind.

However, nowadays I would expect all these file systems to be much more mature and to have fixed any bugs long ago.

BTRFS appears to be the exception, as the stories about corruption events do not seem to diminish in time.

reply
bombela 3 hours ago
I still got corrupted metadata with metadata raid1c3 on btrfs on a power loss. I never had this happen with ext4 alone or atop Linux raid.

I want to be clear that losing (meta)data in flight during a power loss is expected. But a broken filesystem after that is definitely not acceptable.

A PostgreSQL database ended up softly corrupted: PostgreSQL could not replay its log because btrfs threw IO errors on fsync. That's just plain not acceptable.

reply
throwaway270925 8 hours ago
> A hard power cycle on a 3 device pool (data single, metadata DUP, DM-SMR disks) left the extent tree and free space tree in a state that no native repair path could resolve.

As a ZFS wrangler by day:

People in this thread seem happy to shit on btrfs, but this is very much not a sane, resilient configuration no matter the FS. Just something to keep in mind.

reply
scottlamb 6 hours ago
Might be true, but I don't see any aspect of that which is relevant to this event:

* Data single obviously means losing a single drive will cause data loss, but no drive was actually lost, right?

* Metadata DUP (not sure if it's across 2 disks or all 3) should be expected to be robust, I'd expect?

* I certainly eye DM-SMR disks with suspicion in general, but it doesn't sound like they were responsible for the damage: "Both DUP copies of several metadata blocks were written with inconsistent parent and child generations."

reply
zootboy 5 hours ago
> Metadata DUP (not sure if it's across 2 disks or all 3) should be expected to be robust, I'd expect?

No. DUP will happily put both copies on the same disk. You would need to use RAID1 (or RAID1c3 for a copy on all disks) if you wanted a guarantee of the metadata being on multiple disks.
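For anyone who wants to check how their own pool is laid out, a hedged sketch (the mountpoint is a placeholder; requires root and a mounted btrfs pool, and you should double-check the flags against your btrfs-progs version):

```shell
# Show how each profile's chunks are spread across devices; on a
# multi-disk pool with DUP metadata, both copies may sit on one device.
btrfs filesystem usage -T /mnt/pool

# Convert metadata to raid1 so the two copies land on different disks:
btrfs balance start -mconvert=raid1 /mnt/pool
```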

reply
scottlamb 4 hours ago
Wow, yuck. (The "Why do we even have that lever?!" line comes to mind.)

...even so, without a disk failure, that probably wasn't the cause of this event.

reply
zootboy 3 hours ago
The DUP profile is meant for use with a single disk. The RAID* profiles are meant for use with multiple disks. Both are necessary to cover the full gamut of BTRFS use cases, but it would probably be good if mkfs.btrfs spat out a big warning if you use DUP on a multi-disk filesystem, as this is /usually/ a mistake.
reply
Retr0id 14 hours ago
This is obviously LLM output, but perhaps LLM output that corresponds to a real scenario. It's plausible that Claude was able to autonomously recover a corrupted fs, but I would not trust its "insights" by default. I'd love to see a btrfs dev's take on this!
reply
number6 14 hours ago
This was also my first impulse. The second was that, if this happened to me, I would not be able to recover it myself. All the custom C tool talk... if you ask Claude Code, it will code something up.

Still, that he recovered the disks is amazing in itself. I would have given up and just pulled a backup.

However, I would like to see a dev saying: why didn't you use the --<flag> we created for exactly this use case?

reply
salt4034 6 hours ago
See this Reddit post for background: https://www.reddit.com/r/ClaudeAI/comments/1sdabux/hats_off_...

TLDR: The user got his filesystem corrupted on a forced reboot; native btrfs tools made the failure worse; the user asked Claude to autonomously debug and fix the problem; after multiple days of debugging, Claude wrote a set of custom low-level C scripts to recover 99.9% of the data; the user was impressed and asked Claude to submit an issue describing the whole thing.

reply
Retr0id 5 hours ago
Good to know that my claude-dar is still working.
reply
yjftsjthsd-h 14 hours ago
I was assuming a real scenario with heavy LLM help for the recovery. It would be nice for the author to clarify. And, separately, for BTRFS devs to weigh in, though I'd somewhat prefer some indication that it's real before they spend their time.
reply
nslsm 13 hours ago
An LLM wouldn't make a mistake like "One paragraph summary"
reply
londons_explore 2 hours ago
Btrfs allows migration from ext4 with a rather good rollback strategy...

Post-migration, a complete disk image of the original ext4 disk will exist within the new filesystem, using no additional disk space due to the magic of copy-on-write.

Why isn't the repair process the same? Fix the filesystem to get everything online asap, and leave a complete disk image of the old damaged filesystem so other recovery processes can be tried if necessary.
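For context, the migration flow being referenced looks roughly like this (the device name is a placeholder; this is a sketch from btrfs-convert(8), not a tested recipe):

```shell
# In-place conversion; the original ext4 metadata and data are kept
# inside the new filesystem as a read-only image, shared via CoW.
btrfs-convert /dev/sdX1

mount /dev/sdX1 /mnt
ls /mnt/ext2_saved        # contains 'image', a file holding the old ext4 fs

# Rollback, discarding every change made since the conversion:
umount /mnt
btrfs-convert -r /dev/sdX1
```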

reply
jamesnorden 10 hours ago
People swear btrfs is "safe" now, but I've personally been bitten by data corruption more than once, so I stay away from it now.
reply
Avamander 7 hours ago
I installed Fedora on BTRFS using their installer and I lost that partition entirely. Couldn't wrestle it back to life to even copy stuff off it.

I think what happened was that the machine ran out of battery in suspend, but an unclean shutdown shouldn't cause such a deep corruption.

reply
c-c-c-c-c 11 hours ago
Added to my list of reasons to never use btrfs in production.
reply
stinkbeetle 15 hours ago
> Case study: recovery of a severely corrupted 12 TB multi-device pool, plus constructive gap analysis and reference tool set #1107

Please don't be btrfs please don't be btrfs please don't be btrfs...

reply
curt15 10 hours ago
Where are all of the ZFS corruption stories? Or are there simply fewer of those?
reply
Neikius 9 hours ago
Not sure about the stats, but it does feel like there are fewer. From what I know, ZFS has had bugs in encryption and in sending filesystem state.

On btrfs, anything above raid1 (5, 6, etc.) has had very serious bugs. I actually read an opinion somewhere (don't remember where) that raid5/6 on btrfs cannot work because the on-disk format is simply wrong for that case. I guess this is why raid1c3/c4 is being promoted and worked on now?

reply
nubinetwork 8 hours ago
Most of them are from new features that didn't get a proper shakedown test, like encryption.
reply
toaste_ 14 hours ago
I mean, the only other option was bcachefs, which might have been funny if this LLM-generated blogpost were written by the OpenClaw instance the developer has decided is sentient:

https://www.reddit.com/r/bcachefs/comments/1rblll1/the_blog_...

But no. It was btrfs.

As a side note, it's somewhat impressive that an LLM agent was able to produce a suite of custom tools that were apparently successfully used to recover some data from a corrupted btrfs array, even ad-hoc.

reply
yjftsjthsd-h 13 hours ago
It could be ZFS. I'd be much more surprised, but it can still have bugs.
reply
praseodym 13 hours ago
ZFS on Linux has had many bugs over the years, notably with ZFS-native encryption and especially sending/receiving encrypted volumes. Another issue is that using swap on ZFS is still guaranteed to hang the kernel in low memory scenarios, because ZFS needs to allocate memory to write to swap.
reply
nubinetwork 8 hours ago
The swap issue isn't ZFS's fault though; it works just fine on FreeBSD and illumos. It's an issue with how the Linux kernel handles things.
reply
badgersnake 12 hours ago
The zero copy that zero copied unencrypted blocks onto encrypted file systems was genius. It’s almost like they don’t test.
reply
duskdozer 9 hours ago
Welp. Guess I need to figure out another fs to use for a few drives in a nonraid pool I haven't gotten around to setting up yet. I forget why zfs seemed out. xfs?
reply
Filligree 7 hours ago
ZFS is out because the Linux developers refuse to cooperate by providing the hooks it would need to avoid duplicating the disk cache.

That’s the only real reason. There are some papercuts, but they don’t compare to the risks described in this article.

reply
lnx01 5 hours ago
bulletproof/bulletproof/bulletproof .... Gemini LLM
reply
phoronixrly 15 hours ago
To the author: did you continue using btrfs after this ordeal? An FS that will not eat (all) your data upon a hard power cycle only at the cost of 14 custom C tools is a hard pass from me, no matter how many distros try to push it down my throat as 'production-ready'...

Also, impressive work!

reply
fpoling 12 hours ago
What are the alternatives to btrfs? At 12 TB data checksums are a must unless the data tolerate bit-rot. And if one wants to stick with the official kernel without out-of-tree modules, btrfs is the only choice.
reply
aktau 11 hours ago
I tried btrfs on three different occasions. Three times it managed to corrupt itself. I'll admit I was too enthusiastic the first time, trying it less than a year after it appeared in major distros. But the latter two are unforgivable (I had to reinstall my mom's laptop).

I've been using ZFS for my NAS-like thing since then. It's been rock solid (*).

(*): I know about the block cloning bug, and the encryption bug. Luckily I avoided those (I don't tend to enable new features like block cloning, and I didn't have an encrypted dataset at the time). Still, all in all it's been really good in comparison to btrfs.

reply
simoncion 4 hours ago
Additional anecdata:

I've been using btrfs as the primary FS for my laptop for nearly twenty years, and for my desktop and multipurpose box for as long as they've existed (~eight and ~three years, respectively). I haven't had troubles with the laptop FS in like fifteen years, and have never had troubles with the desktop or multipurpose box.

I also used btrfs as the production FS for the volume management in our CI at $DAYJOB, as it was way faster than overlayfs. No problems there, either.

Go figure, I guess.

reply
raron 4 hours ago
I think you could use dm-integrity over the raw disks to get checksums and protect against bit rot; then you can use mdraid to make a RAID1/5/6 out of the virtual block devices presented by dm-integrity.

I suspect this is still vulnerable to the write hole problem.

You can add LVM to get snapshots, but this is still not the end-to-end copy-on-write solution that btrfs and ZFS are supposed to provide.
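That stack might be assembled roughly like this; an untested sketch with placeholder device names, based on integritysetup(8) and mdadm(8), not a recipe to run blindly:

```shell
# 1. Put dm-integrity under each raw disk so every read is
#    checksum-verified (here with sha256 instead of the crc32 default).
integritysetup format /dev/sda --integrity sha256
integritysetup open   /dev/sda int-a --integrity sha256
integritysetup format /dev/sdb --integrity sha256
integritysetup open   /dev/sdb int-b --integrity sha256

# 2. Build mdraid on top; a checksum failure surfaces as a read error,
#    which md then repairs from the surviving mirror.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/int-a /dev/mapper/int-b

# 3. Optional LVM layer for snapshots, then a plain filesystem on top.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data
```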

reply
egorfine 12 hours ago
> if one wants to stick with the official kernel without out-of-tree modules

I wonder how a requirement like that could possibly arise. Especially with an obvious exception for ZFS.

reply
ThatPlayer 11 hours ago
Bcachefs also fulfills the requirement of checksums (and multi device support).

Also out of tree.

reply
Neikius 9 hours ago
Isn't bcachefs even younger and less polished than btrfs? It does show more promise as btrfs seems to have fundamental design issues... but still I wouldn't use that for my important data.
reply
ThatPlayer 9 hours ago
I don't disagree. Gotta have backups for important data either way!

I was just talking about filesystems with checksumming (and multi-device support). Any new filesystem supporting these features is going to be newer.

I've had both btrfs and bcachefs multi-device filesystems lock up read-only on me. So no real data loss, just a pain to get the data into a new filesystem, especially the time it was an 8-drive array on btrfs.

reply
phoronixrly 11 hours ago
Does it not also eat data though?
reply
Sesse__ 9 hours ago
Good thing all disks these days have data checksums, then!

(50TB+ on ext4 and xfs, and no, no bit rot. Yes, I've checked most of it against separate sha256sum files now and then. As long as you have ECC RAM, disks just magically corrupting your data is largely a myth.)
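For what it's worth, that kind of spot-check needs nothing beyond coreutils; a self-contained sketch (using throwaway temp paths in place of a real archive):

```shell
#!/bin/sh
# Build a sha256 manifest for a tree, then re-verify it later.
dir=$(mktemp -d)
manifest=$(mktemp)
echo "some archived file" > "$dir/a.txt"

# Record checksums once and store the manifest somewhere safe:
( cd "$dir" && find . -type f -exec sha256sum {} + ) > "$manifest"

# Re-verify now and then; -c exits non-zero on any mismatch:
( cd "$dir" && sha256sum -c --quiet "$manifest" ) && echo "no rot detected"
```

With --quiet, a clean run prints nothing from sha256sum itself, so the trailing echo is the only output on success.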

reply
rincebrain 6 hours ago
Less mythic on SSDs than spinning rust, in my experience.

Not particularly frequent either way, but I have absolutely had models of SSDs where it became clear after a few months of use that a significant fraction of them appeared to be corrupting their internal state and serving incorrect data back to the host, leading to errors and panics.

(_usually_ this was accompanied by read or write errors. But _usually_ is notable when you've spent some time trying to figure out if the times it didn't were a different problem or the same problem but silent.)

There was also the notorious case with certain Samsung spinning rust and dropping data in their write cache if you issued SMART requests...

reply
phoronixrly 11 hours ago
lvm offers lvmraid, integrity, and snapshots as one example. It's old unsexy tech, but losing data is not to my taste lately...
reply
fpoling 10 hours ago
LVM only supports checksums for metadata; it does not checksum the data itself. For checksums with arbitrary filesystems one can use a dm-integrity device rather than LVM, but performance suffers due to the separate journal writes done by the device.
reply
phoronixrly 9 hours ago
reply
fpoling 4 hours ago
But that is just RAID on top of dm-integrity. And the Red Hat docs omit an important part when suggesting the bitmap mode with dm-integrity:

man 8 integritysetup:

       --integrity-bitmap-mode, -B
           Use alternate bitmap mode (available since Linux kernel 5.2) where dm-integrity uses bitmap instead of a journal. If a bit in the bitmap is 1, then corresponding region’s data and integrity tags are not synchronized - if the machine crashes, the unsynchronized regions will be recalculated. The bitmap mode is faster than the journal mode, because we don’t have to write the data twice, but it is also less reliable, because if data corruption happens when the machine crashes, it may not be detected.
I just do not see how, without direct filesystem support, one can have both reliable checksums and performance.
reply
Joel_Mckay 11 hours ago
Could try ZFS or CephFS... even if several host roles are in VM containers (45Drives has a product set up that way).

The btrfs solution has a mixed history, and has had a lot of the same issues DRBD could hit. These setups are great until some hardware or kernel module eventually goes sideways, and then the auto-healing cluster filesystems start to make a lot more sense. Note that with cluster-based complete-file copy/repair object features, the damage is localized to single files at worst, and folks don't have to wait 3 days to bring up the cluster after a crash.

Best of luck, =3

reply
stinkbeetle 10 hours ago
What devices are you talking about, what's the UBER, over what period of time?

RAID and logical block redundancy had scaled to petabytes in serious production use for years before btrfs was even developed.

reply
devnotes77 6 hours ago
[dead]
reply
weiyong1024 12 hours ago
[dead]
reply
blae 6 hours ago
oh great, here come all the zfs fanboys to shit on btrfs again with made-up stories of corruption
reply
yjftsjthsd-h 5 hours ago
Why would you assume that people are making up reports of corruption? Is it really inconceivable to you that the thing could have bugs?
reply
Aachen 4 hours ago
> Is it really inconceivable to you that the thing could have bugs?

Or user error, or hardware setups where the docs didn't say "don't do that". If zfs is somehow better in any of those three areas, that would result in fewer corruption stories as well. Hard to know without being able to control for popularity though

Seems really weird to me to assume people make up stories to promote their favorite filesystem. Of course I have one to share as well (opened the thread without knowing it was about btrfs to begin with, I'm not brigading...)

---

I tried btrfs once in my life. I wanted to (1) mirror two disks so a routine disk failure doesn't mean I lose X hours of updates since the last off-site backup, and (2) detect bit rot. And of course it resulted in a giant headache:

The disks got out of sync and put themselves in read-only mode with different data on each (which one has the latest data? do they both have new fragments?). I eventually figured out which one had the latest data, and then mixed up the source and destination device in the recovery command. IIRC the latter was caused by me stopping reading the man page once I found the info I was after, instead of reading the whole thing carefully; the subsequent text would have clued me in.

The recovery mess-up is user error, but if this happens to people on btrfs more often than on zfs, maybe zfs is more recommendable anyway. But I've not tried zfs, so that's not a statement I can make.

I'm back to ext4. I will just use backups and hope for the best. This constant risk of full-filesystem corruption isn't worth it to catch the few files that changed in the last hours, or the few bytes that will rot over my lifetime. On my todo list is writing a little tool that just stores sha256sum + mtime for each file and alerts me if the former changed without the latter; then I can retrieve the file from backup and perhaps swap out the disk.
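A sketch of what such a tool could look like, assuming GNU stat/touch and space-free paths (the function names and the manifest format here are made up for illustration):

```shell
#!/bin/bash
# Record "sha256 mtime path" per file; later, flag files whose hash
# changed while the mtime did not, i.e. silent rot rather than an edit.

snapshot() {  # snapshot DIR MANIFEST
    find "$1" -type f | while read -r f; do
        printf '%s %s %s\n' "$(sha256sum "$f" | awk '{print $1}')" \
                            "$(stat -c %Y "$f")" "$f"
    done > "$2"
}

check() {  # check DIR MANIFEST: print files that rotted since the snapshot
    find "$1" -type f | while read -r f; do
        hash=$(sha256sum "$f" | awk '{print $1}')
        mtime=$(stat -c %Y "$f")
        old=$(awk -v p="$f" '$3 == p {print $1, $2}' "$2")
        [ -z "$old" ] && continue   # new file, nothing to compare against
        if [ "$hash" != "${old%% *}" ] && [ "$mtime" = "${old#* }" ]; then
            echo "POSSIBLE BIT ROT: $f"
        fi
    done
}

# Demo: snapshot, then change a file's contents while restoring its mtime.
tmp=$(mktemp -d)
manifest=$(mktemp)
echo "important data" > "$tmp/file.txt"
snapshot "$tmp" "$manifest"
old_mtime=$(stat -c %Y "$tmp/file.txt")
echo "corrupted data" > "$tmp/file.txt"
touch -d "@$old_mtime" "$tmp/file.txt"   # contents changed, mtime restored
check "$tmp" "$manifest"                 # prints: POSSIBLE BIT ROT: .../file.txt
```

A legitimate edit updates the mtime along with the contents, so it would not be flagged; only a content change with an unchanged mtime trips the alert.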

reply