It's all a blur
353 points by zdw 7 days ago | 67 comments

jeremyscanvic 2 days ago
Blur is perhaps surprisingly one of the degradations we know best how to undo. It's been studied extensively because there are just so many applications: microscopes, telescopes, digital cameras. The usual tricks revolve around inverting blur kernels and making educated guesses about what the blur kernel and underlying image might look like. My advisors and I were even able to train deep neural networks on blurry images alone, using only a mild assumption of approximate scale-invariance at the level of the training dataset [1].

[1] https://ieeexplore.ieee.org/document/11370202
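
To make the kernel-inversion idea concrete, here's a minimal 1-D sketch (not the method from the paper; it assumes the kernel is known, a circular convolution model, and no noise):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(256)                       # unknown "sharp" signal
    k = np.array([1.0, 1.0, 1.0]) / 3.0       # known 3-tap box blur kernel

    K = np.fft.fft(k, n=x.size)               # kernel spectrum (circular convolution model)
    y = np.fft.ifft(np.fft.fft(x) * K).real   # blurred observation

    x_hat = np.fft.ifft(np.fft.fft(y) / K).real   # naive inverse filter
    print(np.max(np.abs(x - x_hat)))              # tiny in this noiseless, unquantized setting

With real photos you also have noise and quantization, which is where regularized approaches like Wiener deconvolution come in.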

reply
deaddodo 2 days ago
Just to add to this: intentional/digital blur is even easier to undo, as the source image is still mostly there. You just have to find the inverse transform.

This is how one of the more notorious pedophiles[1] was caught[2].

1 - https://en.wikipedia.org/wiki/Christopher_Paul_Neil

2 - https://www.bbc.com/news/world-us-canada-39411025

reply
criddell 2 days ago
Isn't that roughly (ok, very roughly) how generative diffusion AIs work when you ask them to make an image?
reply
jeremyscanvic 2 days ago
You're absolutely right! Diffusion models basically invert noising (random Gaussian samples added independently to every pixel), but they can also work with blur instead of noise.

Generally, when you're dealing with a blurry image you can reduce the strength of the blur up to a point, but there's always some amount of information that's impossible to recover. At that point you have two choices: either you leave it a bit blurry and call it a day, or you introduce (hallucinate) information that isn't there in the image. Diffusion models generate images by hallucinating information at every stage so the result is crisp at the end, but in many deblurring applications you prefer to stay faithful to what's actually there and accept the small amount of blur that remains.
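
For reference, the forward process a standard diffusion model learns to invert looks roughly like this (a sketch with a made-up noise schedule, not any particular model's):

    import numpy as np

    def noisy(x0, t, alpha_bar):
        # forward diffusion: mix the clean image with i.i.d. Gaussian noise per pixel
        eps = np.random.randn(*x0.shape)
        return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

    alpha_bar = np.linspace(0.999, 0.001, 1000)   # made-up schedule: from almost clean to almost pure noise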

reply
dangond 2 days ago
I believe diffusion image models learn to model a reverse-noising function, rather than reverse-blurring.
reply
jeremyscanvic 2 days ago
Most of them do, but it's not mandatory; deblurring can be used instead [1]

[1] Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise, Bansal et al., NeurIPS 2023

reply
dekhn 2 days ago
I didn't learn about this trick (deconvolution) until grad school, and even then it seemed like a spooky mystery to me.
reply
swiftcoder 2 days ago
One salient point not touched on here is that an awful lot of the time, the thing folks are blurring out is specifically text. And since we know an awful lot about what text ought to look like, we have a lot more information to guide the reconstruction...
reply
jlokier 2 days ago
Good point, though you have to beware that text-aware image enhancement sometimes replaces characters with what it thinks is a more likely character from context.

I've seen my phone camera's real-time viewfinder show text on a sign with one letter different from the real sign. If I wasn't looking at the sign at the same time, I might not have noticed the synthetic replacement.

reply
wffurr 2 days ago
>> sometimes replaces characters with what it thinks is a more likely character from context

Like the JBIG2 algorithm used in a zero-click iMessage exploit a while back, delivered as a PDF disguised as a GIF: https://projectzero.google/2021/12/a-deep-dive-into-nso-zero...

That algorithm's tendency to swap characters also caused incorrect invoices, incorrect measurements in blueprints, incorrect metering of medicine, etc. in Xerox scanners: https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...

reply
gwbas1c 2 days ago
And older people are very good at reading blurry text.

(My grandmother always told me to "never get old." I wish I followed her advice.)

reply
siofra 2 days ago
Beautiful walkthrough. The key insight people miss is that "looks unreadable to humans" and "is information-theoretically destroyed" are very different bars. The blur looks opaque because our visual system is bad at detecting small per-pixel differences, but the math does not care about our perception.

Same principle applies to other "looks safe" redactions — pixelation with small block sizes, partial masking of credentials, etc. If you can describe the transform as a linear operation, there is probably a pseudoinverse waiting to undo it.
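
A toy version of that pseudoinverse point, with a 1-D box blur written out as an explicit matrix (sizes are arbitrary):

    import numpy as np

    n, r = 64, 4                       # arbitrary row length and blur radius
    A = np.zeros((n, n))               # the blur as an explicit linear operator
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)  # each output pixel is an average of its neighbourhood

    x = np.random.rand(n)              # "hidden" pixel row
    y = A @ x                          # what the blurred image actually stores
    x_hat = np.linalg.pinv(A) @ y      # pseudoinverse reconstruction
    print(np.abs(x - x_hat).max())     # small, since there's no noise or quantization here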

reply
derektank 2 days ago
Captain Disillusion recently covered this subject in a more popular science format as well

https://youtu.be/xDLxFGXuPEc

reply
lupire 2 days ago
8 months ago, for those of us who got excited by the idea of a "recent" new video from CD.
reply
derektank 2 days ago
In my defense, that is quite literally the most recent full video the Captain has uploaded!
reply
coldtea 2 days ago
>But then, it’s not wrong to scratch your head. Blurring amounts to averaging the underlying pixel values. If you average two numbers, there’s no way of knowing if you’ve started with 1 + 5 or 3 + 3. In both cases, the arithmetic mean is the same and the original information appears to be lost. So, is the advice wrong?

Well, if you have a large enough averaging window (as is the case when blurring letters), the underlying content is constrained (a fixed set of possible shapes), and information about which shape you're looking at is partly retained.

Not very different from the information retained in minesweeper games.

reply
cornhole 2 days ago
Reminds me of the guy who used the Photoshop swirl effect to mask his face in CSAM he produced, and who was found out when someone just undid the swirl.
reply
jszymborski 2 days ago
This is the case I always think of when it comes to reversing image filters.
reply
lupire 2 days ago
Action Lab just did a video on physical swirling vs mixing. Swirling is reversible.
reply
bmandale 2 days ago
> This nets us another original pixel value, img(8).

This makes it all seem a little too pat. In fact, this probably doesn't get us the original pixel value, because quantization deletes information when the blur is applied, and that information can never be recovered afterwards. We can at best get an approximation of the original value, which is rather obvious given that we can already vaguely make out figures in a blurred image.

> Nevertheless, even with a large averaging window, fine detail — including individual strands of hair — could be recovered and is easy to discern.

The reason for this is that he's demonstrating a box blur. A box blur is roughly equivalent to taking the frequency transform of the image, then multiplying it by a sort of decaying sinc (a damped sine). This achieves a "blur" in that the lowest frequency is multiplied by 1 and hence retained, and higher frequencies are attenuated. However, visually we can see that a box blur doesn't look very good, and importantly it doesn't necessarily attenuate the very highest frequencies much more than far lower frequencies. Hence it isn't surprising that the highest frequencies can be recovered in good fidelity. Compare a Gaussian blur, which is usually considered to look better, and whose frequency transform concentrates the attenuation at the highest frequencies. You would be far less able to recover individual strands of hair in an image that was Gaussian blurred.

> Remarkably, the information “hidden” in the blurred images survives being saved in a lossy image format.

Remarkable, maybe, but unsurprising if you understand that JPEG operates on basically the same frequency logic as described above. Specifically, it further attenuates and quantizes the highest frequencies of the image. Since the box blur has barely attenuated them already, this doesn't affect our ability to recover the image.
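
A quick numerical check of the box-vs-Gaussian point above (a rough sketch, not from the article): compare how much of the highest frequencies each 9-tap kernel leaves behind.

    import numpy as np

    n = 256
    box = np.zeros(n); box[:9] = 1.0 / 9.0              # 9-tap box blur

    t = np.arange(-4, 5)
    g = np.exp(-t**2 / (2 * 1.5**2)); g /= g.sum()      # 9-tap Gaussian, sigma = 1.5
    gauss = np.zeros(n); gauss[:9] = g

    H_box = np.abs(np.fft.rfft(box))
    H_gauss = np.abs(np.fft.rfft(gauss))
    print(H_box[-1], H_gauss[-1])   # response at Nyquist: roughly 0.11 for the box vs 0.002 for the Gaussian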

reply
mananaysiempre 2 days ago
> You would be far less able to recover individual strands of hair in an image that was gaussian blurred.

Frequency-domain deconvolution is frequency-domain deconvolution, right? It doesn’t really matter what your kernel is.

reply
bmandale 9 hours ago
As I explain, you can't perfectly reverse these filters because of quantization. The more the signal is attenuated, the more information is lost when quantizing. So yes, it does matter what your kernel is.
reply
dsego 2 days ago
Can this be applied to camera shake/motion blur? At slow shutter speeds, the slight shake of the camera produces this type of blur. It's usually mitigated with IBIS (in-body image stabilization), which stabilizes the sensor.
reply
alphazard 2 days ago
The ability to reverse a blur depends heavily on the transformation being well known; in this case it is deterministic and known with certainty. Any algorithm to reverse motion blur will depend on the translation and rotation of the camera in physical space, and the best the algorithm can do is limited by the uncertainty in estimating those values.

If you apply a fake motion blur like in photoshop or after effects then that could probably be reversed pretty well.

reply
crazygringo 2 days ago
> and the best the algorithm could do will be limited by the uncertainty in estimating those values

That's relatively easy if you're assuming simple translation and rotation (simple camera movement), as opposed to a squiggle movement or something (e.g. from vibration or being knocked), because you can simply detect how much sharper the image gets and home in on the right values.
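
A very rough sketch of that "home in on the right values" idea, restricted to horizontal motion of unknown length (real methods search over richer kernels and use better sharpness metrics than this):

    import numpy as np

    def sharpness(img):
        # variance of a discrete Laplacian response as a crude focus metric
        lap = np.diff(img, 2, axis=0)[:, 1:-1] + np.diff(img, 2, axis=1)[1:-1, :]
        return lap.var()

    def naive_deconv(blurred, kernel, eps=1e-3):
        # regularized frequency-domain division, so near-zero kernel bins don't blow up
        K = np.fft.fft2(kernel, s=blurred.shape)
        return np.fft.ifft2(np.fft.fft2(blurred) * np.conj(K) / (np.abs(K) ** 2 + eps)).real

    def estimate_blur_length(blurred, max_len=30):
        # try horizontal motion kernels of increasing length, keep whichever deconvolves sharpest
        def kern(length):
            return np.full((1, length), 1.0 / length)
        return max(range(2, max_len), key=lambda L: sharpness(naive_deconv(blurred, kern(L))))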

reply
dizzant 2 days ago
I recall a paper from many years ago (early 2010s) describing methods to estimate the camera motion and remove motion blur from blurry image contents only. I think they used a quality metric on the resulting “unblurred” image as a loss function for learning the effective motion estimate. This was before deep learning took off; certainly today’s image models could do much better at assessing the quality of the unblurred image than a hand-crafted metric.
reply
yorwba 2 days ago
Probably not the exact paper you have in mind, but... https://jspan.github.io/projects/text-deblurring/index.html
reply
johnmaguire 2 days ago
Record gyro motion at time of shutter?
reply
jeremyscanvic 2 days ago
The missing piece of the puzzle is how to determine the blur kernel from the blurry image itself. There's a whole body of literature on that, called blind deblurring.

For instance: https://deepinv.github.io/deepinv/auto_examples/blind-invers...

reply
crazygringo 2 days ago
Absolutely, Photoshop has it:

https://helpx.adobe.com/photoshop/using/reduce-camera-shake-...

Or... from the note at the top, had it? Very strange; features are almost never removed. I really wonder what the architectural reason was here.

reply
tracker1 2 days ago
Just guessing, patent troll.
reply
crazygringo 2 days ago
Oof, I hope not. I wonder if the architecture for GPU filters migrated, and this feature didn't get enough usage to warrant being rewritten from scratch?
reply
tonymillion 2 days ago
I believe Microsoft of all people solved this a while ago by using the gyroscope in a phone to produce a de-blur kernel that cleaned up the image.

It's somewhere here: https://www.microsoft.com/en-us/research/product/computation...

reply
ryukoposting 2 days ago
I wonder if the "night mode" on newer phone cameras is doing something similar. Take a long exposure, use the IMU to produce a kernel that tidies up the image post facto. The night mode on my S24 actually produces some fuzzy, noisy artifacts that aren't terribly different from the artifacts in the OP's deblurs.
reply
srean 2 days ago
Encode the image in the initial conditions of a laminar flow and you can recover the original image from a later observation.

If, however, you observe after turbulence has set in, then some of the information has been lost; it's in the entropy now. How much depends on the turbulent flow.

Don't miss this video by Smarter Every Day:

https://youtu.be/j2_dJY_mIys?si=ArMd0C5UzbA8pmzI

Treat the dynamics and the time of evolution as your private key; laminar flow is a form of encryption.

reply
lupire 2 days ago
If you encode your data directly in the fluid, then turbulence becomes the statistical TTL on the data.
reply
tflinton 2 days ago
I did my thesis on using Medioni's tensor voting framework to reconstruct noisy, blurry, low-res, and similarly degraded images. It was sponsored by USGS, on a data set that I thought was a bit of a bizarre use case. The approach worked pretty well, with some reasonable success at doing "COMPUTER ENHANCE"-type computer vision magic. Later on, talking with my advisor about the bizarrely mundane and uninteresting data sets we were working on from the grant, he quipped: "You built a reasonable way of unblurring and enhancing unreadable images; the military doesn't care about this mundane use case." It then occurred to me that I'd been wildly ignorant of what I'd just spent two years of my life on.
reply
esafak 2 days ago
This is classical deconvolution. Modern de-blurring implementations are DNN-based.
reply
praptak 2 days ago
My (admittedly superficial) knowledge about blur reversibility is that an attacker may know what kind of stuff is behind the blur.

I mean knowledge like "a human face, but the potential set of humans is known to the attacker" or even worse "a text, but the font is obvious from the unblurred part of the doc".

reply
jonathanlydall 2 days ago
This was also my understanding.

It's essentially like "cracking" a password when you have its hash and know the hashing algorithm. You don't have to know how to reverse the blur; you just need to know how to apply it the normal way. You can then essentially brute-force through all possible characters, one at a time, and check whether the result looks the same after applying the blur.
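
A sketch of that brute-force idea (render() is a hypothetical helper that rasterizes a candidate string in the known font, and the blur parameters have to match whatever the target actually used):

    import numpy as np

    def box_blur(img, w=8):
        # the same blur the attacker believes was applied (must match the target's settings)
        k = np.ones(w) / w
        return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)

    def crack(blurred_crop, candidates, render):
        # forward-blur each candidate rendering and keep the closest match, hash-cracking style
        return min(candidates, key=lambda s: np.sum((box_blur(render(s)) - blurred_crop) ** 2))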

Thinking about this, adding randomness to the blurring would likely help.

Or, far more simply, just mask the sensitive data with a single color, which is impossible to reverse for rasterized images (it's not a good idea for PDFs, which tend to keep the text "hidden" underneath).

reply
swiftcoder 2 days ago
> mask the sensitive data with a single color which is impossible to reverse

You note the pitfall of text remaining behind the redaction in PDFs (and other layered formats), but there are also pitfalls here around alpha channels. There have been several incidents where folks drew not-quite-opaque redaction blocks over their images.

reply
yetihehe 2 days ago
> just mask the sensitive data with a single color which is impossible to reverse (for rasterized images, this is not a good idea for PDFs

Also not a good idea for masking already-compressed images of text, like JPEGs, because some of the information may have bled into nearby uncovered areas.

reply
johnmaguire 2 days ago
Interesting - does a little extra coverage solve this or is it possible to use distant pixels to find the original?
reply
sebastianmestre 2 days ago
yep, some padding fixes this

JPEG compression can only move information at most 16px away, because it works on 8x8 pixel blocks, on a 2x down-sampled version of the chroma channels of the image (at least the most common form of it does)

reply
wheybags 2 days ago
I'm not super familiar with the jpeg format, but iirc h.264 uses 16x16 blocks, so if jpeg is the same then padding of 16px on all sides would presumably block all possible information leakage?

Except for the size of the blocked section, of course. E.g. if you know it's a person's name, from a fixed list of people, well, "Huckleberry" and "Tom" are very different lengths.

reply
oulipo2 2 days ago
The countermeasure is easy: just add a small amount of random noise (even an amount not visible to the human eye) to the blurred picture, and suddenly the "blur inversion" fails spectacularly.
reply
sebzim4500 2 days ago
Does this actually work? I would have thought that, given the deconvolution step is just a linear operator with reasonable coefficients, adding a small amount of noise to the blurred image would just add a similarly small amount of noise to the unblurred result.
reply
srean 2 days ago
To reconstruct the image, one has to cut off those frequencies in the corrupted image where the signal-to-noise ratio is poor. In many original images the signal in the high frequencies is expendable, so get rid of those and then invert.

https://en.wikipedia.org/wiki/Wiener_deconvolution

If one blindly inverts the linear blur transform, then yes, the reconstruction will usually be a completely unrecognisable mess, because the inverse operator dramatically boosts the noise as well.
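
In code, the Wiener idea is roughly this (a sketch with a flat noise-to-signal ratio standing in for the full spectra):

    import numpy as np

    def wiener_deconv(y, k, snr=100.0):
        # y: blurred + noisy image, k: known blur kernel, snr: assumed signal-to-noise ratio
        K = np.fft.fft2(k, s=y.shape)
        G = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)   # shrinks the bins where the kernel is weak
        return np.fft.ifft2(np.fft.fft2(y) * G).real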

reply
jfaganel99 2 days ago
How do we apply this to geospatial face and licence plate blurs?
reply
IshKebab 2 days ago
In practice unblurring (deconvolution) doesn't really work as well as you'd hope because it is usually blind (you don't know the blur function), and it is ill-conditioned, so any small mistakes or noise get enormously amplified.
reply
jkuli 2 days ago
A simple solution is to set this up as a system of linear equations, Ax = b. Each row of the matrix A is one linear equation: the kernel weights applied across the image x, with b holding the blurred pixel values. The full matrix would be a terabyte, so take advantage of the zeros and use an efficient sparse solve for x instead of inversion.

"Enhance" really refers to combining multiple images (stacking). Each pixel in a low-res image is a kernel applied over the same high-res scene, so undoing a 100-pixel blur is roughly equivalent to combining 10,000 images for 100x super-resolution.
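
A 1-D sketch of the Ax = b setup with made-up sizes (scipy's sparse solvers exploit the zeros):

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import spsolve

    n, w = 1000, 9                           # made-up signal length and blur width
    A = lil_matrix((n, n))
    for i in range(n):                       # one averaging equation per blurred pixel
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)

    x = np.random.rand(n)
    b = A @ x                                # the blurred pixel values
    x_hat = spsolve(A.tocsc(), b)            # efficient sparse solve instead of forming an inverse
    print(np.abs(x - x_hat).max())           # small in this noiseless, unquantized setting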

reply
zb3 2 days ago
Ok, what about gaussian blur?
reply
unconed 2 days ago
Sorry, but this post is the blind leading the blind, pun intended. Allow me to explain; I have a DSP degree.

The reason the filters used in the post are easily reversible is that none of them are binomial (i.e. the discrete equivalent of a Gaussian blur). A binomial blur uses the coefficients of a row of Pascal's triangle, and is thus what you get when you repeatedly average each pixel with its neighbor (in 1D).

When you do, the information at the Nyquist frequency is removed entirely, because a signal of the form "-1, +1, -1, +1, ..." ends up blurred _exactly_ into "0, 0, 0, 0...".
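
A quick numerical check of that point (using "valid" convolution to ignore edge effects):

    import numpy as np

    x = np.array([-1.0, 1.0] * 16)                        # signal at the Nyquist frequency
    binom = np.convolve(x, [0.25, 0.5, 0.25], "valid")    # [1 2 1]/4 binomial blur
    box = np.convolve(x, [1/3, 1/3, 1/3], "valid")        # 3-tap moving average
    print(np.abs(binom).max())   # 0.0: the Nyquist component is annihilated
    print(np.abs(box).max())     # ~0.333: attenuated, but still there to amplify back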

All the other blur filters, in particular the moving average, are just poorly conceived. They filter out the middle frequencies the most, not the highest ones. It's equivalent to doing a bandpass filter and then subtracting that from the original image.

Here's an interactive notebook that explains this in the context of time series. One important point is that the "look" people associate with "scientific data series" is actually an artifact of moving averages. If a proper filter is used, the blurriness of the signal is evident. https://observablehq.com/d/a51954c61a72e1ef

reply
jerf 2 days ago
"In today’s article, we’ll build a rudimentary blur algorithm and then pick it apart."

Emphasis mine. Quote from the beginning of the article.

This isn't meant to be a textbook about blurring algorithms. It was supposed to be a demonstration of how what may seem destroyed to a casual viewer is recoverable by a simple process, intended to give the viewer some intuition that maybe blurring isn't such a good information destroyer after all.

Your post kind of comes off like criticizing someone who's showing how easy it is to crack a Caesar cipher for not using AES-256. But the whole point was to be accessible, and to introduce the idea that just because it looks unreadable doesn't mean it's not very easy to recover. No, it's not a mistake to use the Caesar cipher for the initial introduction. Or a dead-simple one-dimensional blurring algorithm.

reply
unconed 3 hours ago
Using a Caesar cipher as an intro without explaining the pro tool and framing the educational context properly is just shit pedagogy, bro.

Go look up what a z-transform is, and begone.

reply
the_fall 2 days ago
If you have an endless pattern of ..., -1, 1, -1, 1, -1, 1, ... and run box blur with a window of 2 or 4, you get ..., 0, 0, 0, 0, 0, 0, ... too.

Other than that, you're not wrong about theoretical Gaussian filters with infinite windows over infinite data, but this has little to do with the scenario in the article. That's about the information that leaks when you have a finite window with a discrete step and start at a well-defined boundary.

reply
unconed 3 hours ago
A binomial is exactly equal to a repeated 2-sample box blur, yes. That's exactly how you construct Pascal's triangle.

For filter sizes > 2, box blurs are ass.

reply
yunnpp 2 days ago
Interesting... I've used moving averages without thinking too hard about the underlying implications. Do you recommend any particular book or resource on DSP basics for the average programmer?
reply
jszymborski 2 days ago
> Sorry but this post is the blind leading the blind, pun intended. Allow me to explain, I have a DSP degree.

FWIW, this does not read as constructive.

reply
Sesse__ 2 days ago
It also makes no sense to me, and I also have a DSP degree. Of course moving averages (aka box blurs) filter out higher frequencies more than middle frequencies.
reply
unconed 3 hours ago
Homework assignment: make a Bode plot of the convolution filters [1 1 1] vs [1 2 1].

Which one turns +1, -1, +1, -1, .. into all zeroes?

You ought to know this because the Fourier transform of [1 0 1] is a cosine of amplitude 2 on the complex unit circle e^(i*omega), which means the constant (center-tap) term needs to be 2 for the zero to land at Nyquist.

The frequency response H(z) (= H(e^(i*omega))) of [1 1 1], on the other hand, has its minimum somewhere in the middle.

Also here's a post that will teach you how to sight read the frequency response of symmetric FIR filters off the coefficients: https://acko.net/blog/stable-fiddusion/
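
For anyone who wants to skip the homework, the closed-form responses (a quick sketch):

    import numpy as np

    w = np.linspace(0.0, np.pi, 512)
    H_box = np.abs(1 + 2 * np.cos(w)) / 3      # normalized [1 1 1]: zero at w = 2*pi/3, 1/3 at Nyquist
    H_binom = (1 + np.cos(w)) / 2              # normalized [1 2 1]: falls monotonically to 0 at Nyquist
    print(H_box[-1], H_binom[-1])              # 0.333... vs 0.0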

reply
unconed 3 hours ago
The degree to which people defend poor scholarship and writing on HN these days is frankly pathetic.

There is nothing about that intro that is offensive. Reading comprehension ought to tell you that "pun intended" is a joke meant to make the bitter pill (that the OP wrote garbage) easier to swallow.

reply
oulipo2 2 days ago
Those unblurring methods look "amazing" like that, but they are very fragile: add even a modicum of noise to the blurred image and the deblurring will almost certainly fail completely. This is well known in signal processing.
reply
srean 2 days ago
Not necessarily.

If, however, one just blindly uses the (generalized) inverse of the point-spread function, then you are absolutely correct for the common point-spread functions we encounter in practice (they are usually very poorly conditioned).

One way to deal with this is to cut off those frequencies where the signal-to-noise ratio in that frequency bin is poor. This, however, requires some knowledge about the spectra of the noise and the signal. The Wiener filter uses that knowledge to work out an optimal filter.

https://en.wikipedia.org/wiki/Wiener_deconvolution

If one knows neither the statistics of the noise nor the point-spread function, then it gets harder and you are in the territory of blind deconvolution.

So just a word of warning: if you are relying only on sprinkling a little noise over blurred images to save yourself, you are on very, very dangerous ground.

reply
matsemann 2 days ago
Did you see the part where he saved the image with more and more lossy compression and showed that it was still recoverable?
reply
chenmx 2 days ago
What I find fascinating about blur is how computational photography has completely changed the game. Smartphone cameras now capture multiple exposures and computationally combine them, essentially solving the deblurring problem before it even happens. The irony is that we now have to add blur back artificially for portrait mode bokeh, which means we went from fighting blur to synthesizing it as a feature.
reply