A Recursive Algorithm to Render Signed Distance Fields
121 points by surprisetalk 5 days ago | 8 comments

userbinator 16 hours ago
Unsurprisingly, it turns out that other people had already thought of applying the multi-pass technique on GPU, but the idea is not very widely known.

The demoscene is particularly insular, but even across computing in general there seems to be little knowledge diffusion between different areas, leading to some reinventions (often with distinct terminology).

Tomte 12 hours ago
For example, the requirements for a CPU instruction set, in order for it to be properly virtualizable, had been known in the mainframe computing world for many, many years when Intel and AMD came up with their unvirtualizable (except via VMware's heroic tricks) 32-bit instruction sets.

Those requirements, along with the mainframe world's different jargon, were rediscovered from the literature when virtualization became a selling point in the PC world.

(Edouard Bugnion et al., "Hardware and Software Support for Virtualization")

linolevan 23 hours ago
Played around with the code to implement a little bit of SIMD. Was able to squeeze out a decent improvement: ~250 fps avg, ~140 low, ~333 high (on an M4). Looks pretty straightforward to thread as well. Cool stuff! Could help bring more GPU stuff back down to the CPU.
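The linked code isn't shown here, but the kind of win SIMD gives is easy to sketch: evaluate the distance function for a whole grid of pixels at once instead of one at a time. A minimal NumPy sketch — the scene (a single circle) and all names are invented for illustration; NumPy's whole-array operations run as tight vectorized loops, the same kind of win hand-written SIMD delivers:

```python
import numpy as np

def circle_sdf_grid(width, height, cx, cy, radius):
    """Signed distance from each pixel centre to a circle's boundary
    (negative inside, positive outside), computed for all pixels at once."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.hypot(xs + 0.5 - cx, ys + 0.5 - cy) - radius

d = circle_sdf_grid(64, 64, 32.0, 32.0, 10.0)
inside = d < 0.0  # coverage mask for the whole image in one shot
```

The scalar version of this is a double loop calling `hypot` per pixel; the array version does the same arithmetic over contiguous memory, which is where SIMD (and easy threading, by splitting rows) comes from.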
refulgentis 23 hours ago
Tl;dr: SDFs are really slow but cool because they can compactly define complex stuff; the demoscene uses them. Sort of the functional programming to traditional rendering's OOP. Would be cool if they were faster. Optimizing an algorithm for CPU rendering using recursive divide and conquer, 1 core with one object gets 50 fps; 100 fps if you lerp a 10x10 pixel patch instead of doing 1 pixel. The algorithm isn't fully optimized yet. Also, it turns out the author's idea is previously known but somewhat obscure; it is referred to as "cone marching".
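The 10x10-patch trick summarized above can be sketched like this: sample the SDF only at a patch's four corners and bilinearly interpolate the interior, trading a small error for roughly 25x fewer evaluations. A hypothetical illustration (the circle scene and all names are invented, not the article's code):

```python
import numpy as np

def sdf(x, y):
    # Example scene: a circle of radius 10 centred at (16, 16).
    return np.hypot(x - 16.0, y - 16.0) - 10.0

def lerp_patch(x0, y0, size):
    """Approximate the SDF over a size x size patch from only its four
    corner samples, via bilinear interpolation (4 evals, not size**2)."""
    d00 = sdf(x0, y0)
    d10 = sdf(x0 + size, y0)
    d01 = sdf(x0, y0 + size)
    d11 = sdf(x0 + size, y0 + size)
    t = np.linspace(0.0, 1.0, size)
    tx, ty = np.meshgrid(t, t)
    return (d00 * (1 - tx) * (1 - ty) + d10 * tx * (1 - ty)
            + d01 * (1 - tx) * ty + d11 * tx * ty)

approx = lerp_patch(0.0, 0.0, 10)
xs, ys = np.meshgrid(np.linspace(0.0, 10.0, 10), np.linspace(0.0, 10.0, 10))
exact = sdf(xs, ys)  # ground truth for comparing the interpolation error
```

The interpolation is exact at the corners and only slightly off in the interior when the patch is far from any surface, which is why the speedup costs so little visually.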
kg 13 hours ago
SDFs can be pretty fast if you do the work to optimize around them. Unreal Engine has lots of features based on SDFs that are used to great effect in games that run on consumer hardware.

You don't need bleeding edge hardware or software either. The game I'm working on generates a new SDF every frame for the scene (using the GPU's fragment units to rasterize the distance data for the objects in the scene into a scratch buffer) and then does cone traces through the generated SDF per-pixel to do realtime soft shadow casting and lighting, and that performs just fine even on an old laptop from 2015.
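Per-pixel cone traces through an SDF for soft shadows are commonly done with the running-minimum trick popularized by Inigo Quilez: march toward the light and track the smallest `k * d / t`, which approximates how much of the light cone the nearest occluder covers. This is a generic sketch of that technique, not the commenter's actual engine code (the scene and constants are invented):

```python
import math

def scene_sdf(p):
    # Example scene: a sphere of radius 0.5 at (0, 1, 0) over the plane y = 0.
    sphere = math.sqrt(p[0]**2 + (p[1] - 1.0)**2 + p[2]**2) - 0.5
    plane = p[1]
    return min(sphere, plane)

def soft_shadow(origin, light_dir, k=8.0, t_max=20.0):
    """March a shadow ray toward the light; larger k gives sharper penumbras.
    Returns 0.0 in full shadow, 1.0 fully lit, in between for penumbra."""
    shadow, t = 1.0, 0.02
    while t < t_max:
        p = (origin[0] + t * light_dir[0],
             origin[1] + t * light_dir[1],
             origin[2] + t * light_dir[2])
        d = scene_sdf(p)
        if d < 1e-4:
            return 0.0  # ray hit an occluder: fully shadowed
        shadow = min(shadow, k * d / t)  # narrowest clearance of the cone so far
        t += d  # sphere-trace step: safe to advance by the distance bound
    return shadow
```

A point directly under the sphere returns 0.0, a point far to the side returns 1.0, and points near the shadow's edge land in between — the soft penumbra, from a single ray per pixel.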

01HNNWZ0MV43FF 23 hours ago
Holy crap! The demo is hitting 30 FPS from certain angles on my decade-old CPU.
01HNNWZ0MV43FF 20 hours ago
Readers might also enjoy this: https://www.youtube.com/watch?v=il-TXbn5iMA

"I'm making a game engine based on dynamic signed distance fields (SDFs)"

That project is for GPU and it works by caching SDFs as marching cubes, with high resolution near the camera and low resolution far away, to build huge worlds out of arbitrary numbers of SDF edits.

So it probably wouldn't stack with these CPU optimizations, which render the SDF directly.
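The distance-based resolution scheme described above can be sketched as a clipmap-style LOD rule: cache chunks of the SDF at a voxel size that doubles each time the chunk's distance from the camera doubles. All names and constants here are invented for illustration, not taken from that project:

```python
import math

def chunk_voxel_size(distance, near_voxel=0.125, near_distance=8.0):
    """Voxel edge length for a cached SDF chunk at the given camera
    distance: finest resolution up close, halving detail per octave."""
    lod = max(0, math.floor(math.log2(max(distance, near_distance) / near_distance)))
    return near_voxel * (2 ** lod)
```

With these constants, anything within 8 units meshes at 0.125-unit voxels, 8-16 units at 0.25, 16-32 at 0.5, and so on — keeping total voxel count roughly constant per octave of distance, which is what makes huge worlds of SDF edits affordable.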
