You don't need bleeding-edge hardware or software either. The game I'm working on generates a new SDF for the scene every frame (using the GPU's fragment units to rasterize each object's distance data into a scratch buffer), then cone-traces through that SDF per pixel for realtime soft shadows and lighting. That performs just fine even on an old laptop from 2015.
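To make the cone-trace idea concrete, here's a minimal CPU sketch of the usual sphere-traced soft-shadow approximation: march from the shaded point toward the light, and track the smallest ratio of distance-field value to distance travelled, which approximates how much of the light cone is occluded. The scene SDF, point positions, and the cone-width constant `k` here are all illustrative assumptions, not the game's actual code.

```python
import math

def scene_sdf(x, y, z):
    # Toy scene (assumed for illustration): a 0.5-radius sphere
    # centered at (0, 1, 0) floating above the ground plane y = 0.
    sphere = math.sqrt(x * x + (y - 1.0) ** 2 + z * z) - 0.5
    plane = y
    return min(sphere, plane)

def soft_shadow(px, py, pz, lx, ly, lz, k=8.0, t_min=0.02, t_max=10.0):
    """March from point p along unit light direction l.

    At each step, k * d / t estimates how narrowly the ray missed
    geometry relative to how far it has travelled -- the closest
    near-miss along the ray sets the penumbra darkness."""
    shadow = 1.0
    t = t_min
    while t < t_max:
        d = scene_sdf(px + lx * t, py + ly * t, pz + lz * t)
        if d < 1e-4:
            return 0.0                   # ray hit geometry: full shadow
        shadow = min(shadow, k * d / t)  # cone-occlusion estimate
        t += d                           # sphere-trace step
    return shadow

# A point on the ground near the sphere's silhouette lands in the
# penumbra; a point far from the sphere is fully lit.
penumbra = soft_shadow(0.55, 0.01, 0.0, 0.0, 1.0, 0.0)
lit = soft_shadow(3.0, 0.01, 0.0, 0.0, 1.0, 0.0)
```

Larger `k` narrows the cone and sharpens the shadow edge; the game presumably tunes this per light.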
"I'm making a game engine based on dynamic signed distance fields (SDFs)"
That project targets the GPU, and it works by caching SDFs as marching-cubes meshes, at high resolution near the camera and low resolution far away, to build huge worlds out of arbitrary numbers of SDF edits. So it probably wouldn't stack with these CPU optimizations, which render the SDF directly, at all.
The demoscene is particularly insular, but even within computing in general there doesn't seem to be much knowledge diffusion between the different areas, which leads to reinventions (often under distinct terminology).
Those requirements, and the different jargon for them from the mainframe world, were rediscovered from the literature when virtualization became a selling point in the PC world.
(Edouard Bugnion et al., Hardware and Software Support for Virtualization)