The sensor used to see a 3D scene is 2D (an eye or a camera). What's being done here is simulating a 3D sensor (for a 4D world), then looking at that 3D sensor with our 2D sensors (eyes). I don't know if this is the common way of rendering these 4D physics simulations, but it's the first time I've heard it described this way. It's also why the game's narrative focuses on eyes, because that's what it's doing.
Not merely 2D + depth perspective.
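To make the "3D sensor" idea concrete, here's a minimal sketch (my own illustration, not the game's actual engine): for every cell of a 3D retina, cast a ray into 4D space and record the nearest hit. The scene here is a single 4D hypersphere, and the retina size, ray fan, and scene layout are all made up for demonstration.

```python
# Sketch: render a tiny 3D "retina" of a 4D scene (one hypersphere).
import math

def ray_hypersphere(origin, direction, center, radius):
    """Nearest positive t where origin + t*direction hits the hypersphere.
    direction is assumed unit-length; returns None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 0 else None

N = 8  # tiny 3D retina: N x N x N "pixels" (the game uses 96^3)
retina = [[[None] * N for _ in range(N)] for _ in range(N)]
for i in range(N):
    for j in range(N):
        for k in range(N):
            # Rays fan out in x/y/z, all looking down the +w axis.
            d = [(i - N / 2) / N, (j - N / 2) / N, (k - N / 2) / N, 1.0]
            norm = math.sqrt(sum(x * x for x in d))
            d = [x / norm for x in d]
            retina[i][j][k] = ray_hypersphere([0, 0, 0, 0], d,
                                              [0, 0, 0, 5], 1.0)
```

The intersection math is the same quadratic as the familiar 3D ray-sphere test, just with 4-component vectors; the result is a 3D grid of depths that you'd then have to view with your 2D eyes.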
(Not unlike how a seemingly 2-dimensional world of a top-down FPS is actually 3-dimensional, you just have to follow way more rules when it comes to moving in the third one.)
If this is 4D Doom, I wonder what 4D Quake could be.
Why? Well, apparently ants have 6 legs because this allows a tripod gait, a simple leg movement that always keeps 3 stable points on the ground [1].
In 4D, you'd need 4 points on the ground, hence a tetrapod gait (4+4 legs).
You could of course make do with fewer, I'd guess even as few as 1-2 if you have lots of muscles and good balance.
Awesome book regardless.
The only answer would seem to be an extra axis of rotation, but (a) that doesn't work well with existing input methods, and (b) it would be even more of a brain-breaker.
- all the dimensions are treated the same
- you only actually see two dimensions.
(it goes without saying that it's actually me who's confused.)
But in 4D there isn’t really an equivalent control, so it ends up feeling more like toggling something you don’t fully understand.
Ordinarily, a 3D scene rendered in 2D only allows you to see a cone from your eye up to the first surface the ray encounters, thus defining the 2D projection which you see.
But you can make the surfaces transparent so the ray continues, and each additional surface adds a bit to the final pixel. This can look like a mess if you stand still, but if you wiggle your movement left and right (or in any other direction), your brain suddenly manages to process it into the full 3D structure.
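The "each surface adds a bit" part is classic front-to-back alpha compositing along the ray. A minimal sketch (the surface colors and alphas here are made up for illustration):

```python
# Front-to-back alpha compositing along one ray: each transparent surface
# the ray crosses contributes a bit of its color to the final pixel.
def composite(surfaces):
    """surfaces: list of (color, alpha) hits, ordered front-to-back."""
    color, transmittance = 0.0, 1.0
    for c, a in surfaces:
        color += transmittance * a * c  # this surface's contribution
        transmittance *= (1.0 - a)      # light left for surfaces behind
    return color

# Three half-transparent surfaces along the ray:
print(round(composite([(1.0, 0.5), (0.5, 0.5), (0.2, 0.5)]), 3))  # -> 0.65
```

A fully opaque surface (alpha 1.0) drives the transmittance to zero, so everything behind it is hidden, which recovers the ordinary "stop at the first surface" rendering as a special case.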
Can something like this be done in 4D?
The advantage of this one is that it offers a stereoscopic (red/blue) view of the 3D retina. Not sure if this one does too.
Also, here's a CPU-based prototype that led to the Hyperhell engine:
Steam link: https://store.steampowered.com/app/2147950/4D_Golf
The person goes over quite a few technical details on their YouTube channel, though they talk about a bunch of other coding experiments too.
When I first saw the title, I assumed this game was implemented in the same engine. I believe there are a few already.
[1] https://store.steampowered.com/app/473950/Manifold_Garden/
Hey, did anyone play “Total Overdose”?
I guess taken to the logical extreme, what does the brain of someone/thing that's good at playing this (or any game of N dimensions) look like?
While there is definitely something to the plasticity of young brains, for example in language acquisition, or the fact that the Fields Medal eligibility ages out at 40, I believe it's not a linear thing and not a one-way thing.
My curiosity is if this is like you suggest, ingrained patterns, or if there is actual slow down with age. I hear different opinions and am finding it difficult to navigate as I deal with my own, albeit mild, aging.
I actually noticed serious mental decline when I was burned out in my late 20s. There were real physical symptoms, like not being able to look at a text editor for more than 2 minutes. After recovering from that, I actually feel like my brain recovered a lot once I started learning languages very seriously (Mandarin and Japanese), starting a few years ago. My brain feels healthy now, but I'm acutely aware of where it's not as sharp as before. Playing around with this felt a little like when my brain is trying to build a new grammar dictionary.
I managed to kill three enemies before succumbing to my fate.
[1] https://en.wikipedia.org/wiki/Miegakure
https://store.steampowered.com/app/355750/Miegakure_Hide__Re...
They haven't shown anything for like a decade.
[1] For this, check out zenorogue's work, btw.
Since the gameplay is so much about 4D, clarity in what you see becomes more important, and the extremely low resolution actually impairs the player rather than serving a positive purpose (the typical 'leaves more to the imagination' effect).
It wouldn't take much effort to double or triple the resolution, which I think would help the gameplay.
So it's as low-res as it is because it's a bunch of voxels simulating a 4D camera.
The dev put out an interesting video on the topic.
https://www.youtube.com/watch?v=tKDMcLW9OnI
I tried pretty hard to increase the rendering efficiency on consumer GPUs. The biggest issue is that the main view is actually a 96x96x96 grid of "pixels" (voxels, really). This makes scaling brutal: going up to 128x128x128 would more than double the total number of voxels, to roughly as many pixels as a 1920x1080 display. Doubling the grid resolution again to 256 gives about 16M voxels, which is roughly the same as two 4K displays. On top of that, simple 4D object meshes scale much worse in terms of tetrahedra than 3D objects do in triangles.
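The scaling arithmetic above checks out; here it is worked through explicitly (the 96^3 base resolution is from the comment, the display sizes are the standard 1080p and 4K pixel counts):

```python
# Voxel-count arithmetic for a cubic 3D "retina" grid.
def voxels(n):
    """Total cells in an n x n x n voxel grid."""
    return n ** 3

base = voxels(96)          # 884,736 voxels
full_hd = 1920 * 1080      # 2,073,600 pixels
two_4k = 2 * 3840 * 2160   # 16,588,800 pixels

print(voxels(128) / base)     # ~2.37x the 96^3 grid
print(voxels(128) / full_hd)  # ~1.01 -> roughly one 1080p display
print(voxels(256) / two_4k)   # ~1.01 -> roughly two 4K displays
```

Because the grid is cubic, cost grows with the cube of the side length, so even a modest-sounding bump in resolution multiplies the pixel-equivalent workload.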
A quick solution could be to give the user a few resolution options, so they can bravely test the limits of their hardware.
So I've just modified the engine to allow you to specify a custom resolution in the URL:
https://dugas.ch/hyperhell/levels/the_bargain.html?vox_resol...
(Higher resolutions might break rendering entirely if the acceleration structure no longer fits in the allowed memory. I was able to push it to 160x160x160 on my machines.)
I'll also try to think of other ways to make the rendering more efficient, maybe a BVH instead of my simpler grid-based acceleration structure? My background is not in computer graphics, so others here might have better ideas.