Hmmmm, I'm not sure if a rasterizer or a signed distance field approach would work better for 4D geometry.

The equivalent of a triangle in 4D is a tetrahedron. Defining the world out of thousands/millions of tetrahedrons would work, but 4D objects just have so much surface area. Take for example a cube in 3D. We can represent it with 12 triangles. Now look at a 4D hypercube. It needs 40 tetrahedrons to define its shape. This increased complexity is even more pronounced for other shapes. A sphere in 3D, for example, could use 320 triangles to look somewhat smooth, while a hypersphere would need ~3,370 tetrahedrons to look as smooth. Ignoring the performance hit of needing a custom tetrahedron rasterizer, the geometry itself would be a huge resource hog.
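The counts above fall out of how each shape's boundary decomposes. A quick arithmetic sketch (assuming the standard minimal split of a cube into 5 tetrahedrons):

```python
# A cube's boundary is 6 square faces, 2 triangles each -> 12 triangles.
# A tesseract's boundary is 8 cubic cells, and each cube splits into a
# minimum of 5 tetrahedrons -> 40 tetrahedrons.
CUBE_FACES = 6
TRIANGLES_PER_SQUARE = 2
TESSERACT_CELLS = 8
TETS_PER_CUBE = 5

cube_triangles = CUBE_FACES * TRIANGLES_PER_SQUARE   # 12
tesseract_tets = TESSERACT_CELLS * TETS_PER_CUBE     # 40
print(cube_triangles, tesseract_tets)
```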

Signed distance fields wouldn't have to deal with this. The math behind SDFs is mostly dimension-agnostic (vector lengths, mins and maxes of coordinates), so the same formulas work in any dimension, and simple objects like hypercubes or hyperspheres work directly as primitives. The hard part becomes optimizing the number of render voxels, not optimizing the scene.
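To show how directly the formulas carry over, here are 4D versions of the two primitives mentioned, sketched in Python with NumPy (function names are mine, not the engine's):

```python
import numpy as np

def sd_hypersphere(p, radius):
    # Signed distance from 4D point p to an origin-centered hypersphere.
    # Identical to the 3D sphere SDF; the norm just sums one more square.
    return np.linalg.norm(p) - radius

def sd_hypercube(p, half_extents):
    # Signed distance to an axis-aligned hypercube: the standard box SDF,
    # which is dimension-agnostic.
    q = np.abs(p) - half_extents
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)
```

Points outside give positive distances, points inside give negative ones, which is all a raymarcher needs.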

So I think I'll start out with an SDF approach since it seems more likely to succeed. Plus it'd be simpler for me to program.

Spent my break today working on this. That's a render of a 4D hypercube!!!

No shading atm and needs a lot of work to look better as a 3D image, but it's fully functional (as long as you don't rotate anything). The image above is a hypercube a few meters from the camera and a bit to the left. This makes the side face exposed, producing that partial pyramid extrusion on the right of the image. Thought it was a glitch when it first appeared.

The framerate is close to single digits but here's shading and a shadow. It's a hypersphere on top of a hypercube table. The image of the hypersphere dips into the image of the table because it's closer to the camera for that area. The image of the shadow is inside the table's image and is 3D, not a flat ellipse.
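For context, SDF rendering like this is usually done with sphere tracing: step along each ray by the distance to the nearest surface. A minimal dimension-agnostic sketch (my own, not the engine's code):

```python
import numpy as np

def raymarch(sdf, origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    # Sphere tracing: the SDF value is a safe step size, so we advance
    # by it until we're within eps of a surface (hit) or too far (miss).
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t          # hit: distance along the ray
        t += d
        if t > max_dist:
            break
    return None               # miss
```

The same loop works unchanged whether `origin` has three components or four, which is exactly why the SDF route sidesteps the tetrahedron explosion.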

Worked on this again last night, but I think it's time to put it on the shelf until I'm ready to dedicate a lot of time to it. The 4D engine is all set up and is ready to have a game built with it.

Graphics are currently way too slow and my test scene runs at 20 fps (without shadows). I'll need to really optimize the shader and use some hacks to get it running faster. Also playing the game in VR or at the very least in 3D is pretty important so increasing performance is my number 1 concern.

Rotations are working too. 4D is hard to think about because there are just so many ways to rotate and so few online articles giving specifics. I found a method to rotate a point by a 4D rotation and used that to engineer all the other functions a game needs.
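The "so many ways to rotate" comes from 4D rotations happening in planes rather than around axes: there are six independent rotation planes (xy, xz, xw, yz, yw, zw) instead of 3D's three axes. A minimal sketch of rotating a point in one plane (illustrative, not the engine's actual function):

```python
import numpy as np

def plane_rotation(i, j, angle):
    # 4x4 matrix rotating in the plane of axes i and j (0=x,1=y,2=z,3=w).
    # The six choices of (i, j) give the six independent 4D rotations.
    m = np.eye(4)
    c, s = np.cos(angle), np.sin(angle)
    m[i, i] = c
    m[j, j] = c
    m[i, j] = -s
    m[j, i] = s
    return m

# Rotate a point 90 degrees in the xw plane: x maps onto w.
p = plane_rotation(0, 3, np.pi / 2) @ np.array([1.0, 0.0, 0.0, 0.0])
```

Composing two of these matrices in different planes produces the general 4D rotations that make shapes look like they're morphing.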

With that I also set up a very basic physics engine. It is not even close to being capable of calculating true physics, but it's enough to keep objects from falling through the ground. All of my functions are set up using double precision floats since 4D geometry has more room for errors to build up.
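The "keep objects from falling through the ground" part can be as simple as clamping along the up axis after each integration step. A toy sketch (which axis is "up" is my assumption):

```python
def step(position, velocity, dt, gravity=-9.81, ground=0.0):
    # Minimal physics step for a 4D point: integrate gravity on the up
    # axis (index 1 here by assumption) and clamp so nothing sinks
    # below the ground hyperplane.
    velocity[1] += gravity * dt
    position = [p + v * dt for p, v in zip(position, velocity)]
    if position[1] < ground:
        position[1] = ground
        velocity[1] = 0.0
    return position, velocity
```

Nothing here is 4D-specific except that the lists have four components; that's part of why a basic engine is enough for now.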

Playing in 4D is interesting. Left/right, up/down, and forward/backward quickly map to your brain. Ana and kata kinda map too, but it's just a game mechanic you get used to. Rotations are similar but take more time to adjust to. Ana and kata rotations are really confusing and hard to understand. Things move around the screen in a predictable way, but the shapes objects make as they rotate feel random.

So the short of it is that I'm really excited by this and it's definitely possible, both to create and to play. I'm surprised how quickly my brain adapted to 4D geometry, so maybe I'll need a more complex game to go with it.

4D stuff is so cool to think about though since it just goes against everything you know about the world. Like it's impossible to tie a string into a knot in 4D since circular tubes don't fully constrict motion to 1 vector. Or that beings living in 4D space would almost always have circular symmetry since left/right and ana/kata are intermixable.

I'm prototyping a different rendering engine based on tetrahedron rasterization. Still a ways to go before it can visually compare to the SDF renderer, but it'll hopefully be a lot faster.

I'm targeting a 256*256*256 or higher pixel render image. Unlike the SDF renderer, the 4D image will be rendered into a static cube instead of a dynamic one oriented with the camera. The disadvantage is that depth resolution will be oversampled compared to x and y, but the advantages are that every pixel sampled is important, the framerate is not affected by zooming in or moving the camera, and the block grid can be viewed multiple times without needing to refresh it (i.e. in VR the 4D scene doesn't have to be rendered separately for each camera, and camera refresh could happen at 120fps even if the game runs at 30fps).
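The static-cube idea boils down to a fixed mapping from voxel index to world position, independent of the 3D camera. A sketch (the cube bounds here are illustrative assumptions):

```python
def voxel_center(i, j, k, grid=256, lo=-8.0, hi=8.0):
    # World-space center of voxel (i, j, k) in a static axis-aligned
    # render cube. Because this mapping never changes, a filled grid can
    # be re-viewed from any 3D camera angle without re-rendering the 4D
    # scene.
    size = (hi - lo) / grid
    return (lo + (i + 0.5) * size,
            lo + (j + 0.5) * size,
            lo + (k + 0.5) * size)
```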

The screenshot above is of a 32*32*32 grid. Once finished it should not look blocky. The colors are to distinguish separate tetrahedrons. One hypercube needs 40 tetrahedrons to fully cover its surface. This sample scene has 13 hypercubes for a total of 520 tetrahedrons, but backface culling alone can bring that down to 125.

I have 4D depth testing implemented and could now render accurate depth buffers of any 4D scene. Still a lot to go before I have materials and lighting, but it's a nice start. Currently it's rendering at 64*64*64 voxels and I'll stick with that for a while probably. One of these days I'll also need to work on visibility issues with this final "ghosty" look, but right now I'm focusing on the 4D side of the engine.
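4D depth testing works like the 3D version, just per voxel instead of per pixel: keep only the fragment nearest the 4D camera. A sketch (the buffer layout is my assumption):

```python
import numpy as np

def make_depth_buffer(n):
    # One depth value per voxel of the n^3 render grid, initialized to
    # "infinitely far" so the first fragment always wins.
    return np.full((n, n, n), np.inf)

def depth_test(buffer, i, j, k, depth):
    # Accept the fragment only if it is nearer the 4D camera than
    # whatever is already stored for this voxel.
    if depth < buffer[i, j, k]:
        buffer[i, j, k] = depth
        return True   # visible: write color, normal, etc.
    return False
```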

The four trees are all identical in shape and size within the 4D world. Their images look different depending on position and distance from the camera. Each is made up of 4 hypercubes since I'm not ready to attempt curves.

One thing I absolutely love is that natural objects in the 4D world are intuitive. Only a mathematician could see the shadow of a hypercube and recognize the shape, but most players will be able to recognize mountains, rivers, trees, clouds, humanoids, etc. just by exploring the world.

These little blocky trees look more or less like 3D trees, but at a higher resolution you'd notice some big differences. For example the 3D image of the tree would have branches randomly juke back in towards the trunk instead of primarily extending outward. Also every area of the top of the tree's image would be covered in leaves, even with some appearing inside wood. This is just an illusion of course: the tree in 4D is solid and can't have parts overlap.

The graphics engine is good enough for this prototype. Still need to add 3D textures, lighting, shadows, etc. but those will come once I've built the rest of the engine.

Right now I'm building my own 4D version of Unity's scene view. This will allow me to see what I'm working on and set up levels a lot faster. Also I'll need to build a 4D mesh editor into this since that's pretty important. Would be really awesome if I got this working in VR/3D.

Physics will be interesting. The old engine could do raycasting using the SDF, but this new engine won't be as simple. Maybe I could copy over VizionEck's physics engine and convert it to 4D. I could also modify it to support raycasting. I don't plan to need any detailed physics, but basic stuff is crucial just to even select objects by clicking on them.

I added a quick simple outline effect so that it's easy to see the bounds of the 4D rendered image. Also it's really really cool to move around the 4D world and treat it like a TV screen. The mouse wheel lets you zoom in and out of the 4D world. Holding the scroll wheel down and dragging pans the 4D camera left/right and ana/kata. With alt held down, it switches to pivot mode: clicking and dragging with the left mouse button rotates the 4D camera around a point in the left/right and ana/kata directions. Left/right feels kinda like normal rotating, but ana/kata is very weird.

I have a lot of my game engine classes partially set up. My 4D mesh class can calculate surface normals, so pretty soon I'm going to have to start messing with lighting in the graphics engine. First I'm going to get the renderer working with the data in the scene and get mesh editing working in the editor.
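Surface normals in 4D come from a generalization of the cross product: given a tetrahedron's three edge vectors, there is a unique direction (up to sign and scale) orthogonal to all of them, computable from cofactors. A sketch of that calculation (my own, not the mesh class's code):

```python
import numpy as np

def cross4(u, v, w):
    # Vector orthogonal to u, v, and w in R^4, via cofactor expansion of
    # det([x; u; v; w]) along its first row. This is the 4D analogue of
    # the 3D cross product: it turns a tetrahedron's three edge vectors
    # into its surface normal.
    m = np.array([u, v, w], dtype=float)
    return np.array([(-1) ** k * np.linalg.det(np.delete(m, k, axis=1))
                     for k in range(4)])
```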

Also optimization wise I'm feeling pretty good about hitting my performance target. A lot of this code is quickly thrown together yet I'm still getting great framerates. Biggest slowdown is from tetrahedrons getting too close to the 4D camera and covering up a lot of voxels. This stops the GPU from running in parallel since currently voxels per tetrahedron are operated on in sequence. At the moment I can literally increase performance by adding more tetrahedrons to an object's mesh.

Rendered around a million tetrahedrons without destroying the framerate too much, so that's nice.

I really do need to learn more about the rendering pipeline though. 98% of the slowdown with that many tetrahedrons was from the CPU trying to pass them off to the GPU. Also the GPU could probably handle 100 million tetrahedrons as long as they were small. My big resource hog on that front is the pixel size of tetrahedrons.

I don't like that smaller images are darker. It's like if the game was played in fog.

I essentially have a Minecraft map with 256x256x256 voxels and I need some way to let the player see every voxel. Like seeing every single underground block in Minecraft.

Option 1 is to break up the data and not present it as a regular image. This would be stuff like showing it as a bunch of side-by-side slices or using a space-filling curve to map every voxel to a screen pixel.

Pros:
- No loss of information
- All data easily visible

Cons:
- Requires teaching the brain how to use it
- Not natural or intuitive
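The space-filling-curve idea could look like Z-order (Morton) indexing, which interleaves the bits of the voxel coordinates into one linear index (a sketch):

```python
def morton_index(x, y, z, bits=8):
    # Interleave the bits of (x, y, z) into a single Z-order index,
    # mapping a 256^3 voxel grid (bits=8) onto one linear pixel ordering.
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (3 * b)
        idx |= ((y >> b) & 1) << (3 * b + 1)
        idx |= ((z >> b) & 1) << (3 * b + 2)
    return idx
```

Nearby voxels tend to stay nearby along the index, which is what would make the flattened image at least somewhat learnable.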

Option 2 is to make all the voxels transparent and slightly glowing so that you can just look at them.

Pros:
- Makes 4D really simple to understand

Cons:
- Screen can become a useless mess of color
- Needs to be in VR/3D or else you can't judge depth/ana and kata location
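Option 2 is essentially volume rendering: the transparent glowing voxels along each view ray get combined front to back with standard "over" compositing. A sketch for a single brightness channel:

```python
def composite_ray(samples):
    # Front-to-back alpha compositing of (brightness, alpha) pairs for
    # the voxels a view ray passes through. Earlier voxels occlude later
    # ones in proportion to their alpha, giving the glowing-glass look.
    out = 0.0
    transmittance = 1.0
    for brightness, alpha in samples:
        out += transmittance * alpha * brightness
        transmittance *= 1.0 - alpha
    return out
```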

Essentially it seems option 2 is great if you want to pause the game and move the 3D camera to check out the scene, but both options are poor if you're only moving the 4D camera.

I really hope I can figure out something better. Both can be supported, but I think an enhanced version of option 2 might be possible: give specific objects "solid" pixels. It would convey less information but would be even more intuitive and simple to understand.

I figure I'll just keep posting these mini updates to help with activity.

I have a "Find closest point on mesh" function set up which could be the backbone of my physics engine. It's slow and doesn't work on every mesh atm but it's a great start. Still haven't started on raycasting but I now have a better understanding of how it will work.
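For the raycasting part, one convenient fact is that a tetrahedron in 4D spans a full hyperplane, so a ray-tetrahedron test reduces to a single 4x4 linear solve plus a barycentric range check. A sketch of how it could work (my own derivation, not the engine's code):

```python
import numpy as np

def ray_tetrahedron(origin, direction, a, b, c, d):
    # Solve origin + t*direction = a + u*(b-a) + v*(c-a) + w*(d-a):
    # four equations in the four unknowns (t, u, v, w), since the
    # tetrahedron's three edge vectors plus the ray direction span R^4.
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    a, b, c, d = (np.asarray(v, float) for v in (a, b, c, d))
    m = np.column_stack([-direction, b - a, c - a, d - a])
    try:
        t, u, v, w = np.linalg.solve(m, origin - a)
    except np.linalg.LinAlgError:
        return None  # ray parallel to the tetrahedron's hyperplane
    if t >= 0 and u >= 0 and v >= 0 and w >= 0 and u + v + w <= 1:
        return t     # hit distance along the ray
    return None
```

The barycentric check at the end is what confines the hit to the tetrahedron itself rather than the whole hyperplane it sits in.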

I've also started to really think about 4D modeling. I need my own mesh editing software, but even just the geometry is interesting and needs to be sorted out.