I like to play games at high FOV and found that the standard rectilinear projection sucks. I always wondered why objects at the sides look bigger while the center is farther away. It bothered me even before Steam and GOG existed, and I finally found the answer.
The standard projection used in all games was never intended for wide-angle views, whereas the Panini projection was invented to be the best representation of your natural ~170º wide vision.

Wait until we have post-processing effects working properly, and then hopefully it will be possible to add custom ones as a user.

Have you even read what I posted? I'm not asking for post-processing; I'm asking for a different way to render the FOV output.
To use non-standard projections like Panini, Blinky first snaps multiple pictures around you to form a Globe of pixels. Then it projects all those pixels to the screen using a Lens.
I don't see how you are going to do this in a post-process, since you also have to render objects behind your view to cover the full FOV. This is not a post-process effect.
And I didn't like the tone of your reply; I posted something interesting and you shut it down.
Your reply shows me you didn't understand what you replied to. I'm still waiting for an explanation as to why this disgusts you.

It's implemented as a post-process. You render the scene using the normal rectilinear projection (with multiple view directions if required), then bind those render targets as textures and hand them to a post-process shader, which looks up colours at various positions in the rendered images to work out what colour each output pixel should be. That is exactly what any other post-process effect does: take the rendered-as-normal image as a texture and have a shader sample it at various positions to compute each output pixel. The only difference is that you may need extra renders if the FOV demands it, and reprojection isn't the only post-process effect that uses more than one render as input. Temporal AA techniques use the previous frame as an extra input, for example.
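To make that concrete, here's a rough Python sketch (not Blinky's actual code; all names are made up for illustration) of the per-pixel lookup such a post-process shader would do, for the classic d = 1 Panini projection and a single front-facing rectilinear source render:

```python
import math

def panini_to_ray(x, y):
    """Invert the d=1 Panini projection: map an output-image
    coordinate (x, y) to a unit view-ray direction, looking down +z."""
    phi = 2.0 * math.atan2(x, 2.0)       # azimuth, since x = 2*tan(phi/2)
    s = 2.0 / (1.0 + math.cos(phi))      # cylinder-to-plane scale at this azimuth
    theta = math.atan(y / s)             # elevation
    return (math.sin(phi) * math.cos(theta),
            math.sin(theta),
            math.cos(phi) * math.cos(theta))

def ray_to_rectilinear_uv(ray, fov_deg=90.0):
    """Project a view ray into a front-facing rectilinear render with the
    given horizontal FOV; returns (u, v) in [-1, 1], or None if the ray
    falls outside that render (which is where extra renders come in)."""
    dx, dy, dz = ray
    if dz <= 0.0:
        return None                      # behind the source camera
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    u, v = f * dx / dz, f * dy / dz
    if abs(u) > 1.0 or abs(v) > 1.0:
        return None                      # outside the source frustum
    return (u, v)
```

A real shader would run `panini_to_ray` per output pixel and sample whichever source render covers that ray.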

The comment about it disgusting me was supposed to be a tongue-in-cheek remark that if you've got a flat screen and have your monitor at a suitable distance for it to take up the same part of your IRL FOV as your in-game FOV and you look at the middle of it, rectilinear is exactly the right projection, and anything else isn't. This type of projection is designed to solve problems that don't need to exist if your game settings coincide with your chair placement.

You would have to have six rectilinear view projections: let's say four 90-degree projections around you, plus one up and one down, to create the final FOV. So you have to render everything 360 degrees around the player all the time.
Performance vs Quality: Blinky has to render 6 views per frame when using a Cube globe. So we provide lower poly globes as a way to balance quality and performance. Fewer renders means each view has to cover more area with less resolution.
So the post-process needs those six renders as its input.
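For reference, the per-pixel lookup that a post-process would do with six such renders is the standard cubemap face selection: pick the face by the ray's dominant axis, then divide through to get face coordinates. A rough Python sketch (the sign conventions for the face axes below are just one possible layout; real APIs each define their own):

```python
def cubemap_lookup(dx, dy, dz):
    """Pick which of the six 90-degree cube faces a view ray hits, and
    where on that face, as (face_name, u, v) with u, v in [-1, 1]."""
    ax, ay, az = abs(dx), abs(dy), abs(dz)
    if ax >= ay and ax >= az:                      # dominant axis: x
        face = '+x' if dx > 0 else '-x'
        u, v = (-dz if dx > 0 else dz) / ax, -dy / ax
    elif ay >= az:                                 # dominant axis: y
        face = '+y' if dy > 0 else '-y'
        u, v = dx / ay, (dz if dy > 0 else -dz) / ay
    else:                                          # dominant axis: z
        face = '+z' if dz > 0 else '-z'
        u, v = (dx if dz > 0 else -dx) / az, -dy / az
    return face, u, v
```

Because the dominant-axis division never exceeds 1, each face only ever has to cover a 90-degree square, which is why six of them tile the whole sphere.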

Yeah, this actually looks really interesting. Would this solve the flat view-distance fog effect, or is that something completely separate?

It's separate. It should already be possible to solve the flat fog by using the distance from the camera to the given pixel's world coordinates, rather than its viewport depth, though I don't think there's a way to enable that behavior currently.

It relies on a post-process with a particular render method. It essentially relies on rendering the scene to a cubemap (6 renders per frame), with each face using a square 90-degree FOV. A post-process shader then takes that cubemap and generates a 2D image to display on the screen, using whatever spherical projection method you want. Something like that was played around with once to generate spherical screenshots, but this requires doing it for each and every frame.

Personally, I see this kind of thing as an interesting feature to record spherical videos of gameplay (which essentially just uses a particular projection method to generate 2D video frames with a 360-degree FOV). Not the most useful thing, but it could be fun to play around with and show off.
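Spherical video frames are typically stored with the equirectangular projection (longitude across, latitude down, in a 2:1 frame). A minimal sketch of the ray-to-pixel mapping, with illustrative names:

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view ray to a pixel in an equirectangular (360x180)
    frame: longitude -> horizontal axis, latitude -> vertical axis."""
    lon = math.atan2(dx, dz)                   # [-pi, pi], 0 = straight ahead
    lat = math.asin(max(-1.0, min(1.0, dy)))   # [-pi/2, pi/2], clamped for safety
    u = (lon / math.pi + 1.0) * 0.5 * width    # left edge = directly behind
    v = (0.5 - lat / math.pi) * height         # top edge = straight up
    return u, v
```

Running this over every cubemap texel (or inverting it per output pixel) would give you one 360-degree video frame per game frame.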

Maybe Blinky is using a software renderer, but hardware renderers can only do rectilinear projection, even if you set up extra clip planes and discard statements so that your intermediate render targets are triangles. You also don't need to render the whole globe if your FOV isn't wide enough that you can see things behind you. I also don't understand how the argument that you have to have exactly six renders as a cubemap if it's implemented as a post-process holds any weight as it's factually wrong. You don't need recursion to generate a Panini projection from a triangle-based globe, so you can do it in a shader. It does obviously need to be a different shader (or a code path selected by a uniform) than if you were doing it with a cubemap, though.

The tl;dr of this is basically that the reprojection works by rendering the scene with a standard projection (with as many view directions as is required) and then performing an image-space operation on these renders to get an output image, and that's what a post-process effect is.

I also don't understand how the argument that you have to have exactly six renders as a cubemap if it's implemented as a post-process holds any weight as it's factually wrong.

You might be able to get away with fewer if your final output FOV is small enough. It also depends on whether you apply any rotation during projection (after rendering the faces).
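As a rough illustration of that trade-off (this deliberately ignores corner coverage and any roll applied after rendering, so treat it as a simplification rather than an exact rule; the function name is made up):

```python
def faces_needed(h_fov_deg, v_fov_deg):
    """Rough count of 90-degree cube faces needed, assuming no rotation
    is applied after rendering: front always; side faces once the
    horizontal FOV exceeds the front face; the back face beyond 270
    degrees; up/down once the vertical FOV exceeds 90 degrees."""
    faces = 1                    # front face always needed
    if h_fov_deg > 90.0:
        faces += 2               # left + right
    if h_fov_deg > 270.0:
        faces += 1               # back
    if v_fov_deg > 90.0:
        faces += 2               # up + down
    return faces
```

So a 170-degree horizontal Panini view would need three renders rather than six, and the full six are only required once you approach a complete sphere.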

The tl;dr of this is basically that the reprojection works by rendering the scene with a standard projection (with as many view directions as is required) and then performing an image-space operation on these renders to get an output image, and that's what a post-process effect is.

But the point is that it still requires a particular render setup (multiple renders with a fixed projection) that is then used as input to a specific post-process shader. There is a post-process component, but that post-process requires specific inputs that the engine has to provide. You can't just slap a Panini-projection post-process shader into a random 3D game and have it automatically work. It requires the game to know what the shader is and to provide the correct inputs and outputs for it.