I have been thinking a little more about the close wrap-around screen setup with TrackIR and a stretched periphery.

This could only work if one kept looking straight ahead. Unfortunately we scan with our eyes, and when we look to the sides the periphery gets projected straight into our eyes. Our brain immediately accepts this 'new' projection as correct. For this to work in a sim, it would need eye tracking as well as adaptive projection mapping synchronized with eye movement.

I also just happened across a video about visualising different projection systems in Quake. It uses a package known as "Blinky", and the project can be found on GitHub: https://github.com/shaunlebron/blinky

I think Tim wants to adapt the projection based on where you're looking. In that video you linked (which is awesome btw), you can see there is a section in the middle of the screen that's not distorted. I think Tim wants that section in the middle of your vision, i.e. wherever you're looking, even if that's a monitor edge or a side monitor. I don't think that would work. Even though eye tracking is quite feasible (I've even experimented with it), we do indeed scan all the time, and if the image is warped accordingly all the time, you'll get sick in minutes. Moreover, every time you wanted to look at something in your peripheral vision, its position would move as the projection was altered.

Combining it with TrackIR might be more feasible. The idea would be: if you turn your head enough that you're facing your left monitor, that monitor becomes the center of the projection and is rendered undistorted. That might work better, but it would still be weird I think. You're probably better off with a VR solution at that point. And I'll gladly settle for a 'static' non-linear projection until then.
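Just to make the head-yaw recentering idea concrete, here's a minimal sketch of the selection step. The three-monitor layout at -50°/0°/+50° and the TrackIR yaw input are hypothetical, not from any actual sim:

```python
def active_monitor(head_yaw_deg: float,
                   monitor_yaws_deg=(-50.0, 0.0, 50.0)) -> int:
    """Index of the monitor the head is facing; that monitor would become
    the undistorted center of the projection. The layout angles and the
    TrackIR yaw input here are hypothetical."""
    return min(range(len(monitor_yaws_deg)),
               key=lambda i: abs(head_yaw_deg - monitor_yaws_deg[i]))

print(active_monitor(-40.0))  # facing the left monitor -> 0
print(active_monitor(5.0))    # roughly centered -> 1
```

The weirdness I mentioned would come from the projection snapping as this index changes; any real implementation would presumably need to blend between centers rather than switch.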

Last edited by janjansen on Sat Apr 22, 2017 9:43 am, edited 1 time in total.

Thing is, I have no opinion on when V2 could or should ship. Like everyone else, I don't mind waiting and I'll see it when it happens; I'm not asking about the schedule or imploring them to add features that could delay it. I just wonder if they intend to support certain hardware configurations, as this might impact people's purchase decisions: things like a single large monitor vs multiple monitors, or even saving up for a VR set instead. The only assumption I'm making is that V2 will be released before a monitor bought today is obsolete or worn out.

janjansen wrote:I think Tim wants to adapt the projection based on where you're looking. In that video you linked (which is awesome btw), you can see there is a section in the middle of the screen that's not distorted.

Oh, I see. So the thinking would be to stay with the 'rectilinear projection' but reinterpret the center of that projection based on where the person is looking. I agree with you; I think that would be even more nauseating than simply looking at the stretched image. Of course, if multiple viewports were implemented then everything would be correct right where it is and there would be no need to adjust anything no matter where the person looked. Likewise, the same would be true in VR.
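For what it's worth, the multiple-viewports idea boils down to giving each monitor its own rectilinear camera. A rough sketch of how the per-monitor cameras would be parameterized (assuming identical monitors angled toward the viewer so each subtends the same horizontal angle at the eye; the helper and layout are made up for illustration, not any sim's actual API):

```python
def monitor_viewports(n_monitors: int, per_monitor_hfov_deg: float):
    """Yaw offset and horizontal FOV for each monitor's own rectilinear
    viewport, assuming identical monitors angled so each subtends
    per_monitor_hfov_deg at the viewer's eye (a hypothetical helper)."""
    mid = (n_monitors - 1) / 2
    return [((i - mid) * per_monitor_hfov_deg, per_monitor_hfov_deg)
            for i in range(n_monitors)]

# three monitors, each covering 50 degrees: view centers at -50, 0, +50
print(monitor_viewports(3, 50.0))
```

Since each viewport is rendered with the camera facing its own monitor, every monitor shows a geometrically correct image regardless of where the viewer looks, which is why nothing would need adjusting.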

And yeah, that video is good, but the 'demo' is a lot better. The GitHub page has pre-compiled binaries for you to download (saving you the need to compile anything yourself), so if you found the video interesting I'd recommend having a play with the demo. Personally, while the demonstrated 360° view was interesting to see, I found setting the FoV to 120° more useful, but your mileage may vary.

janjansen wrote:Clearly, but I found it impressive that 360° even worked, and appeared playable, on a single screen. 120 degrees on 3 screens would be easy then.

I have already seen it posted somewhere; maybe it was on another message board. But if the goal is to fight distortion, that is not the way to do it.

One must match the real and rendering FOVs, and the projection surface with the monitor surface, for the best result. Another option would be to use a slightly more complicated shape than a cylinder to compensate for mismatched FOVs, but a cylinder is good enough. I think it gives errors of only a few percent (when set up well).
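To make "matching real and rendering FOVs" concrete: the real FOV follows directly from monitor width and eye distance, and the residual error of a cylindrical render shown on a flat screen can be estimated in a couple of lines. This is a back-of-the-envelope sketch with hypothetical numbers, not taken from any sim:

```python
import math

def real_hfov_deg(width_cm: float, distance_cm: float) -> float:
    """Horizontal FOV (degrees) that a flat monitor subtends at the eye."""
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

def cylindrical_on_flat_error_deg(theta_deg: float, half_fov_deg: float) -> float:
    """Placement error (degrees) when a cylindrically rendered feature at
    view angle theta is displayed on a flat screen with the given half-FOV.
    Cylindrical rendering puts the feature at normalized screen position
    x = theta / half_fov; the eye sees that position at atan(x * tan(half_fov))."""
    t = math.radians(theta_deg)
    a = math.radians(half_fov_deg)
    perceived = math.atan((t / a) * math.tan(a))
    return math.degrees(perceived) - theta_deg

# hypothetical setup: a 60 cm wide monitor viewed from 50 cm away
print(round(real_hfov_deg(60, 50), 1))       # roughly 62 degrees
print(cylindrical_on_flat_error_deg(31, 31))  # zero at the screen edge
print(cylindrical_on_flat_error_deg(15, 31))  # peaks mid-screen, about a degree
```

The rendering FOV would then be set to whatever real_hfov_deg reports for your actual desk, which is the "matching" step; the mid-screen error is what the few-percent figure refers to.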

phercek wrote:One must match the real and rendering FOVs, and the projection surface with the monitor surface, for the best result. Another option would be to use a slightly more complicated shape than a cylinder to compensate for mismatched FOVs, but a cylinder is good enough. I think it gives errors of only a few percent (when set up well).

Indeed, and I'll be over the moon if V2 just supports either cylindrical mapping or multiple rectilinear viewport rendering, but it's interesting to see what could be done. Particularly since I don't see myself playing a flight sim in a VR headset for hours at a time any time soon, I suspect I'll still be using multiple monitors if/when V3 arrives. Then again, I'm not 100% sure all this can be done on current GPU hardware. Quake, I think, is rendered mostly (if not only) in software, which may explain why they chose it as the basis for that demo.