Hard to say. But both Seb & Voodoodx seemed impressed, so there are likely other gimmicks in place.

Yeah, but they both had limited time in a controlled setup. I wonder how it holds up in games where resolution in the distance matters. I guess we’ll see reviews popping up in December, when they start shipping to developers.

Just a couple of tips after talking with them: you need to apply using your company email address and provide links to your website and projects. They seem to be open to applications from outside the selected countries as long as you can give a US shipping address, such as a freight forwarder.

In my previous post I calculated 16% more pixels, but because of the extra subpixel you can multiply that by 1.5, giving 74% higher resolution in total. Yet with over 90% more FoV, that’s not going to be as sharp as the Vive Pro.
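
The arithmetic above can be sketched in a few lines. The figures used here are the rough ones quoted in this thread (16% more pixels, a 1.5× effective subpixel factor, ~90% more FoV), not official panel specs:

```cpp
#include <cassert>
#include <cmath>

// Total effective resolution gain: pixel-count gain compounded with the
// subpixel factor (e.g. full RGB stripe vs PenTile, roughly 1.5x).
double total_resolution_gain(double pixel_gain, double subpixel_factor) {
    return (1.0 + pixel_gain) * subpixel_factor - 1.0;  // fraction, e.g. 0.74
}

// Back-of-envelope density comparison: effective resolution gain divided by
// FoV gain. Below 1.0 means the extra pixels are spread thinner than before.
double density_ratio(double resolution_ratio, double fov_ratio) {
    return resolution_ratio / fov_ratio;
}
```

With the thread’s numbers, `total_resolution_gain(0.16, 1.5)` gives 0.74 (the 74% above), and `density_ratio(1.74, 1.90)` comes out below 1.0, which is exactly the “not as sharp despite more pixels” conclusion.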

What exactly is the horizontal FoV of the Vive Pro (or the O+) again? If you say 90% more FoV, you seem to be assuming something around 115° horizontally? I thought it was closer to 100°, but as ever I find it difficult to retrieve reliable data from the internet… Why am I asking? Because if the Vive Pro FoV is even smaller, the figure may well go to or beyond 100% more FoV, which means nothing good for the sharpness to be expected.

And which subpixel matrix / resolution combination would be preferable (ignoring the difference in GPU load for a moment), and why: more rendered pixels pushed to a screen that cannot display them fully (only every second R/B subpixel is shown), or fewer rendered pixels but with all rendered subpixels fully displayed? Which one will produce more detail?

And a bonus question: why isn’t the color completely fucked up as a result of not displaying all rendered pixels? The rendering engine seems to compensate for this known shortcoming. But does that then mean additional GPU workload for such calculations?
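
One way to frame the first question is to count addressable subpixels on each panel. The per-pixel counts below are generic layout assumptions (a full RGB stripe has 3 subpixels per pixel, a PenTile-style RGBG matrix effectively 2, since red and blue are shared), and the example resolutions are hypothetical, not the actual headset specs:

```cpp
#include <cassert>
#include <cstdint>

// Addressable subpixels on a panel: width x height pixels, each with
// `per_pixel` physically distinct subpixels (3 = RGB stripe, 2 = PenTile-ish).
uint64_t subpixel_count(uint64_t width, uint64_t height, int per_pixel) {
    return width * height * static_cast<uint64_t>(per_pixel);
}
```

For example, a hypothetical 2560×1440 full-stripe panel carries 2560·1440·3 ≈ 11.1M subpixels, while an even higher-resolution 2880×1620 PenTile panel carries only 2880·1620·2 ≈ 9.3M, so the lower “pixel” resolution can still resolve more detail per subpixel.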

The problem with short demos is that you don’t get to test the full set of features; you just get impressions. So I wouldn’t be surprised at all to learn that Sebastian simply did not focus on the level of visible detail, perhaps because the demos did not provide any highly detailed environments (Starbreeze will obviously present their baby in favourable conditions). But indeed there are a number of factors playing into this, so all we can do is wait for in-depth comparisons.

Since you mention it: what happened to my trial of the 8K? I guess you are mixing me up with somebody else.
From what I read now, it comes in pretty much as I experienced it at the time. In terms of sharpness, I said then that I felt I could see a bit more detail than with the Rift, but not dramatically more, as one might hope when just looking at the increase in resolution while ignoring the effect the increased FoV will have. The FoV and SDE were much improved, but not the detail. Still, that was two vital improvements (FoV, SDE) and one slight one (detail/sharpness) compared to my Rift and Vive, so I was and still am really happy about that. Only the 5K+ is even better, so I’ll take that one now.

As we know, there are certain games out there that support VR and are highly moddable. Let’s take Assetto Corsa for example:

Given that Assetto Corsa is highly moddable, and people have created ALL KINDS of things for the game that shouldn’t even be possible (rainy weather, as a recent example), could a modder, in theory, mod the game to take advantage of the features the StarVR One has to offer? Could a modder, for example, implement support for foveated rendering and make the game take advantage of the full 210° field of view?

Or is that simply impossible even for an experienced modder with knowledge of how VR works? What are your thoughts?

Modding isn’t typically the same as programming. However, there have been interesting multiplayer mods for GTA IV & Just Cause 2. So maybe, but IMHO probably not, as this is a major change you’re talking about.

Well, actually it is. The thing is that every executable compiles down to ‘machine language’, or ‘assembly language’, so even in compiled form you can load it into a programming environment (or better said: a debugger). However, there are several problems that complicate this compared to having the original source code:

Assembly language is extremely basic, which means that one instruction line in C++ could easily translate to 10 instructions in assembly language. So you have a HUGE chunk of code that somebody else wrote, without any notes at all, and finding out what is what can be really challenging.
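
To illustrate the blow-up: one innocuous C++ line can expand into many machine instructions. The assembly in the comments below is a rough hand-written sketch of what an unoptimized compiler might emit, not actual compiler output:

```cpp
#include <cassert>
#include <cstdint>

// One line of source: accumulate i*i for i in [1, n].
int64_t sum_squares(int64_t n) {
    int64_t s = 0;
    for (int64_t i = 1; i <= n; ++i) s += i * i;  // <-- this single line...
    return s;
}
// ...might unoptimized become something like (illustrative x86-64 sketch):
//   mov   rcx, 1          ; i = 1
// loop:
//   cmp   rcx, rdi        ; i <= n ?
//   jg    done
//   mov   rax, rcx
//   imul  rax, rcx        ; i * i
//   add   rsi, rax        ; s += i * i
//   inc   rcx             ; ++i
//   jmp   loop
// done:
```

A reverse engineer sees only the bottom half, with no variable names, no types, and no comments, for the entire game.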

A lot of the time, companies take steps to prevent modding, especially the kind of ‘modding’ people do to remove copy protection (= ‘cracking’). This can easily be a deal breaker.

Even if there’s no protection and you have figured out what is what, there’s another problem: you can’t simply ‘insert’ code the way you could with the source code, by just adding some instructions somewhere. After compilation everything is fixed to specific addresses, so every time you add code you have to do some ‘rerouting’ work.
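
A way to picture that ‘rerouting’: compiled code reaches functions through fixed addresses, much like slots in a call table. A mod can’t squeeze new instructions into the middle, but it can repoint a slot at replacement code, which then falls through to the original. This sketch models the idea with a plain function-pointer table; real patching operates on machine code, but the principle is the same, and all names here are made up:

```cpp
#include <cassert>

// The "game" calls everything through fixed slots, standing in for
// hard-coded call addresses in a compiled binary.
using RenderFn = int (*)(int);
static int original_render(int quality) { return quality; }
static RenderFn call_table[1] = { original_render };

// The mod's detour: run extra logic, then fall through to the original.
static RenderFn saved_original = nullptr;
static int detour_render(int quality) {
    return saved_original(quality * 2);  // e.g. double the render quality
}

// The "patch": repoint the slot instead of inserting instructions.
void install_detour() {
    saved_original = call_table[0];
    call_table[0] = detour_render;
}
```

After `install_detour()`, the game’s unchanged call through `call_table[0]` silently runs the mod’s code first, which is exactly why no instructions need to be inserted into the original body.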

So in short, modding is only feasible if there’s no copy protection, if everything was programmed in a way that makes sense, and if the needed modification isn’t too big. I have never done any VR programming, but maybe @OlivierJT can elaborate: how much change would generally be needed to switch a rendering algorithm to foveated rendering? Is there a way you could even ‘abstract’ this from the engine itself? Anyway, I suspect this is actually quite a big change to the algorithm. So yeah, in theory everything is possible, but in practice I think this is going to be really, really challenging.
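
To get a feel for why foveated rendering is attractive, and how deep it cuts into the renderer: the savings come from shading the periphery at a fraction of full density. The split below (a small central region at full resolution, the rest at quarter density) is a purely illustrative model, not any headset’s actual scheme:

```cpp
#include <cassert>
#include <cmath>

// Fraction of shading work remaining after foveation, for a frame where
// `center_frac` of the pixels stay at full density and the remainder is
// shaded at `periphery_scale` density. Illustrative model only.
double foveated_workload(double center_frac, double periphery_scale) {
    return center_frac + (1.0 - center_frac) * periphery_scale;
}
```

With 20% of the frame at full density and the rest at quarter density, `foveated_workload(0.2, 0.25)` gives 0.4, i.e. roughly 60% of the shading work saved. But the engine has to render the zones separately and composite them, which is why bolting this on from outside the engine is so hard.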

Not necessarily. You might be able to do it using a DirectX injector mod. Those let you mess with the graphics calls, without needing to mod the actual game.

But what that does is hook D3D functions; when the game calls those functions, it modifies the function parameters and then calls the original function with the modified parameters. You could indeed change the render resolution that way. But to have the game split up into different zones and then render them at different quality, I don’t think that’s possible with this approach.
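
Stripped of the D3D specifics, the injector pattern looks like this: the hook receives the game’s call, rewrites the parameters (say, a render-target size), and forwards them to the real function. All function names here are invented for illustration; they are not actual D3D calls:

```cpp
#include <cassert>

// Stand-in for a graphics API call the game makes (e.g. creating a
// render target). Records the size it was asked for.
static int last_width = 0, last_height = 0;
void real_create_target(int w, int h) { last_width = w; last_height = h; }

// The injector's hook: same signature, tweaks the parameters, then
// forwards the call to the real function.
void hooked_create_target(int w, int h) {
    real_create_target(w * 3 / 2, h * 3 / 2);  // e.g. 1.5x supersampling
}
```

Calling `hooked_create_target(1920, 1080)` creates a 2880×1620 target, which is exactly the “change render resolution” case above. But because the hook only ever sees one call and its parameters, it has no way to carve the frame into separately rendered zones.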

Honestly, I don’t know if it’s possible or not, but it might be possible to do something via multi-viewport rendering. The system is versatile. I’ve made my own tweaks to the Elite Dangerous mod, but I haven’t studied Helix in any detail.

Although this doesn’t necessarily mean anything for StarVR, it of course ties in with the recent reports of Acer setting a deadline for profitability, and it appears quite likely that one of Acer’s motivations was that Starbreeze was no longer capable of performing its part of the investment into StarVR, including taking its share of any losses StarVR is making.

The article I linked above also suspects that one of the issues might have been that investments into VR, like enabling VR for Payday, did not generate any significant revenue while adding to the expenses… hmm, I would have preferred to read other news, if I am honest.