Hi! After buying the Samsung Odyssey+, I’m now convinced that some kind of diffusion filter or equivalent processing must be done for the 5K+. The 8K’s SDE is greater than the OD+’s if I compare the screenshots, so only a hypothetical 8K+ would probably be better. Some of us here have already mentioned interest in this in other discussions. I would like to compile any ideas and attempts here.

I will do preliminary testing with my iPhone and iPad Pro screens and some VR lenses or equivalent optics.

2018-12-08
1: To put things in perspective, this experiment is not about strapping a 50-cent pouch onto my $1K VR headset.
2: On first approach, filters applied directly to the screen would be rejected because of the difficulty of applying them to the panels.
3: This could be applied even to the next generation of headsets to further improve SDE, if needed.
4: This mod will cost $$ and will probably need crowd support to achieve the required testing and selection of filters.

Been messing with some shrink wrap stretched over empty glasses frames. It definitely reduces the SDE with the Vive/Vive Pro but also blurs the image a bit; OK for viewing a film or pictures on the Vive Pro, but not FPS-type games.

I really don’t think a general diffusion filter is used, because the image doesn’t look blurry to me. I can still see individual pixels on a very white background; they’re just larger and more square-looking than on the original Odyssey.

I agree… I own both the original and the Plus Samsung Odyssey. I haven’t had much time to spend in VR since I got the Plus… but the Plus does appear more blurry to me. However, the Plus is more comfortable than the original. I haven’t fully made up my mind which one I prefer… I need a little more time, but as of right now I think I would prefer the Plus, with its comfort and its marginally improved controller tracking from having the Bluetooth built into the HMD.

EDIT… D’oh… I misread your comment… sorry… I see you said it in fact doesn’t look blurry… it does to me, for objects in the distance. Close objects, though, are clear. This is a comparison between the OG and the Plus only, though. The Plus is still clearer than other WMR HMDs I have tried. Figured I would add this edit instead of deleting my comment.

Indeed, but I fear that applying something at the lens level may introduce too much blurriness. Still, it would be interesting to leverage the lens holder that is supposed to be provided. I don’t think it’s available yet.

Something similar has been done before with passive 4K 3D TVs, where a different polarization filter was added to each row of pixels… from all accounts, it didn’t add much to the price of the TV. Now granted, the pixel density is much higher on a VR headset, but I think it would still be possible with precise equipment.

About the just-a-simple-sheet-of-diffusing-material-applied-directly-to-the-displays option, I can relate my own experiences with the Rift DK1 and the regular retail Vive.

So on the former I tried thin laminating pouches, back in the day, and, later, on the latter: matte smartphone screen protectors.

With the DK1 I was quite happy with the amount of diffusion; it really needed it. On the Vive, the material, although more transparent, was a little too diffuse relative to the resolution of the screen, so it made the pixels “bleed” into one another rather than just filling the space between them. In both cases we are in essence talking about intentional blurring; it’s just a matter of how much. Getting the amount right is somewhat harder in the latter case, given that the PenTile arrangement results in the screen effectively having two different resolutions: one for the green subpixels, and one a factor of sqrt(2) lower for the others.
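To make that two-resolutions point concrete, here is a quick back-of-envelope in Python. The panel dimensions are my own illustrative assumption (a 1440-wide PenTile panel, roughly Odyssey/Vive Pro class), not something stated above; the point is only the sqrt(2) relationship between the green grid and the red/blue grid.

```python
import math

# Assumed, illustrative panel width: 1440 logical pixels across.
# PenTile RGBG has one green subpixel per logical pixel, but only half as
# many red and blue ones on a diagonal grid, so the red/blue grid has a
# linear resolution lower by a factor of sqrt(2).
h_pixels = 1440

green_linear = h_pixels                     # green grid: full resolution
red_blue_linear = h_pixels / math.sqrt(2)   # red/blue grid: sqrt(2) lower

print(f"green subpixel grid:    {green_linear} across")
print(f"red/blue subpixel grid: {red_blue_linear:.0f} across")
```

So a diffuser tuned to fill the gaps of the green grid is already slightly too fine for the red/blue one, which is part of why one sheet can’t be perfect for both.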

The big problem in both cases is that the sheets had rather large granularity, which resulted in the chromatic aberration of each and every one of those grains casting a pinprick rainbow, in yet another static pattern, on top of the ones the Vive and Rift CV1 OLED screens already have.

The more “filled out” pixels also constituted a pattern of their own, so that it now looked like you had a wall of colour-shifting tiles in front of you, instead of a world out there, somewhat occluded by a chain-link fence. I am thinking here that the SDE may not be exclusively a bad thing, and that it does for the spatial side of perception much of what low persistence does for the temporal one. It is like how, if you draw a slanted line solid, one pixel thick, the jaggies are extremely apparent, but if you break it up by omitting every second pixel, your brain will fill in the intermediaries and more easily ignore the squareness of the pixels, taking the bitmap image as a proper line. In the same vein, with enough supersampling you perceive better resolution, as more detail is revealed with minute head movements, bringing it into view from behind the SDE’s chain links.
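The slanted-line trick is easy to see for yourself. Here is a toy Python sketch (my own illustration, not anything from the headsets themselves) that prints the same shallow line solid and with every second pixel omitted:

```python
# Toy illustration: a shallow slanted line, one "pixel" thick, drawn solid
# versus with every second pixel omitted.
W, H = 24, 6

def draw(skip_every_other):
    grid = [[' '] * W for _ in range(H)]
    for x in range(W):
        y = H - 1 - (x * H) // W        # shallow slant across the grid
        if skip_every_other and x % 2:
            continue                     # omit every second pixel
        grid[y][x] = '#'
    return '\n'.join(''.join(row) for row in grid)

print('solid:')
print(draw(False))
print('broken up:')
print(draw(True))
```

In the “broken up” version the staircase steps are much less assertive, even though half the line is literally missing; your eye joins the dots.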

…so it becomes a matter of tuning: you want the diffusing properties of the material matched to the DPI of the screen, and you want it as milkily smooth and homogeneous as possible.

Any turbid material is also going to cause some saturation and contrast loss, but much of that can be compensated for with image balancing.
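The tuning trade-off above can be sketched numerically. This is a one-dimensional toy model with made-up numbers (pitch, fill factor, and blur widths are all my assumptions): lit subpixel stripes separated by dark gaps, diffused by a filter modelled as a simple Gaussian blur. What matters is the ratio of blur width to pixel pitch.

```python
import numpy as np

# Made-up 1-D model: each "pixel" is 10 samples, of which 4 are lit;
# the other 6 are the dark screen-door gap.
pitch, fill, n_pixels = 10, 4, 8

signal = np.zeros(pitch * n_pixels)
for p in range(n_pixels):
    signal[p * pitch : p * pitch + fill] = 1.0

def diffuse(sig, sigma):
    # Model the diffusing sheet as a normalized Gaussian blur.
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(sig, kernel / kernel.sum(), mode='same')

matched = diffuse(signal, 2.0)   # fills the gaps; pixels stay distinct
too_much = diffuse(signal, 8.0)  # pixels "bleed" into one another

interior = slice(2 * pitch, -2 * pitch)   # ignore edge falloff
print("contrast, matched blur:", np.ptp(matched[interior]))
print("contrast, heavy blur:  ", np.ptp(too_much[interior]))
```

With the matched blur the dark gaps come up but the per-pixel peaks survive; with the heavy blur the profile flattens toward uniform grey, which is exactly the contrast loss (and the bleeding) described above.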

I’m guessing (and this is a pure guess but it’s what I would try if I had their resources) that Samsung has cunningly repurposed a technique normally used for CCD sensors in cameras.

These too have gaps between the light-sensitive areas, but for quite a long time now most sensors have included a microlens array directly above the sensor, so that light that would have fallen into these gaps gets refracted onto the light-sensitive areas instead.
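For a rough sense of why that matters, here is a back-of-envelope fill-factor calculation with made-up geometry (the cell and active-area sizes are purely illustrative, not from any real sensor or headset):

```python
# Made-up geometry: a square pixel cell with a smaller square active area.
cell_side = 10.0      # full pixel cell, arbitrary units
active_side = 6.0     # light-sensitive region inside the cell

# Without a microlens, only the active area catches light.
bare_fill = (active_side ** 2) / (cell_side ** 2)

# An ideal microlens funnels light from the whole cell onto the active
# area, so the effective fill factor approaches 1.
with_lens = 1.0

print(f"fill factor, bare:           {bare_fill:.2f}")   # 0.36
print(f"fill factor, ideal microlens: {with_lens:.2f}")
```

If Samsung is doing something analogous in emission rather than capture (spreading each subpixel’s light over its whole cell), that would fill the screen door without the grain problems of a diffusing sheet.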