There's nothing weird about a game engine rendering the 3D world at a different resolution than it presents to the window/screen, if that's what you guys are talking about.

Nope. We're talking about the inability to change the resolution of the app in general. There is only one screen resolution in the options... doesn't that strike you as odd? How could there be only one resolution option and have it work universally?

I'm pretty sure that we get a list of supported resolutions by querying the driver (either directly or via SDL) to get a list of supported framebuffer resolutions. As Android GPU drivers are typically hot garbage, I imagine that they don't give a full list of everything they could conceivably support, but instead just return the actual screen resolution as the only option.

Of course, it's possible that we don't actually do this on Android and do it some other way, but it works like this on Windows, OSX and Linux.

Even in the case where you can't change the resolution, it would still be helpful to, say, render the game's 3D world at 720p on a 1440p phone: rendering all those extra pixels in 3D is virtually pointless on such a tiny display (if you have anti-aliasing enabled), the upscaling/presentation step is virtually free, and you could still render the UI at the native resolution.

On a desktop machine, there's basically no additional impact from rendering at higher resolutions: the fragment shaders are so much simpler than in a typical application these days that the actual draw calls (and the associated cull traversal etc.) burn far more CPU time than the increased resolution burns GPU time. If Android devices have a similar CPU-to-GPU performance ratio and nothing weird starts happening on slower devices, it might not be any slower to render at native resolution anyway.

On a desktop machine, there's basically no additional impact from rendering at higher resolutions: the fragment shaders are so much simpler than in a typical application these days [...]

This is only true at normal resolutions like 1080p or 1440p. Eventually, at 4K, the sheer number of fragments starts to become a problem, e.g. for VRAM usage. It's also a problem on really, REALLY shitty graphics chipsets, like the one in my ancient laptop that I tried to run OpenMW on for my dad, where I'd want to render at 720p or lower but display at the native 900p or whatever just to raise the framerate from 30 to 40-50.

Not only that, but rendering at lower resolutions can still decrease latency even when the framerate is the same, because the time between input and display goes down if you shave five milliseconds off rendering. This is definitely the case with OpenMW right now: rendering frame A can happen while frame B is being simulated, and frame C doesn't start being simulated until frame B starts being rendered.

And what about desktops that only have integrated graphics but 1440p displays? You can't really make any assumptions about the relative performance of different parts of someone's computer when deciding what performance options to include.