I might actually need to use a uint128_t for my GUI sort index value. I'm using some "wasteful" order-index allocation to reduce resorting when objects are added/removed.

I probably won't need it for a game GUI, but an application GUI could bust 64 bits if every tree level carves 1 or 2 bits out of the key... Maybe I'll have to use linear assignment and flag some GUI tree levels as "spammy" to reserve space and limit rebuilding the whole render list.
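To make the bit-budget concrete, here's a minimal sketch of what I mean by per-level key allocation, with made-up names and a hypothetical 8 bits per level; lexicographic order of the packed keys then matches depth-first tree order, and leaving gaps between assigned slots is what lets you insert without resorting:

```java
// Hypothetical sketch: pack a per-level child slot into a 64-bit sort key,
// reserving a fixed number of bits per GUI tree level. With 8 bits per level,
// 64 bits run out after 8 levels -- hence the uint128_t temptation.
public final class SortKey {
    static final int BITS_PER_LEVEL = 8; // 256 slots per level

    // Append a child's slot under its parent's key; depth 0 uses the top
    // bits so comparing keys as longs yields depth-first draw order.
    // Slots can be handed out with gaps (e.g. multiples of 4) so later
    // insertions don't force renumbering siblings.
    static long childKey(long parentKey, int depth, int slot) {
        int shift = 64 - (depth + 1) * BITS_PER_LEVEL;
        if (shift < 0) throw new IllegalStateException("ran out of bits at depth " + depth);
        return parentKey | ((long) slot << shift);
    }
}
```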

I recently asked a question and have accepted an answer, but comments on both the question and answers have shown that my question could've been clearer.
I think that I could add notes that would address this and show why I picked the answer that I did, not least because of what I've learnt (e...

@GabrieleVierti yeah, I use STB_FreeType, which is nice as well. From looking at the code you linked they work pretty similarly. But for STB I get clean Java bindings, which I somehow can't find for FreeType :)

@Hemlata tbh no idea how Unity does it. The concepts refer to your game loop running either at a fixed step of n times per second or at a variable step (one that often just runs as fast as it can). But your profiling seems useful, so this doesn't really matter

i don't know Unity that well, sorry. But in general you want to reduce the amount of things that are drawn. How you achieve this is up to your game logic: maybe you draw things that other stuff is then drawn on top of, in which case you would want to do some occlusion culling (e.g. via raycasting) so you only draw whatever can actually be seen.

if you're worried that images you upload in this chat will be used by others, then you could e.g. use a site like GitHub to upload screenshots and accompany the repository with a BSD license or something else that at least forces people to credit you

Both work fine; the approach you'd want to take depends mainly on how you're already managing the batch, and on whether you are CPU- or GPU-bound.

If you're already updating every vertex in the batch every frame or so anyway, and your batches aren't huge, it's fine to just do it on the CPU.

It may be more scalable to do it on the GPU, since you leave room on the CPU to do other stuff and that kind of bulk transform is pretty easy for the GPU; but at the same time you do have to consider that you're spending time and resources updating that constant buffer instead. shrug

my batch has a reference to the shader program it uses, plus a list of textures & uniforms that it needs set, which it binds and unbinds before and after batching; then multiple batches are executed in order, with those using the same shader program run one after another, so I don't need to unload/load a new shader program too often

As I don't have too much going on so far, that works fine

I might make batches share a bigger structure / have a bigger class that organizes and manages its batches internally, so I can spread the textures between all 48 or so texture units I have available and need even less reloading
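The "group batches by shader program" idea above can be sketched like this; names are made up, and it only applies where the affected batches are order-independent (or already share the same sort key):

```java
import java.util.*;

// Hypothetical sketch: order batches so all batches sharing a program run
// back to back, so each program only needs to be bound once per group.
final class Batch {
    final int shaderProgramId; // handle of the program this batch needs
    Batch(int id) { this.shaderProgramId = id; }
}

final class Renderer {
    static List<Batch> sortByProgram(List<Batch> batches) {
        List<Batch> sorted = new ArrayList<>(batches);
        sorted.sort(Comparator.comparingInt((Batch b) -> b.shaderProgramId));
        return sorted;
    }

    // Count how many glUseProgram-style binds a given execution order needs.
    static int bindCount(List<Batch> order) {
        int binds = 0, current = -1;
        for (Batch b : order) {
            if (b.shaderProgramId != current) {
                binds++;
                current = b.shaderProgramId;
            }
        }
        return binds;
    }
}
```

Interleaved programs (1, 2, 1, 2) cost four binds; grouped they cost two, which is the whole benefit of executing same-program batches one after another.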

@dot_Sp0T We focused on mobile for the product b/c it's the only growth market in games for the foreseeable future (Also, the breakthrough re: the mechanics came out of seeing how games can be expressed on tablet ;)

but we're definitely open to collaborating with developers who want to produce PC and web versions! (We're a pure sweat-equity bootstrap)

@dot_Sp0T That sounds doable, especially as our developer is a Java-first guy, but he's got a lot on his plate with the mobile apps (we don't even have additional game modes/in-app purchases set up yet, and really need to get the game networked to build that online player community.)

That said, the goal is to eventually have a version for all computing systems!

@dot_Sp0T thanks for that. I coded the original prototype on iPad in a text editor in pure JS. Took me a couple of months b/c I was learning JS concurrently, but I'm guessing a pro developer could probably slap a PvP version together in under an hour, and attach some kind of ML function for PvAI.

We're avoiding Deep Learning for the game AI at present, as the emphasis is on both "dumb" and "smart" AIs that play like humans.

don't underestimate it. I'm always in awe when hearing or seeing that someone is 'creating game xyz in 2 hours'. But once you realize that they just slap together code other people wrote, it's not that awe-inspiring anymore

@dot_Sp0T the game is extraordinarily compact, but I hear you. Creating stable, production applications for iOS and Android was a different animal entirely. Sometimes I got frustrated at how long it took (we're all working nights), but I've never regretted bringing on a trained programmer for that part of the process.

(Also, my dev is a pretty gifted designer in his own right, so his contribution to the project has gone over and above just the programming.)

of course, as Product Manager, I have to fill in all the skill gaps, which has me spending a ridiculous amount of time doing graphics for the game and marketing.

@DMGregory I wonder how many different ways people can ask how to handle variable frame rates in a game loop. There are plenty of questions and answers (I found yours), but I couldn't find one that would be a proper duplicate of that latest one, mostly because they're all specific to the particular engine/libraries they're using :(

I avoid variable delta-time loops because it's too easy to make a mistake, and cumulative rounding errors are a pain. It's the same reason Unity uses a fixed time-step for some of the physics: things get too numerically unstable.

Having to calculate fractional powers for acceleration/deceleration to get the proper area under the curve gets too messy too quickly... It's doable, but unless it's a driving/flying simulator I don't think it's worth it
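The "fractional powers" point in a sketch: a per-frame multiplicative damping like `v *= 0.9` only behaves the same at every framerate if the exponent scales with the actual elapsed time. Names here are illustrative:

```java
// Framerate-independent damping under variable delta time.
public final class Damping {
    // dampingPerSecond = fraction of velocity kept per second
    // (e.g. 0.5 means half the velocity survives each second).
    static double damp(double v, double dampingPerSecond, double dtSeconds) {
        return v * Math.pow(dampingPerSecond, dtSeconds);
    }
}
```

Two half-second steps then give (up to rounding) the same result as one full-second step, which is exactly the consistency a naive `v *= k` per frame loses.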

@Jimmy I have a hybrid game loop where most of the stuff runs as fixed-step and I only do delta-time processing for inconsequential parts like graphical particle updates. That lightens the game-update load on the CPU while keeping things simple enough, and there's no state-change issue.

If there are any glitches they're only cosmetic. Particles that interact with the game are still processed as fixed-step.
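The hybrid loop described above can be sketched with the usual accumulator pattern; all names are stand-ins for the real update calls:

```java
// Hybrid loop: game logic advances in deterministic fixed steps via an
// accumulator, while cosmetic particles just get the raw frame delta.
public final class HybridLoop {
    static final double FIXED_DT = 1.0 / 60.0;
    double accumulator = 0.0;
    int logicSteps = 0;        // stand-in for fixed-step game/physics updates
    double particleTime = 0.0; // stand-in for variable-step particle updates

    void frame(double frameDt) {
        accumulator += frameDt;
        while (accumulator >= FIXED_DT) { // catch up in whole fixed steps
            logicSteps++;
            accumulator -= FIXED_DT;
        }
        particleTime += frameDt;          // cosmetic only, so drift is harmless
    }
}
```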

Yeah, you can get a perceptible judder that way (i.e. with pure FixedUpdate and no interpolation or delta-time fixups on anything), which can make it look like your framerate is dropping even when your game is running solidly.

There's not really a way to win otherwise: you're still left with the issue of SLI micro-stutter and double/triple-buffering lag, where the delta between displayed frames doesn't match the delta that was calculated, so your frames are displayed with the wrong time delta.

And add the deferred-rendering latency/jitter and the slight inaccuracies in the system timer... Going above 100fps often makes it look worse than sticking to 60 :(

Say the GPU can display a steady 100Hz and the game loop has no problem matching this, but the OS delays your thread a bit, so your frames are internally calculated as +10ms, +12ms, +8ms, +10ms. But because the GPU and CPU are so fast (rendering in less than 5ms), they had time to catch up over that slight hiccup, and the frames are actually still displayed as +10ms, +10ms, +10ms, +10ms without missing an actual frame.

So we effectively get a frame displayed 2ms too soon, followed by one 2ms too late.

Make that second one 4ms too late (I forgot to add the previous frame's error)

There's definitely no perfect solution, but even going from a ±8ms judder at 60fps with no interpolation down to that ±2ms judder is still a massive and perceptible improvement to the smoothness of motion.
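The interpolation being compared against is the standard trick of rendering a blend of the last two fixed-step states using the leftover accumulator fraction; a minimal sketch, with illustrative names:

```java
// Render-side interpolation between fixed simulation steps.
public final class Interpolation {
    // alpha = accumulator / FIXED_DT, in [0, 1): how far the render time
    // sits between the previous and current simulation states.
    static double renderPosition(double previous, double current, double alpha) {
        return previous + (current - previous) * alpha;
    }
}
```

This is what shrinks the visible error from a full fixed step (±8ms at 60fps with no interpolation) down to the residual display-timing error.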