Topic: How will the processing load be handled? (Read 2298 times)

With the procedurally generated world, how are you thinking of handling the processing load? Is the random generation going to be like minecraft, only with higher graphics, and I guess only loading in the map tile(s) that you're currently on?

(I know that may not have been fully worked out, just wondering what your initial ideas are)

Actually, this is one aspect that we've completely worked out. Minecraft has its challenges because of being a seamless world. However, that mostly makes sense in a 3D context, anyway. We're having a system of "chunks" that are like the areas you can walk around in in any classic SNES game: Zelda 3, Chrono Trigger, Final Fantasy IV or VI, Secret of Mana... I actually can't think of an RPG or adventure game from that time that doesn't use the mechanic.

Anyway, so in single player only one chunk is ever active at a time. The rest is all just sitting idle on disk. For "persistent" stuff that happens when you are not looking, that generally will happen via a "rapid aging" mechanism when you next enter a chunk. In multiplayer, assuming we manage that, then one chunk per player could be active at once, presuming no two players are in the same chunk (which of course they could be, so it could be as few as one chunk in memory because all the players are in it together). The chunk-processing load benefits from our experience doing extreme amounts of processing for AI War, so even doing 16+ chunks at once, if we had to, should be a trivial load. The actual network traffic is a much larger concern (if there's a problem, that's where it would be, and what would set the bounds on how many players could be in-game at once).
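A minimal sketch of the one-active-chunk approach might look like the following. All names here are hypothetical illustrations, not actual AVWW code; an in-memory dict stands in for the per-chunk files on disk:

```python
import time

class ChunkManager:
    """Keeps exactly one chunk live in memory; everything else idles on disk."""

    def __init__(self):
        self.disk = {}            # stand-in for per-chunk files on disk
        self.active_id = None
        self.active_chunk = None

    def enter(self, chunk_id):
        # Persist the chunk we're leaving before swapping it out.
        if self.active_id is not None:
            self.disk[self.active_id] = self.active_chunk
        now = time.time()
        chunk = self.disk.get(chunk_id, {"created": now, "last_visit": now})
        # Apply "what happened while you were away" on entry, not continuously.
        self.rapid_age(chunk, now - chunk["last_visit"])
        chunk["last_visit"] = now
        self.active_id, self.active_chunk = chunk_id, chunk

    def rapid_age(self, chunk, elapsed_seconds):
        # Placeholder: a real implementation would apply closed-form
        # aging rules here, scaled by the elapsed time.
        chunk["aged_by"] = elapsed_seconds
```

The key point the sketch captures is that the memory footprint is constant per player, regardless of how large the world grows.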

The game won't have savegames, but rather will have "world files" that get persisted to disk as you move around and play, when you exit, etc. So in that respect it's very similar to Minecraft. But the overall design of our game is much more hierarchical rather than freeform: the world map contains tiles that each link to a region, and that region is made up of chunks. When you first enter a new region, the game will pause for a brief second (depending on your CPU it might not even be noticeable) as it does a very rough first pass of generating the chunks, deciding on a few macro-level attributes, etc. Then as you enter each chunk for the first time, it goes through the more detailed chunk-building process that populates that one specific area. Each subsequent time you go into the region there is nothing more to do, but each time you go into a chunk, it goes through that rapid aging process.
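The two-pass hierarchy described above could be sketched like this (all names illustrative, and a trivial stand-in for the real generation logic):

```python
import random

class Region:
    """Rough pass for the whole region on first entry; detailed build
    for each chunk the first time the player walks into it."""

    def __init__(self, seed, num_chunks=100):
        self.rng = random.Random(seed)
        self.rough_done = False
        self.chunks = {}            # chunk_id -> macro-level attributes
        self.built = set()          # chunk_ids that have full detail
        self.num_chunks = num_chunks

    def enter_region(self):
        if self.rough_done:
            return                  # revisits cost nothing
        for cid in range(self.num_chunks):
            self.chunks[cid] = {"biome": self.rng.choice(["cave", "office", "field"])}
        self.rough_done = True

    def enter_chunk(self, cid):
        self.enter_region()
        if cid not in self.built:   # detailed build only on first visit
            self.chunks[cid]["tiles"] = [self.rng.randrange(4) for _ in range(16)]
            self.built.add(cid)
        return self.chunks[cid]
```

The one-time costs (rough region pass, detailed chunk build) each happen at a transition, which is where the brief pause described above would land.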

A chunk can be as small as a single floor of a little house, or it could be much larger. We're still experimenting with chunk sizes, but Zelda 3 is a good gauge for the rough sort of minimum size we're shooting for.

A region is pretty sizeable, actually: one region is probably larger than the entire overworld of Zelda 3. There would be dozens or even hundreds of chunks per region. To see how it gets to hundreds: if there are, for instance, 15 office buildings in a single region, and they all have 20 floors... and you can go anywhere you see... and each floor of each building is a chunk... that's 300 chunks right there for the office buildings alone. And that's before you even get into caves or the region underworld, etc.

In terms of the world map, as you move around, your view causes new regions to populate and get saved. Just the very barest of info is calculated at that stage: broad type of region, general difficulty, whether it connects to any other regions via tunnels, that sort of thing. That gets saved to disk, and the first time you enter any given region it uses that info to do the generation.
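One way to sketch that "barest of info" stage (hypothetical field names, not the actual format) is to derive the stub deterministically from the world seed and tile coordinates, so the same tile always produces the same stub no matter when it scrolls into view:

```python
import hashlib
import json

def region_stub(world_seed, tile_x, tile_y):
    """Derive a tiny region stub deterministically from seed + coordinates."""
    h = hashlib.sha256(f"{world_seed}:{tile_x}:{tile_y}".encode()).digest()
    return {
        "type": ["forest", "desert", "urban", "ice"][h[0] % 4],
        "difficulty": 1 + h[1] % 10,
        "tunnel_east": h[2] % 2 == 1,   # connects to the eastern neighbor?
    }

# The stub is tiny, so persisting it is just a few bytes of JSON per tile:
saved = json.dumps(region_stub(12345, 3, 7))
```

Because the stub is a pure function of seed and position, it costs almost nothing to compute at map-scroll time, and the heavy generation can be deferred until the region is actually entered.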

There are various other considerations with all of that, too, but that's the broad outline. It keeps CPU requirements to a minimum, and should keep "world building lag" constrained to just transitions into new regions and/or chunks that you've never been to before. And even that should be pretty slight in most cases. So far with our test chunk, it's less than half a second, but it's not fully populated yet. Anyway, that's how the world will be literally infinite. The only hardware limitation is the size of your hard disk, not the amount of CPU or RAM you have. I don't know how big the largest worlds might get in terms of hard disk space, but I'd be surprised if it was more than a couple of dozen megabytes in most cases, and a couple of hundred in the really extreme cases. If you figure that, compressed, an AI War galaxy fits into 1-8 megs in most cases, that's a pretty good gauge. Of course, having a split-by-chunk file structure will mean that the opportunities for compression will be more limited.

Apologies for any typos, wrote this on my phone.

Logged

Have ideas or bug reports for one of our games? Mantis for Suggestions and Bug Reports. Thanks for helping to make our games better!

Heh, now your "simulation of a simulation" comment from elsewhere makes more sense to me.

I'm really looking forward to this. The basic game systems you are describing sound like you can get into some great detail and procedural world building without crippling resource requirements. I can't wait to experience this in action.

Exactly. It's all done via "rapid aging" offscreen, in almost all cases. You play the game as an adventure game like any other, but the world changes around you as you're out adventuring, and as you do things that cause an impact to the world (saving NPCs, getting them to join a settlement, whatever).

Actually this "rapid aging" mechanic sounds a LOT like how blocks are handled in the city in Rogue Survivor. It will "simulate" what has happened in the frames when you were not there. You can limit this in the settings, since if you haven't been in a particular block in several in-game days, this simulation can take a LONG time, especially on large block sizes and population levels. So you can limit it to a set number of frames if you wish.

Sounds like a good trade-off between required RAM and "realism".

Yeah, sounds similar to Rogue Survivor, but also still very different. The way you describe it in Rogue Survivor is more the common approach, where you simply take an on-screen simulation and run it in fast-forward. Thus, depending on how intense your simulation is, your CPU and time requirements go up more and more depending on how long it's been since you were last there. With a game like AVWW, with lots of spaces and worlds that are hopefully played over a span of many months or years, that's a huge problem.

Linear processing time could lead to several years' worth of game time needing to be simulated quickly in the worst cases, and even shortcuts to make that somewhat faster would still not be feasible. What you'd wind up with is that the more you played the game in a single world, the slower things would get whenever you revisited old areas. Ugh.

So instead, our rapid aging processes only simulate that simulation (dream within a dream?). The idea is that whether it's been ten seconds or ten years, the formula is the same and only the inputs are different. Rather than having iterative simulations that are run over time, we have descriptive formulae with random components that take time as a variable.
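A toy example of such a descriptive formula (entirely hypothetical, not from the game) might be a seeded logistic-growth curve evaluated directly at the elapsed time, rather than stepped tick by tick:

```python
import math
import random

def aged_population(initial, capacity, growth_rate, elapsed, seed):
    """Closed-form 'simulation of a simulation': the same formula applies
    whether `elapsed` is ten seconds or ten years; only the input differs."""
    rng = random.Random(seed)
    # Logistic curve evaluated in one shot at time `elapsed`.
    grown = capacity / (1 + (capacity / initial - 1) * math.exp(-growth_rate * elapsed))
    # Seeded random component keeps results plausible rather than exact.
    return grown * rng.uniform(0.9, 1.1)
```

The cost of the call is constant no matter how much time has passed, which is exactly the property that makes long-lived worlds cheap to revisit.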

The output of the two methods, if you ran them side by side, would give very different specific results, which would be a problem for a game like AI War. But for a game like this, all that matters is that the results are convincingly realistic. In that sense, we are able to arrive at the right kind of result with the simulation of a simulation, while not reaching the exact same result. So the result might be "North Dakota" instead of "Idaho," but it won't be "Green" or "3," to give an analogy.

This approach really lets us remain flexible and keep the CPU needs from spiraling ever upwards, which is really something we want to avoid. Ideally, all the rapid aging, no matter how long it's been, will take one quarter of one second or less to do. Maybe in extreme cases, half a second. And honestly in most cases I expect 1-2ms or less. I can't imagine how much would have to change to even take 3ms, which is great because there's therefore a big gap between our goals and the reality of the thing (in our favor for once).

Interestingly, the difference between what Rogue Survivor is doing, and what we're doing, is that we're "unrolling the loop" in a discrete math sense. It's something I've used extensively in the past for database operations, to cut down multi-minute processes into a couple-of-seconds process. We'll see even greater gains here, because the upper bounds of the number of loop iterations is so vastly much higher, while the data set itself is so much smaller. Ideal candidate for loop unrolling!

Thank you for mentioning this. As a beginner in optimizing code, this is a great example of how to reduce processing/etc. time. I'll make sure to keep this in mind while coding my own things.

Anyway, I like what you've come up with. Very elegant solution. Believability is key, not necessarily "correctness" as far as the simulation is concerned. Like you said, as long as it's close the player won't care nor likely notice.