I just want to say I've been watching for years and I'm glad this is still being worked on!

As far as the economy goes, the only way to know for sure is a lot of AI testing.

See what the universe spits out, then check to see if you can patch up bad areas or need a new style of economics.

Economics is chaos, and there can't be a perfect system, or growth stops and it suffocates. It has to change; there have to be winners and losers to make it work.

Some things should have a massive effect on the game, and others should be hardly noticeable. And those things should change, seemingly at random, like a real economy. Don't be overly concerned with chaos. Chaos is capitalism... chaos is economics.

What I don't know is whether Limit Theory can be fun with no random events of any kind. Seriously, that's a Josh-level question: to what extent are random events (at any level from generating space station resources to stars exploding) necessary in Limit Theory to ensure that there's always something interesting/fun happening somewhere?

The whole game could run without a random number generator at all.
In this case the designer would need to provide all the parameters for the procedural generators, ultimately taking over the role of the random generator, with the difference that the designer can adapt the parameters to some aesthetic or gameplay goal.

I think a mix of the two is best here (having a number of hand-selected, beautiful systems and ship designs, mixed with parameters from a random number generator).

Like in Spore, where the planets and creatures were actually generated from a number of carefully handpicked parameters; e.g., someone (a woman who was giving a talk about it; I can't find the video) was filtering them for a well-balanced and visually pleasing output.

Some things should have a massive effect on the game, and others should be hardly noticeable. And those things should change, seemingly at random, like a real economy. Don't be overly concerned with chaos. Chaos is capitalism... chaos is economics.

I think I don't fully disagree with you here... but I wouldn't say that capitalism (or lesser socioeconomic organizing systems) is fully chaotic. If it were, it wouldn't work; a fully chaotic system could not be trusted by human participants to reliably deliver rewards for effort expended. Even if there are no guarantees of success on an individual basis, the system as a whole can be seen to be stable and comprehensible enough (i.e., not entirely random) that participation makes sense.

The whole game could run without a random number generator at all. In this case the designer would need to provide all the parameters for the procedural generators, ultimately taking over the role of the random generator, with the difference that the designer can adapt the parameters to some aesthetic or gameplay goal.

In the case of games with fully handcrafted content, this is exactly what happens, even if some low-level proceduralism is included.

But it seems pretty clear that Josh wants more procedural content generation systems in LT than that; the designer's creativity will go into the construction of random and procedural systems that together produce results that tend to be aesthetically pleasing and fun to play with.

Here I think I need to amend my earlier suggestion that there's no random production of input values between the start of universe generation and the moment when the player is turned loose in the generated world. I'm sticking with the statement as made that there are no truly random inputs... but of course the shape of a created universe depends on having a pseudorandom number generator that emits a reliable sequence of random-appearing numbers given the same seed value.

Randomness is used all over the place in the AI computations. Whenever the AI must make a choice, it uses a source of randomness. Every pseudorandom stream is deterministic based on the parameters that generated that stream ('seed').

This means that, if you use the same seed to generate the 'randomness' for the AI each time, the AI will make the same decisions. It does not mean that the AI didn't have a choice, only that the same choices will be made if the same stream of randomness is used.
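A minimal sketch of that idea in Python (the function and option names are hypothetical, just to illustrate seed-based determinism):

```python
import random

def ai_decisions(seed, options, n=5):
    """Make n 'AI choices' from a dedicated pseudorandom stream.

    The stream is fully determined by the seed, so the same seed
    always yields the same sequence of choices on every run.
    """
    rng = random.Random(seed)  # per-AI stream, independent of global state
    return [rng.choice(options) for _ in range(n)]

options = ["mine", "trade", "patrol", "raid"]
assert ai_decisions(42, options) == ai_decisions(42, options)  # same seed, same choices
# A different seed gives a different stream (and, almost always, different choices):
print(ai_decisions(42, options), ai_decisions(99, options))
```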

Furthermore, just because you know the entire random stream beforehand does not mean you can simply 'compute' the end result of a historical simulation directly. It's similar to an extremely complex integral that can't be evaluated analytically: the only way to get the answer is by discrete integration (which is really what a simulation is). There is no 'function' that can just give you the end result given a seed (well, unless that function performs the discrete integration itself).

So to summarize: yes, AI choices are deterministic with respect to the pseudorandom stream; no, that does not preclude free will / true decisions; and yes, we still have to run the (deterministic) simulation in order to generate the universe.
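The 'no shortcut' point can be illustrated with a toy stand-in for the simulation: the logistic map, a simple deterministic recurrence with no known practical closed form for its N-th state (this is an analogy, not LT's actual code):

```python
def simulate(x0, steps, r=3.9):
    """Iterate the logistic map x -> r*x*(1-x).

    Deterministic, like a seeded simulation: the same starting value
    always reaches the same end state. But for chaotic r (like 3.9)
    there is no known closed form for the state after N steps, so the
    only way to learn the end result is to perform every step -- the
    'discrete integration' described above.
    """
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

assert simulate(0.3, 1000) == simulate(0.3, 1000)  # same 'seed', same end state
```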

Something has to produce those values, otherwise there's nothing for the procedural algorithms to operate on. It's just that the generator can't actually be literally random, otherwise you couldn't get identical universes from the same starting seed. Fortunately, it's a lot easier to build pseudorandom number generators than ones that produce genuinely random numbers.

That said, there is a source of highly-randomized inputs that will be available once the game starts: the person playing Limit Theory. The moment each of us starts performing any actions at all within the game, it's pretty much the Butterfly Effect. Any in-game system that uses player choices for input values will be affected; the results of these systems then feed into other systems; and so on. From the moment we take our first action that's used by an LT system connected to other systems, the shape of our game universes will start to diverge. As Josh Himself put it:

Whenever the AI needs to make choices, it is using deterministic, seed-based randomness in the same way that the procedural algorithms do. The determinism lasts only up until the player enters the game. This means that, in theory, you will see the exact same universe & universe history given the same seeds each time you start the universe. But after that point, the universe will start to diverge based on the player's actions.

There are a couple of ways in which this might not happen: 1) Nothing the player does is used as an input to a system that feeds other systems, or 2) There's additional code that damps out the effect of player actions -- player choices are only allowed to have local/short-term effects. This would yield a game universe that's similar to how some time-travel stories are defined, where it's simply stated that "the universe doesn't like to be changed" and you really have to work hard to make meaningful changes stick.

It's not inconceivable that Josh might design LT to work like this. That could be interesting. But I suspect we're probably going to see a more dynamic system than that, where even small differences in player inputs add up over time to yield visibly different game universes.

And that's also interesting.

(For a couple of additional posts on pseudorandom number generation for LT, check out this one and this one from Josh, as well as this one where Josh briefly notes that the relative error inherent in his PRNG code is so low that it should never lead to differences in generated universes.)

There should be zero random events between the start and end of universe generation.

...and that specifically, which seemed to me to be potentially contradictory to Josh's talk of using random events for capitalisation of industry, etc., which would happen at the time of universe generation.

The moment that universe generation is complete, the same seed should produce identical universes. Only after the clock starts ticking -- the player starts playing -- should identical universes start to diverge as random events begin to occur.

That's precisely what I was expecting from LT, and which the economic discussions held a potential to derail, IMO.

There's a thought: what if there are zero random events? As long as players take zero action in the gameworld, this should mean that universes with identical seeds remain identical forever. Basically the old Cartesian definition of a clockwork universe.

That's definitely one of the paths I started thinking about in this whole theme, and TBH I think it would be impossible to tell if it happened.
I certainly think that even if it didn't happen this way, the only way you as a player could tell is if a visitor from another star system came to your location and tried to initiate an interaction with you, but a second run with the same seed failed to produce that.
It seems to me that there's a lot of value in this approach simply because of the efficiency of not doing anything behind the scenes until you need to.

What I don't know is whether Limit Theory can be fun with no random events of any kind. Seriously, that's a Josh-level question: to what extent are random events (at any level from generating space station resources to stars exploding) necessary in Limit Theory to ensure that there's always something interesting/fun happening somewhere?

I think after a player starts doing things (even travelling), random events are perfectly acceptable.
As I say, I think you'd never know if there weren't random events anyway due to time differences, and I can't see any reason why you'd want to simulate that if you couldn't tell as a player.

As there is no (nondeterministic) player influence during world generation, the same seed will always produce the same starting universe.
And it should do so as well when identical player actions are fed into the sim after the initial generation terminates.

Everything else is derived from the pseudorandom number stream, so why can't "random" events be as well?

Maybe we should be distinguishing between "pseudorandom number generator producing input values for procedural systems" and "Random Rare Big Events" as those have different content-generating purposes... but I think you're right that ultimately these do both depend on the output from the PRNG.

As long as the player does nothing whatsoever in a new game universe, every bit of content -- meaning both RRBEs and low-level input values feeding procedural systems -- comes from the same combination of starting seed + generative algorithms + time. As long as you don't have mods installed (so that everyone's generative algorithms are identical), and you run the game to the same point in time, then if you use the same seed you'll get exactly the same universe, RRBEs and all. Those, too, are produced by the generative algorithm "deciding" that it's time for some weirdness somewhere and using the next value(s) from the PRNG (which always produces results in the exact same sequence up to Josh's 2^-53 error limit) to create that RRBE.

It's only after you-the-player start doing things in your universe, which changes how the immutable sequence of pseudorandomly-generated numbers are applied in your universe from how that same sequence is applied in someone else's universe, that divergence starts to happen.
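A toy sketch of that divergence, assuming player actions consume values from the same pseudorandom stream (all names hypothetical):

```python
import random

def generate_events(seed, player_acts_at=None, n=6):
    """Draw n content values from a seeded stream.

    If the player acts, the simulation consumes an extra draw at that
    point (a hypothetical stand-in for 'player input feeding a system'),
    shifting how the rest of the immutable stream is applied.
    """
    rng = random.Random(seed)
    events = []
    for step in range(n):
        if step == player_acts_at:
            rng.random()  # player action consumes one value from the stream
        events.append(round(rng.random(), 3))
    return events

untouched_a = generate_events(seed=7)
untouched_b = generate_events(seed=7)
played = generate_events(seed=7, player_acts_at=3)

assert untouched_a == untouched_b     # no player input: identical universes
assert untouched_a[:3] == played[:3]  # identical right up to the action...
assert untouched_a[3:] != played[3:]  # ...then the universes diverge
```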

(Note: I guess it's mathematically/physically possible that a stray fleck of cosmic radiation could bonk part of the RAM being used to generate an LT universe, potentially resulting in a divergent game universe. Allow me to be the first to say I'm fine with Josh not coding error detection/correction routines to catch and fix such events.)


There's another thing to consider: at universe generation from an initial seed, I doubt that the whole infinite (or extremely large) universe is generated. It's more likely that only the (possibly quite large) vicinity of the start point is generated and simulated at various levels of detail.

As the player explores, new systems will be generated in the direction of travel and the back history simulated. With careful structuring of how the seed is used in the generation of each star system, it is feasible to have a deterministic physical structure of the universe.
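One common way to get that "careful structuring of how the seed is used" is to hash the universe seed together with each system's coordinates, so generation order doesn't matter. A hypothetical sketch:

```python
import hashlib
import random

def system_seed(universe_seed, coords):
    """Derive a stable per-star-system seed from the universe seed
    and the system's coordinates.

    Because each system's stream depends only on (universe_seed, coords),
    its physical structure is identical no matter when, or in what order,
    the player's exploration causes it to be generated.
    """
    key = f"{universe_seed}:{coords[0]}:{coords[1]}:{coords[2]}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def generate_system(universe_seed, coords):
    rng = random.Random(system_seed(universe_seed, coords))
    return {
        "planets": rng.randint(1, 12),
        "star_class": rng.choice("OBAFGKM"),
    }

# Visiting in a different order (or at a different time) changes nothing:
first_visit = generate_system(2024, (5, -3, 9))
later_visit = generate_system(2024, (5, -3, 9))
assert first_visit == later_visit
```

The history simulated within each system is a separate matter, as noted below; hashing only pins down the physical structure.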

However, the history is possibly going to be different, though similar.

Why?

Say that the first time I play, I set off in a direction and that leads to a new star system being generated that I visit after an hour of play but just pop in and scan it without doing much before leaving. Then I revisit it after a further 19 hours of play.

Then I start a new game with the same seed, and go in a different direction and eventually, after 20 hours of play, I visit the same system.

Likely, the social structure will not be the same, because in the first case the system was simulated in coarse detail for an hour, then in fine detail for a few minutes, and then in coarse detail for 19 hours. In the second case, the first time I enter the system, its history is generated as 20 hours of coarse detail.

One idea I'm wondering if you've considered for the AI is observational learning. This is of course one of the primary ways that humans learn -- we copy the behaviors of those around us. Your friend or sister or mother gets a degree in engineering and gets a great job, so you decide that you'll also major in engineering; a company in your city pumping out fidget toys is doing spectacularly well, so you decide to start a company that builds fidget toys too; a country on your planet that is a capitalist republic becomes wealthy and powerful, so you decide to turn your country into a capitalist republic.

Observational learning won't help with early decisions at the initial creation of a universe, because presumably there's no history and no observations to draw from, but later on in the evolution of a universe I'm wondering if you might see AI make reasonable/intelligent decisions more efficiently -- or perhaps just see it make better decisions period -- by making use of observational learning. For example, a fledgling AI faction might observe that a similarly sized AI faction in the solar system has increased its wealth/power by building lots of fighters and using those fighters to raid mining ships in a nearby asteroid field. The AI doesn't need to understand why building fighters or raiding is effective. It simply needs to observe a correlation between purchases and outcomes of those purchases, and/or actions and outcomes of those actions (did an observed faction's wealth/power decrease or increase following the purchase/action?).

In short, copying others' effective purchases/behaviors and avoiding others' ineffective purchases/behaviors should yield positive results, and should resemble something akin to intelligence.

The results of purchases and behaviors are of course ever-changing in a dynamic and evolving universe, such that, for example, all factions copying a wealthy pirate faction by building only fighters and using them to raid will result in increasingly smaller (and eventually negative) return on investment due to increasingly fewer mining ships, but this reflects the "tragedy of the commons" that happens all too often in our own world populated with self-interested human beings. And as soon as it's observed that mining has become more profitable than raiding, mining will suddenly boom and pirating will plummet. But then before long, pirating will become very profitable again. And so the cycle continues.
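A minimal sketch of that copy-what-worked rule, with hypothetical names (factions imitate the action with the best observed average outcome, plus a little exploration so novel behavior can still appear):

```python
import random

def choose_action(observations, actions, rng, explore=0.1):
    """Pick an action by copying what worked for observed factions.

    observations: list of (action, outcome) pairs seen in other factions,
    where outcome is the change in wealth/power after the action.
    The faction doesn't model *why* an action works; it just imitates
    the action with the best observed average outcome, occasionally
    trying something random instead.
    """
    if not observations or rng.random() < explore:
        return rng.choice(actions)  # no history yet, or deliberate novelty
    totals = {}
    for action, outcome in observations:
        totals.setdefault(action, []).append(outcome)
    return max(totals, key=lambda a: sum(totals[a]) / len(totals[a]))

rng = random.Random(1)
observed = [("raid", +30), ("raid", +10), ("mine", +5), ("trade", -5)]
# With these observations, pure imitation favors raiding:
assert choose_action(observed, ["mine", "trade", "raid"], rng, explore=0.0) == "raid"
```

As the post notes, the feedback loop then self-corrects: once everyone raids and mining ships disappear, observed raid outcomes turn negative and the same rule steers factions back toward mining.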

I imagine that the degree to which observational learning is used by a given faction could reflect one element of its AI personality. Some factions may be strongly inclined towards incorporating observational learning into decision making whereas others may be less inclined. I would imagine that in all cases, the more similar a faction is to another in certain important ways (certainly size and wealth among other factors), the more likely a faction would be to learn from it; building aircraft carriers might make sense for the U.S. and China, but it wouldn't make much sense for Argentina to look at the U.S. as an example and try to build carriers too. It's also probably true that newer observations would have to have more weight than older ones.

I imagine a universe at its start would be pure chaos. Factions would have to make decisions using very basic reasoning or by simply making random decisions and observing the results. But in a universe with observational learning, I imagine that over time not only would individual factions learn from their own experiences and become more intelligent, but the universe as a whole would become more intelligent. A person born into the world today benefits from the accumulated knowledge and experience of their parents (and/or caregivers, friends, coworkers, etc.), who benefited from the knowledge and experience that was accumulated by their own parents, and their parents' parents, and so on. Likewise, a mature universe would be inhabited by AI factions making decisions guided by thousands of years of experience accumulated by not just their own faction, but the accumulated experiences of all the factions they know, who in turn have been educated by all the factions known to them. An old universe would hence be a very different kind of experience than a young one, and even new factions in an old universe would be wise beyond their years.

One of the interesting consequences of such a universe, inhabited by observational learners, is that it would matter who you know and what you know. A faction that has explored more and has more knowledge of its enemies and allies would be likely to make better decisions, because it's aware of more of the follies and successes of more factions (i.e., can choose the strongest positive correlations among a larger pool of known correlations). Even still, factions in a mature universe may get into behavioral ruts, just as older people can sometimes get set in their ways. Sufficiently novel behavior may be very effective in a universe where the behaviors of many factions have converged into predictable patterns. When old wisdom fails, ancient empires may fall and give way to a new universal order. (translation: learning history could be a weakness rather than an advantage)

Another interesting consequence is that one imagines different regions of the universe having certain flavors that would develop due to factions copying the behaviors of other factions that exist in their local region of space. I think this could make for a pretty interesting universe.

What do you think? Is implementing observational learning feasible? Would it be worthwhile?

That is a very interesting idea, and after reading your post I'm struck by the following thought. It might be a bit too computationally expensive to add your idea to what Josh is presently doing with AI, and I don't think it's really possible to change his mind about how the AI works at the moment. Having said that, I think there is a way this could be really cool and not too computationally problematic. It shouldn't be too hard to have a separate AI that only copies: they see someone of a similar size building a station and they do it too, no questions asked about whether it's a good idea. Of course they know their own assets and can work out whether something is possible, but no time is wasted on whether it's actually a good idea; only copying. Now, if what Josh is creating and these copycats could be combined in the same world, I think some interesting emergent properties might come out. This article by Tim Urban is a very interesting read on this topic.

A life well lived only happens once. Seems deep until you think about it.

I feel like they're making things harder than they need to be when it comes to posting dev logs. If it is supposed to be a 2 week update schedule (I remember Josh saying there would be weekly updates, but I could be mistaken), then each of them only needs to write an update every 6 weeks. That should be plenty of time to write a short "here's what I've been doing, here's what I plan to do" post.

Four, actually, since Lindsey left the team. I'm not sure if Josh ever actually wanted to do them weekly. I know Lindsey did, but I advised to try to keep it to a once-every-two-weeks schedule. Unfortunately even that seems to be a bit difficult to keep track of. It shouldn't be, though. They really don't need to do the long ones, just short updates.

Wait, Lindsey left? Why? Is it something you can say, or did she simply get another job or something? Is it going to impact development too?