
There are so many variables to account for that it's hard to peg down a guesstimate, but I'd say that your average asset doesn't need more than 1-2 million tris, which will be 30-45 meg, and will only need two large material textures, the diffuse and roughness (and a metallic map, which doesn't have to be 4k, and only needs to be a 1bpp image if used properly, so they can be tiny).

It's hard to say how large the materials will be, since that can vary wildly depending upon the image format. Assuming the best of the best here, we'll say they're using .exr files for maximum quality. I downloaded a megascan to have something solid to work with, and a stack of 4k .exr textures (diffuse, roughness, displacement, and normal) can land you in the ballpark of 200 meg. Most environment assets are, maybe, around 1500-3000 tris, and for an .fbx file, that's less than a meg. We'll roll with 200 meg for a single contemporary asset for the sake of conversation.

Now a new UE5 asset won't need a displacement or normal map, so we'll cut off half of its texture budget. 100 meg for the diffuse and roughness. So you're looking at, all together, maybe 145 meg or so for a single asset?
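To make that arithmetic explicit (every number here is just the rough estimate from above, not a measured fact):

```python
# Back-of-envelope UE5 asset size, using the thread's assumed numbers:
# a stack of four 4k .exr maps is ~200 meg, so call it ~50 meg per map.

def stack_mb(maps, mb_per_map=50):
    """Rough texture stack size: ~50 MB per 4k .exr map (assumption)."""
    return len(maps) * mb_per_map

full_stack = stack_mb(["diffuse", "roughness", "displacement", "normal"])  # ~200 MB
ue5_stack = stack_mb(["diffuse", "roughness"])                             # ~100 MB

mesh_mb = 45  # high end of the 30-45 meg guess for a 1-2M tri mesh
total = ue5_stack + mesh_mb
print(total)  # ~145 MB for a single asset
```

Obviously real sizes swing wildly with resolution and format; this is just the spitball laid out as sums.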

Keep in mind that I'm only talking about static environment meshes here. Anything that's animated will be, well, probably exactly the same as what we're using now, since even the most miraculously well optimized engine in the world can't make shifting around massive amounts of vertices any less CPU intensive. Character models and whatnot will still be baked.

It's hard to say anything for a fact, but from spitballing, it doesn't seem to make a vast amount of difference, it may even be slightly better.

Looking at the export settings in Quixel Bridge, it seems my only other option is compressed .jpg files. Downloading the same material in that format gives me a 25 meg stack. Yeah, that's quite a bit of difference.

Epic mentioned that they didn't really optimise it hard or target a high resolution, so that demo's at 30 FPS and (a possibly dynamic resolution) 1440p. And sure, not many AAA games are going to shove that kind of asset detail in there, but if there's one thing we do know, it's that they'll find a way to use available space anyway.

Renz: think you worked a little too hard on that. We're all spitballing here and it could land on either side of anything since we don't have all the variables, but the real constraints are going to be if - as all signs are pointing to - retail is still the marketing priority. Then it comes down to Blu-ray disc capacity and how many they can stuff into those jewel cases. Triple-layer BD seems to come out at 100 GB and quad at 128 GB, but I don't think I've seen one of those yet.
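Just for fun, the disc math on those numbers (145 meg per asset from earlier, decimal gigabytes, and pretending the whole disc is free for environment assets, which it obviously isn't):

```python
# How many ~145 MB assets fit on a Blu-ray? Pure arithmetic on the
# thread's own guesses; real discs also hold code, audio, video, etc.

ASSET_MB = 145

def assets_per_disc(disc_gb):
    """Whole assets that fit on a disc of disc_gb (decimal) gigabytes."""
    return (disc_gb * 1000) // ASSET_MB

print(assets_per_disc(100))  # triple-layer BD
print(assets_per_disc(128))  # quad-layer BD
```

So a few hundred unique assets per disc at that fidelity, before you've shipped a single line of code.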

How many AAA games will have the quality of this tech demo (which was probably polished and optimised up the wazoo) as a baseline, though?

You ask how many.
Another question you should also ask in this context: when?

My guess is that all AAA games will have the quality of this tech demo. And cheaper games too. But it might take 10 years. Or more. But imho there is no question that this will be the standard for games at some point in time. And we'll go beyond that. And beyond that. Until we can't make transistors smaller anymore.

When I was a little kid, Pong was released. Seeing that was almost magic. How was that possible?
In my first job (1984), I worked with a computer that did the administration for 3 companies, one pretty large one (an employment agency). 12 people using the system constantly. It was a Data General mini-computer (the size of 3 fridges). It had 512KB RAM. It had 2 HDDs, both the size of a fridge too. One was 200MB and the other was 100MB. Having a 1TB SSD in my system doesn't surprise me anymore (I've had it for 5 years now). So having a 10TB or 100TB SSD doesn't seem impossible to me at all. We only have to wait.

Megabyte, Gigabyte, Terabyte, Petabyte, Exabyte. Who cares? We'll get there one day.
I'm looking forward to being 80 years old, and playing games with unbelievable eyecandy.
Seeing stuff like this UE5 demo makes me happy.

It's hard to say how large the materials will be, since that can vary wildly depending upon the image format.

The engine compresses the lossless input you provide on import, usually to one of the DXT/BC modes. Roughness, metallic, and AO maps are all grayscale; they don't need to be separate textures, so typically they're packed into one image.
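A quick sketch of what that packing looks like - the map contents and sizes here are made up for illustration, but the idea is just putting each grayscale map into one channel of a single RGB image:

```python
import numpy as np

def pack_rma(roughness, metallic, ao):
    """Stack three single-channel maps into one H x W x 3 image,
    so roughness lives in R, metallic in G, and AO in B."""
    return np.stack([roughness, metallic, ao], axis=-1)

# Tiny placeholder maps standing in for real 4k textures.
h, w = 4, 4
roughness = np.full((h, w), 200, dtype=np.uint8)  # mostly rough surface
metallic = np.zeros((h, w), dtype=np.uint8)       # non-metal
ao = np.full((h, w), 255, dtype=np.uint8)         # fully unoccluded

packed = pack_rma(roughness, metallic, ao)
print(packed.shape)           # (4, 4, 3)
print(packed[0, 0].tolist())  # [200, 0, 255]
```

The shader then samples one texture and reads the channel it needs, instead of binding three.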

All in all, you can actually simplify that to one question really: will the high poly model offset the disk / memory cost of a high-res RGB normalmap?

And the answer is: yes, provided the model is around several hundred K polys, and not around 1M or above.
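Napkin math behind that threshold - every constant here is an assumption for illustration (a BC5-compressed 4k normal map at roughly 1 byte per pixel, uncompressed 32-byte vertices, and shared vertices in a closed mesh), not an engine fact:

```python
# Rough breakeven: how many extra tris does dropping a 4k normal map buy?

MAP_BYTES = 4096 * 4096  # 4k normal map, BC5-compressed (~1 byte/pixel)
BYTES_PER_VERT = 32      # position (12) + normal (12) + UV (8), uncompressed
VERTS_PER_TRI = 0.5      # shared verts in a typical closed mesh

breakeven_tris = MAP_BYTES / (BYTES_PER_VERT * VERTS_PER_TRI)
print(int(breakeven_tris))  # ~1 million tris
```

Which lands right around that 1M mark: a few hundred K extra polys comes in cheaper than the map, and past a million it doesn't.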

You ask how many.
Another question you should also ask in this context: when?

My guess is that all AAA games will have the quality of this tech demo. And cheaper games too. But it might take 10 years. Or more. But imho there is no question that this will be the standard for games at some point in time. And we'll go beyond that. And beyond that. Until we can't make transistors smaller anymore.

Of course. I was not doubting that games will eventually get to this point, just the feasibility of having tech of this level in the next generation of AAA games. And sure, I'm as big of a fan of technological progress as anyone. Isn't it absolutely amazing that in just a few decades you go from a 5 MB hard drive that's bigger than an industrial freezer and weighs over a ton to a several TB hard drive that fits inside your phone?

That said, though, I find that quite a lot of people spend so much time obsessing over whether they could that they never stop to ask whether they should. LA Noire came out nearly a decade ago. How many games are there that use their face tech? Exactly one -- LA Noire. Turns out that not many studios can afford to do what Team Bondi did. Hell, Team Bondi couldn't afford to do what Team Bondi did. Even with one of the richest game companies in the world backing them. And Team Bondi's motion tech might have been an outlier, but it seems that generally the cost of asset creation has absolutely exploded with the advance of more realistic graphics. Top line games now take nearly a decade (or more *cough*Star Citizen*cough*) and hundreds of people to make. Turns out that it doesn't really scale all that well with technological advancements.

And what is it all for? More linear scripted cinematic experiences? More pressure to play it as safe as possible? Huge teams that are more and more removed from the art they create? More perma-crunch and abusive working conditions? Games that have to sell millions upon millions of copies to justify a sequel? Entire genres becoming unprofitable and forced out of the AAA space as expectations and costs go up?

The way I see it, at the end of the day it's a very simple equation -- the more time and resources you need to make a game look good, the less time and resources you'll have to make the rest of it good. I don't think it's a coincidence that two of my all time favourite games (Thief and Dark Souls) already looked incredibly dated when they came out. Personally, though, I'll take that trade-off any day.

The Team Bondi example may not be the best one. Their face scanning tech was really odd because while that bank of cameras capturing facial motion and detail was really cool, it also needed people to be stuck in position for the capture to work, which meant they couldn't move while talking, so they had to stick that performance on top of separate full-body mocap. And the end result wasn't even that good! It looked like they'd rotoscoped faces onto blotchy pudding on top of a stiff polygonal model. Compare that with what Ninja Theory did for Hellblade with its single camera motion capture (tech they made specifically to work with the low budget they had), and it's no contest which one looks better. All that expensive tech for LA Noire, when other methods would have worked out just fine or better.

Star Citizen is also not a great example because it's the gold standard for feature creep and monetisation practices instead of just making the game and releasing it. Anything in the top-line games bracket that takes more than 3-4 years to make is likely suffering some kind of development hell in the current AAA environment. 9+ years is not the standard (SC is 8 as of now I reckon).

That said, I don't disagree with the overall point. There are better things to be done than recruiting and tasking a 500-strong or more art department with filling out a game's graphics - I'd trade every single Assassin's Creed ever made for a simple-looking Thief/Hitman clone with advanced AI and cutting edge sound design.

The other questions you have are iterations of the same ones we've had at least as far back as 15 years ago. It's always going to be a reality of the commerce of gaming, and the balance depends on what people are going to do to make these high fidelity recreations of reality easier and cheaper to create. That's part of why UE5 is doing what it does; and dema's vision for procedural art isn't that far into the future. Banks of hand-crafted+procedural assets massaged into whatever game is being made are the immediate future.

Epic mentioned that they didn't really optimise it hard or target a high resolution, so that demo's at 30 FPS and (a possibly dynamic resolution) 1440p. And sure, not many AAA games are going to shove that kind of asset detail in there, but if there's one thing we do know, it's that they'll find a way to use available space anyway.

Was lack of optimisation really the reason that forced them to have the demo at that resolution and FPS, though? Or was it that they simply hit the ceiling of what the PS5 is capable of under this kind of graphical load?

Originally Posted by Sulphur

The Team Bondi example may not be the best one.

Star Citizen is also not a great example

Sure, I didn't really mention these examples because they're typical of game development, but rather as the extreme to which the pursuit of technology can push game development.

Originally Posted by Sulphur

The other questions you have are iterations of the same ones we've had at least as far back as 15 years ago. It's always going to be a reality of the commerce of gaming, and the balance depends on what people are going to do to make these high fidelity recreations of reality easier and cheaper to create. That's part of why UE5 is doing what it does; and dema's vision for procedural art isn't that far into the future. Banks of hand-crafted+procedural assets massaged into whatever game is being made are the immediate future.

And yes, my concerns don't really have much to do with UE5 in particular as much as with the general push for realism and quality having a negative impact on the medium.

Was lack of optimisation really the reason that forced them to have the demo at that resolution and FPS, though? Or was it that they simply hit the ceiling of what the PS5 is capable of under this kind of graphical load?

Honestly, we don't know what kind of graphical load there was, as barring a second or two where you could see frame rate dips, it was a seamless experience. In comparison, the last real-time demo from Epic for PS4/UE4 (Agni) had a lot of places where the frame rate jumped up and down and just kind of struggled, but it's been bested both in performance and fidelity by the PS4's most technologically capable games like God of War. This means that there's probably headroom for things to look better on the PS5 relative to the demo's graphics.

Nanite scaling both up and down the hardware power ladder means we have no idea how expensive it is coupled with Lumen - though the fact that there wasn't any RTX in play during the demo is telling.

Yeah, the lack of RTX is part of why I'm suspecting they hit a ceiling. And at the end of the day, it's a tech demo. You don't really know how feasible this is in a full-scale game under the hardware limitations until you have tried it.

I'm not seeing anything that disproves they had a limited budget relative to AAA, or that says they abused their resources? They obviously have years of expertise and talent, but that's never going to be a guarantee that your game will sell well.

Huh? But that's not what anyone has claimed or what the article is trying to say, though? It is saying that what was accomplished with Hellblade was in large part due to their unique circumstances and therefore serves more as an example of an exception to the rule.

My citing of Hellblade was directly in relation to the Team Bondi facial tech example. There were different, smarter ways of going about it that didn't need to blow a hole in the budget. Larger systemic issues with the industry aren't going to be proved or disproved by taking only a few examples either way.

Yes, I get it. They are different teams doing different things at a different time and Team Bondi doing what they did in the 2000s is not comparable to what Ninja Theory was doing in the 2010s. I understand all that. My point, though, is that Team Bondi's LA Noire is an example that proves the trend of budgets and teams getting more and more bloated and development becoming more and more expensive whereas Ninja Theory's Hellblade is an anomaly in the large scale of things.

But sure, Hellblade was an amazing game done by exceptional people with a razor-sharp focus and very likely at a much, much lower price than a big AAA game (though likely also not making as much profit). You do have to wonder, though, if they were making a game of LA Noire's scale and ambition, how much less would it have really cost?

Right now we see a jump in engine-technology. Your remark (as I understand it) is: what use is new engine-technology when it is way too expensive to develop content for that engine? Well, the answer is (I think): better tools.

In my own area of expertise, 20-30 years ago, the big challenge was to create new technology, and make it scale. Nowadays, for the last 10 years, nobody really cares about newer or better technology. Yeah, of course they want faster and they want cheaper. But the issue that is hot right now is: how do I manage this technology? So all the focus over the last 10 years has been on tools: how do I configure my stuff, how do I troubleshoot my stuff, how do I manage software versions, etc.

I think gaming will go through similar cycles. We get new engine-technology. But it's impossible to use on a large scale. Too expensive. Then someone (or some company) comes up with new technology to create content for those engines. Easier and cheaper ways to do motion capture. Easier and cheaper ways to create a 3D environment from simple pictures or movies of existing things. Right now nobody is encouraged to create those (relatively expensive) tools. Because creating content by hand is cheaper. But once creating content by hand gets too expensive, people will be encouraged to develop those tools.

Same could be true of the content too. Not just the tools. If you need a picture of a cow, do you drive to the country and take a picture yourself? Nope, you buy a picture from a specialized website. Same could be done for game content. What Sulphur calls "a bank of hand-crafted+procedural assets". I think this is not common yet, because it's still cheaper to produce your own models/textures/meshes. But once such banks become cost effective, they will lower the cost of producing games.

I do hope that by that time, studios will have rediscovered that "making games that are fun" can make them good profits too. And they will focus less on milking their games for every penny they can, and taking minimal risks. But that's a whole different issue.

Yes, something being possible doesn't mean it's necessarily viable or desirable. At least in the near future. But I also recognise that I'm in the minority and that there will always be a lot of demand for the best possible graphics and more photorealism, even at the cost of other things.


You know, with all these new features that make creating high poly photorealistic graphics that much easier to do, I have a feeling it'll ultimately create a push for more stylized games, since they'll end up standing out more from the rest of the pack.