Ptolom wrote:I just remembered the nightmare of downloading 50GB of Doom only to find wine couldn't handle the copy protection.

Doom in 50GB? That must be new Doom, then. But the original wasn't even fifty megabytes. (Two or three floppies, compressed, at most?)

How much of that is necessary code..? And/or data?

And, having decided to not just post that as a direct reply, I'm now going to ramble a bit rather than leave a possibly loaded question.

Does running under Windows (or a step or two further removed, with DirectX, etc.) save coding, like it ought to? Or does it take more effort for programmer/compiler to jump through API hoops, where previously a game would just drill its own lean'n'mean designer boilerplate straight down to x86 assembler for the most part (user-supplied details prompting branching for SoundBlaster 16 audio or Hercules graphics, etc.)?

How much efficiency then gets eaten up with the extra graphical complexity? You'd assume most of the actually detailed 'code' would be supplied by the hardcoded GPU, not needing supplying by the game. Whether raster data (then rendered in 3D largely by the card, rather than the faux-3d/2.5d in-binary method of 20+ years ago) or vector (see Alone In The Dark of... '92..? for a sprightly unaccelerated code-only solution, and even original Elite's wireframe on 32k(!) systems, and its Frontier successors). In my mind, the release of control of the complicated rendering guts to a ShaderModel3+ GPU should outweigh the complexity of priming such complexities as custom part-transparent view-windows in 3d space, but I've dabbled too little (essentially not at all outside of already abstracted situations) in GPU-optimised code.

Maybe the WADs (or modern equivalent) now need to store greater detail of texture, layout and (though likely minimally, unless plaintextually like Transcendence's XMLness) script-coding. And of course I expect the levels to be more numerous than the 32 (I think) original Doom release levels, as well as more detailed.

Maybe there's something in the more advanced enemy (and ally?) AIs that sucks up code, although a playable game only needs the kind of borderline Artificial Stupidity that is just-unpredictable-enough. And usually a tuned 'emergent behaviour' suffices to present even a seemingly complex 'society' of denizens. And byte-setting/bit-flipping code (also relevant to states such as those used in tracking absolute positions/headings/intentions of the doubtless more numerous NPCs) is the simplest thing to create.

I sort of have this same distrust with OO programming, I must admit. It makes things easier to visually understand during/after authoring (and, via compiler complaints, debugs data re-associations across totally different elements that aren't explicitly defined as linked by inheritance/etc), but I always get the distinct feeling that the envelope-code used in object-handling actually slows things down if the compiler doesn't strip all of the human-readable container-code away when producing the executable.

Yeah, almost everything you consume, whether games or webpages, is 90% images/etc. Code is always a fairly small part of it.

Like, there's a haha-only-serious bit of webpage optimization advice that, instead of doing tons of complicated shit to optimize and batch and cut your code weight, you can just cut 1 jpeg from your front page and be better off.

For comparison, the wad file for Doom 1 is 12M, Doom 2 is 14M, Doom 3 (2004) is 1.5GB unpacked. Doom 4 is 47GB. If that trend continues, Doom 5 will be about 1TB. Already I can't really appreciate the high-resolution textures on my 1600x900 monitor.

Don't confuse features with bloat. By today's standards, Doom 1 was a relatively simple project. Here are a few statements that I strongly suspect to be true:

* Doom 3's physics engine is an order of magnitude more complex than all of Doom 1 combined.
* Doom 3's network code is more complex than all of Doom 1's non-network functionality combined.
* The piece of code responsible for rendering Doom 3's ingame videos is more complex than all of Doom 1 combined.
* Doom 3's audio code (with positional audio, and audio changes based on level geometry) is more complex than all of Doom 1 combined.
* The facial expression system of Doom 3 is more complex than all of Doom 1 combined.
* Sadly, the copy protection system of Doom 3 is more complex than all of Doom 1 combined.
* etc

(Note: I haven't played or examined Doom 3; these statements are equally likely to be true for any other current fps)

So that's why the executable grew larger: it just does more stuff. You cannot derive statements about coding efficiency from the data you presented. But again, most of the 50GB is textures/videos/3d models; the executable is probably just a few MB. Still larger than Doom 1, but not by as much.

Soupspoon wrote:I sort of have this same distrust with OO programming, I must admit. It makes things easier to visually understand during/after authoring (and, via compiler complaints, debugs data re-associations across totally different elements that aren't explicitly defined as linked by inheritance/etc), but I always get the distinct feeling that the envelope-code used in object-handling actually slows things down if the compiler doesn't strip all of the human-readable container-code away when producing the executable.

Please internalize the golden rule of optimization: measure first. Distrust without data is not a useful thing to live by.

I believe your assumptions are wrong, and also misguided - you're focusing too much on small scale optimizations. With infinite time and resources, you can of course produce the most efficient and optimized executable just using assembler without any fancy structure or paradigm. But you never have infinite time and resources, so you need to do the best with what you got. Using OO will introduce maybe a few percent of overhead (unless you go bonkers with virtual multiple inheritance in every class or something), but OO will also allow your programmers to finish the same task in less time, so they will then have time left to benchmark the code, find inefficiencies and employ optimizations that will far outweigh the few percent lost.

OO is great for speeding up development -- and for certain things it's hard to imagine how to implement them without some form of OOP.

Code readability should not be underestimated. Readable code is easier to debug and easier to improve and maintain. If you've got a (large) program entirely written in assembly, while it may be fast and small, it will be next to impossible to maintain without the original programming team. However, if you implemented the same code in a highly readable fashion (like OOP tends to do), it's much easier to get people who can maintain it further down the line. They spend less time just trying to parse the code and more time actually maintaining it.

Tub wrote:Don't confuse features with bloat. By today's standards, Doom 1 was a relatively simple project. Here are a few statements that I strongly suspect to be true:

* Doom 3's physics engine is an order of magnitude more complex than all of Doom 1 combined.
* Doom 3's network code is more complex than all of Doom 1's non-network functionality combined.
* The piece of code responsible for rendering Doom 3's ingame videos is more complex than all of Doom 1 combined.
* Doom 3's audio code (with positional audio, and audio changes based on level geometry) is more complex than all of Doom 1 combined.
* The facial expression system of Doom 3 is more complex than all of Doom 1 combined.
* Sadly, the copy protection system of Doom 3 is more complex than all of Doom 1 combined.
* etc

(Note: I haven't played or examined Doom 3; these statements are equally likely to be true for any other current fps)

I think you're confusing Doom 3 and Doom 4 (the new Doom) - Doom 3 was from 2004, and I think very few of those statements would be true. This suggests that Doom was ~40k sloc and that Doom 3 was ~600k sloc - that's an order of magnitude increase, but actually perhaps not as big as you might expect (Quake 3, including its tooling, was ~350k). We can't comment on later id games - the source code has not been released.

Soupspoon wrote:And of course I expect the levels to be more numerous than the 32 (I think) original Doom release levels, as well as more detailed.

An unfortunate fact of modern games is that levels are seldom more numerous, just much more detailed. There's still a Doom modding community; even Doom 3 had people making mods and maps. Making high-quality mods in modern games requires much more effort, as the standard set by the base game is so high.

pogrmman wrote:However, if you implemented the same code in a highly readable fashion (like OOP tends to do)

I'm of the opinion that readable code and OOP code have no causal link. Video game devs have their own reasons for not liking OOP, and long term maintainability has historically not been a concern.

(And started writing this before Xenomortis's prior message appeared...)

Points already conceded about readability compared with bare Assembler, but wisely written C is a short step from such an optimum yet does not drag invisible object-handling into the equation. I don't object to the handling, but I like to know what goes on, in getting my mind into the code, and writing my own packaging methods, effectively, was something I now realise I did before I knew what OO was. But today isn't the age (with exceptions) of one-person developers producing their own products in isolation.

Tub wrote:Here are a few statements that I strongly suspect to be true:

* Doom 3's physics engine is an order of magnitude more complex than all of Doom 1 combined.

Even in the early days of delayed-hitscan approximations, there were still explicit ballistic (sometimes even guided-ballistic, with limited bending or homing) trajectories for select projectiles... Gravity was unvarying (until Quake), but within the limits of the faux-3d engine it was only a line of code away from being variable or even invertible gravity (and a few more lines away from even more than that). Debris impact, especially from destructible architecture, is largely missing, but that's already something the rendering engine doesn't provide, and when it does I'd class that under dynamic projectile handling. If you include rendering of the true-3d layouts under the umbrella of the Physics Engine, maybe you have a point, but see below.

* Doom 3's network code is more complex than all of Doom 1's non-network functionality combined.

I find that hard to believe. Message-passing of remote-actor actions sufficient to give a (mostly) lag-free experience of a remote player (and keeping the resulting 'shared' ballistics/environment changes pseudo-synchronised) is not a particularly complex problem, beyond tuning up which failure conditions prompt which workarounds. And perhaps some anti-sniff/anti-spoof measures.
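The simple version of that message-passing can be sketched in a few lines of C as dead reckoning: extrapolate a remote actor from its last reported state. (Everything here - struct layout, names - is invented for illustration, not from any actual engine.)

```c
#include <assert.h>

/* Hypothetical sketch of dead reckoning for a remote actor. Given the
 * last state update we received and how stale it is, extrapolate where
 * the actor probably is right now. */
struct actor_state {
    double x, y;     /* last reported position */
    double vx, vy;   /* last reported velocity (units/second) */
    double t;        /* timestamp of that report, in seconds */
};

static void predict_position(const struct actor_state *s, double now,
                             double *px, double *py)
{
    double dt = now - s->t;   /* how old the information is */
    if (dt < 0) dt = 0;       /* never extrapolate backwards */
    *px = s->x + s->vx * dt;
    *py = s->y + s->vy * dt;
}
```

The real complexity, as noted, is in the failure conditions: what to do when the next update contradicts the prediction.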

* The piece of code responsible for rendering Doom 3's ingame videos is more complex than all of Doom 1 combined.

By my understanding, most of that is delegated to dedicated GPU resources. Originally it will have involved rendering a bitmap across a polygon by determining (or sourcing from the pre-render) the component triangles, determining the limits of the graphic to be emblazoned, calculating the offset, scaling and (probably) shearing matrix, then scanlining the triangle, mapping the appropriate texture-point (straight, or fuzzy/antialiased/etc, accordingly) to the screen or pre-screen memory-map. These days, a polygon (if not a polyhedron), viewpoint parameters and (if not already cached for use) the image resource, plus bump-maps and reflectivity maps to taste, are passed to the copro accelerator that already has a majority of the required functionality hard-coded into it. Even recursive viewports (trivial to code, potentially intensive to implement) are easily explored. Slightly more effort for 'faux lag', but it can still all be done on-card.

* Doom 3's audio code (with positional audio, and audio changes based on level geometry) is more complex than all of Doom 1 combined.

Doom 1 already used stereo positioning. Was famous for it. And for two decades, at least, sound-cards have featured post-processing abilities that can add such subtleties; these days they're being used to give true binaural '3d sound' effects without advanced speaker layouts, and 5.1-capable hardware is again an example of responsibility largely abdicated by the game-code, beyond being primed with the basic requirements on how to morph'n'mix the audio samples.

* The facial expression system of Doom 3 is more complex than all of Doom 1 combined.

Well, that's wireframing vs spriting, directly compared. But the majority of the wireframing is handled off CPU, as above.

* Sadly, the copy protection system of Doom 3 is more complex than all of Doom 1 combined.

Unless there's multivariable polymorphism involved, I'm not sure how even (self-referentially) binary-hashing check-code, and all the other classic tricks, take multi-megabytes of code to accomplish.

* etc

(Note: I haven't played or examined Doom 3; these statements are equally likely to be true for any other current fps)

Not played, myself. Watched. Admired the techniques.

But again, most of the 50GB is textures/videos/3d models; the executable is probably just a few MB. Still larger than Doom 1, but not by as much.

That's probably the crux. I'm a coder, not much of an artist. I prefer procedural generation (and/or variegation*) to personally retouching and personalising every single pixel of a landscape/etc. Not that my eye-candy is as eye-candyish as a fully artified eye-candy. Perhaps in conjunction with an artist willing to collaborate, though, I could maybe provide a visually-stunning and not obviously cloned dynamic landscape with a minimum of base data.

The brickwork of a building can be reduced to subunits, for example. At a given Level Of Detail, the brickwork is represented by a suitable set of 'panels', and some trivial subroutine (akin to a discrete cosine transform, for example) changes the nature of each panel (contiguously with its neighbours, if above the brick-by-brick level of differentiation where that doesn't matter so much) to make it non-identical to its neighbours. As additional modifications, dark marks can be overlaid beneath each window-frame, perhaps, according to age (or as shadow, but only where shadows aren't going to be post-rendered anyway) to represent industrial-era sooting-up of rain-protected areas, etc, without a hint of spraybrushing in the original artwork.

The closer your viewpoint, the higher the LOD, and further DCT-inspired variations are added to bring out higher-res 'brick' panels in line with the variegation already implied at the lower LOD, plus newly revealed fine detail only now emerging from the pixel-soup. As the view gets closer, less 'absolute' area needs rendering, and so more overlays can be added to a section of screen at no extra workload (or, conversely, zooming back out, the additional background can take up the effort no longer needed at near-subpixel levels of rendering; and peering round the edge of a building, the near wall takes similar effort to finely detail as the vista spreading out to the horizon does, for like areas of visual acuity).

Once at the brick level, manufacturing marks can distinguish each and every one, whilst being in line with the transforms of the less precise LODs. And with an actual deliberately defined 'delta' in the scene definition, a single brick can be seen as subtly miscoloured at all relevant resolutions, indicative of it being 'special' (loose and concealing a plot-relevant dead-drop package, maybe, or subtly proud from the wall to enable a hand/foot-hold to aid climbing) - all with a small set of images and a (relatively) simple recursively calculating engine to create scenes as varied as the library of variations allows.
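The per-brick part of that scheme can be sketched very cheaply: hash the brick's integer coordinates (plus a per-building seed) so every brick gets the same "random" variation at every LOD, with no stored per-brick data. (The mixing constants and names below are my own invention for illustration.)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: deterministic per-brick variation. The same
 * (seed, x, y) always hashes to the same value, so a brick looks
 * identical no matter which level of detail regenerates it. */
static uint32_t brick_hash(uint32_t seed, int32_t x, int32_t y)
{
    uint32_t h = seed;
    h ^= (uint32_t)x * 0x9E3779B9u;   /* golden-ratio-style mixing */
    h = (h << 13) | (h >> 19);        /* rotate to spread the bits */
    h ^= (uint32_t)y * 0x85EBCA6Bu;
    h *= 0xC2B2AE35u;
    h ^= h >> 16;                     /* final avalanche */
    return h;
}

/* Map the hash to a small luminosity delta in [-8, +7] greyscale
 * steps, so neighbouring bricks differ subtly but repeatably. */
static int brick_lum_delta(uint32_t seed, int32_t x, int32_t y)
{
    return (int)(brick_hash(seed, x, y) & 0xFu) - 8;
}
```

The 'special' brick is then just an explicit delta stored in the scene definition, overriding the hashed one.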

I doubt my early efforts on this still survive, on the floppy disks that are doubtless beyond use (and techniques have probably overtaken me, since), but that's the background behind my fledgling proof-of-concept.

(Hint: work with Hue/Saturation/Luminosity base images, or similarly defined/interpreted, rather than RGB. Applying a flat hue transform to each building gives each an individual character of 'stone', with a very much reduced variation of hue between panels unless you want it to look like a Rainbow House; meanwhile, saturation and luminosity(/intensity/value/whatever) can be separately cued to their own ranges, independently, so that even a tightly single-hued structure can show variations in weathering/etc, and shadow-overlays affect just the one image-plane, for speed. It's easier to convert that to RGB prior to painting the end pixels than to keep the desired hue/etc of the RGB trio balanced throughout the entire process of adjusting with procedural washouts, et al.)

True, sorry. I was talking about Doom 1 vs. current-gen games. My observations weren't based on actual LoC, but on my rough understanding of each module's workings. Though I underestimated Doom 1's complexity; I totally forgot that it includes a software renderer and sound drivers and everything. A modern port that calls into OpenGL and other system APIs could be a lot leaner than the original.

Xenomortis wrote:I'm of the opinion that readable code and OOP code have no causal link. Video game devs have their own reasons for not liking OOP, and long term maintainability has historically not been a concern.

Eh. OOP is neither a necessary nor a sufficient condition for readable code, but it certainly helps. Of course you can write readable, structured, modular code with a clear separation of implementation and interface without ever using a class, an interface, a namespace, a private declaration, a reliable destructor... but with each tool you remove, it gets more difficult.

Soupspoon wrote:wisely written C is a short step from such an optimum yet does not drag invisible object-handling into the equation. I don't object to the handling, but I like to know what goes on[..]

Then instead of avoiding OO, why don't you spend some time understanding it? Figure out how the compiler transforms your objects into code and memory layout. In many cases, what your compiler does is very similar to the code you'd write in C, except that it looks cleaner and more concise. And then there's exceptions: saver of thousands of lines of code, remover of hundreds of leaks and assorted bugs, and if used correctly, a performance improvement.
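To make "figure out what the compiler does" concrete: roughly speaking, a class with one virtual method becomes a struct whose first member points at a table of function pointers. Here's that idea hand-written in C - a sketch of the general scheme, not any specific compiler's ABI (all names invented):

```c
#include <assert.h>

/* Hypothetical sketch: what a virtual method roughly compiles down to.
 * The "vptr" is just a hidden first member pointing at a vtable. */
struct actor_vtable;
struct actor {
    const struct actor_vtable *vt;  /* the hidden vptr */
    int hp;
};
struct actor_vtable {
    int (*damage)(struct actor *self, int amount);
};

static int monster_damage(struct actor *self, int amount)
{
    self->hp -= amount;             /* base behaviour: full damage */
    return self->hp;
}
static int tank_damage(struct actor *self, int amount)
{
    self->hp -= amount / 2;         /* "override": armoured actor */
    return self->hp;
}

static const struct actor_vtable monster_vt = { monster_damage };
static const struct actor_vtable tank_vt    = { tank_damage };

/* A "virtual call": one pointer load plus one indirect call - that's
 * the whole runtime cost of the dispatch. */
static int apply_damage(struct actor *a, int amount)
{
    return a->vt->damage(a, amount);
}
```

The envelope-code being distrusted is, in the common case, just that indirect call; there's no hidden per-call bookkeeping beyond it.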

To elaborate on a few points:

* physics engine: it's not just about missiles, it's also about explosions, ragdolls, volumetric smoke etc. There are open source physics engines for comparison, like Bullet; it's at 200k lines (not counting tests), and I doubt current shooters use something simpler than that. Bouncing a grenade against a wall seems easy enough, but modern audiences require physically correct bouncing of a corpse with flailing limbs against a spiked ball that's suspended on a chain. While on fire.

* network code is no longer just a distribution of actor state. Lag compensation involves both predicting future states and recalling past states. There's compression, possibly encryption involved. With rising numbers and complexities of actors, there's code that determines which actor's state gets distributed to whom, instead of just broadcasting everything. Actors may be deliberately hidden from clients, to prevent wallhacks. Then there's lots of networking outside of a match, e.g. for matchmaking, session tracking, leaderboards, achievements and all that stuff. There's certainly a HTTP(s) client implementation in there, and that alone is non-trivial.

* videos: you've missed the giant step of decoding the video. That's very complex (x264 is open source for comparison), and last I checked, most engines have a specialized in-engine codec instead of relying on operating system support. Not sure if that's still true, with hardware video decoders becoming more common on modern GPUs.

* audio: figuring out a relative position is easy; surveying the level geometry to determine a suitable amount of dampening, echoing and possibly a modified source direction ("the voices are coming from that door!" instead of "oh, right behind this wall") is complex. I must admit that I don't know how much of these techniques are actually used in Doom 4, but the more realistic you want it to be, the more complex it will get, and there's not really an upper limit to the amount of code you can write for it.

* facial expression: again, proper facial expressions require a lot more than just a small vertex shader. On the other hand, maybe not in doom..

Tub wrote:Then instead of avoiding OO, why don't you spend some time understanding it? Figure out how the compiler transforms your objects into code and memory layout. In many cases, what your compiler does is very similar to the code you'd write in C, except that it looks cleaner and more concise. And then there's exceptions: saver of thousands of lines of code, remover of hundreds of leaks and assorted bugs, and if used correctly, a performance improvement.

Agree. More directly: to a first approximation, there's no such thing as wisely-written C. C is not a language for humans to write in, practically nobody can actually do it correctly. Write in a language that doesn't shoot your foot off quite as easily, so any C-writing bugs are confined to the compiler for your language, which is a lot more battle-hardened.

("Wisely", I meant, more in the sense of good, consistent, explanatory and probablyCamelCased variable and subroutine names at every stage, keeping to a sane indentation and whitespace scheme, and avoiding polynested single-line run-ons without introducing any abstractions the compiler cannot handle. All the while presenting logical linear processes perfectly usable as pseudocode for programmers of alternate paradigms... Not that I'm a total stranger to obfuscation, both inadvertent and entirely intentional, of course. But I try to be consistent within a single project, even if it's just a pet project for my own entertainment.)

Xanthir wrote:Agree. More directly: to a first approximation, there's no such thing as wisely-written C. C is not a language for humans to write in, practically nobody can actually do it correctly. Write in a language that doesn't shoot your foot off quite as easily, so any C-writing bugs are confined to the compiler for your language, which is a lot more battle-hardened.

I think that is quite a bold statement. Projects like the Linux kernel show that it is certainly possible to produce high-quality programs in C alone. Yes, the Linux kernel also contains numerous bugs and security vulnerabilities. But so does the .NET Framework or the JRE, and both are written in much "safer" languages. EDIT: For comparison I looked up the numbers of CVEs in those projects. Linux had 149 in 2016, Java had 29 and .NET had 7. If you correct for attack surface and SLOC, I don't think that Linux is that bad. Note that the JRE / .NET received no major updates in 2016 while there were 5 major Linux releases.

There will always be a need for system programming languages (e.g. for implementing kernels, drivers, databases, embedded applications or other performance critical programs; games certainly also fall in this category). Good questions however are: Is C a good system programming language in 2016 with C++14 (and soon C++17) and Rust available? Is it wise to implement general purpose applications in C?

Regarding the OOP debate: Many programmers equate OOP and everything-is-in-an-inheritance-hierarchy which is certainly false. Zero-overhead OOP designs are certainly possible if the programmer understands how the code is going to be translated to good old C.

That's kinda true, in the same way that it's true that The Gun That Also Shoots Backwards Whenever You Fire It doesn't require you to shoot yourself, since it's possible to hold it in such a way that you'll rarely do so. But I'm still comfortable blaming TGTASBWYFI for the increase in self-inflicted gunshot wounds among its owners, and hold people who promote usage of TGTASBWYFI because they claim it's slightly faster and more accurate than other guns responsible for the increase in accidental deaths.

The level of attention required to "not be stupid" with C is, to a first approximation, not possible even for highly skilled humans. You can blame humans for that, sure, but, uh, why else was it designed if not to be used by humans? We've designed better languages in the meantime that accomplish similar things without the same mistakes.

Xanthir wrote:The level of attention required to "not be stupid" with C is, to a first approximation, not possible even for highly skilled humans.

Do you have any basis for this statement other than dogma? It's one thing to say that you don't want to bother (which is entirely fair), but to outright claim that it's not possible (or "to a first approximation" not possible, which is... I dunno, only 70% impossible or something?) requires something more than just repeating the same thing a lot.

"'Legacy code' often differs from its suggested alternative by actually working and scaling." - Bjarne Stroustrup
www.commodorejohn.com - in case you were wondering, which you probably weren't.

I've got lots of anecdotes, which are *like* data if you pile them up and squint. In particular, everyone I know in the security community is convinced of this, because most security vulns are due to people writing bad C (where "bad" ranges from "omits obvious buffer checks" to "hits a bizarre UB case I had to look up just to be sure of, and their compiler converts that into a vuln"). A large fraction of the non-security programmers I know are also at least somewhat convinced of this, largely due to simple experience. A lot of very smart programmers I'm familiar with are excited about Rust for this very reason, as a "C that doesn't shoot you".

There are also actual automated studies of C usage across millions of codebases that show that practically every piece of non-trivial code hits UB, and thus is one unlucky compiler update away from a possible vulnerability. This is a decent precise interpretation of what I mean by "impossible to a first approximation".
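To give the UB point a concrete flavour (my own illustrative snippet, not from any of those studies): the natural-looking overflow test `a + 1 > a` invokes undefined behaviour on signed overflow, so a compiler may legally fold it to "always true" and delete the check. The well-defined version tests the bound *before* doing the arithmetic:

```c
#include <assert.h>
#include <limits.h>

/* A well-defined overflow guard: compare against INT_MAX before
 * adding, so no signed overflow ever occurs. (The naive `a + 1 > a`
 * form is UB when a == INT_MAX, and optimizers exploit that.) */
static int can_increment(int a)
{
    return a < INT_MAX;
}

/* Hypothetical helper: increments only when it's safe, reporting
 * failure instead of overflowing. Returns 1 on success, 0 otherwise. */
static int checked_increment(int a, int *out)
{
    if (!can_increment(a))
        return 0;
    *out = a + 1;
    return 1;
}
```

This is exactly the "unlucky compiler update" failure mode: the naive check appears to work under one compiler and silently vanishes under the next.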

Or in other words, yes, "people don't bother to do the legwork everybody knows is necessary, and so things break and they blame the language." (Or, less commonly, "people are using a janky compiler that does something stupid, and so things break and they blame the language.")

Uh, no, those are both terrible and frankly incorrect ways of putting it. Avoiding UB in C requires nearly superhuman levels of knowledge and detail; blaming people for being merely human is terrible. And I'm talking about *all* compilers, good compilers that everyone uses. They all exploit UB, for good reason - it lets them make things go faster. And it's 100% allowed by the language, on purpose - if it has bad results, again, the language is the correct one to blame.

You can write good code and bad code in any language. Features to assist you in avoiding certain errors are nice, sometimes. They're a tool, and like most tools, they have a place. In all honesty, I prefer not to write in C myself; I can flesh out things faster in C# or Java, generally. Likewise, OO methodologies are a tool. For some things, a very good tool. It is definitely not the right tool for every project, but there's significant value in learning how to use it well. There's nothing whatsoever wrong with using either.

As far as optimizations go, honestly, I find that it's easy for coders to get wrapped up in optimization. For most things, it honestly doesn't matter that much. Flesh it out quick and dirty, worry about optimization later if it turns out to be necessary. Mostly, it isn't. Sure, something that scales exponentially instead of linearly or whatever can be an issue, but..."this code runs 5% faster" is usually not important.

Shit, the first shooter I wrote (purely for fun, it was awful) was done in Visual Basic. Ideal choice? Obviously not. But I was in the military, and very bored, and it was an available option. Still, it was good enough to write a quick and dirty physics engine with collision detection and whatnot. There's something to be said for writing your own vs using an existing API, of course, but... that approach only scales so far. Drilling down to assembly is mostly undesirable. Again, it's premature optimization. And it doesn't really spare you the pain of dealing with endless standards. Now you're merely dealing with different standards on a different level. Mostly, you don't want to sweat the details of user hardware, because there's just so much of it.

A lot of graphics computations end up being things like dynamic lighting. You want significantly changeable worlds, and you also want things to look great? Well, there's a cost associated with that. And yeah, higher-res textures (and layered textures!) definitely are going to be a big difference. There are a LOT of potential optimization rabbit holes to go down if you want the best performance.

But... you're not gonna embrace modern 3d games without at least a pretty good comfort level with OO, I think. There's some odd engines out there, but offhand, I can't think of any that shun OO. I can't imagine why you'd want to. Generally speaking, you have a very large number of entities to handle, and anything that isn't fairly object-oriented would seem to tend towards a lot of unnecessary complexity. Sure, I don't go in for creating tons of pointless layers of objects for things you've only got one of or whatnot, but in practice, it's generally just easier to use the baked-in methodology in Unity or whatever you're using for the vast majority of stuff.