In 1993, id Software released the first-person shooter Doom, which quickly became a phenomenon. The game is now considered one of the most influential games of all time.

A decade after Doom’s release, in 2003, journalist David Kushner published a book about id Software called Masters of Doom, which has since become the canonical account of Doom’s creation. I read Masters of Doom a few years ago and don’t remember much of it now, but there was one story in the book about lead programmer John Carmack that has stuck with me. This is a loose gloss of the story (see below for the full details), but essentially, early in the development of Doom, Carmack realized that the 3D renderer he had written for the game slowed to a crawl when trying to render certain levels. This was unacceptable, because Doom was supposed to be action-packed and frenetic. So Carmack, realizing the problem with his renderer was fundamental enough that he would need to find a better rendering algorithm, started reading research papers. He eventually implemented a technique called “binary space partitioning,” never before used in a video game, that dramatically sped up the Doom engine.

That story about Carmack applying cutting-edge academic research to video games has always impressed me. It is my explanation for why Carmack has become such a legendary figure. He deserves to be known as the archetypal genius video game programmer for all sorts of reasons, but this episode with the academic papers and the binary space partitioning is the justification I think of first.

Obviously, the story is impressive because “binary space partitioning” sounds like it would be a difficult thing to just read about and implement yourself. I’ve long assumed that what Carmack did was a clever intellectual leap, but because I’ve never understood what binary space partitioning is or how novel a technique it was when Carmack decided to use it, I’ve never known for sure. On a spectrum from Homer Simpson to Albert Einstein, how much of a genius-level move was it really for Carmack to add binary space partitioning to Doom?

I’ve also wondered where binary space partitioning first came from and how the idea found its way to Carmack. So this post is about John Carmack and Doom, but it is also about the history of a data structure: the binary space partitioning tree (or BSP tree). It turns out that the BSP tree, rather interestingly, and like so many things in computer science, has its origins in research conducted for the military.

That’s right: E1M1, the first level of Doom, was brought to you by the US Air Force.

The VSD problem

The BSP tree is a solution to one of the thorniest problems in computer graphics. In order to render a three-dimensional scene, a renderer has to figure out, given a particular viewpoint, what can be seen and what cannot be seen. This is not especially challenging if you have lots of time, but a respectable real-time game engine needs to figure out what can be seen and what cannot be seen at least 30 times a second.

This problem is sometimes called the problem of visible surface determination. Michael Abrash, a programmer who worked with Carmack on Quake (id Software’s follow-up to Doom), wrote about the VSD problem in his famous Graphics Programming Black Book:

I want to talk about what is, in my opinion, the toughest 3-D problem of all: visible surface determination (drawing the proper surface at each pixel), and its close relative, culling (discarding non-visible polygons as quickly as possible, a way of accelerating visible surface determination). In the interests of brevity, I’ll use the abbreviation VSD to mean both visible surface determination and culling from now on.

Why do I think VSD is the toughest 3-D challenge? Although rasterization issues such as texture mapping are fascinating and important, they are tasks of relatively finite scope, and are being moved into hardware as 3-D accelerators appear; also, they only scale with increases in screen resolution, which are relatively modest.

In contrast, VSD is an open-ended problem, and there are dozens of approaches currently in use. Even more significantly, the performance of VSD, done in an unsophisticated fashion, scales directly with scene complexity, which tends to increase as a square or cube function, so this very rapidly becomes the limiting factor in rendering realistic worlds.

Abrash was writing about the difficulty of the VSD problem in the late ’90s, years after Doom had proved that regular people wanted to be able to play graphically intensive games on their home computers. In the early ’90s, when id Software first began publishing games, the games had to be programmed to run efficiently on computers not designed to run them, computers meant for word processing, spreadsheet applications, and little else. To make this work, especially for the few 3D games that id Software published before Doom, id Software had to be creative. In these games, the design of all the levels was constrained in such a way that the VSD problem was easier to solve.

For example, in Wolfenstein 3D, the game id Software released just prior to Doom, every level is made from walls that are axis-aligned. In other words, in the Wolfenstein universe, you can have north-south walls or west-east walls, but nothing else. Walls can also only be placed at fixed intervals on a grid—all hallways are either one grid square wide, or two grid squares wide, etc., but never 2.5 grid squares wide. Though this meant that the id Software team could only design levels that all looked somewhat the same, it made Carmack’s job of writing a renderer for Wolfenstein much simpler.

The Wolfenstein renderer solved the VSD problem by “marching” rays into the virtual world from the screen. Usually a renderer that uses rays is a “raycasting” renderer—these renderers are often slow, because solving the VSD problem in a raycaster involves finding the first intersection between a ray and something in your world, which in the general case requires lots of number crunching. But in Wolfenstein, because all the walls are aligned with the grid, the only location a ray can possibly intersect a wall is at the grid lines. So all the renderer needs to do is check each of those intersection points. If the renderer starts by checking the intersection point nearest to the player’s viewpoint, then checks the next nearest, and so on, and stops when it encounters the first wall, the VSD problem has been solved in an almost trivial way. A ray is just marched forward from each pixel until it hits something, which works because the marching is so cheap in terms of CPU cycles. And actually, since all walls are the same height, it is only necessary to march a single ray for every column of pixels.
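
To make the idea concrete, here is a minimal sketch of that grid-marching loop in Python. It is purely illustrative—not id Software's code—and the map format and names are invented:

```python
import math

def next_crossing(coord, d):
    """Parametric distance along the ray to the next grid line in one axis."""
    if d > 0:
        return (math.floor(coord) + 1 - coord) / d
    if d < 0:
        return (math.floor(coord) - coord) / d
    return math.inf

def march_ray(grid, px, py, angle, max_steps=64):
    """Walk a ray from (px, py) through a tile grid, checking only the points
    where it crosses grid lines; return the distance to the first wall hit."""
    dx, dy = math.cos(angle), math.sin(angle)
    x, y = px, py
    for _ in range(max_steps):
        t = min(next_crossing(x, dx), next_crossing(y, dy)) + 1e-9  # nudge across the line
        x, y = x + dx * t, y + dy * t
        col, row = int(x), int(y)
        if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
            return None                              # ray left the map
        if grid[row][col] == 1:                      # 1 marks a wall tile
            return math.hypot(x - px, y - py)
    return None

# Example: a tiny map with a wall two tiles ahead of the player.
grid = [[0, 0, 1],
        [0, 0, 1],
        [0, 0, 1]]
print(march_ray(grid, px=0.5, py=1.5, angle=0.0))   # ~1.5: distance to the wall at x=2
```

Because every wall is the same height, one march like this per screen column is enough to size the wall slice for that column, which is why the scheme is so cheap.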

This rendering shortcut made Wolfenstein fast enough to run on underpowered home PCs in the era before dedicated graphics cards. But this approach would not work for Doom, since the id team had decided that their new game would feature novel things like diagonal walls, stairs, and ceilings of different heights. Ray marching was no longer viable, so Carmack wrote a different kind of renderer. Whereas the Wolfenstein renderer, with its ray for every column of pixels, is an “image-first” renderer, the Doom renderer is an “object-first” renderer. This means that rather than iterating through the pixels on screen and figuring out what color they should be, the Doom renderer iterates through the objects in a scene and projects each onto the screen in turn.

In an object-first renderer, one easy way to solve the VSD problem is to use a z-buffer. Each time you project an object onto the screen, for each pixel you want to draw to, you do a check. If the part of the object you want to draw is closer to the player than what was already drawn to the pixel, then you can overwrite what is there. Otherwise you have to leave the pixel as is. This approach is simple, but a z-buffer requires a lot of memory, and the renderer may still expend a lot of CPU cycles projecting level geometry that is never going to be seen by the player.
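
As a rough illustration of the check described above (not any particular engine's code, and the buffer dimensions are only examples):

```python
# A minimal sketch of the per-pixel z-buffer test; names and sizes are illustrative.
WIDTH, HEIGHT = 320, 200
frame   = [[0] * WIDTH for _ in range(HEIGHT)]             # colour index per pixel
zbuffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]  # nearest depth seen so far

def plot(x, y, depth, colour):
    """Write the pixel only if this surface is closer than what is already there."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        frame[y][x] = colour
    # otherwise something nearer has already been drawn here; leave the pixel alone

plot(10, 10, depth=5.0, colour=3)   # draws: nothing was there yet
plot(10, 10, depth=9.0, colour=7)   # ignored: a nearer surface already owns the pixel
```

At 320 by 200, for example, that is 64,000 depth values to store and clear every frame, which gives a feel for the memory cost mentioned above.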

In the early 1990s, there was an additional drawback to the z-buffer approach: On IBM-compatible PCs, which used a video adapter system called VGA, writing to the output frame buffer was an expensive operation. So time spent drawing pixels that would only get overwritten later tanked the performance of your renderer.

Since writing to the frame buffer was so expensive, the ideal renderer was one that started by drawing the objects closest to the player, then the objects just beyond those objects, and so on, until every pixel on screen had been written to. At that point the renderer would know to stop, saving all the time it might have spent considering far-away objects that the player cannot see. But ordering the objects in a scene this way, from closest to farthest, is tantamount to solving the VSD problem. Once again, the question is: What can be seen by the player?

Initially, Carmack tried to solve this problem by relying on the layout of Doom’s levels. His renderer started by drawing the walls of the room currently occupied by the player, then flooded out into neighboring rooms to draw the walls in those rooms that could be seen from the current room. Provided that every room was convex, this solved the VSD issue. Rooms that were not convex could be split into convex “sectors.” You can see how this rendering technique might have looked if run at extra-slow speed in the video above, where YouTuber Bisqwit demonstrates a renderer of his own that works according to the same general algorithm. This algorithm was successfully used in Duke Nukem 3D, released three years after Doom, when CPUs were more powerful. But, in 1993, running on the hardware then available, the Doom renderer that used this algorithm struggled with complicated levels—particularly when sectors were nested inside of each other, which was the only way to create something like a circular pit of stairs. A circular pit of stairs led to lots of repeated recursive descents into a sector that had already been drawn, strangling the game engine’s speed.
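
Here is a loose sketch of that flood-outward idea, reconstructed from the description above rather than from Doom's source. `Sector`, `draw_wall`, and `portal_visible` are invented placeholders, and the sketch only shows the overall shape of the algorithm, not the clipping and ordering details that made nested sectors expensive in practice:

```python
from dataclasses import dataclass, field

@dataclass
class Sector:
    id: int
    walls: list                                   # wall segments belonging to this sector
    portals: list = field(default_factory=list)   # (opening, neighbouring Sector) pairs

def draw_wall(wall, view):
    print("drawing wall", wall)      # stand-in for projecting and rasterizing the wall

def portal_visible(opening, view):
    return True                      # stand-in for a view-frustum test on the opening

def render_from(sector, view, drawn=None):
    """Draw the player's sector, then flood into neighbouring sectors that can be
    seen through portals, as the early Doom renderer is described as doing."""
    drawn = set() if drawn is None else drawn
    if sector.id in drawn:           # keep the recursion from revisiting a sector
        return
    drawn.add(sector.id)
    for wall in sector.walls:
        draw_wall(wall, view)
    for opening, neighbour in sector.portals:
        if portal_visible(opening, view):
            render_from(neighbour, view, drawn)
```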

Around the time that the id team realized that the Doom game engine might be too slow, id Software was asked to port Wolfenstein 3D to the Super Nintendo. The Super Nintendo was even less powerful than the IBM-compatible PCs of the day, and it turned out that the ray-marching Wolfenstein renderer, simple as it was, didn’t run fast enough on the Super Nintendo hardware. So Carmack began looking for a better algorithm. It was actually for the Super Nintendo port of Wolfenstein that Carmack first researched and implemented binary space partitioning. In Wolfenstein, this was relatively straightforward because all the walls were axis-aligned; in Doom, it would be more complex. But Carmack realized that BSP trees would solve Doom’s speed problems too.
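
To give a concrete picture of what a BSP tree buys you, here is a deliberately minimal 2D sketch, in Python and with invented structures rather than Doom's. The tree is built once by recursively picking a wall as a "splitter" and partitioning the rest; at render time a simple walk of the tree yields a correct back-to-front drawing order from any viewpoint, with no per-frame sorting. (For brevity the sketch assumes no wall straddles a splitter's line; a real builder splits such walls in two.)

```python
# A minimal 2D BSP sketch, illustrative only. Each wall is ((x1, y1), (x2, y2)).

def side(splitter, point):
    """Signed test: positive if `point` lies on the front side of the infinite
    line through `splitter`, negative if it lies behind it."""
    (x1, y1), (x2, y2) = splitter
    return (x2 - x1) * (point[1] - y1) - (y2 - y1) * (point[0] - x1)

def build_bsp(walls):
    """Pick a splitter, partition the remaining walls by side, recurse.
    This is done once, when the level is built, not at render time."""
    if not walls:
        return None
    splitter, rest = walls[0], walls[1:]
    front = [w for w in rest if side(splitter, w[0]) >= 0 and side(splitter, w[1]) >= 0]
    back  = [w for w in rest if w not in front]    # straddling walls would be split here
    return {"wall": splitter, "front": build_bsp(front), "back": build_bsp(back)}

def draw_back_to_front(node, viewpoint, draw):
    """Painter's-algorithm order for any viewpoint: always visit the half of the
    level on the far side of each splitter before the splitter itself."""
    if node is None:
        return
    if side(node["wall"], viewpoint) >= 0:          # viewer is in front of this wall
        draw_back_to_front(node["back"], viewpoint, draw)
        draw(node["wall"])
        draw_back_to_front(node["front"], viewpoint, draw)
    else:                                           # viewer is behind this wall
        draw_back_to_front(node["front"], viewpoint, draw)
        draw(node["wall"])
        draw_back_to_front(node["back"], viewpoint, draw)

# Example: three walls, built once, then drawn from a viewpoint.
walls = [((0, 0), (4, 0)), ((4, 0), (4, 4)), ((0, 4), (0, 0))]
tree = build_bsp(walls)
draw_back_to_front(tree, viewpoint=(2, 2), draw=print)
```

A front-to-back walk is the same traversal with the recursive calls swapped, which is the more useful order when, as described above, you want to stop drawing once every screen column has been filled.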

175 Reader Comments

I don’t doubt that Carmack is extremely smart. But other people beat him to a similar rendering engine almost a year earlier. I don’t know if Ultima Underworld used binary space partitioning to speed up the rendering but the rendering was pseudo-3D, like Doom. Though the game was slower paced than Doom, the rendering engine wasn’t “slow” at rendering. Perhaps other tricks were being used... I know the rendering distance was small and the player was mostly walking around in the dark but it was an impressive engine for early 90s standards.

Ultima Underworld used a tile-based engine, allowing for rendering shortcuts somewhat similar to those used in Wolfenstein 3D. Also, the 3D view in Ultima Underworld occupies less than half the screen. Reducing the 3D view resolution goes a very long way toward improving framerate on low-powered systems. Doom had the ability to reduce the size of the 3D view, which allowed the game to run at smoother framerates on an underpowered 386 computer than what would have been possible with a fullscreen 3D view.

According to this page, Underworld stored level data as 2D tiles, which were used for culling, but the actual renderer was full 3D. The engine could handle both slopes, and looking up and down. From experience, the game ran much slower than Doom, even though the 3D view was only a small window on the screen.

It would be nice to mention that binary space partitioning adds several drawbacks. The static nature of the partitioning makes it impossible to move walls; this is why Doom can only change the height of floors and ceilings, and why doors and lifts can only move vertically. There was also a level-compilation step in preparing a level.

All these problems were removed in engines such as Duke3D. Moving sectors and levels that were instantly playable from the editor were nice... But also (as the video mentions) another great technique became possible: non-Euclidean levels, levels with 'impossible' geometry. In fact Duke3D has such a level, but it is a secret one (it's named 'Tier Drops'), and in the mid-'90s people were not experienced enough to even detect the non-Euclidean geometry. The central room in this map is really 4 different rooms you can see through 4 different gates located at the same height/floor. It's an amazing thing, and it was there a long time ago! Something Carmack could not achieve because of his decision to use BSP...

What I wonder is why the original BSP renderer used a painter's algorithm. Yes, it might enable loading faster once an object comes into view, but it takes up memory that wasn't doing anything, back when memory was expensive.

There was a graphics processor (in those days it was a separate box) from a GE division, called the Graphicon 700, sold in the '80s to do 3D rendering and real-time simulation, and it was based on a BSP rendering engine. The company was based in the RDU area and I think was a spin-out of Fuchs and UNC.

At that time BSP was a known technique, but it was considered tough for simulation because of the problem, pointed out by the author, of handling objects that changed position.

I don’t know if Ultima Underworld used binary space partitioning to speed up the rendering but the rendering was pseudo-3D, like Doom. Though the game was slower paced than Doom, the rendering engine wasn’t “slow” at rendering. Perhaps other tricks were being used... I know the rendering distance was small and the player was mostly walking around in the dark but it was an impressive engine for early 90s standards.

According to this page, Underworld stored level data as 2D tiles, which were used for culling, but the actual renderer was full 3D. The engine could handle both slopes, and looking up and down. From experience, the game ran much slower than Doom, even though the 3D view was only a small window on the screen.

The rendering speed was certainly less important in the game as it was an early adventure/rpg that Spector and gang refined into Deus Ex for world solving and traversal. It didn't need to be frenetic. It needed to run fast enough to support mostly melee combat and exploration with a lot of verticality.

There have always been dedicated graphics cards. Back in the Doom era I bought a Diamond Stealth 64 with 2MB of VRAM (PCI or VL-bus card, I don't remember which), specifically because it was very fast with Doom. Before that card I had a Cirrus Logic card (VL-bus), which was extremely slow on anything above 16-bit colour. I don't remember the card that was in the 386 I had. PCs have always had dedicated graphics cards, ever since I have used them (from the '80s); it's just that the functionality on those cards and the buses have changed. Some machines, like the Olivetti I had, had graphics on board, but that one had an expansion slot which let you connect an ISA board, so you could plug in a graphics card, hard drive, or sound card (AdLib / Sound Blaster / Gravis Ultrasound, etc.).

From experience, the game ran much slower than Doom, even though the 3D view was only a small window on the screen.

The rendering speed was certainly less important in the game as it was an early adventure/rpg that Spector and gang refined into Deus Ex for world solving and traversal. It didn't need to be frenetic. It needed to run fast enough to support mostly melee combat and exploration with a lot of verticality.

Sure, but the game could be basically unplayable on hardware that could run Doom OK. IIRC you could turn off basically all texture mapping, which made Underworld run fast enough even on old hardware, but let's just say immersion suffered.

Love this article, and this is something I come to Ars for. Would be interested in a followup article as to what the next development in rendering engines was and how the barriers of things like sloping walls were overcome.

Thanks for this article and for all the work that went in to making it both interesting and informative.

It is nice to come across anecdotes like this, not only because they are interesting in their own right, but because they serve as useful illustrations of what can be done when we put our minds to something.

One of my favorites - giving it much less space here than it deserves - concerns the development of "git", the version control system written by Linus Torvalds to help him manage the Linux kernel for GNU/Linux.

Originally, Linux used BitKeeper to manage the kernel source code, but this was a proprietary platform, so when Tridge - Andrew Tridgell - was accused of reverse-engineering BitKeeper's protocols to create the open source "SourcePuller", something had to happen. And it did. Oh boy, it did.

From Wikipedia:-

"The development of Git began on 3 April 2005.Torvalds announced the project on 6 April;it became self-hosting as of 7 April.The first merge of multiple branches took place on 18 April.

Torvalds achieved his performance goals; on 29 April, the nascent Git was benchmarked recording patches to the Linux kernel tree at the rate of 6.7 patches per second.

On 16 June Git managed the kernel 2.6.12 release."

That's all the work of one person, albeit a pretty smart one.

To go from a standing start to self-hosting in 3 days is impressive. To merge multiple kernel branches in 12 is astounding. To go from nothing to managing the 2.6.12 kernel release in less than 60 days is borderline ridiculous.

Tech is filled with adventures like this, and more often than not, each one brings with it some real insight into "how to get stuff done".

Thanks for this article, Ars. Keep 'em coming...

Had Linus taken the time to think through Git a bit better, we wouldn't have the ugly, illogical UX that we all suffer with today.

"by talking about how real-time graphics systems must be able to create an image in at least 1/30th of a second."

I think, technically, this sentence should probably say, "at most 1/30th of a second". If interpreted strictly, saying at least 1/30th of a second means the smallest it can be is 1/30th of a second, but it could be larger, like say, 1/2 second. This is obviously not correct.

The largest time value during which a computer may create an image is 1/30 of a second, if it is a real-time interactive system. It can be smaller (giving higher frame rates, and thus, less lag/more interactivity).

The optimization craft involved in computer graphics never ceases to amaze me. It's as if the most brilliant people on Earth--the ones who aren't wasting their lives in finance--are all writing rendering engines. The emotional appeal of gaming, the eternal scarcity of computing power, and the immense size of the commercial market all intersect to ensure this is so.

It's been several years since I last fired up Ultima Underworld, but I remember it as being extremely stately in comparison. You moved slowly, looked around slowly, fought slowly. 10 frames per second would have been more than adequate, where DOOM, with its laser focus on frenetic action, wanted to hit 60fps if it possibly could.

However, I think UU was true 3D, where Doom was only "2.5D".... the world was basically flat, with 'height maps' for rendering. That's why you could shoot things that were 'higher than you' without raising your gun, because they weren't higher than you. They were just displaced upward when drawn because of the heightmap. Now, Doom did have arbitrary angles and corners, so it was much better than Wolfenstein 3D, but it wasn't all the way there yet.

It might be fairer to compare Ultima Underworld with Quake, which was a true 3D engine in all respects. It still would have lost, but it probably wouldn't look quite so slow.

What happens with Carmack is what happens with a lot of well-known, respected video game programmers or designers. People get an idea of why others respect them (or why anyone does), and then over time, it goes to a ridiculous place (sometimes due to ignorance of other people's work, sometimes due to ignorance of the work itself). People also forget that these people worked alongside other, lesser-known individuals.

Miyamoto is another example of this - people behave as if he alone just shat out the original Zelda and other, later Nintendo games. They ignore games that came out before Zelda that were clearly already doing some of the things it did later (even if not as well, but being first doesn't need to be perfect). They ignore all of the other people involved in a project who were paramount to its creation.

I get it - it's easier to just shove all of the respect onto one, more-well-known person. But I think we do the industry a disservice, even if indirectly, when we put too much emphasis on one person's talent (and it's bizarre to not want to see the lineage of video games as it exists vs what your fanboyism may want).

To be fair, this is true of all the “great men of science” to a greater or lesser degree. It’d be great if we could find a way of teaching science history which wasn’t so fixated on individuals.

Carmack and Miyamoto do however both deserve huge respect for being abnormally humble, considering the circumstances.

"by talking about how real-time graphics systems must be able to create an image in at least 1/30th of a second."

I think, technically, this sentence should probably say, "at most 1/30th of a second". If interpreted strictly, saying at least 1/30th of a second means the smallest it can be is 1/30th of a second, but it could be larger, like say, 1/2 second.

No, I think your interpretation is wrong.

If the sentence was "the real-time graphics system takes at least 1/30th of a second to create an image," then it suggests the shortest amount of time it can take is 1/30th of a second.

However saying "the real-time graphics system (must be able to/can) create an image in at least 1/30th of a second" implies that 1/30th of a second is the longest it can take.

It's like the difference between saying "It'll take at least an hour to drive 60 miles," and "I can drive at least 60 miles in an hour".

I’d say it’s true in many areas. If you look at, e.g., famous pieces of art, literature, or pop culture, they’re usually not strokes of genius that exist completely independent from their surroundings but rather a combination and extension of pre-existing trends and ideas. (E.g., George Lucas drawing on old movie serials and mythology research for Star Wars.)

That whole “standing on the shoulders of giants” adage is true more often than not.

There have always been dedicated graphics cards. Back in the Doom era I bought a Diamond Stealth 64 with 2MB of VRAM (PCI or VL-bus card, I don't remember which), specifically because it was very fast with Doom. Before that card I had a Cirrus Logic card (VL-bus), which was extremely slow on anything above 16-bit colour.

As you note, PCs had graphics cards since the beginning, but their capabilities were limited. 2D hardware acceleration via the graphics card was just becoming a thing when Doom was released; it wasn't until 1995 that even 2D acceleration was standard on all cards. For a while, it was up in the air whether 3D calculations were better left to the CPU or the graphics card. AMD's 3DNow! instruction set was designed to facilitate better 3D calculations using the CPU: https://en.wikipedia.org/wiki/3DNow!

What happens with Carmack is what happens with a lot of well-known, respected video game programmers or designers. People get an idea of why others respect them (or why anyone does), and then over time, it goes to a ridiculous place (sometimes due to ignorance of other people's work, sometimes due to ignorance of the work itself). People also forget that these people worked alongside other, lesser-known individuals.

Miyamoto is another example of this - people behave as if he alone just shat out the original Zelda and other, later Nintendo games. They ignore games that came out before Zelda that were clearly already doing some of the things it did later (even if not as well, but being first doesn't need to be perfect). They ignore all of the other people involved in a project who were paramount to its creation.

I get it - it's easier to just shove all of the respect onto one, more-well-known person. But I think we do the industry a disservice, even if indirectly, when we put too much emphasis on one person's talent (and it's bizarre to not want to see the lineage of video games as it exists vs what your fanboyism may want).

Well, yes and no. Miyamoto probably gets credit for a lot of ideas that he didn't have himself; descriptions I've read of Nintendo's dev culture is that they encourage invention and coming up with new stuff for the games they're making.

Miyamoto's genius might be realizing what works and what's fun. Many of the earliest innovations probably were his invention, but while many of the later ones probably weren't, his good taste allowed him to pick the great ideas out of the pile of those that weren't quite right. (or which were terrible, but probably not that many terrible ideas got up to his level.)

He's not 100% by any means, but I don't think there's ever been a studio head that's managed to put out the same kind of consistent quality that he has. At his age, probably very few of the new mechanics are his idea anymore, but he definitely seems to have preserved the ability to spot brilliance in his subordinates. If he puts his name on a project, chances are amazingly high that we'll have a good time playing it.

DOOM creation myth revision: prior to DOOM for PC, DOOM was originally created and written on the NeXT platform. DOOM on NeXT precedes the Wiki timeline's 1993 PC release, with DOOM appearing, IIRC, as early as the 1991-1992 timeframe on FTP sites. John Carmack and John Romero were gods from the first release on NeXT's platform.

By 1991 the advantages of O-O programming were obvious, but it was painfully obvious, too, that a marketplace for a third OS, NeXT, was never going to crystallize. Pivot to the PC, and the rest is history. Which portended Steve Jobs' switch to little-endian Intel Inside, hoping to meet reality halfway.

AFAIK they were always planning on releasing on DOS computers from day one; they were merely using NeXT hardware for development.

Years ago I wrote a 3D rendering engine in JavaScript for polyhedra. Out of 150 or so, more than 140 displayed just fine, but there were a few for which I needed a better answer to the "does this face need to be rendered before or after this face?"

It's devilishly hard to solve the problem correctly, even if you don't care about performance (these were simple shapes.) I had heard of B.S.P. but did not know it in detail and thought I would soldier on fixing bugs.

Investigating examples where my old code broke, I realized that I needed to break up the facets in the case where the plane one facet lies in cuts through another facet. If you move from one side of the first facet to the other, you see that different parts of facet 2 are visible or not visible depending on which side of facet 1 you are on. The process of cutting up facets established the binary space partition.

At that point I gave up, looked at the algorithm book and it made perfect sense.

B.S.P. is an algorithm over partial orderings, and that is a rewarding area that is less familiar than algorithms over total orderings -- such as the "topological sort" that controls the operation of make, maven, etc.
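
For the curious, the facet-splitting step described in this comment looks roughly like the following generic sketch, written in Python here rather than the commenter's JavaScript, with an invented plane representation:

```python
# Cut a convex facet in two wherever a plane passes through it. The plane is given
# as (normal, d), with points p on the plane satisfying dot(normal, p) + d == 0.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def split_facet(vertices, normal, d, eps=1e-9):
    """Return (front_piece, back_piece); one of them is empty when the facet
    lies entirely on a single side of the plane."""
    front, back = [], []
    dists = [dot(normal, v) + d for v in vertices]
    n = len(vertices)
    for i in range(n):
        j = (i + 1) % n                     # next vertex around the facet
        vi, vj = vertices[i], vertices[j]
        di, dj = dists[i], dists[j]
        if di >= -eps:
            front.append(vi)
        if di <= eps:
            back.append(vi)
        if (di > eps and dj < -eps) or (di < -eps and dj > eps):
            t = di / (di - dj)              # where the edge crosses the plane
            cut = tuple(a + t * (b - a) for a, b in zip(vi, vj))
            front.append(cut)
            back.append(cut)
    return front, back

# Example: a unit square in the z=0 plane, split by the plane x = 0.5.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(split_facet(square, normal=(1, 0, 0), d=-0.5))
```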

I wonder if these concepts will be partially reversed for Stadia-type platforms, where potentially millions of people are looking at the same model from 'every' angle. With enough players in a given area it may be more efficient to run AND partially render the entire model centrally (I don't believe any rendering is centralized yet). Then create an individual viewpoint for each player into this world with cheaper discrete GPUs (much as the human brain does in reality). Rendering facets such as textures and rough lighting within the core model greatly reduces the load on each GPU, as it can concentrate on the individual viewpoint, using the article's concepts to focus on only the visible surfaces.