1. PC, maybe Mac. Maybe.
2. I have no clue. I would guess at least 128 megs of RAM and a 600-800 MHz processor to start with.
3. Not very well.
4. Most likely. A GeForce 3 sounds nice, but I would save your cash until Doom 3 is released in order to get the best price on hardware.
5. Couldn't say, but it seems very likely.

1. What computer, Mac or PC?
2. What will the requirements be to run it?
3. Will it run on my NVIDIA TNT2 card?
4. Would I need to get a new card?
5. Will Doom 3 have reloading? (Like in Half-Life)

Thanks for any replies!

1. It's being targeted to Win32, Linux and MacOS X.

2. If you're a graphics freak, nothing below a P3 900 and a GeForce 3. A GeForce 1 is the bare minimum of functionality, although the visual quality will be awful compared to Carmack's vision. No less than 256 megs of RAM, btw.

2. ... A GeForce 1 is the bare minimum of functionality, although the visual quality will be awful compared to Carmack's vision. ...

Or just slower? I think the GeForce1/2 has the capability to do per-pixel bumpmapping with specular. There have been tech demos out for some time that prove this can be done, in Carmack's stated 5 passes. If you have a GeForce-family card, you can download the demo from nVidia's web site, along with some documents concerning the basic algorithm behind this technique. Of course, there are a lot of "holes" that aren't accounted for, which I'm sure Carmack has patched with particular genius.
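For anyone curious what that per-pixel math actually boils down to, here's a toy Python sketch (purely illustrative, not id's code or nVidia's demo) of the diffuse term the DOT3 texture operation computes: the dot product of a normal fetched from a normal map with the per-pixel light direction.

```python
# Sketch of the DOT3 per-pixel diffuse term. The encoding convention
# (8-bit texel expanded from [0, 255] to [-1, 1]) is the standard one,
# but this is a toy model, not any shipping renderer's code.

def decode_normal(rgb):
    """Expand an 8-bit normal-map texel from [0, 255] to [-1, 1]."""
    return tuple(c / 127.5 - 1.0 for c in rgb)

def dot3_diffuse(texel, light_dir):
    """Clamped N . L, as DOT3 hardware evaluates it per pixel."""
    n = decode_normal(texel)
    d = sum(a * b for a, b in zip(n, light_dir))
    return max(0.0, d)

# A texel of (128, 128, 255) encodes a normal pointing straight out of
# the surface, so a head-on light gives full brightness and a grazing
# light gives almost none.
```

The specular term is the same idea with a half-angle (or reflection) vector in place of the light vector, which is why the demos need multiple passes to build the full result.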

With a GeForce2-type card, if you run at a really, really, really, REALLY low resolution (like 320x200), you might be able to get acceptable framerates with full bumpmapping turned on, because it's mostly a fillrate thing rather than a polygon thing. Of course, at 320x200 you'll have to be right up against something before you see any surface lighting effects like that. But if you want true Doom nostalgia, you'll be running at 320x200 anyway... :)
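The "it's mostly a fillrate thing" claim is easy to sanity-check on the back of an envelope. A hypothetical Python sketch (the fillrate and overdraw numbers are my own illustrative guesses, and real cards hit other bottlenecks long before this ideal figure):

```python
# Rough fps estimate from resolution, pass count, and pixel fillrate.
# All numbers here are illustrative assumptions, not benchmarks.

def estimated_fps(width, height, passes, fillrate_mpixels, overdraw=2.0):
    """Ideal frames/sec if fillrate were the only bottleneck."""
    pixels_per_frame = width * height * passes * overdraw
    return (fillrate_mpixels * 1e6) / pixels_per_frame

# With 5 passes and a guessed 700 Mpixel/s of fillrate, 320x200 has
# headroom to spare, while 1024x768 eats an order of magnitude more.
low_res  = estimated_fps(320, 200, 5, 700)
high_res = estimated_fps(1024, 768, 5, 700)
```

Which is exactly why dropping the resolution, rather than the polygon count, is the lever that matters for a multipass renderer.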

Or just slower? I think the GeForce1/2 has the capability to do per-pixel bumpmapping with specular. There have been tech demos out for some time that prove this can be done, in Carmack's stated 5 passes.

He wrote a different rendering module for each video card generation. GeForce1/2 supports bumpmapping, true, but Carmack's not using bumpmapping; he's using pixel shaders. Seems like they're way more flexible.

He wrote a different rendering module for each video card generation. GeForce1/2 supports bumpmapping, true, but Carmack's not using bumpmapping; he's using pixel shaders. Seems like they're way more flexible.

But I thought the whole point of the graphics in the new game was that high poly models were sort of condensed into bumpmaps, and then rendered in real time with per-pixel bumps and specular effect. Did I miss something? :(

I understand that you can use nVidia's pixel shaders to do bump mapping like that, and it would be much more flexible (since it's a higher-generation card anyway), but I'm sure I heard Carmack say in a post on Slashdot that GeForce1/2 would take 5-6 passes. That matches up exactly with the number of passes the demos use for per-pixel bumpmapping. It might not have many advanced subtle shading effects (like the ability to render velvet, or moire effects on surfaces), but the GeForce1/2 certainly has the dot3 instruction, which lets it do multipass bumpmapping, register combiners to do the lighting math, and hardware-accelerated cubemaps for multiple spotlights and environmental reflections and stuff. That's still a lot of horsepower, and it would be awful if id left these wonderful early cards pretty much untapped; they should at least be given the opportunity to run the per-pixel effects, even if it was slower...
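For what it's worth, the cubemap lookup mentioned above is simple enough to sketch: the hardware picks the cube face with the largest direction component and projects the other two components onto it. A hypothetical Python version (the face-naming and u/v sign conventions here are mine, simplified from the real OpenGL ones):

```python
# Toy cubemap lookup: direction vector -> (face, u, v). Conventions are
# simplified for illustration, not the exact OpenGL cube-map spec.

def cubemap_face(direction):
    """Pick the dominant axis and project onto that face."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, major, u, v = ('+x' if x > 0 else '-x'), ax, -z, -y
    elif ay >= ax and ay >= az:
        face, major, u, v = ('+y' if y > 0 else '-y'), ay, x, z
    else:
        face, major, u, v = ('+z' if z > 0 else '-z'), az, x, -y
    # Map the projected coordinates from [-major, major] to [0, 1].
    return face, (u / major + 1) / 2, (v / major + 1) / 2
```

That one lookup per pixel is what makes omnidirectional light attenuation and environment reflections cheap on this generation of hardware.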

But you're probably right about having a lot of time to get a new card-- they might not even be selling GeForce2 cards by the time the game comes out...

But I thought the whole point of the graphics in the new game was that high poly models were sort of condensed into bumpmaps, and then rendered in real time with per-pixel bumps and specular effect. Did I miss something? :(

Not even a model with more than 10 million faces could accurately represent the detail you can achieve with bumpmaps. The whole point of having high-detail models is to:

2) Enhance the bumpmaps by refining the lighting calculations. High-poly surfaces are way more precise than low-detail ones when it comes to rendering microdetail, mainly because the light orientation on the bumpmaps is derived from the respective triangle.
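A toy version of the baking step behind all this (hypothetical Python, nothing to do with id's actual tools): derive a per-pixel normal for each texel of a heightfield using finite differences, which is the simplest way detail gets "condensed" into a map the renderer can light per pixel.

```python
# Bake a 2-D heightfield into per-texel unit normals via central
# differences. A toy stand-in for a real normal-map baking tool.

def bake_normals(height, scale=1.0):
    """Return a grid of unit normals derived from a heightfield."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            n = (-dx * scale, -dy * scale, 1.0)
            length = sum(c * c for c in n) ** 0.5
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

# A flat heightfield bakes to normals pointing straight up; any slope
# tilts the normal away from (0, 0, 1).
```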

2) Enhance the bumpmaps by refining the lighting calculations. High-poly surfaces are way more precise than low-detail ones when it comes to rendering microdetail, mainly because the light orientation on the bumpmaps is derived from the respective triangle.

True. However, after all the preprocessing is done to bake that detail into the bumpmaps and the other vector maps, the basic pixel math needed to render the bumpmaps in-game is available on the GeForce1/2 as well as the newer high-end cards. It's sort of like the MMX or 3DNow processor extensions, where the basic graphics of a game can look the same as in the non-MMX version, but the MMX version is much faster and uses a different rendering method.

All I'm saying is that the GeForce1/2 is perfectly capable of rendering per-pixel bumpmaps with specular, and that Carmack should write a backend to support this feature, even if it just resolves to watching a "What I Did on My Doom 3 Vacation" slideshow ;)

Anywho, you've probably read this already, but it's what I'm basing most of my uncalled-for rant on. And the screenshots are lovely... Oh, I wish I had a GeForce...

True. However, after all the preprocessing is done to bake that detail into the bumpmaps ...

Technically, this is accomplished as the frame is being rendered. It's not saved into a grayscale bumpmap for later use. In fact, bumpmaps wouldn't really make it work right. I'm guessing it's id's in-house "fallout" material.

Yes, it could be done on a GeForce2. Not gonna mention the GF1 for now. But what about bottlenecks?
Maybe one could indeed get the GF2 renderer to work at a GF3 level, but that means "screenshot mode" ^_^

...Ah. Well, I guess my PII 300 isn't going to cut it either :( It sort of makes you wonder what percentage of processor cycles is actually left over for the game logic proper... Maybe they could have a text-mode output driver, like someone once did for the original Doom, and I could run Doom III on my calculator (minus the Dolby 5.1, of course).
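For the curious, a text-mode driver in that spirit (aalib did this trick for the original Doom) really just maps pixel brightness onto a character ramp. A throwaway sketch, where the ramp string is my own guess rather than anything the actual hack used:

```python
# Map brightness values in [0, 1] onto an ASCII "density" ramp,
# darkest to brightest. A toy model of a text-mode output driver.

RAMP = " .:-=+*#%@"

def to_ascii(row):
    """Convert one scanline of brightness values to characters."""
    return "".join(
        RAMP[min(int(b * len(RAMP)), len(RAMP) - 1)] for b in row
    )

# to_ascii([0.0, 0.5, 1.0]) yields a space, a mid-density '+',
# and the brightest character '@'.
```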