Title

Carmack on the NV30

Description

John Carmack, of <a href="http://www.idsoftware.com/">id Software</a>, has updated his .plan file (<a href="http://www.earthli.com/users/marco/carmack_2003_01_29.php">cached copy</a> at earthli.com) with his impressions of the NV30 from NVidia. He compares it to the current king of the video card market, the ATI R300, and describes how each part handles his DOOM3 engine. <bq>At the moment, the NV30 is slightly faster on most scenes in Doom than the R300, but I can still find some scenes where the R300 pulls a little bit ahead.</bq> He mentions that there are several code paths, or renderers, available for the DOOM engine, two of which (ARB and ARB2) use no card-specific functions. The ATI part runs the high-quality, non-specific version almost as fast as its native implementation, but <iq>[t]he NV30 runs the ARB2 path MUCH slower than the NV30 path. Half the speed at the moment</iq>, so he can't really do what he calls an <iq>apples-to-apples comparison</iq>. The new part from NVidia is somewhat physically intrusive as well: <iq>[t]hey take up two slots, and when the cooling fan fires up they are VERY LOUD.</iq> ARB support here refers to a new standardized API for loading and executing programs on video cards with hardware pixel shader/T&L support. Carmack uses only standardized functions in as many pipelines as possible to avoid API fragmentation: <bq>Doom has dropped support for vendor-specific vertex programs (NV_vertex_program and EXT_vertex_shader), in favor of using ARB_vertex_program for all rendering paths.</bq> Worry not: Carmack is already looking to the future, beyond the DOOM engine (he's been working on it for two years now), when he says: <iq>[i]t is going to require fairly deep, non-backwards-compatible modifications to an engine to take real advantage of the new features...</iq>
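For readers unfamiliar with the extension: ARB_vertex_program defines a common, assembly-like program format that any conforming card can execute, which is what lets the same code path run on both NVidia and ATI hardware. A minimal, purely illustrative program (not taken from Doom's source) that transforms each vertex by the modelview-projection matrix and passes the vertex color through might look like this:

```
!!ARBvp1.0
# Bind the tracked modelview-projection matrix as four row parameters.
PARAM mvp[4] = { state.matrix.mvp };
TEMP pos;
# Transform the incoming position: one dot product per output component.
DP4 pos.x, mvp[0], vertex.position;
DP4 pos.y, mvp[1], vertex.position;
DP4 pos.z, mvp[2], vertex.position;
DP4 pos.w, mvp[3], vertex.position;
MOV result.position, pos;
# Pass the vertex color through unchanged.
MOV result.color, vertex.color;
END
```

Because this format is vendor-neutral, dropping NV_vertex_program and EXT_vertex_shader costs Carmack nothing in capability on this path, only the maintenance burden of parallel implementations.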
I take this to mean that the required feature set of the current DOOM engine is properly frozen, but tweaking to accommodate new hardware (and improve performance) continues. Completely new functionality exposed by the new cards --- more programmability through the new APIs or larger program sizes (graphics card programs) --- will only be explored in the next generation of the engine. However, he has managed to work some improvements into the existing renderer for those who purchase the horsepower: <iq>[p]er-pixel environment mapping, rather than per-vertex. This fixes a pet-peeve of mine, which is large panes of environment mapped glass that aren't tessellated enough, giving that awful warping-around-the-triangulation effect as you move past them</iq> and <iq>[l]ight and view vectors normalized with math, rather than a cube map ... [which] give[s] you ... a perfectly smooth specular highlight, instead of the pixelish blob that we get on older generations of cards.</iq> Both are somewhat minor wins for a faster-paced game, but in a slower, environment-based game, any visual improvement is a good one if it enhances immersion. The next generation of cards will only further improve internal data precision, with <iq>[f]loating point framebuffers and complex fragment shaders</iq> becoming very important, allowing <iq>much better volumetric effects, like volumetric illumination of fogged areas with shadows</iq> --- again, something that will increase the movie-like feel of a scene as the interaction of light, air, and shadow improves.