
Carmack on Matrox Parhelia

Published by marco

ShackNews is reporting that John Carmack recently updated his .plan file in Carmack On New Cards, Rendering − the actual .plan file is here − Carmack’s 2002/06/28 .plan.

If that .plan link is broken, you can get an archived copy here − Carmack’s 2002/06/28 .plan

His latest two updates concern the Matrox Parhelia. The first pretty much trashes the card, calling it “really disappointing for the first 256 bit DDR card” and noting that its “[a]nti aliasing features are nice, but it isn’t all that fast in minimum feature mode” − meaning no one is going to use them, because the performance hit is too steep.

His second update walks back much of what he said in the first. In fact, he praises Matrox’s rollout of the new card for providing such good testing hardware and solid drivers right out of the gate:

“I was duly impressed when the P10 [Parhelia] just popped right up with full functional support for both the fallback ARB_ extension path (without specular highlights), and the NV10 NVidia register combiners path. … this is the best showing from a new board from any company other than Nvidia.”
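To put the quote in context: engines of that era probed the driver at startup and picked a rendering back end per card. Here is a minimal sketch of the idea, assuming the classic extension-string check − the helper names and structure are illustrative, not id Software’s actual code:

/* Pick a back end by probing the driver's extension string.  A
 * vendor-specific path is preferred; otherwise fall back to the
 * plain ARB multitexture path (which, per Carmack, drops specular
 * highlights).  A production check would match whole tokens rather
 * than substrings. */
#include <string.h>
#include <GL/gl.h>

typedef enum { PATH_ARB, PATH_NV10 } render_path_t;

static int has_extension(const char *name)
{
    const char *exts = (const char *) glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

static render_path_t choose_render_path(void)
{
    /* The NV10 path drives NVidia's register combiners; the Parhelia
     * exposing this path cleanly is what impressed Carmack. */
    if (has_extension("GL_NV_register_combiners"))
        return PATH_NV10;
    return PATH_ARB;
}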

Given that strong showing, he decided to “go ahead and write a new back end that would let the card do the entire Doom interaction rendering in a single pass”. That must have taken a good two or three hours of Mr. Carmack’s time, I bet. In addition, rather than using the NVidia-compatible extensions the Parhelia supports − and with them the existing, already-written NVidia rendering paths − “[Carmack] decided to try using the prototype OpenGL 2.0 extensions they provide.”
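To see what single-pass rendering buys, consider the sketch below. It is illustrative only − the bind/draw helpers are hypothetical, not Doom’s actual back end − but it shows the shape of the saving: each light’s “interaction” with a surface (bump-mapped diffuse plus specular) takes either several additive passes over the same geometry or just one.

#include <GL/gl.h>

/* Hypothetical helpers standing in for real engine state setup. */
void bind_diffuse_state(void);
void bind_specular_state(void);
void bind_full_interaction_state(void);
void draw_surface(void);

/* Limited hardware: accumulate the diffuse and specular terms of one
 * light in separate additive passes over the same surface. */
void draw_interaction_multipass(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);    /* add each term into the framebuffer */
    glDepthFunc(GL_EQUAL);          /* geometry is already in the depth buffer */

    bind_diffuse_state();
    draw_surface();                 /* pass 1: diffuse contribution */

    bind_specular_state();
    draw_surface();                 /* pass 2: specular contribution */
}

/* With enough texture units (or a programmable fragment path), the
 * whole interaction runs in a single traversal of the geometry. */
void draw_interaction_single_pass(void)
{
    bind_full_interaction_state();  /* all terms bound at once */
    draw_surface();                 /* one pass, half the vertex traffic */
}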

That experiment brought on a discussion of OpenGL 2.0, in which he mentions that “[he is] now committed to supporting an OpenGL 2.0 renderer for Doom…”, even though “[a] GL2 driver won’t give any theoretical advantage over the current back ends optimized for cards with 7+ texture capability”. He sees it more as a move away from “lower level coding practices” and toward “C-like graphics languages”. In addition, he “strongly urges [vendors] to implement GL2 instead of proprietary extensions”.

Thus, the decision to write a rendering pipeline for OpenGL 2.0. Any card that supports this non-proprietary specification should be able to run Doom III in all of its glory with no extra work. As mentioned above, since the other optimized pipelines for existing cards are already written, it won’t make any difference for Doom III whether he writes the GL2 pipeline or not, but from an open-standards and engineering perspective, it makes a lot of sense.

He sees the trend as moving toward these C-like languages for all graphics-chip programming, replacing the “current interfaces we are using”. For maximum compatibility, he suggests rallying around OpenGL 2.0 as the language of choice, as opposed to the Cg language recently proposed by NVidia as an alternative.

“It won’t be too long before all real work is done in one of these, and developers that stick with the lower level interfaces will be regarded like people that write all-assembly PC applications today.”
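To make the analogy concrete, here is what such a “C-like graphics language” looks like. The syntax below is GLSL-style − the language that eventually grew out of the GL2 proposals − and the shader is a deliberately trivial example, not anything from Doom: per-pixel diffuse lighting written as ordinary-looking C instead of a pile of register-combiner setup calls.

/* A fragment shader as an engine would embed it: a C string handed
 * to the driver for compilation at run time. */
const char *diffuse_fragment_shader =
    "varying vec3 normal;                                \n"
    "varying vec3 light_dir;                             \n"
    "uniform vec4 light_color;                           \n"
    "void main()                                         \n"
    "{                                                   \n"
    "    /* clamped N dot L diffuse term */              \n"
    "    float d = max(dot(normalize(normal),            \n"
    "                      normalize(light_dir)), 0.0);  \n"
    "    gl_FragColor = light_color * d;                 \n"
    "}                                                   \n";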

AnandTech has a full review of the Matrox Parhelia, Matrox’s Parhelia − A Performance Paradox. Once again, as with the Radeon 8500, we have a card that, on paper, should beat NVidia’s top-of-the-line offerings. And, once again, the actual hardware fails to meet expectations.

As with ATI, Matrox focuses on visual quality over speed. “There’s no doubt that the analog output quality of Parhelia is excellent, it’s definitely the best we’ve seen to date.” The price of that quality is still too high, though: the card can’t sustain a sufficient frame rate with full high-quality rendering enabled. Its best feature is Fragment Anti-Aliasing, Matrox’s new anti-aliasing algorithm. With anti-aliasing enabled, the Parhelia is generally slower than the comparable GeForce4 Ti 4600, “[b]ut rest assured that Matrox’s Fragment Anti-Aliasing looked nothing short of amazing.” At 1024 × 768, it was actually a little faster than the GeForce4, though both hovered in the mid-40s for frame rate.