Pixar Gets It Wrong For Once

The Tech Report has a comprehensive article on the progress being made toward film-quality 3D graphics on the desktop. The article begins by discussing Pixar's view of that prospect at the time NVIDIA launched the GeForce2:

NVIDIA's Jen-Hsun Huang said at the launch of the GeForce2 that the chip was a "major step toward achieving" the goal of "Pixar-level animation in real-time". But partisans of high-end animation tools have derided the chip companies' ambitious plans, as Tom Duff of Pixar did in reaction to Huang's comments. Duff wrote:
`Pixar-level animation' runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I'm guessing. There's about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.)

Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What's the chance that we wouldn't put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn't NVIDIA tried to give us a carton of these things? -- think of the publicity milage [sic] they could get out of it!
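
Duff's back-of-the-envelope figure is easy to check. Here is a minimal sketch of the arithmetic in Python, taking his own estimates (about 1,000 renderfarm CPUs, roughly 50 days of renderfarm time, a 1.5-hour movie) at face value:

    # Duff's estimate, restated: ~1,000 renderfarm CPUs working ~50 days
    # to produce a 1.5-hour film.
    cpus = 1_000
    renderfarm_days = 50
    movie_hours = 1.5

    cpu_hours = cpus * renderfarm_days * 24   # ~1.2 million CPU-hours
    slowdown = cpu_hours / movie_hours        # ~800,000x slower than real time
    print(f"{cpu_hours:,.0f} CPU-hours, {slowdown:,.0f}x slower than real time")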

Duff had a point, and he hammered it home by handicapping how long it would take NVIDIA to reach such a goal:

At Moore's Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. And 20 years from now, Pixar won't be even remotely interested in TS2-level images, and I'll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA's heirs and assigns.
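
To make Duff's timetable explicit, here is a quick sketch of the same projection, taking his numbers at face value: an 800,000x shortfall, an assumed 80x head start for the graphics chip, and improvement at "a factor of 10 in 5 years":

    import math

    # Duff's projection, with his figures taken at face value.
    gap = 800_000        # real-time shortfall estimated above
    head_start = 80      # assumed advantage of today's graphics hardware
    rate = 10            # "a factor of 10 in 5 years"
    years_per_step = 5

    steps = math.log(gap / head_start, rate)      # log10(10,000) = 4
    print(f"{steps * years_per_step:.0f} years")  # -> 20 years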

Later in the article, the author makes clear that events may be progressing more quickly than Pixar thought possible:
Pixar had better be ready to receive its carton of graphics cards. Only two years after Tom Duff laughed out loud at NVIDIA's ambitions, graphics chip makers are on the brink of reaching their goal of producing Hollywood-class graphics on a chip.

After distributing this article, I heard from David Smith, AirEight's Chairman and CTO and a fairly legendary 3D programmer:
The fact is that these chips are easily doubling in price/performance every six months. This means that every five years we get 32x improvement. Ten years is 1000x. But, the real point is that there is a fundamental change occurring with the nature of the GPUs. They are now becoming essentially massively parallel general purpose shading engines. That is, what we are really seeing is a completely new thing here. Think of it as essentially that 1000 server farm on a single chip without the majority of your cycles wasted in bandwidth management. The simple reality is that the performance of these things should easily be able to achieve Pixar like capabilities in 5 years -- defined in terms of what they will be doing then, not now. In fact, you will see Pixar adopt these architectures in 2-3 years, and maybe less because their competitors certainly will, so by definition, this technology will "catch up" to Pixar.
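
Smith's compounding is worth spelling out. A short sketch, under the assumption of a fixed doubling period: doubling every six months compounds to roughly 1,000x over five years, while the 32x-per-five-years figure he quotes corresponds to doubling about once a year.

    # Compound improvement from a fixed doubling period.
    def improvement(years, doubling_period_years):
        return 2 ** (years / doubling_period_years)

    print(f"{improvement(5, 0.5):,.0f}x in 5 years at 6-month doubling")    # ~1,024x
    print(f"{improvement(10, 0.5):,.0f}x in 10 years at 6-month doubling")  # ~1,048,576x
    print(f"{improvement(5, 1.0):,.0f}x in 5 years at yearly doubling")     # 32x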

What interests me about this is the gulf between price-performance improvements at the high end and those in the mid-range to low end. The rewards for success are greater in the middle of the bell curve of technology adoption, so both the capital available for R&D and the pressure to use that capital efficiently are greater there -- at least that's how I assume it must work. Has anyone done a study comparing CPU performance improvements in, say, supercomputers versus the desktop? My hunch is that the desktop curve is steeper.
