And then Intel’s little report on raytracing hopped across my desk like a little white rabbit with a pocketwatch, and I followed it right down into the rabbit hole. There it was, a parallel world that connected a lot of dots, some of which hadn’t even been drawn yet.
As some of you already know, real-time raytracing has long been one of my pet topics for the industry. The concept is easy: rather than trying to approximate every single pixel’s light value through a myriad of pipelines and shaders, you trace rays of light from eye to source using one physics calculation. That calculation takes a lot into account based on what the light hits, but it is just one calculation, repeated millions and millions of times per frame.
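To make “one calculation, repeated per ray” concrete, here’s a toy sketch of the kind of intersection test a raytracer runs for every ray — a minimal ray-vs-sphere check, not Intel’s actual code or a full renderer:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    This is the single intersection calculation a raytracer repeats for
    every ray. origin/direction/center are (x, y, z) tuples; direction is
    assumed to be normalized.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1 for a unit direction)
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One ray from the eye at the origin, straight down +z, toward a sphere at z=5.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at t = 4.0
```

A real tracer would then shade the hit point and possibly spawn reflection/shadow rays, but each of those is just this same kind of calculation again.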
So how long until all this starts pushing out of the theoretical and into the real? Well, Intel says that we’re looking at needing about 450 million raysegs per second before things get ‘interesting.’ And since a single-core P4 at 3.2 GHz was capable of 100M raysegs/sec, that means we’re looking at…
Tee-Hee !!!
Raytracing holy grail indeed…
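For anyone who wants the punchline spelled out, the back-of-envelope math from the figures above (Intel’s numbers, not mine) works out like this:

```python
# Intel's stated figures, as quoted above.
NEEDED = 450e6       # raysegs/sec before things get 'interesting'
PER_P4_CORE = 100e6  # raysegs/sec from one single-core 3.2 GHz P4

cores = NEEDED / PER_P4_CORE
print(cores)  # 4.5 -- roughly a quad-core-and-change worth of P4s
```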
On a more serious and cynical note: Intel (and everybody else, for that matter) hit the wall with die shrinkage and the MHz race; the returns simply started diminishing. So they thought of doubling cores. It made sense: get twice the processor while maintaining the thermal dissipation. Only snag: software wasn’t written to be parallel, and games in particular are highly un-parallel beasts. And since graphics are handled more and more by the GPU, Intel (and the rest…) had no market on the high-end desktop (the game enthusiast).
So this newfound parallel scalability of raytracing comes in handy precisely for that market, where Intel expects to sell those quad-cores (and dual quad-cores, presumably).
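The “parallel scalability” point is easy to see in code: each pixel’s ray depends only on itself, so rows of the image can be split across workers with no shared state. A toy sketch (assuming a stand-in `trace_row` in place of a real per-pixel tracer):

```python
from concurrent.futures import ThreadPoolExecutor

def trace_row(y, width=640):
    # Stand-in for per-pixel ray tracing: every pixel's value depends only
    # on its own ray, so rows can be computed in any order, by any worker.
    return [(x * y) % 256 for x in range(width)]

def render(height=480, workers=4):
    # No shared state between rows -- this is the 'embarrassingly parallel'
    # property that lets raytracing soak up extra cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(trace_row, range(height)))

image = render(height=8, workers=2)
```

(Python threads share one core because of the GIL, so a real renderer would use processes or native threads per core — but the independence of the work units is the same either way.)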
Hey, for me it’s all gravy. I’m a 3D hobby enthusiast, and this new marketing direction comes right on the money…
So…raytracing is nice, but what about Radiosity…
PS: Someone please clarify this: isn’t this task ideally suited to the “Cell” processor? Hmmmm… Will the PS3 be saved by Intel in a year or two?
I remember playing with ray-tracing programs way back on the original Amiga 500. It took hours to generate a single image of a 3D model mirrored on the surface of a chrome sphere but, boy, was that cool! I imagine the new dual Core 2 Duos (and maybe future quad Core 2 Quad (?) machines) would make the requisite number of computations feasible for gaming use.
I agree that texture-mapped raster graphics has pretty much been tapped out, with diminishing returns. Maybe a switch to ray tracing will finally bring photorealistic gaming to reality. After reading this article, it sounds like it is not only feasible, but that extra cores are better suited to ray tracing than to raster graphics.
It is about time for a paradigm shift, isn’t it? And none too soon. The real question is whether the game industry will make such a switch. The first few ray-traced games would likely be very poor sellers due to the limited number of machines that could run them properly. That will change when dual- and quad-core machines become commonplace. Anything in the meantime would likely use raster graphics with ray tracing for added effects.
I agree with Joao: the scalability of the Cell processor sounds like a natural fit for this. I wonder if the guys at Sony will have some surprises up their sleeve when it comes to lighting effects on the PS3.
Fun, ain’t it…
Something they SHOULD have done YEARS ago…
I STILL can’t see WHY the drivers, the INFs, the DLLs, and the codecs CAN’T be loaded ONTO the video card…
It would save about 60-90% of the CPU processing, ALL of it…
CARDS that know what to DO, instead of this CRAP that’s out here…
Add the SAME to the audio card, then use a timing sync between THEM… and ALL video can work…
The Cell processor? Don’t hype a technology that has an 80%+ failure rate at the manufacturing level.
It seems that a lot of GPU-on-the-CPU talk is starting, driven by the realization that a four-core processor is overkill for non-scientific uses. Those cores could easily be put toward graphics power.
Processor yields are a bitch… The PS3 Cell is being manufactured with only 7 of the 8 SPE cores active; that way, if one is bad, they can still use the silicon. And some others may end up in a Toshiba TV near you with just 4 or 5 SPEs pumping. There’s always a second-grade market — hey, that’s why Intel invented the Celeron: it’s a Pentium with the cache gone bad, so they disable half and voilà.
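The 7-of-8 trick pays off more than you might expect. With a simple independent-defect model (illustrative numbers only, not real fab data), allowing one dead SPE roughly doubles the sellable yield:

```python
from math import comb

def chip_yield(cores, spares, p_core_bad):
    """Probability a chip is sellable when up to `spares` cores may be bad.

    Assumes independent per-core defects -- a toy model, not real fab data.
    """
    ok = 0.0
    for bad in range(spares + 1):
        ok += comb(cores, bad) * p_core_bad**bad * (1 - p_core_bad)**(cores - bad)
    return ok

# With, say, a 10% per-SPE defect rate:
print(chip_yield(8, 0, 0.10))  # ~0.43 -- all 8 SPEs must work
print(chip_yield(8, 1, 0.10))  # ~0.81 -- shipping 7-of-8 nearly doubles yield
```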
Quad-core silicon is a benefit of die shrinkage: smaller dies, so they can fit two duals in the space of one. But don’t count too much on the yields…