260°

When Will Ray Tracing Replace Rasterization?

During the past few years, ray tracing seems to have become the El Dorado of the real-time 3D world. The rendering technique sparked a surge of interest when a young researcher by the name of Daniel Pohl devoted a research project to the technology in 2004.

The general public took an interest in his work largely because Pohl chose to focus on id Software's famous Quake III, Quake IV, and Quake Wars 3D shooter franchise. The researcher got a lot of media coverage, and gamers began dreaming of a bright future in which their favorite titles would be ray traced and free of rasterization.

Intel soon became aware of the buzz and spotted an ideal way to justify increasing the number of cores in its processors. The company quickly started its own research program and now never misses an opportunity to remind us that ray tracing is the future of real-time 3D games. But is it, really? What technical realities lie behind the marketing hype? What are the real advantages of ray tracing? Can we really expect it to replace rasterization? Tom's Hardware will try to provide some answers to those questions.

Read Full Story >>
tomshardware.com
Some ray tracing examples (click to enlarge):
Ray traced glass
Ray traced villa
Ray traced sports car
Ray traced face
Ray tracing vs rasterization
Ray traced old streets
Godmars2906093d ago

My guess as to why it hasn't happened yet is file size. Can't imagine objects that detailed fitting onto a DVD en masse.

Kakkoii6093d ago (Edited 6093d ago )

Nope.. It's because ray tracing a whole scene takes an epic amount of processing power.

I don't see why you think model detail has anything to do with ray tracing. Sure, the more polygons, the more detail. But that has nothing to do with ray tracing. Ray tracing is an algorithm for simulating how light works in real life. It only traces as many rays as there are pixels on your screen (your resolution). So ray tracing a game like Quake 3 would still take an assload of power.
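To put that cost in rough perspective, here's a back-of-the-envelope tally of ray counts. The resolution, bounce count, and frame rate below are assumptions for illustration, not figures from the article:

```python
# Rough ray-count estimate for real-time ray tracing (illustrative numbers only).
width, height = 1920, 1080   # assumed screen resolution
max_bounces = 3              # assumed rays traced per pixel after the primary hit
fps = 60                     # assumed target frame rate

primary_rays = width * height                      # one ray per pixel
rays_per_frame = primary_rays * (1 + max_bounces)  # primary ray plus its bounces
rays_per_second = rays_per_frame * fps

print(primary_rays)     # -> 2073600 primary rays per frame
print(rays_per_second)  # -> 497664000 ray-scene intersection tests per second
```

Each of those rays needs an intersection test against the scene, which is why the cost scales with resolution rather than with model file size.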

asdr3wsfas6093d ago (Edited 6093d ago )

No files are needed for ray tracing. Simply put, you shoot rays out from the viewer's eye and bounce them all over your scene. Any that eventually hit a light source contribute illumination from it, so you follow the ray back from the light source to your eye and add that light source's contribution to the illumination of each object the ray hits.

More specifically, you fire a ray from the viewer through the middle of each pixel. Rays that hit objects reflect off them and continue along the newly deflected trajectory. The bouncing is repeated a preset number of times (2, 3, 10, etc.) or until the ray hits a light source.

If the ray hits a light source, following the ray backwards from the light source leads to the viewer by construction. Including these contributions in the reflection calculations for any reflective object hit along the way (the Phong model) adds the light bouncing off one surface, or between several surfaces, into the illumination of the objects, and shooting the ray from the viewer ensures we only compute the rays that actually reach the viewer. In addition, the reflected light gets weaker each time it hits an object, depending on how reflective that object is (this accounts for diffusion of reflected light).

To put it another way: you shoot one ray through the middle of every pixel, starting from the viewer, which is treated as the origin. Every time a ray hits an object, you calculate the trajectory it bounces along and check whether the reflected ray hits a light source. If it eventually does, the light from that source follows the ray backwards to the viewer, modifying the illumination of every object it hit along the way while also losing intensity. So the lighting value at each hit point is modified by the light contributed by each reflected ray, attenuated to account for diffusion on reflection.

This is a simpler explanation, when the true explanation requires linear algebra and calc III. Ray tracing works particularly well with reflective and transparent materials (specular lighting) but does poorly with objects like walls that reflect light more diffusely (diffuse lighting).

The reason it hasn't been implemented is that every scene you apply it to requires several calculations for each ray you shoot, and since you shoot one through the middle of each pixel, there are a lot of them. You have to compute the middle of each pixel, then test where each ray hits. For each ray that hits something, you may have to calculate a new trajectory and repeat a set number of times (more repetitions increase quality). Lastly, if a ray hits a light source, you take the light source's colors and its relative strength and work backwards, modifying every point the ray bounced off with the light originating from that source and accounting for how much is reflected, before finally ending at the viewer.

This is computationally obscene. It has nothing to do with file size. It requires lots and lots of processing power, which the article points out may become feasible as multicore is further adopted (one core runs this algorithm while the others do the rest). Ray tracing is done dynamically as the view changes, so you have to repeat it for every frame. Diffuse lighting is the kind you can render once and forget about.
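The loop described above can be sketched in a few dozen lines. This is a hypothetical minimal example, not code from the article or the commenter's tracer: it traces only primary rays through each pixel's center against a single made-up sphere and applies a diffuse shading term, skipping the recursive bounce step entirely.

```python
import math

# Minimal ray tracer sketch: one ray per pixel through the pixel's center,
# ray-sphere intersection, and a simple diffuse (cosine) shading term.
# Scene layout and all constants are assumptions for illustration.

WIDTH, HEIGHT = 40, 20
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0   # sphere center and radius
LIGHT = (2.0, 2.0, 0.0)                     # point light position

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(v):
    l = math.sqrt(dot(v, v))
    return (v[0]/l, v[1]/l, v[2]/l)

def hit_sphere(origin, d):
    """Return distance t along the ray origin + t*d to the sphere, or None."""
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - SPHERE_R**2
    disc = b*b - 4*c              # quadratic a == 1 because d is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(point):
    """Diffuse term: cosine between the surface normal and the light direction."""
    n = norm(sub(point, SPHERE_C))
    l = norm(sub(LIGHT, point))
    return max(0.0, dot(n, l))

rows = []
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Shoot a ray from the eye at the origin through this pixel's CENTER.
        px = (x + 0.5) / WIDTH * 2 - 1
        py = 1 - (y + 0.5) / HEIGHT * 2
        d = norm((px, py, 1.0))
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row += "."                      # ray missed: background
        else:
            p = (t*d[0], t*d[1], t*d[2])    # hit point on the sphere
            row += " .:-=+*#"[min(7, int(shade(p) * 8))]
    rows.append(row)
print("\n".join(rows))
```

Running it prints a shaded ASCII sphere; a full tracer would also recurse on the reflected direction at each hit, which is exactly the part that makes the real thing so expensive.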

I wrote a ray tracer once. It was not fun.

IdleLeeSiuLung6093d ago

Thanks for the explanation.

A number of years back I wrote a ray-tracer too. It was fun to see your stuff rendered, but it took forever even with optimizations and simple objects with few light sources.

Out of curiosity, why do you keep saying the middle of a pixel? Isn't a pixel the smallest unit displayable already?

asdr3wsfas6093d ago (Edited 6093d ago )

You're right, the pixel is the smallest unit, but only on the computer. When you do the original physics calculations to determine where a ray of light goes, you're thinking of actual space, with infinitely many points. The pixel grid is then conceptually overlaid on this image just in front of the viewer. The grid can be thought of as the 2D plane (the monitor) you project the image of your 3D scene onto. In reality you can see infinitely many points within each pixel, but since your computer can only display one color per pixel, you only need one ray shot through each.

You pick the middle as a convention for the average of the pixel. I don't think it would matter much as long as you shoot each ray through the same part of the corresponding pixel, but it would displace the image on your monitor from what you'd expect if the viewer in your scene were looking straight ahead. If you shot the ray through the left side of each pixel, the image drawn would be a little to the left of what you'd expect the viewer in that scene to see. Basically it would mess up the coordinate systems, in other words the frame of reference. It's like a camera that doesn't point straight ahead of the photographer but aims a little to the left: the image would be fine, but you wouldn't expect the photographer to stand that way in relation to the camera. It'd overly complicate a lot of programming.
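The half-pixel offset being described can be shown numerically. This is an illustrative sketch (the 4-pixel width and the [-1, 1] image-plane range are assumptions): sampling through pixel centers lands symmetrically around the view axis, while sampling the left edge biases every sample, and thus the whole image, half a pixel to one side.

```python
# Mapping a pixel column index to a point on the image plane in [-1, 1].
# All numbers here are made up for illustration.

WIDTH = 4  # a tiny 4-pixel-wide image

def pixel_center_x(x, width):
    # Map pixel column x to [-1, 1] through the pixel's CENTER (the +0.5).
    return (x + 0.5) / width * 2 - 1

def pixel_left_x(x, width):
    # Same mapping through the pixel's LEFT EDGE instead.
    return x / width * 2 - 1

centers = [pixel_center_x(x, WIDTH) for x in range(WIDTH)]
lefts = [pixel_left_x(x, WIDTH) for x in range(WIDTH)]
print(centers)  # -> [-0.75, -0.25, 0.25, 0.75]  symmetric about 0
print(lefts)    # -> [-1.0, -0.5, 0.0, 0.5]      biased toward the left
```

The center samples sum to zero, so a viewer looking straight ahead sees a centered image; the left-edge samples don't, which is the displacement described above.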

We do the calculations with exact models and approximate by pixels only at the final step, to minimize the errors caused by the pixel approximation. You can't make an exact circle out of pixels (there will be small variations in distance from the center; graph a circle on a TI calculator for a striking example. They look good on our screens because we have way more pixels). In describing your scene, you conceptually represent a circle with the equation that determines it, including all the points an equal distance from the center. Then when you -display- the image, you approximate that exact circle with the best-fitting pixels. If you represented the circle with a pixel approximation instead of the exact mathematical equation, the error from the approximation would creep into anything you use those coordinates for, like determining where it intersects another graph (aka collision detection in gaming). Of course computers are discrete and can't represent the circle exactly either, but the simplified idea is that you can represent points on the circle much more accurately than they're displayed.

When you do the calculations for where the light bounces it's the same idea. You define conditions representing the objects in your image in continuous space, aka space with infinitely many points. You bounce the light off the objects in the exact physics model to avoid errors caused by pixel approximation in your physics calculations. The grid overlay is only used to keep track of how many rays you actually have to shoot out, so you don't waste time shooting 3 rays through each pixel and only use light reflected on one of them. The middle is chosen as convention and to avoid messing up the coordinate system. As long as the rays go through the same place in each pixel you'll get a sensible image, but perhaps a bit displaced.

For example, specular lighting uses normal vectors to an object in determining reflectiveness and where the light bounces after it hits the object. The normal vectors to a sphere always point straight out. The corner of a pixel has no normal vector as it's discontinuous, and so the pixel representation of that same sphere has places where no normal vector exists. Applying a pixel approximation before displaying the image totally destroys the normal vector on certain parts of the sphere, making the physics and math involved in the lighting equations useless at those points. In other words, errors from the pixel approximation compound in subsequent operations, in this case quite dramatically.
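The sphere-normal point is easy to see in code. In an exact representation (just a center and a radius, assumed here for illustration), the normal at any surface point is the normalized vector from the center to that point, and it is defined everywhere on the sphere, with no pixel data involved:

```python
import math

# Exact surface normal of a sphere given its center and a point p on it.
# No pixel approximation: this works for every point on the sphere.

def sphere_normal(center, p):
    v = (p[0]-center[0], p[1]-center[1], p[2]-center[2])
    l = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/l, v[1]/l, v[2]/l)

# On the unit sphere at the origin, the normal at (0, 0, 1) points straight out:
print(sphere_normal((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # -> (0.0, 0.0, 1.0)
```

A stair-stepped pixel version of the same sphere has corners where this vector is undefined, which is the compounding error described above.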

So you convert to the pixel approximation after the physics is done, so there's only one error-inducing step. Otherwise errors from the pixel approximation would get into your physics calculations and potentially grow, as well as require much more complex methods to deal with them. We overlay the pixel grid on our mental conception of the image to match each ray of light with the single pixel we're drawing. And we just choose the middle to avoid displacement: if we chose the left side of each pixel, our image would be shifted a little to the left of what we'd expect.

IdleLeeSiuLung6093d ago (Edited 6093d ago )

I'm jogging my memory here, but I now vaguely remember this. It's a continuous space mapped onto a discrete space. This means the calculations are done in floating point instead of integers (for all intents and purposes).

So I see where you're heading with this, but I still have to give some more thought to the exact mechanism for this mapping. For instance, how do you determine which x,y,z coordinate maps to a given point in the continuous space?

However, the same issue applies to straight but diagonal lines as well. What if the line hits the edge of a pixel?

asdr3wsfas6093d ago (Edited 6093d ago )

The discrete mapping is an area of study in its own right. I studied math primarily, so I couldn't care less about how that's done in graphics ;) The CS degree was for $$$.

There are lots of different methods, I'm sure, that vary in error and processing speed. It may also depend on the shape you're mapping. I learned and promptly forgot all of it. Representing geometry with matrices and calculus is far more interesting to me, which is why I got into writing that.

poopsack6093d ago

i thought rasterbating was when u blew up a picture into a bunch of pages to put it up on a wall.

http://www.248am.com/images...

poopsack6093d ago

Yeah, and my mistake, I didn't see it said rasterizing.

Tony P6093d ago

Well. This article injured parts of my brain, but hell. Learn something new everyday, eh?

krouse936093d ago

Basically, with ray tracing you will notice differences in, say, a car racing game.

So a car has reflections, but you won't see other cars in the reflection, because that would require real-time ray tracing. And though it's possible, you would probably get 4 frames per second in a 1080p game with a quad-core processor (rough estimate).

But take that new Pool Kings game that's coming out on PSN: if you look at the pool balls you see HDR lighting, and each ball reflects the room, but it can't reflect where the other balls are because it's not ray tracing the light source.

*My kind of easy way of explaining it.*

50°

Analyst says Epic Games layoffs send a clear signal: 'This will not get any easier'

The myth of perpetual growth continues to bulldoze through the industry.

Read Full Story >>
gamedeveloper.com
Pyrofire9521h ago

"It is going to get so much worse in the video game industry...This will get worse and going to be bad and it's going to eventually get better." -Greg Miller right before it got worse

Agent7518h ago

Too many games, too many versions, download-only games, game pricing, £700+ consoles, eye-watering prices for controllers, four formats... gaming is one big mess.

gold_drake15h ago

I'd argue that Epic Games has a lot of revenue sources other than Fortnite, so their reasoning makes zero sense.

40°

TIGA: UK games dev sector faces "most severe downturn on record"

Total workforce fell from 28,516 in May 2024 to 27,347 in September 2025

Read Full Story >>
gamesindustry.biz
40°

Mojang partners with Merlin Entertainments for the Minecraft theme park, Minecraft World

Mojang has partnered with Merlin Entertainments to build the world's first Minecraft theme park in the UK.

Read Full Story >>
gamesindustry.biz