
When Will Ray Tracing Replace Rasterization?

During the past few years, ray tracing seems to have become the El Dorado of the real-time 3D world. The rendering technique sparked a peak of interest when a young researcher by the name of Daniel Pohl devoted a research project to the technology in 2004.

The reason the general public took an interest in his work is largely because Pohl chose to focus on id Software's famous Quake III, Quake IV, and Quake Wars 3D shooter game franchise. The researcher got a lot of media coverage and gamers began dreaming of a bright future in which their favorite titles would be ray traced and devoid of rasterization.

Intel soon became aware of the buzz and spotted an ideal way to justify increasing the number of cores in its processors. The company quickly started its own research program and now never misses an opportunity to remind us that ray tracing is the future of real-time 3D games. But is it, really? What technical realities lie behind the marketing hype? What are the real advantages of ray tracing? Can we really expect it to replace rasterization? Tom's Hardware will try to provide some answers to those questions.

Read Full Story >>
tomshardware.com
Some ray tracing examples (images): ray traced glass, ray traced villa, ray traced sports car, ray traced face, ray tracing vs. rasterization, ray traced old streets.

Godmars2906109d ago

My guess is why it hasn't happened yet is file size. Can't imagine objects that detailed fitting onto a DVD en masse.

Kakkoii6108d ago (Edited 6108d ago )

Nope. It's because ray tracing a whole scene takes an epic amount of processing power.

I don't see why you think model detail has anything to do with ray tracing. Sure, more polygons means more detail, but that has nothing to do with ray tracing. Ray tracing is an algorithm for simulating how light works in real life. It only traces as many rays as there are pixels on your screen (your resolution), so ray tracing even a game like Quake 3 would still take an assload of power.

asdr3wsfas6108d ago (Edited 6108d ago )

No files are needed for ray tracing. Simply, you shoot rays out of the eyes of the viewer and bounce them all over your scene. Any that eventually hit a light source contribute illumination from it, and so you follow the ray back from the light source to your eye and add the contribution from that light source to the illumination of each object it hits.

More specifically, you fire a ray of light from the viewer through the middle of each pixel. Rays that hit objects reflect off and continue along the new trajectory. The bouncing is repeated a preset number of times (2, 3, 10, etc.) or until the ray hits a light source.

If the ray hits a light source, following the ray from the light source backwards leads to the viewer by construction. Including these rays in the reflectance calculations of any reflective object hit along the way (the Phong model) adds the contribution of light bouncing off one surface, or between several surfaces, to the illumination of those objects; shooting the ray from the viewer ensures we only calculate the rays that actually reach the viewer. In addition, the reflected light gets weaker each time it hits an object, depending on how reflective the object is (this accounts for diffusion of the reflected light).

To restate the whole process: you shoot one ray through the middle of every pixel from the viewer, which is considered the origin. If any of those rays hit a light source, even after bouncing around, you consider the light as reaching the viewer. Every time a ray hits an object, you calculate the trajectory it bounces along and check whether that reflected ray hits a light source. If it eventually does, the light from the source follows the ray backwards to the viewer, modifying the illumination of every object it hits on the way while also losing intensity. So the lighting values at each hit point are modified by the light contributed by each reflected ray, scaled down to account for the diffusion caused by reflection.

This is a simplified explanation; the real thing requires linear algebra and calc III. Ray tracing works particularly well with reflective and transparent materials (specular lighting) but does poorly with objects like walls that scatter light more diffusely (diffuse lighting).
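The specular/diffuse split can be illustrated at a single surface point. A hypothetical Python sketch of the textbook Phong terms (the vectors and the shininess exponent below are made-up illustrative values, not anything from the article):

```python
def dot3(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def phong_terms(normal, to_light, to_viewer, shininess=32):
    """Diffuse and specular intensity at one point; all vectors unit length.
    Diffuse depends only on the angle between the surface normal and the
    light; specular on how closely the mirrored light direction lines up
    with the viewer, sharpened by the shininess exponent."""
    n_dot_l = dot3(normal, to_light)
    diffuse = max(0.0, n_dot_l)
    # Reflect the light direction about the normal: R = 2(N.L)N - L
    reflected = tuple(2.0 * n_dot_l * n - l for n, l in zip(normal, to_light))
    specular = max(0.0, dot3(reflected, to_viewer)) ** shininess
    return diffuse, specular
```

A shiny material weights the specular term heavily; a matte wall is dominated by the diffuse term, which is why walls gain little from tracing reflection rays.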

The reason it hasn't been implemented is that every scene you apply it to requires several calculations for each ray you shoot, and since you shoot one through the middle of each pixel, there are a lot of them. You have to calculate the middle of each pixel, then see where each ray hits. For each ray that hits something, you may have to calculate a new trajectory and repeat a set number of times (more repetitions increases detail). Lastly, if a ray hits a light source, you have to take the color of the light source and its relative strength and work backwards, modifying each point the ray bounced off by the light originating from that source, accounting for how much is reflected, before finally ending at the viewer.

This is computationally obscene. It has nothing to do with file size. It requires lots and lots of processing power, which the article points out may become available as multicore is further adopted (some cores run this algorithm while the others do the rest). Ray tracing is done dynamically as the view changes, so you have to repeat the calculations for each frame. Diffuse lighting, by contrast, is the type you can render once and forget about.
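The whole loop described above fits in a short sketch. This is a toy illustration, not the article's renderer: the sphere scene, the single point light, the 0.5 reflectivity, and the bounce cap are all made-up values, and shadow rays are omitted.

```python
import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    """Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small threshold avoids self-intersection

def trace(origin, direction, spheres, light_pos, depth):
    """Follow one ray from the viewer; each bounce contributes less
    (assumed reflectivity 0.5), and recursion stops at the bounce cap."""
    if depth == 0:
        return 0.0
    best = None
    for center, radius in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center)
    if best is None:
        return 0.0                  # ray escaped the scene
    t, center = best
    p = add(origin, scale(direction, t))
    n = norm(sub(p, center))        # sphere normals point straight out
    # Direct diffuse contribution from the point light (no shadow check).
    direct = max(0.0, dot(n, norm(sub(light_pos, p))))
    # Mirror bounce: reflect the ray and recurse with reduced weight.
    refl = sub(direction, scale(n, 2.0 * dot(direction, n)))
    return direct + 0.5 * trace(p, refl, spheres, light_pos, depth - 1)
```

Even this toy version does an intersection test against every object for every ray at every bounce, which is where the "computationally obscene" cost comes from.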

I wrote a ray tracer once. It was not fun.

IdleLeeSiuLung6108d ago

Thanks for the explanation.

A number of years back I wrote a ray-tracer too. It was fun to see your stuff rendered, but it took forever even with optimizations and simple objects with few light sources.

Out of curiosity, why do you keep saying the middle of a pixel? Isn't a pixel the smallest unit displayable already?

asdr3wsfas6108d ago (Edited 6108d ago )

You're right, the pixel is the smallest unit, but only on the computer. When you do the original physics calculations to determine where a ray of light goes, you're thinking of actual space, with infinitely many points. The pixels, as a grid, are then conceptually overlaid on this image just in front of the viewer. The grid can be thought of as the 2D plane (the monitor) you project the image of your 3D scene onto. In reality you can see infinitely many points within each pixel, but since your computer can only display one color per pixel, you only need one ray shot through each.

You pick the middle as a convention, since it's the average position within the pixel. I don't think it'd matter much as long as you shoot each ray through the same part of its corresponding pixel, but it would displace the image on your monitor from what you'd expect if the viewer in your scene was looking straight ahead. If you shot the ray through the left side of each pixel, the image drawn would be a little to the left of what you'd expect the viewer in that scene to see. Basically it'd screw up the coordinate systems, aka the frame of reference. It's like a camera that doesn't point straight ahead of the photographer but aims a little to the left: the image would be fine, but you wouldn't expect the photographer to stand that way in relation to the camera. It'd overly complicate a lot of programming.

We do the calculations with exact models and approximate by pixels only at the final step, to minimize the errors caused by pixel approximation. You can't make an exact circle out of pixels; there will be small variations in distance from the center (graph a circle on a TI calculator for a striking example; they look good on our screens because we have way more pixels). In describing your scene, you conceptually represent a circle with the equation that determines it, including all the points equidistant from the center. Then, when you display the image, you approximate that exact circle with the best-fitting pixels. If you instead represented the circle with a pixel approximation rather than the exact mathematical equation, the error from the approximation would creep into anything you use those coordinates for, like determining where the circle intersects another graph (aka collision detection in gaming). Of course computers are discrete and can't represent the circle exactly either, but the simplified idea is that you can represent points on the circle much more accurately than they're displayed.
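The circle example is easy to check numerically. A hypothetical sketch (the radius and the one-unit pixel grid are arbitrary choices): every point generated from the exact equation is precisely r from the center, while snapping those points to integer pixel coordinates introduces a small but nonzero distance error.

```python
import math

def circle_point(r, theta):
    """Exact point on a circle of radius r, in continuous space."""
    return (r * math.cos(theta), r * math.sin(theta))

def to_pixel(p):
    """Snap a continuous point to the nearest integer pixel coordinate."""
    return (round(p[0]), round(p[1]))

r = 10.0
pixel_errors = []
for k in range(360):
    exact = circle_point(r, math.radians(k))
    px = to_pixel(exact)
    # How far the snapped point's distance-from-center strays from r.
    pixel_errors.append(abs(math.hypot(px[0], px[1]) - r))

max_error = max(pixel_errors)   # nonzero: the pixel circle is not a true circle
```

Any later calculation done on the snapped coordinates (intersections, normals) inherits that error, which is the commenter's point about doing the math in continuous space first.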

When you do the calculations for where the light bounces, it's the same idea. You define conditions representing the objects in your image in continuous space, aka space with infinitely many points. You bounce the light off the objects in the exact physics model to avoid errors caused by pixel approximation in your physics calculations. The grid overlay is only used to keep track of how many rays you actually have to shoot, so you don't waste time shooting three rays through one pixel and only using the light reflected along one of them. The middle is chosen by convention and to avoid messing up the coordinate system. As long as the rays go through the same place in each pixel, you'll get a sensible image, though perhaps a bit displaced.

For example, specular lighting uses normal vectors to an object in determining reflectiveness and where the light bounces after it hits the object. The normal vectors to a sphere always point straight out. The corner of a pixel has no normal vector as it's discontinuous, and so the pixel representation of that same sphere has places where no normal vector exists. Applying a pixel approximation before displaying the image totally destroys the normal vector on certain parts of the sphere, making the physics and math involved in the lighting equations useless at those points. In other words, errors from the pixel approximation compound in subsequent operations, in this case quite dramatically.

So you convert to the pixel approximation after the physics is done, so there's only one error-inducing step. Otherwise errors from the pixel approximation would get into your physics calculations and potentially grow, as well as require much more complex methods to deal with. We overlay the pixel grid on our mental conception of the image to match each ray of light with the single pixel we're drawing, and we choose the middle to avoid displacement. If we chose the left side of each pixel, our image would be shifted a little to the left of what we'd expect.
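That middle-of-the-pixel convention is just a "+0.5" in the code that turns a pixel index into a ray direction. A hypothetical sketch (the 640x480 resolution and 60 degree field of view are made-up values; viewer at the origin, looking down -z):

```python
import math

WIDTH, HEIGHT = 640, 480                # assumed resolution
FOV = math.radians(60.0)                # assumed horizontal field of view

def pixel_ray(i, j):
    """Unit direction of the ray shot through the CENTER of pixel (i, j).
    The +0.5 is the middle-of-the-pixel convention: dropping it (or using
    the left edge) would shift the whole image, exactly the displacement
    described above."""
    half_w = math.tan(FOV / 2.0)
    aspect = WIDTH / HEIGHT
    # Map pixel centers onto a continuous [-1, 1] image plane at z = -1.
    x = (2.0 * (i + 0.5) / WIDTH - 1.0) * half_w
    y = (1.0 - 2.0 * (j + 0.5) / HEIGHT) * half_w / aspect
    length = math.sqrt(x * x + y * y + 1.0)
    return (x / length, y / length, -1.0 / length)
```

Shooting the same offset through every pixel still produces a consistent image, just displaced, which is why the middle is purely a convention.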

IdleLeeSiuLung6108d ago (Edited 6108d ago )

I'm jogging my memory here, but now I vaguely remember this. It's a continuous space mapped onto a discrete space, which means the calculations are done in floating point instead of integers (for all intents and purposes).

So I see where you are heading with this, but I still have to give some more thought to the exact mechanism for this mapping. For instance, how do you determine which x, y, z coordinate in continuous space maps to which pixel?

However, the same issue applies to straight but diagonal lines as well. What if the line hits the edge of a pixel?

asdr3wsfas6108d ago (Edited 6108d ago )

The discrete mapping is an area of study in its own right. I studied math primarily, so I couldn't care less about how that's done in graphics ;) The CS degree was for $$$.

There are lots of different methods, I'm sure, that vary in error and processing speed. It may also depend on the shape you're mapping. I learned and promptly forgot all of it. Representing geometry with matrices and calculus is far more interesting to me, which is why I got into writing that.

poopsack6108d ago

i thought rasterbating was when u blew up a picture into a bunch of pages to put it up on a wall.

http://www.248am.com/images...

poopsack6108d ago

yeah, and my mistake, i didn't see it said rasterizing.

Tony P6108d ago

Well. This article injured parts of my brain, but hell. Learn something new everyday, eh?

krouse936108d ago

Basically with ray tracing you will notice differences in, say, a car racing game.

So a car has a reflection, but you won't see other cars in that reflection, because that would require real-time ray tracing, and though it's possible, you would probably get 4 frames per second in a 1080p game with a quad core processor (rough estimate).

But like that new Pool kings game that's coming out on PSN: if you look at the pool balls you see HDR lighting, and each ball reflects the room, but it can't reflect where the other balls are because it's not ray tracing the light source.

*My kind of easy way of explaining it.*


UKIE: UK games market reaches £8.7bn in 2025

Growth driven by digital software and console hardware sales.

Read Full Story >>
gamesindustry.biz

Agent7510h ago

Probably down to the high cost of gaming. Controllers and racing wheels can cost as much as consoles.


Saudi Game Publisher Says Japanese IP Is 'Like Oil,' Calls Middle East Partnership 'Very Important'

TNS - Saudi anime and game publisher Manga Productions compares Japanese IPs to 'oil,' arguing that developers focus too much on Western markets.

Read Full Story >>
thenerdstash.com

blacktiger1d 1h ago

So Saudi owns the world with Jews and the rest?

rlow11d ago

So they should focus on smaller markets?

__y2jb6h ago

Anyone believing that these pigs are their friends is in for a very big shock down the road. Those doing business with them should be told to go live in the middle east.

blacktiger4h ago

Something is wrong and the Americans don't care and the dislikes are like Jews and Muslim getting pissed

gold_drake3h ago

i find it very troublesome that they are meddling with gaming IPs, buying stock, etc.

they even bought some Capcom stock.


Content Creators Are Receiving Xbox Project Helix Merchandise From Microsoft

Content creators are receiving Xbox Project Helix merchandise alongside a note from Microsoft Gaming CEO Asha Sharma.

Read Full Story >>
twistedvoxel.com

shadowT3d ago

Everyone deserves to make a bit of money

VenomUK2d ago

“Just wait until next year!”

DivineHand1252d ago

Well they are doing what they can to revive the brand. They have a lot of work to do.

1Victor2d ago
peppeaccardo2d ago

yep, agree 100% ... and some more. The hype machine is warming up, and the performance spec sheets will be filled in no time, to guarantee the winner on paper is revealed way before launch day ... BUT as we all know, only first-party games and brand identity will declare the real winner. The only problem is the price tag needed to get into next gen ...

SIdepocket2d ago

Just what the world needs: another poorly-selling console.

Agent752d ago

Next gen will be a massive failure for Microsoft and Sony; you could say it's already started. Game sales are at an all-time low, and the cost of consoles and controllers is high. The PlayStation 5 Pro is the first indication of next-gen console pricing, and that's been a sales flop for Sony. Some people will say that people are paying eye-watering prices for PCs, but that's an entirely different market, what I call OCD gaming. As I mentioned in another post, the PlayStation 5 Pro here sells for just under £800, which is just over $1000, and games can top £75 ($100), including Switch 2 games. Some of us would like to see a single format, but that'll never happen. Microsoft and Sony would laugh at the idea of stepping off the gas pedal with the tech and doing what Nintendo has done with the Switch 2: three hybrid consoles fighting each other, basically taking current tech and adding a few bells and whistles. Who's up for a PlayStation 6 at $999 and $99 games? Let me know in the comments.

Reaper22_2d ago (Edited 2d ago )

The simple solution is don't buy what you can't afford. You don't have to move on just because the companies do. If people want better hardware and better games, they're gonna be more expensive. People need to be content with what they have.

rlow11d 18h ago

You know, a lot of those sales are dads and moms buying for their kids and teens. Consoles have always been reasonably priced, to a degree. But once you start heading far north of $500, I don't see as many parents buying for their kids or teens… will it still sell? Yes, but will it sell a hundred million or more? That's the real question. Hopefully RAM prices drop.

derek2d ago (Edited 2d ago )

@Agent75 There is no need to lie to make your point. The PS5 Pro is selling, as a percentage, about as well as the PS4 Pro did, and the recent PSSR2 update has actually improved its value. Your attempt to group the PS5, a console that is about to cross 100 million sold and is by far Sony's most profitable generation, with the abject failure that is Xbox is laughably stupid. Truth is, Sony shouldn't even consider launching another console until 2029 at the earliest.

StoneTitan1d 23h ago

Everyone except you knows the Pro is a profit machine; that is why it's expensive.
The PS6 will not be anywhere close to it in price.

rlow11d 19h ago

What happens if RAM prices remain high? Do you think Sony or MS would eat those costs? It depends on how much they would have to absorb.

InUrFoxHole1d 22h ago

And the lord shone his light upon gamers!
