Intel: 2-year-old Nvidia GPU outperforms 3.2GHz Core i7

PC World: Intel researchers have published the results of a performance comparison between their latest quad-core Core i7 processor and a two-year-old Nvidia graphics card, and found that the Intel processor can't match the graphics chip's parallel processing performance.

On average, the Nvidia GeForce GTX 280 -- released in June 2008 -- was 2.5 times faster than the Intel 3.2GHz Core i7 960 processor, and more than 14 times faster under certain circumstances.

Read Full Story >>
pcworld.idg.com.au
cpuchess 5734d ago

Wow, that's cool. Any software out there that actually uses the video card like a CPU?

Conloles 5734d ago

Try Badaboom, it's video conversion software that absolutely kicks CPU conversion times' ass.

You Noob 5734d ago

So, what about the GTX 480? :D

meetajhu 5734d ago (Edited 5734d ago)

It's already been stated by Nvidia that it's equivalent to four i7 975 CPUs.

http://www.youtube.com/watc...

mittwaffen 5734d ago

GPUs don't calculate like a general-purpose CPU. It's different; extremely different. A CPU does so much work in so many different ways, and very effectively, compared to GPUs.

Some software does use Nvidia cards this way, like Adobe's.

Personally, that isn't very impressive; it should have done better, being a GTX 280.

Nihilism 5734d ago (Edited 5734d ago)

That's not true; the 400 series runs C++ and Fortran code natively, so any program written in those languages can be changed to run entirely on the GPU.

The shader cores are used as processing cores, so a GTX 480 is essentially a 480-core processor. They are extremely efficient, far more so than any CPU, which is why Tesla cards are used for scientific calculations rather than CPU clusters like in the days of old.

A GPU beating a CPU at certain tasks is nothing new. Nvidia's approach will take hold before Intel can combine a GPU on the die, because a CPU die is so small it is almost impossible to fit any decent GPU components in there, whereas a GPU die is much larger and better suited to the dual role.
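To make the "shader cores as processing cores" idea a bit more concrete, here is a minimal CUDA sketch (not the workload from the Intel paper; the kernel name and sizes are arbitrary) in which each of thousands of GPU threads scales one array element:

#include <cuda_runtime.h>

// Minimal sketch: every GPU thread scales one array element in parallel.
// This illustrates the many-simple-cores model, not the actual benchmark.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                           // 1M elements (arbitrary)
    float *d;
    cudaMalloc((void **)&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d, 2.0f, n);          // thousands of threads at once
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}

A CPU would walk the same array with a loop (or a handful of SIMD lanes); the GPU simply assigns a thread to every element.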

ProjectVulcan 5734d ago (Edited 5734d ago)

It is a piss-poor comparison though. GPUs are enormous: a GTX 280 has three times as many transistors as a Core i7, and nearly all of them are logic transistors (logic calculates, cache is just dumb memory), not the cache that makes up a great deal of a CPU. It has more transistors than a Tukwila Itanium (a massive server CPU with a huge cache).

Take a GPU's strongest suit, massively parallel floating-point performance, and pit it against a CPU, and the CPU will lose badly; a GPU has ridiculous FP performance and absurd bandwidth, of course! Look at performance per watt: if they state it's only 2.5 times as fast, that's also not that great for the GPU, since the GTX 280's TDP is nearly twice that of a Core i7. Now try to run a benchmark full of branch-heavy code and the GPU will fail so hard it gets arse-whipped by a Pentium 4....

They both do different things very well, and other things not so well. A CPU is still required because it is so general-purpose. Most supercomputers are banks of CPUs because you can run anything you want on them; not so with GPUs.

This article is basically saying a knife is better at cutting than a spoon. Wow, REALLY?
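On the branch-heavy-code point above, a hypothetical CUDA sketch of warp divergence (the kernels and data are made up for illustration): threads in the same 32-wide warp that take different branches execute both paths serially, while a branchless formulation keeps every thread on the same instructions.

#include <cuda_runtime.h>

// Data-dependent branch: threads in one warp may split, and the two
// paths then execute one after the other, wasting the parallel hardware.
__global__ void divergent(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (in[i] > 0.0f)
            out[i] = in[i] * 2.0f;
        else
            out[i] = in[i] * 0.5f;
    }
}

// Branchless rewrite: every thread runs the same instruction stream,
// which is the kind of code GPUs are actually good at.
__global__ void branchless(float *out, const float *in, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float factor = (in[i] > 0.0f) ? 2.0f : 0.5f; // compiles to a select
        out[i] = in[i] * factor;
    }
}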

Ju 5733d ago

The cache issue is maybe the bottom line here. Hardware-managed caches cannot keep cache coherency up across massively parallel cores. I guess the point they wanted to make is that going massively parallel with a general-purpose CPU exposes its bottleneck: the cache logic cannot keep up with the bandwidth demands. Someone somewhere said that 6-8 cores is the maximum at which the cache can keep data in sync between cores; beyond that, it simply slows the cores down to the point where it doesn't make sense to add another one.

GPGPUs don't use managed caches; data are prepared by the high-level language and optimized for dataflow in the application, hence bypassing the hardware cache problems. Even if C++ runs on those GPUs, it will only run effectively if you take this into consideration; OpenCL likewise requires you to prepare code for proper alignment and hardware restrictions. With modern software tools this is possible, but basically the problem is just pushed from a low-level hardware problem up to a higher software level. And it seems to work just fine that way.
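As a hypothetical sketch of the "prepare your data for the dataflow" point (CUDA here rather than OpenCL; the Particle struct is made up): laying the same data out as a struct of arrays lets neighbouring threads read consecutive addresses, which the hardware coalesces into wide memory transactions.

#include <cuda_runtime.h>

// Array-of-structs: neighbouring threads read addresses 16 bytes apart,
// so their loads cannot be coalesced into a single wide transaction.
struct Particle { float x, y, z, w; };

__global__ void aos_read(float *out, const Particle *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = p[i].x;            // strided access pattern
}

// Struct-of-arrays: neighbouring threads read consecutive floats,
// which coalesces into one wide, fully used memory transaction.
__global__ void soa_read(float *out, const float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = x[i];              // contiguous, coalesced access
}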

Motion 5734d ago

4GB of 2-year-old Kingston RAM outperforms a 320GB hard drive!!! Amazing!

moe84 5734d ago

This really isn't surprising. The chips in most graphics cards can put out more computing power than any CPU on the market today.

This is one of the reasons why the F@H GPU client can get much more done (and quicker, too) than the F@H CPU client.

Microsoft_Spokesman 5734d ago

This is off-topic, but my fav CPU is Pentium 4.
