Why do CPUs use less electricity than GPUs?

Why are CPUs only about 80W TDP? Can't they make ones with three times as many cores that have 250W TDP like graphics cards?

Reply to
Commander Kinsey

Yes. One is called an AMD Threadripper 3970X.

--
Brian Gregory (in England).
Reply to
Brian Gregory

Power consumption is limited by power dissipation.

Presumably the CPU manufacturers figure that they wouldn't sell all that many parts that needed massive heat sinking, and the graphic card manufacturers know that they are selling into a market with other priorities.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

Surely those of us that buy huge graphics cards would accept a CPU with a similar sized heatsink. The motherboard would have to look different though, or maybe the heatsink could somehow go over the top of the RAM?

I think the problem might be price. I've just been told elsewhere that there's an AMD Threadripper 3990x with 64 cores, 128 threads and a TDP of 280 watts. But it's $3500. Graphics cards are cheaper as they have smaller instruction sets I guess.

Reply to
Commander Kinsey

Took a long time for that to come out though. GPUs have had 250W TDP chips for decades.

Reply to
Commander Kinsey

GPUs were necessary when CPUs were not fast enough, but nowadays a CPU can do most of the tasks. GPU instructions are more limited, namely not good at branching. For a GPU to be useful it also needs high-bandwidth memory, but that gets to be as expensive as a CPU system.
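
To illustrate the branching point: GPU lanes in a warp run in lockstep, so when a branch splits them the hardware effectively runs both sides and masks off the lanes that don't apply. A toy C++ sketch of one made-up 4-lane warp (not any vendor's actual hardware):

    #include <array>
    #include <cstdio>

    // Simulate one SIMT "warp" of 4 lanes hitting a divergent branch.
    // Real hardware predicates/masks lanes; the net effect is that the
    // warp pays for BOTH sides of the branch whenever its lanes disagree.
    int main() {
        std::array<int, 4> data = {1, -2, 3, -4};
        std::array<int, 4> out{};
        int passes = 0;

        ++passes;                                  // pass 1: "then" side
        for (int lane = 0; lane < 4; ++lane)
            if (data[lane] > 0) out[lane] = data[lane] * 2;

        ++passes;                                  // pass 2: "else" side
        for (int lane = 0; lane < 4; ++lane)
            if (!(data[lane] > 0)) out[lane] = -data[lane];

        std::printf("divergent warp: %d passes instead of 1 (out[0]=%d)\n",
                    passes, out[0]);
    }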

Reply to
edward.ming.lee

I can max out any CPU and/or GPU easily: Folding@home, Rosetta@home, Einstein@Home...

Reply to
Commander Kinsey

No way can a CPU render the kind of scenes in e.g. AAA title video games that even a $150 GPU can do.

Vertex and pixel shaders are massively parallel.
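
Roughly what "massively parallel" means here: a pixel shader is just a per-pixel function with no dependencies between pixels, so thousands of lanes can evaluate it at the same time. A toy serial C++ sketch of the shape of the work (made-up gradient shader, not a real graphics API):

    #include <cstdint>
    #include <vector>

    // Hypothetical "shader": a pure function of the pixel coordinates.
    static std::uint32_t shade(int x, int y) {
        return (std::uint32_t(x & 0xFF) << 16) | (std::uint32_t(y & 0xFF) << 8);
    }

    int main() {
        const int w = 1920, h = 1080;
        std::vector<std::uint32_t> framebuffer(std::size_t(w) * h);
        for (int y = 0; y < h; ++y)        // every iteration is independent,
            for (int x = 0; x < w; ++x)    // so a GPU can run them all at once
                framebuffer[std::size_t(y) * w + x] = shade(x, y);
        return 0;
    }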

Reply to
bitrex

Because while power dissipation is proportional to number of cores and clock speed, for the vast majority of applications CPU clock is no longer a bottleneck.

If you really need more general-purpose CPU power you can get a system that supports multiple physical processors, or cluster them.

The trend is to make CPUs more power-efficient, not less!

Reply to
bitrex

I think that is the issue. They sell more CPUs in laptops than desktops and more in phones than laptops. Even servers are power constrained. So power consumption is the key these days. It takes a lot of money to pump out new lines of CPUs, and while they do have some desktop-oriented processors, they aren't going to be 10 times the power of the laptop CPUs. The market is just too small to justify the investment.

--

  Rick C. 

  - Get 1,000 miles of free Supercharging 
  - Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

Big TDPs in a single unit aren't good for much but bragging-rights, and "enthusiasts" mostly have stretching-contests over their GPU setups these days.

Reply to
bitrex

Also, x86 general-purpose CPU cores have large die areas compared to e.g. GPU cores. You can only fit so many on a die of a given area before you start running into bottlenecks trying to route interconnects between them and on and off the die. You can use a bigger die, but then production cost goes up. And the more cores you put on a die, the lower your yield, as not every core always works.

So there's a sweet spot; four or eight cores seems to be about where it is at the moment.
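
A rough way to put numbers on the yield point is the usual first-order Poisson defect model (the figures below are made up purely for illustration):

    Y = e^{-A * D_0}

where A is the die area and D_0 the defect density. With D_0 = 0.1 defects/cm^2, a 1 cm^2 die yields about e^{-0.1} ≈ 90%, while a 4 cm^2 die yields about e^{-0.4} ≈ 67%, so every doubling of core count (and area) costs yield on top of the interconnect problem.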

Reply to
bitrex

One other thing I forgot to mention, while performance per clock hasn't increased that much over the past decade, performance per watt has.

The high-end desktop CPUs are monsters compared to the best available laptop CPUs, but the price tag is premium.

When you go down in price a bit and are comparing newer laptop CPUs vs somewhat older generation desktop CPUs, the gap shrinks a lot.

Reply to
bitrex

Low power is a good thing. The 250 watt chip needs a big loud fan or liquid cooling. The GPU can apply massive parallelism to a small class of functions; the CPU does general processing, not so much in parallel. So a low-power CPU keeps the watts down while still giving 100% performance for widely varying general tasks.

Reply to
omnilobe

The main CPU is optimised for general computing using Intel legacy instructions, and relatively few applications can sensibly make use of more than one thread. Even when they can, memory bandwidth tends to saturate at around 6-8 threads running flat out. Beyond that you use more power without gaining speed, and you may even lose some.

Running a high-end parallel chess engine and deliberately capping the number of threads it can use, I find the power usage increases monotonically with the number of active threads, but performance peaks at 6 threads and *declines* after that: a combination of the engine doing useless work faster, only for it to be thrown away, and memory bandwidth saturation generating wait states for anything not in cache.
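
That saturation is easy to reproduce with any memory-bound loop. A hypothetical C++ sketch (not the chess engine, just the shape of the measurement):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Memory-bound workload: each thread sums its slice of a buffer much
    // larger than cache. Once DRAM bandwidth saturates, extra threads add
    // power draw but little or no speed: the plateau described above.
    int main() {
        const std::size_t n = std::size_t(1) << 26;   // ~64M ints, ~256 MB
        std::vector<int> buf(n, 1);
        const unsigned max_threads = std::max(1u, std::thread::hardware_concurrency());

        for (unsigned nt = 1; nt <= max_threads; nt *= 2) {
            std::vector<long long> partial(nt, 0);
            auto t0 = std::chrono::steady_clock::now();

            std::vector<std::thread> pool;
            for (unsigned t = 0; t < nt; ++t)
                pool.emplace_back([&, t] {
                    std::size_t lo = n / nt * t;
                    std::size_t hi = (t + 1 == nt) ? n : n / nt * (t + 1);
                    partial[t] = std::accumulate(buf.data() + lo, buf.data() + hi, 0LL);
                });
            for (auto& th : pool) th.join();

            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                          std::chrono::steady_clock::now() - t0).count();
            long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
            std::printf("%2u threads: %4lld ms (sum %lld)\n", nt, (long long)ms, total);
        }
    }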

The main thing is that there really isn't all that much demand for huge numbers of fast CPU cores outside of scientific computing, and they have developed libraries to let them exploit the extremely parallel but simpler architecture of graphics cards. Even the chess guys have started to go that way, with Fat Fritz using an engine based on the AlphaZero approach that will only run on high-end graphics cards. The times they are a-changing.

Mainstream CPUs that run excessively hot are not the future. The main effort now is actually to get CPU power down as low as reasonably possible to extend battery life in mobile devices.

Most office PCs today never run at even 10% of their peak performance. And in an interesting quirk, for 2D graphics in an office environment the built-in support is *faster* than a high-end card at doing the same job.

Graphics cards really come into their own when display rendering and shading, or worse still full raytracing, is involved. It takes a hell of a lot of horsepower to do real-time video frame generation for game play.

--
Regards, 
Martin Brown
Reply to
Martin Brown

That doesn't make sense. If people put up with a loud fan (and only when running flat out) for a graphics card, they'd put up with it for a CPU. Remember, it's still quiet if you're not taxing it, and since it would be more powerful, you wouldn't be.

Reply to
Commander Kinsey

Nonsense, loads of applications multithread nowadays. I actually have one CPU that spreads a program over 2 or 3 threads even though the program wasn't designed for it. There must be some automated thing in it.

Big CPUs have big fast caches.

Then the engine needs writing better. Chess can easily branch out into 1000s of calculations that can be done at once.

Then get faster memory.

Not all programs can offload stuff, it depends on what is being calculated. GPUs are very simple devices.

They only get hot when they're running flat out. So it's either still a quiet CPU, or it's one that gives you huge amounts of power when you need it. Even if you don't do heavy work on it, basic stuff can need sudden bursts of power; I've seen a reasonable CPU go to 100% just installing Windows updates. Things would get done in fractions of a second and the interface would become much smoother.

And that happens too. If you're not using all those cores, they don't make heat.

They are, but not continuously, so wouldn't get hot.

Reply to
Commander Kinsey

Every GPU I've had has had a huge heatsink on it that could keep it cool without the fan having to go too fast. Often the smaller fan on the smaller heatsink on the CPU makes more noise.

--
Brian Gregory (in England).
Reply to
Brian Gregory

A lower clock speed is fine for a GPU since you can compensate by having the equivalent of lots of cores. Thus it's easy to make a nice big hot chip for a GPU.

In a CPU things need to be more precise: we need to be able to boost any core up to a high clock speed, because single-core performance is important in a CPU. Therefore a big chip with lots of cores would have a low yield and be expensive to produce. That's why AMD came up with a way to do it with multiple smaller chiplets for their most powerful processors.
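
Rough made-up numbers using a simple Poisson yield model, Y = e^{-A * D_0}, show why: with D_0 = 0.1 defects/cm^2 a monolithic 8 cm^2 die yields about e^{-0.8} ≈ 45%, but four 2 cm^2 chiplets each yield about e^{-0.2} ≈ 82%, and a bad chiplet can be discarded on its own instead of scrapping the whole die.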

--
Brian Gregory (in England).
Reply to
Brian Gregory

I'm not sure your characterizations are realistic. The thing that causes heat is the calculations. Adding more logic to do more calculations will actually increase power consumption faster than increasing the number of pipeline stages to speed up the clock. While both are linear in dynamic power dissipation, there is a non-trivial static power consumption. Adding logic not only increases the dynamic power dissipation, it also adds to the static power dissipation, which higher clock speeds do not.

So I have to assume there are architectural reasons for limiting the pipelining, and so the clock speed, in the GPU.
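
For reference, the usual first-order CMOS relation behind that argument (textbook form, not specific to any particular part):

    P ≈ alpha * C * V^2 * f + P_static

where alpha is the activity factor, C the switched capacitance, V the supply voltage and f the clock frequency. Adding logic grows both C in the dynamic term and the leakage in P_static, while raising f only scales the dynamic term, which is the asymmetry described above.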

--

  Rick C. 

  + Get 1,000 miles of free Supercharging 
  + Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C
