OT: Dual-core CPUs versus faster single-core CPUs?

Not trying to decide which is better for everybody. Just interested in the difference between multiple core CPUs and faster single core CPUs.

Are there any mainstream applications that would benefit more from one than the other?

My wild guess: continuous multitasking versus intermittent bursts (if the bursts usually do not coincide). But I don't know of any applications that illustrate that.

Thanks.

Reply to
John Doe

I do run into cases where an app totally pegs one of the cores, sometimes due to a crash, sometimes due to a not particularly well written application that is a resource hog. Regardless, even with one core completely occupied, the other is free to respond to user and OS events, whereas a single-core setup, even a very fast one, would probably be quite unresponsive and might require hitting The Big Red Switch (or the equivalent nowadays) to regain control.

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

My understanding is that these days, it's possible to get more computing power per watt using a multicore approach. Going to higher and higher speeds (per core) requires the use of a smaller and smaller feature size on the chip, and this can increase static power losses. Lower operating voltages are required to keep from popping through the thinner insulating layers on the chip, and this generally means that higher currents are required, which means that I^2*R losses go up and overall power efficiency goes down.

Using a somewhat lower-technology silicon process with lower losses, and replicating it several times, can yield the same amount of computing power at a lower level of energy consumption.
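
As a back-of-the-envelope sketch of that trade-off: dynamic power scales roughly as C*V^2*f, so doubling the hardware while halving the clock and dropping the supply voltage can come out ahead. The numbers below are made up purely to illustrate the scaling, not figures for any real part.

/* Back-of-the-envelope sketch of the dynamic-power argument above.
 * Dynamic CPU power scales roughly as P = a * C * V^2 * f (switching
 * activity a, switched capacitance C, supply voltage V, clock f).
 * The capacitance and voltage figures below are assumed round numbers
 * chosen only to illustrate the scaling. */
#include <stdio.h>

static double dynamic_power(double a, double c, double v, double f)
{
    return a * c * v * v * f;   /* watts, with c in farads and f in hertz */
}

int main(void)
{
    const double a = 0.2;        /* activity factor (assumed) */
    const double c = 1.0e-9;     /* effective switched capacitance, 1 nF (assumed) */

    /* One fast core: 3.6 GHz at 1.4 V */
    double p_single = dynamic_power(a, c, 1.4, 3.6e9);

    /* Two slower cores: each 1.8 GHz at 1.1 V (lower f allows lower V) */
    double p_dual = 2.0 * dynamic_power(a, c, 1.1, 1.8e9);

    printf("single fast core : %.1f W\n", p_single);  /* ~1.4 W with these numbers */
    printf("two slower cores : %.1f W\n", p_dual);    /* ~0.9 W, same total clock cycles */
    return 0;
}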

For desktop consumers this may not be all that significant an issue. For server farms, where the electric bill can be a major portion of the total expense over time, it can make a big difference. For laptop owners, it may extend battery run-time or reduce battery weight significantly.

There would be additional benefits if the chip (or the ACPI BIOS or the operating system) can halt or even power down the extra core(s) except when their services are needed.

Any application whose processing can be partitioned, and run in phases, is a candidate. Image processing (e.g. tweaking photos or artwork) or audio digital signal processing (e.g. running streaming MP3 or Ogg Vorbis encoders for a Shoutcast/Icecast stream server) would be candidates. Cryptographic acceleration (e.g. SSL connections) might be another.
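
A minimal sketch of what that partitioning can look like, assuming POSIX threads and a made-up brighten-the-image task (names and sizes are illustrative, not from any particular library):

/* Minimal sketch of partitioning work across cores with POSIX threads.
 * The "image" is just a flat buffer of pixels; each worker brightens
 * its own slice. Build with: cc -O2 -pthread split.c */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 2          /* one worker per core on a dual-core box */
#define NUM_PIXELS  (1 << 22)  /* ~4 million 8-bit pixels (assumed size) */

struct slice { uint8_t *pixels; size_t count; };

static void *brighten(void *arg)
{
    struct slice *s = arg;
    for (size_t i = 0; i < s->count; i++) {
        unsigned v = s->pixels[i] + 40;          /* fixed brightness bump */
        s->pixels[i] = (v > 255) ? 255 : (uint8_t)v;
    }
    return NULL;
}

int main(void)
{
    uint8_t *image = calloc(NUM_PIXELS, 1);
    pthread_t tid[NUM_THREADS];
    struct slice part[NUM_THREADS];
    size_t chunk = NUM_PIXELS / NUM_THREADS;

    for (int t = 0; t < NUM_THREADS; t++) {
        part[t].pixels = image + t * chunk;
        part[t].count  = chunk;
        pthread_create(&tid[t], NULL, brighten, &part[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);

    printf("done: %d slices processed in parallel\n", NUM_THREADS);
    free(image);
    return 0;
}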

I think that a significant benefit can arise if the code being run on any given processor will fit well into a single core's instruction cache. If the OS is smart enough to "lock" the thread in question onto a single core, you can avoid a whole lot of icache misses and reloading that would occur on a single-core processor, and this can allow the core to run at closer to its theoretical limit.
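
For what it's worth, here is a sketch of pinning a thread onto one core, using the Linux/glibc extension pthread_setaffinity_np (on Windows the rough equivalent is SetThreadAffinityMask):

/* Sketch of "locking" a thread onto one core so its working set stays
 * in that core's caches. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* allow this thread to run on core 0 only */

    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
        return 1;
    }
    printf("thread pinned to core 0; cache-hot code stays put\n");
    return 0;
}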

--
Dave Platt                                    AE6EO
Friends of Jade Warrior home page:  http://www.radagast.org/jade-warrior
Reply to
Dave Platt

If you're talking about replicating the same speed, yes that's very easy to believe.

Reply to
John Doe

It is not a simple comparison; there are many different factors to consider. But overall, it is the total MIPS/MFLOPS that really counts.

Reply to
PeterD

That's why I'm asking here... as caretakers of the universe, you all know everything.

Reply to
John Doe

Dual core is just a hint of what's happening. There are already chips with hundreds of cores.

Most people don't need huge number-crunching ability; they need reliable, low-power computing. If current programming methods are extended into multicore - parallelism, virtualization - we'll just compound the mess that computing is today.

The real use for multiple cores will be to assign one function per core. One would be the OS kernel, and only that, and would be entirely protected from other processes. Other cores could be assigned to be specific device drivers, file managers, TCP/IP socket managers, things like that. Then one core could be assigned to each application process or thread. Only a few cores need floating-point horsepower, which might be hardware shared. Power down idle cores. Voila: no context switching, no memory corruption, and an OS that never crashes.

Microsoft won't like it at all.

John

Reply to
John Larkin

Writers for various computer magazines seem to agree that what you want is the fastest Core 2 Duo that you can afford. I have a 2 GHz Core 2 Duo in the laptop that I am writing this on. It is slightly slower (~10%) on number crunching than the 3.4 GHz (equivalent) AMD single-core machine upstairs. The laptop runs Vista, the AMD runs XP SP2. Both have 2 GB RAM. I think the fastest C2D is around 2.8 GHz.

Tam

Reply to
Tam

Matlab can benefit from multiple cores, but with LTspice being single-threaded you get the most bang for your buck by going to the fastest CPU you can afford.

Howard

Reply to
hrh1818

Does anyone know what the difference is between an Intel Dual-Core and the Core 2 Duo? Is one 32-bit and the other 64-bit?

This here machine has a dual core and it really shows up as two separate CPUs in the control panel.

--
Regards, Joerg

http://www.analogconsultants.com/
Reply to
Joerg

I've read, but haven't researched, that the Core 2 Duo is 2-4 times faster than an equivalent Dual-Core. They said it manages CPU usage much better. I look forward to seeing per-core CPU usage graphs in Performance Monitor, if Windows XP will do that.

Reply to
John Doe

MS has doomed itself. They have removed XP from the stores as of today, unless they changed their mind since then.

Reply to
Jamie

You should boot up the new Knoppix DVD (finally, a new release 5.3.1). It sees all cores too... and bus widths.

ftp://ftp.kernel.org/pub/dist/knoppix-dvd

ftp://ftp.kernel.org/pub/dist/knoppix-dvd/KNOPPIX_V5.3.1DVD-2008-03-26-EN.iso

Reply to
MassiveProng

I see, same reason as for high-voltage power lines. Does the lower voltage also have anything to do with the fact that electrons travel shorter distances? Thanks.

--
Thanks for the replies.
Reply to
John Doe

The term for today is

SLEW RATE

Do you think we could get anywhere near the speeds we run at now if we had to swing up to the original TTL voltage thresholds?

The heat produced and power needed would be far higher as well.

Reply to
MassiveProng

These days the faster single-core CPUs give less bang per buck than the multicore (or hyperthreaded) CPUs (check out benchmarks on the sort of app mix you want to run). Hyperthreading was a pre-multicore technology but still offers a performance advantage in the right applications.

Image processing and heavy-duty number crunching designed to be multithreaded can benefit hugely from multicore (e.g. FFTW in that recent thread). Naive code that assumes a single core cannot. But even then, having a spare CPU can be useful.
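
For the FFTW case, a rough sketch of what enabling its multithreaded transforms looks like, assuming an FFTW 3 build with the threads library (sizes and test data are arbitrary):

/* Sketch of letting FFTW spread one big transform across both cores.
 * Assumes FFTW 3 built with threading support; link with
 * -lfftw3_threads -lfftw3 -lpthread -lm */
#include <fftw3.h>
#include <stdio.h>

int main(void)
{
    const int n = 1 << 20;               /* 1M-point real-to-complex FFT */

    if (!fftw_init_threads()) {          /* set up FFTW's thread machinery */
        fprintf(stderr, "no thread support in this FFTW build\n");
        return 1;
    }
    fftw_plan_with_nthreads(2);          /* plans made after this use 2 threads */

    double       *in  = fftw_malloc(sizeof(double) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));
    for (int i = 0; i < n; i++)
        in[i] = (i % 64) / 64.0;         /* arbitrary test waveform */

    fftw_plan p = fftw_plan_dft_r2c_1d(n, in, out, FFTW_ESTIMATE);
    fftw_execute(p);                     /* the transform itself runs on 2 cores */

    printf("bin 0 = %g\n", out[0][0]);
    fftw_destroy_plan(p);
    fftw_free(in); fftw_free(out);
    fftw_cleanup_threads();
    return 0;
}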

I often have one core dedicated to analysing my chess games in the background (the performance impact is unnoticeable except on the cooling fan speed). When I am working, I limit it to a single core.
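
On Windows that per-process limit can be set from Task Manager's Set Affinity option, or programmatically; a sketch with a placeholder PID (not tied to any particular engine):

/* Sketch of confining another process (e.g. a background chess engine)
 * to core 0 on Windows. The PID below is a placeholder. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;                      /* placeholder: the engine's PID */
    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed (%lu)\n", GetLastError());
        return 1;
    }
    /* Bit 0 set = the process may run on CPU 0 only */
    if (!SetProcessAffinityMask(h, 0x1)) {
        fprintf(stderr, "SetProcessAffinityMask failed (%lu)\n", GetLastError());
        CloseHandle(h);
        return 1;
    }
    printf("process %lu now confined to core 0\n", pid);
    CloseHandle(h);
    return 0;
}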

Explorer in XP sometimes pegs a core like that at bootup. On a dual-core machine it is easy enough to kill the aberrant process, but on a single-core machine the CPU is locked up in a tight loop with interrupts disabled.

My new machines are dual core. Unless you have very unusual requirements it is unlikely that you would benefit from a faster single core.

Regards, Martin Brown

Reply to
Martin Brown

Thanks, that's a good idea.

--
Regards, Joerg

http://www.analogconsultants.com/
Reply to
Joerg

If they really did that, it would be a major corporate blunder.

--
Regards, Joerg

http://www.analogconsultants.com/
Reply to
Joerg

I'm not sure why you say the currents have to increase as feature sizes decrease. The big speed improvement from smaller feature sizes is the reduced capacitance. So the RC times (assuming the R is constant) go down and the clock can run faster. Or the current can decrease, yielding much lower power consumption. This is what has been fueling improvements in IC fabrication for the last several decades. Where this breaks down is when the voltages get so low that the transistors don't actually shut off entirely, resulting in much higher quiescent currents, AND the R in the RC starts to increase, limiting speed improvements.

Over the last four or five years processors have gained *nothing* in clock speed. When I bought my last PC in 2002, the fastest chips from Intel were clocking at 3 GHz+. The fastest clock speed today? About 3 GHz! This is a little bit apples and oranges because the architectures have changed a bit. The P4 in use 6 years ago was optimized for raw clock speed at the expense of longer pipelines causing more delays on branches. But the point is still that in 6 years the speed gain of CPUs due to clock increases is nearly zip.

With ever-increasing density from the smaller process geometries, the question becomes: if you can't speed up the clock, what can you do to make the CPU faster? They have already added every optimization possible to speed up a CPU, so just adding transistors won't achieve much. So the only other way to get more speed is to add more CPUs!

So that is why we have dual, triple and quad core CPUs now instead of just making the CPUs run faster.

The goal is speed vs. cost. If we are talking about PC type CPUs, it is never an advantage to use older technology as long as it is not so bleeding edge that you can't get decent yields. Adding multiple CPUs is a significant step because of the increase in die area increasing the cost. But as the process improvements provide more transistors on the same size die, the only useful way to take advantage of the gates is to add more CPUs.

Actually, they have turned the power curve around by using different structures for the transistors. They are harder to make, but the added cost is justified by the reduced power consumption. In 2002 CPUs were reaching the 100 W level. None were below about 60 W. Now you can find CPUs that are only 35 or even 25 W without turning performance back to the stone age like the Via chips.

Reply to
rickman

Good luck on that "never crashes" thing. How do you think a given CPU will get its program? How does it know it is a device driver rather than an application? How does memory *not* get shared? Main memory will never be on the CPU chip, at least not as long as they want to keep increasing performance at a low cost.

Multi-CPU systems are nothing new. They have been done for ages. In fact, one of the very first computers was a multi-processor machine. The problem is that it is very hard to use many CPUs efficiently. We are bumping up against some real limitations in CPU speed improvements, so we have to start using multiple CPUs. But these are also inefficient. We are reaching an age where progress will come more slowly and with more cost and pain.

Reply to
rickman
