OT Dual core CPUs versus faster single core CPUs?

So you have forgotten about the Pentium FDIV bug already?

Reply to
JosephKK

Pretty much. I have had XP running for two months continuous. Did that with 98SE as well.

Reply to
JosephKK

But CPU bugs are rare and documented, and there are workarounds. In a typical PC, in the OS and a reasonable set of apps, there will be thousands of bugs, maybe tens of thousands, most of which are undocumented and never fixed. Next rev will keep many of the old bugs and add thousands more. There's just no comparison between crashes caused by software vs hardware: I bet the ratio is ballpark 1e5:1.

Existing hardware design methodologies work very well; billion-transistor chips are reliable. Current software methodologies are clearly broken: a million line program typically has thousands of bugs.

John

Reply to
John Larkin

That only works if the three versions of the software were written independently to satisfy the same specification, preferably using different tools. It is done only in absolutely mission-critical, life-or-death software. I think parts of the space shuttle launch sequence use this approach (and sometimes the launch is cancelled because the systems disagree at a checkpoint).

Otherwise all you are ensuring is that the same software run three times gives the same answer (which may or may not be true, depending on the FP rounding rules). And you have to be very careful that the additional complexity does not itself add a new mode of failure and unreliability - a failure in the supervisor that compares the answers, for instance.
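The voting scheme described above can be sketched in a few lines. This is a minimal illustration only: the three `version_*` functions and the tolerance are hypothetical stand-ins for independently written implementations of one specification, not real flight code. Note that the voter itself is the single point of failure the paragraph warns about.

```python
# Minimal sketch of N-version majority voting. The three versions below are
# illustrative stand-ins; in a real system they would be written by separate
# teams from the same specification.
import math

def version_a(x):
    return math.sqrt(x)                    # implementation A

def version_b(x):
    return x ** 0.5                        # implementation B

def version_c(x):
    # implementation C: sqrt(x) = exp(log(x) / 2)
    return math.exp(0.5 * math.log(x)) if x > 0 else 0.0

def vote(results, tol=1e-9):
    """Return an answer two of three versions agree on, comparing within a
    tolerance because independent FP implementations may legitimately differ
    in the last few bits."""
    a, b, c = results
    if math.isclose(a, b, rel_tol=tol) or math.isclose(a, c, rel_tol=tol):
        return a
    if math.isclose(b, c, rel_tol=tol):
        return b
    # No majority: abort, as at a launch checkpoint.
    raise RuntimeError("no two versions agree")

answer = vote([version_a(2.0), version_b(2.0), version_c(2.0)])
```

The supervisor (`vote`) is extra complexity: a bug there defeats all three redundant versions at once.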

A much cheaper way to improve software reliability is to port it to another machine or even a different compiler. We just about always found something of interest every time this was done even for code that was extremely robust and had been run on everything from a Cray down to a Z80 (the latter was done to win a bet). Static testing of software is possible but comparatively few shops do it seriously.
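A typical latent assumption that a port flushes out is byte order in serialized data. The sketch below is an illustrative example, not from the original post: a record packed with the host's native byte order round-trips fine on the machine that wrote it, but the bytes differ between little- and big-endian hosts, so the bug only surfaces when the code is moved.

```python
import struct

value = 0x12345678

# Native byte order ('='): the byte layout depends on the host CPU, so a file
# written this way on a little-endian PC is misread on a big-endian machine.
native = struct.pack('=I', value)

# Explicit big-endian ('>'): the same four bytes on every host.
portable = struct.pack('>I', value)

# The portable form round-trips regardless of which machine reads it.
assert struct.unpack('>I', portable)[0] == value
```

Porting to a machine of the other endianness (the Cray-to-Z80 spread mentioned above covers both) is exactly the kind of cheap cross-check that exposes the native-order version.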

CPUs typically have a few bugs each, but they are seldom of major consequence. The last one I can recall that was serious egg on face was the Intel F00F bug. So before you gloat too much about hardware's seeming infallibility, I suggest you read the abstract at:

formatting link ?tp=&isnumber=&arnumber=4211889

CPUs are much more intensively simulated and individual key component blocks like multipliers and dividers are much more amenable to formal methods proof of correctness and reuse than generic freeform business software. Even so there is a test vs performance trade off for release.
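The point about component blocks being amenable to verification can be made concrete: a small fixed-width multiplier has a finite, enumerable input space, so it can be checked exhaustively against a reference, something impossible for freeform business software. The shift-and-add model below is an illustrative sketch, not any vendor's actual design.

```python
def shift_add_mul(a, b, width=8):
    """Shift-and-add multiplier modelled at the bit level - the kind of
    small, well-specified component block a CPU design team can verify
    exhaustively or by formal proof."""
    mask = (1 << width) - 1
    a &= mask
    b &= mask
    result = 0
    for i in range(width):
        if (b >> i) & 1:        # for each set bit of b,
            result += a << i    # add a shifted partial product
    return result

# Exhaustive check against the reference: 2^16 cases for 8-bit operands.
for a in range(256):
    for b in range(256):
        assert shift_add_mul(a, b) == a * b
```

For 8-bit operands that is 65,536 cases; wider datapaths outgrow brute force quickly, which is where formal methods take over.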

ISTR that when Cyrix commissioned a formal specification of the x87 to produce a cleanroom clone that was pin-for-pin compatible, they found a couple of dozen minor defects in the original Intel x87 chip.

I am truly amazed that in the current generation of P4 chips the register colouring and speculative execution don't cause more problems.

In a sense that was as much an algorithmic/firmware error as a hardware bug (and it was exceedingly rare that it triggered). I had one machine with the fault - provided to me by a customer to ensure that our fixed software worked OK even on the defective CPUs.

For comparison, XL2007 pre-SP1 cannot annotate log graphs correctly above 10^8 (two ticks labelled 10000000) and infamously displays 65535-eps as 100000 for certain unlucky values of eps.
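The "unlucky values" arise because binary floating point cannot represent many decimal fractions exactly, so a computation that is mathematically 65535 can land a few ULPs below it. The well-known trigger case for that Excel 2007 display bug was =77.1*850, easy to reproduce in any IEEE-754 double environment:

```python
# 77.1 has no exact binary representation, so the product misses 65535
# by a few units in the last place.
x = 77.1 * 850           # mathematically exactly 65535
print(x)                 # 65534.99999999999
print(x == 65535.0)      # False
```

Excel's formatting code mishandled exactly this kind of value just below 65535, rendering it as 100000 even though the stored number was correct.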

I'd be more inclined to bet 1000:1, maybe 10000:1 at the outside. I have seen the odd bug and/or undocumented feature in most CPUs I have worked on - most of them unimportant, a couple show-stopping, and some of them even useful. One or two were a major security risk.

It is worth pointing out here that all modern chips are designed using software. The big difference is that committing to large scale bulk chip fabrication is *so* horrendously expensive that it doesn't happen until the thing simulates perfectly and tests out OK in prototype hardware against aggressive whitebox testers.

Software by comparison is dirt cheap to duplicate and first-to-market advantage is huge. The result is regrettably a "ship it and be damned" management culture. You can always issue chargeable hotfixes or service packs.

BTW a million-line manually written program produced with average industry practice will typically have around 500 bugs in it. Best practice is one or two orders of magnitude better if you are prepared to pay the price (and wait longer).

Regards, Martin Brown

Reply to
Martin Brown

Well, it's actually microcode firmware, but I won't get too picky. See:

for references to the numerous Core2Duo bugs.

I've had to update several machines' BIOSes in order to provide workarounds for bugs that were causing chronic system crashes. I'm not qualified to determine whether these bugs are serious or even a problem. I do know that I've slammed into them and had to apply workarounds.

--
Jeff Liebermann     jeffl@cruzio.com
150 Felker St #D    http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Why? This has been known technology since the 1970s mainframes.

Reply to
Dennis

As are the verification techniques.

--
Keith
Reply to
krw

Funny enough, I have a P4 3.2GHz hyperthreaded box (not a real dual CPU) at home and a 2.3GHz Core 2 Duo at work. Both have 2GB of RAM. The speed difference is negligible. A few things are faster, notably doing large builds of software, and SQL Server. Otherwise they are the same.

Unless you are doing stuff that can benefit from concurrent threading, two cores are no better than one. 99% of programmers don't understand concurrency, and of those who do, most can't implement it correctly.
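The classic trap behind that claim is the unsynchronized read-modify-write. The sketch below is a generic illustration: incrementing a shared counter from several threads is only guaranteed correct when the update is protected by a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_inc(n):
    """counter += 1 is a read-modify-write; two threads can both read the
    same old value and one update is silently lost."""
    global counter
    for _ in range(n):
        counter += 1

def safe_inc(n):
    """Holding the lock serializes the read-modify-write."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_inc, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000   # always holds with the lock
```

Run the same workload through `unsafe_inc` and the final count may come up short, and worse, only some of the time, which is exactly why concurrency bugs are so hard to implement around.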

The reason my software builds run faster is that each core can be compiling a separate object concurrently. The reason SQL Server speeds up is that it can be inserting some data and updating an index at the same time. Obviously there is more to it than that, but that's the general idea.
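The build case is the easy kind of concurrency: the work items are independent, so no locking is needed. A minimal sketch, with hypothetical source file names and a stand-in for the actual compiler invocation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical translation units; a real build would invoke the compiler
# for each one, e.g. via subprocess.run(["cc", "-c", src]).
sources = ["main.c", "parser.c", "codegen.c", "util.c"]

def compile_unit(src):
    # Stand-in for the compile step. Each unit is independent of the
    # others, so units can run on separate cores with no shared state.
    return src.replace(".c", ".o")

# One worker per core on a dual-core machine; make -j2 does the same thing.
with ThreadPoolExecutor(max_workers=2) as pool:
    objects = list(pool.map(compile_unit, sources))

print(objects)   # ['main.o', 'parser.o', 'codegen.o', 'util.o']
```

This is why builds scale with core count almost for free, while ordinary single-threaded applications see no benefit at all.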

What most people don't understand these days is that the biggest performance gains are not had by upgrading processors and adding memory; they come from getting faster hard drives. Try it sometime :)

Reply to
The Real Andy
