OT: Dual-core CPUs versus faster single-core CPUs?

You apparently haven't been reading how nanotubes are going to save us all!

(Ducking... :-) )

Reply to
Joel Koltner

The OS cpu will assign it a task, create its memory image, set up its privileges, and kick it off. And snoop it regularly to make sure it's behaving.

How does it know it is a device driver rather

See above.

How does memory *not* get shared? Main memory

Hardware memory management can keep any given CPU from damaging anything but its own application space. Intel has just begun to recognize - duh - that it's not a good idea to execute data or stacks, or to allow apps to punch holes in their own code. Allowing things like buffer overflow exploits is simply criminal.
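
Something like this minimal POSIX C sketch illustrates the idea (the page size and protections are just an example, not any particular OS's policy): data pages are mapped with no execute permission, and can later be locked read-only so an overflow can't quietly patch them.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Map one page read/write but NOT executable (no PROT_EXEC). */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    strcpy(buf, "data, not code");

    /* Lock the page down: a later stray write (e.g. a buffer
       overflow trying to patch it) now faults instead of
       silently succeeding. */
    mprotect(buf, 4096, PROT_READ);

    printf("%s\n", buf);
    munmap(buf, 4096);
    return 0;
}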

But speed is no longer the issue for most users. Reliability is. We need to get past worrying about using every transistor, or even every CPU core, 100%, and start making systems manageable and reliable. Since nobody seems able to build reliability or security into complex software systems, and the mess isn't getting any better (anybody for Vista?), we need to let hardware - the thing that *does* work - assume more responsibility for system integrity.

What else are we going to do with 256 CPUs on one chip?

John

who just rebooted a hung XP. Had to power cycle the stupid thing. But I'm grateful that it, at least, came back up.

Reply to
John Larkin

Nanowires will put hard drive capacities onto a chip die. All with actual magnetic domains too.

Getteth thy selfeth a clueeth.

Reply to
MassiveProng

And when the chips behind that logic latch up?

Reply to
MassiveProng

Logic in FPGAs can be incredibly complex - things like gigantic state machines, filters, FFTs - and they run for millions of unit-years without dropping a bit. Even Intel CPUs, which are absolute horrors, are reliable. Big software systems are unreliable, as a power function of the size of the program. The obvious path to reliability is to run smaller programs on severely hardware-protected processors.

What else are you going to do with 1024 CPUs on a chip?

John

Reply to
John Larkin

You are missing my point. The fact that tasks run on separate hardware does not mean they don't share memory and they don't communicate. You still have all the same issues that a single processor system has. It is **very** infrequent on my system that it hangs in a mode where I can't get control of the CPU. I am running Win2k and I let it run for a month or more between reboots. Usually the issue that makes me reboot is that the system just doesn't behave correctly, not that it is stuck in a tight loop with no interrupts enabled. So multiple processors will do nothing to improve my reliability.

So this indicates that multiple processors don't fix the problem. The proper use of hardware memory management fixes the problem. No?

That is the big question. I like the idea of having at least two processors. I remember some 6 years ago, when I was building my current computers, that dual CPUs on a motherboard were available. People who do the kind of work that I do said they could start an HDL compile and still use the PC, since each job had its own processor. I am tired of my CPU being sucked dry by my tools, or even by Adobe Acrobat during a download, with the CPU nearly halting all other tasks. Of course, another solution is to ditch the Adobe tools. Next to Microsoft, they are one of the very worst software makers.

Personally, I don't think we need to continue to increase processing at this geometric rate. Since we can't, I guess I won't be disappointed. I see the processor market as maturing in a way that will result in price becoming dominant and "speed" being relegated to the same category as horsepower. The numbers don't need to keep increasing all the time, they just need to match the size of the car (or use for PCs). The makers are not ready to give up the march of progress just yet, but they have to listen to the market. Within 5 years, nobody will care about the processor speed or how many CPUs your computer has. It will be about the utility. At that point the software will become the focus as the "bottleneck" in speed, reliability and functionality. Why else does my 1.4 GHz CPU seem to respond to my keystrokes at the same (or slower) speed than my 12 MHz 286 from over 10 years ago? "It's the software, stupid!" :^)

I am ready to buy a laptop and I am going to get a Dell because they will sell an XP system rather than Vista. Everything I have heard is that XP is at least as good as Win2K. No?

Reply to
rickman

I use Foxit and CutePDF. Both fast, bulletproof, and free.

Yes. So make the software, especially the OS, simpler. Vista tried going partly towards the "small kernal" approach for reliability, but took a big hit on performance by moving the graphics stuff out of kernal space. If it ran in its own CPU, there would be no penalty. Any "big kernal" OS (like Windows or Linux) will spend a lot of time context switching, stack swapping, reloading memory management hardware, doing interrupts, all the junk you'd not have to do if there were a CPU per process.

Microsoft's approach to multicore is to make things more complex, not less. Hell, Microsoft's approach to everything is to make it more complex. Ironic that the biggest software company on the planet writes garbage software.

XP seems fairly solid, in most installations. Being a Microsoft product, there are occasional systems that crash often, for no obvious reason. XP does boot up a lot faster than 2K.

John

Reply to
John Larkin

I'm afraid that you've just declared your intention to solve the "halting problem" - that is, to write one program which can determine whether another program is going to terminate correctly or run forever.

This has been proven to be impossible to do, in the general case. It's closely related to Godel's "incompleteness" proof.

The supervisory CPU can certainly detect some kinds of malfunction in the programs running on the secondary cores... but the ability to detect _all_ sorts of software malfunction, accurately, and shut down just those instances which are truly malfunctioning, without occasionally slaughtering ones which are working correctly, simply does not and apparently cannot ever exist.

It's a shame!

--
Dave Platt                                    AE6EO
Friends of Jade Warrior home page:  http://www.radagast.org/jade-warrior
  I do _not_ wish to receive unsolicited commercial email, and I will
     boycott any company which has the gall to send me such ads!
Reply to
Dave Platt

Not at all. I'm not asking the supervisor cpu to predict anything.

Theory keeps a lot of useful things from being done. We don't need to do a perfect job in the general case, we only need to trap the great majority of malfunctions (which, if the system is done right, will be rare.)

The supervisor can check all sorts of stuff: memory/stack violations, queue service, cpu loading, basic stuff like that. It can also send test messages to which another process must respond, or set up simple watchdog schemes. The OS cpu can even delegate that level of supervision to another trusted cpu. But it should be physically impossible for any other process to crash the OS cpu.
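
A minimal C sketch of the watchdog part (the counts, names, and restart action are illustrative, not a real OS interface): each worker bumps a heartbeat counter, and the supervisor flags any worker whose counter stops advancing.

#include <stdio.h>

#define NWORKERS      4
#define TIMEOUT_TICKS 3

struct worker {
    unsigned long heartbeat;   /* incremented by the worker itself   */
    unsigned long last_seen;   /* snapshot taken by the supervisor   */
    int           missed;      /* consecutive ticks with no progress */
};

/* One supervisor pass: compare each heartbeat to the last snapshot. */
static void supervise(struct worker w[])
{
    for (int i = 0; i < NWORKERS; i++) {
        if (w[i].heartbeat == w[i].last_seen) {
            if (++w[i].missed == TIMEOUT_TICKS)
                printf("worker %d hung: kill and restart it\n", i);
        } else {
            w[i].missed = 0;
        }
        w[i].last_seen = w[i].heartbeat;
    }
}

int main(void)
{
    struct worker w[NWORKERS] = { 0 };

    for (int tick = 0; tick < 5; tick++) {
        for (int i = 1; i < NWORKERS; i++)
            w[i].heartbeat++;          /* worker 0 is "hung" on purpose */
        supervise(w);
    }
    return 0;
}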

None of that is hard. In fact, it's all simple. Heck, even single-CPU OSes like RSTS or VMS used to run for months between power failures, before Windows came along.

John

Reply to
John Larkin

Well, how about...

Error detection. Have three CPUs do essentially the same calculations. If they agree, continue. If they disagree, take the result from the two that agree. The overhead is minimal as the processes are all concurrent. However, the power dissipation might be up to 3 times higher than with a single CPU.
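
The two-out-of-three vote is only a few lines. A hedged C sketch (the square() computation is a stand-in; in real triple modular redundancy the three results would come from three cores running concurrently):

#include <stdio.h>

static long square(long x) { return x * x; }   /* stand-in computation */

/* Majority vote: return 0 and the value at least two results agree on,
   or -1 on a three-way disagreement. */
static int vote(long a, long b, long c, long *out)
{
    if (a == b || a == c) { *out = a; return 0; }
    if (b == c)           { *out = b; return 0; }
    return -1;
}

int main(void)
{
    long r;
    if (vote(square(7), square(7), square(7), &r) == 0)
        printf("agreed result: %ld\n", r);
    else
        printf("no majority: flag the fault\n");
    return 0;
}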

CPUs are also subject to errors from glitches, RF, nuclear particles, leakage, and cache data errors. It's not a huge effect, but it's worth eliminating. By running concurrent duplicate processes, such errors can be detected when they happen, instead of many CPU cycles later as with a checksum or CRC method.

Real-time computing basically means that a process is guaranteed service of a processor interrupt within a bounded time delay. With a single processor, the whole context has to be saved in order to service the interrupt. With multiple processors, each processor can be allocated to servicing interrupts as they arrive, with presumably minimal delay.
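
On a conventional OS you can at least approximate dedicating a processor to interrupt-service work with CPU affinity. A Linux-specific C sketch (the CPU number is arbitrary; other systems spell this differently):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(1, &set);   /* pin this process to CPU 1 */

    /* 0 = the calling process; afterwards the scheduler keeps it
       on CPU 1, so its service loop never waits behind the
       context switches of other tasks. */
    if (sched_setaffinity(0, sizeof(set), &set) == 0)
        puts("pinned to CPU 1");
    return 0;
}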

Some problems are just begging for multiple processor platforms. For example, huge database lookups, where instead of sequentially sifting through a database, each processor grabs a piece to work with. This divides the problem into easily digestible pieces. Same with any problem that can benefit from parallelized processing (DSP, weather models, fluid dynamics, games, etc).
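
A parallel table scan takes only a few lines with threads. A hedged C sketch (the array stands in for the database; the sizes and slice counts are arbitrary):

#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static int data[N];                 /* toy "database" */
static const int target = 424242;

struct slice { int lo, hi, found_at; };

/* Each thread scans only its own slice of the table. */
static void *scan(void *arg)
{
    struct slice *s = arg;

    s->found_at = -1;
    for (int i = s->lo; i < s->hi; i++)
        if (data[i] == target) { s->found_at = i; break; }
    return NULL;
}

int main(void)
{
    pthread_t    tid[NTHREADS];
    struct slice sl[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = i;

    /* Hand each thread an equal slice of the table. */
    for (int t = 0; t < NTHREADS; t++) {
        sl[t].lo = t * (N / NTHREADS);
        sl[t].hi = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, scan, &sl[t]);
    }
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        if (sl[t].found_at >= 0)
            printf("thread %d found it at index %d\n", t, sl[t].found_at);
    }
    return 0;
}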

Maintenance tasks and monitoring can also be offloaded to dedicated processors. Although these are not major CPU-intensive tasks, by offloading them to a "virtual dedicated processor" (my term), the OS software can be modularized and presumably simplified (unless taken over by Microsoft).

Inter-process and inter-processor communications effectively become the same when everything is on one chip. The problems of communications bandwidth (propagation delays), timing skew, race conditions, and glitches, which are all present when the functions are on separate chips, mostly disappear when they're all on the same die. Communication is MUCH faster on a single chip than between chips on a board, or between boards.

Argh. Late for a free lunch...

--
Jeff Liebermann     jeffl@cruzio.com
150 Felker St #D    http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Actually, it is better to not have them all on the same chip. A bullet or fragment passes through that PWB or chip, and the entire system is down. Better to have computer "sites" placed throughout the airframe or system in question, and run code that, in a worst-case failure, can "seek out" another computer and get time slices on that machine to keep from losing the process itself at any time.

Instead of mere redundancy, one would have a huge distributed computing network where no process ever gets lost because a piece of physical gear has gone down.

There are nine cores on a Cell CPU. IBM, however, sells "blades" that start out with two Cells on each. So even the local redundancy is not on the same chip. If each CPU had 1024 sub-processors in it, then one could code an OS that ensures that any broken sub-processor's running code would get passed off onto another working sub-segment.

I'd bet that IBM is going to play a bigger part in our future than we might guess. You should examine how instructions work through the pipes on the Cell. They are getting 10x the performance of a PC for some things. Your FFTs, for example, would run on the Cell far faster than on any FPGA. They could get processed as a part of the overall data traffic handling. Essentially the same as having the FPGA do it, but here it can be coded in software, and changed far more easily - even more easily than burn-on-the-fly gate arrays. Chips cost money; software changes are cheap.

Reply to
MassiveProng

The word is KERNEL.

Reply to
MassiveProng

You have the same processes running on a redundant system, tick for tick. Then, a third oversight code loop looks for stopped processes on paired sets of computers. When and if it finds one computer that is still running or still wants to step forward, and the other is frozen or latched or broken or GONE, the "still running" computer is allowed to continue. This redundancy works, of course, through only one single catastrophic failure iteration. If the failure was not catastrophic, the now-fixed computer could be brought back up, "given" a mirror of the state of the computer it was paired with, and re-integrated as the redundant half of that pair.
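
The promotion rule itself is simple. A hedged C sketch of one oversight pass (tick counters and names are illustrative; a real system would compare over a hardware-timed interval):

#include <stdio.h>

struct machine { unsigned long tick; int alive; };

/* If exactly one of the pair stopped advancing, promote the survivor.
   Returns 0 to promote A, 1 to promote B, -1 if no action is needed. */
static int oversee(const struct machine *a, const struct machine *b,
                   unsigned long last_a, unsigned long last_b)
{
    int a_ok = a->alive && a->tick != last_a;
    int b_ok = b->alive && b->tick != last_b;

    if (a_ok && !b_ok) { puts("B frozen: A continues alone"); return 0; }
    if (b_ok && !a_ok) { puts("A frozen: B continues alone"); return 1; }
    return -1;
}

int main(void)
{
    struct machine a = { 0, 1 }, b = { 0, 1 };

    a.tick++;                 /* A steps forward; B stays frozen */
    oversee(&a, &b, 0, 0);
    return 0;
}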

When I used to build simulator racks for MD, we did 17 racks (times two, actually, so 34) for what they considered the mission-critical computers on the C-17 (17 out of 54). We made the cables that interconnect the systems pipe up into the rack and through a peg board that let them create any short or open combination on a system. The front of each rack had Delrin slide mounts for two of the computers to slide in, plus the peg board. So we had a data recorder rack, a HUD rack, etc. They made sure that their master software would kick in the good computer in the event of a failure of either one of any given pair.

So it can be done. The problem is the master control software one must write to manage the decision-making process as to whether a given process or computer has stopped running.

That is why I think my distributed computing network scenario would be better. Then the entire process tree can be assured that it will be running somewhere, and continue to run somewhere, should the current machine stop working.

Catching a single-bit latch-up in a timely manner is hard to do, however. That's one reason why we have radiation-hardened devices.

When you absolutely, positively must have the bits... all of the bits... and only that set of bits... delivered.

Reply to
MassiveProng

Are you running Windows? If so, you won't have to worry about them going idle.

Andy Grove giveth, and Bill Gates taketh away.

--
These are my opinions, not necessarily my employer's.  I hate spam.
Reply to
Hal Murray

I've seen it spelled both ways. And written three myself.

John

Reply to
John Larkin

Not in collegiate Computer Science curricula, you don't.

Reply to
MassiveProng

I've never taken a programming course. Or one in digital logic.

John

Reply to
John Larkin


"Most people fail to consider that good programmers are very bright. Their thoughts are extremely well organized and most of them have the benefit of higher education. Their brains are not warped by overexposure to TV and their attention spans are not short-circuited by overindulgence in sex, drugs, or alcohol. They are not constrained by conventionality. If you want to get picky, there are a lot more programmers than there ever were writers. And programmers simply work harder than writers. Few writers work 100 hours a week; almost all programmers do."

The result, according to Wirth? "All programmers write at least as well as Faulkner. Most are as good as Proust, and about a third are as good as Dickens. Several hundred are at least as good as Shakespeare. So the manuals you thought were inferior were simply beyond your poor ability to appreciate. If you were a programmer, you would delight in their verbal virtuosity," he said.

In fact, Wirth claimed, even the grammatical errors and misspellings in the manuals were placed there deliberately. Most are elaborate literary allusions and puns; some are inventive Joycean neologisms. As an example, Wirth discussed the history of the word "kernal."

"Everyone, including programmers, knows the word is spelled k-e- r-n-e-l," he explained. "The deliberate misspelling is an implied criticism of the typesetter (a writer's bane for years.) Of course typesetters kern the letter l; thus, `kern el'. But kerning can only be done for certain letter combinations, such as two l's. Thus, `kern a l' dares the typesetter to kern an isolated l, an obvious typographic impossibility.

"Moreover," he continued, "`kernal' is an anagram for `rankle,' which describes programmers' feelings toward typesetters. Finally the inventor of this particular word, R. K. Lane (who is well known within the Southern California computer community) has concealed his name by means of yet another anagram."

Best regards, Spehro Pefhany

--
"it\'s the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

OK. Then my response to the prior horseshit post is this:

In that case, you saw it spelled INCORRECTLY two out of the three "ways" you claim to have "seen it spelled".

Reply to
MassiveProng

I assume you are aware that neither of these programs is the same as Acrobat. CutePDF is just a reader. It is also not open source; it is just free. Likewise, Foxit is not open source; it has more features, but they put "evaluation" stamps on pages you edit. I wonder if I can remove the stamps with Acrobat... that would be ironic, not to mention moronic.

I guess the point is that you have a choice of free PDF readers.

You still have not explained why multiple CPUs are required to make software more robust. I don't see anything you've said that does not apply equally to software running on a single CPU. As to the "penalty", in the case of multiple CPUs, the penalty is that each of these CPUs will run an order of magnitude slower than the single CPU, perhaps even slower than that. I guess that if we are hitting a wall for processor performance, then multiple processors can be made nearly as fast as a single processor, no matter how much room you have on the die. But I say there is no need to make CPUs faster. Criminy! These things do literally ***billions*** of things per second. Just how much does it take to put images on a CRT, or to recalculate a spreadsheet, or to display a web page??? I seem to recall that all of these things ran perfectly well on a 200 MHz CPU 10 years ago. Do I really have *ANY* need for 10 CPUs each running at 3 GHz??? Not if the software were written to run better.

I find it funny when a PC takes some seconds to do something and, when I ask one of the other engineers what the heck the thing is doing, I get answers like, "It takes a lot to do X". Does it really take ****BILLIONS**** of steps to do anything that is not supercomputer stuff??? I seem to recall that Cray supercomputers were not as fast as today's PCs. I know I worked on an array processor that was second only to the Cray in speed at the time, and it did 100 MFLOPS. Now CPUs exceed that by an order of magnitude, and they still trip over their own feet when displaying a web page!!! No, it's not the hardware, it is the software. We could all live rich, full lives with 100 MHz 486s if they would just write the software to run efficiently. Well, maybe not 486s, but you get the idea. As a case in point, I am still using a machine I built some 6 years ago, and it was a budget build with all the cheapest and pretty much slowest components of the time. It seems to do the job just fine even now. The only issue is when various software bogs it down and sucks the CPU cycles dry... like Acrobat!

Ok, rant over. But you have to admit that this is a little more interesting than arguing over how to spell chernaell.

Reply to
rickman
