a dozen cpu's on a chip

You really should investigate *why* your PCs are so unreliable rather than blathering on about how the world would be all sweetness and light if only Intel would make a 256 core CPU. You are clearly doing something pretty bad to the virtual memory if it stalls for seconds at a time. What do you see if you run performance monitor?

XP is fairly robust. Not perfect, but it should not be forever falling over unless there is some dodgy device driver, malware or other root cause.

Your imaginary solution would only work if all the CPUs were absolutely protected from each other. There are other ways to do this with a single CPU and a decent time sharing OS - the fact that MS implementations of Windows fall way short of this ideal does not mean that it cannot be done.

Silicon real estate is never free, and they use power. Having thread level context switching support might make good sense though.

Only if every CPU has its own dedicated memory. In flat linear addressed memory any CPU core can hit any address unless you take steps to prevent it in the OS.

Bloatware is where it is at. I doubt we will ever get rid of it now :( The importance of form over content is well known to salesmen.

Regards, Martin Brown

Reply to
Martin Brown

It is the output of ISO WG13 so you have to buy the (very) expensive ISO standard if you want to play with it. However, there are several compilers still available which implement ISO M2. Personally I don't like their choice of syntactic sugar, but I can see why they did things. I subscribed to another M2 dialect. By the time the full spec was published most industrial people had moved to other languages for purely commercial reasons.

The specification is in VDM-SL which can be checked by software tools. It isn't very easy to read though. A small piece is online as an example and is a bit out of date but gives the flavour.

formatting link

AFAIK it is one of only a handful of non-trivial languages with a complete formal language specification that has been machine checked.

Some problems with the Viper chip showed that hardware struggles with formal proof of correctness too.

Some of the better static analysis tools for Modula-2 can see into the program and find very deep faults that would not be found in normal execution. Apparently-reliable working libraries were found to have a few flaws when the most successful static testing tools became available.

Halting problems are notoriously difficult.

No. Although I was never a great fan of Ada.

I think it was John Barnes at Praxis that did the high integrity stuff.

It is a common misconception that Lisp is always interpreted. I was involved long ago in the development of a Common Lisp compiler. There are incremental Lisp compilers which may look from the outside like interpreters but generate fast native code. The whole compiler and its libraries were written in Lisp and bootstrapped onto new hardware with a Lisp interpreter. Only the deepest layer of OS interface was done in assembler. It ran on the Mac and later on the PC.

OS/2 was pretty good in this respect. Step out of line and your process gets swatted before it can do any harm.

Limiting address range each process is allowed to access will do.

You do have to be careful you don't create something that is unwieldy. x86 has an array BOUND instruction but I can't recall ever seeing it used in anger. The Modula-2 solution was to have pragmas that allowed development code to insert preambles and postambles to defend against stack overflow, numeric faults, page faults etc., and a kernel to return a traceback or postmortem dump that would identify the failing code. The traceback still worked in production code but the testing was disabled.
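
The flavour of those preambles and postambles, sketched in C (the macro and function names are my own invention; the real Modula-2 mechanism was compiler pragmas, not macros):

/* Hypothetical sketch of the preamble/postamble idea: checks are
 * compiled into development code, and enough state is kept to print
 * a traceback identifying the failing routine. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_DEPTH 64
static const char *call_stack[MAX_DEPTH];  /* names for the postmortem dump */
static int call_depth = 0;

#define PREAMBLE(name) do {                                  \
        if (call_depth >= MAX_DEPTH) {                       \
            fprintf(stderr, "stack overflow in %s\n", name); \
            abort();                                         \
        }                                                    \
        call_stack[call_depth++] = name;                     \
    } while (0)

#define POSTAMBLE() do { call_depth--; } while (0)

static void traceback(void)  /* what the kernel would return on a fault */
{
    for (int i = call_depth - 1; i >= 0; i--)
        fprintf(stderr, "  in %s\n", call_stack[i]);
}

static int divide(int a, int b)
{
    PREAMBLE("divide");
    if (b == 0) {   /* the kind of numeric-fault check a pragma inserts */
        traceback();
        abort();
    }
    int q = a / b;
    POSTAMBLE();
    return q;
}

int main(void)
{
    printf("%d\n", divide(10, 2));  /* prints 5; divide(1, 0) dumps a traceback */
    return 0;
}

In production you would compile the checks out but keep the name recording, so the traceback still works, as described above.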

The first PC compilers with these capabilities were around in the mid-1980s, distributed by Logitech (now better known for its mouse).

Regards, Martin Brown

Reply to
Martin Brown

XP isn't bad, especially to people whose standards were lowered by '95 and '98. To people who used to run DEC timeshare systems, or who do hard-realtime stuff that must not have bugs, it's still pretty bad.

Blathering? Do you think that computer architectures are perfected, and will never change? Do you think that all the multicore CPUs being introduced will only be used to make things more complex and less reliable? Sure, pile OS virtualization on top of a heap of the gigabyte dogs we're running now, and use the extra CPUs to run multiple threads of Adobe products.

On the whole, this newsgroup should be renamed sci.electronics.tradition. It's practically impossible to get anyone to riff on ideas; these guys mostly want to defend current and comfortable practice. I shouldn't complain... I make a lot of money taking business away from people who refuse to think.

I suppose I won't bother to post my BGA transformer idea. Thousands of man-hours would be expended all over the world inventing reasons why it wouldn't work.

John

Reply to
John Larkin

On 13 May, 20:44, John Larkin wrote:

John, it is not my turn, but anyway, this is an open group. I think you are being a bit unfair. Many people here work on interesting things, some under NDA, and they will not give away new ideas, for good reason I am sure. And what is 'new' anyway? I know you are the best asm programmer in Frisco, if not the whole US, and for sure you will have little problem solving this issue that is bothering everybody from MS to IBM to DARPA, and sure, I would like to hear about your BGA transformer.

Software genius is so rare, however, that I really know of only one example. Back in the TV hacker days, when they finally had figured out the triple DES at the basis of the then-new digital system, everybody did that real-time triple DES in hardware. Special chips, like those made by TI, that came with an NDA, did the thing fast enough, sweating in heat. We tried to do it in software on a real PC, but it was way too slow.

Then this Italian guy published some software for the PC. He said, basically: this is a serial bitstream, and I have a 32-bit processor, so I process each incoming packet bit by bit, using bit 0 of the CPU registers. With this I fill a buffer; then for the second operation I do the first packet in bit 0, the second in bit 1, the third in bit 2, and so on. As all triple DES steps are the same, that is now one instruction per 32 packets: a 32-times speedup.

I have this code. I don't know who this guy was, but when I started reading his description I wondered, 'hey, this could work', and it does. No need for an FPGA. What that guy did is genius in my view. It made software decode of encrypted DTV on normal PCs (1 GHz at that time) very possible, with power to spare for the rest that was needed.
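
To make the trick concrete, here is a minimal sketch (my own reconstruction in C, not the Italian's code) of the bitslice transpose: bit i of all 32 packets is packed into one 32-bit word, after which every bitwise instruction operates on 32 packets at once.

/* Bitslicing sketch: slice[i] holds bit i of each of 32 packets,
 * with packet j occupying bit position j of the word. */
#include <stdint.h>
#include <stdio.h>

static void bitslice(const uint32_t packet[32], uint32_t slice[32])
{
    for (int i = 0; i < 32; i++) {
        uint32_t w = 0;
        for (int j = 0; j < 32; j++)
            w |= ((packet[j] >> i) & 1u) << j;
        slice[i] = w;
    }
}

int main(void)
{
    uint32_t packet[32], slice[32];
    for (int j = 0; j < 32; j++)
        packet[j] = 0x12345678u + (uint32_t)j;  /* stand-in payload */

    bitslice(packet, slice);

    /* One XOR now flips bit 7 of all 32 packets simultaneously; a DES
     * round expressed as AND/OR/XOR gets the same 32-way parallelism. */
    slice[7] ^= 0xFFFFFFFFu;
    printf("bit 7 of every packet processed in one instruction\n");
    return 0;
}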

Over to you :-)

Reply to
panteltje

And Cobol *is* in the modern age with graphics and windoze compatibility. A lot more portable than C.

Reply to
Robert Baer

The business of having programs in the same address space as data would seem to be asking for trouble. Two independent memories, one exclusively for program and another exclusively for data would help; adding (programmable) protected areas for stacks and other program areas would also be of help. But, at the end of the day, the damnOS must be written to USE those protections, and in a logical, intelligent manner. And the M$ gooies fail in that regard...

Reply to
Robert Baer

** Well, a total of 31 processes were started, one after another (at random), but only one or two are running - and never at the same time. Take 31 cores, one for each process and the situation will be almost perfectly identical. Program instructions and/or data must come from the same memory. More cores make for more data contention.

Reply to
Robert Baer

Ha! The Java applet a certain weather site uses often prints graphics in absurd locations on my screen, e.g. Firefox toolbar, Winamp skin, etc. Repainting those areas (by moving them off and back on screen) cleans that up, usually. Not something that "can even happen" in the first place...

Java is much more well-behaved on my XP laptop, aside from its sucking down 100MB+ when the JRE is loaded...

Tim

-- Deep Fryer: A very philosophical monk. Website @

formatting link

Reply to
Tim Williams

At random? Well, it is a Microsoft product.

Of *course* they can't run at the same time if there's only one CPU.

Except that they can run at the same time. And there will be no task switching. And the CPU that runs the OS will be absolutely hardware protected from all the other processes.

Much less. Each can have its own small ram for stack and local variables, and a modest code cache, and there is no context switching overhead. For a given amount of performance, hits to global cache and to main memory would be radically reduced.

That's not where CPU design is headed. That ain't my decision.

John

Reply to
John Larkin

The benefit of using Nvidia's drivers is that they carry the entire API for each chip family, and therefore carry any instruction for a rarely implemented feature, such as 3D capability (Asus). Asus' problem is that they included Windows-only drivers and DLLs along with the main driver set, so the card will not fully function under Linux, for instance.

The Nvidia site drivers (should be) _are always_ the best drivers to go with in all my multi-brand, multi-$400 plus card experience.

Reply to
MassiveProng

Yes, but there are cases where one would expect to see plurality used or needed, and places where one would not.

I would expect to see CPUs, but I would NOT expect to ever see Unicefs.

Also, if you had read more closely, the rule I mentioned is the rule for pluralizing, not capitalization. And there are plenty of acronyms that are all caps and longer than four letters, where any other usage is considered improper.

Reply to
FatBytestard

Leaking memory and IO handles has to do with untidy application programmers failing to free resources correctly. It has absolutely nothing to do with having separate CPUs. I have even seen a VAX die that way: the last thing on its system console was a failure by the supervisor to open a channel to report that it had run out of IO channels.

I have worked on plenty of PDP-11s and VAXes in my time. The DEC-10 was kind of cute too. However, the VAX could very occasionally crash even so. The mechanical disk drives were its weakest component.

Remind me. Just how many CPUs did a VAX 11/780 have? Hint: I know how many a 782 and a 784 (exceedingly rare) had. Googling "VAX 784 problems field rework" shows a relevant summary as the top hit, although the actual page cached and pointed to has been sanitised. DEC had a lot of bother with the 784, and the 785 was a faster single CPU. Do you begin to see the fallacy of your argument?

It has nothing to do with how many CPUs there are and everything to do with a protected-memory OS and a process-privilege environment where threads are given only the resources they strictly need to do their job.

And the ultimately privileged kernel is kept as small as possible. Memory ownership is way more important to system integrity than putting every thread on its own CPU, whatever you may think.

Windows is hobbled by the backwards compatibility with badly behaved games and other programs that peek and poke directly at hardware.

If you really hanker after the old VAX environment you could try:

formatting link
Hardware emulators exist and hobbyist licences are apparently available.

For N>4 basically yes, unless very special circumstances apply that allow some of the CPUs to be dedicated to specific CPU intensive tasks or the problem has extremely high symmetry or is divisible in some other way that lends itself to spreading the load across many CPUs.

It will simply waste power and silicon to no good end in a general purpose PC. This doesn't mean it won't happen, but it will most likely be done to make some meaningless benchmark run faster.

I use mine to run very CPU intensive things in the background whilst still having more than enough horsepower to do normal work.

Your idea is so bad and misguided as to be risible. If you cannot understand this then there is no point in continuing.

Regards, Martin Brown

Reply to
Martin Brown

formatting link

formatting link

formatting link

Dvorak has vague inklings as to what's going on:

formatting link

John

Reply to
John Larkin

On a sunny day (Wed, 14 May 2008 07:09:57 -0700) it happened John Larkin wrote in :

Not really. First, the 4 TB is full with HD recordings in a day or so; look up HD editing. 'Clean disks?' What's he running, Windows? YouTube streaming on a separate core? What a waste of a super-fast, powerful core. Webcam? Even less bandwidth.

Twittering, blogs, yes you _REALLY_ need 10 extra cores for that. The man is an idiot.

WHOAAAAAA! And that Intel guy is a salesman. Now that they are faster than AMD, all of a sudden speed is important.

I have to admit that I was more pessimistic about Intel, but hey, maybe I will be proven right once they have 80 cores with 70 idle..

Reply to
Jan Panteltje

I am far too cheap to pay for it just to go looking for the mistake, but I am going to assert without proof that there certainly must be one. I assert this on the basis that humans don't do proofs, and that computer-checked proofs are only as good as the computer that checked them, which itself must have been proven, and so on: "turtles all the way down".

I had heard it was "practical" reasons. They had to get product out the door and thus had to pick a language and go with it.

"Sly Fox Chicken Coop Guard Inc" at your service. The checking tools would have had to have been checked.

It doesn't state what happens if the extra add of the increment would overflow. Assume an 8-bit unsigned V:

for V:= 1 to 245 by 70

you get loops for V=1, 71, 141, 211, and then what? Does it throw an overflow or not?
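
Since the snippet leaves it open, here is a hypothetical C rendering of that loop (my translation, not from the spec) showing what silent wraparound does:

/* With an 8-bit unsigned V, 211 + 70 wraps to 25, so the loop keeps
 * going with bogus control values instead of stopping after 4 passes. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int count = 0;
    for (uint8_t v = 1; v <= 245; v += 70)  /* 1, 71, 141, 211, 25, ... */
        count++;
    /* C defines the wraparound, so this terminates only by accident,
     * after 73 iterations, when v eventually lands on 247 > 245. */
    printf("%d iterations instead of 4\n", count);
    return 0;
}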

[....]

Yes, as in impossible with a tool that checks in finite time. There is always some bizarre sort of looping that will hang the checking tool or be missed.

BTW: The ADA folks said "there shall be no subsets" early on.

Interesting. "Lots of Inconvenient Single Parenthesis" has come a long way.

Linux seems to be fairly good on that subject too, but that is not what I meant. I was referring to protection at the level of the routine call.

Subroutines could be "pure" in that they only ever reference their parameters and only modify their defined return areas.

Doing it at the hardware level isn't all that hard. The reference only needs to contain the starting address and the length in special hardware. Accesses would always use the reference and an offset into it. The hardware would have to have an adder and comparison circuit special to the addressing process.
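
A software sketch of that add-and-compare scheme in C (the struct and names are my invention, not any particular hardware):

/* Checked reference: every access goes through a (base, length) pair
 * plus an offset, mimicking the adder and comparator described above. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct ref {
    uint8_t *base;    /* starting address */
    size_t   length;  /* extent the hardware would enforce */
};

static uint8_t load(struct ref r, size_t offset)
{
    if (offset >= r.length) {    /* the comparison circuit */
        fprintf(stderr, "bounds fault at offset %zu\n", offset);
        abort();
    }
    return r.base[offset];       /* the adder: base + offset */
}

int main(void)
{
    uint8_t buf[16] = {0};
    struct ref r = { buf, sizeof buf };
    printf("%d\n", load(r, 3));  /* in range: fine */
    load(r, 42);                 /* out of range: trapped, not corrupted */
    return 0;
}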

Borland Pascal would do full range, overflow, and stack checking. It did initialized-object tests on virtual method dispatches.

Reply to
MooseFET

So you and Mr Brown agree that computing has already reached its pinnacle of perfection, and the trillion-transistor chips with hundreds of CPUs will always run Windows, and the individual CPUs will always multitask, because that's efficient.

Well, since absolutely nothing in technology has changed in the last 20 years, I suppose it's safe to assume nothing will change in the next 20 either.

John

Reply to
John Larkin

It may have a mistake in it. I am no expert on WG13s work, but the premise of the verification checkers is that at the lowest level the checker checks itself and is checked by other independently developed tools. It is always possible that they all share some common design flaw, but the bootstrap process starting from a checker that is small enough for a manual proof would seem to be watertight if done carefully. Machine proofs of the 4-colour map theorem have exploited this methodology for instance.

Remember that hardware designs are generally designed and simulated on software tools before they are committed to fabrication.

No. We got fed up waiting for the language committee to finish diddling with things and sloppy C / Windows GUI bindings eventually became a show stopper on the ubiquitous PC.

A strongly typed language was extremely painful to use with early windows bindings where everything was a more or less amorphous blob of 2 or 4 bytes coerced into random interpretations at the whim of the GUI designer and whatever he was smoking at the time.

Indeed and my understanding is that they were. And the final tools were bootstrapped up from a trusted simple tool that was manually proved by a team of mathematicians. Again you cannot absolutely rule out human error, but the approach is about the most robust anyone has come up with so far. Too few people can read these formal verifiable specification languages.

An interesting question and one that is not covered in this spec snippet (which I agree is a defect).

But there is no reason not to use the static testing tools that are available for your chosen language. And also complexity measurements give a strong hint on where to look for coding errors.

Some Ada folks made subsets nonetheless. It was just too big.

I prefer "Irritating". At the time LISP was in one of its periodic ascendancies as the most promising AI language and was just becoming available on workstations. I expect it will do so again before too long.

The BOUND instruction already exists, but isn't often used.

Regards, Martin Brown

Reply to
Martin Brown

Evidently not. VAX hit a brick wall for mainframe CPUs with N=4.

These are all fast multicore hardware designs intended for moderately exotic supercomputer type architectures and all admit that they will be pigs to program efficiently even with specialist tools. They do have their place but it isn't in general purpose desktop PCs.

What you mean is he shares your naive misconceptions.

Time will demonstrate which of us is correct.

Regards, Martin Brown

Reply to
Martin Brown

On a sunny day (Wed, 14 May 2008 08:13:21 -0700) it happened John Larkin wrote in :

That statement none of us made, and I will never make.

, and the trillion-transistor chips with

???? I run Linux.

They may well; it often is more efficient than an extra core. I think Mr Brown already tried to explain this, and I am not really into arguing for the sake of arguing, as you seem to do.

But I would have expected you, as the perfect asm writer, to perhaps have written a small multitasker (is that not what we all set out to do once ;-) ). Then you would know the pros and cons of task switching, the overhead, and why it makes a lot of sense if 40 of the 50 processes sleep most of the time anyway; even if all were to wake up at the same time to do something, one core _still_ would not be 100% loaded. Background tasks, things that do not need to complete in a given time, can run when higher-priority tasks do not. Very efficient.
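
A toy sketch in C of that point (entirely illustrative; not anyone's real scheduler): sleeping tasks cost one flag test per pass, so dozens of idle processes add almost nothing to one core's load.

/* 50 tasks, 40 asleep: a round-robin pass touches sleepers for the
 * cost of a single branch each, with no context-switch overhead. */
#include <stdbool.h>
#include <stdio.h>

#define NTASKS 50

struct task {
    bool asleep;
    void (*run)(int id);  /* one time slice of work */
};

static void busy_work(int id) { printf("task %d ran\n", id); }

int main(void)
{
    struct task tasks[NTASKS];
    for (int i = 0; i < NTASKS; i++) {
        tasks[i].asleep = (i >= 10);  /* 40 of 50 are sleeping */
        tasks[i].run = busy_work;
    }

    for (int i = 0; i < NTASKS; i++)  /* one scheduling pass */
        if (!tasks[i].asleep)
            tasks[i].run(i);
    return 0;
}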

When is the last time you had a holiday? The weather has been good here lately; Frisco is at the same latitude, I think... Otis Redding song: 'Sitting at the dock of the bay' (or something like that). Try it, until the mind quiets down, then forget the whole thing. Or did you have an illegal Mexican write that code?

:-)

Reply to
Jan Panteltje

And you are arguing that nothing much will change over time.

Are you an engineer or an archivist?

John

Reply to
John Larkin
