a dozen CPUs on a chip

I think it is far simpler than that.

IMNSHO the problem is due to software being licensed, rather than sold. The converse is (almost always) true for hardware.

So the SW licenses all include lengthy "we neither know nor care if it works, screw you" clauses.

Whereas hardware, being sold, has to abide by all of the relevant commercial law - you know, if it doesn't work, I get my money back kinda thing.

The "FU2" nature of SW license agreements has then promulgated the "ship without consequence" mentality (or, more correctly, the lack thereof).

Cheers Terry

Reply to
Terry Given

I'd be satisfied with no known crash instance in the history of the language. Provability was a CS fad, until they discovered that they couldn't prove anything useful.

John

Reply to
John Larkin

And maybe they'll seriously implement it some day. It's useless in apps that have code and data on a shared page.

I/D space separation was a given on a lot of 1970s-vintage systems.

John

Reply to
John Larkin

What a horror! Like all Intel architectures.

John

Reply to
John Larkin
[snip]
[snip]

Isn't that how liberals (aka leftist weenies) were hatched?

...Jim Thompson

--
| James E. Thompson, P.E.                          |    mens     |
| Analog Innovations, Inc.                         |     et      |
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    |
| Phoenix, Arizona 85048    Skype: "skypeanalog"   |             |
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  |
| E-mail Icon at formatting link                   |    1962     |

America: Land of the Free, Because of the Brave

Reply to
Jim Thompson

Naw, the liberals occurred first, by decades.

Reply to
JosephKK

Minimalist languages like Modula-2 have been proved to be formally correct, and compilers that implement the formal specification exist.

The static testing possible with that strict Pascal-like grammar catches a very large proportion of common programming errors at (pre)compile time. It is still possible to write something that will crash, but you have to try a lot harder to do it.
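To make that concrete, here is a hedged C sketch of the kind of bug such a grammar catches: C compiles the off-by-one below without complaint, whereas a Modula-2 style range-checked array index (declared over a subrange like [0..9]) would reject the equivalent code before it ever ran.

#include <stdio.h>

int main(void) {
    int readings[10];

    /* Off-by-one: the last iteration writes readings[10], one past the
       end.  C accepts this silently and corrupts the stack at run time;
       a Pascal/Modula-2 subrange index type would flag the out-of-range
       index at (pre)compile time. */
    for (int i = 0; i <= 10; i++)
        readings[i] = 0;

    printf("done (if the stack survived)\n");
    return 0;
}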

Ada's high integrity subset for safety critical work is another one. The full Ada implementation is too much like a race horse designed by committee with way too many bells and whistles hitching a ride.

I suspect LISP comes pretty close to being impossible to crash, but when you include practical implementations, some of the OS impurities will provide ample opportunity to bring the thing to its knees.

For my money any routine that attempts to read from uninitialised memory, or write to memory it doesn't own, should be terminated with a page fault there and then (unless the location has been pre-defined as memory-mapped I/O).
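Today's MMUs can already deliver exactly that behaviour. A minimal POSIX sketch (assuming Linux-style mmap/mprotect): touching "memory you don't own" faults there and then.

#include <sys/mman.h>

int main(void) {
    /* Grab a page, then revoke all access: it is now "not ours". */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    mprotect(p, 4096, PROT_NONE);

    p[0] = 1;   /* page fault (SIGSEGV) right here, as proposed above */
    return 0;
}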

Regards, Martin Brown

Reply to
Martin Brown

Provability for some things in important software is still possible and done, but it is overkill for commercial software.

Provability is precisely what makes key hardware component blocks of modern CPUs reliable. It is much more common in hardware design.

You could never make a hardware floating-point unit correct without a formal specification and proof these days. Cyrix found about 20 small but non-trivial bugs in the original 8087 design when they commissioned a full formal design spec (and the derived test cases) for their own faster x87-compatible math chip.

Regards, Martin Brown

Reply to
Martin Brown

I would like to see this proof. I seriously doubt that there isn't an error in it, like the one pointed out by Gödel in set theory. It may also not be a proof at all. Many so-called proofs are just arguments that a proof could be done, not actual proofs. To be a proof, all the steps must be shown.

I write a lot in Pascal. Although the strict type checking and more obvious syntax do prevent 90% of mistakes, that still leaves the other 10%. Run-time checking prevents some error conditions from continuing to do damage, but the program still bombs.

If you include hung systems in crashed ones, the problem becomes much harder (even though it started out as impossible).

Did you know that there was a mistake in the spec for Ada that allowed "correct" compilers to produce two different results from the same source code? I have the details around here somewhere. It was something very non-obvious about what type an expression had. It takes some pages of code to make the problem show.

I assume that the subset was made after this was discovered.

Ada is 3 of everyone's favorite languages.

Since LISP isn't compiled, it is a lot harder to imagine it killing a system itself, but the base interpreter can't be written in LISP, so the error would be there.

Ownership would have to be strictly enforced for this to work. On the various x86 machines the hardware does exist that would allow the OS to lock things down. It would make the call process a lot slower, but in critical systems it may be worth it.

I think that hardware designers could do a lot to improve matters. Having a call-return stack, a parameter stack, a return value area, and a local variables area, all independent and enforced in hardware, would make it much harder to mess things up.

When a pass by reference is done, the reference could contain the permissions and size information, so that the called routine would be restricted to writing only where the caller wants it to.
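A hedged software mock-up of that idea (the struct and names here are illustrative, not any real ABI): the "reference" carries base, size, and permission bits, and every write is checked against the grant before it lands.

#include <stdio.h>
#include <string.h>

/* Illustrative "fat reference": base address plus size and permissions,
   so a callee can only touch what the caller granted. */
enum { REF_READ = 1, REF_WRITE = 2 };

struct ref {
    void  *base;
    size_t size;
    int    perms;
};

/* Checked write: fails instead of scribbling outside the grant.
   In the hardware version this check would be a trap, not a return code. */
int ref_write(struct ref r, size_t off, const void *src, size_t n) {
    if (!(r.perms & REF_WRITE) || off > r.size || n > r.size - off)
        return -1;
    memcpy((char *)r.base + off, src, n);
    return 0;
}

int main(void) {
    char buf[16] = {0};
    struct ref r = { buf, sizeof buf, REF_WRITE };

    printf("%d\n", ref_write(r, 0, "ok", 2));       /* 0: allowed  */
    printf("%d\n", ref_write(r, 12, "toolong", 7)); /* -1: blocked */
    return 0;
}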

Reply to
MooseFET

We have already discussed your blindness to its beauty and elegance.

You will admit that you can't exec data or the stack on an 8051, I assume.

Reply to
MooseFET

With a Harvard architecture you have no possibility of self-modifying code. Self-modifying code is a possible way to work around the inadequacies of certain von Neumann architectures. ;-) Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

Oh, here's my team of 8051 programmers:

formatting link

I wouldn't execute *anything* on an 8051.

John

Reply to
John Larkin

Not to forget: many great sci-fi movies are based on a computer that "modifies its own code" at some point and becomes conscious (and evil). We would be missing out on Skynet and the Terminator.

M
Reply to
TheM

Other than I/O, you only need a "decrement and jump on non-zero" to do all possible programs.
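A hedged C sketch of the flavour of that claim: below, the only conditional control flow is a decrement-and-test-for-non-zero, which is enough to drain one counter into another. (Strictly, a Minsky-style counter machine also needs a plain increment alongside the DJNZ-style branch; this sketch assumes the increment for free.)

#include <stdio.h>

/* Each loop step is one "DJNZ b": decrement b, jump back while non-zero.
   The body drains b into a one tick at a time, computing a += b. */
static void djnz_add(unsigned *a, unsigned *b) {
    while (*b) {    /* the "jump on non-zero" half */
        (*b)--;     /* the "decrement" half        */
        (*a)++;     /* free increment assumed by this sketch */
    }
}

int main(void) {
    unsigned a = 3, b = 4;
    djnz_add(&a, &b);
    printf("%u\n", a);  /* prints 7 */
    return 0;
}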

Reply to
MooseFET

It is of course completely off-topic, but just to contribute to the noise (and as a posting test :-) ), I am sure I have seen a 512-core chip several years ago.

The problem is what to do with > 6 cores. As you all probably know, the Sony PS3 has a Cell processor with one big and 6(?) small 'helper' processors. Now a multimedia or networking application - say signal processing, decryption, decoding, graphics - will maybe use 4 cores. It is not easy to split a program over more than one core.

Even if a program is threaded, it does not always make sense: I have written threaded programs where some threads use very few resources, and running those on a separate core would make little sense. Some multimedia stuff uses no threads at all (Linux mplayer, IIRC), while others, the xine media player for example, _are_ threaded. And this is from the POV of embedded.

Now sure, you could run an FPGA synthesis on one core, PCB routing on another, SPICE on a third... but how often do you use them all at the same time? So - and I am not even thinking of Microsoft, who only have x86 binaries of their OS - the software that takes full advantage of so many cores for a _general purpose_ OS has, as far as I know, not been invented yet. And are sequential cores always the best solution? Not sure; in the above example the decryption could perhaps be done faster by an FPGA (1 clock). A sketch of the core-splitting problem follows below.
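For a feel of what splitting even a cooperative program over two cores involves, here is a minimal pthreads sketch of the decrypt-then-decode pipeline mentioned above; the stage bodies are placeholders, and only the hand-off structure is the point.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static int slot, full;          /* one-slot mailbox between the stages */
static const int FRAMES = 5;

static void *decrypt_stage(void *arg) {
    (void)arg;
    for (int i = 1; i <= FRAMES; i++) {
        pthread_mutex_lock(&lock);
        while (full) pthread_cond_wait(&cv, &lock);
        slot = i * 100;          /* stand-in for real decryption work */
        full = 1;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *decode_stage(void *arg) {
    (void)arg;
    for (int i = 1; i <= FRAMES; i++) {
        pthread_mutex_lock(&lock);
        while (!full) pthread_cond_wait(&cv, &lock);
        printf("decoded frame %d\n", slot);  /* stand-in for decoding */
        full = 0;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, decrypt_stage, NULL);
    pthread_create(&b, NULL, decode_stage, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Even in this toy, the two stages spend their lives handing one buffer back and forth; if either stage is cheap, the second core buys almost nothing, which is the point made above.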

So, unless they come up with a software solution that makes full use of those cores, perhaps the only other option is to try to up the clock speed; new techniques to reduce power consumption are mentioned here and there. So how about a 10 GHz or 20 GHz clock - would that not make more sense?

So, end of test message.

Reply to
panteltje

My XP, not doing much right now, claims to be running 31 processes. Add in maybe another 30 device drivers, TCP/IP stacks, and file managers, and it would keep a 64-core CPU mostly employed.

Again, it's not about speed, although that would be improved too.

The speed thing is hitting the wall, which is why everybody is going multicore.

John

Reply to
John Larkin

Yes, but would it run faster? That is the issue. Many of those processes use very few resources; that was the point I was trying to make. And if that is so, there is no point in assigning those to their own cores.

Look at the list below from this Linux system (ps avx). Most of the processes do nothing; there is an H.264 encode running in the background, plus a name server, mail server, FTP server, and HTTP server, and processor use is:

Cpu(s): 2.7% us, 5.4% sy, 41.8% ni, 48.5% id, 0.7% wa, 0.3% hi, 0.7% si

The load factor nicely balances around 1.0.

~ # ps avx
  PID TTY STAT  TIME MAJFL TRS  DRS RSS %MEM COMMAND
    1 ?   Ss    0:06    17  29 1878 648  0.1 init [2]
    2 ?   SN    0:00     0   0    0   0  0.0 [ksoftirqd/0]
    3 ?   S     0:00     0   0    0   0  0.0 [watchdog/0]
    4 ?   S<    0:00     0   0    0   0  0.0 [events/0]
    5 ?   S<    0:00     0   0    0   0  0.0 [khelper]
    6 ?   S<    0:00     0   0    0   0  0.0 [kthread]
   29 ?   S<    0:00     0   0    0   0  0.0 [kblockd/0]
   30 ?   S<    0:00     0   0    0   0  0.0 [kacpid]
   93 ?   S<    0:00     0   0    0   0  0.0 [kseriod]
  115 ?   S<    0:44     0   0    0   0  0.0 [kswapd0]
  116 ?   S<    0:00     0   0    0   0  0.0 [aio/0]
  117 ?   S<    0:00     0   0    0   0  0.0 [jfsIO]
  118 ?   S<    0:00     0   0    0   0  0.0 [jfsCommit]
  119 ?   S<    0:00     0   0    0   0  0.0 [jfsSync]
  120 ?   S<    0:00     0   0    0   0  0.0 [xfslogd/0]
  121 ?   S<    0:00     0   0    0   0  0.0 [xfsdatad/0]
  288 ?   S<    0:00     0   0    0   0  0.0 [kpsmoused]
  292 ?   S<    0:00     0   0    0   0  0.0 [reiserfs/0]
  367 ?   S

> Again, it's not about speed, although that would be improved too.

It was hitting the heat barrier at around 4 GHz several years ago. Now we have 3 W 1 GHz processors, it seems; they have made progress in reducing power consumption... so 'the wall' may have moved already?

Reply to
panteltje

Since it wouldn't spend a lot of time context switching, and since it wouldn't crash and require reboots, and since it wouldn't leak memory and require reboots, and since it wouldn't trash virtual page files, and since everything would keep running (as opposed to everything pausing for 10 seconds now and then), yes, for me I'd come out ahead.

So idle them when they're lightly loaded. Transistors are free.

Of course there is. It keeps them from trashing other processes.

Fine. Run each on its own CPU and let the idle ones sleep.

This is *going* to happen. Core count is only going up, and even an OS as stupid as Windows 7 will be able to scatter processes amongst CPUs. So it might be evolutionary, so that we can take advantage of multicore but preserve our investment in bugs and bloat.

John

Reply to
John Larkin

Recently I read a book on high-end CPU design, dated 2006. They claim that the physical limits are still quite far away; the wall is really about investment vs. profit margin. Designing a new CPU and a new process has become an extremely expensive and risky endeavor; consequently, they are trying to get all the juice from what they have by doing non-essential design improvements such as multicore.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

I do not think so; it would be very inefficient to assign a thread that wakes up once a minute to do something simple (like, for example, filling a buffer) to a separate core.

There is inter-CPU communication too. Are memories shared, or separate like in Cell? Then you need to move data as well... I looked into programming Cell - I have the software - but it is considered 'very challenging'. And that is with only 7 cores.

Yea, well, I dunno about MS - I do not often use their stuff - but today I did read that Win 2000 was more secure than Vista.

formatting link
They have an OS and need new products; maybe Win7 will be totally different again - what a nightmare. An OS needs to be a stable interface (also over time and OS generations) between programs and hardware. Hardly so with MS; libc rules :-) Anyway, writing an OS in C++ is already a sign something went wrong (I'll add: in my view).

John, somehow the text I wrote after the ps list is missing. It was:

* So in this case no extra core is needed; it would make no difference!

* It was hitting the heat barrier at around 4 GHz several years ago.
* Now we have 3 W 1 GHz processors, it seems; they have made progress in
* reducing power consumption... so 'the wall' may have moved already?

Well, think for a moment: what would an OS look like that could effectively split a program that sucks system resources and is not threaded over 'n' CPUs? I really do not know. There is a chance to make history!

Reply to
panteltje
