the secret sauce

Indeed. Compare internal construction and instruction sets from x86 to 8080 to 8008 to 4040 to 4004. They show a strong resemblance.

While Intel claimed some kind of compatibility between various generations, the 8080-to-8086 "compatibility", for example, was only at the assembler mnemonic level, not binary.

The proper way to handle compatibility (keeping old customers) would have been to support both the old and the new instruction set. This requires a mode bit to select which instruction set to execute at a given time, so that opcode bit patterns in the new instruction set can be freely allocated.

This was done e.g. with the VAX-11 series computers, with native 32-bit VAX instructions and a compatibility mode for 16-bit PDP-11 code. Of course this required a separate instruction decoder for each instruction set, but the rest of the hardware was largely common.
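The mode-bit scheme described above can be sketched in a few lines: a flag selects which decode table interprets the opcode stream, so the same bit patterns can mean different things in each set. The opcodes and mnemonics here are invented purely for illustration and are not taken from any real architecture.

```python
# Hypothetical two-instruction-set machine with a mode bit.
# Opcode 0x01 means different things depending on the mode,
# exactly the freedom the mode bit buys you.
OLD_SET = {0x01: "LOAD", 0x02: "STORE", 0x03: "ADD"}
NEW_SET = {0x01: "MOV",  0x02: "PUSH",  0x03: "MUL"}  # freely reallocated

def decode(opcode, mode_bit):
    """mode_bit 0 = legacy instruction set, 1 = new instruction set."""
    table = NEW_SET if mode_bit else OLD_SET
    return table.get(opcode, "ILLEGAL")
```

In hardware this corresponds to two decoders sharing one datapath, as in the VAX-11 compatibility mode mentioned above.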

Reply to
upsidedown

Good book: The Soul of a New Machine.

It's about the design team of the 32-bit Nova. Someone made a rule that it had to execute old binaries without a mode bit. And the 16-bit Nova was a dog already.

In one chapter, a lazy guy spends a year not writing the FP microcode. When finally pressured to deliver, he did it in one weekend.

The HP 2114 series was another breed of ugly mutt, a 16-bit PDP-8. DEC was going to do that too, but Gordon Bell and Rick Merrill and a few guys had some beer at Gordon's house one night and rebelled. The result was the very elegant PDP-11. Rick's FOCAL interpreter had a lot to do with that instruction set.

(I am actually named in the source code of FOCAL-11, for writing the random number generator.)

And we all wound up with x86 and Windows.

--

John Larkin      Highland Technology, Inc 

The best designs are necessarily accidental.
Reply to
jlarkin

I read that book a few decades ago. I thought then, as I do today, that avoiding the mode bit was a stupid decision.

The original Nova contained a _single_ four-bit 74181 ALU, so 4 cycles were required to handle a 16-bit word. When you look at the Nova instruction word bit field allocation, some fields went directly into the 74181 function-selection inputs, with some bits controlling the input carry (0/1/complement etc.). The instruction set was so primitive that it resembled the microcode programming of some more advanced processors.
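That "instruction bits feed the ALU control pins directly" idea amounts to a plain bitfield decode. A minimal sketch, with a field layout invented for illustration (the real Nova encoding differs in detail):

```python
# Hypothetical 16-bit instruction word whose fields map straight onto
# 74181-style ALU control pins.  Field positions are made up.
def alu_fields(insn):
    return {
        "function": (insn >> 8) & 0xF,  # 4 bits straight to 74181 S0..S3
        "mode":     (insn >> 7) & 0x1,  # logic vs. arithmetic (74181 M pin)
        "carry_in": (insn >> 4) & 0x3,  # carry source: 0 / 1 / complement
    }
```

No decode ROM, no microcode: the instruction word effectively *is* the microword, which is what made the Nova so cheap and so primitive at once.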

Trying to make some sensible 32-bit instruction formats out of unused or useless Nova bit combinations must have been a pain :-).

Reply to
upsidedown

A fave. Kidder gets the ethos exactly right.

"Are we going to do Eagle with kids, Alsing?"

A couple of years after the events of the book (1981-83), a similar bunch of (mostly) other kids and I built the first civilian direct-broadcast satellite system, Spacetel from AEL Microtel. I recognized the atmosphere immediately when I picked up the book.

It was the Eclipse they had to be compatible with, iirc.

UBC had one of those dogs in the undergraduate physics labs. They had us do a 2-D Laplace's equation experiment (fields around a long skinny capacitor with thick plates). It was done pretty well--we all had to do it experimentally with voltage probes in a water tank, analytically using an approximation of our choice (just not the parallel-plate formula), and numerically using our own code on the Nova. I forget what language it was in--maybe Fortran. (I was learning Fortran about that time for an undergrad research assistantship.)

IIRC the machine had 256k of memory that we had to split four ways. It wasn't exactly time-sharing, but you could segment the address space between jobs. The lab guy (Wolf Breuer) operated the box, so I don't know how that was done.

On purpose? ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Rick's version was some classic modulo math thing. It was statistically terrible. I did a pseudo-random shift register thing, initially as a loadable overlay. Rick liked it, asked for the source, and included it in a later version.
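The "pseudo-random shift register thing" is what we now call a linear-feedback shift register (LFSR). This is not John's original code, just the same idea: a 16-bit Galois LFSR using a standard maximal-length tap mask, which cycles through all 65535 nonzero states before repeating.

```python
# One step of a 16-bit Galois LFSR.  0xB400 is a well-known
# maximal-length tap mask, giving period 65535 for any nonzero seed.
def lfsr16(state):
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state
```

Statistically it is far better than a sloppy modulo generator, and on a small machine it is also far cheaper: one shift and a conditional XOR per number, no multiply or divide.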

FOCAL was amazing. It made the 4K-word PDP-8s and 4K-word PDP-11s useful.

I used it to simulate steamship power plants for the LASH and LHA ships, impressed some old salts, and got a bunch of business. Did a little circuit simulation too.

Plotted transient responses sideways with stars and pound signs on a Teletype and highlighted with red and blue pens.
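Sideways plotting on a Teletype means time runs down the page and the signal axis runs across it, one line per sample, so the plot can be arbitrarily long. A minimal sketch of the technique (the decaying sine is just a stand-in transient, not one of the actual simulations):

```python
import math

# Print one '*' per line, positioned by the sample value,
# mapping the range [-1, 1] onto the carriage width.
def sideways_plot(samples, width=40):
    lines = []
    for y in samples:
        col = int((y + 1.0) / 2.0 * (width - 1))
        lines.append(" " * col + "*")
    return "\n".join(lines)

response = [math.exp(-t / 20) * math.sin(t / 3) for t in range(60)]
print(sideways_plot(response))
```

On a real Teletype you would then go over the printout with the red and blue pens; the code only gets you the stars.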

--

John Larkin      Highland Technology, Inc 

The best designs are necessarily accidental.
Reply to
jlarkin

Intel isn't in the business of making wimpy license-based processors for embedded use. Their business is making and selling processors for desktops and servers. There also aren't any serious servers with an ARM CPU, outside of some management chips or support peripherals.

There's not really any overlap, so I'm not sure what you're even trying to compare.

Reply to
Cydrome Leader

Those wimpy licensed CPUs can outperform Intel's by some considerable margin on any performance per unit power metric you care to choose.

formatting link

AMD have also become better at making Intel-compatible chips that are faster than Intel's.

formatting link

Servers are about the only market share Intel has left, and some of that is because the designers there are very conservative.

--
Regards, 
Martin Brown
Reply to
Martin Brown

Amazon's cloud system offers ARM servers with "up to 40% better price/performance" than x86 systems. The Neoverse chips, used in the Ampere Altra, are more powerful - they equal or beat the top AMD EPYC server CPUs on many workloads (and therefore solidly trounce Intel's best server chips), despite being cheaper and much lower power.

Reply to
David Brown

A color TV used to cost a month's salary. RAM used to cost $50,000 a megabyte. Intel can't charge premium pricing for long, when anybody can throw 256 ARM or RISC-V cores onto some silicon and have it fabbed in Taiwan.

I've long been a fan of putting hundreds of ARMs on a chip. It would require a new, absolutely secure OS, which we badly need too.

Aren't there some big boys spinning their own ARM-based chips for phones and servers now?

formatting link

formatting link

and lots more.

Intel making drones is hilarious. Add that to the long list of absurd Intel failures. The only thing that ever worked for them was high-priced x86 chips, and that's getting old.

--

John Larkin      Highland Technology, Inc 

The best designs are necessarily accidental.
Reply to
jlarkin

Computing power hasn't been the bottleneck for a long time now. The principal bottlenecks are external memory latency/bandwidth, and power dissipation.

Too many cores in a chip rapidly hits the memory limits, unless either it is a special-case application or some other compromise is made. Personally I like the XMOS xCORE approach: up to 32 cores and 4000 MIPS per chip, expandable, with latency guarantees.

As for an "absolutely secure OS", good luck with that fantasy.

I'll happily settle for a simple language (optionally plus library) with solid multiprocessing support. That rules C/C++ out.

Reply to
Tom Gardner

yeh,

"For every problem there is a solution that is simple, neat?and wro ng."

Reply to
Lasse Langwadt Christensen

It starts with bulletproof hardware memory and privilege control. And then actually using it.

C is a bad base for secure computing. It's not well suited to separating code from data from buffers from stack.

Intel and Microsoft sure allied to botch computing. Time to move on.

Intel is well behind the curve on fab too, which was the strong point that let them overcharge for a ghastly architecture. Dare I mention Kodak or RCA or Studebaker?

--

John Larkin      Highland Technology, Inc 

The best designs are necessarily accidental.
Reply to
jlarkin

On 11.01.21 at 17:03, snipped-for-privacy@highlandsniptechnology.com wrote:

You could have asked Xilinx for a special Virtex with all DSP48 units replaced by an ARM. Oh, sorry, they all would have starved for lack of data.

Keeping up with feeding a single Pentiyummy is already an issue (pun intended).

Back in 286 times I had a quite respectable Transputer cluster, but the 4 MBytes of RAM for each CPU could not be integrated. Not for one, much less for many. Dead end.

Back then I was asked to smuggle a large Parsytec cluster to East Berlin. I didn't dare to. Six weeks later the border was gone. Megafail.

Gerhard

Reply to
Gerhard Hoffmann

On 11.01.21 at 18:08, snipped-for-privacy@highlandsniptechnology.com wrote:

As if that did not exist.

Intel's 432 was designed to run Ada efficiently, and Ada was the first language available for it. So you can't blame Intel for never giving nonstandard architectures a chance. Not that I like Ada, with its rendezvous concept and verbosity and formalisms. VHDL, its offspring, still turns me off daily, after all this time.

What has happened to the ooooh-so-secure microkernels? The Hurd? Tanenbaum's Amoeba kernel? Oh, they were all so avant-garde and good and fantastic, and they never delivered. I think there is an Occam compiler that generates C.

Gerhard

Reply to
Gerhard Hoffmann

That too is present in Intel's hardware (since the 386). The trouble was that, apart from OS/2 and some esoteric brands of Unix, nothing really bothered to use the segmented permission architecture with separate code and data. Everything ended up as a flat memory model with at best very limited defences like the no-execute bit :(

Now OS/2 wasn't quite bulletproof, but it was still pretty good, and the interrupt response was sufficiently good to write a software emulation of the buffered 16550 serial driver for the bare Intel RS232 port. It was very difficult to write a user program that could damage the OS without first being summarily terminated with extreme prejudice.

If you want really tough segmentation, Harvard architecture is hard to beat - but you still have to load the program somehow. That is always the weak spot, since some creative aggressor will find a way to subvert it.

Whilst I am inclined to agree that C isn't a great choice, I am increasingly of the opinion that the language doesn't make all that much difference. I have seen plenty of bad code in strongly typed languages.

There is a particularly horrible breed of C coder who throws things at the compiler on a wing and a prayer, with random application of casts until it compiles. This is unfortunately still far too common, and definitely nothing like software engineering. On this we are agreed.

They may well be headed that way. Unfortunately so might ARM.

--
Regards, 
Martin Brown
Reply to
Martin Brown

OS/2 was actually a very good OS, IMHO. It had concepts that Windoz could only have wet dreams about. I always said that if IBM had given OS/2 away, they would have been in much better shape, and MS would not be the 1000 lb gorilla in the room.... I had largely worked on DEC OSes and Unix V7 and BSD during the early PC days. Funny how the BSOD became a common annoyance (almost an expectation) of a computer, whereas Unix and RT-11 and RSX-11 just kept working....

Reply to
three_jeeps


For conversational security, I've always admired this little solution:

Reply to
whit3rd

True.

But some tools make it easier to produce decent code and more difficult to produce incorrect/dangerous code.

I prefer such tools.

Reply to
Tom Gardner

Technology from 30+ years ago. Look at seL4 if you want a modern example.

CH

Reply to
Clifford Heath

seL4's predecessors must be at least as old, if not older; L3 was 1988. Liedtke, the original author, died in 2001, as a Google search reveals.

It is closely linked to the language ELAN, which was used to teach us entry-level CS students correct programming in the software engineering group at the TU Berlin. The compiler ran on VM/370, and I had to ask for 4 MBytes extra; the operators did not like that. But it's a blast from the past when I find my ancient tutors on Google by following your search string.

I tried to write, in ELAN, a compiler for PL/Z8000, a glorified assembler. The stack of paper grew and grew, and somehow so did the remaining work. I talked about that with my tutor from the operating-system group, and he took me to their PDP-11/40E, and there it was: the first Bell Unix V6 installation on our side of the pond. And C, and yacc. That made short work of it. I was sold. :-) If you study the original Unix kernel, you see how C is meant to be used, and it's a good thing.

I did not take note of the L3/L4 activities, although "our" university seemed to be a hot spot. Our OS group collaborated somewhat with Andrew Tanenbaum; I saw a guest lecture by him and was mightily impressed. I even played with the idea of going to the VU Amsterdam, but being an EE already, I moved to chip design.

Gerhard

ps There are more unexpected cross links. awk was originally meant to translate automata for Weinberger arrays. (not unlike PALs, but not programmable after fabrication). In fact, the w in awk stands for Weinberger. (Aho, Weinberger, Kernighan)

Reply to
Gerhard Hoffmann
