scientists as superstars

Those Bessel things are cool. You can set FM deviation very accurately by making the carrier or selected sidebands disappear.
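(For concreteness: the carrier term goes as J0 of the modulation index, so it nulls at the zeros of J0. A rough Python sketch; the modulation frequency is just an assumed example:)

    # Bessel-null method for setting FM deviation: the carrier amplitude is
    # proportional to J0(beta), where beta = peak deviation / mod frequency,
    # so the carrier vanishes at the zeros of J0.
    from scipy.special import jn_zeros

    f_mod = 1e3                       # assumed modulation frequency, Hz
    for k, beta in enumerate(jn_zeros(0, 4), start=1):
        print(f"carrier null {k}: beta = {beta:.4f}, "
              f"peak deviation = {beta * f_mod:.1f} Hz")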

PLLs are fun. My latest one triggers an LC oscillator from an external input, and phase-locks it, at its natural frequency, to an OCXO within a microsecond or so. It still hurts my head to think about that. I did the analog stuff and delegated the FPGA and the hard math.

A good OCXO has a few picoseconds of RMS jitter, timing out a 1-second delay. A cheap XO has many nanoseconds.
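(Back-of-envelope version of that: the RMS timing error over a gate time tau is roughly the Allan deviation at tau times tau. The stability numbers below are assumed typical values, not anyone's spec:)

    tau = 1.0                # gate time, seconds
    good_ocxo_adev = 2e-12   # assumed short-term stability, good OCXO
    cheap_xo_adev  = 1e-8    # assumed short-term stability, cheap XO

    print(f"OCXO: ~{good_ocxo_adev * tau * 1e12:.0f} ps over {tau:.0f} s")
    print(f"XO:   ~{cheap_xo_adev  * tau * 1e9:.0f} ns over {tau:.0f} s")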

Bang-bang PLLs are interesting too, with a d-flop as the phase detector. I invented that in my youth too, for an analog time-slot telephone system. They also hurt my head.
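(The idea, in a crude toy model: the flop only ever reports "early" or "late", so every update can only bump the VCO by a fixed phase and frequency step in the right direction. All numbers here are invented:)

    import math

    # Toy bang-bang loop with a D-flop phase detector.
    phase_err = 1.0       # VCO phase minus reference phase, radians
    freq_err  = 2000.0    # VCO frequency minus reference frequency, Hz
    dt = 1e-6             # update interval, s
    kp = 0.02             # phase bump per update, radians (proportional)
    ki = 2.0              # frequency bump per update, Hz (integral)

    for _ in range(20_000):
        early_late = 1.0 if phase_err > 0 else -1.0   # all the flop tells us
        freq_err  -= ki * early_late
        phase_err -= kp * early_late
        phase_err += 2 * math.pi * freq_err * dt      # phase keeps slewing

    print(f"ends dithering about lock: freq error ~{freq_err:.1f} Hz, "
          f"phase error ~{phase_err:+.3f} rad")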

Reply to
John Larkin

In a similar vein, I unwittingly invented FSMs plus a neat implementation technique in the first machine-code program I wrote (for that 39-bit serial computer with magnetic logic).

I also conceived of implementing a CPU using microprogramming (as in the AMD Am2900 bit-slice processors).

But neither of those were difficult; they were quite simple, even if most kids don't think of them.

Reply to
Tom Gardner

I actually designed a CPU with all-TTL logic. It had three instructions and a 20 kHz four-phase clock. It went into production, for a shipboard data logger. MACRO-11 had great macro tools, so we used it to build a cross assembler.

When I was at Tulane, the EE department acquired a gigantic (basically a room-full) military-surplus computer that used a drum memory for program and data. The logic modules were big gold-plated hermetic cans that plugged in. The programmer had to distribute the opcodes at optimal angular positions on the spinning drum.
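(That trick is usually called optimum programming: while one instruction executes, the drum keeps turning, so the next opcode goes at whatever word position will be under the head when execution finishes. A rough sketch with invented drum geometry and timings, ignoring the word time of the fetch itself:)

    WORDS_PER_TRACK = 50      # assumed word positions around the drum
    WORD_TIME_US    = 100     # assumed time for one word to pass the head

    def next_slot(current_slot, execute_us):
        """Drum position passing the head just as execution completes."""
        words_elapsed = -(-execute_us // WORD_TIME_US)   # ceiling division
        return (current_slot + words_elapsed) % WORDS_PER_TRACK

    slot = 0
    for exec_time_us in (300, 700, 500):     # invented execution times, us
        slot = next_slot(slot, exec_time_us)
        print(f"place the next instruction at drum position {slot}")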

I have a book, IBM's Early Computers. In the early days, nobody was entirely sure what a computer was.

Reply to
John Larkin

It's a fun book, and does a lot to deflate the Harvard spin, which is always good.

The sequel on the 360 and early 370s is a good read too, as is "The Mythical Man-Month" by Fred Brooks, who was in charge of OS/360, at the time by far the largest programming project in the world. As he says, "How does a project get to be a year late? One day at a time."

Obligatory Real Programmer reference:

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Burroughs programmed their computers in Algol. There was never any other assembler or compiler. I was told that, after the Algol compiler was written in Algol, two guys hand-compiled it to machine code, working side-by-side and checking every opcode. That was the bootstrap compiler.

Isn't our ancient and settled idea of what a computer is, and what an OS and languages are, overdue for the next revolution?

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

In his other famous essay, "No Silver Bullet", Brooks points out that the factors-of-10 productivity improvements of the early days were gained by getting rid of extrinsic complexity--crude tools, limited hardware, and so forth.

Now the issues are mostly intrinsic to an artifact built of thought. So apart from more and more Python libraries, I doubt that there are a lot more orders of magnitude available.

Cheers

Phil Hobbs

Reply to
pcdhobbs

The trick will be to get a revolution which starts from where we are. There is no chance of completely throwing out all that has been achieved until now, however appealing that might be.

I know of two plausible starting points...

1) The Mill Processor, as described by Ivan Godard over on comp.arch. It has many innovative techniques that, in effect, bring DSP-style parallelism to the execution of standard languages such as C. It appears there's an order of magnitude to be gained.

Incidentally, Godard's background is the Burroughs/Unisys Algol machines, plus /much/ more.

2) xCORE processors are commercially available (unlike the Mill). They start from the presumption that embedded programs can be highly parallel /iff/ the hardware and software allow programmers to express it cleanly. They merge Hoare's CSP with innovative hardware to /guarantee/ *hard* realtime performance. In effect they occupy a niche halfway between conventional processors and FPGAs. (A rough sketch of the CSP flavour follows below.)

I've used them, and they are *easy* and fun to use. (Cf C on a conventional processor!)
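(Very loosely, the CSP shape is independent tasks that share nothing and talk only over channels. A plain-Python sketch of just that shape; the real point of xC on xCORE is that the same structure comes with hardware-backed timing guarantees, which ordinary threads do not have:)

    import threading, queue

    channel = queue.Queue(maxsize=1)    # channel between the two tasks

    def producer():
        for sample in range(5):
            channel.put(sample)         # send on the channel
        channel.put(None)               # end-of-stream marker

    def consumer():
        for sample in iter(channel.get, None):
            print("processed", sample * 2)

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()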

Reply to
Tom Gardner

Not in a single processor (except perhaps the Mill).

But with multiple processors there can be significant improvement - provided we are prepared to think in different ways, and the tools support it.

Examples: mapreduce, or xC on xCORE processors.

Reply to
Tom Gardner

We don't need more compute power. We need reliability and user friendliness.

Executing buggy C faster won't help. Historically, adding resources (virtual memory, big DRAM, threads, more MIPS) makes things worse.

For Pete's sake, we still have buffer overrun exploits. We still have image files with trojans. We still have malicious web pages.

Reply to
John Larkin

On Thursday, 23 July 2020 at 19:06:48 UTC+2, John Larkin wrote:

A tool that can cut wood can cut your hand; the only way to totally prevent that is to add safety features until it can't cut anything anymore.

Reply to
Lasse Langwadt Christensen

I'm talking about programmer productivity, not MIPS.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Why not design a compute architecture that is fundamentally safe, instead of endlessly creating and patching bugs?

Reply to
John Larkin

Oh, there we do come to an immovable object: human stupidity.

Having said that, the right fundamental tools do help and the wrong ones do hinder. In that respect the current tools are deficient w.r.t.:
- parallel programming in general
- multithreaded architectures
- multicore architectures
- NUMA, from registers through L* caches to local core and beyond
- distributed architectures, especially w.r.t.:
  - partial system failure
  - no universal time/state
  - plus the eight+ fallacies

There are signs some of those are being tackled decently, but most practical stuff is based on the illusion of a sequential, single-threaded von Neumann machine.

That will have to change, but I'll probably be long dead before it has sunk into the general consciousness.

Reply to
Tom Gardner

We don't want productivity, as in more new versions. We want quality, robustness, and durability.

Jeroen Belleman

Reply to
Jeroen Belleman

Yes indeed. C and C++ are an *appalling*[1] starting point!

But better alternatives are appearing...

xC has some C syntax but removes the dangerous bits and adds parallel constructs based on CSP; effectively the hard real-time RTOS is built into the language and the xCORE processor hardware.

Rust is gaining ground; although Torvalds hates and prohibits C++ in the Linux kernel, he has hinted he won't oppose seeing Rust there.

Go is gaining ground at the application and server level; it too has CSP constructs to enable parallelism.

Python, on the other hand, cannot make use of multicore parallelism due to its global interpreter lock :)

[1] cue comments from David Brown ;}
Reply to
Tom Gardner

On Thursday, 23 July 2020 at 20:34:25 UTC+2, John Larkin wrote:

A saw that can't cut anything is fundamentally safe; it is also useless.

Reply to
Lasse Langwadt Christensen

It has already been done: the abacus. The only problems that remain are operator errors.

The flaws in computer architecture are only visible because the computers are useful regardless. Naval architecture, on the other hand, always has its flaws sink out of sight...

Reply to
whit3rd

For multicore use see

formatting link

Reply to
Dennis

How can I put this... Maybe an analogy (in the full realisation that analogies are dangerously misleading)...

Just because I can run several compilation processes (e.g. cc, ld) at the same time doesn't mean the cc compiler or ld linker is meaningfully parallel.

That Python library is a thin veneer over the operating-system calls. It adds no parallelism that isn't already in the operating system; essentially it avoids all the interesting problems and punts them to the operating system. Hence it offers only coarse-grain parallelism, and is not sufficiently novel to advance the ability to create and control parallel computation.

To be interesting in this regard, I would want to see either a much higher-level, very coarse-grain abstraction (e.g. mapreduce), or finer-grain abstractions as found in, say, CSP-derived languages/libraries, or Java, or Erlang.
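(For the coarse-grain end, the mapreduce idea fits in a few lines of plain Python: map a function over chunks of data in parallel processes, then reduce the partial results. The text and the four-way split below are invented for illustration:)

    from multiprocessing import Pool
    from collections import Counter
    from functools import reduce

    def map_count(words):
        """Map step: count words in one chunk."""
        return Counter(words)

    def merge(a, b):
        """Reduce step: merge two partial counts."""
        return a + b

    if __name__ == "__main__":
        text = "the quick brown fox jumps over the lazy dog the end".split()
        chunks = [text[i::4] for i in range(4)]     # crude 4-way split
        with Pool(processes=4) as pool:
            partials = pool.map(map_count, chunks)  # parallel map step
        print(reduce(merge, partials).most_common(3))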

Reply to
Tom Gardner

Since you insist...

Python can make use of multicore if you use the language effectively. It can do it in two ways:

  1. Lots of time-consuming work in Python is done using underlying libraries in C, which should release the GIL if they are doing a lot of work. So if you are using numpy for numeric processing, for example, then the hard work (like matrix operations) is done in C libraries with the GIL released, and you can have multiple threads running in parallel. (And if you are doing a lot of time-consuming calculations in pure Python, you're doing something wrong!)

  2. Python has a nice "multiprocessing" library as standard, which makes it very simple to start multiple Python processes and communicate using queues and shared data. That way you get parallel processing on multiple cores, since each process has its own GIL. (A minimal sketch follows below.)
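(A minimal sketch of that second point, using the standard multiprocessing module; the work function and the numbers fed to it are invented:)

    from multiprocessing import Process, Queue

    def worker(jobs, results):
        # Each worker is a separate process with its own GIL.
        for n in iter(jobs.get, None):          # run until a None sentinel
            results.put((n, sum(i * i for i in range(n))))

    if __name__ == "__main__":
        jobs, results = Queue(), Queue()
        procs = [Process(target=worker, args=(jobs, results))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for n in (10_000, 20_000, 30_000, 40_000):
            jobs.put(n)
        for _ in procs:
            jobs.put(None)                      # one sentinel per worker
        for _ in range(4):
            print(results.get())
        for p in procs:
            p.join()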

But Python is not really a great choice for heavy parallelism. The others you listed are better choices.

Reply to
David Brown
