Is it a lost cause?

In the '80s I bought a 1969 Nova 1200 from a pal, which had been in storage. We loaded it into a truck, and drove it a few miles to my house, unloaded it, and set it up in my spare bedroom.

When he came by the next day to help me cable it all up and get it running, he said "Wait -- before we turn it on, make sure the console terminal is plugged in -- I wanna see if something works..."

"What?", I ask him.

"You'll see -- or not."

We turn the machine on, and a series of asterisks starts printing on the console.

"HOT DAMN!", he cried, "I turned this machine off seven years ago, but before I did I loaded a program that does nothing but print to the console ... I wanted to see if it stuck even after years of storage and moving about."

Reply to
Lawrence Statton NK1G

Not quite. The Nova JSR instruction moved the return PC to AC3 ... the non-skip return was JMP 0,3

The Nova-3 added stack and frame pointers.

Reply to
Lawrence Statton NK1G

I think it's one of Gareth's sock puppets ... he follows Gareth around, has a habit of appearing in his threads "disagreeing" with him, and then picks up the trolling when enough people KF the original.

Just a gut feeling ...

Reply to
Lawrence Statton NK1G

You really should get out more. :-/

I have worked almost exclusively with ARM stuff since around 2010, and off and on for half the decade before that. Now even my laptops are ARM-based.

I have presented the numbers for "real computers" here earlier. Numbers are very difficult to come by, so we have to be indirect. If we let the number of MMUs shipped set the count of "real computers" (i.e. those with virtual memory), we see 2015 production go past the world's population for the first time.

x86es are a billion, max. The rest (SPARC, MIPS, PPC etc.) are 300m max (I count MMU-equipped units here). The balance is ARM.

Look at it from a chip producer's standpoint. You can make the lesser ARMs from a readily available blueprint for around $0.30 apiece in license cost. It's no contest.

Yes, the x86 has the current speed advantage. But the ARMs are crawling up from behind, with a much better MIPS-per-watt ratio. That is what it all boils down to in all the small stuff in your car, phone, alarm, stove, TV, modem, microwave, etc.

The next generation of these may have as much CPU power as your PC, at least at the x86 i5 level, without burning more than around a watt; and 50 mW in trickle standby mode.

I did some back-of-the-envelope stuff about having the balance of the PDP-8 weight filled by Li-ion batteries. That would give me at least 300 Ah of 12 V Li-ion power, which would power a Raspberry Pi (0.4 A at 5.25 V in semi-standby) for about two and a half months.

-- mrr

Reply to
Morten Reistad

WHAT?! In *THIS ROOM* I have not less than four wildly disparate CPUs that I do active development on ( x86, ARM, Coldfire, Blackfin ) and my experience is about the same -- I'm trying to remember the last time I experienced a compiler bug in the wild, and it was at least 20 years ago.

Reply to
Lawrence Statton NK1G

It's what we had. It's what we used.

Reply to
Joe Pfeiffer

I have no sock puppets and neither am I a troll, unlike you who seems to employ the personal remark at every turn.

The PP is KFed here anyway for repeated rudeness.

Reply to
gareth G4SDW GQRP #3339

The original CP67 had a form of (software) stack: 100 pre-allocated save areas. All calls/returns were done by SVC call; the SVC interrupt handler would allocate/deallocate a save area for use by the called routine. The SVC call accounted for a significant percentage of total CPU time spent in the kernel.

As an undergraduate in the 60s, I made two changes: 1) the SVC call routine, when it ran out of pre-allocated save areas, would extend the number of save areas by calling the page allocation routine with a special parameter that made it look for the first non-changed page for replacement (avoiding the delay of waiting for write-out when replacing a changed page), and 2) there were a whole bunch of routines that were simple call/return, and I changed them to straight BALR/BR call/return using a dedicated save area in page 0 (this works on a multiprocessor since each processor has its own dedicated page 0). I also cut the instruction time in the SVC call/return by 2/3rds, and made a whole lot of other pathlength optimizations, in some cases an improvement by a factor of 100. Overall, as an undergraduate in the 60s, I reduced CP67 kernel CPU time by 3/4. A lot of this was picked up and shipped in the standard CP67 product. Part of a presentation that I made at a 60s SHARE user group meeting:

formatting link
CP/67 & OS MFT14

As an undergraduate I was hired by the univ. to be responsible for production operating systems; the above presentation includes references to changes I made to OS/360. On 709 IBSYS tape-to-tape (with a 1401 front end doing the unit record <-> tape work), student Fortran jobs ran in under 1 second elapsed time. After the initial move to OS/360 on a 360/65 (a 360/67 running as a 360/65), student Fortran jobs ran over a minute, because of enormously disk-I/O-intensive operation. Adding HASP (spooling system) cut elapsed time per job to a little over half a minute. I started hand-crafted OS/360 system build/sysgen to carefully place/order system data on disk, which got it down to a little under 13 seconds.

For CP67 I also redid a lot of the I/O: added ordered seek queuing and, for page activity, chained multiple page transfers into a single I/O channel program. I also implemented a dynamic adaptive resource manager (frequently called "fair share" for its default resource policy).

formatting link

I also did a global LRU, clock-like page replacement algorithm. This was at a time when the academic papers were all about local LRU. Some past posts:

formatting link

I've mentioned that when Jim Gray left research for Tandem, he palmed off a bunch of stuff on me:

formatting link
Is it a lost cause?

At Dec81 ACM SIGOPS, Jim asked me if I could help one of his co-workers at Tandem get his Stanford PhD, which was on global LRU page replacement. The "local LRU" forces (from the late 60s) were strongly lobbying Stanford not to award any PhD that had to do with global LRU. Jim knew I had a lot of performance data comparing local LRU CP67 and global LRU CP67 implementations (showing global LRU much better); much of the other material on the subject was just opinion and hand-waving. Past post on the subject:

formatting link
with this reply that I wrote
formatting link

aka it took me almost a year to get IBM management to send a reply. Conjecture is that IBM management thought it was a form of punishment, because they blamed me for online computer conferencing on the internal network (larger than the ARPANET/Internet from just about the beginning until sometime mid-80s). Folklore is that when the corporate management committee was told about online computer conferencing (and the internal network), 5 of 6 wanted to fire me. Some past posts:

formatting link
and
formatting link

--
virtualization experience starting Jan1968, online at home since Mar1970
Reply to
Anne & Lynn Wheeler

His own paragraph contradicts itself.

--
"What do you think about Gay Marriage?" 
"I don't." 
"Don't what?" 
"Think about Gay Marriage."
Reply to
The Natural Philosopher

Exactly. There are dime-a-dozen PIC-style processors, ARM, SPARC, MIPS, Intel CISC.

The compilers mostly Just Work, even if some of them have restricted features or shorter 'integers'.

Reply to
The Natural Philosopher

One of the reasons I never bothered with them till the '80s: too expensive, too slow.

I was analogue hardware.

--
To ban Christmas, simply give turkeys the vote.
Reply to
The Natural Philosopher

It depends on what you trust.

I use Linux with gcc and FreeBSD with clang/llvm for my projects, and run them pretty much in parallel.

That is two separate OSes, CPUs, and compilers. The only SPOF in the setup is emacs. :->

I run the same regression tests on both sides.

And I haven't run into compiler issues that can be classified as bugs on production grade systems since 1994. But I still want to make sure.

And you could run a third with a commercial unix system.

We are well beyond any one person having control of the whole toolchain. You must trust the toolchain builders, but also verify.

-- mrr

Reply to
Morten Reistad

You seem confused between the application and the implementation. Most real-time systems are commercial. You might be excused because you obviously formed your ideas forty years ago, except that even then many banks and stock exchanges were implementing bespoke real-time systems.

Reply to
Gordon Levi

There is no confusion between those who understand how the computer really works (the SEAL / SAS analogy) and those who do not and are consumerist programmers (the ordinary squaddie analogy).

Reply to
gareth G4SDW GQRP #3339

Barb was herding these 'permies' (a term I cannot find in my glossary) and got out some of the best documentation I have seen, before or since. She is one of my heroes for this feat.

Yes, DEC ran according to the best practices of ca. 1970 when it came to software development. They had some of the best people, but there was in effect no management at all; the inmates ran the asylum.

But the quality of the professionals at DEC was good enough (excellent, I would say) to live without effective management for more than a decade. That is no mean feat.

But it was a "duck pond" management situation: calm on the surface, but paddling like hell below. This became unsustainable. The leadership at DEC was also clueless about the drivers of their business, and this brought the whole business to a halt when the micros hit.

They nearly folded in May '83, when they had to scrounge to get effective hardware out the door.

I asked about source control a few weeks ago, and Barb confirmed that they didn't use it, with some reservations for the later VAX/VMS line.

So, yes, in a sense you are right. But the success of DEC was based on a few hundred excellent people who did their jobs outstandingly well even though they never had the proper tools for the work.

In hindsight, that is. But I saw it happen from '82 onwards.

-- mrr

Reply to
Morten Reistad

Whatever happened to thick (and thin) films? Superseded by high integration on commonly available functionality?

Reply to
gareth G4SDW GQRP #3339

Could well have been, although looking at Wikipedia, the PERQ was built from various technologies, starting with AMD bit slice. It originated at CMU.

That I can believe!

There were two basic 2900 'generations' (let's not get into the 3900).

The first generation was the P-series (about which I know much more). That had a central SMAC (memory) with ports for the OCP (processor) and SAC (high-level peripheral controller). The SAC had the DFC, the GPC (general peripheral controller: console, tapes, printer, card reader/punch), and the CLC (comms; 7502-based, I think). So yes, the DFC was an early version, although the P-series persisted for some years (I stopped working on ours when it was shipped out in 1986).

The second generation was the S-series, with a much simplified layout. The controllers were all different, I believe.

--
Using UNIX since v6 (1975)... 

Use the BIG mirror service in the UK: 
 http://www.mirrorservice.org
Reply to
Bob Eager

The z systems use a "linkage stack".

You push entries onto the linkage stack with Branch and Stack or Program Call. You pop entries with Program Return.

Control register 0 controls state switching and control register 15 points at the current linkage stack.

The linkage stack itself is in protected storage but it's not special hardware, just main memory.

How that differs from other machines I'm not clear on.

It always seemed very odd to me that when you write C code, you're told to always check the malloc() return code. Yet calling a subroutine needs main storage too, and the language provides no way to check a return code for that. I'm not sure if it raises a signal. With a language that supports recursion, that could be an issue.

I generally avoid recursion for that reason.

--
Dan Espen
Reply to
Dan Espen

This is implementation (mostly OS) dependent. Usually there is a signal for stack overflow, but it is often the generic signal for an invalid memory access (SIGSEGV in Unix/Linux).

Reply to
Rob

I suppose this is like candidates looking at the size of their hands.

I've done lots of bare metal stuff and lots of application development. Usually, bare metal impresses the programmers that have never done it.

Other than that, it's just writing code. You have to design it, deal with lots of cases, code it, test it.

Your assertion that real-time experience somehow makes you special is wrong. Oh, did I mention that it's also insulting? It's great to be special, but if you have to push someone else down to get there, you are fooling yourself. There are lots of genius application developers. Find some other way to feel good about yourself.

--
Dan Espen
Reply to
Dan Espen
