dead programming languages

One consequence of hard-to-edit printed listings and EPROMs (or one-time ROMs!) is that people would *read* their code before they ran it, and bugs were rare. Most stuff worked on the first try. Similarly, I print schematics and look them over carefully, which tends to make the first board sellable. I know of one giant organization that schedules

*six* PCB iterations.

I wrote an RTOS while visiting a friend in Juneau. Longhand, and mailed back to the factory a page at a time. They punched cards and assembled it. I'm told it had one bug.

Cards were a leap past punched tape. I hacked FOCAL and the PDP-11 assembler to read cards.

Teletypes were horrible machines. The first "glass teletypes" and floppy-based editors were a vast improvement.

I shot some Spra-Kleen contact cleaner into an ASR-33 and all the polycarb parts instantly shattered.

Reply to
John Larkin

Or do everything static and don't pass anything.

Reply to
John Larkin

i4004...that brings back memories/nightmares of all-night sessions... :-( toggle switch code entry!

Reply to
wmartin

I politely disagree. The system requirements specify system properties that need to be met, not how the software engineer meets them. For example: end-to-end latency of a control signal from a yoke to a flight control surface. It is up to the software architect to define an architecture that will meet those requirements. Major system problems occur when software engineers make decisions about how threads are scheduled (in the small) in order to meet system performance requirements that are 'in the large'.

The architect specifies what happens if a deadline is missed. More importantly, the scheduling approach should guarantee that deadlines are not missed; that is the goal of developing an architecture that preserves determinism. I don't care what language embodies the implementation, as long as the architectural performance properties are met. The Ada language and run-time environment, if used correctly, go a long way toward ensuring determinism.

When stressed, systems will miss deadlines. The point is to make this an extremely rare occurrence and to provide mechanisms to deal with missed deadlines. If done properly, the most that should happen is perhaps some jitter in operation. If done badly, the whole system falls apart.

Reply to
Three Jeeps

Simple: never miss a deadline. If I run a control loop in a 100 kHz periodic interrupt, I make sure it never runs for more than 8 microseconds.
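
A minimal sketch of that discipline in C: the ISR times itself and flags any overrun. It assumes a Cortex-M style DWT cycle counter and a 168 MHz core clock; control_step() and budget_overrun_hook() are hypothetical placeholders.

/* A 100 kHz periodic ISR that polices its own 8 us budget. */
#include <stdint.h>

#define DWT_CYCCNT    (*(volatile uint32_t *)0xE0001004u)  /* free-running cycle counter */
#define CPU_HZ        168000000u                           /* assumed core clock */
#define BUDGET_CYCLES (CPU_HZ / 1000000u * 8u)             /* 8 microseconds in cycles */

extern void control_step(void);                    /* the actual control loop body (placeholder) */
extern void budget_overrun_hook(uint32_t cycles);  /* log/trap: a deadline was missed (placeholder) */

void timer_100khz_isr(void)
{
    uint32_t start = DWT_CYCCNT;

    control_step();                        /* must always finish within the budget */

    uint32_t used = DWT_CYCCNT - start;    /* unsigned math handles counter wrap */
    if (used > BUDGET_CYCLES)
        budget_overrun_hook(used);         /* never let an overrun pass silently */
}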

Reply to
John Larkin

I designed a CPU from MSI TTL logic, for a marine data logger. It had a 20 kHz 4-phase clock and three opcodes.

Reply to
John Larkin

In my case, the Intel 2708. A friend and I wrote a floating-point package for the 8080 this way. Later it was replaced by an AMD Am9511. That IC took a lot of iterations, and they were not bug-compatible: what healed one version provoked errors in another.

Later we got in-circuit emulators from Tek, Intel and HP. The Pascal compiler for the HP64000 was the crappiest piece of software I have ever seen.

At the university we had a PDP-11/40E. Someone wrote a p-code machine for it in microcode. It was blazing fast for the time, at least when it wasn't too hot. There were only five PDP-11/40Es in the world and we had two of them, both CPUs on the same Unibus; apart from tests, only one was working at a time. The E means microprogrammable. Usually it ran an obscure operating system called Unix V6, the tape directly from K&R.

Gerhard

Reply to
Gerhard Hoffmann

On 25.02.23 at 05:57, John Larkin wrote:

We did a stack CPU patterned after Andrew Tanenbaum's "Experimental Machine", slightly simplified and only 16 bits, in HP's dynamic N-MOS process, as a group project. Spice on a CDC Cyber 276.

On the multi-project wafer, we inherited a metal bar from a neighbour project, so we did not have to debug it. Design rule checkers were not up to the task back then.

Gerhard

Reply to
Gerhard Hoffmann

What is so hard about assigning RTOS thread priorities?

Start by finding the least time-critical activities and assign them the lowest priorities (or move them into the null task). After that, there will usually be only one or two threads left that require high priority.

The higher a thread's priority, the shorter it should run. If a thread needs high priority but executes for a long time, split it into two or more parts: one that can run longer at a lower priority, and one that executes quickly at the higher priority.
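
A minimal sketch of that split, in FreeRTOS-style C (the queue/task calls are FreeRTOS's; sample_t, acquire_sample() and crunch() are hypothetical placeholders). The urgent thread only grabs data and hands it off; the heavy work runs at a lower priority where it cannot delay the next acquisition.

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef struct { uint32_t timestamp; int32_t value; } sample_t;
static QueueHandle_t sample_q;

extern sample_t acquire_sample(void);   /* fast: read hardware, microseconds (placeholder) */
extern void crunch(const sample_t *s);  /* slow: filtering, logging, etc. (placeholder)    */

static void urgent_task(void *arg)      /* high priority, short execution */
{
    TickType_t wake = xTaskGetTickCount();
    (void)arg;
    for (;;) {
        vTaskDelayUntil(&wake, pdMS_TO_TICKS(1));  /* run briefly at a fixed period */
        sample_t s = acquire_sample();
        xQueueSend(sample_q, &s, 0);               /* never block at high priority */
    }
}

static void worker_task(void *arg)      /* lower priority, may run long */
{
    sample_t s;
    (void)arg;
    for (;;) {
        if (xQueueReceive(sample_q, &s, portMAX_DELAY) == pdTRUE)
            crunch(&s);
    }
}

void start_split_threads(void)
{
    sample_q = xQueueCreate(32, sizeof(sample_t));
    xTaskCreate(urgent_task, "urgent", 256, NULL, tskIDLE_PRIORITY + 4, NULL);
    xTaskCreate(worker_task, "worker", 512, NULL, tskIDLE_PRIORITY + 1, NULL);
}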

In most cases, only the ISR and one or two of the highest-priority threads can run in hard RT; the rest of the threads are more or less soft RT. Calculating the latencies for the second (and third) highest-priority threads is quite demanding, since you must count the ISR worst-case execution time as well as the top-priority thread's worst-case execution time and the thread-switching times. The sum of worst-case execution times quickly becomes so large that the lower-priority threads can only be soft RT, while still providing reasonable average performance.
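
A back-of-the-envelope version of that sum, with made-up example numbers, just to show what goes into the bound for the second-priority thread:

#include <stdio.h>

int main(void)
{
    double isr_wcet_us   = 12.0;   /* assumed worst-case ISR time           */
    double top_wcet_us   = 80.0;   /* assumed worst case of the top thread  */
    double ctx_switch_us =  4.0;   /* assumed per-switch overhead           */

    /* worst-case start latency: ISR + top thread + two context switches */
    double latency_us = isr_wcet_us + top_wcet_us + 2.0 * ctx_switch_us;
    printf("2nd-priority thread worst-case start latency: %.1f us\n", latency_us);
    return 0;
}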

Monitoring how long the RTOS spends in the null task gives a quick view of how much time is spent in the various higher-priority threads. Spending less than 50 % in the null task should prompt a closer look at how the high-priority threads are running.
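
One way to get that number, sketched for FreeRTOS (assumes configUSE_IDLE_HOOK is enabled; the calibration constant is a made-up value you would measure once on an unloaded system):

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

static volatile uint32_t idle_ticks;

void vApplicationIdleHook(void)          /* called repeatedly from the idle (null) task */
{
    idle_ticks++;
}

/* Call roughly once per second from a low-priority housekeeping thread. */
uint32_t idle_percent(void)
{
    static const uint32_t idle_ticks_per_sec_unloaded = 1000000u;  /* measured, hypothetical */
    uint32_t pct = (uint32_t)(100ull * idle_ticks / idle_ticks_per_sec_unloaded);
    idle_ticks = 0;
    return pct;                           /* below ~50 %: go study the hot threads */
}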

Reply to
upsidedown

In a multi-vendor environment in which standards allow all kinds of variations, you must be able to handle whatever representation might be received. A replacement field device might have different characteristics. It is not acceptable to restart a whole system just because a single peripheral device was replaced.

Reply to
upsidedown

The same applies to programmers with a mainframe/batch background. When they were assigned to a multitasking project, you had to keep an eye on them for months so that they would not, e.g., use busy loops.

Programming on PDP-11 systems (64 KiB application address space) seemed to cause them a lot of problems, as they tried to squeeze a large program into that address space with huge overlay loading trees. It was much easier to split the program into multiple tasks, each much smaller than 64 KiB :-).

Reply to
upsidedown

Internal units must be consistent. When entering an energy measurement you can offer a choice between newton-metres and foot-pounds, but internal calculations should always be able to trust that the energy is in N·m (J).

It is not a problem to add inputs that are given in newton-metres or foot-pounds. The fatal error is adding newton-metres to metres/second.
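
A sketch of that convention in C: convert at the input boundary, keep everything internal in SI. The type and function names here are hypothetical.

#include <stdio.h>

#define FT_LBF_TO_NM 1.3558179483314004   /* 1 ft·lbf in N·m */

typedef struct { double joules; } energy_t;   /* internal unit is fixed: J (N·m) */

static energy_t energy_from_newton_metres(double nm)  { return (energy_t){ nm }; }
static energy_t energy_from_foot_pounds(double ftlbf) { return (energy_t){ ftlbf * FT_LBF_TO_NM }; }

int main(void)
{
    /* Both inputs end up in the same internal unit, so adding them is safe. */
    energy_t a = energy_from_newton_metres(100.0);
    energy_t b = energy_from_foot_pounds(50.0);
    printf("total: %.2f J\n", a.joules + b.joules);
    return 0;
}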

I was the architect of a program (1980) that optimised the total possible output of the Brent drilling islands. The calculation was actually in m3/sec; the output was presented in kilobarrels/day. The input was rife with imperial stuff.

(I encountered an exception to the rule once. You can comfortably calculate with the electron charge (about 1.6E-19 C), but you run into trouble if you try to interpolate with a 6th-degree polynomial on a machine whose floating point doesn't go below 10E-80: the intermediate powers underflow. The solution is to use pC (picocoulombs) and document that.)
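
A small illustration of the scaling trick, using single-precision float so the underflow is visible on a modern machine (the old machine in the post already died around 1E-80). The charge value and exponent are just examples.

#include <stdio.h>
#include <math.h>

int main(void)
{
    float q_coulomb = 1.602e-19f;            /* elementary charge in C  */
    float q_pico    = 1.602e-7f;             /* same charge in pC       */

    /* A 6th-power term: in coulombs the result underflows float range to 0;
       in picocoulombs it stays representable (if only barely, as a subnormal). */
    printf("q^6 in C : %g\n", powf(q_coulomb, 6.0f));   /* prints 0        */
    printf("q^6 in pC: %g\n", powf(q_pico,    6.0f));   /* prints ~1.7e-41 */
    return 0;
}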

Groetjes Albert

Reply to
albert

I didn't simulate mine. I just checked it carefully and it worked.

It had no ALU. The ADC output was ASCII so one could just move bytes to the printer. It grounded the wait line until it was ready for another character.

Next-gen data loggers used an MC6800 (slow depletion-load) uP and ran MIDGET, my tiny RTOS. The RTOS and the application were a single monolithic assembly. Not a lot of abstraction.

Reply to
John Larkin

On at least the Raspberry Pi (1) and the Raspberry Pi 1 Plus, they are prominently present in the hardware manual. Basically you read from an address. The Raspberry Pi Pico system development kit devotes a whole chapter to clocks.

I had success bit-banging mechanical instruments (two metallophones and an organ) using RDTSC, but those were synchronous, i.e. driven by a separate clock signal. I attempted to generate a MIDI signal for a keyboard (on the same parallel port), and the periods were a clean 32 µs, measured to the precision of a 20 MHz logic analyser, using one of 8 cores. However, the signal was spoiled by a periodic 1 ms interruption. I tried restricting booting to 7 processors, mapped all the hardware interrupts away from the 8th processor and used the 'taskset' utility .. to no avail. A while ago I succeeded in flashing an MSP430 by bit-banging; this now fails as well. The jury is still out, but I suspect systemd is the culprit.
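
For what it's worth, a Linux/x86 sketch of that isolation recipe: pin the process to one core with sched_setaffinity() and pace the bit-banging with the TSC (the core number and TSC frequency are arbitrary examples). It still cannot remove interruptions the kernel insists on taking on that core, which is exactly the problem described above.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(7, &set);                         /* pin ourselves to core 7 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    /* Measure how evenly we can hit 32 us slots using the TSC. */
    const uint64_t tsc_hz = 3000000000ull;    /* assumed 3 GHz invariant TSC */
    const uint64_t slot   = tsc_hz / 1000000ull * 32ull;   /* 32 us in cycles */

    uint64_t next = __rdtsc() + slot;
    for (int i = 0; i < 10; i++) {
        while (__rdtsc() < next) { /* busy-wait: toggle the output pin here */ }
        uint64_t late = __rdtsc() - next;     /* jitter versus the ideal edge */
        printf("slot %d overshoot: %llu cycles\n", i, (unsigned long long)late);
        next += slot;
    }
    return 0;
}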

Groetjes Albert

Reply to
albert

HP, Intel, and other people designed super-CISC microcoded computers that were awful. Their horribleness is probably what inspired RISC.

The first PDP-11 was the 11/20, and I had something like serial number 1. We ordered the standard 4K words of core but they shipped 8K by mistake. I ran steamship throttle control simulations in Focal, which was an amazing language.

We later ran the RSTS time-share system on the 11/20. It would run for months between power fails. DEC should have dominated computing but screwed it up, so we got the Intel+Microsoft bag-of-worms that we now have.

What's crazy is that a computer, including the first IBM PCs, would power up and be ready to go in a couple of seconds. Now, with 4000x the compute power, it takes minutes.

One thing about these old stories is that they remind us of what might have been. Sigh.

Reply to
John Larkin

A dogma that is repeated without proof.

I programmed the delay line of the Paranal telescopes (an ESO project). It spent at least 30 % of its time at the highest priority level doing floating-point calculations for the position of the mirror, then sent that message out swiftly. The point is, high priority is exactly that: high priority. Receiving messages, calculating the next base position, user interaction: all of that can wait.

Makes no sense. There is no reason it can be split in this way. In this case the interrupt was a message from the metrology system reporting where the mirror currently was. All calculations depended on that value. There was no reason to be distracted by disk reading or whatever.

Groetjes Albert

Reply to
albert

A lot of overlay thrashing was audible. Things would rattle in the racks as disc heads flailed.

For some people, complexity becomes a fun game, which is why we have about 6000 increasingly-abstract computer languages.

A better game is to brainstorm for simplicity. Not many people enjoy that.

Reply to
John Larkin

And would you be able to smoke it afterwards?

Sylvia.

Reply to
Sylvia Else

Then why are you babbling about software bloat? Why don't you babble about hardware bloat? How many billions of transistors are on today's top-end CPUs? How many in an 8051? There is no reason to claim the current top-end CPUs are bloated: every transistor has been added as part of some specific function. Software is the same way.

Maybe you are not ignorant, but I find it is the ignorant who talk about software bloat.

Reply to
Ricky

Woosh! That's the sound of the point under discussion rushing over your head.

Now you are trying to argue details of how much CPU time or memory, yet offer zero data. OK, you win. The cost is zero in only 99.999% of designs. Somewhere there's a design where adding bounds checking pushed it into a slightly larger CPU chip with some fractional amount more memory.

Most of the people posting here are happy to show they are idiots. You usually refrain from such posts. But once you've made a poor statement, you are inclined to double down, dig your heels in and stick to your guns, in spite of having zero data to support your point.

Reply to
Ricky
