Programming in assembler?

Kinda hard to know where to put a crash dump on an embedded board with no disk and only 24K of RAM :-)

Which is why we had ICEs.

--
"It is hard to imagine a more stupid decision or more dangerous way of  
making decisions than by putting those decisions in the hands of people  
who pay no price for being wrong." 

Thomas Sowell
Reply to
The Natural Philosopher

We spewed the crashdump to a serial port. In the case of microprocessor based I/O controllers in the minicomputers, the minicomputers could read the crashdumps back over their interface to the controllers.
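For a sense of what "spewing the crashdump to a serial port" can look like, here is a minimal sketch in C. The function names and the 16-bytes-per-line layout are my own invention; on a real board the emit routine would poll a UART data register, but it is injected here so the logic can be exercised on a host machine.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Emit one byte as two hex digits through a caller-supplied
 * output routine. */
static void emit_hex_byte(uint8_t b, void (*emit)(char))
{
    static const char hex[] = "0123456789ABCDEF";
    emit(hex[b >> 4]);
    emit(hex[b & 0x0F]);
}

/* Hex-dump a memory region one byte at a time.  On real hardware
 * 'emit' would be a blocking write to the UART; here it is a
 * function pointer so the dump logic is testable off-target. */
void crash_dump(const uint8_t *base, size_t len, void (*emit)(char))
{
    for (size_t i = 0; i < len; i++) {
        if (i % 16 == 0 && i != 0)
            emit('\n');          /* 16 bytes per line */
        emit_hex_byte(base[i], emit);
        emit(' ');
    }
    emit('\n');
}
```

The same routine works whether the bytes land on a teletype, a logging host, or (as described above) get read back by the minicomputer over the controller interface.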

That would mean going out to site and rigging up an ICE, and waiting for the crash to happen again, possibly several times. That wasn't acceptable.

ICE was used in our development labs, but not on customer sites. Actually, by the time we got to 68000 based I/O controllers, we implemented a debugging interface to work across the minicomputer interface, so ICE wasn't needed anymore either, and it could be done using a program on the minicomputer.

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

Exactly. Crashdumps are a state-of-mind thing. You have them produced so you can see what the real code did, not what a recompile/reassemble with debug did. If you change the code, you are not debugging the original issue.

I've tried to do it this way all my working life. When you do it well, you can have customers on the other side of the world send you a dump file and you can tell them what they are doing wrong to cause the crash.

Reply to
mm0fmf

I was in those development labs ;-)

--
"In our post-modern world, climate science is not powerful because it is  
true: it is true because it is powerful." 

Lucas Bergkamp
Reply to
The Natural Philosopher

...and from the 7090 in 1962.

Crash dumps are at least as old as high-speed printers. ;-)

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

Prior to high-speed printers and operating systems, crashes of programs in development were often debugged interactively at the machine console--like a room-sized personal computer.

Later, memories got so large that printing all of memory was impractical, so dumps were saved to disk for later interactive perusal.

Ah, the days of wooden men and iron computers. ;-)

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

...and when they'd been hand-patched until they worked, they got saved to mag tape and used as the live version. The source card deck got brought into line if the patcher could remember what he'd done to make it run.

I got into the business just after this era, but the program that managed the tape library was like that: the source was known to be out of date and nobody knew how it differed from the live version.

But shortly after that we got a better hammer: by this time, 1970, we kept source on disk or mag tape instead of cards, and the assembler could merge a deck of patch cards with the source on tape or disk and assemble the result. When the fix was done, our last job before heading home to bed was to drop the patch cards into the card deck for the night's batch source update run. The assembler was written to use the same amendment commands as the source editor.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

You've reminded me of how the Burroughs Medium Systems (B2500/B3500/B4500) operating systems department handled source code control in the late 1960s.

The team was about 25-30 programmers (including 2-3 managers). The task was to extend and maintain the Master Control Program for Burroughs' medium-scale systems. The MCP was written entirely in assembler and was about 40,000 lines of code. It was a multiprogramming OS that supported up to a couple dozen simultaneously running jobs, each protected by base and limit registers. Total machine RAM was 120k-500k (later 1000k) decimal digits.

At the start of a release cycle, a 2-3 page list of desired enhancements and bug fixes was posted on a manager's door. Programmers would write their initials next to items they chose to implement.

As the 6-9 month release cycle progressed, programmers would add their patches to the master patch deck (which was periodically snapped to tape for backup). Each programmer used cards with a unique color stripe, so that the originator of any patch could be identified.

When merging your patches with the sequenced master deck, if you found someone else's patches intersected with yours, you went and had a conversation to understand if there were any interdependencies.

It's amazing how well this system worked, and how it led to small _de facto_ teams to deal with complex interactions. During the several years I was in the OS team, problems with source management were few and quickly resolved.

Burroughs had the philosophy that anything that couldn't be designed by a team of three or less shouldn't be done! Once, when an operating system for a new architecture was needed, the team for a previous architecture delivered a full listing of their MCP as a start. The new architecture team unanimously agreed to trash it as an unwanted distraction. ;-)

Needless to say, we were not prone to surfing on top of a hundred layers of poorly understood libraries, and each team member had both an area of deep specialization and a good grasp of the functioning of the entire system.

It's also interesting that Don Knuth was a frequent consultant to Burroughs in those days, and later Edsger Dijkstra.

There was constant attention to the tendency of designers to make things just more complicated than they could fathom--apparently to keep their lives interesting. ;-)

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

And managers today think Agile is a new thing!

---druck

Reply to
druck

Hadn't heard of that assembler. Thanks.

Reply to
Big Bad Bob

First time I came across Agile it was in an article entitled "Extreme Programming", I read the article and asked when the author had been round spying on us. Apparently an earlier name of "Feature Driven Incremental Programming" had been less successful, although it did lead me to suggest that there are two kinds of programming - incremental and excremental.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Nice! ;-)

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

The last time I remember doing much commercially was in the early 2000's

- we needed a set of ASN.1 BER packed encoding/decoding routines to run on a 68020... the C version was just a bit too flabby!

(a pretty "hard" real time system, bouncing messages between cars and motorway gantries in a 50m patch of RF "illuminated" road)
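Speaking of hand-coding BER: as a small illustration of the kind of routine involved, here is a sketch of BER definite-length encoding in C (short form for lengths under 128, long form otherwise). The function name is mine, and this is only the length-octet part of a TLV encoder, not a full codec.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* BER definite-length encoding: lengths below 128 fit in one
 * "short form" octet; longer values use a prefix octet (0x80 | n)
 * followed by n big-endian length octets.  Returns the number of
 * octets written to 'out'. */
size_t ber_encode_length(uint8_t *out, uint32_t len)
{
    if (len < 0x80) {            /* short form */
        out[0] = (uint8_t)len;
        return 1;
    }
    /* long form: count how many octets the length value needs */
    size_t n = 0;
    for (uint32_t v = len; v != 0; v >>= 8)
        n++;
    out[0] = (uint8_t)(0x80 | n);
    for (size_t i = 0; i < n; i++)
        out[1 + i] = (uint8_t)(len >> (8 * (n - 1 - i)));
    return 1 + n;
}
```

Tight loops like this, called for every field of every message, are exactly where a flabby compiler hurt on a 68020.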

Can make sense for things like micro-controllers in HDDs and SSDs where the combination of massive volume, and wanting to screw the cost down and performance up add to the desire.

Not sure if that was really a K&R first though... the same setup (almost[1]) had existed in BCPL which they borrowed from when doing B (and subsequently C etc)

[1] The BCPL compiler was P-code style, so the actual targeting to the hardware was done in the P-code interpreter / virtual machine rather than in the compiler's code generator.
--
Cheers, 

John. 

/=================================================================\ 
|          Internode Ltd -  http://www.internode.co.uk            | 
|-----------------------------------------------------------------| 
|        John Rumm - john(at)internode(dot)co(dot)uk              | 
\=================================================================/
Reply to
John Rumm

Several BCPL compilers added a backend translator that translated the BCPL INTCODE into native assembler. This was the intended approach - at least that's what Martin Richards told us in 1980.

The idea was that you would write an INTCODE interpreter in whatever native language made it easy (it did not have to be fast, just correct), then validate it by using it to run the pre-compiled BCPL compiler on itself and comparing the INTCODE output with the shipped INTCODE compiler. Then you would write a translator in BCPL, compile that to INTCODE, and use your interpreter and translator to compile a complete native compiler, finally validating the whole thing by recompiling it with the newly built native compiler and translator.
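As a toy illustration of the "correct, not fast" interpreter half of that bootstrap, here is a minimal stack-machine loop in C. The opcodes are invented for the example and bear no relation to Richards' actual INTCODE instruction set.

```c
#include <assert.h>

/* Toy bytecode interpreter in the spirit of an INTCODE bootstrap.
 * Correctness matters here, not speed: the real interpreter only
 * had to run the compiler long enough to build a native translator. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

int run(const int *code)
{
    int stack[64];
    int sp = 0;                  /* next free slot */
    for (int pc = 0; ; ) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}
```

A few dozen lines like this, in any language to hand, were enough to get the shipped compiler running on a brand-new machine.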

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Indeed so, that is my motivation, not need.

--
Do, as a concession to my poor wits, Lord Darlington, just explain 
to me what you really mean. 
I think I had better not, Duchess.  Nowadays to be intelligible is 
to be found out. -- Oscar Wilde, Lady Windermere's Fan
Reply to
Peter Percival

Depends on how you look at it:

formatting link

(Nice to know there are still people mad enough to do this stuff out there!)

--
Cheers, 

John. 

/=================================================================\ 
|          Internode Ltd -  http://www.internode.co.uk            | 
|-----------------------------------------------------------------| 
|        John Rumm - john(at)internode(dot)co(dot)uk              | 
\=================================================================/
Reply to
John Rumm

Folderol wrote on 08/25/2016 02:11 PM:

The TRS-80 had ASM in both Level I and Level II BASIC, with data in and out using pokes and peeks. I recall speeding up a binomial plotter using ASM on a TRS-80 to do my algebra homework in 8th grade. It took me three times longer to write and test the code than if I'd just plotted the graphs with a pencil on graph paper, but it was the start of amazing my instructors all the way to CalTech.

Using assembler is precisely fulfilling the purpose of the Pi: making computer-language and sysop skills inexpensively accessible to the newbie. It seems that the majority of c.s.R-Pi posts have forgotten that by focusing on the resulting utility of the Pi as a microcontroller.

--
   All ladders in the Temple of the Forbidden Eye have thirteen steps. 
There are thirteen steps to the gallows, firing squad or any execution. 
  The first step is denial...                         Don't be bamboozled: 
        Secrets of the Temple of the Forbidden Eye revealed! 
           Indiana Jones Discovers The Jewel of Power! 
          visit (o=8> http://disneywizard.com/
Reply to
Dr. Disney Wizard

Reading all the posts about the glory of assembler makes me think no-one read the link above, so the abstract is copied below.

Abstract

With the intent of getting an Ada waiver, a defense contractor wrote a portion of its software in Ada to prove that Ada could not produce real-time code. The expectation was that the resultant machine code would be too large and too slow to be effective for a communications application. However, the opposite was verified. With only minor source code variations, one version of the compiled Ada code was much smaller while executing at approximately the same speed, and a second version was approximately the same size but much faster than the corresponding assembly code.

(the assembler was hand written and carefully tuned)

However, it IS fun to read all the comments in the spirit of 'I did this in the 1950s with my eyes blindfolded, using my left hand only'

:-)

--
Reply to
Björn Lundin

Interesting. Thanks for including the abstract: the link was well worth reading.

Back in the day, I started work by writing commercial programs in ICL PLAN 3 assembler. About two years later, the shop had switched to COBOL. The ICL compiler was OK - it averaged 6 instructions per procedure division statement and wasn't an optimising compiler, but despite the verbosity and source code size of the COBOL, I think programmer productivity, and certainly maintenance programming, outweighed all that, especially as the COBOL ran fast enough and didn't force us to fit more memory. For those familiar with the kit, this was a 1903S with 32 KWords of memory, running development under George 1 and with production code running under G1 or directly under Executive.

Rather later I found myself writing a lot of 6809 assembler until I got my hands on the PL/9 compiler. I found PL/9 source code was smaller and more readable, and the binaries almost as quick as their assembler equivalents. The code generated for each statement was very hard to fault, though again there was no inter-statement optimisation. I soon found, by trying it, that substituting hand-optimised assembler (the PL/9 compiler accepted assembler inserts in functions) saved surprisingly little memory and, except in one or two cases, ran only fractionally faster.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

What the higher-level language allows you to do more easily is to concentrate on the program structure, and optimise at a much higher level. It's all very well saving 10% by careful tuning of code, whether it be assembler or a higher-level language, but there may be a far greater saving to be had by tuning your algorithm to save 90% of the operations.

As I said before, a profiler showing you where the hotspots are in your code may be your most useful tool - the hotspots may not be where you expect.
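A small worked example of that kind of algorithm-level saving, in C: answering many range-sum queries naively versus with a one-pass prefix-sum table. The names are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Naive version: re-adds the elements for every query, O(hi - lo). */
long range_sum_naive(const int *a, size_t lo, size_t hi)
{
    long s = 0;
    for (size_t i = lo; i < hi; i++)
        s += a[i];
    return s;
}

/* One O(n) pass builds pre[], where pre[i] = sum of a[0..i). */
void build_prefix(const int *a, long *pre, size_t n)
{
    pre[0] = 0;
    for (size_t i = 0; i < n; i++)
        pre[i + 1] = pre[i] + a[i];
}

/* After that, every query is a single subtraction. */
long range_sum_fast(const long *pre, size_t lo, size_t hi)
{
    return pre[hi] - pre[lo];
}
```

No amount of instruction-level tuning of the naive loop, in assembler or otherwise, would match the prefix-sum version once the query count grows; the profiler tells you which loop is worth that kind of rethink.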

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor
