Programming in assembler?

Minus everyone who was involved when it started, I think - pretty much all of the original developers had left^Wwalked out in disgust by '83, after our shares in one of the nest of holding companies turned into waste paper when all the assets were moved to another.

Hmm, looking at their website there's not much left apart from the name, which was picked up after the original company ceased trading in 1990. I had half expected that Peter Harris would still be there.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

On Sun, 28 Aug 2016 08:10:31 -0000 (UTC), snipped-for-privacy@cucumber.demon.co.uk (Andrew Gabriel) declaimed the following:

Ah good -- my memory wasn't lying then...

As I recall, it was a big thing when they added a pair of 300MB units to the campus mainframe (we already had something like six 100MB drives; the new 300MB units were -- again, as I recall -- going to be used for swap space and maybe the OS).

Though I thought our systems used 11-platter packs -- with the topmost and bottommost surfaces reserved for dust collection (so only 20 actual surfaces used for R/W).

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

19 surfaces for data - one was used as a servo surface to provide clock and track positions. This construction was used in the high-density packs from the 3330 (100 MB) onwards.
--

-TV
Reply to
Tauno Voipio

Yes, 19 data surfaces, and one servo surface (in the middle of the pack).

300MB disks had 823 tracks/surface, and up to 34 x 512-byte sectors/track. (Yes, that's only 272 decimal megabytes.) These had fast track-to-track seek times for their day, and 323 kbytes/cylinder meant fewer seeks were required if data was laid out appropriately. They were sometimes used to replace older head-per-track drums (which never needed to seek), using just the outer cylinder, or outer few cylinders (short-stroking).
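For anyone who wants to check the arithmetic, the quoted geometry multiplies out to both figures above. A quick sketch (the numbers come from this post, not from a drive manual):

#include <stdio.h>

int main(void)
{
    /* Geometry as quoted above for the 300MB packs */
    const long surfaces   = 19;   /* data surfaces (one more is servo) */
    const long cylinders  = 823;  /* tracks per surface */
    const long sectors    = 34;   /* sectors per track */
    const long sect_bytes = 512;  /* bytes per sector */

    long cyl_bytes  = surfaces * sectors * sect_bytes;   /* bytes per cylinder */
    long pack_bytes = cyl_bytes * cylinders;             /* bytes per pack */

    printf("bytes/cylinder: %ld (~%ld Kbytes)\n", cyl_bytes, cyl_bytes / 1024);
    printf("bytes/pack:     %ld (~%ld decimal MB)\n", pack_bytes, pack_bytes / 1000000);
    return 0;
}

That prints roughly 323 Kbytes/cylinder and 272 decimal megabytes per pack, matching the figures above.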

There are two more 'guard' platters at the top and bottom. The top one is a regular data platter (oxide coated on each side) which failed its quality test. The bottom platter is a larger platter which is not oxide coated, and seals against the dust cover when the pack is out of the drive.

The move to 8" drives also included a move to servo information embedded in the regular tracks, so no separate servo surface was required any more, and no regular head alignment either to maintain compatibility of disks between drives, as the drives automatically compensated for any misalignment of the heads between surfaces. This became necessary as the track density became too high for any kind of accurate mechanical alignment. (I'm guessing that's also why the 600MB 14" packs didn't allow disk exchange.)

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

I've not done much actual programming in assembler in the last 20 years, but it's still very useful to have a good working knowledge of assembly to be able to diagnose compiler bugs.

Particularly the buggy mess produced by Microsoft's early ARM compiler for Windows CE devices.

---druck

Reply to
druck

Agreed that it's useful. I was writing a lot of assembler privately after I stopped needing it professionally.

Mercifully, I missed those, but sometimes the code Borland C generated wasn't that much better. But then again I've never been happy reading little-endian integers in hex and never did like the non-orthogonal dog's breakfast known as Intel assembler.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

And to decode crash dumps.

--
Cheers, 
John
Reply to
John Aldridge

I thought modern development environments took crash dumps and highlighted the source code...?

--
Religion is regarded by the common people as true, by the wise as  
foolish, and by the rulers as useful. 

(Seneca the Younger, 65 AD)
Reply to
The Natural Philosopher

To an extent. The one I'm most recently familiar with (Visual Studio) usually got it right. But if the stack had got mangled it could get confused, and it wasn't always obvious from the source display which exact access had caused the crash -- this is optimised code, remember.

A quick peer at the instructions and registers was not infrequently helpful. Tracing the stack by hand was occasionally necessary.
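For anyone who hasn't had the pleasure, a deliberately broken C fragment along these lines is enough to produce a mangled stack (the names are made up, and a modern compiler's stack protector may catch it, but the principle is the same):

#include <string.h>

/* Deliberately broken: the 64-byte copy overruns the 16-byte buffer
 * and tramples the saved return address, so the crash happens on the
 * return from this function rather than at the faulty memcpy itself,
 * and the debugger's stack walk has very little sensible to work from. */
static void mangle(const char *input)
{
    char buf[16];
    memcpy(buf, input, 64);   /* overruns buf, corrupts the stack frame */
}

int main(void)
{
    char junk[64];
    memset(junk, 0xAA, sizeof junk);
    mangle(junk);
    return 0;
}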

--
Cheers, 
John
Reply to
John Aldridge

If you're looking for adventure of a new and different kind And you come across a girl scout who is similarly inclined Don't be nervous, don't be flustered, don't be scared Be prepared.

Tom Lehrer (of course).

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Yes, but the cause of a crashdump is sometimes nowhere near where the crash happens, sadly, and then you need to start a detailed post-mortem, working backwards to discover why, for example, a pointer which had a correct value at some time in the past now has garbage in it when it gets referenced again.

For those who haven't ever done this, if you have watched any of the TV programs on working back from a dead body to find the cause of death, the process is very similar. The significant difference is that most humans are meant to work the same way internally, but most programs don't, so you often have to do some extra work to find out how the program was supposed to work, before you can go the extra step and find out why it failed.

Some programs actually go to some effort to keep data around which is there only to help with post-mortem analysis, and provide tools to analyse that data, often automatically, to help you quickly home in on the problem. (The Sun Microsystems Solaris kernel is by far the best example of this I've ever come across.)
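To make the earlier point concrete, here's a trivial C sketch (purely illustrative) of a pointer that was correct at some time in the past but holds garbage by the time it's referenced again:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *name = malloc(32);
    strcpy(name, "perfectly good data");   /* pointer is valid here */

    free(name);                            /* the real bug is here... */

    /* ...lots of unrelated work happens and the heap gets reused... */
    char *other = malloc(32);
    strcpy(other, "something else entirely");

    /* ...and the crash (or silent corruption) only shows up much later,
     * far away from the free() that actually caused it. */
    printf("%s\n", name);
    free(other);
    return 0;
}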

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

Didn't need to pore over a crash dump to work that out.

I just shove in debug output to see WTF is going on.

In my day we didn't even HAVE crash dumps.
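For what it's worth, "shove in debug output" usually ends up looking something like this - a minimal sketch, and the DEBUG macro name is just an example rather than any standard convention:

#include <stdio.h>

/* Minimal printf-style debug output: compile with -DDEBUG_ENABLED to
 * get the messages, leave it off for a silent production build. */
#ifdef DEBUG_ENABLED
#define DEBUG(...) fprintf(stderr, __VA_ARGS__)
#else
#define DEBUG(...) ((void)0)
#endif

int main(void)
{
    int widgets = 42;
    DEBUG("widgets=%d before processing\n", widgets);
    /* ... the code under suspicion goes here ... */
    DEBUG("widgets=%d after processing\n", widgets);
    return 0;
}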

--
Ideas are more powerful than guns. We would not let our enemies have  
guns, why should we let them have ideas? 

Josef Stalin
Reply to
The Natural Philosopher

Not always an option - when the core dump comes from a customer site with no indication of how to reproduce it, which is the case for most of the cores I analyse.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Interesting comment.

In my day we had 1) programmer-written debug code and output, and 2) crash dumps.

1 is useful, but frequently not "looking" where the problem is, and quite difficult when the source for the program is not available. (Or it's not your program.)

2 is extremely useful, and can be mined to discover many problems/unexpected behaviors in a single run, with or without source code.

Many programmers gain a much deeper understanding of the actual behavior of their code (including memory leaks and misallocations) after deep perusal (reverse-engineering?) of their code from a crash dump.

This was a critical programming skill when you were lucky to get one "turnaround" a day, and is still very illuminating. ;-)

--
-michael - NadaNet 3.1 and AppleCrate II:  http://michaeljmahon.com
Reply to
Michael J. Mahon

To be fair, we often had in-circuit emulators instead ;-)

--
"When one man dies it's a tragedy. When thousands die it's statistics." 

Josef Stalin
Reply to
The Natural Philosopher

Yep - same here. The other thing that hindered this approach was that customers running business critical apps were often unable to restart them to load a debug version, either because the cost of restarting the app was far too high, or because they weren't allowed to take a newly produced executable with your debugging code added and put it straight into Production without scheduling it to go through their whole QA process.

So we learned two important things: first, that we have to be able to diagnose and fix faults without supplying a different executable, and secondly, that we need to be able to do this entirely from the first (and hopefully only) crashdump.

BTW, this has applied at 4 of the 5 companies I've worked for.

The team at Sun responsible for analysing crashdumps, and designing the tools and dump formats for doing so, had a motto about Solaris being the best OS at crashing! What this meant was that a Solaris crashdump aimed to contain all the information you needed to quickly find the fault first time, so a fix could be provided quickly without the need to provide debug executables and wait for the crash to happen again. Usually this worked, but when it didn't, consideration was given to what information was missing which would have enabled first time diagnosis/fix, and this was added to the OS crashdumps.

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

ICL's George 3 OS produced the best crash-dumps I've seen. I never used Solaris so I can't compare the two. It gave you a dump of the chapter (G3-speak for a code module) that crashed, together with all the data space it could access, plus two process traces (fine-grained and coarse-grained). These were the contents of two circular trace buffers, with the coarse one covering a time period roughly 10-20 times longer than the fine-grained one. This data set made it very easy to pinpoint where the crash happened and why.

The application crash dumps produced by the RRE Algol68R compilation system, and those output by COBOL applications running under ICL's later VME/B OS, were very nearly as good. These showed how you'd gotten to where the crash happened by dumping program modules in on-stack sequence. For each module you were shown the variables declared in the module (as "variable-name = value", where the value was formatted to match the variable type), together with the path taken through the module (expressed as a list of line numbers). The Algol 68R dump also showed how many times each loop on the stack had been executed and whether a conditional had selected its THEN or ELSE branch.

The George 3 approach has influenced my own debugging tools: I use a set of C tracing functions and a Java class to implement them. They typically use command-line options to control the level of tracing detail, whether the circular buffer is to be used, and how many trace statements it should contain. The default is 'no tracing', so the tracing can and should be left in production code, because the performance hit of running either with tracing off or with circular buffering enabled is minimal.
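As a rough illustration of that sort of facility (not Martin's actual library - the names, option letters and buffer sizes here are invented), a C tracing function with a level switch and an optional circular buffer might look like this:

#include <stdio.h>
#include <string.h>

/* Illustrative trace facility: a verbosity level plus an optional
 * in-memory circular buffer that is only dumped when something goes
 * wrong.  Names and sizes are invented for this sketch. */
#define TRACE_RING_SIZE 64
#define TRACE_MSG_LEN   128

static int  trace_level = 0;       /* 0 = no tracing (the default) */
static int  trace_use_ring = 0;    /* buffer messages instead of printing */
static char trace_ring[TRACE_RING_SIZE][TRACE_MSG_LEN];
static int  trace_next = 0;

static void trace(int level, const char *msg)
{
    if (level > trace_level)
        return;                    /* cheap early-out when tracing is off */
    if (trace_use_ring) {
        strncpy(trace_ring[trace_next], msg, TRACE_MSG_LEN - 1);
        trace_ring[trace_next][TRACE_MSG_LEN - 1] = '\0';
        trace_next = (trace_next + 1) % TRACE_RING_SIZE;
    } else {
        fprintf(stderr, "TRACE: %s\n", msg);
    }
}

/* Called from a crash/error handler: replay the ring, oldest first. */
static void trace_dump(void)
{
    for (int i = 0; i < TRACE_RING_SIZE; i++) {
        const char *m = trace_ring[(trace_next + i) % TRACE_RING_SIZE];
        if (*m)
            fprintf(stderr, "%s\n", m);
    }
}

int main(int argc, char **argv)
{
    /* Command-line control, as described above: e.g. "-t2" sets level 2,
     * "-r" switches on the circular buffer. (Option letters invented.) */
    for (int i = 1; i < argc; i++) {
        if (strncmp(argv[i], "-t", 2) == 0)
            trace_level = argv[i][2] - '0';
        else if (strcmp(argv[i], "-r") == 0)
            trace_use_ring = 1;
    }

    trace(1, "program started");
    trace(2, "fine-grained detail");
    trace_dump();
    return 0;
}

The performance argument is visible in the first two lines of trace(): with tracing off the only cost per call is a level comparison, and with the ring enabled nothing touches the terminal or disk until trace_dump() is called from an error handler.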

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

When was that? I was analyzing dumps from OS/360 in 1964.

Reply to
Bob Martin

Well, my path didn't include IBM mainframes.

"I started out on Z80s And soon hit the harder stuff My friends said they'd be there for me When the going got tough But the joke was on me, there was no one even left to bluff. I'm going back to the source code I do believe I've had enough"

--
"Anyone who believes that the laws of physics are mere social  
conventions is invited to try transgressing those conventions from the  
windows of my apartment. (I live on the twenty-first floor.) " 

Alan Sokal
Reply to
The Natural Philosopher

I started work on GEC 4000 series minicomputers in 1980. They could generate crashdumps right back to their introduction in 1973 I think. The capability was built into the hardware, although by the time I started working on them, that was rarely used anymore because the operating system could also do it automatically without having to fall back to the hardware crashdumps which required manual intervention.

This was the norm at the time. It wasn't until DOS and Windows came along that people started accepting crashes without expecting them to be fixed so they didn't happen a second time.

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel
