For a singly linked list, why would anyone put the next-object link pointer anywhere other than at the top of the structure, at (IX+0)?
For doubly linked lists you of course had to put the backlink at (IX+2), but how often do you need doubly linked lists, and how often are items accessed via backlinks in normal operation?
I still think that the PL/M-80 for 8080/8085 was a reasonable HLL at the time.
Of course, one can argue, are PL/M-80/86 or C HLLs :-)
Today, a C compiler for PICs is useful, so you do not have to write everything in the awkward assembly language.
The nice thing about any high- or intermediate-level language compared to most assemblers is that you can write easily readable control structures (if/then/else and for/while loops).
I can't remember the details, but I expect it was (or I was anticipating) a doubly linked list or a node being in multiple lists.
I can't recall if having the link in (IX+0) made the code significantly simpler or faster.
Either way, my conclusion was that the space/time tradeoff didn't favour the Z80 over the 8080, although the code was, arguably, a bit more readable in some circumstances.
Later, I ignored the 8088's segments and Intel's insistence that it was a transparent migration path to the 80286 and 80386. A few years later Microsoft showed how painful it was. Quelle surprise.
Another loss of British subtlety when expressed in written text. :-) Such a statement really means: you sit down as an observer and then observe the sparks start to fly as the sequence of events starts to unfold. :-)
Twenty years from now, will new people be startled to encounter 32 bit integers ?
The other postings are bringing back memories of syntax long since forgotten by me.
Interesting. I looked on Farnell's website, and could see the Z80 was still available, but I could find no trace of the 6502.
Simon.
--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
16 bits (4 decimal digits) was on the low side for most real-world applications, 24 bits (7 decimal digits) would have been enough for most real-world problems, while 32 bits (9 decimal digits) is more than enough for most day-to-day applications.
By that notion, having a root canal would be "entertaining"!
I suspect not. 32b is a rather impressive size for an integer datum. Instead, I think the surprise may be NOT finding hardware floating point support, etc.
E.g., Limbo initially tried the "Why use lead when you can use gold?" attitude wrt data sizes. Then, scaled back on this due to the present day practicalities (i.e., "gold" is still expensive!)
I lament the loss of *choice*. There were a lot more options available "back then" wrt CPU, etc. Many were technically far better than their survivors! So much for the "wisdom of the Market"!
Of course, some were also incredibly costly to produce! (F11?)
And, some made very bad predictions as to where technology would be headed (e.g., the 99000's WSP).
I think it exists as a core, nowadays. Or, as bastardized variants (e.g., isn't the 2A03 a 6502?).
It was a *really* tiny core -- like 3500 Q's. When you consider the i4004 was *almost* that big (ca. 2500 Q's) the difference in capability and ease of use was night and day! (*TRUST* me on that one! :> )
I worked on firmware for one product (a cellular phone) where we used the two Z80 register sets to implement very low-overhead co-routines. It was pretty slick...
--
Grant Edwards grant.b.edwards Yow! Do you guys know we
at just passed thru a BLACK
gmail.com HOLE in space?
However, the word "default" is missing from that reply, which is what I really meant. I also mentally inserted the word "default" into Don's comment and replied to that.
People obviously still (to use Don's example) use 16 bit integers today but it's not the default integer size unless you are working on certain microcontroller platforms.
However, back in the early 1990s it was still a common default integer size, but that doesn't change the fact that a default size of 16 bits for int is something most newcomers today simply would not expect.
Likewise, after 64-bit platforms have been established for a couple of decades, will it still make any sense to have 32 bits as the default integer size?
Simon.
--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
I've just mentioned in another post that what I meant was 32-bit _default_ integer sizes.
That's still true today, e.g. on IA-64. (Yes, VMS is part of my day job.)
According to Wikipedia, that MCU (which I didn't know about until now) was discontinued in 1994, but yes, it was a 6502. There are also a number of other 6502 variants listed:
formatting link
Simon.
--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
8-bit PICs: the only architecture I know in common use today that makes the x86 look like a well-designed architecture by comparison.
Among my annoyances is the total lack of a peripheral-specific interrupt vector or _any_ interrupt index that tells you directly which interrupt fired. Instead you have to examine each status register in turn until you find the one that caused the interrupt.
Of all the assembly languages I've used, I find ARM to be the most readable. You don't have high level structures, but you do have a very expressive syntax (for an assembly language) and things like conditional execution of opcodes.
Simon.
--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
I am not particularly fond of the 8051 either, and that's another one that refuses to die. I also disliked the COP8 architecture, for which I wrote quite a lot of assembly. It was immensely inefficient - each "processor cycle" was 10 oscillator clock cycles - and most instructions took 4 processor cycles.
The COP8 was like that. Combined with its tiny number of registers and slow interrupt vectoring, at a 10 MHz clock it took about 80 us of overhead to save registers, identify the interrupt source, and jump to the relevant code. Restoring the context took another 60-70 us.
ARM is not bad, but I think the nicest assembly I have used is m68k. The msp430 is good too. PIC assembly needs macros to replace the silliest opcodes with clearer named ones - then it was unexpectedly reasonable. (It's a long time since I wrote PIC assembly, but I remember replacing the "BTFSC" instruction with "IfBit", and making flag bits macros that included the address and the bit number.)
I'm pretty sure that the Intel 8048 family existed solely to make everybody think the 8051 was a brilliant design when it came out a couple years later. From what I remember, it worked...
--
Grant Edwards grant.b.edwards Yow! The Korean War must
at have been fun.
gmail.com
Yes, understood. That's why I deliberately referenced "int" and not "short int", "long int", etc.
But, I suspect the processor *has* internal registers to "cache" the workspace internally. IIRC, the 99K actually used memory IN PLACE OF internal registers. E.g., you could snoop on those memory locations to peek "inside" the CPU's state.
From the military contacts I've encountered, I suspect the 6502 is used to "create explosions at a distance" :> But, I can't speak to that for sure as I've never pursued any of those inquiries.
I *do* recall the 8080 (or was it 8085?) was offered for a long time in a military grade, etc.
It might actually be fun to build something like a 6502 out of discretes (no, not tonka toy logic). I suspect it would be (marginally) more useful than a PDP8! :> Think of all the blinkenlites you could outfit it with!! ;)
I would add "by quite a margin" to that. The person who created it did very good work.
Decouple the language from the processor, add some later language developments - e.g. "source1, source2, destination" operations and optional CCR updates - add movez and movex (zero-extend/sign-extend) alongside move.b and move.w, make the macro argument passing/parsing really powerful - well, and some more stuff - and you have my VPA :-).
To be fair, look at its ancestry: e.g., I can recall looking at the 1650 some 30 years ago and (literally) saying: WTF? When you contrasted *it* with other contemporary offerings (even things long gone by -- SC/MP, PACE, 2650, etc.) it looked like a sequencer more than a *processor*!
If you're talking exclusively about *syntax*, I'd have to think on that for a while.
For its *day*, I found the 16032 to be one of the *cleanest* (programmer models) for a "heavy-weight CPU".
[Poor NS... couldn't market an MCU if their life depended on it!]
No, I said *discretes*. Sort of like DEC's 60's era "flip chips" but without the structured nature of that sort of a design. (imagine just a field of transistors deliberately interconnected to mimic the processor's internal logic). You'd be able to attach a blinkenlight to any point in the innards of the "processor" (conceivably, display *each* Q!)
Of course, you could rely on more "integrated" technology for the memory, etc.
And without the comparative relief from still-bleeding wounds caused by the 8048, I don't really understand why people find the 8051 acceptable unless they're locked into it because of some weird peripheral. Compared to something like an MSP430, AVR, or small Cortex-M-something, an 8051 is torture.
--
Grant Edwards grant.b.edwards Yow! ... I don't like FRANK
at SINATRA or his CHILDREN.
gmail.com