Obsolete 8051 Dev equipment

I have some 8051 ICE equipment from my first job, back in the '80s:

Nohau Emul51 hardware (2 x full-length ISA cards, cables, pod with some adapters, e.g. DIP40, PLCC44, etc.) and software (on floppies, MS-DOS based).

Is it of interest to anyone, or should I send it to landfill?

Reply to
Chris

30 years too late for me :(
Reply to
TTman

One possibility is the Computer History Museum.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Phil Hobbs snipped-for-privacy@electrooptical.net wrote in news: snipped-for-privacy@electrooptical.net:

It is still usable in a lab that has an old DOS machine.

They even have ISA adapters for the card if the machine is 'newer'.

Everything these days is JTAG for programming the flash area(s). Hardware ICEs still exist, but they're not really needed in a modern development environment. And there are modern chips that are 80C51 clones or have "full emulation" and code execution built in.

Reply to
DecadentLinuxUserNumeroUno

An amateur(?) collector in Italy (I'm in Australia!) is interested in it - a little less Pb and plastic in the local landfill :-)

Reply to
Chris

Bit OT... I have an old rally nav/computer I designed back in the early 70's using an Intel 8748. No ICE in those days... code it, try it, erase it... mod it, try again... Happy days.
Reply to
TTman

Not in the /early/ 70s, you didn't! Late 70s or early 80s probably.

Reply to
Tom Gardner

*Early* (mid) 70's we were still using 1702s and i4040... I remember how excited I was when the i8008/8080/8085 came along! And, nearly creamed my pants at the Z80! :-/
Reply to
Don Y

The Z80 /did/ look nice, until you tried to write assembler programs. 6800 was much nicer, 6809 even more so.

Reply to
Tom Gardner

Ah, different camps (M vs I)!

I had a buddy who liked the '09. I always saw his code as continually loading and storing -- from memory.

By contrast, I could juggle several values in the registers and preplan *which* register, so I could have the data where I wanted it when I needed it. So, relatively fewer loads and stores than in my buddy's code.

The 68xx had cleaner hardware interfaces, though. Much easier to implement dual-ported memory on them than on the Z80!

OTOH, it was always hard to do an apples-apples comparison in terms of hardware cost per workload. We settled on "constant memory dollars" as the normalizing factor (to avoid trying to map one osc frequency to an equivalent osc on the other).

[Back then, memory costs dominated the recurring costs]
Reply to
Don Y

My programs needed to follow data structures around the memory map. When I tried to do the equivalent in the Z80 I was surprised at how painful it was. The IX and IY registers looked good until you tried to use them - at which point the 8080 registers were better!

If I had wanted to do that, I would have used an 1802 :)

I remember during interviews for my first job, back in '78, being asked which processor I thought was best. My answer was that the front runners were all pretty much of a muchness, and that development tool support was more important.

I accepted that job offer :)

Reply to
Tom Gardner

SNIP

You're right... too far back in the memory to remember exactly. I just opened the one I have and the test label says April 1980... But I remember using them before that, as the one I have is a second generation. So yes, late 70's...

Reply to
TTman

The NSC800 CMOS 8085/Z80 hybrid was the bee's knees. Used that to build an all-CMOS handheld terminal. Later on, used a Z80 to make a DRAM memory module. Very slow, but functional... AFAIR, it was the core of a Centronics printer buffer add-on.

Reply to
TTman

IX/IY are good once you have them pointed *at* a struct. You can then index off of each to retrieve the individual members; doing so with the other "pointer" registers (BC, DE, HL) means lots of inc/dec operations *or* actual pointer math (via HL).
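For instance (a from-memory sketch, not run through an assembler, and the offset is made up):

    ; with IX aimed at the struct, members come out by offset
    LD E,(IX+4)     ; low byte of a 16-bit member at offset 4
    LD D,(IX+5)     ; high byte -- and IX still points at the base

    ; with HL alone, you pay in INCs and lose your base pointer
    INC HL
    INC HL
    INC HL
    INC HL          ; crawl out to offset 4
    LD E,(HL)
    INC HL
    LD D,(HL)       ; HL is now mid-struct; the base address is gone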

If I was going to have to "walk" through several data, I would typically have deliberately arranged the data in memory so that I could sequentially access them... on my way to accessing the NEXT set.

[I am very happy to not have to micromanage memory and instruction streams like this, anymore!] [[More often than not, the per-instruction comments would just be reminders to me as to what was where: r.HL points to destination, r.de points to source-1]]

I only saw one 1802 product in my career. It, like many of the early MPUs, just never got a real foothold (SC/MP? 65816? 8x300? 2650? 16032?)

Sad, as there was considerably more variety back then.

Yes, but the flip side of that is *inertia* drove a lot of designs! If you'd dropped several kilobucks on an EXORciser or an MDS, it was REALLY hard to convince your boss that moving to a newer, more capable processor would pay off, in the long run!

(and having to SHARE a development system was a real PITA!)

OTOH, having less hardware than "needed" (?) taught me to convert tasks that others would have considered HRT (hard real-time) into SRT (soft real-time) -- and the whole notion of graceful degradation became a fundamental in all my future designs.

[It's interesting to think of the watershed moments in your career that had dramatic influences on your designs. Another, for me, was my first use of an X Terminal... the whole client-server concept was a HUGE revelation! As was netbooting -- skip all this ROM/FLASH nonsense!]

By contrast, my buddy always drooled for a *2*MHz '09. Something that he was never going to see in a product (due to the increased cost of memory -- 64K of dual-ported RAM wasn't cheap in those days!)

Reply to
Don Y

If you have a linked list of structs and (HL)+2 points to the next struct, it is a godawful pain traversing the list.
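From memory, the HL-only walk came out something like this (just a sketch, assuming the 'next' pointer sits at offset 2 and 0000 terminates the list):

    LOOP: INC HL        ; crawl from the base out to the 'next' field
          INC HL
          LD E,(HL)     ; low byte of 'next'
          INC HL
          LD D,(HL)     ; high byte -- DE := next
          EX DE,HL      ; HL -> next struct; the old pointer is clobbered
          LD A,H
          OR L          ; 0000 terminator?
          JR NZ,LOOP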

ISTR that the GEC4000 minicomputers were very good at Fortran array processing, but awful at C pointer/struct processing.

Sounds similar.

Indeed. At a slightly higher level, I never want to have to program a Set or Bag again.

... and some were really /weird/, just like some pre-Cambrian lifeforms.

Early in your career it was probably easiest to use a new processor by changing job :)

Yup. The moveable hardware/software boundary is fun, and something that is not well taught. At the end of one interview I was asked "Now Tom, are you really a hardware or software engineer?". I made my excuses, and left.

Mine was understanding what an Algol-60 compiler was actually doing (on a 39-bit computer). Decades later I met the implementer (CAR Hoare) and mentioned it - which raised a smile.

Another was Smalltalk, and how OOP directly mapped onto two traditional conversations with customers:
- I like it but need another one of /those/ (i.e. instantiation)
- I like it but need something the same except for X (i.e. inheritance plus polymorphism)

Reply to
Tom Gardner

You could do things like:

    LD L,(IX+NEXT)
    LD H,(IX+NEXT+1)

where NEXT was the offset into your struct for the pointer to the "next" struct.
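Filled out into a whole list walk, it'd be roughly this (untested, and assuming NEXT is 2 and a 0000 'next' pointer ends the list):

    NEXT  EQU 2
    WALK: LD L,(IX+NEXT)    ; HL := current node's 'next' pointer
          LD H,(IX+NEXT+1)
          LD A,H
          OR L              ; 0000? end of list
          RET Z
          PUSH HL
          POP IX            ; no LD IX,HL on the Z80, so go via the stack
          JR WALK

IX lands on each node in turn, and all the member offsets stay valid the whole way along.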

Yup. Digging ditches. I tend to feel that way about writing RTOSs. ("OK, so how do I want to handle interprocess comms in THIS application?")

But that was the market trying to sort out what "made sense". I recall Guttag pitching the 99K to us (whose Workspace Pointer kept the register file in RAM). As memory was already "precious" to us, the idea of moving MORE stuff into it (esp. things that would have such a dramatic impact on performance!) made it a non-starter.

Yes! (also easier to get exposed to different application domains: "No, I'm not keen on designing yet-another-foo!")

My education was as an EE. I was considerably more interested in hardware; there were lots of innovations taking place, at the time!

But, I was interested in COMPUTER hardware -- not analog or semiconductor design. So, as a naive kid, I opted for the EE/CS option. It didn't take long to realize that I was getting a grounding in EE -- but a shitload of more specialized education in CS! ("Today, we'll design a compiler...")

In the end, that has proven far more interesting as more and more "computer hardware" is cookie cutter COTS stuff. (I haven't designed a processor in 20+ years!) But, what is now *possible* and *practical*, in software, is continually evolving in new directions.

The idea of using virtual memory *in* a deeply embedded product would have had me rolling my eyes, years ago. Now, I wonder how long it will be before MOST such designs go that route!

Algol 68 was the first programming language I learned. My pseudo-code often has := littered throughout. (Limbo uses the same symbol for "instantiate and define".) It was also my first exposure to recursive algorithms (which I, somehow, manage to find a practical use for in almost every project!)

It has been only recently (relatively) that I've been able to use HLLs (most of my products have been really cost sensitive). And, C tends to find more application, there, as MPU vendors have to decide *which* language(s) to support for their products and toolchains.

I've used a few ASLs, in the past -- but, mainly as ways to simplify common activities (e.g., it's easier to program a display animation with ASL primitives than to have to put lots of explicit function invocations in your code)

Last time I *used* a LISP dialect was in AI courses. (though I've had to read plenty of FOSS code developed that way)

Reply to
Don Y

I like processors where the RTOS and interprocess comms are implemented in hardware, and the language /starts/ by presuming multicore processors and processes. Plus, the IDE guarantees hard real-time constraints without executing the code.

Those CPUs are the XMOS xCORE, the modern descendants of the Transputer and available from DigiKey, with up to 32 cores/chip and 4000 MIPS/chip.

The language is xC, the modern descendant of Hoare's Communicating Sequential Processes (CSP) and Occam.

The simplicity of all that makes programming /fun/ again.

I was an EE, since I had (and have) zero interest in compilers and databases :)

Very little has changed in 30/40 years, other than smaller/faster/cheaper.

Things that have changed include nano-power energy harvesting circuits, and the performance of ADCs and DACs.

Reply to
Tom Gardner

That comes at a (recurring) cost. While my "budget" has increased from "single digits", it's barely into the two-digit range. You can buy a sh*tload of CPU+peripherals for $15-20! More than enough to support writing those primitives yourself!

I've no interest in compilers -- though find myself now designing a front-end to support some language extensions that are inherent in my current design.

I have found *using* databases (designing schema) to be an exciting new skillset; in my current project, all of the persistent store is implemented in a RDBMS. Want the binary image for node #26? Do a query, then push the BLOB out over the wire to node 26!

[It seems that most applications that want to store data don't really want to also have to write code to *parse* it! So, store it in tagged fields and let the RDBMS ensure its integrity and proper form. (Do you really want to read an IP address from a plain file and have to parse it to ensure it consists of exactly 4 octets and they fit a particular net/subnet?)]

Yes. But smaller/faster/cheaper over several decades makes many things that were "hard" now a piece of cake! I no longer worry about counting *bits*, packing 8 bools into a byte, reusing variables for different purposes in different parts of the code, etc. I can, instead, concentrate on what I want to *do* and making my source code more expressive of those goals (instead of having to leave cryptic notes to the next bloke explaining some crude efficiency hack that I've employed).

I'd add power consumption, in general. I can put a PC/AT in the palm of my hand, dangle it off the end of a network drop, power it *from* that drop and not even get warm! (important if you want to have a few hundred of them in an application!)

How you approach a problem is significantly impacted by what you can expect from the hardware (at a given price-point). E.g., instead of putting a 4-20mA sender in a sensor and interfacing to a current-loop converter at the processor, I can put the processor *at* the sensor and ship the "digitized" signal out over an ethernet connection. The sensor can also handle its own calibration, configuration, etc.

Previously, those things would have been handled at a higher level of abstraction.

Reply to
Don Y

I've been fortunate that in my work the performance and NRE costs have been important.

In all but one case, my use of a database has been key-value pairs - so I've been able to rip it out and use other, more modern technologies.

Agreed, but I think my point still stands.

But not having "any" PSU is a novel step!

Reply to
Tom Gardner

Don Y snipped-for-privacy@foo.invalid wrote in news:s3i9ra$9kk$1@dont- email.me:

Fully emulated in the MAME environment.

Hundreds of computers have been included along with thousands of upright video games from the arcade era.


Reply to
DecadentLinuxUserNumeroUno
