C cross-compiler for 6800 (yes, you read correctly)

You have been searching for a C compiler for several days; in that time you could have hand-written quite a lot of assembler, especially if the algorithm has already been tested in some high-level language on another platform (such as Windows/Linux).

Of course, manually translating existing high-level code (such as FORTRAN, Pascal, or C) into assembly will typically produce quite inefficient code. If high-performance assembly is needed, either write everything from scratch in assembly or let a good optimizing compiler (if one is available) do the job.

If your main interest is retrocomputing, very few high-level compilers were available in the 1970s for the 8-bitters. The only one I remember from those days was PL/M-80 for the 8080/85. This is understandable, since the Intel (and Zilog) architecture was quite awkward for assembly programming. The 6800/6500 families were much more assembly-friendly, so these were programmed in assembly for quite a while.

Reply to
upsidedown

Nice machine with sixteen word wide (16 bit) cache :-)

Reply to
upsidedown

Why do you need a call stack ? Do you intend to play with recursion or some other "modern" tricks ? :-) :-)

The 1802 had a sufficient number of registers, any of which could be used as the program counter, so one register could be assigned to the main program and one to each subroutine. The main program "called" a subroutine simply by switching the program counter to another register, and after execution the subroutine returned control to the main program the same way, by switching the program counter back.

Reply to
upsidedown

I once wrote a 20,000 line assembly program for the COP8, running on a 32K OTP chip. It was one of those projects that started small, then needed another feature, then another, then another, until the result was a monster. At each stage it was faster and cheaper to add to the existing system than to redesign it with a saner processor choice. But by the end, bugs could take an hour to identify, and then it took days to find a spare byte of OTP, or a spare bit of RAM, in order to fix them. Often re-arranging the order of files in the linker script made a big difference to the space, as certain structures (jump tables and data tables) had to fit within a 256-byte page, so different orderings produced different padding.

Add to that the big, expensive, and unreliable emulators - which then died. So later versions were debugged by burning the program into the OTP chips and testing them out.

So I too would not want to write anything but a trivial application in assembler - or C - for these chips!

But they /were/ solid and reliable microcontrollers - I don't remember any failures in the field, even in some quite nasty environments.

Reply to
David Brown

[...]

Maybe not recursion so much as re-entrancy, but it's mostly a culture shock situation. If everything you work with has hardware support for a call stack, it's quite a jolt the first time you run across something that doesn't have it. You never had to think about what to do without one...

Allocate half the registers to one task and half to the other and you have free, instant co-routine support.

--
Grant Edwards               grant.b.edwards        Yow! BARBARA STANWYCK makes 
                                  at               me nervous!! 
Reply to
Grant Edwards

You need re-entrancy only if you are using pre-emptive multitasking. A small multitasker will require about 1-4 KiB of memory, so you would not use one on a small system anyway.

I started my programming career on a DDP-516 with 16 KiW of memory, where the return address was stored at the subroutine's entry address :-)

Is anyone actually using coroutines for anything genuinely useful?

The only time I have seen them used in a real application is the RSX-11 (PDP-11) run-time library register save (and restore) code. The return from a run-time routine looked like any ordinary library function return, but in fact the registers were restored in a coroutine.

Reply to
upsidedown

Modern day architectures (e.g. power) do not have a hardware stack pointer either.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI
formatting link
------------------------------------------------------
formatting link

Reply to
dp

I used one of those to cross-assemble 6800 code for my university project. Paper tape being read at 1000 cps would probably break modern 'elf'n'safety laws.

But that was a modern machine. My first programming was on an Elliott 803, which had an architectural maximum of 8192 39-bit words. A power-of-two number of bits in a word, or even an even number of bits - that's for the wimps :) It had a groundbreaking Algol-60 compiler too, by C.A.R. Hoare.

Yes.

In the early 80s a company I worked at had a homebrew executive written in C for hard realtime control running on things like PDP11s and Z80s and anything else that came along.

The principal control structures were effectively co-routines (although some purists might disagree) that could be used for cooperative multitasking.

I used it to control a life support machine coded as a series of FSMs. The code was very easy to read and corresponded more-or-less directly with the specification. The pseudocode was

  forever {
      waitfor(condition1, condition2, condition3)
      if (condition1 || condition3)
          doSomething()
      if (condition2)
          doSomethingElse()
      waitfor(condition4)
      etc
  }

The waitfor() caused the flow of control to "disappear down the plughole" and allowed other "threads" to execute. When another thread yield()ed and a condition was true, the "PC reappeared" and execution continued after waitfor().

Reply to
Tom Gardner

Or interrupts. Which, I suppose, is a degenerate form of multitasking.

In embedded systems interrupts are widely used on systems with only a few hundred bytes of RAM and a few KB of ROM.

Sure. Back in the early 80's I worked on firmware for a cellular phone which used a Hitachi version of the Z80. The Z80 had two complete register banks, and a single instruction could switch between them. The phone's firmware was designed as a pair of coroutines: one for each physical register bank. It worked very nicely. On the next project (this time a cellular base station rather than a mobile), the processor we chose didn't have multiple register banks, so we did a small multi-tasking kernel. It's a bit more overhead, but much more flexible.

--
Grant Edwards               grant.b.edwards        Yow! Am I having fun yet? 
                                  at                
Reply to
Grant Edwards

That sort of control flow is still widely used and is, I think, generally referred to as "stackless threading". Here's a nice "library" of macros that implements it in C:

formatting link

If you're using gcc (which supports label pointers), the implementation is nice and clean and easy to understand. If not, it uses Duff's device (which has a bit more overhead and makes your head hurt the first time you look at it).

--
Grant Edwards               grant.b.edwards        Yow! Where's the Coke 
                                  at               machine?  Tell me a joke!! 
Reply to
Grant Edwards

Nah, normal interrupt handlers are a form of preemption and these issues do arise in them.

Sure, it's a perfectly good way to get rid of control inversion if you have layers of processing of a data stream, for example:

formatting link

The GA144 Forth processor and its predecessors have a cute coroutine switch instruction that simply swaps the top of the return stack with the program counter. Each coroutine has to be aware of the registers and data stack of the previous one, but the programs on those chips are so small that this is manageable.

Reply to
Paul Rubin

The Ellie had two instructions per word: 6 bits of opcode, 13 bits of address, a B bit, then another 6 bits of opcode and 13 bits of address.

The B bit gave a weird indexing/base operation: if the B bit was set, the result of the left-hand memory operation was used to modify the address part of the right-hand operation.

The Elliott 803 used ferrite memory toroids as logic devices and nickel-spiral delay lines for registers. Main machine cycle was 288 us, with three-phase clock to pump the data in proper direction through the ferrite-ring logic.

At least the bigger brother, the 503, had 40 bits, with the last bit a parity bit not visible to the programmer.

There was also an assembler/compiler (actually neither) called Autocode.

--

Tauno Voipio
Reply to
Tauno Voipio

The 1802 was still popular for space applications into the 1990s. I think you could still get it in silicon-on-sapphire into the 21st century.

If I recall correctly, the reason the 1802 was rad-hard was just that the sheer size of its features meant that one ionizing particle zipping through the chip wasn't going to make much difference, whereas something with itty-bitty features could go awry if just one electron got knocked out of place at the wrong time.

At least to my understanding, newer processors aren't really rad hard so much as that people have figured out how to work around their rad "softness".

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

The runtime I was referring to had conventional stacks, unlike protothreads; within a thread the code looked and behaved as standard C code. I've looked at Dunkels' protothreads and thought them a bit limiting: you'd have to adapt a traditional coding style too much to work with them.

The runtime wasn't pure C: it always had a tiny bit of assembler to save/load (IIRC) the SP and PC and maybe some machine-specific registers. No big deal, and it made everything else /so/ much simpler. Was setjmp/longjmp even in the language in 1980/1981?

Reply to
Tom Gardner

I was under the impression that the sapphire substrate formed a useful isolator, which therefore had Good Properties w.r.t. radiation, in ways that I never bothered to understand.

Reply to
Tom Gardner

That may be, but I doubt that the el-cheapo 1802 that came with my COSMAC Elf was silicon-on-sapphire.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

Many stackless machines had interrupts. On such machines there were, of course, severe restrictions on what you could do in the ISR.

Reply to
upsidedown

(I know /you/ probably know this, but others are reading this thread.)

That's perhaps a matter of definition - as the PPC has all the addressing modes needed to make a stack using any GPR, you could say it has as many stack pointers as you want (assuming you never want more than 31!).

It is common for RISC architectures not to have a dedicated stack pointer as part of their ISA, simply because the same operations you want on a stack are also useful for other structures, and therefore it makes sense to make it a general operation. The choice of register for the SP is then left up to the ABI (which is usually quite specific about it).

Many modern RISC microcontrollers have some sort of alternative compact instruction set (Thumb, VLE, etc.) which often force the use of a specific stack pointer register for at least some instructions or addressing modes (VLE on the PPC does not do that, IIRC).

But such architectures usually don't stack return addresses on a "call" instruction - the return address is put in the "link register", and it is up to the callee to store the old LR on the stack before making a "call" itself. The LR is thus a register dedicated to the top of the call stack (or, if you like, a single-entry hardware call stack).

Reply to
David Brown

Probably not, but IIRC SOI was available as an early option.

Avnet Express will still sell you an 1802 for $131. But judging from my recent experience, getting them to deliver it may be a different matter :(

Reply to
Tom Gardner

Not really any more than on machines *with* stacks. You just have to save the state somewhere.

Consider S/360 and its descendants as an example. Interrupts stored the old PSW (IP + CPU flags, basically) and, depending on the interrupt, some information about it, all in fixed locations in low core. The "new" PSW would typically disable that type of interrupt (for example, you'd start the I/O interrupt handler with I/O interrupts disabled), and each interrupt handler would have a bit of reserved low core for storing regs, doing stuff, etc.

If it needed to do something more than trivial, the interrupt handler would have to store the regs back in the interrupted task's state structure, then go off and establish the environment it needed for the more complex work, and afterwards redispatch the interrupted task (or hand control back to the dispatcher). But once you had saved enough state to get out of low core, you could re-enable interrupts again.

And that's no different than a machine with stacks - most machines with a protected mode and stacks will actually switch to a protected mode stack during an interrupt. But you almost always had better ensure that the CPU doesn't take an interrupt of the same type while on that protected mode stack, or the second interrupt will clobber the stack entries of the first.

Reply to
Robert Wessel
