1) There were availability issues with the 68000. IIRC, it wasn't going to be available in production quantities in time to meet Boca's schedule.
2) Intel was much smaller than Motorola, so IBM could buy a sizable chunk of Intel and have some influence. It would have been more difficult to do that with Motorola.
Just be glad it wasn't some completely proprietary IBM processor with a closed-source bus design.
--
Grant Edwards          grante at visi.com          Yow! Is it FUN to be a MIDGET?
At design time I believe there was no 8-bit bus version of the 68000 available, and I think the 6809, especially the 6809 with memory management, did not exist. Intel had the 8088 available, complete with the essential peripherals such as the clock generator, interrupt controller, etc. An 8-bit bus is much cheaper than a 16-bit job. Don't forget that the present ISA bus is already an extension kludge to the original 8-bit bus, created by adding a second connector with all its alignment problems.
At any rate the 8088 was a more capable chip than the 6809, if not as pretty, and had the advantage of being more or less assembly-mnemonic compatible with the 8080. This meant a quick path for porting CP/M software, which WordStar took. Developers didn't have to get stuck with the 640k barrier if they had just taken IBM's advice and routed things through the BIOS interrupts. So what if it was slow - there was no competition at the time.
--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
How long ago are we talking here? I used the 6809 in 1981. I seem to remember the 68008 was available around that time too. Didn't the Sinclair QL use it?
The processor is architecturally OK, and I do not have much to complain about regarding the MMU either; it worked as specified.
I worked on a project in 1982/83 which used a 6809 board with a customer-specific RTOS (every company had their own RTOS in those days :-) and an extension board with up to 64 KiB of RAM behind the MMU. The extension board was used for data storage only, and a window in the main processor address space was used to map a small segment of the data RAM. In addition to increasing the address space, this also helped protect the data during crashes and resets.
At that time the only languages available from Motorola were the assembler and a lousy Pascal compiler. For any larger project, the use of assembler as the only development tool is not very realistic, so part of the job was done with the Pascal compiler. Unfortunately this compiler was full of bugs and needed frequent workarounds. Neither the generated code nor the libraries were fully re-entrant, so we were forced to use it in a single task only and write everything else in assembler.
As far as I know, no RTOS was available from Motorola at that time (compared to RMX-80 from Intel) and in order to fully utilise the Motorola MMU, an OS similar to RSX-11 would have been required.
The program development tool support for the Motorola systems was quite minimal. For instance, if we needed a dump of a memory area before and after an operation, the standard practice was to print the memory hex dump on thin paper before and after, then lay the two printouts on top of each other and hold them against a light source to see if there were any differences :-).
This might be a device-specific problem, but I include it anyway, in case anybody else had similar experiences. Our EXORciser was extremely sensitive to static electricity when the emulator card and cables were installed. Any static discharge in the same room would cause the EXORciser to tilt, e.g. during an editor session. Connecting the user's chair and wrist to the device frame helped somewhat, but if you intended to do some critical editing sessions, it was a good idea to lock the door to prevent anybody from entering the lab.
I had been working with Intel processors and Intellec development systems both before and after this project, and before that I worked in the PDP-11/RSX-11 environment. While fully aware that you cannot expect the same things from an 8-bit environment, the Intel program development environment was roughly what I expected; the Motorola environment was well below that.
While the Motorola processors are definitely cleaner than Intel processors, the problem with Motorola was that they did not provide sufficient software support, or did not realise how complicated the programs to be executed on their processors had become.
Someone else mentioned the 8088 giving a shorter development path from CP/M. I recall well after the PC came out, Byte Magazine speculated that the next-generation IBM PC (which turned out to be the AT) would use a 68000 (surely well available by then), but of course it used the next-generation Intel 80286.
This assumes that a non-8088 design would have found success. There were a bunch that didn't, so maybe the IBM PC without the 8088 would have been just as successful as the IBM System/23 Datamaster or the DEC Rainbow.
Why should they care about that? They were producing a line of computers, with old models going obsolete and new computers replacing them:
Model 5100... Normal, lasted about a year, then was replaced.
Model 5110... Normal, lasted about a year, then was replaced.
Model 5120... Normal, lasted about a year, then was replaced.
Model 5150... BLAM! Design decisions still haunting us in 2005!
The story I heard, both from IBM and Motorola folk, is that the 68K was the first choice, but it was just being rolled out and sampled, and IBM wanted exclusive use of the first several million chips and a volume that would require Motorola to build a new plant. At the time, Motorola was doing what they always did with a new processor: limit them to 2/month per company/division, and charge $500 each for them - to make sure the processor got maximum exposure, while minimizing the suckage from hobbyists.
So Motorola declined, and IBM went to Intel, who told them they could provide the 8086 in quantity (though in fact they couldn't) - but IBM had already started the design, and so they settled for the 8088.
As to the memory limitations, you have to realize that it wasn't IBM's computer group that developed the PC - it was their office equipment division. The PC was supposed to supplant/augment the Selectric typewriter, and to replace the keypunch machine by using floppies.
Also, the memory limit was 512K, not 640K. That's 32K of 16-byte 'paragraphs' (not pages). The original BIOS used signed arithmetic for calculating the memory size, which caused problems when 640K was installed - the extra memory made the count negative.
--
Kevin D. Quitt        snipped-for-privacy@Quitt.net
96.37% of all statistics are made up
I've heard this before, but I'm skeptical. IBM wasn't actually expecting the PC to sell very many units, and Motorola was shipping the 68000 in volume by then, so I don't think there actually would have been any issue.
Of course, the reality is that the PC sold much better than expected, so use of the 68000 could very well have run into supply problems.
But it seems much more likely that the decision was based on cost. An 8088 with the various support components (both Intel 8xxx chips and TTL) was much less expensive than a 68000 with the components it would have required.
They could also have chosen the superior, but more expensive system-wise, 8086 with its 16-bit bus, but chose not to. It was out well before the 8088.
Best regards, Spehro Pefhany
--
"it's the network..." "The Journey is the reward"
speff@interlog.com Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog Info for designers: http://www.speff.com
From the architectural point of view, integrating the processor and the RAM on the same chip would make a lot of sense (though those working with the actual silicon processing might disagree :-).
Think about a typical RAM structure (at least dynamic RAM), in which the row address is decoded first and all cells in the row are activated, feeding the result from each cell to the corresponding column wire and into the sense amplifiers. When the column address is decoded, it selects one of the columns for output.
All the data from a single row is available on the column lines and needs to be squeezed out through the limited number of pins by selecting the corresponding column wire with the column address.
An 8 MiB (64 Mibit) memory might be arranged as an 8192 x 8192 array. After one of the 8192 rows is selected, data from all 8192 columns would be available at once (1024 bytes). It would be very nice if all this could be loaded into a cache page in a single memory cycle. While 8192 parallel lines would consume a lot of space on the chip, it would still be a lot more efficient than a package with more than 8000 pins and high-power amplifiers to drive each external line :-).
This would eliminate the DRAM column decoder and the delays associated with it. The cache address decoding could also be simplified, since only a word select would be required.
With a 50 ns DRAM cycle time, this would correspond to a 20 GB/s or 160 Gbit/s transfer rate.

Paul