Many years ago, when dinosaurs roamed the earth, there was the x86 and then there were the viable contenders. That list was headed by MIPS and PowerPC, with the others as also-rans. With time, none of these architectures kept pace with the forces of the Intel megalith, and the x86 took over the world, resulting in the extinction of the dinosaurs.
In more recent times the focus has shifted from raw power (so essential in the times of dinosaurs) to low power, as dictated by the smaller size and battery operation of new devices (think mammals). Now the power guzzling of the x86 line is creating global warming and threatening our very existence. But today's knight on the white charger is the ARM, not PowerPC or MIPS.
Was it inevitable that these lesser architectures play second fiddle to the ARM, or was it just a fluke of marketing or corporate mismanagement that relegated them to the sidelines?
Is there any compelling technical reason for the emergence of the ARM over other non-x86 processors?
ARM has a massive advantage going for it: a _huge_ hobbyist and low-end embedded systems ecosystem, which means it has been easy for a _long_ time for individuals to use this architecture for their own personal projects.
There's nothing hobbyist-related in the PowerPC world, and the only MIPS architecture variant I am aware of that is hobbyist-suitable is the PIC32MX.
What people forget in this next-quarter-driven world is that today's hobbyists and students are tomorrow's decision makers. That doesn't help you meet next quarter's target, but it does mean those people will be making recommendations to their future employers.
Those people know ARM from their own projects and they know the ecosystem. I'll let you work out what happens next. :-)
At the technical level, ARM is a very elegant architecture. Over the last few days, as part of a hobbyist project, I've been doing some bare-metal work on x86 for the first time in a number of years, and I had forgotten how utterly crap the x86 architecture is (at the bare-metal level) compared to ARM.
The ARM architecture is very expressive compared to MIPS, though I know nothing about PowerPC. I wonder how the code density of the various architectures compares.
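To make the expressiveness point concrete: in the classic 32-bit ARM ISA nearly every instruction can be conditionally executed, so a short if-body needs no branch at all, while MIPS must compare-and-branch around it. A rough, illustrative sketch of `if (a != b) a -= b;` on each (register assignments are hypothetical):

```
        @ ARM: cmp sets the flags, the subtraction is predicated on "not equal"
        cmp   r0, r1
        subne r0, r0, r1       @ executes only when r0 != r1 -- no branch

        # MIPS: the same logic needs a branch plus a delay slot
        beq   $a0, $a1, skip
        nop                    # branch delay slot
        subu  $a0, $a0, $a1
skip:
```

Whether this wins on overall code density is less clear-cut - Thumb and MIPS16 exist precisely because fixed 32-bit-per-instruction encodings are bulky.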
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Power architecture is very much alive, just check what Freescale are making.
The reason why the most popular architectures are crippled - x86 is simply a mess, and ARM is better but has too few registers to be a viable load/store architecture - is not commercial.
Someone somewhere just makes sure most of the world uses - and is familiar with - the crippled architectures, while the best ones (Power is the best of those I have seen, by a margin, followed by MIPS) are reserved for more critical areas where it matters.
The other architectures never fully died out - but they were relegated to more niche markets. There are lots of MIPS processors around, especially in network equipment. PowerPC turns up in high-end microcontrollers such as engine controllers, and IBM Power chips are popular in big iron computers (as are SPARCs). But they can't get the economies of scale needed to compete in the mass market.
I think there's a mix of reasons - some technical, some non-technical.
The x86 ISA is crap. It was viewed as limited and old-fashioned the day it was created, and the IBM engineers designing the first PC wanted nothing to do with it (they preferred the 68000). But the chip was relatively cheap, and its 8-bit bus meant cheaper memory and main boards, so the PHBs pushed it as the cpu for the IBM PC. Fast-forward to today and we have fantastic implementations of the same crap ISA. (amd64 is somewhat better, but still not a nice ISA compared to MIPS, PPC, or ARM.) If the same clever folks at Intel had spent the same time and resources on a better implementation of MIPS or PPC, we would have faster, cheaper, and lower-power cpus in the mainstream.
ARM were the right people, at the right time, with the right deals. The original ARM processor was a very nice design - it was done by smart folk who looked at available processors and thought of a better way to make a cpu. It was low-power and low-size from the beginning. And because they didn't have a fab themselves, it was a relatively simple matter to offer the core as IP to other manufacturers. The ISA itself is pretty good. It is arguably not the best - generations of Thumb and other instruction sets attest to that - but it is pretty good, and ARM have done a good job of moving it forward.
MIPS is a bit of a sad story. The ISA is excellent - it is a very elegant processor. But it has suffered commercially as a result of competition with the x86 market, betrayal by Microsoft on the Windows NT platform, and a series of owners of varying success. For example, they were bought by Silicon Graphics when they were struggling financially and SGI was a heavy MIPS user. But then SGI found that it gave more value for money to put Intel chips in their workstations than their own MIPS devices. MIPS have always been popular in networking equipment - the cores are very good at pushing data around, and are small and low-power, which works well in SOCs. They used to be significant in the mobile telephone market, and are heavily used in set-top boxes, Blu-ray players, etc.
Unfortunately, MIPS have failed in the microcontroller market - despite making cores that are highly competitive with ARM's Cortex range, and having decades of lead in the 64-bit class. MIPS-based devices tend to sell in very large quantities but to only a few customers - the mass of developers out there simply don't know about them, and can't get the chips or the information (companies that make MIPS-based SOCs often have minimum order quantities in the 100,000 range). They made an attempt with Microchip - but Microchip were perhaps the worst possible partner. Microchip are very popular with small hobby users - but that gives them a reputation of /only/ being for small hobby users, and their other microcontrollers have a well-justified reputation for weird and painful cpu architectures and often very poor tools. (Microchip have many good points too - I'm just talking about the cpu cores here.) So the PIC32 is assumed to be another in the line of bizarre Microchip-specific cores, and the decision to limit the free tools in optimisation (rather than the more common code-size limits) means that people trying them find the chips to be very slow.
So while most ARM licensees could swap out their Cortex cores with corresponding MIPS cores and get similar prices, performance, and power, and similar tools (gcc is the most common compiler for both platforms), MIPS has missed the boat.
Microchip also make very buggy chips, which doesn't help. As for the cores used, they just feel outdated: the M4K core in the PIC32MX looks like it was designed to compete with the ARM7TDMI, while the M14K feels like a reaction to the Cortex-M3, just without the elegance. You still need assembly glue or nonstandard compiler extensions. Meanwhile the competition has moved on to the Cortex-M4F with floating-point support.
On a more positive note, MIPS recently launched release 6 of the architecture, which looks very interesting. It breaks compatibility in a major way - many less-used instructions have been removed to free opcode space, and the old fan-favourite branch delay slot is also gone.
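For anyone who hasn't run into it: on classic MIPS the instruction immediately after a branch (the delay slot) always executes, whether or not the branch is taken, and the compiler has to fill it with something useful or a nop. An illustrative sketch:

```
        beq   $t0, $zero, done
        addiu $t1, $t1, 1      # delay slot: runs even when the branch is taken
done:
```

Release 6's compact branches have no delay slot, which removes a long-standing wart for both compilers and pipeline designers - at the cost of the binary compatibility break noted above.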
A little awkward, too, that Microchip produced a PIC32 evaluation board that wouldn't connect easily with anything except Microchip extension boards. The captive boards plus captive compilers really limited what anyone would try to do.
PowerPC in quite a few of the Freescale parts is designed to work in harsh environments, both electrical and physical (automotive engine controllers, and hostile-environment applications like process control).
It probably helped that they had only a small team, and hence could design in only so many transistors. A few decades later, a limited transistor count meant low power consumption, which made the ARM a good choice for the battery-powered hand-held thingies that evolved into today's telephones and tablets.
When it came out I was very enthusiastic about the PIC32: a 32-bit chip available in various sleek DIP packages, at good prices! But the compiler situation made me look elsewhere. I definitely want independent and free (GCC and/or LLVM) support, including C++, full optimization, startup code, and register definition header files. This is all readily available for most ARM chips.
Yes, these are their MCUs based on Power - mostly automotive, ECU-targeted, etc.
But they also have the QorIQ series - they could have come up with a better name, but some of its products are already available and others look close enough. GHz-range multicore 32- and 64-bit Power architecture monsters - smallish to really large beasts.
Then they have some not-so-new parts which are still to be matched for smaller systems - e.g. the MPC5200B, yet to be beaten by any competitor part in its niche.
Whoever laid out the Power architecture in the '80s was quite a visionary; it does not leave much, if anything, to ask for after all these years. Except the awful assembly mnemonics - but whether one writes in my VPA (68k and further mnemonics) or in C, that is a non-issue.
Like in high school debate teams, sometimes you have to argue even if you disagree with yourself. I am playing devil's advocate and assigning myself to the x86 team.
Yes, segmented 16-bit x86 is painfully ugly. But 32-bit x86 is fine, especially when we don't have to look at it. We just tell the compiler what to do.
We just can't ignore the fact that it was the first popular mass-marketed chip and will continue to be around.
Most PCs/laptops are still x86. It is nice to be able to develop and test on the same system. Used laptops cost next to nothing, and they make great development and test machines.
Regarding power, another post on S.E.D:
Well, which PC? The dual-core Atom (x86) PC is twice the PC at 2W; so, it's about 1W per PC. The Cavium (ARM) chip is 48 cores at 100W if fully utilized. The core does not really matter much. More than half of the heat comes from the 16 Mbytes of cache per core.
OTOH, it might not make sense to have uniform cache sizes. Perhaps some with 32M, 64M, 128M, etc.
One thing a lot of people forget is that ARM was actually the first commercial RISC processor. Yes, it was inspired by the work of the Berkeley-RISC and Stanford-MIPS teams, but their commercial results, namely SPARC and MIPS, came a little bit later.
Also, most of the other early RISC processors were designed for fast workstations, while Acorn was looking for a successor to the 6502 they used in their earlier computers - one with better latency than a 68000 or x86. Their design thus led to a pretty power-efficient CPU, because they weren't aiming for raw processing power.
A non-technical reason might have been the work of Robin Saxby, the first CEO of ARM. He basically set up office in a jet, flew around the world, and tried to sell ARM cores to anyone who was willing to listen to him. I guess it worked...
I still think one of the reasons an architecture becomes popular is that people have the opportunity to be exposed to it before they have to start making recommendations in the workplace. The reason is the obvious one: people are far more likely to recommend something they have prior positive experience with.
That generally means having an infrastructure to get the architecture into the hands of people like students and hobbyists, at a price those people can afford. Of the alternative architectures listed, only ARM meets that criterion, with MIPS a very poor and distant second.
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
But the PIC32 is MIPS, and GCC has had MIPS support forever - probably longer than ARM! After all, MIPS was the original RISC architecture from Stanford (Hennessy/Patterson). I think it was more the development tools (hardware and software debuggers) that kept it proprietary and alone.
Not quite: MIPS and Berkeley RISC came out in 1983 and the ARM-1 in 1985. Berkeley RISC did not really have a commercial follow-up, and of course MIPS was the subject of the Hennessy/Patterson book that implanted the RISC ideology in the industry.
Microchip also added some optimization passes to the compiler. And at the time there were no easily available linker scripts, startup code, etc. And the Microchip compiler was C only - no C++ or other GCC languages.
Yes, gcc support for MIPS is much older than for ARM - it was one of the earliest targets supported after the original m68k.
I don't know that Microchip added much to gcc - at most just tweaks. What Microchip did was take the gcc source code, add in license protection, and bundle it together with a debugger, library and linker scripts. Because of the GPL, they /very/ reluctantly made the source code for their modified gcc available (including the licence protection, which other users had to then remove from the source before compiling). The GPL did not let them restrict users of gcc - so they added licensing clauses to their library (whose source was kept secret) to say that it may only be used along with Microchip-supplied binaries of gcc. The license protection on their gcc only allowed -O0 (no optimisation) and C only on the free version - you had to pay substantial amounts to be allowed to enable optimisation.
The shenanigans pulled by Microchip were technically legal and within the limitations of the GPL (other companies charge for gcc + extras bundles) - but morally they stole the compiler and sold licenses to their users, charging them significantly for something they could get for free, and spoiling their own market (which is not illegal, but is pretty stupid).
Some users went to the effort of compiling gcc themselves from Microchip's source, but then they had to find a library themselves (typically newlib) and put things together. Or they went to CodeSourcery, who provide working free MIPS gcc toolchains or paid-for versions with support (from the people who wrote much of the compiler, unlike the Microchip folk). But mostly people felt it was not worth the effort, and bought ARM chips instead.
x86 (and x86-64) does the least memory re-ordering of the mainstream architectures; Alpha does the most, followed by ARM and IA-64 (Itanium).
This is largely an artifact of backward compatibility going all the way back to 8086. More aggressive re-ordering would likely break a lot of existing code. New architectures don't have existing code to worry about.
The issue isn't so much that x86 is more predictable per se, but that achieving predictability requires fewer memory-barrier instructions, which means fewer opportunities for the programmer to omit one.