In article , David writes:
|> > It would be technically straightforward to use those to build some
|> > very impressive blades, and make a big impact on that market.  Yes,
|> > it would involve a certain amount of work, but there were some
|> > decent 68K compilers that could be updated etc. etc.
|>
|> I'm not sure how much SMP support exists on the high-end ColdFires, and
|> they might have external address ranges of less than the full 4 GB (that's
|> certainly the case for the smaller ColdFires, with which I am somewhat
|> familiar).  However, the fact that they have fast networking, DDR
|> controllers, a flexible bus (attaching gluelessly to flash, for example),
|> PCI, USB2, etc., and a power consumption of a few watts at most would let
|> you put a lot of MIPS in a small space, for applications that can be split
|> across loosely-coupled processors.
They probably do only support 4 GB addressing, but don't assume that large addressing, cache coherence, SMP and close coupling are all part and parcel of the same thing. They aren't.
As history relates (for those few people who remember it), there have been some very successful Unix systems with limited addressing per process and no cache coherence, but shared memory and close coupling. Yes, you have to place a few restrictions on how such systems are used, but they affect very few programs and even fewer of those at all seriously.
For example, running a single system image on a complete row of blades would be straightforward. mmap/shmem/etc. would have to be restricted a little, and they wouldn't support OpenMP or the use of POSIX threads for parallelism (a rare use of them), but that is about all. There would be little problem in migrating processes or even kernel threads between blades (very good for dynamic replacement).
However, I am not holding my breath for such systems ....
Regards, Nick Maclaren.