OT: Where do I find...

I find it a bit funny that everyone loves to hate Intel. Motorola had their shot. All of us had our chances to support something other than IBM. Intel chips weren't used in the original Macs; any number of commercial workstations used the 68000, and a number of consumer-oriented PCs used the 68000 too. So why did Intel dominate in the end? Because that is what *we* chose.

--

Rick
Reply to
rickman

And a decade ago in the x86 space: the HyperTransport links and memory architecture AMD introduced in Sledgehammer in 2003 were also innovative and streets ahead of Intel.

> edge. I once used a workstation that was a bit slice

That lasted what, a year? Then the 68010 and 68020 came out and the bit-slice workstation was shoved into the corner...

The point about a factor of two being insufficient is only too true - and many people/companies have fallen into the trap of believing it was sufficient. A factor of 10 begins to be interesting, but only if it can be brought into production quickly enough.

Reply to
Tom Gardner

> shot. All of us had our chances to support something other than IBM. Intel chips weren't used in the original Macs, there
> were any number of commercial workstations that used the 68000, there were a
> number of consumer oriented PCs that used the 68000. So why did Intel dominate in the end? Because that is what *we* chose.

We chose IBM PCs because of the standardisation they introduced. The internals were completely irrelevant. (And IBM had chosen Intel for non-technical reasons, according to folklore.)

Intel's innovative edge has been to invent the best semiconductor processes. That's been sufficient to dominate other forms of innovation. No point in hating them for that!

Reply to
Tom Gardner

The current Bulldozer-descended CPUs are also innovative; they just didn't work as well as intended. Similar things happened with Intel innovations like the Itanic. Innovation is risky, which is why companies usually don't engage in as much of it as they could. It's not because they lack smart people with good ideas. It's that you don't know ahead of time how well a complex good idea is going to work, and it can be very expensive to find out.

Intel is a conservative company that improves its designs by small increments per iteration. Before Bulldozer actually reached outside testers in 2011, people really did think it might beat the Intel stuff of that era. From what I hear, the upcoming Steamroller is what Bulldozer should have been, though it's probably too late by now.

Reply to
Paul Rubin

It was HP's innovation, which they "sold" to Intel.

The famous tick-tock process, which has stood Intel in good stead.

I'm out of touch with that now, but I have no reason to doubt what you say.

Reply to
Tom Gardner

I'm curious about why you think AMD could possibly reverse their fortune in the x86 arena. I haven't kept up with the architecture advances for the last few years, but the last time I checked AMD was a full process generation behind Intel which is some 18 months. Has that changed? Do you think there is anything a designer can do to mitigate that large a difference in process technology?

Someone had mentioned that AMD was getting into the ARM fray, which might be a smart thing to do, even if it is extremely crowded. It is a *huge* market, so even a small share of a growing market could be better for them than their current share of a shrinking market.
--

Rick
Reply to
rickman

I don't hate Intel, but I am interested in the ongoing progress of CPU architecture and computing in general, and they have contributed little IMHO other than, as you say, process technology, most of which is patented anyway and so contributes nil to the pool. They became dominant in the marketplace with their x86 architecture by being chosen by IBM for the original PC, nothing to do with technical excellence. That provided the revenue for process research and improvement, which does take serious cash, from what I hear.

Yes, the 68K did have its chance and did quite well with the Mac and early Unix workstations, but never had the volume to compete with the x86 PC. Still, the desktop market has been flat or declining for ages, while the server sector is looking in other directions such as multicore / low-power ARM, so we may be starting to see the changing of the guard at last.

All empires rise and fall :-)...

Chris

Reply to
chris

Quite unfair. They've contributed very significantly on microarchitectural issues, even if the ISA itself has shown mainly slow evolution. And much of the microarchitectural work they've done has demonstrated that the ISA isn't really all that important.

Reply to
Robert Wessel

< Intel copied AMD's 64-bit register model, but they didn't copy the
< instruction set: Intel64 and AMD64 are not compatible at the system

Does anyone know what happened to DEC (Digital Equipment Corp.) in terms of business model? I was under the impression that they had to lay off their service people - I thought Hewlett bought the DEC 64-bit CPU thingy.

And by register model do you mean the ``super-scalar, out-of-order execution unit that clogged the pipelines and made the C2 useless to (very poor) assembler programmers (like myself) instruction cache'' thing? [I'm just being an arse.]

< programming level and are not perfectly compatible at the user level
< [though redundancy in the instruction set allows compilers to work
< around the differences]. Inside they have different
< micro-architectures with quite different performance characteristics.

If I remember correctly, AMD uses/used the older 8086 string instructions while Intel introduced the conditional move (CMOVcc etc.), the 128-bit MMX register (that cannot be used with the FPU)...?

< OT: over in comp.arch Ivan Godard has been introducing a completely

this post (my post) is probably in the wrong newsgroup again....

Reply to
Steve Gonedes

I chose to use the AMD for ``my'' GNU/Linux system because of the syscall op^1. My memory is a bit foggy about (just about anything)... the opcodes and floating point - f*ck, that pisses me off. You can get past the Linux kernel gatekeeper by jumping between some ops - that's all I can remember.

When using GCC I used the -march=athlon-4 -mtune=athlon-4. Unfortunately this worked very well. It's been a while since I have even thought about the amd rules for gcc. scary stuff is gcc.

footnotes

---------

1) i.e., pass parameters via registers as opposed to stack allocation ^2.

2) the kernel code for the linux system is incredible. But you still need gcc - it's like CMU python (which I know nothing about).
Reply to
Steve Gonedes


By "register model" I mean the set of named registers that programmers deal with.

Intel never actually *copied* AMD's architecture: Intel's first x86_64 already featured some tweaks vs AMD's chip. Since then the internals have diverged significantly.


No. The MMX registers alias the low 64 bits of the 80-bit x87 FPU registers, so MMX can't be used simultaneously with the FPU.

SSE uses separate 128-bit XMM (and now 256-bit YMM) registers which have their own [set of] ALUs and FPUs.

AMD initially tried to compete with Intel on wide SIMD, but all the current AMD chips implement some level of Intel's SSE extensions, and they dropped future support for 3DNow! a few years ago.

I don't know yet whether AMD has implemented Intel's 256-bit SIMD extensions [YMM registers and the AVX instruction set].

I'm reading it in comp.arch.embedded.

George

Reply to
George Neuner

It seems they did (with minis) what IBM did before (with mainframes): believe that the product lines they had were all the market would ever need/want.

DEC was bought by Compaq, which in turn was bought by HP.

--
Roberto Waltman 

[ Please reply to the group, 
  return address is invalid ]
Reply to
Roberto Waltman
