OT: Where do I find...

Ok, so AMD was hot for a couple of years in the server market. I think you miss the point. That is not what drives the profits of a CPU maker. Profit is in the numbers, the mass numbers. Otherwise, why would AMD even bother with the home/business computing market? The server market supplements their profits, but without the mass market they can't run the fabs.

No, I don't miss the point. AMD is the tail and Intel is the dog, always has been and always will be. The "old fashioned" 32 bit chips with which Intel dominated the market did just that: dominated the market.

*That* is the point. Linux may have had 64 bit support, but that could only run on 20% of the machines, so that means it was likely installed on what, maybe 1%,... less?

Being the "driving force" means AMD designed the obvious extension to the 32 bit instructions which most likely anyone would have done, but otherwise means nothing. In fact, I'm pretty sure you can't patent typical instructions and you can't copyright hardware, so the only handle AMD even had on Intel were the mnemonics. I seem to recall that is why the Z80 instruction set is slightly different from the 8080 instruction set, but the opcodes are the same... with extensions.

The only instruction patent I know of is some feature of an ARM instruction that *requires* a particular piece of hardware to implement it. Otherwise anyone can duplicate the entire ARM instruction set other than that one interrupt feature. Or so I was told when someone had designed an ARM7 for FPGA. He did such a good job of it that ARM contacted him. He never said exactly what the discussion contained, but he pulled his design and went to work for ARM.

What's silly? I was trying to differentiate my statement from referring to the Itanium 64 bit instruction set. I was making no statement about the Intel x86-64 instructions vs. the AMD instructions. The Itanium instructions were the "old" 64 bit instructions, the x86 compatible instructions were the "new" 64 bit instructions.

--

Rick
Reply to
rickman

Many years ago there was a Scientific American article which analyzed Intel profits. They showed a *very* clear cycle and analyzed the reason behind it. I don't recall the details, but it had a lot to do with the relationships between making capital investments and reaping the rewards. I seem to recall that the profits increased with each cycle.

Yes, things will change, and Intel has a lot more capability to change with the market. AMD has been running in the red for 60% or more of its life, at least over the last 15 years. I was never in a position to make any money off of it, but I advised my friends to buy their stock on three separate occasions when they were in a major slump, losing huge amounts each quarter but with a huge prospect in the wings. The release of the Athlon was one of those times. Each time they swung into the black in 6 to 12 months and stayed there for a couple of years while they had a technical lead on Intel, pushing market share up 1 to 2%.

But they could never hold the fort against the dreadnought and would slip back into the red for a few years until they could find another way to gain a foothold.

Those days are gone. First AMD started losing ground in the fab race. They just couldn't afford to make the ever larger investments in capital that Intel could, and started falling behind in process technology. They would reach the next process node 6 months after Intel and have to deal with the lower ASPs in the meantime. Then Intel's lead increased to a year. Seeing no way out of this progression, AMD decided to dump fabs and got into all sorts of hot water.

I see they are currently selling for $4 a share with $1 per share loss. That says in four years they are worth nothing...

I wouldn't advise anyone to bet on AMD still being AMD in two years.

--

Rick
Reply to
rickman

64 bit applications have existed at least since the early 1990s, e.g. DEC Alpha running OpenVMS or OSF/1.

In those days, very few organizations could afford multiple gigabytes of physical memory, so the only real application was handling big databases. In a programming language, you simply declare a multi-terabyte array, and an assignment like C = Arr[i] causes a page fault and a page load into physical memory - the normal page-fault path, loading from one of the disks in a big disk farm if the page is not already in physical memory.

Conceptually, the main memory is just a cache (some would call it L3, others L4) and the actual data is stored somewhere on the rotating disks.

Reply to
upsidedown

That is a very interesting assertion. They are still making them. YCLIU. It never made it in the (m)ass market, but lots of long-lived servers use it and it has its own legacy SW market.

Reply to
josephkk


This is starting to sound a bit like moving the goal posts to me. The very first Itaniums were 32 bit (mostly server machines) in a 16 bit consumer world. The biggest difference is that x86 was never designed to go to 32 bit or 64 bit, but the Itanium was, from the start. Also the Itanium was the fashionable design at the time (VLIW internal architecture).

?-)

Reply to
josephkk

There were never any 32-bit IPF machines, or even a defined 32-bit subset of the ISA (unless you want to count the x86 emulation).

Reply to
Robert Wessel

It's a flop. It was a flop from the start, and it remained so ever since. You can ask Oracle.

As you say, it gets used in long-lived servers. And arguably it is quite good for such systems, along with chips like IBM's Power family.

But the Itanium was designed to take the "normal" server and workstation market as well as the big-iron market. It almost completely failed in these areas - it was only with HP's intensive pushing (in particular, pushing it onto VMS customers who were perfectly happy with Alphas) that it got anywhere at all. And since these are long-lived servers, of course customers still have them, and still need upgrade and maintenance plans.

Reply to
David Brown

very first Itaniums were 32 bit (mostly server machines) in a 16 bit consumer world.

I don't recall any 32-bit Itaniums. Merced, the first, was 64-bit. In fact, it was originally called the "IA-64." Given the way the instruction word (64-bit) is defined, I'm having a hard time visualizing how that could be. Unless, perhaps, a 32-bit external bus with 64-bit internals was done. Similar to the 8088's 8/16 or 68008 8/16/32 internal/external data widths. But I don't recall there being any processors that did this.

Can you cite something that shows this? I've just done some searches and everything I find says 64-bit or IA-64 or words to that effect.

-Bill

Reply to
Bill Leary

No to both. All Itaniums were 64-bit, and they arrived at a time (around Y2K) when both consumers and offices worldwide had long since switched to 32-bit machines.

Reply to
Hans-Bernhard Bröker

Google for "micro atx". You might be able to find some in a blade format and slide four into one case.

--
Paul Hovnanian     mailto:Paul@Hovnanian.com 
------------------------------------------------------------------ 
If the first attempt at making a drawing board had been a failure, 
what would they go back to?
Reply to
Paul Hovnanian P.E.

Aha! Thanks, Paul! ...Jim Thompson

--
| James E.Thompson                                 |    mens     | 
| Analog Innovations                               |     et      | 
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    | 
| San Tan Valley, AZ 85142   Skype: Contacts Only  |             | 
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  | 
| E-mail Icon at http://www.analog-innovations.com |    1962     | 
              
I love to cook with wine.     Sometimes I even put it in the food.
Reply to
Jim Thompson

Actually yes and no. The architecture was 64-bit but the implementation was rather 32-bit. Quite a bit like the early 68000, Itanium originally had 64-bit registers, 32-bit data paths and ALU. Or did it? Just try to get good information.

?-)

Reply to
josephkk

I'm happy with the Zotac AD06 I just bought to replace my desktop. It's not for my primary system (that's a Macbook), but it is good value and quite suitable for my purpose. 8GB, small SSD and external 2-bay USB3 drive (IcyDock). Nice, please consider.

Also, Synergy rocks. Dump the KVM and use a multi-input screen.

Clifford Heath.

Reply to
Clifford Heath

Does that handle mouse and keyboard as well? ...Jim Thompson

Reply to
Jim Thompson

Assuming you're talking about Synergy, yes. It's a cross-platform open-source client-server system designed for sharing the same mouse and keyboard between multiple systems. Run the server on the device which has the physical connection, and configure it so it knows which edge of that computer's screen you can run off onto another computer's screen - as many screens high and wide as you wish. Works across Windows and Linux, though Mac support is a little buggy (neglected) unless something's changed.

You need more than one screen, or to be willing to switch inputs on your monitor. I used to use three screens - the laptop screen to the right, main screen (dual input) in the centre, primary screen of the desktop computer on the left. The desktop's second screen is connected to the monitor's second input, so I always have one screen visible on each computer, but have to hit a button to select the other computer into dual-screen mode (Synergy thinks there are four screens).
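For reference, that edge layout is expressed in Synergy's configuration file. A sketch of a minimal two-machine synergy.conf ("desktop" and "laptop" are illustrative hostnames, not from the post above):

```
# Sketch of a two-machine synergy.conf; hostnames are illustrative.
section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        right = laptop     # run off desktop's right edge onto laptop
    laptop:
        left = desktop     # and back again
end
```

Run `synergys` with this file on the machine that owns the physical keyboard and mouse, and `synergyc` on the other.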

Great desktop space-saver anyhow. I must find the time to give the OSX version some love.

Clifford Heath

Reply to
Clifford Heath

< Intel copied AMD's 64-bit register model, but they didn't copy the
< instruction set: Intel64 and AMD64 are not compatible at the system

Does anyone know what happened to DEC (Digital Equipment Corp.) in terms of business model? I was under the impression that they had to lay off their service people - I thought Hewlett bought the DEC 64 bit CPU thingy.

And by register model do you mean the ``super-scalar, out-of-order execution unit that clogged the pipelines and made the C2 useless to (very poor) assembler programmers (like myself) instruction cache'' thing? [I'm just being an arse.]

< programming level and are not perfectly compatible at the user level
< [though redundancy in the instruction set allows compilers to work
< around the differences]. Inside they have different
< micro-architectures with quite different performance characteristics.

If I remember correctly, AMD uses/used the older 8086 string instructions while Intel introduced the conditional move (CMOVcc etc.), 128-bit MMX register (that cannot be used with the FPU)...?

< OT: over in comp.arch Ivan Godard has been introducing a completely

this post (my post) is probably in the wrong newsgroup again....

Reply to
Steve Gonedes

I chose to use the AMD for ``my'' GNU/Linux system because of the syscall op^1. My memory is a bit foggy about (just about anything)... the op codes and floating - f*ck, that pisses me off. You can get past the Linux kernel gatekeeper by jumping between some ops - that's all I can remember.

When using GCC I used the -march=athlon-4 -mtune=athlon-4. Unfortunately this worked very well. It's been a while since I have even thought about the amd rules for gcc. scary stuff is gcc.

footnotes

---------

1) i.e., pass parameters via registers as opposed to stack allocation.
2) The kernel code for the Linux system is incredible. But you still need gcc - it's like CMU Python (which I know nothing about).
Reply to
Steve Gonedes


By "register model" I mean the set of named registers that programmers deal with.

Intel never actually *copied* AMD's architecture: Intel's first x86_64 already featured some tweaks vs AMD's chip. Since then the internals have diverged significantly.


No. MMX shares the set of 80-bit FPU registers, so MMX can't be used simultaneously with the FPU.

SSE uses separate 128-bit XMM (and now 256-bit YMM) registers which have their own [set of] ALUs and FPUs.

AMD initially tried to compete with Intel on wide SIMD, but all the current AMD chips implement some level of Intel's SSE extensions, and they dropped future support for 3DNow! a few years ago.

I don't know yet whether AMD has implemented Intel's 256-bit SIMD extensions [YMM registers and the AVX instruction set].

I'm reading it in comp.arch.embedded.

George

Reply to
George Neuner

It seems they did (with minis) what IBM did before (with mainframes): believe that the product lines they had were all the market would ever need/want.

DEC was bought by Compaq, which in turn was bought by HP.

--
Roberto Waltman 

[ Please reply to the group, 
  return address is invalid ]
Reply to
Roberto Waltman
