OT: Where do I find...

Yes, Oracle is a complicated issue - they'd rather you ran your database on Sparcs (or at least their own x86 servers). But Oracle will take what it can get to turn a profit - if they thought there was money to be made selling Oracle on Itanium, they'd have kept offering it.

No, I don't come from a DEC background - I've never used Alphas, VMS, or anything DEC made. I just remember reading about VMS customers being unhappy about being forced into the Itanium.

I have heard of IBM mainframes that have been running for over 30 years without a stop - and with no original parts other than the mounting frame itself.

Reply to
David Brown

Well, I _do_ come from a DEC background :-) (PDP-11 then VAX then Alpha) and VMS is still a part of my day job; my embedded work is just a hobby.

When DEC moved from VAX to Alpha, it was regarded as a major leap forward and hence was highly welcome.

However, when Alpha was killed in favour of IA64, the general reaction was "Why?", "What does this offer the VMS community over Alpha?", etc.

There was some talk about how IA64 was the future and how it would scale better than Alpha, but many in the VMS community were unconvinced.

However, IA64 has lost one of the reasons for existing as the effective End Of Life (EOL) for VMS has recently been announced; HP has announced that VMS will not be ported to Poulson and that new sales of existing IA64 variants are currently planned to end at the end of 2015.

A copy of the letter to customers has been posted on a VMS forum here:

formatting link

Standard support for VMS on existing servers will continue for a few more years (end of 2016 for Alpha; end of 2020 for IA64), but 2020 is being seen as the EOL for normal VMS systems.

Of course, HP didn't even have the integrity to announce the EOL _as_ an EOL; they just said that machines will be supported to a listed set of dates.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Itanic was released in 2001; Oracle purchased Sun in 2009.

Oracle bought Sun to get hold of Java; for a long time there was serious doubt about what, if anything, Oracle would do with Sun's hardware business.

And then there's the Mark Hurd thing...

Hence your comments are valid now, but not at key points in Itanic's history.

I'm sure they were unhappy, but Alpha wasn't a panacea and had no future first with Compaq and then with HP.

Reply to
Tom Gardner

It was, but by then VAX was very long in the tooth.

Nothing, of course, but that wasn't the point.

That's a difficult call. Itanic _was_ the future because it was HP's core market.

After HP split off Agilent in 1999, there was a saying that "The real HP is still alive, but it is now called Agilent". Carly Fiorina did everything in her power to see that was the case.

Reply to
Tom Gardner

Of course, this must have been a core-memory site (power loss is no issue; the program resumes after power restoration). A semiconductor-memory site would at least have required diesel generators with a lot of fuel backing the UPS.

A VAX cluster system could run for years. To perform an OS version update, move programs from one CPU to the other CPUs, update that CPU's OS version, and boot it. The cluster serves customers the whole time during this process.
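A minimal sketch of that rolling-upgrade idea, in Python rather than anything VMS-specific; the node names, version strings, and `rolling_upgrade` helper are all invented for illustration:

```python
# Hypothetical model of a rolling cluster upgrade: each node is taken
# out of service, upgraded, and rejoined in turn, so the remaining
# nodes keep serving customers and the cluster as a whole never stops.

def rolling_upgrade(cluster, new_version):
    """Upgrade one node at a time; all but one node serve at any moment."""
    for node in cluster:
        node["serving"] = False        # move this node's work to the others
        node["version"] = new_version  # install the new OS version
        node["serving"] = True         # reboot done, node rejoins the cluster
    return cluster

cluster = [{"name": n, "version": "old", "serving": True}
           for n in ("NODE1", "NODE2", "NODE3")]
upgraded = rolling_upgrade(cluster, "new")
print(all(n["version"] == "new" and n["serving"] for n in upgraded))  # True
```

The point being modelled is that availability is a property of the cluster, not of any single machine.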

For non-cluster systems, I preferred to (cold) boot them once a year to detect failed components and errors in the bootup process. If I forgot to boot one some summer during low usage, I just marked a date the next year for booting in my calendar.

Reply to
upsidedown

The very first Itaniums were 32-bit (mostly server machines) in a 16-bit consumer world.

I don't recall any 32-bit Itaniums. Merced, the first, was 64-bit. In fact, the architecture was originally called "IA-64." Given the way the instruction encoding is defined, I'm having a hard time visualizing how that could be. Unless, perhaps, it was done with a 32-bit external bus and 64-bit internals, similar to the 8088 (16-bit internals on an 8-bit bus) or the 68008 (32-bit registers on an 8-bit bus). But I don't recall any Itanium that did this.

Can you cite something that shows this? I've just done some searches and everything I find says 64-bit or IA-64 or words to that effect.

-Bill

Reply to
Bill Leary

FSVO "existing". VMS doesn't run on Tukwilas either - so only on still-shipping Montecitos or Montvales (and I don't know HP's product line well enough to say which).

Reply to
Robert Wessel

No to both. All Itaniums were 64bit, and they arrived at a time (around Y2K) when both consumers and offices worldwide had long since switched to 32-bit machines.

Reply to
Hans-Bernhard Bröker

Ok, Intel took a left turn with the Itanium and so got behind the power curve on 64 bit instructions.

Nonsense. An on-chip memory controller is just too obvious a feature, but it only makes sense once the mix of support chips and CPU reaches a specific point: when there is simply too much bandwidth to ship across the bus between the two. AMD and Intel made separate decisions about when to make that change.

That's just your opinion, no?

Again, a design decision. I don't know the details of why Intel decided to go this route at that time, but they did make the fastest processors when the P4 was hot. And it was "hot", lol. AMD made a different decision and got back the performance lead for a short while. But this too came to pass.

You can recall ancient history all you want, but all the battles are over and AMD lost the war and Intel won.

LOL! I am using a five-year-old laptop that not only burns my lap, it almost sets paper on fire with its hot-air exhaust. It has permanently turned me off AMD processors. AMD was *two years* behind Intel in coming out with truly low-power CPUs; they didn't have the resources to go down that path.

That is what the CPU market is all about, resources. Resources to design with, resources to develop new process technology and resources to bring the products to market. Those resources cost money. Intel has a lot and AMD has none.

--

Rick
Reply to
rickman

So you are happy to agree that Intel copied AMD on this one?

Intel was firmly behind the idea of putting the memory controller on the chipset. They felt it made the system more flexible - you could use a different chipset for different memory types. They also worried that it would increase the power consumption of the CPU too much. AMD put the memory controller on the CPU and got much better memory speeds (in particular, they had far lower latency, and scaled well for multi-socket systems).

Yes, based on reality.

Intel's early multi-core was crippled by poor bus design - the cores shared the same slow bus to memory. AMD showed them how to do it properly.

For multi-socket systems, AMD was /way/ ahead of Intel at making efficient buses and memory systems.

AMD famously beat Intel to the 1 GHz clock mark, and led the MHz battles for a while. Intel went all-out to get higher clock speeds, introducing absurdly long pipelines to do so. The P4 got them past AMD in clock speed, but the chips were barely (if at all) faster than AMD's at real work. AMD's chips did not run at clocks as fast as the P4's, but did more useful work per clock and had much lower penalties for jumps. So again, AMD had a much better design (even if they were not necessarily faster overall, they were faster per watt, and faster per dollar).
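The clock-versus-IPC tradeoff in that era comes down to simple arithmetic: useful throughput is roughly clock rate times instructions completed per clock. A back-of-envelope sketch, with all figures invented for illustration (not measured P4 or Athlon numbers):

```python
# Rough model: useful work rate = clock frequency x useful instructions
# per clock (IPC). The numbers below are illustrative assumptions only.

def throughput(clock_ghz, ipc):
    """Approximate useful instruction rate, in billions per second."""
    return clock_ghz * ipc

deep_pipeline = throughput(3.0, 0.75)   # high clock, long pipeline, lower IPC
short_pipeline = throughput(2.0, 1.25)  # lower clock, more work per clock
print(deep_pipeline, short_pipeline)    # 2.25 vs 2.5
```

On these assumed numbers the slower-clocked design actually gets more done, which is the shape of the P4-era comparison being made above.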

Again, Intel copied AMD and made better chips.

I don't disagree with the outcome here - it has been a good while since I bought an AMD chip, as Intel's chips have been better for most purposes for many years. From the 386SX up until the Core 2, however, I bought mostly AMD (and the occasional Cyrix/IBM), because they were better chips and gave far more power for the money.

These days, Intel makes the best x86 processors, with the best value for money. My point is merely that they got there by copying many of AMD's ideas.

AMD's "low power" processors were never particularly low power - but their desktop chips (and especially their server chips) gave more performance per watt.

AMD did an incredible job with very little money. Intel chips are in a far better position now as a result of competition with AMD, and being able to copy their ideas. (AMD also copied some of these - in particular, from the DEC Alpha.)

Reply to
David Brown

VMS V8.4 runs on Tukwila.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Actually yes and no. The architecture was 64-bit but the implementation was rather 32-bit. Quite a bit like the early 68000, Itanium originally had 64-bit registers, 32-bit data paths and ALU. Or did it? Just try to get good information.

?-)

Reply to
josephkk

Sorry, but that sounds like an Intel fanboy to me :-). AMD have a long and illustrious history of designing and producing innovative products, right back to the days of the old 4-bit-slice 2900 family, which so many of the early minicomputers were based on. Contrast Intel, who were very late to the 64-bit x86 game and were in effect forced to follow AMD due to the failure of Itanic and the success of AMD's innovative 64-bit update of x86. Intel really haven't done anything truly innovative and original since the 8086, but of course, YMMV. It's all about market control, with the direction and release of innovation driven strictly by the bottom line and nothing to do with the advancement of computing.

Itanic was designed to be so complex that no one would be able to copy it, but performance was and still is underwhelming, despite Intel / HP throwing billions of $ at it in over a decade of development...

Chris

Reply to
chris

The 8086 wasn't innovative or original. It was a poorly done upgrade of an existing design that had already been left in the dust by Zilog and others. Compared to its peers like the Motorola 68K family, the 8086 was an obsolete kludge at birth. IBM chose it anyway, and we've suffered for 30 years because of it...
--
Grant Edwards               grant.b.edwards        Yow! Maybe I should have 
                                  at               asked for my Neutron Bomb 
                              gmail.com            in PAISLEY --
Reply to
Grant Edwards


No, it wasn't /designed/ to be complex. It was complex as a direct consequence of the architectural decisions. It failed because the compilers couldn't make good use of the architecture. The i860 had similar problems.

Apart from that, for 30 years I've liked the system awareness visible in AMD's ICs, starting with them producing octal TTL buffers when everybody else was producing hex buffers (for 8-bit buses!).

Intel has always relied on its mastery of semiconductor processes.

Reply to
Tom Gardner

Itanic sounds like a reincarnation of the iAPX432: too complex for the available technology of its day.

Due to the technical/commercial failure of the i432, Intel had to produce stopgap solutions like the 8086/80286/80386 to stay in business.

Reply to
upsidedown

The Itanic failure was because the compilers couldn't make use of the powerful (and power hungry) hardware in the Itanic.

The i432 failure was because the hardware was underpowered.

Reply to
Tom Gardner

Even optimal (peak) rates on many of the IPFs weren't really all that impressive. Yes, they could all issue six instructions per cycle (until Poulson), but clock speeds on most of the implementations were quite low, and the need for a considerable number of address-computation instructions further reduced the effective number of slots issued per unit time. What was left, even at peak rates with all slots full of useful instructions, was OK, but nothing that would really offer a compelling case to invest a lot in the platform.
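That argument is simple arithmetic: peak issue width only matters after discounting for clock rate and for slots burned on address computation or padding. A sketch with invented numbers, purely to show the shape of the comparison:

```python
# Effective throughput = issue width x clock x fraction of slots doing
# useful work. All figures are illustrative assumptions, not benchmarks.

def effective_rate(issue_width, clock_ghz, useful_fraction):
    """Useful instructions issued per second, in billions."""
    return issue_width * clock_ghz * useful_fraction

# A 6-wide in-order design at a modest clock with many wasted slots...
wide_slow = effective_rate(6, 1.0, 0.5)
# ...versus a narrower design at a higher clock keeping its slots busy.
narrow_fast = effective_rate(3, 2.0, 0.75)
print(wide_slow, narrow_fast)  # 3.0 vs 4.5
```

With these assumed figures the six-wide machine loses, which is why the paper issue width alone never made the compelling case described above.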

Part of that was the perpetually late deliveries, and the trailing processes used for IPF.

And then the compilers failing to do a good job filling those slots was just the icing on the cake.
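The slot-filling problem can be shown with a toy scheduler: a fixed-width bundle must be padded with NOPs whenever the compiler cannot find enough independent instructions to pack. This is a deliberately simplified model (two-operand instructions, one dependency rule), not real IA-64 bundling:

```python
# Toy illustration of static bundle packing: when the compiler cannot
# find independent instructions, it pads the bundle with NOPs and the
# machine's issue width is wasted. Instructions are (dest, src) pairs,
# invented for illustration.

def schedule(instrs, slots_per_bundle=3):
    """Greedily pack instructions into fixed-width bundles, starting a
    new bundle whenever an instruction reads a result produced in the
    current one, and padding unused slots with 'nop'."""
    bundles, i = [], 0
    while i < len(instrs):
        bundle, written = [], set()
        while i < len(instrs) and len(bundle) < slots_per_bundle:
            dest, src = instrs[i]
            if src in written:   # depends on a result from this bundle
                break            # cannot issue in parallel with it
            bundle.append(instrs[i])
            written.add(dest)
            i += 1
        bundle += [("nop", None)] * (slots_per_bundle - len(bundle))
        bundles.append(bundle)
    return bundles

# A dependent chain r0 -> r1 -> r2 -> r3 packs terribly:
chain = [("r1", "r0"), ("r2", "r1"), ("r3", "r2")]
print(len(schedule(chain)))  # 3 bundles, 6 of the 9 slots are NOPs
```

Dependent code like this chain is exactly where static scheduling leaves slots empty; a dynamic out-of-order machine hides the same dependencies at run time.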

I do applaud Intel for actually trying something different, though. While it may not have worked out, ISAs have been boring as all heck since the start of the RISC era. IPF was at least a leap in a different direction.

Reply to
Robert Wessel

Partly true: the PA-RISC architecture was running out of steam, and the only way for HP to offer a migration path was to develop Itanic. If it had hit the marketplace as planned it would have had decent performance, but it slipped and slipped for multiple reasons.

Clock speed per se is always a poor measure; it depends on what you can do in a clock (e.g. compare PA-RISC with x86 processors of the same era).

That's valid.

Even if everything else had gone right, the compiler issues would have clobbered the performance. Some were saying that the compiler problems were known to be intractable decades(!) earlier.

Itanic was an HP effort, with Intel fabbing the devices. HP started in 1989 and Intel only started working on it in 1994. Later on the HP team did transfer to Intel.

Reply to
Tom Gardner

I used to be all about AMD. The only PC I built used an AMD. But they are no longer the up and coming challenger. They are the "agony of defeat" clip at the end of the Intel intro.

Yes, AMD *was* an innovative company some 30 years ago. But even then, the 2900 bit-slice family's edge was short-lived. I once used a workstation that was a bit-slice implementation of the Motorola 68000 because it could run about twice as fast. That lasted what, a year? Then the 68010 and 68020 came out and the bit-slice workstation was shoved into the corner...

Yup, Intel took a left turn with that one. What's your point?

AMD bought ATI and almost went under from the debt load. I don't think Intel ever had to explain billion-dollar losses. No, in fact, they had to pay AMD 1.5 billion or something like that when they lost a lawsuit. Heck, Intel's profit for one quarter is almost as much as AMD's market cap!

Where is the value in technical innovation if you can't run the company?

--

Rick
Reply to
rickman
