Britain "Open University" ses:

The original designs were generally OK. The main problems were that systems documentation was usually terrible and was seldom or never maintained once the system was up and running. I've worked on sites where the analyst team solved *that* problem by shredding their records once a change had been designed, implemented and was up and running live. I've also worked on far too many systems where the only reliable documentation was the program source - and you just had to hope that the variable names and comments were written to help rather than hinder future maintenance.

Of course, it doesn't help when the documentation is on paper and the program sources are on cards or paper tape: the chances of the two being even vaguely in sync approximate to zero. I suggest that the un-noticed revolution occurred when source version control systems, e.g. CVS, came into general use and so let the documentation and program source be stored in the same place and maintained together. The only advance on that is something like the Javadoc utility, which lets module-level documentation be included in the program source, where it has at least a small chance of being kept up to date.
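Something like this, say - a made-up fragment, not from any real system, but it shows the point of letting the module documentation travel with the code in the same file and the same version control history:

    /**
     * Batch interest poster: applies one day's interest to deposit accounts.
     *
     * Because this comment lives in the same source file as the code, it is
     * committed, branched and merged along with it, and the javadoc tool can
     * regenerate the module documentation from source at any time.
     */
    public final class InterestPoster {
        /**
         * Applies one day's interest to the given balance.
         *
         * @param balance    current balance in pence
         * @param annualRate annual rate as a fraction, e.g. 0.05 for 5%
         * @return the new balance in pence, truncated towards zero
         */
        public static long postDailyInterest(long balance, double annualRate) {
            return balance + (long) (balance * annualRate / 365.0);
        }
    }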

However, these aren't any use in the face of a PHB who considers that maintaining the documentation is just a waste of time and money. And banks are full of PHBs, which explains everything about the state of their computer systems: the old code is still running because by now it's almost impossible to understand or maintain, and nobody in management is about to pay for documenting it well enough to re-implement: that would use up the money that rightfully belongs in their bonus packages, and we can't have that, can we?

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

On Mon, 6 Oct 2014 11:20:35 +0000 (UTC), U. R. Invalid declaimed the following:

DEC products (the larger PDP-11s, and the whole VAX/Alpha series) had a long life, however, in the aerospace world... Though the primary language during my tenure (the 80s and 90s) at Lockheed was FORTRAN 77. We did have a (in my opinion) derailing when one project chose VAX Pascal (for a real-time system? Ugh).

My current employer has VMS running under an emulation package on a Windows box, in order to run a VAX-based cross-compiler for Ada (to 68K processor) -- because the development environment has been certified for aircraft. Porting to a different environment (say a GNAT cross-compiler) would require recertifying the generated application code.

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

Wow.

DEC didn't build business computers. DEC wisely decided not to compete head-to-head against IBM.

DEC computers were in nuclear industry, power plants, refineries, medical instruments, research laboratories, LORAN navigation, factory test equipment, numerically controlled machines, telephone switches, etc, etc, etc, ...

Ever hear of Arpanet? DEC computers (and a Honeywell IMP).

I'd say all of those were mission critical. Or does your definition of 'mission critical' just involve running payroll? I'd say keeping the lights on and the phone working was just as 'mission critical'.

None of that ran COBOL. Most of that equipment never saw a raised floor, either. But it ran 24/7/365 and had good reliability.

"Triple Face-Palm"

Rob.

Reply to
Rob Doyle

I'd disagree with that too.

In the '90s I was involved in a major, but ultimately abortive, banking project for Barclays that would have run on clustered DEC VAX kit. It sat on DEC's RDBMS and I seem to remember writing quite a bit of COBOL for it.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Probably there were performance issues? I worked on a project on VAXes in that timeframe and could not believe how slow they were. We had to use 386 PCs (the then-current workstation) as a "coprocessor" on the network for computationally intensive tasks. They were about 10 times as fast as the VAX 6210.

Reply to
Rob

Today, it's not IBM mainframes but SPARC clusters that run Oracle.

--
Everything you read in newspapers is absolutely true, except for the  
rare story of which you happen to have first-hand knowledge. - Erwin Knoll
Reply to
The Natural Philosopher

RDBMS is not (often) computationally intensive though.

It's big IO and BIG storage oriented.

--
Everything you read in newspapers is absolutely true, except for the  
rare story of which you happen to have first-hand knowledge. - Erwin Knoll
Reply to
The Natural Philosopher

Yes, but of course what was called "big IO and BIG storage" in those days is now available as standard on a low-end PC.

At that time I used an IBM 62RW100 5.25" SCSI disk which was normally used in such environments (AS/400 I think).

It offered an impressive 820MB of storage at a blazingly fast 2.5MB/s transfer rate. I could impress all my friends who had only 80MB at 600kB/s, which was the usual disk size and performance back then.

The VAX cluster at work had similar disks.

Today a base-configuration PC has a disk that is 40 times as fast and 1000 times as large.
Reply to
Rob

A VAX 6210 gets about 2.75x the performance of a VAX 11/780. On Dhrystone, a VAX 11/780 scores about 1757 and a 386/33 about 6500-7000.

So a 386/33 has about 4x the performance of a 780 and about 1.5x the performance of a 6210.

We all know that Dhrystone is meaningless now and has been for a long time due to optimising compilers and modern CPU caches etc. But back in the day it was useful to gauge what different machines could maybe achieve.

In 1991 I started using a 386/33 running DOS-based cross-development software targeting Z80/6303/68000 CPUs for embedded applications using C/assembler in my day job. Before then I was doing the same work using an 11/730 running BSD 4.3. The performance improvement was unbelievable, especially as the PC hardware cost was 60% of one year's maintenance contract on the 730.

So I am not surprised you found standalone PCs faster than a low-end VAX. What was missing from a PC was a proper OS and a plethora of software with a track record behind it. We had DOS and a lot of crashes and reboots instead.

Perhaps you could compare apples with other pomaceous fruit in future, because your comparison of apples and oranges will result in the expression yielding an answer in mangos!

Reply to
mm0fmf

And here you are comparing single-user operations. The VAX was vastly better than a PC at handling multiple terminal users. (Except the MicroVAX, which did most of the interrupt handling in software rather than dedicated hardware.)

The biggest limitation of the VAXes of that era was the limited disk size, which in turn limited the pagefile size, which in turn limited the options for very large virtual memory. VMS6 made big improvements there, coupled with the availability of larger discs and RAID arrays.

--
Alan Adams, from Northamptonshire 
alan@adamshome.org.uk 
http://www.nckc.org.uk/
Reply to
Alan Adams

Indeed it was, but I was comparing single-user operations. I was the only user on the 11/730! It was my 730. It broke my heart when it was decommissioned, as we went from albeit slow "proper" computers to jumped-up home computers with a toy OS. It's just that they were so much cheaper at doing what we wanted and, of course, faster.

Now here I am 22 years later and we all have i7 laptops at work which act as RDP clients and X servers onto big multiuser Windows or Linux machines. We've gone from one computer:many users (or my own 730) to many computers:one user each and back to one computer:many users. Although it's really farms of servers (over 10k Xeon cores and 44PB of disk). And it still takes bloody hours to compile anything!

Reply to
mm0fmf

Try having lots of source files and a proper makefile. And linking object modules...;-)

--
Everything you read in newspapers is absolutely true, except for the  
rare story of which you happen to have first-hand knowledge. - Erwin Knoll
Reply to
The Natural Philosopher

makefiles? How 20th century! It's all SCons now. :-)

Reply to
mm0fmf

Doesn't seem to have speeded up your compilation, however ;-)

--
Everything you read in newspapers is absolutely true, except for the  
rare story of which you happen to have first-hand knowledge. - Erwin Knoll
Reply to
The Natural Philosopher

No, it doesn't seem to scale, more's the pity.

Still, it's not my decision what we use. I'll still bitch about it, though!

Reply to
mm0fmf

Indeed we were comparing at the C source-code level: we used the Turbo C compiler on the MS-DOS "operating system" plus a "DOS extender" (a kind of kernel to make the system run in virtual-memory mode), and compared against Digital's C compiler under VMS. We were told that C was not a primary language for Digital and that the compilers for other languages were better. But we did not want to code in Fortran or Pascal.

We had the main application running on the VAX, and it sent the computationally intensive work (RSA encryption...) to the PCs. There were about 6 PCs connected on Ethernet, and some protocol to detect whether they were up and serving work requests. I wrote a "multithreading" layer on top of DOS to handle the work. There were no users on the PCs, and none on the VAX either (apart from a console operator). It was merely a network server for requests in some telecom system. Upgrading the VAX cluster (2x6210) to perform like the 6 PCs would have been too expensive, so we lived with this construct.
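For the curious, the general shape of it, redone as a present-day sketch (purely illustrative - hypothetical host names, port and wire format; the real thing was DOS plus whatever protocol we had running on that Ethernet):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.List;

    /** The "VAX side": probe each PC and hand the RSA job to the first one that answers. */
    public class RsaDispatcher {
        private static final int PORT = 4000;            // made-up service port
        private static final int PROBE_TIMEOUT_MS = 500; // the "are you up?" check

        /** Returns the reply from the first reachable worker, or null if none answers. */
        static String dispatch(List<String> workers, String request) {
            for (String host : workers) {
                try (Socket s = new Socket()) {
                    // detect whether this PC is up and serving: connect with a short timeout
                    s.connect(new InetSocketAddress(host, PORT), PROBE_TIMEOUT_MS);
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                    out.println(request);  // ship the computationally intensive work to the PC
                    return in.readLine();  // block until the result comes back
                } catch (IOException unreachable) {
                    // this PC is down or not serving: try the next one
                }
            }
            return null;                   // nobody home
        }
    }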

Today, a single PC running Linux would be able to handle the full system quite easily. Maybe even a Raspberry Pi, if RSA hardware or the GPU could help (I don't know about that).

Reply to
Rob

This was not Oracle. DEC had its own, fairly good, RDBMS at that time. It was called, IIRC, RDB, and was equipped with a rather interesting, language-independent way of defining and compiling interface modules that did much the same as more modern RDBMSs do with their built-in procedures.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Nah, it never got far enough to uncover those.

It was killed by several factors including project management issues, too little idea of what the project was supposed to achieve, far too big a specification and design team and the general uselessness of the IEF design tool and methodology that we were expected to use for all project design and documentation.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie
