Is it a lost cause?

On Sunday July 3 2016 13:57, in comp.sys.raspberry-pi, "Martin Gregorie" wrote: [snip]

COBOL (an acronym for COmmon Business-Oriented Language) was designed at a time when computers and computing had just started to expand out of the military (where computing was primarily used to break codes and compute artillery firing solutions) into business.

At the time, while the art and science of programming was still developing, the intended users of the language were Accountants and other Financial workers. The COBOL language is the language of an Accountant, describing how (s)he manages ledgers, accounts, payrolls, taxes and the like.

A typical use-case would be the accountant describing how to compute how much to charge a customer for a purchase of a number of items: "Multiply the number of items by the per-item cost giving the gross cost. Multiply the gross cost by the taxrate giving the sales tax. Add the sales tax to the gross cost giving the net cost."

And that, in COBOL, would be:

    MULTIPLY NUMBER-OF-ITEMS BY PER-ITEM-COST GIVING GROSS-COST.
    MULTIPLY GROSS-COST BY TAX-RATE GIVING SALES-TAX.
    ADD GROSS-COST, SALES-TAX GIVING NET-COST.

You see, COBOL wasn't originally designed for scientific calculations or process control or even as a computer-science tool; it was designed for (North American) business, and (North American) business of 1960 spoke, not in formulae or parenthesis, but in English.

[snip]
--
Lew Pitcher 
"In Skills, We Trust" 
PGP public key available upon request
Reply to
Lew Pitcher

Hmmmm...... I worked as a programmer/analyst/architect in a financial institution for about 30 years, and I recall working on

- a real-time system to read MICR data from cheques, determine a physical action to execute based on that data and data retrieved from multiple IMS and DB2 databases, and then execute the physical action on a MICR sorter, all in the milliseconds that it took a cheque to travel the (minimum) 2 metres from read-head to first stacker module,

- an IMS & DB2 based Online Banking system that guaranteed sub-second response time for all financial and inquiry transactions to well over 10,000 active VTAM-supported terminals spread from the Atlantic to the Pacific in Canada, down the eastern seaboard in the US, and in many foreign countries (such as Japan, England, Spain, etc.)

- a VTAM/TCPIP/MQSeries-based data transport (that, for reasons beyond my paygrade, was written in COBOL) that moved financial transactions between internal mainframe systems, external mainframe systems, customer-based mainframe systems, and our internal web services, again with sub-second response time.

Don't even try to tell me that "commercial data processing" is for "second-rate" programmers, because to do real-time processing on millions of dollars per transaction requires a finesse that you probably will never get from your run-of-the-mill "real-time systems".

--
Lew Pitcher 
"In Skills, We Trust" 
PGP public key available upon request
Reply to
Lew Pitcher

Despite what you say, all the above looks fairly run-of-the-mill COBOL and data processing. What real-time programming did you do in the way of device drivers, for example?

Reply to
gareth G4SDW GQRP #3339

Nope, all US. I guess asininity is universal.

--
Pete
Reply to
Peter Flass

Lew Pitcher writes:

IMS was originally a DBMS done for the Apollo program

formatting link

The IMS group was later moved to STL (since renamed Silicon Valley Lab). The original SQL/relational DBMS was System/R, done on VM370 on a 370/145 at San Jose Research ... some past posts

formatting link

The follow-on to the IMS DBMS was supposed to be EAGLE ... and while the corporation was preoccupied with EAGLE, it was possible to do a tech transfer to Endicott for release as SQL/DS. Later, when EAGLE imploded, they asked how long it would take to port relational to MVS ... which was eventually released as DB2 (originally for analytical & decision support *ONLY*).

Along the way, Jim Gray left IBM Research for Tandem ... and was palming stuff off on me ... consulting with the IMS group ... and helping support Bank of America, an early System/R customer.

I've mentioned this reference to a meeting in Ellison's conference room in Jan 1992 on HA/CMP cluster scaleup. The Oracle senior VP in the meeting would periodically tell the story that he did the SQL/DS tech transfer from Endicott to STL for DB2

formatting link

Along the way, the (mainframe) DB2 group was complaining that if I was allowed to continue, it would be *at least* 5 yrs ahead of anything they were doing. Within a few weeks of the Ellison meeting, cluster scaleup was transferred, announced as the IBM supercomputer, and we were told that we couldn't work on anything with more than four processors.

old cluster scaleup email around Jan1992 period

formatting link

ha/cmp posts

formatting link

a little x-over with this post in the thread

formatting link
Is it a lost cause?

my wife was in the JES group and worked on loosely-coupled JES2 & JES3 (co-author of JESUS, which specified how to combine all the necessary features of JES2 & JES3 into a single product ... which never happened). She was then conned into going to POK to be the (mainframe) loosely-coupled architect (mainframe for cluster). While there she did peer-coupled shared data architecture ... some past posts

formatting link

she didn't remain long because 1) little uptake (except for IMS hotstandby until SYSPLEX and Parallel SYSPLEX) and 2) communication group was constantly trying to force her into using SNA/VTAM for loosely-coupled operation.

In the mid-80s, IMS hot-standby got really interested in the SNA/NCP/VTAM emulator work. The problem was that while IMS hot-standby could fail over in minutes, VTAM session restart time (on the standby processor) grew non-linearly; 20+k terminals could take 90 mins elapsed time. It was straightforward for the SNA/NCP/VTAM emulation to maintain (active) shadow sessions on the IMS standby processor, so take-over for everything was a couple of minutes (instead of an hour or more).

Note that in the late 70s/early 80s, there was a lot of work on subsecond interactive response. YKT Research was touting that they had the best online systems in the company, with avg. quarter-second system response. The issue was that local channel-attached 3272 controllers with 3277 terminals had .086 sec hardware response. To get quarter-second human response, you needed .164 sec system response. I had a number of online systems with 90th-percentile .11 sec trivial system response ... for .196 sec human response ... old post
formatting link

The problem was that the replacement for the 3272/3277 was the 3274/3278 ... where they had moved a lot of electronics out of the terminal head back into the (shared) controller ... and hardware response tended to be between .283 and .530 seconds (for the best-case, fastest, direct channel-attached 3274 controller) ... making it impossible to achieve quarter-second response as seen by the human user. We made complaints about the worse human factors of the 3274/3278 ... and were eventually told that the 3274/3278 wasn't designed for interactive computing ... but for "data entry".

Past posts mentioning Thadhani publishing studies showing improved human productivity with .25 sec (or better) response (as seen by the human):

formatting link
3270 Terminal
formatting link
Is there an SPF setting to turn CAPS ON like keyboard key?
formatting link
Who originated the phrase "user-friendly"?
formatting link
From Who originated the phrase "user-friendly"?
formatting link
Who originated the phrase "user-friendly"?
formatting link
The PC industry is heading for collapse
formatting link
Writing article on telework/telecommuting
formatting link
cp67, vm370, etc
formatting link
Why File transfer through TSO IND$FILE is slower than TCP/IP FTP ?
formatting link
PDP-10 and Vax, was System/360--50 years--the future?
formatting link
3270 response & channel throughput
formatting link
Dualcase vs monocase. Was: Article for the boss
formatting link
System Response
formatting link
How Much Bandwidth do we have?
formatting link
You count as an old-timer if (was Re: Origin of the phrase "XYZZY")
formatting link
You count as an old-timer if (was Re: Origin of the phrase "XYZZY")

--
virtualization experience starting Jan1968, online at home since Mar1970
Reply to
Anne & Lynn Wheeler

Yeah, that's the waterfall model, which was very popular in the late 1950s and 1960s: start with requirements, move to design, then implementation, verification, and end up with maintenance.

By the time Fred Brooks wrote "The Mythical Man Month" in 1975, everyone who was actually doing programming knew it didn't work, but we've been arguing ever since about what programming models actually do work.

Reply to
John Levine

Please, that's COMPUTED GOTO, the predecessor of the C switch statement. Anything you could do with assigned goto, you could do with computed goto. Assign puts the address of the target statement into the variable, computed puts an index into the variable, and it indexes a list of statement numbers in the goto statement.

Reply to
John Levine

That's because they have separately compiled procedures (or subroutines or functions) and don't need it. When you link the program, you tell the linker which procedures go in which overlay segment. All of the IBM mainframe languages used the same linker, so you could overlay programs written in any of them, so long as you could tell the compiler to generate separate object modules.

I never used IBM mainframe overlays, but in the 1980s I used a MS-DOS linker that worked almost exactly the same way to fold about 800K of C code into the 640K DOS address space.

Reply to
John Levine

A polite view of EXECUTE is that it implements a one instruction subroutine :-)

I believe self modifying code is justified when the CPU lacks a feature required to accomplish a necessary task.

Code obfuscation is a clever hack but not for production code.

The IBM 360's EXECUTE modified the 2nd byte of the target instruction. For "SS" (storage-to-storage) format instructions, the 2nd byte is the length as a literal. By EXECUTEing an "SS" instruction, the length could instead come from any of the general purpose registers.

Classic CPUs followed this simple cycle:

- fetch an instruction from memory into the instruction register

- increment the PC (program counter) [the instruction could depend on the PC already pointing to the next instruction to fetch, important for PC-relative addressing]

- execute the instruction in the instruction register

Some systems hijacked the memory access to jam in an alternate instruction. For example, Intel 8080 peripherals reply to an interrupt acknowledge with a 1 byte RESTART instruction (RST 0 - RST 7) to force a jump-to-subroutine to the interrupt handler because there was no interrupt vector hardware.

Some systems allowed loading the instruction register to jam in an instruction. It was then executed immediately, without any memory access or program counter increment. A few examples of that:

1) The custom CPU of the AN/ALR-66 radar warning receiver allowed loading the ALU result directly into the instruction register. Since it had only 1 index register, I kinda almost justified using that for indexing other tables without juggling the index register for every access.

It worked like this: The instruction format for memory access (load, store) had the address at the least significant 12 bits. (it was a 24 bit machine). So adding an index/offset/subscript to the instruction (from one of the general registers) resulted in modifying just the address, leaving the rest of the instruction unchanged. Put the result into the instruction register and VOILA! Instant alternate index register!

I tested that only while I was learning the system and NEVER PUT ANY SUCH HORROR INTO PRODUCTION! The reason? It was impossible to debug. There was no way to examine the result of the ALU before it was executed because it was internal to the CPU beyond the reach of the display panel or any logic probes.

2) Vintage computers such as the LGP-21 allowed direct access to the instruction register via programming or the front panel, thus "the story of Mel".

3) Microprocessor systems too cheap to have a front panel with circuitry to read/write RAM forced the poor user to jam instructions into the CPU (instruction fetch from switches instead of from RAM) just to read/write RAM, set the PC, etc.

But that's following the fine tradition of cost reduction used by early machines such as the Bendix G-15, LGP-30, LGP-21. I vaguely remember how each Flexowriter keystroke shifted 5-6 bits into the instruction register to 'single step' execute once enough bits were accumulated.

The days of self modifying code are over with instruction pre-fetch, pipelines, cache, strict Harvard architecture (separate I and D memory) and microcontrollers/SoC that run code only from flash.

That's what killed cleverness such as running one word ahead of the instruction fetch, as in this clever way to clear all memory on an IBM 1130 with just 2 instructions. [The IAR (Instruction Address Register) is what we now call the PC (Program Counter).]

    C000   load accumulator from memory address IAR
    D000   store accumulator to memory address IAR

Then ZAP! All memory was 0xD000 (all memory was 16 bits wide). Without any index register for *c++, how does it work? The magic is IAR-relative addressing.

The "load accumulator" loads the accumulator with the contents of memory at (IAR + 0): zero offset from the IAR. Since the IAR was already incremented, it points to the STORE instruction.

The "store accumulator" stores the accumulator (containing the store instruction itself) to memory at (IAR + 0). Since the IAR was already incremented, that is the memory location immediately AFTER the STORE instruction executing.

Lather, rinse, repeat :-)

-- jeff jonas

Reply to
Jeff Jonas

Thanks for remembering that. Code reviews took away all the "fun names" for things :-(

One version of FORTRAN (perhaps WATFIV) had DUMPLIST: dump specific variables upon ABEND instead of wasting paper with a full core dump. I had the program auto curse-itself upon error with dumplist/oh-shit/i,j,foo,bar,baz

But it's all window dressing. C/C++ using a pointer-to-function or reference-to-function is still taking the address of a function/subroutine to CALL/GOSUB/JSR it later.

Oh, that's just disgusting! Unless the calling point was saved, how did you COME-FROM to return back to the calling GOTO?

As I commented in an earlier posting, a FORTRAN variant had the 'computed goto', which could kinda, almost implement a SWITCH/CASE, complete with default case and case fall-thru:

          GOTO (20, 30, 40, 50) i
    C     fall thru here for i out of range
    20    ... got here for i .eq. 1
    30    ... got here for i .eq. 2
    40    ... got here for i .eq. 3
    50    ... got here for i .eq. 4
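The C switch counterpart has both pieces the post mentions: `default` plays the "fall thru for out-of-range i" role, and omitting `break` gives case fall-through. A sketch (the function `classify` and its values are made up for illustration):

```c
/* switch with a default case and deliberate fall-through,
   mirroring what computed GOTO could "kinda, almost" do. */
int classify(int i)
{
    int r = 0;
    switch (i) {
    case 1:
        r += 100;
        /* no break: falls through into case 2 */
    case 2:
        r += 20;
        break;
    case 3:
        r += 3;
        break;
    default:
        r = -1;      /* i outside 1..3: the "fall thru" path */
        break;
    }
    return r;
}
```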

-- jeffj

Reply to
Jeff Jonas

The shell 'eval' is a similar concept: submit a command to the interpreter but with some degree of isolation so errors can be caught externally. Often used to re-evaluate things like shell variables, for auto-generated command lines.

Reply to
Jeff Jonas

Please, feel free to provide a link. That's the way I remember it.

--
Dan Espen
Reply to
Dan Espen

That's a little harsh. AIUI it worked just fine for replacing rooms full of clerks using ledgers, and for anything else where the requirements were completely understood and not subject to change. Unfortunately, by the mid 70s pretty much all of those applications had long been done.

Programming models that work aren't too hard; I've worked using several. Programming models that work *and* keep manglement happy are another matter. The current flavour of the month (sprint/scrum) is IMHO a bit too heavily weighted towards keeping manglement happy.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

I think there needs to be a distinction between self modifying code (evil incarnate) and self extending code (quite common - think JIT compilers). I've never seen a good reason for the former and in these days of deep pipelines and multi-layer caching it would almost certainly be a terrible thing to do from a performance perspective.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

COBOL started in the era of no OS, where a minimal loader loaded the code onto a bare-metal machine, so when a program needed more memory than the system had, it had to arrange to bring in overlay segments only as and when they were needed.

Early PC DOS was also a very minimal OS, and apps larger than the available memory also had to do their own memory management.

This functionality was quickly pushed down into the OS though (long before DOS - DOS was simply miles behind when it launched). This meant that app programmers no longer had to manage their own overlays, as the functionality became handled transparently by the compilers and linkers, and eventually pushed even lower into the OS as OSes got more advanced and made use of improved hardware memory management features.

--
Andrew Gabriel 
[email address is not usable -- followup in the newsgroup]
Reply to
Andrew Gabriel

formatting link

Reply to
J. Clarke

That method gives you consistency which can be more important than correctness.

If you encounter a bug in the compiler but cannot get that bug fixed, you need to be able to implement a workaround which will work even if the compiler bug is fixed. I don't see how you can ensure your code will work without knowing machine language.

Using another compiler may solve the problem, but then your product is tied to a particular implementation of the language. I don't see how you can ensure that the "users" of the program you are writing will use a particular rendition of the language.

I'd be very nervous not knowing what machine code was generated. But I am an auld fart who hasn't done any real work in computers for decades.

/BAH

Reply to
jmfbahciv

ROTFL. Some of us got paid very, very well to play.

/BAH

Reply to
jmfbahciv

Or you can use an indirect with an offset into a table of instructions which we used on the PDP-10s. Very handy.

/BAH

Reply to
jmfbahciv

I thought Admiral Hopper wrote COBOL.

It depended on the shop. I was all three, including a few others. IIRC, the Patriot missile's code was COBOL, written at Natick Labs.

COBOL was used when decimal arithmetic was important waybackwhen. It also had the feature of being self-documenting if the variables were chosen well.

/BAH

Reply to
jmfbahciv
