Lack of bit field instructions in x86 instruction set because of patents ?

Indeed, my model for an operator console/debugger was the one on the CDC 6500. The console PP could look anywhere in memory at any time; it didn't need the CPUs' cooperation at all.

I do know debuggers, printfs, etc., can change the timing. I use in memory logging, event counters, watches, etc., etc., as needed when available. Sometimes, though, you have to debug with nothing but a serial port and no debug environment...

- Tim

Reply to
Tim McCaffrey

True, although many of the inexpensive ICEs today can get quite confused if the CPU starts wandering off into the weeds and begins executing bogus instructions, writing power-control/debug-control registers, etc.

Printf-type debugging can be the most productive approach for some embedded systems, in many cases, so I certainly wouldn't write off a candidate who was doing their debugging that way. On the other hand, I'd also want them to be familiar with what a proper debugger *can* do for them.

I also tend to agree with John that, while certainly even very good programmers do some debugging, a good programmer's attitude needs to be that the primary "debugging" task is done while writing code, and running the code is meant to verify it actually works. I've had the unfortunate experience of working with several programmers who would actually claim "[their] code is done" as soon as they had a successful compile, with no testing whatsoever, even though historically, at least 90+% of the time, there were bugs in their code that would be *immediately* discovered as soon as you actually *ran* it. :-(

---Joel

Reply to
Joel Koltner

They were spurned because, during the microprocessor revolution, the main way to build a true hardware-level debugger was with a bond-out version of the IC, which often priced ICEs for microprocessors well out of the range of small companies or hobbyists. I'm painting with a rather broad brush here -- there were certainly exceptions -- but a decade ago, if you wanted a true hardware ICE for an 8031, an AVR, or a PIC, you were looking at what -- in today's money -- was often >$10k.

Whereas printf() debugging is effectively free.

Even today, while ICEs have come down to 3- (or even 2-) digit price levels, most of them don't support the "live variable watch" that your VAX debugger did. (But they do all support breakpoints -- although sometimes only a couple in hardware; beyond that they have to be emulated in software via single-stepping, which is orders of magnitude slower than full CPU speed -- as well as single-stepping, examining variables when stopped, etc.)

---Joel

Reply to
Joel Koltner

Not at all:

Did you ever _use_ the Logitech Modula II compiler?

I did, and it was the most broken piece of programming I've had the misfortune to work with: among _many_ other things, the compiler would generate faulty code for all loop constructs except one (WHILE, afair).

To be totally fair, the compiler developers probably knew about most of these bugs and were only midway through making the compiler self-hosting: the version they sold me was called "V0.3c"! :-(

At this time Borland came out with Anders Hejlsberg's Turbo Pascal, which ran rings around Logitech M2 in pretty much every possible way.

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

Los Gatos used metaware's technology for a lot of stuff dealing with defining languages related to VLSI tools ... including starting out a (mainframe) pascal language. the pascal language eventually evolved to the point that it was released to customers as pascal/vs (and later also released on aix).

pascal/vs was used to implement the mainframe tcp/ip stack ... which suffered from none of the buffer overflow vulnerabilities and exploits commonly associated with C language implementations ... misc past posts mentioning C language

formatting link

it wasn't so much that it was impossible to have buffer overflow problems in pascal ... it was just that it took an enormous amount of effort to have buffer overflows ... comparable to the effort in C language environments to NOT have buffer overflows.
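A minimal C sketch of the asymmetry being described (all names here are illustrative, not from any of the implementations mentioned): in Pascal/VS the bounds of an array are part of its type and checked by the compiler, whereas in C the equivalent check is something every call site has to remember to write by hand.

```c
#include <stddef.h>
#include <string.h>

/* Copy src into dst, refusing rather than overflowing: returns 0 on
   success, -1 if the data would not fit. This explicit length check
   is the "effort to NOT have buffer overflows" that C requires at
   every copy; a range-checked Pascal array gets it for free. */
static int bounded_copy(char *dst, size_t dst_len,
                        const char *src, size_t src_len)
{
    if (src_len > dst_len)
        return -1;              /* would overflow: reject */
    memcpy(dst, src, src_len);
    return 0;
}
```

Forget the check (or get one of the two lengths wrong) and the compiler says nothing, which is the point being made.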

unrelated to the use of Pascal language in the mainframe tcp/ip stack implementation ... here are various past references to doing the rfc1044 implementation for the mainframe tcp/ip product ... and in some tuning work at cray research getting something like 30 times the thruput with 1/20 the pathlength (nearly 3 orders of magnitude improvement)

formatting link

in many ways, pascal shares the difficulty of having buffer overflows with the much more ungainly PLI ... slight drift here about air force security study of "multics" (implemented in PLI), including mention of not having any buffer overflows:

formatting link
Thirty Years Later: Lessons from the Multics Security Evaluation

somewhat more topic drift ... multics was going on the 5th flr of 545 tech sq ... and the science center doing virtual machine cp67 work was on 4th flr of 545 tech sq ... misc. past references

formatting link

i was undergraduate at a univ. that had installed cp67 and I got to play with it. the vendor even asked if i could make some specific enhancements. in retrospect ... some of the enhancements may have originated from these guys ... which I didn't learn about until much later:

formatting link

for some turbo pascal (& c) trivia ... there was an effort a couple yrs ago to recover several old borland turbo distribution diskettes (i had to find a 5.25 floppy drive, i eventually managed to recover close to 30 floppies):

formatting link
Turbo C 1.5 (1987)
formatting link
Turbo C 1.5 (1987)

i had logitech modula2 diskettes but didn't try and recover them.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70
Reply to
Anne & Lynn Wheeler

Borland themselves released several Turbo Pascal versions some years ago as part of a corporate electronic museum. They seem to currently be available at:

formatting link

G.

Reply to
Gavin Scott

I loved having Pascal/VS available!

I used it once to write the mainframe part of a heavily customized Kermit implementation that was capable of using 1900+ byte large packets, by filling most of a screen image.

The client was on the other end of a 3270 protocol emulator, talking VT100 or similar, and it would remove the screen formatting sequences and end up with the same packet data.

The fun part was that this worked pretty much immediately, and ran just as fast as the file transfer facility on a dedicated 3270-PC, and 10x faster than the (Kermit) reference version which used the default single line packets.

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

re:

formatting link
Lack of bit field instructions in x86 instruction set because of patents ?

not long after the "troubles" in '92 (went into the "red") ... the company was spinning off and/or moving to COTS for some number of things (lots of cost/capital reducing measures). some of this was moving to outside vendor off-the-shelf electronic design tools ... which also involved transferring some number of internal tools to outside vendors.

we had already left ... but i got a consulting contract to port a >50k statement (rs/6000) pascal/vs (electronic design) application to other platforms (as part of outside vendor picking up the application). pascal/vs had lots of enhancements and some number of other pascals appeared to have been used for little more than student education projects ... which significantly complicated the port (that plus one of the vendors had outsourced their pascal to an organization 12 time-zones away).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70
Reply to
Anne & Lynn Wheeler

Crafting a perfect line of code for me is as much fun as (probably) achieving some heroic S/N ratio or some such is to you. :-)

And yes, I write code that works. I'm one of those 90% design, 10% test guys. ;-)

Cheers! Rich

Reply to
Rich Grise

Yes, and for some moderately large projects. It was a reliable, industrial-strength compiler for the x86 from about version 2.01 onwards (ca. 1985). 2.10 was particularly good and stable. I do recall some serious fun when they suddenly changed the default byte alignment in structures in one release (which was not a very friendly thing to do).

ISTR it was a slow 4-pass compiler and its code generator was more than a bit pedestrian even for the time (but you could add inline code). The overlay linker was incredibly good, and the debugging facilities were first rate - comparable with the mainframe tools they copied. Parts of DEC were actually using the language internally at the time. In the event of a rare system failure you could be almost certain of finding the root cause from the PMD (post-mortem dump) in the symbolic debugger.

If you buy a pre-release version 0 bootstrapping prototype, what do you expect? I am surprised they sold it to you in that state.

A spin-off from the Borland compiler development group, headed by Niels Jensen, produced by far the most appealing PC version of Modula-2 as JPI. Its libraries were non-standard, but at the time its code generation was state of the art and fast. They developed it independently and it had some very good vintage versions; 1.17 and 3.04 spring to mind.

They had a no-compete clause with Borland for C compilers, and when it expired they launched the most horribly buggy C compiler I have ever seen. It would crash spectacularly on minor syntax errors (a missing ; and that sort of thing). Code generation was OK when it worked. It did for them, and Clarion took them over (as they used the JPI M2 internally).

These days XDS has probably the most interesting Modula-2 compiler, and they now give it away. Its code generator includes powerful static dataflow analysis to find path-dependency bugs. It can find faults in old legacy code and has libraries that mimic the JPI and PIM3 dialects.

I'd have expected Modula 2 to appeal to electronics engineers because it provides a means to make robust independent modules that can be trusted to do exactly what they say on the tin.

Regards, Martin Brown

Reply to
Martin Brown

Here's one of my all-time favorites (for the tcp copy&checksum loop): Timings are for a classic Pentium or modern multicore.

next:   mov  [esi+edi],eax   ;; Cycle 1, U pipe first
        mov  eax,[esi]       ;; 1, V pipe

        adc  edx,eax         ;; Cycle 2, U pipe, Carry from prev iteration
        lea  esi,[esi+4]     ;; 2, V pipe

        dec  ecx             ;; Cycle 3, U pipe, does not modify Carry!
        jnz  next            ;; 3, V pipe
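For readers without Pentium pairing tables at hand, here is a plain C restatement of what the loop computes (a sketch with illustrative names; it deliberately ignores the U/V scheduling, which was the whole point of the assembly version): a copy combined with a ones'-complement sum, where a wide accumulator stands in for the carry flag and the folds at the end perform the end-around carries.

```c
#include <stdint.h>
#include <stddef.h>

/* Copy `words` 32-bit words from src to dst while accumulating a
   ones'-complement sum. The 64-bit accumulator collects the carries
   that the adc instruction folds in on the fly; the shifts at the
   end perform the end-around carries and reduce to the 16-bit folded
   sum (the TCP/IP checksum is the complement of this value). */
static uint16_t copy_and_csum(uint32_t *dst, const uint32_t *src,
                              size_t words)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < words; i++) {
        dst[i] = src[i];
        sum += src[i];          /* carries pile up in the top bits */
    }
    sum = (sum & 0xffffffffu) + (sum >> 32);   /* fold 64 -> 32 */
    sum = (sum & 0xffffffffu) + (sum >> 32);
    sum = (sum & 0xffffu) + (sum >> 16);       /* fold 32 -> 16 */
    sum = (sum & 0xffffu) + (sum >> 16);
    return (uint16_t)sum;
}
```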

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

In hindsight, so was I, but this was the very first compiler I ever bought, and they did run ads for it (without any caveats or version numbers), probably in Byte.

JPI M2 was just beautiful, and they did extremely good work on the TP-compatible Pascal version as well. I spent at least a couple of hours on the phone (long distance, from Norway!) with one of the developers, discussing various optimizations I'd noticed they employed (or not).

I was particularly impressed by the way they even managed to automatically merge pairs of byte variables (changing allocation order if they had to) and do assignments to both with a single 16-bit store operation.
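That merged store, spelled out by hand in a C sketch (hypothetical names, little-endian layout assumed; the struct stands in for the compiler's chosen allocation of the two byte variables):

```c
#include <stdint.h>
#include <string.h>

/* Two byte variables allocated adjacently, as the compiler arranged. */
struct pair { uint8_t lo, hi; };

/* Assign both bytes with one 16-bit store instead of two 8-bit ones,
   which is the optimization being described. */
static void set_both(struct pair *p, uint8_t a, uint8_t b)
{
    uint16_t v = (uint16_t)(a | (b << 8));   /* little-endian packing */
    memcpy(p, &v, sizeof v);                 /* one 16-bit store */
}
```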

I'll look into that, thanks!

That was the reason I bought it.

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

There's the difference: achieving a good s/n ratio is an end accomplishment, something we can sell. We can't sell one line of code, no matter how elegant.

Some things in engineering are just tedious: checking, testing, parts lists, test procedures, manuals, ECOs. Most of programming *should* be tedious: rereading, commenting, testing, documentation, all the stuff you need to do to get everything right. If you can do all that and also be an artist, good.

The best programmers I know don't do tricky stuff, don't take risks. Their code looks very ordinary, except that it's well commented, and except that it just works.

John

Reply to
John Larkin

In one case I recall using the CGA display *as* the debugger. We had some TSR code under DOS/Win95 (before it became clear that trying to write such code and expect it to run on any user's system without problems was madness) which would lock the PC up hard. Not having any hardware ICE or other debugging facility, we used the frame buffer in the CGA card as a footprint log, encoding events as different symbols and colors. The memory on the video card would survive the destruction of the running OS environment, and you could see what led up to the problem just by looking at the sequence of symbols on the screen.
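The technique can be sketched in a few lines of C; here a plain array stands in for the text-mode buffer at 0xB800:0000 so the idea can be shown portably, and all names are illustrative rather than from the original TSR. Each event drops one character cell (symbol plus color attribute) at the next screen position.

```c
#include <stdint.h>
#include <stddef.h>

enum { COLS = 80, ROWS = 25 };   /* CGA 80x25 text mode */

static uint16_t *fb_cells;       /* 0xB800:0000 on real hardware */
static size_t fb_pos;

static void trace_init(uint16_t *cells)
{
    fb_cells = cells;
    fb_pos = 0;
}

/* symbol: event glyph; attr: color byte encoding the event class.
   Each text cell is attribute byte (high) + character byte (low). */
static void trace_event(char symbol, uint8_t attr)
{
    if (fb_pos < (size_t)COLS * ROWS)   /* stop rather than wrap, so
                                           the tail of the log
                                           survives a hard hang */
        fb_cells[fb_pos++] =
            (uint16_t)(((uint16_t)attr << 8) | (uint8_t)symbol);
}
```

When the machine locks up, the sequence of colored glyphs left on screen is the trace.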

Which also reminds me of the UCSD p-system running off 8" floppies on a Terak, where the video memory was used as scratch space by the compiler. The combination of patterns on the screen and the sounds of the floppy seeking were quite expressive in their ability to communicate the progress of your compile.

G.

Reply to
Gavin Scott

Hardly. Your statement made no sense in context. Do you think every transition in a synchronous machine is clocked? It doesn't matter whether or not there is "asynchronous" logic (I'm assuming you consider "domino logic" to be "asynchronous") within the synchronous machine, it is still a synchronous machine.

IOW, it's you who is the troll.

Reply to
krw

But it is you who thinks that an argument about nomenclature of the whole device, namely "it is still a synchronous machine", is in any way pertinent to addressing a claim about "every internal state transition", with an emphasis on the "every".

I.e. you apparently don't know what the words "every" and "all" mean.

Which is far worse than being a troll.

Phil

--
Marijuana is indeed a dangerous drug.  
It causes governments to wage war against their own people.
-- Dave Seaman (sci.math, 19 Mar 2009)
Reply to
Phil Carmody

Well, I won't attempt to classify you, but will merely explain why you are completely wrong. At least as far as the thread goes; I am not going to waste time on a "who said what" flame war.

A system is built up of multiple components, each of which may be systems, so it makes very little difference what level we discuss.

If a set of entirely synchronous components is connected by logic which has a non-deterministic aspect, then the system is potentially asynchronous. ECC handled by interrupt (whether in firmware or the operating system) is merely one classic example of such logic, and remember that ECC can be (and should be!) applied to logic pathways as well as memory.

Since the early 1960s, almost all 'general purpose' computers have relied on such logic; since the 1980s, almost all embedded devices have. A few of them will go to great trouble to provide apparent synchronicity, but it is almost always limited, and at least some asynchronicity will be visible to at least some programs.

Regards, Nick Maclaren.

Reply to
nmm1

Writing TSRs wasn't really that hard:

It was even possible to make them do substantial tasks and still work for years on a very diverse population of PCs. My best example is probably the network slave printer TSR I wrote some day around 1987:

Totally interrupt driven, mostly from the timer tick but also serial/parallel port ready, network card packet receive etc.

It used double buffering towards the network server so that it could print one buffer while receiving the next, had a 256-byte local stack buffer, and still weighed in at less than 1700 bytes total resident size. This was partly done by making it self-relocating, so that only the parts that needed to stay resident did.

It kept on working without a single modification when DOS was replaced by Win3.x.

I wrote it from scratch in a single 5-hour sitting. After correcting a couple of syntax typo complaints from the assembler, it just worked, and kept on working for many years. :-)

Unfortunately, I've never been able to repeat that particular experience. :-(

Been there, done that, usually in the form of direct memory access to the screen buffer.

Terje

--
- 
"almost all programming can be viewed as an exercise in caching"
Reply to
Terje Mathisen

I expect authors, artists and carpenters could make very similar remarks about their day-to-day activities. Almost any task has bits that are boring and bits that require creativity. Good practitioners know which is which. Good engineers show creativity in their design and are ruthlessly boring in the implementation.

Very few programming languages permit much syntactic artistry, so all code ends up *looking* "ordinary". If the design is right and has sub-divided the problem into well-defined and straightforward programming tasks, the code will look simple. If the design is ill-defined, the code will look random. If the design calls for bucketfuls of state and complex decisions, the code will look hopelessly complex.

Reply to
Ken Hagan

You're hilarious when you're on the run.

Phil

--
Marijuana is indeed a dangerous drug.  
It causes governments to wage war against their own people.
-- Dave Seaman (sci.math, 19 Mar 2009)
Reply to
Phil Carmody
