filling remaining array elements with fixed value

So did I, at Helsinki University of Technology.

Even the Elliott 5-track code used the same letter and figure shift characters as the common teleprinter.

In the '60s, it was customary for each computer manufacturer to feel obliged to have its own character coding. Elliott was one of them, as were e.g. IBM and Sperry Rand Univac.

--

-TV
Reply to
Tauno Voipio

I have a 32-bit VMware virtual machine that was created nine years ago and is still in use today. It has transitioned from VMware version 5 through versions 6, 7, 8, 9, and 10 with no issues, and its host PC has transitioned from a 32-bit processor and 32-bit OS to a 64-bit processor and 64-bit OS with no issues.

--
John W. Temples, III
Reply to
John Temples

That is an impressive example of the upwards compatibility of the x86 architecture -- but still only within that architecture family. With Moore's law slowing down or perhaps stagnating, I'm not at all sure that x86 will still be in common use in the 2040's. Maybe it will, maybe it won't.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

Even if it won't, chances are that there will be x86 emulators (and associated PC hardware) for whatever platform is mainstream in 2040, just like today, where you have (very accurate) emulators for computers that became obsolete decades ago (like the Commodore 64).

Reply to
Dombo

I wouldn't worry about it too much - you'll just get wrinkles.

Technically, the x86 "architecture" is smoke and mirrors now. Current x86 cpus don't execute x86 instructions at all ... they dynamically translate x86 code into an internal RISC-like instruction set [which varies by generation]. In terms of their micro-architecture, current x86 cpus are massively OoO, load/store, complex RISC machines having many hundreds of registers.

For the past 20 years - since the Pentium Pro - x86 cpus haven't implemented the x86 program model, they have *emulated* it.

I doubt x86 will ever completely disappear. If we do get to the point that there aren't OTS cpus that natively "run" x86 code, there will be FPGA cores, software emulators and x86->? binary converters for migrating software to whatever is the new dominant architecture.

George

Reply to
George Neuner

Hi David,

[No time for a "thorough" reply --]

>>> Or you and he can continue to keep one foot nailed to the floor by

Here, I was referring to the environments they put in place to develop/support the "product's development", not the actual resulting products.

E.g., I recall one project released as bz2 archives -- before bz2 was in "widespread" use. So, before I can even get started *looking* at it, I have to build and install a bz2 executable.

Why? Would a gzip'ed tarball have been *that* much bigger? Would the bz2 tarball save hours/minutes of download time? Gobs of disk space (assuming you *don't* unpack it)? Some reason you can't release a gzip tarball alongside the bz2? (this is what they did -- AFTER I complained)

Is there a reason you have to use odemake, pmake, bmake, etc.? Or, worse yet, some completely *hacked* build system (look, for example, at Jaluna's build system :< )? If so, do you bother to share the reasons *why* you REQUIRE it? Or, was it just a whim? NIH? Or, worse, an "experiment"?

Is there a reason your documentation can't be a regular man page? Why an info file? Why a set of web pages? A latex document? I.e., what *else* do I have to build in order to build the entire complement of your deliverables?

Or, a perl script that emulates what two lines of sed(1) could accomplish?

Granted, the "younger generation" is more geared towards newer tools (e.g., perl in lieu of awk) than those of us with longer "histories" would have employed in many similar roles. But, often the choices appear arbitrary -- as if someone was using this "project" as a chance to "play" (because he didn't have to answer to a PHB!)

[I suspect "play"/experimentation is the underlying basis for many of these "decisions". And, even if they prove to have been *bad* choices, there's little inclination to go back and "fix" things: "Gee, we should have just used make instead of this bizarre set of hand-crafted scripts..."]

The other aspect of FOSS that is annoying is its "incompleteness". Many are simply "ideas" and not full-fledged "products". Like adding "reverse" to a vehicle but never quite sorting out the fact that velocity backwards should be at a much slower rate (gearbox/tranny) than *forward*!

"We'll fix that in the next release! (apologies to anyone who backed into a wall due to our current oversight)"

How did you *test* your "product"? (Ooops!) Did you forget that little detail? Or, was it all just ad hoc testing "on the fly" as you were writing the code? How can *I* verify that it does what you *think* it does? How do I know *how* to test it? How do I know what the weaknesses in its algorithms are likely to be (without studying them in detail -- with the COPIOUS documentation that you've probably forgotten to include!)

Which features have you incompletely implemented (before being lured off to add yet more *incomplete* features)? Which things should I *not* expect to work? Which should I avoid *using* (if I am just a CONSUMER of your product) and which should I concentrate on testing and improving (if I am a fellow developer)?

How do I even know what it is *supposed* to do (formally) if you haven't documented that? (and, how did YOU know how to *test* it if you didn't have a formal statement of how it was supposed to work?)

Are the "default options" for the application accurately indicated in the documentation? Or, do they actually *differ* from what the documentation states? Are there *other* options that are not documented? Is this omission intentional (perhaps they are obsolescent or not completely implemented) or accidental?

IME, there are very few FOSS projects that satisfy these criteria.

Where did I claim they were written by hobbyists? My only reference to that term was in the context of CPU emulators: "Can you find a 68040 emulator? 99000? 32000? Z380? etc. (*other* than "hobbyist" attempts)" So, *can* you find one that is actively maintained by a "capable" community? One on whose efforts you would be willing to rest the future of your project? Can you find a formal definition of the contract that it *claims* to make with the "hosted applications"?

[20+ years after MS introduced Windows, can Wine *yet* emulate EVERYTHING that you could do on a "286"? Is there a list that lets a potential user evaluate whether it is even worth *trying* their favorite app? Any guarantee that even if it *looks* like it is working that they won't later discover some feature in the app that *won't* work properly?]

What annoys me about most FOSS is that most don't treat their "output" as a genuine (supportable) *product*.

[You hear developers lament that all these "problems" IN THEIR DAY JOBS are the result of pressures from PHB's, marketing, etc. Yet, when these same developers operate in an environment where those pressures are nonexistent, the same problems manifest! I.e., the common issue is the very developers who tried to lay the blame on their bosses! :< Yeah, I get it. Documentation and testing aren't fun. It should only be required for medical/safety -- let Wild West Software prevail everywhere else in the name of "time to market"...]

Yeah, I know... documentation and testing are "no fun". But, presumably, you *want* people to use your "product" (even if it is FREE) so wouldn't you want to facilitate that? I'm pretty sure folks don't want to throw lots of time and effort into something only to see it *not* used!

Granted, the "development" issue that I initially discussed is a tough one -- how can I *expect* all FOSS developers to "settle" on a common/shared set of tools/approach? While this is common *within* an organization, it would be ridiculous to expect Company A to use the same tools and process that Company B uses! And, griping about it would be presumptuous.

OTOH, it's fair to gripe when Company (Organization) X does things in a way that is needlessly more complicated or dependent (on a larger base of tools) than it needs to be.

When I started my current set of (FOSS) projects, I was almost in a state of panic over the "requirements" it would impose on others. Too many tools, too much specialized equipment, skillsets, etc.

After fretting about this for quite some time -- constantly trying to eliminate another "dependency" -- I finally realized the "minimum" is "a lot" and that's just the way it is!

OTOH, it is highly unlikely that some *one* will need to undertake all of these activities. So, the requirements can be spread out over a *group* -- even if it is a "local" group -- instead of a single developer.

*You* don't have to be fluent in all four of the languages used in the system (not counting application specific languages). Nor do you have to be savvy with the various hardware design, fab and test tools. Or, the tools used to prepare developer, user and run-time documentation.

Just some subset of those. And, *I* will endeavor to pick good tools that adequately address their respective needs so you aren't forced to use two *different* tools (e.g., languages) for the same class of "task". As such, you only have to invest (time and/or money) in a *minimal* set of tools.

[Which is very different from "chasing FOSS" in general]
Reply to
Don Y

This is true of other languages, as well. E.g., foo =- 2;
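
For anyone who never tripped over it, a minimal sketch of that particular gotcha -- assuming a modern compiler on one hand and one old enough to honor the archaic "=-" form on the other; the variable is just illustrative:

    /* Sketch of the old "=-" ambiguity. */
    #include <stdio.h>

    int main(void)
    {
        int foo = 10;

        foo =- 2;              /* ANSI/modern C parses this as: foo = -2;    */
                               /* a pre-ANSI compiler read "=-" as the old   */
                               /* spelling of "-=", leaving foo as 8, not -2 */

        printf("%d\n", foo);   /* prints -2 with any current compiler        */
        return 0;
    }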

Every language (tool) you rely upon in a project creates a dependency on that tool and the staff who must be able to *use* it. Try taking someone accustomed to C11 (i.e., a recent grad) and sitting them down in front of a C89 codebase WITH C89 TOOLS (because your industry requires tools to be formally certified prior to use). They will spend countless hours wondering why their "perfect" code is throwing compiler errors.
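
A sketch of the sort of thing they trip over -- perfectly legal C99/C11 that a strict C89 compiler rejects (the specifics below are just illustrative):

    #include <stdio.h>

    struct point { int x; int y; };

    int main(void)
    {
        // C++-style comments                  -- not C89
        struct point p = { .x = 1, .y = 2 };   /* designated initializer -- not C89 */

        for (int i = 0; i < 3; i++) {          /* declaration inside for() -- not C89 */
            printf("%d\n", p.x + i);
        }

        int later = p.y;                       /* declaration after statements -- not C89 */
        return later;
    }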

And, once you've clued them in to the reason, they'll be fuming /sotto voce/ each time they stumble on another "gotcha". I.e., the tool is "fighting them" -- they're not a "happy camper".

Where will C++, Java, Python, Perl, Ruby, etc. be "a few years hence"? Where will the folks who can efficiently *develop* with them be?

(e.g., 6502 ASM programmers are still in demand -- because no one *writes* 6502 ASM nowadays, "mainstream")
Reply to
Don Y

Oh, that could be interesting. :-)

I've actually used those early C compilers (a long time ago).

This is why it's so important to learn the concepts first and the language second. With that grounding, it's a lot easier to adapt to new languages.

I'll admit I didn't realise the 6502 was still around - I last used it back in the BBC Model B days (ie: my school days). At the time I seem to remember preferring the Z80 (can't remember why) although some classmates seemed to prefer the 6502 architecture. [*]

What is the 6502 still used in?

Simon.

[*] Of course, these days, schoolchildren are more likely to be comparing social media platforms instead of computer architectures. :-)
--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I think it's still in use as a core in ultra high-volume, low-cost, full-custom chips used in toys...

--
Grant Edwards    grant.b.edwards at gmail.com    Yow! My EARS are GONE!!

Reply to
Grant Edwards

Hi Simon,

[apologies for forgetting ...]

>>> No, but it's been through an incompatible language revision.

You have an odd definition of "interesting"! :> I would use a phrase more along the lines of "annoying!"

Granted, I picked on an obscure feature. But, there are lots of assumptions that (we all) make, subtly, in our development environments.

E.g., nowadays, a "kid" (new grad) might be startled to encounter 16b int's. And, at a complete *loss* to understand why his code is compiling yet *crashing* -- until this is made evident to him. [I've had to support lots of legacy codebases over the years so have learned not to take anything for granted from "the" tools :< ]
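
A sketch of the sort of code that bites, assuming a target where int is 16 bits (the numbers are just illustrative):

    #include <stdio.h>

    int main(void)
    {
        int width  = 640;
        int height = 480;

        int  pixels = width * height;        /* fine with 32-bit int; with 16-bit int  */
                                             /* 307200 overflows -- undefined behavior */
        long safe   = (long)width * height;  /* forcing the multiply to long works on  */
                                             /* either target                          */
        printf("%d %ld\n", pixels, safe);
        return 0;
    }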

Yup. But, even there, we exhibit "biases" that we aren't even aware of in how we think things "should be". E.g., I often write self-modifying code. And, when I find myself working in an architecture (or a particular implementation) that doesn't allow me to do that, I feel stymied. As if I should be *entitled* to do that...

It's the essence of the Moto v Intel argument (in that time frame). Lots of registers or *few* registers (with a "fast access" bank of memory). I have a particular fondness for the Z80/180/etc. but admit it is a helluva kludge. For its generation, it seemed to have "the right number" of registers -- you weren't continually doing loads and stores from A/B, etc.

The 6502, OTOH, is a tiny, relatively clean architecture (if you're discussing "programmable calculators" :> )

The folks that have inquired most in recent memory have been split between military and (very) high-volume consumer goods. E.g., put the processor on a tiny die with whatever other mixed-mode stuff you need and have a true "single chip" solution to the problem.

Sadly, probably true!

Reply to
Don Y

The Z80 had a decent simple interface to external hardware, and many of the software writers were actually hardware engineers.

Many Z80 operations looked good until you tried to use them, e.g. "you can do a print function with one instruction", or trying to use the index registers when pointing to linked list nodes.

Z80 had "conventional" stacks and none of the page-zero tricks/limitations.

Reply to
Tom Gardner

Moving from C99 back to C90 for some small changes to an old project recently was a bit of a pain. Serious work would quickly be annoying.

Agreed.

I preferred the 6502 - mainly because the BBC had a decent assembler and a reliable disk drive, while my old Spectrum (with a Z80A) had no tools at all (hand assembling - and I even had to write BASIC code to interpret the hex code and put it into memory) and an unreliable tape recorder. Debugging machine code on the Spectrum was mainly a matter of listening to the noises made by the power supply, and if it hung then everything had to be typed in again on a chewing-gum keyboard. It taught me the value of simple, careful and correct coding!

The Z80 was a more powerful architecture, with a number of 16-bit operations and many more registers than the 6502 - though the 6502 could do more operations per clock cycle.

Reply to
David Brown

In those days it was more like fewer clock cycles per operation; an instruction took 2-7 clocks on the 6502 (and many more on the Z80).

Reply to
Dombo

Yes, I meant more MIPS/MHz - or a greater fraction of an instruction per clock cycle. The 6502 had a pipelined architecture with instructions overlapping with each other, which was unusual at the time for processors of that size.

Reply to
David Brown

A Z80 would require 4-23 (?) clocks per instruction. E.g., the 4-clock case required the operand(s) to be "implied" (like "INC r8") or encoded directly in that one-byte opcode. Towards the other extreme, something like a CALL required 10 clocks (4+3+3) to fetch the instruction (including the target address) and 6 more (3+3) to push the PC.

What you really want to look at is how much "work per memory dollar" (or similar metric). E.g., you could clock a Z80 at a higher rate which effectively compensated for the fact that it took 4 clocks to "fetch an opcode".

By contrast, the 6502 (and 68xx family from Motogorilla) ran with a slower overall clock -- and "wasted" half the period of the memory access. So, put a given speed memory in the design and diddle the CPU "frequencies" so that the memory was being used at its capacity. *Then*, see which is doing more work!

With memory speeds (UV EPROM) in the ~450ns range, this effectively limited the "single clock per reference" architectures to ~1MHz. By contrast, the Z80 could run at ~3MHz for the same memory dollars.

So, a "direct CALL" (CALL nn) on a 3MHz Z80 would require a bit over 5us. The same sort of instruction on a 1MHz 6502 would require 6 "clocks" -- 6us.

[A friend was in the Moto camp back then and we would continuously be having these discussions -- trying to adjust our respective metrics to reflect this inherent "oscillator difference". It didn't take long to learn that creating reliable benchmarks is a waste of time -- as you would tend to solve problems differently based on the hardware at your disposal and its assets/limitations. E.g., with the 6502's "zero page", you thought really hard about what you crammed into those precious locations! The Z80, having nothing comparable, "freed" you to worry about *other* issues]
Reply to
Don Y

One of the best things the Z80 had going for it was the separate 64K "I/O space". You could be really sloppy in your address decoding without worrying that you were throwing away precious parts of the "program/data memory" space. [The other best thing was using the alternate register set for very low latency IRQ's!]

The index registers were intended to reference structs. People often failed to realize this and tried to use them in the way that HL (and BC/DE to a lesser extent) was used. The extra penalty brought about by the displacement byte and the "escape" prefix made this impractical.

OTOH, you could bring them to bear for incredibly structured coding that would be a nightmare if you were constantly forced to explicitly perform "address arithmetic" on a "base register":

    PUSH HL           ; preserve pointer to struct
    LD   DE,offset    ; offset of member
    ADD  HL,DE        ; r.hl points to member; r.de is trash
    LD   E,(HL)       ; get low byte of member
    INC  HL           ; point to high byte
    LD   D,(HL)       ; get high byte
    POP  HL           ; recover pointer to struct

vs.

    LD   E,(IX+memberL)
    LD   D,(IX+memberH)

This made the code easier to understand and discouraged microoptimizing algorithms and struct layouts (so, for example, you could "walk through it" sequentially instead of "random access")

Reply to
Don Y

Separate I/O space is very nice for hardware, and for assembly programming - but it can be a real pain for C where memory-mapped I/O is more natural (especially for bigger blocks of data, such as buffers).
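
E.g., a memory-mapped register falls out of plain C naturally, while a separate I/O space ends up hidden behind a compiler intrinsic or an assembly helper. A rough sketch -- the register address and the helper name are made up:

    #include <stdint.h>

    #define UART_DATA (*(volatile uint8_t *)0xF000u)   /* hypothetical memory-mapped register */

    void uart_putc(uint8_t c)
    {
        UART_DATA = c;     /* an ordinary C store reaches the register */
    }

    /* With a separate I/O space there is no portable C equivalent; the port
       access gets wrapped in an intrinsic or a small assembly routine instead: */
    extern void port_write(uint8_t port, uint8_t value);   /* e.g. wraps OUT (C),r */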

That was a nice feature. I remember you could also swap AF and A'F' with a single instruction, which was handy for intermediate data in calculations.

I vaguely remember doing some "object oriented" assembly programming using IX and IY to point to objects - but it was a very long time ago, and just hobby stuff.

Reply to
David Brown

That's what I initially thought, and is valid if most of the "work" is "in" a single struct. But it wasn't the dominant factor in "my" programs.

Overall, it was a right royal pain finding the right struct in the first place, and inefficient if the work once there was minimal.

For example, given a linked list of nodes/structs where IX+2 contained a pointer to the next node/struct in the list, you couldn't move to the next node using LD IX,(IX+2), so IIRC IX had to be moved into HL via the stack(!), incremented, then moved back via the stack.

That was appalling when traversing a linked list to find the relevant node - much easier just to keep everything in the HL register.

The 6800 was much better in that respect, the 6809 was even better.

Reply to
Tom Gardner

The Z80 was an 8 bit processor. There were very few 16b operations (other than immediates, push/pop, etc.).

    PUSH HL              ; preserve HL (in case it is important)
    LD   L,(IX+offsetL)  ; get low byte of pointer to next node
    LD   H,(IX+offsetH)  ; and high byte into r.HL
    EX   (SP),HL         ; restore r.HL, ToS -> next node
    POP  IX              ; r.IX points to next node

The only lost item is the pointer to the current node. But, there's a lot of "activity" in those few instructions. Compare keeping everything in r.HL instead:

                         ; r.HL points to current node
    INC  HL
    INC  HL              ; r.HL points to member referencing next node
    LD   E,(HL)
    INC  HL
    LD   D,(HL)          ; r.DE points to next node
    EX   DE,HL           ; r.HL points to next node

This forfeits r.DE's original contents.

However, this forces you to do any other references into that "node" relative to HL:

    EX   DE,HL              ; r.DE -> current node
    LD   HL,offset(MEMBER)
    ADD  HL,DE              ; r.HL -> member in this node; r.de -> current node

This tends to cause you to redefine the layouts of your structs to avoid the 16b ADD. E.g., you would, instead, *walk* r.HL (or r.DE, etc.) through the struct, *consuming* its members in whatever order your algorithm requires and then "conveniently" ending up with r.HL pointing at the member that serves to link to the next node...

I.e., moving the pointer was cheaper than computing a *new* value for that pointer.
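
Roughly the node layout that style pushes you toward -- a hypothetical example, with the members ordered to match the order the code consumes them and the link placed last:

    #include <stdint.h>

    /* Hypothetical node: a pointer "walked" through the struct in access
       order ends up sitting on the link to the next node just when it is
       needed -- echoing the Z80 trick described above. */
    struct node {
        uint8_t      flags;   /* examined first         */
        uint16_t     value;   /* consumed next          */
        struct node *next;    /* reached/followed last  */
    };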

With the (relatively) big register set, you could do a lot without having to reload pointers (from immediate data). When I was writing Z*80 code, my statement comments were usually preoccupied with keeping track of what was in each register, ToS, etc. Tedious, but it often saved a fair number of memory accesses.

Of course, if you actually have an *array* of nodes (i.e., not a linked list), then it is easier to:

    PUSH DE
    LD   DE,sizeof(NODE)
    ADD  IX,DE
    POP  DE

This, of course, being more expensive than

    PUSH DE
    LD   DE,sizeof(NODE)
    ADD  HL,DE
    POP  DE

*but*, the former preserves the ability to do indexed references (instead of having to keep diddling with HL)

The 6502/68xx required "persistent memory" to reside at 0xFFFX. And, to use zero page, you also had to decode *that* address range (which was typically to R/W memory, not "persistent" memory/ROM). Given the sizes of memory devices at that time, you either had to complicate the address decoder *or* "waste" pieces of the address space.

The 65816 was a much nicer processor -- but relatively *late* onto the scene.

*All* of these processors were total *dogs* when it came to HLL's! :<
Reply to
Don Y

Yup, the earlier 6800 was better in this respect! I was unpleasantly surprised by the Z80 - after having used it I thought the 8080 instruction set was better designed.

The trouble was that you *needed* a larger register set because of the "poor" instructions/addressing modes.

The 6800 worked very nicely with fewer registers since they were closer to being orthogonal general purpose registers. By comparison, the Z80 forced lots of register shuffling.

At that time the memory accesses weren't a problem since for many processors (99000 famously) the instruction timing was the same as memory access timing. That changed later, of course.

I'll disagree, for embedded systems at least. The code emitted by ?WhiteSmith's? C compiler for the Z80 was perfectly respectable. The only graunch I remember was i/o to a computed address having to be done by constructing the code on the stack, then executing it.

Reply to
Tom Gardner
