Portable Assembly

Re: Portable Assembly
Hi George,

Snow melt, yet?  :>  (105+ here, all week)

On 6/5/2017 9:24 PM, George Neuner wrote:
Quoted text here. Click to load it

Of course.  When you design a language, you seek to optimize some set
of *many* (often conflicting) design criteria.  Do you want it to be
portable?  Deterministic?  Require minimal keystrokes?  Lead to
pronounceable code?  etc.

Most "programmers" like to consider themselves "artists" -- in the same
vein as authors/novelists.  The fewest constraints on the way we practice
our craft (art?).  Imagine if all novels *had* to be composed entirely
of simple sentences built in a subject-predicate order.  Or, if every subject
had to be a proper noun, etc.

Alternatively, you can try to constrain the programmer (protect him from
himself) and *hope* he's compliant.

Of course, the range of applications is also significant.  A language
intended for scripting has a different set of goals than one that can
effectively implement an OS, etc.

Quoted text here. Click to load it

Yes.  But this moves the language's design choices on the axis
AWAY from (inherent) "portability".

You can write "portable" code, in C -- but, it requires a conscious decision
to do so (and a fair bit of practice to do so WELL).

And, what does it mean to claim some piece of code is "portable"?  That
it produces the same results without concern for resource usage, execution
speed, code size, etc.?  (who decided that THOSE criteria were more/less
important?)

For example, my BigDecimal package will tailor itself to the size of the
largest integer data type in the target.  But, this is often NOT what you
would want (it tends to make larger BigDecimals more space efficient), esp
on smaller machines!  E.g., on a 16b machine, there might be support for
ulonglong's that my code will exploit... but, if the inherent data type
is "ushort" and all CPU operations are geared towards that size data,
there will be countless "helper routines" invoked just to let my code
use these "unnaturally large" data types, even if they do so inefficiently
(and the code could just as easily have tailored itself to a data size closer
to ushort).
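
(A minimal sketch of that kind of compile-time tailoring -- the names and
the BIGDEC_FORCE_NARROW_LIMB switch are hypothetical, not lifted from my
actual package:)

#include <stdint.h>

/* Pick the "limb" type from what the target offers.  Default to the
 * widest type available; allow an override for targets whose natural
 * word is narrower, where long-long math drags in helper routines.   */
#if defined(BIGDEC_FORCE_NARROW_LIMB)
typedef uint16_t bigdec_limb_t;   /* matches a 16b CPU's natural word */
typedef uint32_t bigdec_acc_t;    /* holds one limb*limb product      */
#elif defined(UINT64_MAX)
typedef uint32_t bigdec_limb_t;
typedef uint64_t bigdec_acc_t;
#else
typedef uint16_t bigdec_limb_t;
typedef uint32_t bigdec_acc_t;
#endif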

Quoted text here. Click to load it

And, more contortions from programmers trying to "work-around" those
checks ("Yeah, I *want* to dereference NULL!  I want to 'jump 0x0000'")

Quoted text here. Click to load it

IME, that's the only way to get the degree of checking you *want*
(i.e., you, as an individual programmer on a specific project coding
a particular algorithm).  But, that voids many tools designed to protect
you from these evils.

Quoted text here. Click to load it

I'd considered coding my current project in C++ *just* for the
better type-checking.

E.g., I want new types to be syntactically treated as different
types, not just aliases:

typedef long int handle_t;        // reference for an OS object
typedef handle_t file_handle_t;   // reference for an OS *file* object
typedef handle_t lamp_handle_t;   // reference for an OS lamp object

extern void turn_on(lamp_handle_t theLamp);
extern void unlink(file_handle_t theFile);

int main(void)
{
    file_handle_t aFile;
    lamp_handle_t aLamp;

    ...  // initialization

    turn_on( (lamp_handle_t) aFile);
    unlink( (file_handle_t) aLamp);
}

should raise eyebrows!

Add the potential source of ambiguity that the IDL can introduce
and it's too easy for an "uncooperative" developer to craft code
that is way too cryptic and prone to errors.

Imagine poking bytes into a buffer called "stack_frame" and
then trying to wedge it "under" a function invocation...
sure, you can MAKE it work.  But, will you know WHY it works,
next week??

Re: Portable Assembly
On 06/06/17 16:03, Don Y wrote:

Quoted text here. Click to load it

You can get that in C - put your types in structs.  It's a pain for
arithmetic types, but works fine for handles.

typedef struct { long int h; } handle_t;
typedef struct { handle_t fh; } file_handle_t;
typedef struct { handle_t lh; } lamp_handle_t;

Now "turn_on" will not accept a lamp_handle_t object.


Re: Portable Assembly
On 6.6.17 17:58, David Brown wrote:
Quoted text here. Click to load it


Don seems to invent objects without objects.

--  

-TV


Re: Portable Assembly
wrote:

Quoted text here. Click to load it

45 and raining.  It has rained at least some nearly every day for the
last 2 weeks.  Great for pollen counts, bad for mold spores.



Quoted text here. Click to load it

But that's the thing ... programming in and of itself is a skill that
can be taught, but software "engineering" *IS* an art.

It is exactly analogous to sculpting or piano playing ... almost
anyone can learn to wield hammer and chisel, or to play notes on keys
- but only some can produce beauty in statue or music.


Quoted text here. Click to load it

Iambic meter?  Limerick?  Words that rhyme with "orange".


Quoted text here. Click to load it

Yes.  And my preference for a *general* purpose language is to default
to protecting the programmer, but to selectively permit use of more
dangerous constructs in marked "unsafe" code.


Quoted text here. Click to load it

Absolutely.  A scripting or domain specific language often does not
need to be Turing powerful (or even anywhere close).



Quoted text here. Click to load it

Lisp is safe by default, and even without type declarations a good
compiler can produce quite good code through extensive type/value
propagation analyses.

But by selectively adding declarations and local annotations, a Lisp
programmer can improve performance - sometimes significantly.  In many
cases, carefully tuned Lisp code can approach C performance.  Type
annotation in Lisp effectively tells the compiler "trust me, I know
what I'm doing" and lets the compiler elide runtime checks that it
couldn't otherwise eliminate, and sometimes switch to (smaller, faster)
untagged data representations.

There's nothing special about Lisp that makes it particularly amenable
to that kind of tuning - Lisp simply can benefit more than some other
languages because it uses tagged data and defaults to performing (almost)
all type checking at runtime.

[Aside: BiBOP still *effectively* tags data even where tags are not
explicitly stored.  In BiBOP systems the type of a value is deducible
from its address in memory.]


Modern type inferencing allows a "safer than C" language to use C-like
raw data representations, and to rely *mostly* on static type checking
without the need for a whole lot of type declarations and casting. But
type inferencing has limitations: with current methods it is not
possible to completely eliminate the need for declarations.  And as
with Lisp, most type inferencing systems can be assisted to generate
better code by selective use of annotations.

In any case, regardless of what type system is used, it isn't possible
to completely eliminate runtime checking IFF you want a language to be
safe: e.g., pointers issues aside, there's no way to statically
guarantee that range reduction will produce a value that can be safely
stored into a type having a smaller (bit) width.  No matter how
sophisticated the compiler becomes, there always will be cases where the
programmer knows better and should be able to override it.  
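
(A minimal C sketch of that unavoidable runtime check -- a hypothetical
function, not taken from any particular language runtime:)

#include <stdbool.h>
#include <stdint.h>

/* No static type system can prove this narrowing safe in general:
 * the value is only known at runtime, so a check has to remain.    */
bool store_narrow(int32_t value, int8_t *out)
{
    if (value < INT8_MIN || value > INT8_MAX)
        return false;             /* out of range: refuse to store  */
    *out = (int8_t)value;         /* now provably in range          */
    return true;
}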


But even with these limitations, there are languages that are useful
now and do far more of what you want than does C.


YMMV,
George

Re: Portable Assembly
On 6/6/2017 1:42 PM, George Neuner wrote:
Quoted text here. Click to load it

Yeah, until it eases up a bit and all that stuff comes into BLOOM!  :<
We're paying the price for a Spring that came 4 weeks early...

Quoted text here. Click to load it

Exactly.  But, there are a sh*tload of folks who THINK themselves
"artists" that really should see how the rest of the world views
their "art"!  :>

The problem is that you have to *design* systems with these
folks in mind as their likely "maintainers" and/or "evolvers".

So, even if you're "divinely inspired", you have to manage to either
put in place a framework that effectively guides (constrains!) their
future work to remain consistent with that design...

Or, leave copious notes and HOPE they read them, understand them and
take them to heart...

Or, create mechanisms (tools) that cripple the budding artistry they
(think) possess!  :>

Quoted text here. Click to load it

Yes.  Now, imagine The David needing a 21st century "update".
Michelangelo is "unavailable" for the job.  <grin>

Do you find the current "contemporary master" (of that art form)
and hire him/her to perform the task?  Or, some run-of-the-mill
guy from Sculptors University, Class of 2016??

If you're only concerned with The David *and* have the resources that
such an asset would command, you can probably afford to have the
current Master tackle the job.

OTOH, if you've got a boatload of similar jobs (The David, The Rita,
The Bob, The Bethany, The Harold, The Gretchen, etc.), that one artist
may decide he's tired of being asked to "tweak" the works of past
artists and want a commission of his own!  Or, simply not have time
enough in his schedule to get to all of them at the pace you desire!

Quoted text here. Click to load it

And, count on the DISCIPLINE of all these would-be Michelangelos to
understand (and admit!) their own personal limitations prior to enabling
those constructs?

Like telling a 16 year old "keep it under 35MPH"...  <grin>

Quoted text here. Click to load it

What you (ideally) want, is to be able to "set a knob" on the 'side' of
the language to limit its "potential for misuse".  But, to do so in a
way that the practitioner doesn't feel intimidated/chastened at its
apparent "setting".

How do you tell a (as yet, unseen!) developer "you're not capable of
safely using pointers to functions?  Or, recursive algorithms?  Or,
self-modifying code?  Or, ...  But, *I* am!"  :>

(Returning to "portability"...)

Even if I can craft something that is portable under some set of
conditions/criteria that I deem appropriate -- often by leveraging
particular features of the language of a given implementation
thereof -- how do I know the next guy will understand those issues?
How do I know he won't *break* that aspect (portability) -- and
only belatedly discover his error (two years down the road when the
code base moves to a Big Endian 36b processor)?

It's similar to trying to ensure "appropriate" documentation
accompanies each FUTURE change to the system -- who decides
what is "appropriate"?  (Ans:  the guy further into the future who
can't sort out the changes made by his predecessor!)

Quoted text here. Click to load it

I.e., "trust the programmer" -- except in those cases where you can't!  :>

Quoted text here. Click to load it

But it still requires the programmer to know what he's doing; the compiler
takes its cues from the programmer's actions!

How often have you seen a bare int used as a pointer?  Or, vice versa?
("Yeah, I know what you *mean* -- but that's not what you *coded*!")

Quoted text here. Click to load it

Exactly.  Hence the contradictory issues at play:
- enable the competent
- protect the incompetent

Quoted text here. Click to load it

But, when designing (or choosing!) a language, one of the dimensions
in your decision matrix has to be the availability of that language AND
of practitioners who already have it in their skillsets.

Being "better" (in whatever set of criteria) isn't enough to ensure
acceptance or adoption (witness the Betamax).

ASM saw widespread use -- not because it was the BEST tool for the
job but, rather, because it was (essentially) the ONLY game in town
(in the early embedded world).  Amusing that we didn't repeat the same
evolution of languages that was the case in the "mainframe" world
(despite having comparable computational resources to those
ANCIENT machines!).

The (early) languages that we settled on were simple to implement
on the development platforms and with the target resources.   It's
only as targets have become more resource-rich that we're exploring
richer execution environments (and the attendant consequences of
that for the developer).

Re: Portable Assembly
wrote:

Quoted text here. Click to load it

Or adopt a throw-away mentality: replace rather than maintain.  

That basically is the idea behind the whole agile/devops/SaaS
movement: if it doesn't work today, no problem - there will be a new
release tomorrow [or sooner].


Quoted text here. Click to load it

You know what they say: one decent Lisp programmer is worth 10,000
python monkeys.


Quoted text here. Click to load it

No problem: robots and 3-D printers will take care of that.  Just read
an article that predicts AI will best humans at *everything* within 50
years.



Quoted text here. Click to load it

For the less experienced, fear, uncertainty and doubt are better
counter-motivators than is any amount of discipline.   When a person
believes (correctly or not) that something is hard to learn or hard to
use, he or she usually will avoid trying it for as long as possible.

The basic problem with C is that some of its hard-to-master concepts
are dangled right in the faces of new programmers.


For almost any non-system application, you can do without (explicit
source level) pointer arithmetic.  But pointers and the address
operator are fundamental to function argument passing and returning
values (note: not "value return"), and it's effectively impossible to
program in C without using them.
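
(A minimal, hypothetical example of the point -- the very first "out"
parameter a newcomer writes already needs & and *:)

#include <stdio.h>

static void divide(int dividend, int divisor, int *quotient, int *remainder)
{
    *quotient  = dividend / divisor;   /* write results through the pointers */
    *remainder = dividend % divisor;
}

int main(void)
{
    int q, r;
    divide(17, 5, &q, &r);             /* caller passes addresses */
    printf("17 / 5 = %d rem %d\n", q, r);
    return 0;
}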

This pushes newbies to learn about pointers, machine addressing and
memory management before many are ready.  There is plenty else to
learn without *simultaneously* being burdened with issues of object
location.

Learning about pointers then invariably leads to learning about
arithmetic on pointers because they are covered together in most
tutorials.

Keep in mind that the majority of people learning and using C (or C++)
today have no prior experience with hardware or even with programming
in assembler.  If C isn't their 1st (non-scripting) language then most
likely their prior experiences were with "safe", high level, GC'd
languages that do not expose object addressing: e.g., Java, Scheme,
Python, etc. ... the commonly used "teaching" languages.

For general application programming, there is no need for a language
to provide mutable pointers: initialized references, together with
array (or stream) indexing and struct/object member access are
sufficient for virtually any non-system programming use.  This has
been studied extensively and there is considerable literature on the
subject.

[Note also I am talking about what a programmer is permitted to do at
the source code level ... what a compiler does to implement object
addressing under the hood is beside the point.]


<frown>

Mutable pointers are just the tip of the iceberg: I could write a
treatise on the difficulties / frustrations of the *average*
programmer with respect to manual memory management, limited precision
floating point, differing logical "views" of the same data,
parallelism, etc. ...
... and how C's "defects" with regard to safe *application*
programming conspire to add to their misery.

But this already is long and off the "portability" topic.



Quoted text here. Click to load it

Look at Racket's suite of teaching and extension languages.  They all
are implemented over the same core language (an extended Scheme), but
they leverage the flexibility of the core language to offer different
syntaxes, different semantics, etc.    

In the case of the teaching languages, there is reduced functionality,
combined with more newbie friendly debugging output, etc.

http://racket-lang.org/
https://docs.racket-lang.org/htdp-langs/index.html

And, yeah, the programmer can change which language is in use with a
simple "#lang <_>" directive, but the point here is the flexibility of
the system to provide (more or less) what you are asking for.



Quoted text here. Click to load it

You don't, and there is little you can do about it.  You can try to be
helpful - e.g., with documentation - but you can't be responsible for
what the next person will do.

No software truly is portable except that which runs on an abstract
virtual machine.  As long as the virtual machine can be realized on a
particular base platform, the software that runs on the VM is
"portable" to that platform.


Quoted text here. Click to load it

Again, you are only responsible for what you do.



Quoted text here. Click to load it

The modern concept of availability is very different than when you had
to wait for a company to provide a turnkey solution, or engineer
something yourself from scratch.  Now, if the main distribution
doesn't run on your platform, you are likely to find source that you
can port yourself (if you are able), or if there's any significant
user base, you may find that somebody else already has done it.

Tutorials, reference materials, etc. are a different matter, but the
simpler and more uniform the syntax and semantics, the easier the
language is to learn and to master.

question: why in C is *p.q == p->q  
                  but *p   != p  
              and  p.q != p->q

followup: given coincidental addresses and modulo a cast,  
          how is it that *p can == *p.q

Shit like this makes a student's head explode.
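
(For the record, a minimal sketch of the precedence behind that question,
using a hypothetical struct: '.' binds tighter than unary '*', so *p.q
parses as *(p.q), while (*p).q is the same thing as p->q:)

#include <assert.h>

struct node {
    int  value;
    int *q;                      /* pointer member, so *(s.q) is legal */
};

int main(void)
{
    int          n = 42;
    struct node  s = { 7, &n };
    struct node *p = &s;

    assert((*p).q == p->q);      /* same member, two spellings        */
    assert(*s.q == 42);          /* *s.q is *(s.q), i.e. *(&n)        */
    return 0;
}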


In Pascal, the pointer dereference operator '^' and the record
(struct) member access operator '.'  were separate and always used
consistently.  The type system guarantees that p and p^ and p^.q  can
never, ever be the same object.

This visual and logical consistency made Pascal easier to learn.  And
not any less functional.

My favorite dead horse - Modula 3 - takes a similar approach.  Modula
3 is both a competent bare metal system language AND a safe OO
application language.  It does a whole lot more than (extended) Pascal
- yet it isn't that much harder to learn.

It is possible to learn Modula 3 incrementally: leaving advanced
subjects such as where objects are located in memory and when it's
safe to delete() them - until you absolutely need to know.  

And if you stick to writing user applications in the safe subset of
the language, you may never need to learn it: Modula 3 uses GC by
default.


Quoted text here. Click to load it

Unfortunately.



There never was any C compiler that ran on any really tiny machine.
Ritchie's technotes on the development of C stated that the original
1972 PDP-11 compiler had to run in ~6KB (all that was left after
loading Unix), required several passes, and really was not usable
until the machine was given a hard disk.  Note also that that 1st
compiler implemented only a subset of K&R1.

K&R1 - as described in the book - was 1st implemented in 1977 and I
have never seen any numbers on the size of that compiler.


The smallest K&R1 compiler I can remember that *ran* on an 8-bit micro
was circa 1983.  It was a few hundred KB of code.  It ran in 48KB
using overlays, needed 2 floppy drives or a hard disk, and required 2
compile passes per source file and a final link pass.  

It was quite functional (if glacially slow), and included program code
overlay support and emulated single precision FP (in VAX format IIRC).
Although it targeted a 16-bit virtual machine with 6 16-bit registers,
it produced native 8-bit code : i.e. the "16-bit VM" program was not
interpreted, but was emulated by 8-bit code.

As part of the pro package (and available separately for personal use)
there also was a bytecode compiler that allowed packing much larger
applications (or their data) into memory.  It had all the same
features as the native code compiler, but produced interpreted code
that ran much slower.  You could use both native and interpreted code
in the same application via overlays.


There existed various subset C compilers that could run in less than
48KB, but most of them were no more than expensive learning toys.



But even by the standard of "the compiler could run on the machine",
there were languages better suited than C for application programming.

Consider that in the late 70's there already were decent 8-bit
implementations of BASIC, BCPL, Logo, SNOBOL, etc.  (Extended) Pascal,
Smalltalk, SNOBOL4, etc. became available in the early 80's for both 8
and 16-bit systems.  But C really wasn't useable on any micro prior to
~1985 when reasonably<?> priced hard disks appeared.

Undoubtedly, AT&T giving away Unix to colleges from 1975..1979 meant
that students in that time frame would have gained some familiarity
with C.   16-bit micros powerful enough to really be characterized as
useful "development" systems popped out in the early 80's as these
students would have been graduating (or shortly thereafter).

But they were extremely expensive: tens of thousands of dollars for a
usable system.  You'd have to mortgage your home to afford one, which
is not something the newly employed, with looming college loans, would do
lightly.  And sans hard disk (more $$$), you'd manage only one or two
compiles a day.

Turbo Pascal was the 1st really usable [in the modern sense]
development system.  It did not need a hard disk and it hit the
market before commodity hard disks were widely available.


The question is not why C was adopted for system programming, or for
cross development from a capable system to a smaller target.  Rather
the question is why it was so widely adopted for ALL kinds of
programming on ALL platforms given that there were many other reasonable
choices available.


YMMV. I remain perplexed.
George

Re: Portable Assembly
On 08.6.2017 13:38, George Neuner wrote:
Quoted text here. Click to load it

My take on that is it happened because people needed a low level
language, some sort of assembler - and the most widespread CPU was
the x86, with a register model for which no sane person would consider
programming larger pieces of code.
I am sure there have been people who have done
it but they can't have been exactly sane :) (i.e. have been insane in
a way most people would have envied them for their insanity).
So C made x86 usable - and the combination (C+x86) is the main factor
which led to the absurd situation we have today, where code which
used to take kilobytes of memory takes gigabytes (not because of the
inefficiency of compilers, just because of where most programmers
have been led to).

Dimiter

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/



Re: Portable Assembly
wrote:

Quoted text here. Click to load it

PL/M-80 and PL/M-86 were quite reasonable intermediate languages.

The same also applies to BLISS for PDP-10/PDP-11/VAX/Alpha and
recently some Intel HW.  

The reason these languages did not become popular was that the
hardware vendors wanted to make money from compiler sales.

Some HW companies, wanting to boost their HW sales, gave away
compilers and development software for free and boosted their
HW sales that way.


Re: Portable Assembly
snipped-for-privacy@downunder.com wrote:
Quoted text here. Click to load it

PL/M was fairly hard to maintain in.  A couple of lines of C would
replace a page of PL/M, in some cases.

Quoted text here. Click to load it

Gates identified massive cognitive dissonance against the idea
of selling software en masse and set the tools price quite low.

"Hardware vendors" like IBM had OS/360, where the O/S cost
more than the machine.

People still don't want to pay for software.

Quoted text here. Click to load it

That's more generally true of chip vendors, who use the tools as
an enabler for sales.

--  
Les Cargill



Re: Portable Assembly
Dimiter_Popoff wrote:
Quoted text here. Click to load it

I doubt that, but there's something to be said for architectures
that limit the complexity available to the financial/investor
classes.

The trouble with large swaths of assembly is that organizations
aren't stable enough to support maintainers long enough to keep things
running.

The game here is funding, not function.

Quoted text here. Click to load it

Most of the people who need gigs of memory aren't even native C
speakers.

It took the execrable web protocols and "relational databases"
to make things utterly reek of doom. These are fine for
toy programs to get you through a course, but no fun at all
for Real Work(tm).

Behold the $400, WiFi-enabled juicer: Juicero.

Quoted text here. Click to load it
--  
Les Cargill

Re: Portable Assembly
On 08/06/17 11:38, George Neuner wrote:
Quoted text here. Click to load it

Whitesmiths? IIRC the symbol table size became a limiting
factor during linking, so linking became multipass :(


Quoted text here. Click to load it

I always found that remarkable, since the Algol-60 compiler ran in
4K words of 2 instructions/word.


Quoted text here. Click to load it

I'll debate Smalltalk :) Apple's implementation (pre L Peter
Deutsch's JIT) was glacially slow. I know: it is still running
on my fat Mac downstairs :)


Quoted text here. Click to load it

Yes indeed.

Fortunately The New Generation has seen the light, for better
and for worse.

But then if you make it possible to program in English,
you will find that people cannot think and express
themselves in English.


Re: Portable Assembly
On 8.6.17 16:50, Tom Gardner wrote:
Quoted text here. Click to load it

Must be. It ran on CP/M machines.

Quoted text here. Click to load it

You mean Elliott 803 / 503?

It also had an overlay structure.  If the program grew above a
certain limit, it was dumped out in an intermediate format, and
the operator needed to feed in the second compiler pass paper
tape and the intermediate code ('owncode') to get the final
run code.

--  

-TV

Re: Portable Assembly
On 08/06/17 15:38, Tauno Voipio wrote:
Quoted text here. Click to load it


Yes and yes.

I saw a running 803 a couple of weeks ago, and discussed
the circuit diagrams with the staff member there.


Re: Portable Assembly
On Thu, 8 Jun 2017 14:50:06 +0100, Tom Gardner

Quoted text here. Click to load it

No, it was the Aztec compiler from Manx.

I'm not aware that Whitesmith ever ran on an 8-bit machine.  The
versions I remember were for CP/M-86 and PC/MS-DOS.  My (maybe faulty)
recollection is that Whitesmith was enormous: needing at least 512KB
and a hard disk to be useful.

I remember at one time using Microsoft's C compiler on 1.2MB floppies
and needing half a dozen disk swaps to compile "hello world!".


Quoted text here. Click to load it

Must have been written in assembler - I would have loved to have seen
that.  


Quoted text here. Click to load it

I agree that Apple's version was slow - I maybe never saw the version
with JIT - but ParcPlace Smalltalk ran very well on a FatMac.

I had a Smalltalk for my Apple IIe. It needed 128KB so required a IIe
or a II with language card.  It used a text based browser and ran
quite acceptably for small programs.  Unfortunately, the version I had
was not able to produce a separate executable.


Unfortunately, after too many moves, I no longer have very much of the
early stuff.  I never figured on it becoming valuable.

George

Re: Portable Assembly
On 09/06/17 19:14, George Neuner wrote:
Quoted text here. Click to load it

You probably still can. Certainly the 803 was playing
tunes a month ago.

http://www.tnmoc.org/news/notes-museum/iris-atc-has-hiccup-and-elliott-803-store-fault-returns




Quoted text here. Click to load it

I never saw PP Smalltalk on a fat mac. L Peter Deutsch's JIT
was a significant improvement.

I moved onto Smalltalk/V on a PC, a Tek Smalltalk machine,
and then Objective-C.

Later I was surprised to find that both Tek and HP had
embedded Smalltalk in some of their instruments.


Quoted text here. Click to load it

I'm collecting a bit now; I was surprised the fat mac


Re: Portable Assembly
On 10/06/17 04:14, George Neuner wrote:
Quoted text here. Click to load it

I built my first software product like that, a personal filing system.
I was very glad when we got a 5MB disk drive, and didn't have to swap
disks any more. It was even better when, a few months later (1983) we
got MS-DOS 2, with mkdir/rmdir, so not all files were in the root
directory any more.

Re: Portable Assembly
On 6/8/2017 3:38 AM, George Neuner wrote:
Quoted text here. Click to load it

I think those are just enablers for PHB's who are afraid to THINK
about what they want (in a product/design) and, instead, want to be shown
what they DON'T want.

I encountered a woman who was looking for a "mobility scooter" a week or
two ago.  I showed her *one* and she jumped at the opportunity.  I
quickly countered with a recommendation that some OTHER form of "transport"
might be better for her:
     "The scooter has a wide turning radius.  If you head down a hallway
     (i.e., in your home) and want to turn around, you'll have to either
     continue in the current direction until you encounter a wider space
     that will accommodate the large turning radius *or* travel backwards
     retracing your steps.  A powerchair will give you a smaller turning
     radius.  An electric wheelchair tighter still!"
She was insistent on the scooter.  Fearing that she was clinging to it
as the sole CONCRETE example available, I told her that I also had
examples of each of the other options available.

[I was fearful of getting into a situation where I refurbished one
"transport device", sent her home with it -- only to find her returning
a week later complaining of its limitations, and wanting to "try another
option"]

In this case, she had clearly considered the options and come to the
conclusion that the scooter was best suited to her needs:  the chair
options tend to be controlled by a joystick interface whereas the
scooter has a tiller (handlebars) and "speed setting".  For her,
the tremor in her hands made the fine motor skills required to interact
with the joystick impractical.  So, while the scooter was less
maneuverable (in the abstract sense), it was more CONTROLLABLE in her
particular case.  She'd actively considered the options instead of
needing to "see" each of them (to discover each of their shortcomings).

Quoted text here. Click to load it

Yeah, Winston told me that... 40 years ago!  :>

Quoted text here. Click to load it

Or, will think they are "above average" and, thus, qualified to KNOW
how to use/do it!

Quoted text here. Click to load it

I think the problem is that the "trickier" aspects aren't really
labeled as such.

I know most folks would rather tackle a multiplication problem than
an equivalent one of division.  But, they've learned (from experience)
of the relative costs/perils of each.  It's not like there is a
big red flag on the chapter entitled "division" that warns of Dragons!

Quoted text here. Click to load it

But, if you'd had a formal education in CS, it would be trivial to
semantically map the mechanisms to value and reference concepts.
And, thinking of "reference" in terms of an indication of WHERE
it is!  etc.

Similarly, many of the "inconsistencies" (to noobs) in the language
could easily be explained with "common sense":
- why aren't strings/arrays passed by value?  (think about how
   ANYTHING is passed by value; the answer should then be obvious)
- the whole notion of references being IN/OUTs
- gee, const can ensure an IN can't be used as an OUT!  (see the
   sketch below)
etc.
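
(A minimal sketch of that IN/OUT convention, with hypothetical names:)

#include <stddef.h>

/* 'src' is an IN: const makes writing through it a compile-time error.
 * 'dst' is an OUT (or IN/OUT): the callee may write through it.        */
void copy_samples(int *dst, const int *src, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        dst[i] = src[i];
        /* src[i] = 0;   -- error: assignment of read-only location */
    }
}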

I think the bigger problem is that folks are (apparently) taught
"keystrokes" instead of "concepts":  type THIS to do THAT.

Quoted text here. Click to load it

Then approach the topics more incrementally.  Instead of introducing
the variety of data types (including arrays), introduce the basic
ones.  Then, discuss passing arguments -- and how they are COPIED into
a stack frame.

This can NATURALLY lead to the fact that you can only "return" one
datum; which the caller would then have to explicitly assign to
<whatever>.  "Gee, wouldn't it be nice if we could simply POINT to
the things that we want the function (subroutine) to operate on?"

Then, how you can use references to economize on the overhead of passing
large objects (like strings/arrays) to functions.

Etc.
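
(E.g., a minimal sketch -- hypothetical types -- of that "economize on
copying" step:)

#include <stddef.h>

typedef struct {
    double samples[1024];      /* ~8 KB: expensive to copy */
} waveform_t;

/* The entire 8 KB struct is copied into the stack frame on every call. */
double mean_by_value(waveform_t w)
{
    double sum = 0.0;
    for (size_t i = 0; i < 1024; i++)
        sum += w.samples[i];
    return sum / 1024.0;
}

/* Only a pointer is copied; const documents that 'w' is not modified. */
double mean_by_reference(const waveform_t *w)
{
    double sum = 0.0;
    for (size_t i = 0; i < 1024; i++)
        sum += w->samples[i];
    return sum / 1024.0;
}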

I just think the teaching approach is crippled.  It's driven by industry
with the goal of getting folks who can crank out code, regardless of
quality or comprehension.

Quoted text here. Click to load it

But you can still expose a student to the concepts of the underlying
machine, regardless of language.  Introduce a hypothetical machine...
something with, say, memory and a computation unit.  Treat memory
as a set of addressable "locations", etc.  My first "computer texts"
all presented a conceptual model of a "computer system" -- even though
the languages discussed (e.g., FORTRAN) hid much of that from the
casual user.

Instead, there's an emphasis on idioms and tricks that aren't portable
and confuse the issue(s).  It's like teaching a student driver about the
infotainment system in the vehicle instead of how the brake and accelerator
operate.

Quoted text here. Click to load it

But then you force the developer to pick different languages for
different aspects of a problem.  How many folks are comfortable
with this "application specific" approach to *a* problem's solution?

E.g., my OS is coded in C and ASM.  Most of the core services are
written in C (so I can provide performance guarantees) with my bogus
IDL to handle RPC/IPC.  The RDBMS server is accessed using SQL.
And, "applications" are written in my modified-Limbo.

This (hopefully) "works" because most folks will only be involved
with *one* of these layers.  And, folks who are "sufficiently motivated"
to make their additions/modifications *work* can resort to cribbing
from the existing parts of the design -- as "examples" of how they
*could* do things ("Hey, this works; why not just copy it?")

OTOH, if someone had set out to tackle the whole problem in a single
language/style...   <shrug>

Quoted text here. Click to load it

I'm sure you've worked in environments where the implementation
was "dictated" by what appeared to be arbitrary constraints:
will use this language, these tools, this process, etc.  IME,
programmers *chafe* at such constraints.  Almost as if they were
personal affronts ("*I* know the best way to tackle the problem
that *I* have been assigned!").  Imagine how content they'd be
knowing they were being told to eat at the "kiddie table".

I designed a little serial protocol that lets me daisy-chain
messages through simple "motes".  The protocol had to be simple
and low overhead as the motes are intended to be *really*
crippled devices -- at best, coded in C (on a multitasking
*executive*, not even a full-fledged OS) and, more likely,
in ASM.

When I went to code the "host" side of the protocol, my first
approach was to use Limbo -- this should make it more maintainable
by those who follow (goal is to reduce the requirements imposed
on future developers as much as possible).

But, I was almost literally grinding my teeth as I was forced to
build message packets in "byte arrays" with constant juggling
of array indices, etc.  (no support for pointers).  I eventually
"rationalized" that this could be viewed as a "core service"
(communications) and, thus, suitable for coding along the same
lines as the other services:  in C!  :>

An hour later, the code is working and (to me) infinitely more
intuitive than a bunch of "array slices" and "casts".

Quoted text here. Click to load it

Of course!  My approach is to exploit laziness and greed.  Leave
bits of code that are RIPE for using as the basis for new services
("templates", of sorts).  And, let the developer feel he can do
whatever he wants -- if he's willing to bear the eventual cost
for those design decisions (which might include users opting not
to deploy his enhancements!)

Quoted text here. Click to load it

But, you can use the same lazy/greedy motivators there, as well.
E.g., my gesture recognizer builds the documentation for the
gesture from the mathematical model of the gesture.  This
relieves the developer from that task, ensures the documentation
is ALWAYS in sync with the implementation *and* makes it trivial
to add new gestures by lowering the effort required.

Quoted text here. Click to load it

That works for vanilla implementations.  It leads to all designs
looking like all others ("Let's use a PC for this!").  This is
fine *if* that's consistent with your product/project goals.
But, if not, you're SoL.

Or, faced with a tool porting/development task that exceeds the
complexity of your initial problem.

Quoted text here. Click to load it

But C is lousy for its use of graphemes/glyphs.  You'd think
K&R were paraplegics given how stingy they are with keystrokes!

Or, supremely lazy!  (or, worse, think *us* that lazy!)

[I guess it could be worse; they could have forced all
identifiers to be single character!]

Quoted text here. Click to load it

The same is largely true of Ada.  But, with Ada, you end up knowing
an encyclopaedic language that, in most cases, is overkill and affords
little for nominal projects.

An advantage of ASM was that there were *relatively* few operators
and addressing modes, etc.  Even complex instructions could be reliably
(*and* mechanically) "decoded".  You didn't find yourself wondering
if something was a constant pointer to variable data, or a variable
pointer to constant data, or a constant pointer to constant data, or...

And, ASM syntax tended to be more "fixed form".  There wasn't as much
poetic license to how you expressed particular constructs.

E.g., I instinctively write "&array[0]" instead of "array" (depending on
the use).
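
(A minimal, hypothetical example of why either spelling names the same
address as an argument -- and why being explicit can help:)

#include <stddef.h>
#include <stdio.h>

static void dump(const int *p, size_t n)
{
    for (size_t i = 0; i < n; i++)
        printf("%d ", p[i]);
    printf("\n");
}

int main(void)
{
    int array[4] = { 1, 2, 3, 4 };

    dump(array, 4);          /* implicit array-to-pointer decay  */
    dump(&array[0], 4);      /* same pointer value, spelled out  */
    /* note: &array has type int (*)[4], not int *               */
    return 0;
}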

Quoted text here. Click to load it

Doesn't have to run *on* a tiny machine.  It just had to generate code
that could run on a tiny machine!

E.g., we used an 11 to write our i4004 code; the idea of even something
as crude as an assembler running *ON* an i4004 was laughable!

Quoted text here. Click to load it

It wasn't uncommon for early *assemblers* to require multiple passes.

I built some small CP/M based development systems for an employer
many years ago.  To save a few bucks, he opted to deploy most of them
(mine being the exception!   :> ) with a single 1.4M floppy.  The
folks using them were ecstatic as they were so much faster than the
ZDS boxes we'd used up to then (hard sectored floppies, etc.).

But, had the boss *watched* folks using them and counted the amount
of time LOST swapping floppies (esp when you wanted to make a backup
of a floppy!), he'd have realized how foolhardy his "disk economy"
had been!

Quoted text here. Click to load it

Whitesmith's and Manx?

JRT Pascal ($19.95!) ran on small CP/M boxes.  IIRC, there was an M2 that
also ran, there.  And, MS had a BASIC compiler.

Quoted text here. Click to load it

But you didn't have to rely on having a home system to write code.
Just like most folks don't *rely* on having home internet to access
the web, email, etc.

If you're still in school, there's little to prevent you from using
their tools for a "personal project".  Ditto if employed.  The only
caveat being "not on company time".

Quoted text here. Click to load it

Look at them, individually.  And, at the types of products that
were being developed in that time frame.

You could code most algorithms *in* BASIC.  But, if forced into a
single-threaded environment, most REAL projects would fall apart
(cuz the processor would be too slow to get around to polling
everything AND doing meaningful work).  I wrote a little BASIC
compiler that targeted the 647180 (one of the earliest SoC's).

It was useless for product development.  But, great for throwing
together dog-n-pony's for clients.  Allow multiple "program
counters" to walk through ONE executable and you've got an effective
multitasking environment (though with few RT guarantees).  Slap
*one* PLCC in a wirewrap socket with some misc signal conditioning/IO
logic and show the client a mockup of a final product in a couple
of weeks.

[Then, explain why it was going to take several MONTHS to go from
that to a *real* product!  :> ]

SNOBOL is really only useful for text processing.  Try implementing
Bresenham's algorithm in it -- or any other DDA.  This sort of thing
highlights the differences between "mainframe" applications and
"embedded" applications.

Ditto Pascal.  How much benefit is there, in controlling a motor,
from high level math and flagrant automatic type conversion?

Smalltalk?  You *do* know how much RAM cost in the early 80's??

Much embedded coding could (today) be done with as crippled a
framework as PL/M.  What you really want to do is give the developer
some syntactic freedom (e.g., infix notation for expressions)
and relieve him of the minutiae of setting up stack frames,
tracking binary points, etc.
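
(A minimal sketch of the "tracking binary points" chore, using a
hypothetical Q16.16 type:)

#include <stdint.h>

typedef int32_t q16_16_t;        /* 16 integer bits, 16 fraction bits */

/* The radix point is pure bookkeeping: a product of two Q16.16 values
 * is momentarily Q32.32, and the programmer must rescale it by hand.  */
static q16_16_t q_mul(q16_16_t a, q16_16_t b)
{
    return (q16_16_t)(((int64_t)a * b) >> 16);
}

static q16_16_t q_from_int(int16_t x)
{
    return (q16_16_t)x << 16;    /* shift the value up to the integer field */
}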

C goes a long way towards that goal without favoring a particular
application domain.  And, because it's relatively easy to "visualize"
what is happening "behind the code", it's easy to deploy applications
coded in it in multiple different environments.

[By contrast, think about how I tackled the multitasking BASIC
implementation and how I'd have to code *for* that implementation
to avoid "unexpected artifacts"]

Quoted text here. Click to load it


Re: Portable Assembly
wrote:

Quoted text here. Click to load it

IME most people [read "clients"] don't really know what they want
until they see what they don't want.

Most people go into a software development effort with a reasonable
idea of what it should do ... subject to revision if they are allowed
to think about it ... but absolutely no idea what it should look like
until they see - and reject - several demos.

The entire field of "Requirements Analysis" would not exist if people
knew what they wanted up front and could articulate it to the
developer.



Quoted text here. Click to load it

But only a small fraction of "developers" have any formal CS, CE, or
CSE education.  In general, the best you can expect is that some of
them may have a certificate from a programming course.


Quoted text here. Click to load it

That's true ... but then you get perfectly reasonable questions like
"why aren't parameters marked as IN or OUT?", and have to dance around
the fact that the developers of the language were techno-snobs who
didn't expect that clueless people ever would be trying to use it.

Or "how do I ensure that an OUT can't be used as an IN?"  Hmmm???


Quoted text here. Click to load it

There is an element of that.  But also there is the fact that many who
can DO cannot effectively teach.

I knew someone who was taking a C programming course, 2 nights a week
at a local college.  After (almost) every class, he would come to me
with questions and confusions about the subject matter.  He remarked
on several occasions that I was able to teach him more in 10 minutes
than he learned in a 90 minute lecture.


Quoted text here. Click to load it

A what frame?  

I once mentioned "stack" in a response to a question posted in another
forum.  The poster had proudly announced that he was a senior in a CS
program working on a midterm project.  He had no clue that "stacks"
existed other than as abstract notions, didn't know the CPU had one,
and didn't understand why it was needed or how his code was faulty for
(ab)using it.

So much for "CS" programs.


Quoted text here. Click to load it

Huh?  I saw once in a textbook that <insert_language> functions can
return more than one object.  Why is this language so lame?



Quoted text here. Click to load it

You and I have had this discussion before [at least in part].  

CS programs don't teach programming - they teach "computer science".
For the most part CS students simply are expected to know.  

CSE programs are somewhat better because they [purport to] teach
project management: selection and use of tool chains, etc.  But that
can be approached largely in the abstract as well.

Many schools are now requiring that a basic programming course be
taken by all students, regardless of major.  But this is relatively
recent, and the language de choix varies widely.



Quoted text here. Click to load it

That's covered in a separate course: "Computer Architecture 106".  It
is only offered Monday morning at 8am, and it costs another 3 credits.


Quoted text here. Click to load it

Every intro computer text introduces the hypothetical machine ... and
spends 6-10 pages laboriously stretching out the 2 sentence description
you gave above.  If you're lucky there will be an illustration of an
array of memory cells.

Beyond that, you are into specialty texts.



Quoted text here. Click to load it

Go ask this question in a Lisp forum where writing a little DSL to
address some knotty aspect of a problem is par for the course.


Quoted text here. Click to load it

What does CLIPS use?

By my count you are using 6 different languages ... 4 or 5 of which
you can virtually count on the next maintainer not knowing.

What would you have done differently if C were not available for
writing your applications?  How exactly would that have impacted your
development?
  

Quoted text here. Click to load it

Above you complained about people being taught /"keystrokes" instead
of "concepts":  type THIS to do THAT./  and something about how that
led to no understanding of the subject.



Quoted text here. Click to load it

It would be a f_ing nightmare.  That's precisely *why* you *want* to
use a mix of languages: often the best tool is a special purpose
domain language.



Quoted text here. Click to load it

If the tool is Racket, it supports creating, using and ad-mixing any
special purpose domain languages you are able to come up with.

<grin>

Racket isn't the only such versatile tool ... it's just the one I
happened to have at hand.


Quoted text here. Click to load it

Yeah ... well the world is going that way.  My electric toothbrush is
a Raspberry PI running Linux.



Quoted text here. Click to load it

Depends on the chip.  Modern x86_64 chips can have instructions up to
15 bytes (120 bits) long. [No actual instruction *is* that long, but
that is the maximum the decoder will accept.]



Quoted text here. Click to load it

Cross compiling is cheating!!!

In most cases, it takes more resources to develop a program than to
run it ... so if you have a capable machine for development, why do
need a *small* compiler?

A small runtime footprint is a different issue, but *most* languages
[even GC'd ones] are capable of operating with a small footprint.

Once upon a time, I created a Scheme-like GC'd language that could do
a hell of a lot in 8KB total for the compiler, runtime, a reasonably
complex user program and its data.


Quoted text here. Click to load it

My point exactly.  In any case, you wouldn't write for the i4004 in a
compiled language.  Pro'ly not for the i8008 either, although I have
heard claims that that was possible.


Quoted text here. Click to load it

But we aren't talking about *embedded* applications ... we're talking
about ALL KINDS of applications on ALL KINDS of machines.

You view everything through the embedded lens.


Quoted text here. Click to load it

I don't even understand this.


Quoted text here. Click to load it

Yes, I do.

I also know that I had a Smalltalk development system that ran on my
Apple IIe.  Unfortunately, it was a "personal" edition that was not
able to create standalone executables ... there was a "professional"
version that could, but it was too expensive for me ... so I don't
know how small a 6502 Smalltalk program could have been.

I also had a Lisp and a Prolog for the IIe.  No, they did not run in
4KB, but they were far from useless on an 8-bit machine.

George

Re: Portable Assembly
On 6/9/2017 7:14 PM, George Neuner wrote:
Quoted text here. Click to load it

I've typically only found that to be the case when clients (often
using "their own" money) can't decide *if* they want to enter a
particular market.  They want to see something to gauge their own
reaction to it:  is it an exciting product or just another warmed over
stale idea.

I used to make wooden mockups of devices just to "talk around".
Then foamcore.  Then, just 3D CAD sketches.

But, how things work was always conveyed in prose.  No need to see
the power light illuminate when the power switch was toggled.  If
you can't imagine how a user will interact with a device, then
you shouldn't be developing that device!

The only "expensive" dog-and-pony's were cases where the underlying
technology was unproven.  Typically mechanisms that weren't known
to behave as "envisioned" without some sort of reassurances (far from a
clinical *proof*).  I don't have an ME background so can never vouch
for mechanical designs; if the client needs reassurance, the ME has
to provide it *or* invest in building a real mechanism (which often
just "looks pretty" without any associated driving electronics)

Quoted text here. Click to load it

That's just a failure of imagination.  A good spec (or manual) should
allow a developer or potential user to imagine actually using the
device before anything has been reified.  It's expensive building
space shuttles just to figure out what it should look like!  :>

Quoted text here. Click to load it

IMO, the problem with the agile approach is that there is too much
temptation to cling to whatever you've already implemented.  And, if
you've not thoroughly specified its behavior and characterized its
operation, you've got a black box with unknown contents -- that you
will now convince yourself does what it "should" (without having
designed it with knowledge of that "should").

So, you end up on the wrong initial trajectory and don't discover
the problem until you've baked lots of "compensations" into the
design.

[The hardest thing to do is convince yourself to start over]

Quoted text here. Click to load it

You've said that in the past, but I can't wrap my head around it.
It's like claiming very few doctors have taken any BIOLOGY courses!
Or, that a baker doesn't understand the basic chemistries involved.

Quoted text here. Click to load it

That's a shortcoming of the language's syntax.  But, doesn't prevent
you from annotating the parameters as such.

My IDL requires formal specification because it has to know how to marshal
and unmarshal on each end.

Quoted text here. Click to load it

Of course!  SWMBO has been learning that lesson with her artwork.
Taking a course from a "great artist" doesn't mean you'll end up
learning anything or improving YOUR skillset.

Quoted text here. Click to load it

But I suspect you had a previous relationship with said individual.
So, knew how to "relate" concepts to him/her.

Many of SWMBO's (female) artist-friends seem to have trouble grok'ing
perspective.  They read books, take courses, etc. and still can't seem
to warp their head around the idea.

I can sit down with them one-on-one and convey the concept and "mechanisms"
in a matter of minutes:  "Wow!  This is EASY!!"  But, I'm not trying to sell
a (fat!) book or sign folks up for hours of coursework, etc.  And, I know
how to pitch the ideas to each person individually, based on my prior knowledge
of their backgrounds, etc.

Quoted text here. Click to load it

<frown>  As time passes, I am becoming more convinced of the quality of
my education.  This was "freshman-level" coursework:  S-machines, lambda
calculus, petri nets, formal grammars, etc.

[My best friend from school recounted taking some graduate level
courses at Northwestern.  First day of the *graduate* level AI
course, a fellow student walked in with the textbook under his
arm.  My friend asked to look at it.  After thumbing through
a few pages, he handed it back:  "I already took this course...
as a FRESHMAN!"]

If I had "free time", I guess it would be interesting to see just what
modern teaching is like, in this field.

Quoted text here. Click to load it

Limbo makes extensive use of tuples as return values.  So, silly
not to take advantage of that directly.  (changes the syntax of how you'd
otherwise use a function in an expression but the benefits outweigh the
costs, typ).

Quoted text here. Click to load it

I guess I don't understand the difference.

In my mind, "programming" is the plebian skillset.
     programming : computer science :: ditch-digging : landscaping
I.e., ANYONE can learn to "program".  It can be taught as a rote skill.
Just like anyone can be taught to reheat a batch of ready-made cookie
dough to "bake cookies".

The CS aspect of my (EE) degree showed me the consequences of different
machine architectures, the value of certain characteristics in the design
of a language, the duality of recursion/iteration, etc.  E.g., when I
designed my first CPU, the idea of having an "execution unit" started
by the decode of one opcode and CONTINUING while other opcodes were
fetched and executed wasn't novel; I'd already seen it done on 1960's
hardware.

[And, if the CPU *hardware* can do two -- or more -- things at once, then
the idea of a *program* doing two or more things at once is a no-brainer!
"Multitasking?  meh..."]

Quoted text here. Click to load it

This was an aspect of "software development" that was NOT stressed
in my curriculum.  Nor was "how to use a soldering iron" in the
EE portion thereof (the focus was more towards theory with the
understanding that you could "pick up" the practical skills relatively
easily, outside of the classroom)

Quoted text here. Click to load it

I know every EE was required to take some set of "software" courses.
Having attended an engineering school, I suspect that was true of
virtually every "major".  Even 40 years ago, it was hard to imagine
any engineering career that wouldn't require that capability.

[OTOH, I wouldn't trust one of the ME's to design a programming
language anymore than I'd trust an EE/CS to design a *bridge*!]

Quoted text here. Click to load it

I just can't imagine how you could explain "programming" a machine to a
person without that person first understanding how the machine works.
It's not like trying to teach someone to *drive* while remaining
ignorant of the fact that there are many small explosions happening
each second, under the hood!

[How would you teach a car mechanic to perform repairs if he didn't
understand what the components he was replacing *did* or how they
interacted with the other components?]

Quoted text here. Click to load it

My first courses (pre-college) went to great lengths to explain the hardware
of the machine, DASD's vs., SASD's, components of access times, overlapped
I/O, instruction formats (in a generic sense -- PC's hadn't been invented,
yet), binary-decimal conversion, etc.  But, then again, these were new ideas
at the time, not old saws.

Quoted text here. Click to load it

It's hard to consider CLIPS's "language" to be a real "programming language"
(e.g., Turing complete -- though it probably *is*, but with ghastly syntax!).
It bears the same sort of relationship that SQL has to RDBMS, SNOBOL to
string processing, etc.  It's primarily concerned with asserting and retracting
facts based on patterns of recognized facts.

While you *can* code an "action" routine in its "native" language, I
find it easier to invoke an external routine (C) that uses the API
exported by CLIPS to do all the work.  In my case, it would be difficult
to code an "action routine" entirely in CLIPS and be able to access
the rest of the system via the service-based interfaces I've implemented.

Quoted text here. Click to load it

Yes.  But I'm not designing a typical application; rather, a *system*
of applications, services, OS, etc.  I wouldn't expect one language to
EFFICIENTLY tackle them all.  (And, I'd have to build all of those components
from scratch if I wanted complete control over their own implementation
languages -- I have no desire to write an RDBMS just so I can AVOID using
SQL.)

Quoted text here. Click to load it

The applications are written in Limbo.  I'd considered other scripting
languages for that role -- LOTS of other languages! -- but Limbo already
had much of the support I needed to layer onto the "structure" of my
system.  Did I want to invent a language and a hosting VM (to make it
easy to migrate applications at run-time)?  Add multithreading hooks
to an existing language?  etc.

[I was disappointed with most language choices as they all tend to
rely heavily on punctuation and other symbols that aren't "voiced"
when reading the code]

C just gives me lots of bang for the buck.  I could implement all of this
on a bunch of 8b processors -- writing interpreters to allow more complex
machines to APPEAR to run on the simpler hardware, creating virtual address
spaces to exceed the limits of those tiny processors, etc.   But, all that
would come at a huge performance cost.  Easier just to *buy* faster
processors and run code written in more abstract languages.

Quoted text here. Click to load it

There's a difference between the types of people involved.  I don't
expect anyone from "People's Software Institute #234B" to be writing
anything beyond application layer scripts.  So, they only need to
understand the scripting language and the range of services available
to them.  They don't have to worry about how I've implemented each
of these services.  Or, how I move their application from processor
node 3 to node 78 without corrupting any data -- or, without their
even KNOWING that they've been moved!

Likewise, someone writing a new service (in C) need not be concerned with
the scripting language.  Interfacing to it can be done by copying the
interface of an existing service.  And, interfacing to the OS can just as
easily mimic the code from a similar service.

You obviously have to understand the CONCEPT of "multiplication" in
order to avail yourself of it.  But, do you care if it's implemented
in a purely combinatorial fashion?  Or, iteratively with a bunch of CSAs
(carry-save adders)?
Or, by tiny elves living in a hollow tree?

In my case, you have to understand that each function/subroutine invocation
just *appears* to be a subroutine/function invocation.  In reality, it can
be running code on another processor in another building -- concurrent
with what you are NOW doing.  (This is a significant conceptual difference
from traditional "programming", where you consider everything to be a
series of operations -- even in a multithreaded environment!)
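From the caller's side, a minimal sketch of that "looks local, may be
remote" call is below.  Every name in it (rpc_send(), double_it(), the
request/reply layouts) is invented for illustration; it isn't modeled
on any particular RPC library:

/* A call that *looks* local but may be remote.  rpc_send() is a
 * hypothetical transport hook; in this sketch it just "executes"
 * the request locally. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t opcode; int32_t arg; } request_t;
typedef struct { int32_t status; int32_t value; } reply_t;

/* hypothetical transport: a real system would marshal the request onto
 * the wire, and the reply might come back from another building */
static void rpc_send(const request_t *req, reply_t *rep)
{
    rep->status = 0;
    rep->value  = req->arg * 2;
}

/* The stub the application calls.  To the caller this is just a
 * function; whether it ran locally or remotely is invisible here. */
static int32_t double_it(int32_t x)
{
    request_t req = { 1, x };
    reply_t   rep;

    rpc_send(&req, &rep);
    return rep.value;
}

int main(void)
{
    printf("%d\n", (int)double_it(21));  /* caller neither knows nor cares where this ran */
    return 0;
}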

You also have to understand that your "program" can abend or be aborted
at any time.  And, that persistent data has *structure* (imposed by
the DBMS) instead of being just BLOBs.  And, that agents/clients have
capabilities that are finer-grained than "permissions" in conventional
systems.

But, you don't have to understand how any of these things are implemented
in order to use them correctly.

Quoted text here. Click to load it

But that complicates the design (and maintenance) effort(s) -- by requiring
staff with those skillsets to remain available.  Imagine if you had to
have a VLSI person on hand all the time in case the silicon in your CPU
needed to be changed...

Quoted text here. Click to load it

I suspect my electric toothbrush has a small MCU at its heart.

Quoted text here. Click to load it

But the means by which the "source" is converted to the "binary" is
well defined.  Different EA modes require different data to be present
in the instruction byte stream -- and, in predefined places relative to
the start of the instruction (or specific locations in memory).

And, SUB behaved essentially the same as ADD -- with the same range of
options available, etc.

[You might have to remember that certain instructions expected certain
parameters to be implicitly present in specific registers, etc.]
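I.e., the assembler's job is essentially a table lookup: mnemonic plus
EA mode selects an opcode and tells you how many operand bytes must
follow it.  A toy sketch -- the opcodes and operand sizes below are
invented; they don't describe any real CPU:

/* Table-driven mapping from mnemonic + addressing mode to an
 * instruction byte stream.  All encodings here are made up. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef enum { EA_IMMEDIATE, EA_DIRECT } ea_mode_t;

typedef struct {
    const char *mnemonic;
    ea_mode_t   mode;
    uint8_t     opcode;      /* first byte of the encoding */
    int         operand_len; /* operand bytes that must follow */
} encoding_t;

/* ADD and SUB share the same shape; only the opcode differs */
static const encoding_t table[] = {
    { "ADD", EA_IMMEDIATE, 0x10, 1 },
    { "ADD", EA_DIRECT,    0x11, 2 },
    { "SUB", EA_IMMEDIATE, 0x20, 1 },
    { "SUB", EA_DIRECT,    0x21, 2 },
};

/* emit the opcode followed by however many operand bytes the mode needs */
static int assemble(const char *mn, ea_mode_t mode, uint16_t operand,
                    uint8_t *out)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (strcmp(table[i].mnemonic, mn) == 0 && table[i].mode == mode) {
            int n = 0;
            out[n++] = table[i].opcode;
            for (int b = 0; b < table[i].operand_len; b++)
                out[n++] = (uint8_t)(operand >> (8 * b));  /* low byte first */
            return n;                  /* total bytes emitted */
        }
    }
    return -1;                         /* unknown mnemonic/mode */
}

int main(void)
{
    uint8_t buf[4];
    int n = assemble("SUB", EA_DIRECT, 0x1234, buf);
    for (int i = 0; i < n; i++)
        printf("%02X ", buf[i]);       /* prints: 21 34 12 */
    printf("\n");
    return 0;
}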

Quoted text here. Click to load it

Because not all development machines were particularly capable.

My first project was i4004 based, developed on an '11 (a PDP-11).

The newer version of the same product was i8085 hosted and developed on
an MDS800.  IIRC, the MDS800 was *8080* based and limited to 64KB of memory
(no fancy paging, bank switching, etc.).  I think a second 8080 ran
the I/O.  So, building an object image was lots of passes, lots
of "egg scrambling" (the floppies always sounded like they were
grinding themselves to death).

I.e., if we'd opted to replace the EPROM in our product with SRAM
(or DRAM) and add some floppies, the product could have hosted the
tools.

Quoted text here. Click to load it

I have a C compiler that targets the 8080, hosted on CP/M.  Likewise, a
Pascal compiler and a BASIC compiler (and I think an M2 compiler) all
hosted on that 8085 CP/M machine.

The problem with HLL's on small machines is the helper routines and
standard libraries can quickly eat up ALL of your address space!

I designed several z180-based products in C -- but the (bizarre!)
bank switching capabilities of the processor would let me do things like
stack the object code for different libraries in the BANK section
and essentially do "far" calls through a bank-switching intermediary
that the compiler would automatically invoke for me.

By cleverly designing the memory map, you could have large DATA
and large CODE -- at the expense of lengthened call/return times
(of course, the interrupt system had to remain accessible at
all times so you worked hard to keep that tiny lest you waste
address space catering to it).
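The trampoline itself is conceptually trivial -- something like the
sketch below, where set_bank() is a made-up stand-in for the write to
the MMU's bank register (the compiler generated the equivalent of this
for me automatically):

/* Bank-switching "far call" intermediary: select the callee's bank,
 * call into it, then restore the caller's mapping. */
#include <stdio.h>

typedef void (*banked_fn)(void);

static int current_bank = 0;

/* hypothetical stand-in for programming the MMU's bank register */
static void set_bank(int bank)
{
    current_bank = bank;
    printf("bank register now selects bank %d\n", bank);
}

/* the trampoline the compiler would insert for a "far" call */
static void far_call(int bank, banked_fn fn)
{
    int caller_bank = current_bank;    /* remember where we came from */
    set_bank(bank);
    fn();                              /* runs in the banked region */
    set_bank(caller_bank);             /* restore the caller's mapping */
}

static void library_routine(void)      /* lives in bank 3 in this sketch */
{
    printf("doing banked work\n");
}

int main(void)
{
    far_call(3, library_routine);
    return 0;
}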

Quoted text here. Click to load it

Sure we are!  This is C.A.E!  :>  If we're talking about all
applications, then are we also dragging big mainframes into the mix?
Where's mention of PL/1 and the other big iron running it?

Quoted text here. Click to load it

Motor control is a *relatively* simple algorithm.  No *need* for complex
data types, automatic type casts, etc.  And, what you really want is
deterministic behavior; you want to know that a particular set of
"instructions" (in a HLL?) will execute in a particular, predictable time
frame without worrying about some run-time support mechanism (e.g., GC)
kicking in and confounding the expected behavior.

[Or, having to take explicit measures to avoid this because of the choice
of HLL]

Quoted text here. Click to load it

As I said, I did a lot with 8b hardware.  But, you often didn't have a lot
of resources "to spare" with that hardware.

I recall going through an 8085 design and counting the number of
subroutine invocations (CALL's) for each specific subroutine.
Then, replacing the CALLs to the most frequently accessed subroutine
with "restart" instructions (RST) -- essentially a one-byte CALL
that vectored through a specific hard-coded address in the memory
map.  I.e., each such replacement trimmed *2* bytes from the size of
the executable.  JUST TWO!

We did that for seven of the eight possible RST's.  (RST 0 is hard to
cheaply use as it doubles as the RESET entry point).  The goal being to
trim a few score bytes out of the executable so we could eliminate
*one* 2KB EPROM from the BoM (because we didn't need the entire
EPROM, just a few score bytes of it -- so why pay for a $50 (!!)
chip if you only need a tiny piece of it?  And, why pay for ANY of
it if you can replace 3-byte instructions with 1-byte instructions??)
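The arithmetic is easy to sanity-check.  With made-up call counts (the
3-byte CALL vs. 1-byte RST sizes are the only real numbers here):

/* Worked arithmetic for the RST substitution above.  The per-routine
 * call counts are invented for illustration. */
#include <stdio.h>

int main(void)
{
    /* hypothetical call counts for the seven most-called subroutines,
     * each rewired to one of RST1..RST7 */
    const int calls[7] = { 23, 17, 14, 11, 9, 8, 6 };
    const int bytes_saved_per_call = 3 - 1;   /* CALL addr16 vs. RST n */

    int total = 0;
    for (int i = 0; i < 7; i++)
        total += calls[i] * bytes_saved_per_call;

    printf("bytes trimmed: %d\n", total);     /* 88 calls * 2 = 176 here */
    return 0;
}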


Re: Portable Assembly
wrote:

Quoted text here. Click to load it

Comparatively few bakers can actually tell you the reason why yeast
makes dough rise, or why you need to add salt to make things taste
sweet.  It's enough for many people to know that something works -
they don't have a need to know how or why.


WRT "developers":

A whole lot of "applications" are written by people in professions
unrelated to software development.  They become "developers" de facto
when their programs get passed around and used by others.

Consider all the scientists, mathematicians, statisticians, etc., who
write data analysis programs in the course of their work.

Consider all the data entry clerks / "accidental" database admins who
end up having to learn SQL and form coding to do their jobs.

Consider the frustrated office workers who study VBscript or
Powershell on their lunch hour and start automating their manual
processes to be more productive.

 : < more examples elided - use your imagination >

Some of these "non-professional" programs end up being very effective
and reliable.  The better ones frequently are passed around, modified,
extended, and eventually are coaxed into new uses that the original
developer never dreamed of.


Then consider the legions of (semi)professional coders who maybe took
a few programming courses, or who learned on their own, and went to
work writing, e.g., web applications, Android apps, etc.


It has been estimated that over 90% of all software today is written
by people who have no formal CS/CE/CSE or IS/IT education, and 40% of
all programmers are employed primarily to do something other than
software development.

Note: programming courses  !=  CS/CE/CSE education


Quoted text here. Click to load it

In this case, yes.  But I also had some prior teaching experience.

I rarely have much trouble explaining complicated subjects to others.
As you have noted in the past, it is largely a matter of finding
common ground with a student and drawing appropriate analogies.


Quoted text here. Click to load it

Only sort of.  Programming is fundamental to computer *engineering*,
but that is a different discipline.

Computer "science" is concerned with

 - computational methods,  
 - language semantics,
 - ways to bridge the semantic gap between languages and methods,
 - design and study of algorithms,  
 - design of better programming languages [for some "better"]
 - ...

Programming per se really is not a requirement for a lot of it.  A
good foundation in math and logic is more important.


Quoted text here. Click to load it

Exactly!  If you can't learn to solder on your own, you don't belong
here.  CS regards programming in the same way.



Quoted text here. Click to load it

Take a browse through some classics:

 - Abelson, Sussman & Sussman, "Structure and Interpretation of
   Computer Programs"  aka SICP

 - Friedman, Wand & Haynes, "Essentials of Programming Languages"
   aka EOPL

There are many printings of each of these.  I happen to have SICP 2nd
Ed and EOPL 8th Ed on my shelf.


Both were - and are still - widely used in undergrad CS programs.

SICP doesn't mention any concrete machine representation until page
491, and then a hypothetical machine is considered with respect to
emulating its behavior.

EOPL doesn't refer to any concrete machine at all.


Quoted text here. Click to load it

Write in BrainF_ck ... that'll fix them.

Very few languages have been deliberately designed to be read.  The
very idea has negative connotations because the example everyone jumps
to is COBOL - which was too verbose.  

It's also true that reading and writing effort are inversely related,
and programmers always seem to want to type fewer characters - hence
the proliferation of languages whose code looks suspiciously like line
noise.

I don't know about you, but I haven't seen a teletype connected to a
computer since about 1972.


Quoted text here. Click to load it

Rabbits are best for multiplication.


Quoted text here. Click to load it

Which is one of the unspoken points of those books I mentioned above:
that (quite a lot of) programming is an exercise in logic that is
machine independent.

Obviously I am extrapolating and paraphrasing, and the authors did not
have device programming in mind when they wrote the books.  

Nevertheless, there is a lot of truth in it: identifying required
functionality, designing program logic, evaluating and choosing
algorithms, etc. ... all may be *guided* in situ by specific knowledge
of the target machine, but they are skills which are independent of
it.

YMMV,
George
