Portable Assembly - Page 2

Re: Portable Assembly


I've sometimes wondered what kind of development systems were used for
those early 1980s home computers. Unreliable, slow and small storage
media would've made it pretty awful to do development on target
systems.  I've read Commodore used a VAX for ROM development, so they
probably had a cross-assembler there; but other than that, not much idea.


Re: Portable Assembly
On 5/31/2017 12:19 AM, Anssi Saari wrote:

You forget that those computers were typically small and ran small
applications.

In the early 80's, we regularly developed products using CP/M-hosted
tools on generic Z80 machines, RIO-based tools on "Z-boxes", motogorilla's
tools on Exormacs, etc.  None were much better than a 64K 8b machine
with one or two 1.4MB floppies.  Even earlier, the MDS-800 systems
and ISIS-II, etc.

Your development *style* changes with the capabilities of the tools
available.  E.g., in the 70's, I could turn the crank *twice* in an
8 hour shift -- edit, assemble, link, burn ROMs, debug.  So, each
iteration *had* to bring you closer to a finished product.  You
couldn't afford to just try the "I wonder if THIS is the problem"
game that seems so common, today ("Heck, I can just try rebuilding
everything and see if it NOW works...")

But, that doesn't necessarily limit you to the size of a final executable
(ever hear of overlays?) or the overall complexity of the product.



Re: Portable Assembly
On 31.5.17 10:19, Anssi Saari wrote:


I used an Intel MDS and a Data General Eclipse to bootstrap a
Z80-based CP/M computer (self made). After that, the CP/M
system could be used to create the code, though the 8 inch
floppies were quite small for the task.

--  

-Tauno Voipio


Re: Portable Assembly
On 5/27/2017 2:52 PM, Theo Markettos wrote:

For embedded systems (before we called them that), yes.  There were
few compilers that were really worth the media they were delivered
on -- and few meant to generate code for bare iron.


Speaking from the standpoint of the *arcade* game industry, games were
developed on hardware specific to that particular game (trying, where possible,
to leverage as much of a previous design as possible -- for reasons of
economy).

Most games were coded from scratch in ASM; very little "lifted" from Game X
to act as a basis for Game Y (this slowly changed, over time -- but, mainly
in terms of core services... runtime executives predating "real" OS's).

Often, the hardware was *very* specific to the game (e.g., a vector graphic
display didn't draw vectors in a frame buffer but, rather, directly controlled
the deflection amplifiers -- X & Y -- of the monitor to move the "beam" around
the display tube in a particular path).  As such, the "display I/O" wasn't
really portable in an economic sense -- no reason to make a Z80 version of
a 6502-based game with that same wonky display hardware.  E.g., Atari had a
vector graphic display system (basically, a programmable display controller)
that could ONLY draw curves -- because curves were so hard to draw with a
typical vector graphic processor!  (You'd note that every "line segment" on
the display was actually a curve of a particular radius)

Also, games taxed their hardware to the limit.  There typically wasn't an "idle
task" that burned excess CPU cycles; all cycles were used to make the game
"do more" (players are demanding).  The hardware was designed to leverage
whatever features the host CPU offered (often more than one CPU for different
aspects of the game -- e.g., "sound" was its own processor, etc.) to the
greatest advantage.  E.g., 680x processors were a delight to interface to a
frame buffer, as the bus timing directly lent itself to "display controller
gets access to the frame buffer for THIS half clock cycle... and the CPU gets
access for the OTHER half cycle" (no wait states, as would be the case with a
processor having variable bus cycle timings, e.g., the Z80).

Many manufacturers invested in full custom chips to add value (and make the
games harder to counterfeit).

A port of a game to another processor (and perhaps entire hardware platform)
typically meant rewriting the entire game, from scratch.  But, 1980's games
(arcade pieces) weren't terribly big -- tens of KB of executables.  Note
that any graphics for the game were directly portable (many of the driving
games and some of the Japanese pseudo-3D games had HUGE image ROMs that
were displayed by dedicated hardware -- under the control of the host CPU).

In practical terms, these were small enough projects that *seeing* one that
already works (that YOU coded or someone at your firm/affiliate coded) was
the biggest hurdle to overcome; you know how the game world operates,
the algorithms for the "robots", what the effects should look like, etc.

If you look at emulations of these games (e.g., MAME), you will see that they
aren't literal copies but, rather, just intended to make you THINK you're
playing the original game (because the timing of the algorithms in the
emulations isn't the same as that in the original game).  E.g., the host
(application) typically synchronized its actions to the position of the "beam"
repainting the display from the frame buffer (in the case of a raster game;
similar concepts for vector games) to minimize visual artifacts (like "object
tearing") and provide other visual features ("OK, the beam has passed this
portion of the display, we can now go in and alter it in preparation for
its next pass through.")

In a sense, the games were small systems, by today's standards.  Indeed, many
could be *emulated* on SoC's, today -- for far less money than their original
hardware and far less development time!

Re: Portable Assembly
On Mon, 29 May 2017 02:33:47 -0700, Don Y


When did the "embedded system" term become popular ?

Of course, there were some military systems (such as SAGE) that used
purpose-built computers in the 1950s.

In the 1970s the PDP-11/34 was very popular as a single-purpose
computer, and the PDP-11/23 in the 1980s.  After that, the 8080/Z80/6800
became popular as the low-end processors.


Re: Portable Assembly
On 5/29/2017 1:46 PM, snipped-for-privacy@downunder.com wrote:

No idea.  I was "surprised" when told that this is what I did
for a living (and HAD been doing all along!).

I now tell people that I design "computers that don't LOOK like
computers" (cuz everyone thinks they KNOW what a "computer" looks
like!): "things that you know have a computer *in* them but
don't look like the stereotype you think of..."


11's were used a lot as they were reasonably affordable and widely
available (along with folks who could code for them).  E.g., the Therac
was 11-based.

The i4004 was the first real chance to put "smarts" into something
that didn't also have a big, noisy box attached.  I recall thinking
the i8080 (and 85) were pure luxury coming from that more crippled
world ("Oooh!  Kilobytes of memory!!!")

Re: Portable Assembly
On 5/27/2017 2:31 PM, Dimiter_Popoff wrote:

+1

The *concepts*/design are what you are trying to reuse, not
the *code*.

OTOH, we see increasing numbers of designs migrating into
software that would previously have been done with hardware
as the cost of processors falls and capabilities rise.
This makes it economical to leverage the higher levels of
integration available in an MCU over that of "discretes"
or, worse, a *specific* "custom".

E.g., I can design an electronic tape rule in hardware or
software in roughly the same amount of effort.  But, the software
version will be more mutable, in the long term, and leverage
a single "raw part number" (the unprogrammed MCU) in the MRP
system.

OToOH, we are seeing levels of complexity now -- even in SoC's -- that
make "big" projects much more commonplace.  I'd hate to have to
recode a cell-phone for a different choice of processor if I'd not
made plans for that contingency in the first place!


Re: Portable Assembly
On 28.5.2017 01:52, Don Y wrote:

Well of course, it is where all of us here have been moving to for the
last 25 years or so (for me, since the HC11 days).


Yes of course, but porting does not necessarily mean porting to
another CPU architecture, typically you will reuse the code on
the same one - and modify just some peripheral interactions etc.
sort of thing.


Well phones do not have the flash as part of the SoC, I said
"in the MCU flash", meaning on the same chip. This is what I regard
as a "small" thingie, can't see what it will have to do to take
up more than 3-4 months of my time as long as I know what I want
to program.
Anything where external disks and/or "disks" are involved is in the
other category of course.

Dimiter





Re: Portable Assembly
On 5/27/2017 4:14 PM, Dimiter_Popoff wrote:

Yes, but you can't always be sure of that.  I've seen many
products "squirm" when the platform they adopted for early
versions suddenly became unavailable -- or, too costly -- to
support new versions/revisions of the product.  This is probably
one of the most maddening positions to be in:  having *a* product
and facing a huge re-development just to come up with the NEXT
product in its evolution.


But you can pick devices with *megabytes* of on-board (on-chip) FLASH,
nowadays:
    <https://www.microchip.com/wwwproducts/en/ATSAM4SD32C>
It seems fairly obvious that more real-estate will find its way into
devices' "memory" allocations.

You could keep a small staff busy just tracking new offerings and
evaluating price/performance points for each.  I've discarded several
"finished" hardware designs for my current project because I *know* they'll
be obsolete before the rest of the designs are complete!

Instead, I concentrate on getting all of the software written for the
various applications on hardware that I *know* I won't be using
(I've a stack of a couple dozen identical x86 SBC's that I've been
repurposing for each of the application designs) just to allow me
to have "working prototypes" that the other applications can talk
to as THEY are being developed.

As most of the design effort is in OS and application software -- with
a little bit of specialized hardware I/O development -- the choice of
processor is largely boring (so, why make it NOW?)



Re: Portable Assembly
On 28.5.2017 03:14, Don Y wrote:

Of course you can't be sure what other people will do. We can't
really be sure what we'll do ourselves.... :-)


Exactly this situation forced my hand to create vpa (virtual processor
assembly language). I had several megabytes of good sources written in
68k assembly and the 68k line was coming to an end. Sure, I could have
used it for another few years but it was obvious I had to move
forward so I did.


Well this part is closer to a "big thing" but not there yet. 160k RAM
is by no means much nowadays, try buffering a 100 Mbps Ethernet link
on that for example. It is still for stuff you can do within a few
months if you know what you want to do. The high level language will
take care of clogging up the 2M flash if anything :D.

Although I must say that my devices (MPC5200b based) have 2M flash
and can boot a fully functional dps off it... including most of the
MCA software. It takes a disk to get the complete functionality
but much of it - OS, windows, shell and all commands etc.
fit in (just between 100 and 200k are used for "BIOS" purposes,
the rest of the 2M is a "disk").
This with 64M RAM of course - and no wallpapers, you need a proper
disk for that :-).

However, I doubt anyone writing in C could fit a tenth of that in the
2M flash, which is why the 2M is there; it is far more than would really
be necessary for the RAM that part has, if the programming were not
done at a high level.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/







Re: Portable Assembly
On 5/27/2017 5:38 PM, Dimiter_Popoff wrote:

It's one of the overwhelming reasons I opt for HLL's -- I can *buy* a
tool that will convert my code to the target of my choosing.  :>


I don't think you realize just how clever modern compilers have become.
I've taken portions of old ASM projects and tried to code them in HLL's
to see what the "penalty" would be.  It was alarming to see how much
cleverer they have become (over the course of many decades!).

Of course, you need to be working on a processor that is suitable to
their use to truly benefit from their cleverness -- I doubt an 8x300
compiler would beat my ASM code!  :>


I use the FLASH solely for initial POST, a secure netboot protocol, a
*tiny* RTOS and "fail safe/secure" hooks to ensure a "mindless" device
can't get into -- or remain in -- an unsafe state (including the field
devices likely tethered to it).

[A device may not be easily accessible to a human user!]

Once "enough" of the hardware is known to be functional, a second level
boot drags in more diagnostics and a more functional protocol stack.

A third level boot drags in the *real* OS and real network stack.

After that, other aspects of the "environment" can be loaded and,
finally, the "applications".

[Of course, any of these steps can fail/timeout and leave me with
a device with *just* the functionality of the FLASH]

Even this level of functionality deferral isn't enough to keep
me from having to "customize" the FLASH in each type of device
(cuz they have different I/O complements).  So, big incentive
to come up with a more universal set of "peripherals" just to
cut down on the number of different designs.



Re: Portable Assembly
Am 27.05.2017 um 23:31 schrieb Dimiter_Popoff:

So, then, what is a portable assembler?

One major variable of a processor architecture is the number of
registers, and what you can do with them. On one side of the spectrum,
we have PICs or 6502 with pretty much no registers, on the other side,
there are things like x86_64 or ARM64 with plenty of 64-bit registers. Using
an abstraction like C to let the compiler handle the distinction (which
register to use, when to spill) sounds like a pretty good idea to me. If
you were more close to assembler, you'd either limit yourself to an
unuseful subset that works everywhere, or to a set that works only in
one or two places.


  Stefan

Re: Portable Assembly
On 28.5.2017 19:45, Stefan Reuther wrote:

One which is not tied to a particular architecture, rather to an
idealized machine model.
It makes sense to use this assuming that processors evolve towards
better, larger register sets - which has been the case for the last few
decades. It would be impractical to try to assemble something written
once for say 68k and then assemble it for a 6502 - perhaps doable but
insane.


Using a phrase book is of course a good idea if you want to conduct
a quick conversation.
It is a terrible idea if you try to use the language for years and
choose to stay confined within the phrases you have in the book.

 > If you were more close to assembler, you'd either limit yourself to an
 > unuseful subset that works everywhere, or to a set that works only in
 > one or two places.

Like I said before, there is no point to write code which can work
on any processor ever made. I have no time to waste on that, I just need
my code to be working on what is the best silicon available. This
used to be 68k, now it is power. You have to program with some
constraints - e.g. knowing that the "assembler" (which in reality
is more a compiler) may use r3-r4 as it wishes and not preserve
them on a per line basis etc.
Since the only person who could make a comparison between a HLL and
my vpa is me, I can say it has made me orders of magnitude more
efficient. Obviously you can take my word for that or ignore it,
I can only say what I know.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/




Re: Portable Assembly
Am 28.05.2017 um 19:47 schrieb Dimiter_Popoff:

So, to what *is* it tied then? What is its *concrete* machine model?


Doable and not insane with C.

Actually, you can program the 6502 in C++17.


My point being: if you work on assembler level, that is: registers,
you'll not have anything more than a phrase book. A C compiler can use
knowledge from one phrase^Wstatement and carry it into the next, and it
can use grammar to generate not only "a = b + c" and "x = y * z", but
also "a = b + (y*z)".


I am not an expert in either of these two architectures, but 68k has 8
data + 8 address registers whereas Power has 32 GPRs. If you work on a
virtual pseudo-assembler level you probably ignore most of your Power.

A classic compiler will happily use as many registers as it finds useful.

The only possible gripe with C would be that it has no easy way to write
a memory cell by number. But a simple macro fixes that.


  Stefan

Re: Portable Assembly
On 5/29/2017 9:43 AM, Stefan Reuther wrote:

"Only" gripe?

Every language choice makes implicit tradeoffs in abstraction management.
The sorts of data types and the operations that can be performed on them
are baked into the underlying assumptions of the language.

What C construct maps to the NS16032's native *bit* array instructions?
Or, the test-and-set capability present in many architectures?  Or,
x86 BCD data types?  Support for 12 or 60 bit integers?  24b floats?
How is the PSW exposed?  Why pointers in some languages and not others?

Why do we have to *worry* about atomic operations in the language in
a different way than on the underlying hardware?  Why doesn't the language
explicitly acknowledge the idea of multiple tasks, foreground/background,
etc.?

Folks designing languages make the 90-10 (%) decisions and hope the
10 aren't unduly burdened by the wins afforded to the 90.  Or, that
the applications addressed by the 10 can tolerate the contortions
they must endure as a necessary cost to gain *any* of the benefits
granted to the 90.


Re: Portable Assembly
On Mon, 29 May 2017 18:43:01 +0200, Stefan Reuther wrote:




Unless it uses a push/pop architecture like Java bytecode, which can get
'assembled' to any number of registers.


--  
(Remove the obvious prefix to reply privately.)
Created with Opera's e-mail program: http://www.opera.com/mail/

Re: Portable Assembly

 1) It's what Unicorns use when writing code to run the automation
    equipment used by Elves to mass-produce cookies inside hollow trees.

 2) It's a trigger phrase that indicates the person using it is
    delusional and is about to lure you into a time-sink of
    relativistic proportions.

If I were you, I'd either smile politely and change the topic or just
turn and run.  

--  
Grant

Re: Portable Assembly
On 29.5.2017 04:59, Grant Edwards wrote:

Oh smile as much as you want. Then try to match 10% of what I have made
and try to smile again.


Re: Portable Assembly
Dimiter_Popoff wrote:

Not so much. Perhaps Fortran plus, say, LINPACK.


"I am returning this tobacconist; it  is scratched."  - Monty Python.

It has been a long time since C presented a serious constraint
in performance for me.


Mostly, I've seen the source code outlast the company for
which it was written :)

I would personally view "megabytes of source" as an opportunity to
infuse a system with better ideas through a total rewrite. I
understand that this view is rarely shared; people prefer the
arbitrage of technical debt.



L'il MCU projects are essentially disposable. Too many
heresies.


--  
Les Cargill


Re: Portable Assembly
On 5/28/2017 8:04 PM, Les Cargill wrote:

Or, the platform on which it was originally intended to run!

OTOH, there are many "regulated" industries where change is
NOT seen as "good".  Where even trivial changes can have huge
associated costs (e.g., formal validation, reestablishing
performance and reliability data, etc.)

[I've seen products that required the manufacturer to scour the
"used equipment" markets in order to build more devices simply
because the *new* equipment on which the design was based was no
longer being sold!]

[[I've a friend here who hoards big, antique (Sun) iron because
his enterprise systems *run* on old SPARCservers and the cost of
replacing/redesigning the software to run on new/commodity hardware
and software is simply too far beyond the company's means!]]


I've never seen this done, successfully.  The "second system"
effect seems to sabotage these attempts -- even for veteran
developers!  Instead of reimplementing the *same* system,
they let feeping creaturism take over.  The more developers,
the more "pet features" try to weasel their way into the
new design.

As each *seems* like a tiny little change, no one ever approaches
any of them with a serious evaluation of their impact(s) on the
overall system.  And, everyone is chagrined at how much *harder*
it is to actually fold these changes into the new design -- because
the new design was conceived with the OLD design in mind (i.e., WITHOUT
these additions -- wasn't that the whole point of this effort?).

Meanwhile, your (existing) market is waiting on the new release of the
OLD product (with or without the new features) instead of a truly NEW
product.

And, your competitors are focused on their implementations of "better"
products (no one wants to play "catch-up"; they all aim to "leap-frog").

Save your new designs for new products!
