Portable Assembly - Page 5

Re: Portable Assembly
On 6/10/2017 6:01 PM, George Neuner wrote:
Quoted text here. Click to load it

I guess different experiences.  Growing up, I learned these sorts
of things by asking countless questions of the vendors we frequented.
Yeast vs. baking soda as leavening agent; baking soda vs. powder;
vs. adding cream of tartar; cake flour vs. bread flour; white sugar
vs. brown sugar; vege shortening vs. butter (vs. oleo/oil); sugar
as a "wet" ingredient; etc.

Our favorite baker was a weekly visit.  He'd take me in the back room
(much to the chagrin of other customers) and show me the various bits
of equipment, what he was making at the time, his "tricks" to eke a
bit more life out of something approaching its "best by" date, etc.

[I wish I'd pestered him, more, to learn about donuts and, esp, bagels
as he made the *best* of both!  OTOH, probably too many details for
a youngster to commit to memory...]

The unfortunate thing (re: US style of measurement by volume) is that
you don't have as fine control over some of the ingredients (e.g.,
what proportion of "other ingredients" per "egg unit")

[I've debated purchasing a scale just to weigh eggs!  Not to tweak
the amount of other ingredients proportionately but, rather, to
select a "set" of eggs closest to a target weight for a particular
set of "other ingredients".  Instead, I do that "by feel", presently
(one of the aspects of my Rx's that makes them "non-portable" -- the
other being my deliberate failure to upgrade the written Rx's as I
improve upon them.)  Leaves folks wondering why things never come
out "as good" when THEY make them...  <grin>]

Quoted text here. Click to load it

And these folks tend to use languages (and tools) that are tailored to
those sorts of "applications".  Hence the reason I include a scripting
language in my design; no desire to force folks to understand data types,
overflow, mathematical precision, etc.

     "I have a room that is 13 ft, 2-1/4 inches by 18 ft, 3-3/8 inches.
     Roughly how many 10cm x 10cm tiles will it take to cover the floor?"

Why should the user have to normalize to some particular unit of measure?
All he wants, at the end, is a dimensionless *count*.
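
To make the point concrete, here's a minimal C++ sketch of the unit juggling
that such a scripting layer would hide from the user (dimensions lifted from
the example above; the edge rounding is deliberately crude, since he only
asked for "roughly"):

     #include <cstdio>
     #include <cmath>

     int main() {
         // Dimensions as the user states them, reduced to inches.
         double width_in  = 13 * 12 + 2.25;    // 13 ft 2-1/4 in
         double length_in = 18 * 12 + 3.375;   // 18 ft 3-3/8 in

         double tile_in = 10.0 / 2.54;         // 10 cm tile edge, in inches

         // Whole tiles per row and per column, rounding up at the edges.
         double rows = ceil(length_in / tile_in);
         double cols = ceil(width_in  / tile_in);
         printf("roughly %.0f tiles\n", rows * cols);
         return 0;
     }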

[I was recently musing over the number of SOIC8 devices that could fit
on the surface of a sphere having a radius equal to the average distance
of Pluto from the Sun (idea came from a novel I was reading).  And, how
much that SOIC8 collection would *weigh*...]

Quoted text here. Click to load it

Exactly.  I had a lady friend many years ago to whom I'd always explain
computer-related issues (more typ operational ones than theoretical ones)
using "kitchen analogies".  In a playful mood, one day, she chided me for
the misogynistic examples.  So, I started explaining things in terms
of salacious "bedroom activities".  Didn't take long for her to request
a return to the kitchen analogies!  :>

Quoted text here. Click to load it

Petri nets, lambda calculus, S-machines, etc.

But, to become *practical*, these ideas have to eventually be bound
to concrete representations.  You need ways of recording algorithms
and verifying that they do, in fact, meet their desired goals.

I know no one who makes a living dealing in abstractions, entirely.
Even my physics friends have lives beyond a blackboard.

Quoted text here. Click to load it

But you can't examine algorithms and characterize their behaviors,
costs, etc. without being able to reify them.  You can't just
magically invent an abstract language that supports:
      solve_homework_problem(identifier)

Quoted text here. Click to load it

All written long after I'd graduated.  :>  Most (all?) of my college
CS courses didn't have "bound textbooks".  Instead, we had collections
of handouts coupled with notes that formed our "texts".  In some cases,
the handouts were "bound" (e.g., a cheap "perfect binding" paperback)
for convenience as the instructors were writing the texts *from*
their teachings.

Sussman taught one of my favorite courses and I'm chagrined that
all I have to show for it are the handouts and my notes -- it would
have been nicer to have a lengthier text that I could explore at
my leisure (esp after the fact).

The books that I have on the subject predate my time in college
(I attended classes at a local college at night and on weekends
while I was in Jr High and High School).  Many of the terms used
in them have long since gone out of style (e.g., DASD, VTOC, etc.)
I still have my flowcharting template and some FORTRAN coding forms
for punched cards... I suspect *somewhere* these are still used!  :>

Other texts from that period are amusing to examine to see how
terminology and approaches to problems have changed.  "Real-time"
being one of the most maligned terms!  (e.g., Caxton's book)

Quoted text here. Click to load it

Janus (Consistent System) was equally verbose.  It's what I think of when
I'm writing SQL  :<  An 80 column display was dreadfully inadequate!

Quoted text here. Click to load it

Yes, but if you're expecting to exchange code snippets with folks
who can't *see*, the imprecision of "speaking" a program's contents
is fraught with opportunity for screwups -- even among "professionals"
who know where certain punctuation marks are *implied*.

Try dictating "Hello World" to a newbie over the phone...

I actually considered altering the expression syntax to deliberately
render parens unnecessary (and illegal).  I.e., if an expression
can have two different meanings with/without parens, then ONLY the
meaning without parens would be supported.

But, this added lots of superfluous statements just to meet that
goal *and* quickly overloads STM as you try to keep track of
which "component statements" you've already encountered:
     area = (width_feet+(width_inches/12))*(length_feet+(length_inches/12))
becomes:
     width = width_feet + width_inches/12
     length = length_feet + length_inches/12
     area = length * width
[Imagine you were, instead, computing the *perimeter* of a 6 walled room!]

Quoted text here. Click to load it

Actually, I have one  :>

Quoted text here. Click to load it

Or, Adders and log tables!  (bad childhood joke)

Quoted text here. Click to load it

But I see programming (C.A.E) as having moved far beyond the sorts of
algorithms you would run on a desktop, mainframe, etc.  It's no longer
just about this operator in combination with these arguments yields
this result.

When I was younger, I'd frequently use "changing a flat tire" as an
example to coax folks into describing a "familiar" algorithm.  It
was especially helpful at pointing out all the little details that
are so easy to forget (omit) that can render an implementation
ineffective, buggy, etc.
     "Wonderful!  Where did you get the spare tire from?"
     "The trunk!"
     "And, how did you get it out of the trunk?"
     "Ah, I see... 'I *opened* the trunk!'"
     "And, you did this while seated behind the wheel?"
     "Oh, OK.  'I got out of the car and OPENED the trunk'"
     "While you were driving down the road?"
     "Grrr... 'I pulled over to the shoulder and stopped the car; then got out'"
     "And got hit by a passing vehicle?"

Now, it's not just about the language and the target hardware but, also, the
execution environment, OS, etc.

Why are people surprised to discover that it's possible for <something> to
see partial results of <something else's> actions?  (i.e., the need for
atomic operations)  Or, to be frustrated that such problems are so hard
to track down?
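
A minimal C++ sketch of that "partial results" surprise -- two threads bumping
a plain counter versus an atomic one (the counter names and loop count are
just illustrative; the unsynchronized increment is formally a data race,
which is the point):

     #include <atomic>
     #include <thread>
     #include <cstdio>

     long plain = 0;                   // unsynchronized shared counter
     std::atomic<long> safe{0};        // atomic read-modify-write

     void work() {
         for (int i = 0; i < 100000; ++i) {
             ++plain;   // load/add/store: another thread can slip in between
             ++safe;    // indivisible; no partial result is ever visible
         }
     }

     int main() {
         std::thread a(work), b(work);
         a.join(); b.join();
         // 'plain' usually comes up short of 200000; 'safe' never does.
         printf("plain=%ld  safe=%ld\n", plain, safe.load());
         return 0;
     }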

(In a multithreaded environment,) we all know that the time between
execution of instruction N and instruction N+1 can vary -- from whatever
the "instruction rate" of the underlying machine happens to be up to
the time it takes to service all threads at this, and higher, priority...
up to "indefinite".  Yet, how many folks are consciously aware of that
as they write code?

A "programmer" can beat on a printf() statement until he manages to stumble
on the correct combination of format specifiers, flags, arguments, etc.
But, will it ever occur to him that the printf() can fail, at RUNtime?
Or, the NEXT printf() might fail while this one didn't?
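
For what it's worth, a sketch of the check almost nobody writes (C-flavored
C++; reporting via errno/strerror is a POSIX-ism, not guaranteed by ISO C):

     #include <cstdio>
     #include <cerrno>
     #include <cstring>

     int main() {
         // printf() returns the character count, or a negative value when
         // the write fails (closed pipe, full disk behind a redirect, ...).
         if (printf("Hello World\n") < 0) {
             fprintf(stderr, "printf failed: %s\n", strerror(errno));
             return 1;
         }
         return 0;
     }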

How many "programmers" know how much stack to allocate to each thread?
How do they decide -- wait for a stack fence to be breached and then
increase the number and try again?  Are they ever *sure* that they've
got the correct, "worst case" value?
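
One common way to *measure* rather than guess is the "high-water mark" trick:
pre-fill the stack with a pattern and see how much of it gets scribbled on.
A rough sketch (the names and the 4 KB size are invented, a descending stack
is assumed, and it only reports the paths you actually exercised -- which is
exactly the "are you sure it's the worst case?" problem):

     #include <cstdint>
     #include <cstring>
     #include <cstddef>

     static uint8_t thread_stack[4096];   // stack handed to some thread
     static const uint8_t FILL = 0xA5;

     void prefill_stack(void) {           // before the thread starts
         memset(thread_stack, FILL, sizeof thread_stack);
     }

     size_t stack_used(void) {            // after it has run "long enough"
         size_t untouched = 0;
         // Descending stack: the low end is consumed last, so the first
         // overwritten byte marks the high-water line.
         while (untouched < sizeof thread_stack &&
                thread_stack[untouched] == FILL)
             ++untouched;
         return sizeof thread_stack - untouched;
     }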

I.e., there are just too many details of successful program deployment
that don't work when you get away from the rich and tame "classroom
environment".  This is especially true as we move towards scenarios
where things "talk to" each other, more (for folks who aren't prepared
to deal with a malloc/printf *failing*, how do they address "network
programming"?  Or, RPC/RMI?  etc.)

It's easy to see how someone can coax a piece of code to work in a
desktop setting -- and fall flat on their face when exposed to a
less friendly environment (i.e., The Real World).

[Cookies tonight (while it's below 100F) and build a new machine to replace
this one.  Replace toilets tomorrow (replaced flange in master bath today).]

Re: Portable Assembly
On Sun, 11 Jun 2017 00:39:41 -0700, Don Y

Quoted text here. Click to load it


Reading about Dyson spheres are we?  So how many trillion-trillion
devices would it take?


Quoted text here. Click to load it

To a 1st approximation, you can.  E.g., given just an equation, you
can count the arithmetic operations and approximate the number of
operand reads and result writes.

Certain analyses are very sensitive to the language being considered.
E.g., you'll get different results from analyzing an algorithm
expressed in C vs the same algorithm expressed in assembler because
the assembler version exposes low level minutia that is hidden by the
C version.


Quoted text here. Click to load it

You can invent it ... you just [currently] can't implement it.

And you probably even can get a patent on it since the USPTO no longer
requires working prototypes.



Quoted text here. Click to load it

SICP and EOPL both were being written during the time I was in grad
school.  I had some courses with Mitch Wand and I'm sure I was used as
a guinea pig for EOPL.

I acquired them later because they subsequently became famous as
foundation material for legions of CS students.


Quoted text here. Click to load it

I'm not *that* far behind you.  Many of my courses did have books, but
quite a few of those books were early (1st or 2nd) editions.

I have a 1st edition on denotational semantics that is pre-press and
contains inserts of hand drawn illustrations.


Quoted text here. Click to load it

I once met Gerry Sussman at a seminar.  Never had the opportunity to
take one of his classes.


Quoted text here. Click to load it

I have the Fortran IV manual my father used when he was in grad
school.  <grin>



Quoted text here. Click to load it

Indentation sensitive syntax (I-expressions) is a recurring idea to
rid the world of parentheses.

Given the popularity of Python, iexprs may eventually find a future.
OTOH, many people - me included - are philosophically opposed to the
idea of significant whitespace.  

If you want syntax visualization, use a structure editor.


Quoted text here. Click to load it

???  For what definition of "STM"?

Transactional memory - if that's what you mean - shouldn't require
refactoring code in that way.


Quoted text here. Click to load it

As long as you don't dismiss desktops and servers, etc.  

[Mainframes and minis as distinct concepts are mostly passe.  Super
and cluster computers, however, are very important].

Despite the current IoT and BYO device fads, devices are not all there
are.  Judging from some in the computer press, you'd think the legions
of office workers in the world would need nothing more than iPads and
Kinkos.  That isn't even close to being true.



Quoted text here. Click to load it

Again: CS is about computation and language theory, not about systems
engineering.

I got into it a while ago with a VC guy I met at a party.  He wouldn't
(let companies he was backing) hire anyone more than 5 years out of
school because he thought their skills were out of date.

I told him I would hesitate to hire anyone *less* than 5 years out of
school because most new graduates don't have any skills and need time
to acquire them.  I also said something about how the average new CS
grad would struggle to implement a way out of a wet paper bag.

Obviously there is a component of this that is industry specific, but
few (if any) industries change so fast that skills learned 5 years ago
are useless today.  For me, it was a scary look into the (lack of)
mind of modern business.  

YMMV,
George

Re: Portable Assembly
On 6/12/2017 8:27 PM, George Neuner wrote:
Quoted text here. Click to load it

Matrioshka Brain -- "concentric" Dyson spheres each powered by the waste
heat of the innermore spheres.

I didn't do the math as I couldn't figure out what a good representative
weight for a "wired" SOIC SoC might be...

Quoted text here. Click to load it

Yes, but only for evaluating *relative* costs/merits of algorithms.
It assumes you can "value" the costs/performance of the different
operators in some "intuitive" manner.

This doesn't always hold.  E.g., a more traditionally costly operation
might be "native" while the *expected* traditional operation has to be
approximated or emulated.

Quoted text here. Click to load it

Sure you can!  You just have to find someone sufficiently motivated to
apply their meatware to the problem!  There's nothing specifying the
*time* that the implementation needs to take to perform the operation!

Quoted text here. Click to load it

Having not seen SICP, it's possible the notes for GS's class found
their way into it -- or, at least, *shaped* it.

Quoted text here. Click to load it

It's not just *when* you got your education but what the folks teaching
opted to use as their "teaching materials".  Most of my "CS" professors
obviously considered themselves "budding authors" as each seemed unable
to find a suitable text from which to teach and opted, instead, to
write their own.

OTOH, all my *other* classes (including the "EE" ones) had *real*
textbooks.

Quoted text here. Click to load it

Unfortunately, I never realized the sorts of folks I was surrounded by,
at the time.  It was "just school", in my mind.

Quoted text here. Click to load it

Still doesn't work without *vision*!

Quoted text here. Click to load it

How many nested levels of parens can you keep track of if I'm dictating
the code to you over the phone and your eyes are closed?  Will I be disciplined
enough to remember to alert you to the presence of every punctuation mark
(e.g., paren)?  Will you be agile enough to notice when I miss one?

Quoted text here. Click to load it

"Mainframe" is a colloquial overloading to reference "big machines"
that have their own dedicated homes.  The data center servicing your
bank is a mainframe -- despite the fact that it might be built of
hundreds of blade servers, etc.

"Desktop" is the sort of "appliance" that a normal user relates
to when you say "computer".  He *won't* think of his phone even
though he knows it's one.

He certainly won't think of his microwave oven, furnace, doorbell,
etc.

Quoted text here. Click to load it

I think it depends on the "pedigree".  When I was hired at my first
job, the boss said, outright, "I don't expect you to be productive,
today.  I hired you for 'tomorrow'; if I wanted someone to be productive
today, I'd have hired from the other side of the river -- and planned on
mothballing him next year!"

From the few folks that I interact with, I have learned to see his point.
Most don't know anything about the "history" of their technology or the
gyrations as it "experimented" with different things.  They see "The Cloud"
as something new and exciting -- and don't see the parallels to
"time sharing", centralized computing, etc. that the industry routinely
bounces through.  Or, think it amazingly clever to turn a PC into an
X terminal ("Um, would you like to see some REAL ones??  You know, the
idea that you're PILFERING?")

Employers/clients want to know if you've done THIS before (amusing
if it's a cutting-edge project -- that NO ONE has done before!) as
if that somehow makes you MORE qualified to solve their problem(s).
I guess they don't expect people to LEARN...

Quoted text here. Click to load it

A lady friend once told me "Management is easy!  No one wants to take
risks or make decisions so, if YOU will, they'll gladly hide behind you!"

/Pro bono/ day tomorrow.  Last sub 100F day for at least 10 days (103 on Wed
climbing linearly to 115 next Mon with a LOW of 82F) so I'm hoping to get my
*ss out of here bright and early in the morning!  <frown>

45 and raining, you say...  :>

Re: Portable Assembly
On Tuesday, June 13, 2017 at 1:42:17 AM UTC-4, Don Y wrote:
Quoted text here. Click to load it
[]
Quoted text here. Click to load it

I'm jumping in late here so forgive me if you covered this.

Algorithmic analysis is generally order of magnitude (the familiar
Big O notation) and independent of hardware implementation.

Quoted text here. Click to load it

I'm not quite sure what you are saying here, Don.
What's the difference between "native" and *expected*?

Is it that you *expected* the system to have a floating point
multiply, but the "native" hardware does not so it is emulated?
  
[]
first:
Quoted text here. Click to load it
second:
Quoted text here. Click to load it
third:
Quoted text here. Click to load it

I'm confused here too, Don, unless the quotation levels are off.
Is it you that said the first and third comments above?
They seem contradictory. (or else you are referencing
different contexts?)


[lots of other interesting stuff deleted]

ed

Re: Portable Assembly
Hi Ed,

On 6/13/2017 2:24 PM, Ed Prochak wrote:

Quoted text here. Click to load it

Correct.  But, it's only "order of" assessments.  I.e., is this
a constant time algorithm?  Linear time?  Quadratic?  Exponential?
etc.

There's a lot of handwaving in O() evaluations of algorithms.
What's the relative cost of multiplication vs. addition operators?
Division?  etc.

With O() you're just trying to evaluate the relative merits of one
approach over another in gross terms.

Quoted text here. Click to load it

Or, exactly the reverse:  that it had the more complex operator
but not the "simpler" (expected) one.

We *expect* integer operations to be cheap.  We expect logical
operators to be <= additive operators <= multiplication, etc.
But, that's not always the case.

E.g., having a "multiply-and-accumulate" instruction (common in DSP)
can eliminate the need for an "add" opcode (i.e., multiplication is as
"expensive" as addition).

I've designed (specialty) CPU's that had hardware to support direct
(native) implementation of DDA's.  But, trying to perform a simple
"logical" operation would require a page of code (because there were no
logical operators so they'd have to be emulated).

Atari (?) made a processor that could only draw arcs -- never straight
lines (despite the fact that line segments SHOULD be easier).

Limbo initially took the approach of having five "base" data types:
- byte
- int ("long")
- big ("long long")
- real ("double")
- string
(These are supported directly by the underlying VM)  No pointer types.
No shorts, short-reals (floats), etc.  If you want something beyond
an integer, you go balls out and get a double!

The haughtiness of always relying on "gold" instead of "lead" proved
impractical, in the real world.  So, there are now things like native
support for Q-format -- and beyond (i.e., you can effectively declare the
value of the rightmost bit AND a maximum value representable by
that particular "fixed" type):

     hourlyassessment: type fixed(0.1, 40.0);
     timespentworking, timeinmeetings: hourlyassessment;

     LETTER: con 11.0;
     DPI: con 400;
     inches: type fixed(1/DPI, LETTER);
     topmargin: inches;

Likewise, the later inclusion of REFERENCES to functions as a compromise
in the "no pointers" mentality.  (you don't *need* references as you
can map functions to integer identifiers and then use big "case"
statements (the equivalent of "switch") to invoke one of N functions
as indicated by that identifier; just far less efficiently (enough
so that you'd make a change to the LANGUAGE to support it?))
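
A C++ stand-in for that trade-off, since Limbo isn't handy here: dispatch
through an integer identifier and a big case/switch versus through a
"reference" (a plain function pointer); the function names are invented:

     #include <cstdio>

     static void start(void) { puts("start"); }
     static void stop(void)  { puts("stop");  }

     // The "no references" route: hand out integer ids and decode them in
     // one big case/switch -- every call pays for the decode, and adding a
     // function means editing the dispatcher.
     void invoke_by_id(int id) {
         switch (id) {
         case 0:  start(); break;
         case 1:  stop();  break;
         default:          break;
         }
     }

     typedef void (*handler)(void);       // the "reference" route

     int main() {
         invoke_by_id(1);                 // decode on every call
         handler h = start;               // bind once...
         h();                             // ...call directly thereafter
         return 0;
     }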

While not strictly on-topic, it's vindication that "rough" approximations
of the cost of operators can often be far enough astray that you need to
refine the costs "in practice".  I.e., if "multiplication" could just
be considered to have *a* cost, then there'd be no need for all those
numerical data types.

So... you can use O-notation to compare the relative costs of different
algorithms on SOME SET of preconceived available operators and data types.
You have some implicit preconceived notion of what the "real" machine is
like.

But, mapping this to a real implementation is fraught with opportunities
to come to the wrong conclusions (e.g., if you think arcs are expensive
as being built from multiple chords, you'll favor algorithms that minimize
the number of chords used)

Quoted text here. Click to load it

Yes.


Read them again; the subject changes:

1.  You can't invent that magical language that allows you to solve
homework assignments with a single operator  <grin>
2.  You can INVENT it, but can't IMPLEMENT it (i.e., it's just a conceptual
language that doesn't run on any REAL machine)
3.  You *can* IMPLEMENT it; find a flunky to do the work FOR you!
(tongue firmly in cheek)


Re: Portable Assembly
AT Wednesday 14 June 2017 12:57, Don Y wrote:

Quoted text here. Click to load it

I found many such evaluations grossly wrong.  We had some iterative
solutions praised as taking far fewer iterative steps than others.  But
nobody took into account that each step was much more complicated and took
much more CPU time than one of the solutions that took more steps.
And quite often the simpler solution worked for the more general case
whereas the "better" one worked only under limited conditions.

--  
Reinhardt


Re: Portable Assembly
On 6/13/2017 10:26 PM, Reinhardt Behm wrote:
Quoted text here. Click to load it

That doesn't invalidate the *idea* of modeling algorithms using some
set of "abstract operation costs".

But, it points to the fact that the real world eventually intervenes;
and, it's a "bitch".  :>  Approximations have to eventually give way to
REAL data!

For example, I rely heavily on the (paged) MMU in the way my RTOS
handles "memory objects".  I'll map a page into another processes
address space instead of bcopy()-ing the contents of the page
across the protection boundary.  A win, right?  "Constant time"
operation (instead of a linear time operation governed by the
amount of data to be bcopied).

But, there's a boat-load of overhead in remapping pages -- including
a trap to the OS to actually do the work.  So, what you'd *think* to be
a more efficient way of handling the data actually is *less* efficient.

(Unless, of course, you can escalate the amount of data involved
to help absorb the cost of that overhead)

OTOH, there are often "other concerns" that can bias an implementation
in favor of (what appears to be) a less efficient solution.

E.g., as I work in a true multiprocessing environment, I want to
ensure a caller can't muck with an argument for which it has provided
a "reference" to the called function WHILE the called function is
(potentially) using it.  So, the MMU lets me mark the page as
immutable (even though it *should* be mutable) UNTIL the called
function has completed.

[This allows the caller to *access* the contents of the page
concurrently with the called function -- but, lets the OS
intervene if the caller tries to alter the contents prematurely.
The "write lock" has value that couldn't be implemented with the
bcopy() approach -- think of large objects -- and is "cheaper"
to implement once you've already decided to take the hit for
manipulating the page tables]
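
As a rough single-process analogy (Linux-flavored POSIX mmap/mprotect -- *not*
the RTOS described above), this is the flavor of that write-lock: the page
stays readable to the "caller", but a premature store traps instead of
silently racing the "callee":

     #include <sys/mman.h>
     #include <unistd.h>
     #include <cstring>
     #include <cstdio>

     int main() {
         size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
         char *page = (char *)mmap(nullptr, pagesz, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
         if (page == MAP_FAILED) return 1;

         strcpy(page, "argument data");   // caller builds the argument

         // "Call": write-lock the page.  The caller can still read it, but
         // a premature store now faults instead of racing the callee.
         mprotect(page, pagesz, PROT_READ);

         printf("callee sees: %s\n", page);   // callee works on the page

         // "Return": hand write access back to the caller.
         mprotect(page, pagesz, PROT_READ | PROT_WRITE);
         munmap(page, pagesz);
         return 0;
     }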

The "real machine" fixes values to all those constants (K) in the
O()-evaluation.  And, can muck with the decision making process
in certain sets of constraints.


Re: Portable Assembly
On Wednesday, June 14, 2017 at 2:58:42 AM UTC-4, Don Y wrote:
Quoted text here. Click to load it
[]
Quoted text here. Click to load it

Exactly right!

The BIG-O notation too often is depicted as O() when it is
really O(n), where n is the input set size.  What gets lost sometimes
is that people treat O(n) as a comparison at a given set size, and that
is the error.  (I don't think you fall into this error, Don.)

BIG-O analysis allows you to do some testing at a data set size n1
and then make a rough estimate of the run time for n2 > n1.
This can be done for any hardware and processor instruction set.
Its purpose is to avoid the naive estimate:
 "I ran this on a data set of 100 and it took 0.5 microseconds,
  so on the production run of 1,000,000 it should take less
  than a minute (50 seconds)."

Hopefully, folks here are not so naive as to depend on just O(n).
Your point is very important and worth repeating:
 the analysis of algorithms can be more precise when it can
take into account the features of the implementation environment.
(I hope I phrased that close to what you meant.)

have a great day
  ed

Re: Portable Assembly
Hi Ed,

On 6/19/2017 6:48 AM, Ed Prochak wrote:
Quoted text here. Click to load it


It's helpful for evaluating the *relative* costs of different
algorithms where you assign some abstract cost to particular
classes of operations.  It is particularly effective in a
classroom setting -- where newbies haven't yet learned to THINK
in terms of the costs of their "solutions" (algorithms).

For example, one way to convert a binary number to an equivalent
decimal value is to count the binary value down towards zero (i.e.,
using a "binary subtract 1") while simultaneously counting the
decimal value *up* (i.e., using a "decimal add 1").  Clearly, the
algorithm executes in O(n) time (n being the magnitude of the number
being converted).

Other algorithms might perform an operation (or set of operations)
for each bit of the argument.  So, they operate in O(log(n)) time
(i.e., a 32b value takes twice as long to process as a 16b value).

This is great for a "schoolbook" understanding of the algorithms
and an idea of how the "work" required varies with the input
value and/or range of supported values (i.e., an algorithm may
work well with a certain set of test values -- yet STUN the
developer when applied to *other* values).

But, when it comes to deciding which approach to use in a particular
application, you need to know what the "typical values" (and worst
case) to be processed are likely to be.  And, what the costs of the
operations required in each case (the various 'k' that have been
elided from the discussion).

If you're converting small numbers, the cost of a pair of counters
can be much less than the cost of a "machine" that knows how to
process a bit at a time.  So, while (in general) the counter approach
looks incredibly naive (stupid), it can, in fact, be the best
approach in a particular set of conditions!
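
For the curious, a C++ sketch of both conversions described above, with the
decimal value kept as individual digits to stand in for the "decimal counter"
(the 12-digit width and the test value are arbitrary):

     #include <cstdio>

     struct Dec { int digit[12]; };   // decimal digits, least significant first

     static void dec_add1(Dec *d) {                // "decimal add 1"
         for (int i = 0; i < 12; ++i) {
             if (++d->digit[i] < 10) return;
             d->digit[i] = 0;                      // carry into the next digit
         }
     }

     static void dec_double_add(Dec *d, int bit) { // double, then add 0 or 1
         int carry = bit;
         for (int i = 0; i < 12; ++i) {
             int v = d->digit[i] * 2 + carry;
             d->digit[i] = v % 10;
             carry = v / 10;
         }
     }

     static Dec by_counting(unsigned n) {          // O(n) in the magnitude
         Dec d = {};
         while (n--) dec_add1(&d);
         return d;
     }

     static Dec by_bits(unsigned n) {              // O(log n): one pass per bit
         Dec d = {};
         for (int i = 31; i >= 0; --i)
             dec_double_add(&d, (n >> i) & 1);
         return d;
     }

     static void show(const Dec &d) {
         for (int i = 11; i >= 0; --i) putchar('0' + d.digit[i]);
         putchar('\n');
     }

     int main() {
         show(by_counting(37));   // trivial machinery, fine for small values
         show(by_bits(37));       // more machinery, wins as values grow
         return 0;
     }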

Quoted text here. Click to load it

If O(n), you'd expect it to be done in 5ms (0.5us*1,000,000/100).

By thinking in terms of HOW the algorithm experiences its costs,
you can better evaluate the types of operations (implementations)
you'd like to favor/avoid.  If you know that you are intending to
deploy on a target that has a particular set of characteristics
for its operations, you might opt for a different algorithm
to avoid the more expensive operators and exploit the cheaper ones.

Many years ago, I wrote a little piece of code to exhaustively
probe a state machine (with no buried state) to build a state
transition table empirically with only indirect observation
of the "next state" logic (i.e., by looking at the "current state"
and applying various input vectors and then tabulating the
resulting next state).

[This is actually a delightfully interesting problem to solve!
Hint:  once you've applied an input vector and clocked the machine,
you've accumulated knowledge of how that state handles that input
vector -- but, you're no longer in that original "current state".
And, you still have other input vectors to evaluate for it!]

How do you evaluate the efficacy of the algorithm that "walks"
the FSM?  How do you determine how long it will take to map
a FSM of a given maximum complexity (measured by number of
states and input vector size)?  All you know, /a priori/ is
the time that it takes to apply a set of inputs to the machine
and observe that *one* state transition...

[Exercise left for the reader.]

Quoted text here. Click to load it

The more interesting cases are O(1), O(n^2), etc.  And, how to
downgrade the cost of an algorithm that *appears*, on its surface,
to be of a higher order than it actually needs to be.

Quoted text here. Click to load it

Too late -- 119F today.


Re: Portable Assembly
On Monday, June 19, 2017 at 7:45:25 PM UTC-4, Don Y wrote:
Quoted text here. Click to load it
[Lots of Don's good advice left out for a little socializing]

Quoted text here. Click to load it

Oh crap, I slipped my units.

Thanks Don.
[]
Quoted text here. Click to load it

Wow, it cooled off a bit here, back to the 70's.
Well, try to stay cool.
ed

Re: Portable Assembly
On 6/20/2017 3:07 PM, Ed Prochak wrote:

Quoted text here. Click to load it

Our highs aren't expected to fall below 110 for at least a week.
Night-time lows are 80+.

Hopefully this will pass before Monsoon starts (in a few weeks)

Re: Portable Assembly
On 06/06/17 22:42, George Neuner wrote:

Quoted text here. Click to load it

Modern C++ has quite powerful type inference now, with C++11 "auto".
This allows you to have complex types, encoding lots of information in
compile-time checkable types, while often being able to use "auto" or
"decltype" to avoid messy and hard to maintain source code.

The next step up is "concepts", which are a sort of meta-type.  A
"Number" concept, for example, might describe a type that has arithmetic
operations.  Instead of writing your code specifying exactly which
concrete types are used, you describe the properties you need your types
to have.
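
A small illustration of the idea (using C++20 "concepts" syntax, which
postdates the Concepts TS under discussion; the particular "Number"
requirements shown are just an example):

     #include <concepts>

     // A "Number" concept of the sort described above: any type offering
     // these arithmetic operations satisfies it.
     template<typename T>
     concept Number = requires(T a, T b) {
         { a + b } -> std::convertible_to<T>;
         { a * b } -> std::convertible_to<T>;
     };

     template<Number T>
     T scale(T value, T factor) { return value * factor; }

     int main() {
         auto x = 3.5;                   // deduced as double
         decltype(x) y = scale(x, 2.0);  // same type as x, never spelled out
         return static_cast<int>(y);
     }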

As you say, however, concrete type declarations cannot be eliminated -
in C++ they are essential when importing or exporting functions and data
between units.

Quoted text here. Click to load it

Yes.  You can do /some/ of the checking at compile-time, but not all of
it.  And a sophisticated whole-program optimiser can eliminate some of
the logical run-time checks, but not all of them.

Quoted text here. Click to load it


Re: Portable Assembly
On 17-06-06 07:24 , George Neuner wrote:
Quoted text here. Click to load it

     [snip]

Quoted text here. Click to load it

None of the Ada *type* checks are done at runtime. Only *value* checks  
are done at runtime.

Quoted text here. Click to load it

All Ada compilers I have seen have an option to disable runtime checks.

Quoted text here. Click to load it

There are other, more proof-oriented tools that can be used for that,  
for example the CodePeer tool from AdaCore, or the SPARK toolset.

It is not uncommon for real Ada programs to be proven exception-free  
with such tools, which means that it is safe to turn off the runtime checks.

--  
Niklas Holsti
Tidorum Ltd
Re: Portable Assembly
On Wed, 7 Jun 2017 00:30:40 +0300, Niklas Holsti

Quoted text here. Click to load it

Note that I said "type/value", not simply "type".

In any case, it's a distinction without a difference.  The value
checks that need to be performed at runtime are due mainly to use of
differing types that have overlapping range compatibility.

The remaining uses of runtime checks are due to I/O where input values
may be inconsistent with the types involved.


Quoted text here. Click to load it

Yes.  And if you were following the discussion, you would have noticed
that that comment was directed not at Ada, but toward runtime checking
in a hypothetical "safer" C.

George

Re: Portable Assembly
On 05/06/17 16:39, Mike Perkins wrote:
Quoted text here. Click to load it

The LLVM "assembly" is intended as an intermediary language.  Front-end  
tools like clang (a C, C++ and Objective-C compiler) generate LLVM  
assembly.  Middle-end tools like optimisers and linkers "play" with it,
and back-end tools translate it into target-specific assembly.  Each  
level can do a wide variety of optimisations.  The aim is that the whole  
LLVM system can be more modular and more easily ported to new  
architectures and new languages than a traditional multi-language  
multi-target compiler (such as gcc).  So LLVM assembly is not an  
assembly language you would learn or code in - it's the glue holding the  
whole system together.

Quoted text here. Click to load it

Well, yes - of course C is the sensible option here.  Depending on the  
exact type of code and the targets, Ada, C++, and Forth might also be  
viable options.  But since there is no such thing as "portable  
assembly", it's a poor choice :-)  However, the thread has lead to some  
interesting discussions, IMHO.
