MCU mimicking a SPI flash slave - Page 4

Re: MCU mimicking a SPI flash slave
David Brown wrote on 6/18/2017 9:15 PM:
Quoted text here. Click to load it

I don't know the "new" C, I don't work with it.  What improved?


Quoted text here. Click to load it

Like what?  It is not really "limited".  The GA144 assembly language is...  
well, assembly language.  Would you compare the assembly language of an x86  
to Java or C?


Quoted text here. Click to load it

You keep talking in vague terms, saying the MARC4 was more "powerful".  
Address space is not the power of the language.  It is the hardware  
limitation of the CPU.  The GA144 was designed with a different philosophy.  
I would say for a different purpose, but it was not designed for *any*  
purpose.  Chuck designed it as an experiment while exploring the space of  
minimal hardware processors.  The capabilities come from the high speed of  
each processor and the comms capability.

I compare the GA144 to FPGAs more than to ARMs.  The CPUs are small, very  
fast and plentiful (relatively), like the LUTs in an FPGA.  Communications  
are very fast and the processor can automatically halt for synchronization  
with the other processor.  Letting a processor sit idle is a power  
advantage, not a speed disadvantage.  Processing speed is plentiful in the  
GA144 so it does not need to be optimized.  As with the XMOS, using it  
requires some adjustment in your thinking... a lot more than the XMOS, in fact.


Quoted text here. Click to load it

Perhaps, but I would still emphasize the issue that MCUs in general and  
FPGAs in general cover a lot of territory.  XMOS only excels in a fairly  
small region.  The GA144 is optimal for a microscopically small region.


Quoted text here. Click to load it

If the XMOS price were better, I would say they would be much more worth  
learning.


Quoted text here. Click to load it

I can't tell you how many people think FPGAs are complicated to design,  
power hungry and expensive.  None of these three things is true.

My only complaints are they tend to be in very fine pitch BGA packages (many  
with very high pin counts), only a few smaller devices are available and  
they don't integrate much analog.  I'd like to see a small FPGA rolled with  
a small MCU (ARM CM4) with all the standard peripherals an MCU normally  
includes, brownout, ADC/DAC, etc.  They could have done this affordably a  
decade ago if they wanted, but the FPGA companies have a particular business  
model that does not include this market.  Lattice and Microsemi aren't as  
committed to the mainstream FPGA market and so offer some limited products  
that differ.


Quoted text here. Click to load it

I have a laptop with a 17 inch screen and the fonts are smaller than my old  
desktop with a 17 inch monitor because the pixels are smaller (HD vs. 1280  
horizontal resolution).  The Windows settings for adjusting font sizes  
and such don't work properly across lots of apps.  Even my Excalibur  
calculator is very hard to read.

Bottom line is don't bad mouth an API because it doesn't *please* you.  Your  
tastes aren't the only standard.


Quoted text here. Click to load it

No, it is functional, not just illustrating.  It is in the *language*, not  
just the editor.  It's all integrated, not in the way the tools in a GUI are  
integrated, but in the way your heart, lungs and brain are integrated.


Quoted text here. Click to load it

That's what the color does.


Quoted text here. Click to load it

I wouldn't know C++.


Quoted text here. Click to load it

Yeah, Charles Moore isn't in the business of supporting language standards.  
He created Color Forth for himself and has shared it with others.  GA is  
using it to support their products and they are the best source for  
information now.


Quoted text here. Click to load it

You mentioned optimizing compilers, what was your point in bringing it up?  
Optimizations are not in any language that I'm aware of.  You seem to think  
there is something lacking in the Forth language, but you don't say what  
that would be.


Quoted text here. Click to load it

I don't know what is meant by "limited practicality for modern programming".  
As for griping about the use of primary colors and large fonts, I consider  
that throwing a tantrum.  How about discussing something important and useful?

--  

Rick C

Re: MCU mimicking a SPI flash slave
On 19/06/17 06:54, rickman wrote:
Quoted text here. Click to load it

Well, starting from pre-K&R C and moving to "ANSI" C89/C90, it got
prototypes, proper structs, const, volatile, multiple different sized
types, etc.  I am sure you are very familiar with this C - but my point
is that even though the history of C is old like that of Forth, even at
that point 25+ years ago C had moved on and improved significantly as a
language, compared to its original version.

Some embedded developers still stick to that old language, rather than
moving on to C99 with inline, booleans, specifically sized types, line
comments, mixing code and declarations, and a few other useful bits and
pieces.  Again, C99 is a much better language.

C11 is the current version, but does not add much that was not already
common in implementations.  Static assertions are /very/ useful, and the
atomic types have possibilities but I think are too little, too late.

Quoted text here. Click to load it

The size of the memories (data space, code space and stack space) is the
most obvious limitation.

Quoted text here. Click to load it

True - I was not clear in distinguishing the language from the hardware
here.  I meant the hardware in this case.

Quoted text here. Click to load it

Minimal systems can be interesting for theory, but are rarely of any use
in practice.

Quoted text here. Click to load it

I agree with the principle - as I say, the GA144 has some interesting
ideas and technology.  But you need more power in each cpu to do
something useful.  If you want to use animal power to draw a plough, you
want a horse.  An army of ants might have a theoretically greater total
strength and a better total-power to food cost ratio, but it is still
hopeless as a solution.

Quoted text here. Click to load it

Fair enough.


Also true.


That certainly /was/ the case.  But yes, for a good while now there have
been cheap and low power FPGAs available.  As for complicated to design
- well, I guess it's easy when you know how.  But you do have to know
what you are doing.  Tools are better, introductory videos are better,
etc. - there are lots more learning resources than in the "old" days.
And once you know (at least roughly) what you are doing, the modern
tools and computers make the job a good deal faster than before.  I
remember some 20 years ago working with a large PLD - place and route
took about 8 hours, and debugging the design was done by pulling a
couple of internal signals out to spare pins and re-doing the place and
route.  (By that stage of the project it was too late to think about
alternative chips.)

Quoted text here. Click to load it

Variety of choices is always nice.  I agree that devices like those
could have a wide range of uses.

Quoted text here. Click to load it

Surely you don't use that laptop for normal work?  A laptop is okay for
when you need a portable office, but I have three large monitors in my
office.  And most of my development is done on Linux, where font scaling
works most of the time.  (Though personally, I like small fonts with
lots of text on the screen - my eyes are fine for that, when I have my
contacts in.  Without them, I can't focus further than my nose!).

Quoted text here. Click to load it

My tastes are the most important standard for /me/, and the one that
affects my impression when I look at a tool.  Of course I realise that
other people have other tastes.  And perhaps some people have poor
eyesight and a job that requires them to work entirely on a small
laptop.  But I find it hard to accept that an IDE should be designed
solely on the basis of being clear to someone with bad eyesight who
works with a tiny old monitor.  The colorForth stuff seems to be
designed by and /for/ a single person - Chuck Moore.  That's fine for
him for a personal project, but it is highly unlikely to be a good way
to make tools for more general use.

Quoted text here. Click to load it

No, it is syntax highlighting.

There is a 4 bit "colour token" attached to each symbol.  These
distinguish between variables, comments, word definitions, etc.  There
is /nothing/ that this gives you compared to, say, $ prefixes for
variables (like PHP), _t suffixes for types (common convention in C),
etc., with colour syntax highlighting.  The only difference is that the
editor hides the token.  So when you have both var_foo and word_foo,
they are both displayed as "foo" in different colours rather than
"var_foo" and "word_foo" in different colours.

That is all there is to it.


Quoted text here. Click to load it

The colour doesn't do it - the language makes a clearer distinction
between compile-time and run-time, and the colour helps you see that.
You had the same distinction in Forth without colouring.

Having a separation here is both a good thing and a bad thing, in
comparison to the way C handles it, and the way C++ handles it.  There
is room in the world for many models of language.

Quoted text here. Click to load it

Without going into details, you are probably aware that in C you
sometimes need a "real" constant.  For example, you can't make a
file-level array definition unless the size is absolutely fixed:

int xs[16];

That's allowed.  But you can't write this:

int square(int x) { return x * x; }

int xs[square(4)];

"square(4)" is not a constant in C terms.  However, you would expect a
compiler to calculate the value at compile time (assuming it can see the
definition of the "square" function) for the purposes of code optimisation.

In C++11 onwards, you can write:

constexpr int square(int x) { return x * x; }
int xs[square(4)];

This tells the compiler that it can calculate "square" at compile time
if the parameters are known at compile time, but still allows the
function to be used as a run-time function if the parameters are not
known at compile time.    

Quoted text here. Click to load it

With all due respect to Chuck Moore and his creations, this is not a way
to conduct a professional business.

Quoted text here. Click to load it

I gave a list somewhere in another post.  But my key "missing features"
from Forth are good static checking, typing, methods of working with
data of different sizes, safe ways to define and use structures, and
ways to modularise the program.

For example, take the "FLOOR5" function from the Wikipedia page:

: FLOOR5 ( n -- n' )   DUP 6 < IF DROP 5 ELSE 1 - THEN ;


The C version is:

int floor5(int v) {
  return (v < 6) ? 5 : (v - 1);
}


Suppose the Forth programmer accidentally writes:

: FLOOR5 ( n -- n' )   6 < IF DROP 5 ELSE 1 - THEN ;

It's an easy mistake to miss, and you've made a perfectly valid Forth
word definition that will be accepted by the system.  But now the
comment does not match the usage.  It would be entirely possible for the
language to provide a formalised and standardised way of specifying
input and output parameters in a way that most cases could be
automatically checked by the tools.  Conventionalised comments are /way/
out of date as an aid to automated correctness checking.

And then suppose you want this function to work with 32-bit values -
regardless of the width of a cell on the target machine.  Or 64-bit
values on a 16-bit cell system.


(If you have good answers here, maybe you will change my mind - at least
a little!)

Quoted text here. Click to load it

See above for a simple example.

But I am not griping about the use of colour - I am mocking the idea
that adding colour to the IDE is a big innovation in the language.



Re: MCU mimicking a SPI flash slave
Quoted text here. Click to load it
<SNIP>
Quoted text here. Click to load it

A less obvious limitation that goes right to the heart of the
parallel processing that is claimed, is processor connectivity.
I explored parallelism (with the parallel prime sieve) and
the fixed rectangular grid is absolutely bonkers for any serious
application where calculation power is needed.
In this case I wanted to have two pipelines that come together.
It starts as a puzzle, then it turns out to be hardly possible.

Two crossing pipelines have to pass through one processor.
If there is any structure to the data, that one processor would
lack the processing power to make that possible.
On transputers you could have hypercube arrangements, so there
is no need to do that.  On top of that, it would be easy:
you would just define two unrelated pass-through processes.

A definitive measure of the quality of the GA144 would be as a
bitcoin calculator: the ratio between the cost of the
electricity consumed and the value of the bitcoins generated.
It would be *bad*.

<SNIP>
Quoted text here. Click to load it

Indeed.

<SNIP>

Quoted text here. Click to load it

100% agreed. Adding colour is equivalent to a prefix character.
Then, if you want, Vim can add the colour for you based on the
prefix character.

Notations can be important innovations, as Newton and Leibniz showed,
but the really big innovation they made was the differential calculus itself.
If there is anything underlying Colorforth, it is tagged objects,
which is hardly spectacular.

Groetjes Albert
Quoted text here. Click to load it
--  
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
Re: MCU mimicking a SPI flash slave
On 19/06/17 12:54, Albert van der Horst wrote:
Quoted text here. Click to load it

You make some very good points here.  The comparison has been made with
FPGAs.  A great deal of the work (the physical hardware, and also the
development tool's work) in an FPGA is about connections - moving
signals between different nodes on the device.  Imagine an FPGA where
each node (block of LUTs, registers, etc.) could only communicate
directly with its immediate 2D neighbours.

The GA144 might work okay for problems that naturally fit a 2D grid that
happens to fit the dimensions of the chip (8 x 18, I think).  But it
will be poor on anything else.

In comparison, on the XMOS any virtual cpu (hardware thread) can connect
directly to any other - either with a "permanent" channel (existing for
the lifetime of the program) or created temporarily as needed.  Exactly
the same software system is used whether you are communicating between
hardware threads on the same core, threads on different cores on the
same chip, or threads of different cores on different XMOS chips.
Clearly there are latency and bandwidth differences, but the logic is
the same.

Quoted text here. Click to load it


Re: MCU mimicking a SPI flash slave
Albert van der Horst wrote on 6/19/2017 6:54 AM:
Quoted text here. Click to load it

You can also do that with the GA144.


Quoted text here. Click to load it

Bad compared to FPGAs where the design is optimized at a very low level or  
compared to a custom ASIC which is optimized from the ground up for this  
application.  I don't believe any other devices are currently used to mine  
bitcoin.  Software became obsolete some time back.


Quoted text here. Click to load it

Does that prefix character not take the place of a Forth word?

--  

Rick C

Re: MCU mimicking a SPI flash slave
David Brown wrote on 6/19/2017 4:30 AM:
Quoted text here. Click to load it

As has Forth.  The 2012 standard is an improvement over the previous  
version, which in turn improved on the version before it, and the  
initial ANSI version was an improvement over the multiple flavors of Forth  
prior to that, if only for the standardization.


Quoted text here. Click to load it

I think the real issue is you are very familiar with C while totally  
unfamiliar with Forth.


Quoted text here. Click to load it

As I said, that is not a language issue, that is a device issue.  But you  
completely blow it when you talk about the "stack" limitation.  Stacks don't  
need to be MBs.  It's that simple.  You are thinking in C and the other  
Algol-derived languages, not Forth.


Quoted text here. Click to load it

That comment would seem to indicate you are very familiar with minimal  
systems.  I suspect the opposite is true.  I find minimal CPUs to be *very*  
useful in FPGA designs allowing a "fast" processor to be implemented in even  
very small amounts of logic.


Quoted text here. Click to load it

You should save the poor analogies for other conversations.  An army of ants  
can move a large pile of sand overnight by themselves.  The horse will move  
the sand only if you hitch it to the right equipment and spend your day  
cracking the whip, and it will complain the whole time that it doesn't have  
enough food and water.


Quoted text here. Click to load it

20 years ago maybe.


Quoted text here. Click to load it

MCUs are no different.  A newbie will do a hack job.  I once provided some  
assistance to a programmer who needed to spin an FPGA design for his  
company.  They wouldn't hire me to do it because they wanted to develop the  
ability in house.  With minimal assistance (and I mean minimal) he first  
wrote a "hello, world" program for the FPGA.  He then went on to write his  
application.

The only tricky part of programming FPGAs is when you need to optimize  
either speed or capacity, or worse, both!  But I believe the exact same thing  
is true about MCUs.  Most projects can be done by beginners and indeed *are*  
done by beginners.  That has been my experience.  In fact, that is the whole  
reason for the development and use of the various tools for programming,  
making them usable by programmers with lesser skills, enabling a larger  
labor pool at a lower price.

The only magic in FPGA design is the willingness to wade into the waters and  
get your feet wet.


Quoted text here. Click to load it

Devices are not often made for broad markets.  FPGAs in particular are  
designed for the comms market and everyone else is along for the ride.  As I  
mentioned, Lattice and Microsemi are branching out a bit to address either a  
different market (Lattice ice40 parts are aimed at the cell phone market) or  
a broader market (Microsemi devices), but with limited success.


Quoted text here. Click to load it

At this point my laptop is my only computer (other than a five year old  
netbook used for extreme emergencies).  Its only problem is it's a Lenovo  
piece of crap.  I don't have an office so much as I am portable; I rarely  
spend more than half a week running in one place.  I just wish I could find  
a reasonably priced attached oscilloscope.  The model I'd like to have is  
$1,500.  I'd like to spend more like $500 and then I'd be totally portable.


Quoted text here. Click to load it

There you go with the extremes again.  Colorforth isn't designed "solely"  
for people with bad eyesight.  It is designed to be as useful as possible.  
It is clear you have not learned enough about it to know what is good and  
what is bad.  You took one quick look at it and turned away.


Quoted text here. Click to load it

You just said it is more than syntax highlighting.  It is like type  
definitions in other languages.  It is built into the language, which won't  
work without it.  That's the part you aren't getting.  Compare Colorforth to  
ANSI Forth and you will see what I mean.


Quoted text here. Click to load it

If you don't understand, learn how it is done in ANSI Forth and then tell me  
if THAT is part of the language or not.


Quoted text here. Click to load it

Here is a perfect example of why you think Forth has not evolved.  There is  
nothing in even the earliest Forth that precludes this computation from  
being done at compile time.  So how do you improve on perfection?  <grin>


Quoted text here. Click to load it

Whatever.  He isn't running a business by promoting Forth.  As I've said, he  
wrote Forth for himself, and others have liked the ideas he came up with and  
learned them.

GA is a business that uses ColorForth.  ANSI Forth is a standard that is  
widely used.

Is Arduino a standard?  No, yet it is widely used even in business.  
Standards are useful in some cases, in other cases not needed.


Quoted text here. Click to load it

This has been discussed before and some have experimented with writing code  
to do this.  But it is only easy in simple examples like this one.  Forth is  
a very flexible and powerful language which can make it hard to implement  
this for all cases.


Quoted text here. Click to load it

Just as in other languages, like Ada and VHDL (both strongly typed), you  
would need to write different code.

I'm not interested in changing your mind, only in showing you your  
misunderstandings about Forth.  I'm not actually the right person for the  
job being a relative amateur with Forth, so I crossposted to the Forth group  
so others could do a better job.  That may bring in some wild cards however  
as discussions in the Forth group often go awry.


Quoted text here. Click to load it

You still don't understand the issue.  It isn't about the IDE, it is the  
fact that the use of color replaces words in the language that change how  
the other words are interpreted.

The only downside is that by making it an integral part of the language it  
becomes hard to use for color blind programmers.  We can all live without  
color highlighting in an IDE, but in Colorforth it is not optional.

--  

Rick C

Re: MCU mimicking a SPI flash slave
On 19/06/17 15:19, rickman wrote:
Quoted text here. Click to load it

I have looked through the Forth 2012 standard.  Nothing much has changed
in the language - a few words added, a few words removed.  (Previous
revisions apparently had bigger changes, according to a list of
compatibility points.)

Quoted text here. Click to load it

I certainly can't claim to be unbiased - yes, I am very familiar with C
and very unfamiliar with Forth.  I am not /totally/ unfamiliar - I
understand the principles of the stacks and their manipulation, the way
words are defined, and can figure out what some very simple words do, at
least for arithmetic and basic stack operations.  And I am fine with
trying to get an understanding of how a language could be used even
though I don't understand the details.

Quoted text here. Click to load it

I program mostly on small microcontrollers.  These days, I see more
devices with something like 128K ram, but I have done more than my fair
share with 4K ram or less.  No, I am /not/ thinking megabytes of space.
 But a 10 cell stack is /very/ limited.  So is a 64 cell ram, and a 64
cell program rom - even taking into account the code space efficiency of
Forth.  I am not asking for MB here.

Quoted text here. Click to load it

If you have a specific limited task, then a small cpu can be very
useful.  Maybe you've got an FPGA connected to a DDR DIMM socket.  A
very small cpu might be the most convenient way to set up the memory
strobe delays and other parameters, letting the FPGA work with a cleaner
memory interface.  But that is a case of a small cpu helping out a
bigger system - it is not a case of using the small cpus alone.  It is a
different case altogether.

Quoted text here. Click to load it

There are good reasons we don't use masses of tiny cpus instead of a few
big ones - just as we don't use ants as workers.  It is not just a
matter of bias or unfamiliarity.

Quoted text here. Click to load it

A /lot/ less than 20 years ago.

Quoted text here. Click to load it

I will happily agree that FPGA design is not as hard as many people
think.  However, I do think it is harder to learn and harder to get
right than basic microcontroller programming.  The key difference is
that with microcontrollers, you are (mostly) doing one thing at a time
all in one place on the chip - with FPGAs, you are doing everything at
once but in separate parts of the chip.  I think the serial execution is
a more familiar model to people - we are used to doing one thing at a
time, but being able to do many different tasks at different times.  The
FPGA model is more like workers on a production line, and that takes
time to understand for an individual.

Quoted text here. Click to load it

I gave it several good looks.  I have also given Forth a good look over
a number of times in the past few decades.  It has some attractions, and
I would be happy if it were a practical choice for a lot of development.
 It is always better when there is a choice - of chips, tools,
languages, whatever.  But Forth just does not have what I need - not by
a long shot.  What you take to be animosity, ignorance or bias here is
perhaps as much a result of frustration and a feeling of disappointment
that Forth is not better.

Quoted text here. Click to load it

It tags that you see by colour instead of as symbols or letters.
Glorified syntax highlighting.

Quoted text here. Click to load it

Hey, I never claimed C was perfect!

Quoted text here. Click to load it

That's fair enough.  But it does mean that the GA144 is not a serious
choice for professional products.  (And yes, I know that not everything
made has to be serious and long lasting.)

Quoted text here. Click to load it

We do a fair amount of business taking people's bashed-together Arduino
prototypes and turning them into robust industrialised and professional
products.

Quoted text here. Click to load it

I appreciate the conversation, and have found this thread enlightening,
educational and interesting - even when we disagree.



Re: MCU mimicking a SPI flash slave
David Brown wrote on 6/19/2017 10:23 AM:
Quoted text here. Click to load it

I don't mean to be rude, but this shows your ignorance of Forth.  In Forth,  
nearly everything is about the words.


Quoted text here. Click to load it

When addressing the issues you raise with Forth none of these things are  
what Forth is about.

I don't know that you need to understand all the details of Forth to see  
its power, but it would help if you understood how some parts of Forth work  
or at least could see how a significant app was written in Forth.  Try  
learning how an assembler is usually written in Forth.  This is easy to do  
as most Forths are provided with full source.


Quoted text here. Click to load it

Again, I don't mean to be rude, but saying a 10 cell stack is too small  
shows a lack of understanding of Forth.  You are working with your  
experience in other languages, not Forth.

I won't argue that 64 cells of RAM don't limit your applications, but the  
GA144 doesn't have 64 cells of RAM.  It has 144 * 64 cells.  External memory  
can be connected if needed.  I won't argue this is not a limitation, but it  
is not a brick wall.  Again, I suggest you stop comparing the GA144 to the  
other processors you have worked with and consider what *can* be done with  
it.  What do you know that *has* been done with the GA144?


Quoted text here. Click to load it

I really don't follow your point here.  I think a CPU would be a terrible  
way to control a DDR memory other than in a GA144 with 700 MIPS processors.  
I've never seen a CPU interface to DDR RAM without a hardware memory  
controller.  Maybe I'm just not understanding what you are saying.


Quoted text here. Click to load it

Reasons you can't explain?


Quoted text here. Click to load it

I designed a board almost a decade ago that was less than an inch wide and 4  
inches long that provided an analog/digital synchronized interface for an IP  
networking card.  It used a small, low power, low cost FPGA to do all the  
heavy lifting and made me well over a million dollars.  At the time I built  
the board, that chip was already some three or four years old.  So there is  
an example that was over 12 years ago.  Other FPGAs that fit the same  
criteria were from closer to 2000 or 17 years ago.  I didn't use them  
because I wanted to maximize the lifespan of the board.


Quoted text here. Click to load it

What you just described is what makes FPGAs so easy to use.  The serial  
execution in a processor to emulate parallel tasks is what makes CPUs so  
hard to use and supposedly what makes the XMOS so useful.  FPGAs make  
parallelism easy with literally no thinking as the language and the tools  
are designed from the ground up for that.

I like to say, whoever came up with the name for water wasn't a fish.  In  
FPGAs no one even thinks about the fact that parallelism is being used...  
unless they aren't fish, meaning software people can have some difficulty  
realizing they aren't on land anymore and going with the flow.


Quoted text here. Click to load it

I will say you have expressed your unhappiness with Forth without explaining  
what was lacking other than vague issues (like the look of Colorforth) and  
wanting it to be like other languages you are more used to.  If you want the  
other languages, what is missing from them that keeps you looking?


Quoted text here. Click to load it

You can't get past the color highlighting.  It's not about the color.  It's  
about the fact that parts of the language have different uses.  Color  
highlighting in other languages is just a nicety of the editor.  The tokens  
in Colorforth are fundamental to the language.  The color is used to  
indicate what is what, but color is not the point.


Quoted text here. Click to load it

That you are not perfect goes without saying... <g>


Quoted text here. Click to load it

I agree, but not because of Colorforth.  I think the chip, along with the  
tools, has too many limitations.  It isn't that Colorforth is inherently  
flawed; it just doesn't cover enough of the development process.  
The rest of the toolset (if you can call it a toolset) is lacking in  
consistency and completeness.  If/when I work with the GA144 I will want to  
do some of my own work in organizing the tools.


Quoted text here. Click to load it

Yep, but they developed using the Arduino and they sell lots of them, likely  
a lot more than you sell of your industrialized products.


Quoted text here. Click to load it

We don't learn much if we just agree.  I'm glad we could disagree without  
making it an argument.  The other Forth users are much more experienced than  
I am.  They will likely have much better info, although not too many actually  
use Colorforth.  Many have studied it, though, in order to learn from it.

--  

Rick C

Re: MCU mimicking a SPI flash slave
On 20/06/17 05:57, rickman wrote:
Quoted text here. Click to load it
<snip>
Quoted text here. Click to load it

(I don't take it as rude - this has been a very civil thread, despite
differing opinions.)

Yes, I know Forth is all about the words.  But as far as I could tell,
Forth 2012 does not add many or remove many words - it makes little
change to what you can do with the language.

And - IMHO - to make Forth a good choice for a modern programming
language, it would need to do more than that.  As you say below,
however, that is not "what Forth is about".

Quoted text here. Click to load it
<snip>
Quoted text here. Click to load it

I think that we are actually mostly in agreement here, but using vague
terms so it looks like we are saying different things.  We agree, I
think that 10 stack cells and 64 cells RAM (which includes the user
program code, as far as I can tell) is very limited.  We agree that it
is possible to do bigger tasks by combining lots of small cpus together.
 And since the device is Turing complete, you can in theory do anything
you want on it - given enough time and external memory.

The smallest microcontroller I worked with had 2KB flash, 64 bytes
eeprom, a 3 entry return stack, and /no/ ram - just the 32 8-bit cpu
registers.  I programmed that in C.  It was a simple program, but it did
the job in hand.  So yes, I appreciate that sometimes "very limited" is
still big enough to be useful.  But that does not stop it being very
limited.

Quoted text here. Click to load it

I am probably just picking a bad example here - please forget it.  I was
simply trying to think of a case where your main work would be done in
fast FPGA logic, while you need a little "housekeeping" work done and a
small cpu makes that flexible and space efficient despite being slower.

Quoted text here. Click to load it

Amdahl's law is useful here.  Some tasks simply cannot be split into
smaller parallel parts.  You always reach a point where you cannot split
them more, and you always reach a point where the overhead of dividing
up the tasks and recombining the results costs more than the gains of
splitting it up.

Imagine, for example, a network router or filter.  Packets come in, get
checked or manipulated, and get passed out again.  It is reasonable to
split this up in parallel - 4 cpus at 1 GHz are likely to do as good a
job as 1 cpu at 4 GHz.  But what about 40 cpus at 100 MHz?  Now you are
going to get longer latencies, and have significant effort tracking the
packets and computing resources - even though you have the same
theoretical bandwidth.  400 cpus at 10 MHz?  That would be even worse.
If some data needs to be shared across the processing tasks, it is
likely to be hopeless with so many cpus.  And if you try to build the
thing out of 8051 chips, it will never be successful no matter how many
millions you use, if the devices don't have enough memory to hold a packet.

Or to pick a simple analogy - sometimes a rock is more useful than a
pile of sand.


Quoted text here. Click to load it

Again, I think our apparent disagreement is just a matter of using vague
terms that we each interpret slightly differently.

Quoted text here. Click to load it

You have been designing with FPGAs for decades - that can make it hard
to understand why other people may find them difficult.  I have done a
few CPLD/FPGA designs over the years - not many, but enough to be happy
with working with them.  For people used to sequential programming,
however, they appear hard - you have to think in a completely different
way.  It is not so much that thinking in parallel is harder than
thinking in serial (though I believe it is), it is that it is /different/.

<snip>
Quoted text here. Click to load it

I am not sure exactly what you are asking here, but if we are going to
bring in other languages, I think perhaps that would be a topic for a
new thread some other time.  It could be a very interesting discussion
for comp.arch.embedded (less so for comp.lang.forth).  However, I feel
this thread is big enough as it is!

Quoted text here. Click to load it

Again, the tokens are nothing special.  In most languages, the role is
filled by keywords, symbols or other features of the grammar - but
there is nothing here that is fundamentally different.

I haven't looked up a list of token types, but for the sake of argument
let's say that there is one indicating that something is a variable
shown in green, one indicating a word definition shown in red, and one
indicating a compile-time action shown in blue.  And you have a name
"foo" that exists in all these contexts.

You can show the different uses by displaying "foo" in different
colours.  You can store it in code memory using a 4 bit token tag.  You
could write it using keywords VAR, DEF and COMP before the identifier
"foo".  You could use symbols $, : and # before the identifier to show
the difference.  You could use other aspects of a language's grammar to
determine the difference.  You could use the position within the line of
the code file to make the difference.  You could simply say that the
same identifier cannot be used for different sorts of token, and the
token type is fixed when the identifier is created.

The existence of different kinds of tokens for different uses is (at
least) as old as programming languages.  Distinguishing them in
different ways is equally old.

Yes, the use of colour as a way to show this is not really relevant.
However, it is not /me/ that is fussing about it - look at the /name/ of
this "marvellous new" Forth.  It is called "colorFORTH".

Quoted text here. Click to load it

No, no - /C/ is not perfect.  But that does not mean /I/ am not :-)

Quoted text here. Click to load it

The people that come to us may use Arduino or Pi's for prototyping, but
it is the industrial versions they sell (otherwise there would be no
point coming to us!).  But no, we don't sell as many units as mass
produced cheap devices do.

Quoted text here. Click to load it


Re: MCU mimicking a SPI flash slave
David Brown wrote on 6/20/2017 6:36 AM:
Quoted text here. Click to load it

I feel I use a fairly passive voice in conversations like this one.  But  
sometimes people get torqued off by what they perceive as rudeness on my part.


Quoted text here. Click to load it

So far you have only identified one thing Forth does not do that you would  
like: it doesn't have fixed-size data types.  What other important things is  
it lacking?

There is at least one Forth programmer here who agrees with you about the  
data sizes.  He feels many things in Forth should be nailed down rather than  
being left to the implementation.  But people are able to get work done  
efficiently in spite of this.

I will say pointing out this issue is making me think.  I can't think of a  
situation where this would actually create a problem.  To allow the code to  
run on a 16 bit system that variable would need to use a double data type  
(double size integer, not a floating point type).  It would then be a 64 bit  
type on a 32 bit system.  Would that create a problem?


Quoted text here. Click to load it

Yes, if you need more than a few k of RAM, the GA144 needs external RAM. But  
that can be accommodated.  The point is there is more than one way to skin a  
cat.  Thinking in terms of how other processors do a job and trying to make  
the GA144 do the same job in the same way won't work.  It has capabilities  
far beyond what people see in it.


Quoted text here. Click to load it

Sure, my app from 10 years ago would have been perfect to illustrate the  
utility of combining fast logic with a (relatively) slow CPU.  At one point  
we were facing a limit to the available gates in the FPGA and the solution  
would have been replacing the slower logic with a small stack CPU, but it  
didn't come to that.  I was able to push the utilization to around 90%  
without a problem.


Quoted text here. Click to load it

Amdahl's law doesn't apply.  Tasks aren't being split into "parallel" parts  
for the sake of being parallel any more than in an FPGA where every LUT and  
FF operates in parallel.  If you run out of speed in a GA144 CPU you can  
split the code between two or three CPUs.  If you run out of RAM you can  
split the task over several CPUs to use more RAM.


Quoted text here. Click to load it

You are not designing the code to effectively suit the chip.  In the GA144  
the comms channels allow data to be passed as easily as writing to memory.  
Break your task into small pieces that each do part of the task.  The  
packets work through the CPUs and out the other end.  Where is the problem?


Quoted text here. Click to load it

Concrete with sand is better than rock any day.


Quoted text here. Click to load it

I thought we were in agreement on this one.  Lattice and others started  
making small, low power, low cost FPGAs over 15 years ago.


Quoted text here. Click to load it

Different doesn't need to be hard. It is only hard if people won't allow  
themselves to learn something new.  That's my point.  Using FPGAs isn't  
hard, people make it hard by thinking it is the same as CPUs.  It's actually  
easier.


Quoted text here. Click to load it

The devil is in the details.  Making up examples won't cut it.  It's not  
about simple syntax highlighting.  The important part is controlling when  
something is executed.  The fact that Forth can do this makes it very powerful.


Quoted text here. Click to load it

Who said it is a "marvellous[sic] new" Forth?


Quoted text here. Click to load it

So don't knock them.  I'd love to be producing things like Arduinos that  
sell themselves rather than things that I have to pound the pavement to find  
users for.

I know there are lots of people who will never like Forth.  It is more of a  
tool than a language.  Its power lies in being very malleable, allowing  
things to be done that are hard in other languages.  I'm not an expert Forth  
programmer, so I can't explain all the ways it works better than other  
languages.  The main thing I like is that it is interactive, allowing me to  
interact with the hardware I build and construct interfaces from the bottom  
up, testing as I go.  Some of the details of using it can be clumsy, actually,  
but it is still very useful for what I do.

--  

Rick C

Re: MCU mimicking a SPI flash slave
Quoted text here. Click to load it

As a proper programmer, you also write tests for your programs:

t{ 3 floor5 -> 5 }t
t{ 5 floor5 -> 5 }t
t{ 6 floor5 -> 5 }t
t{ 9 floor5 -> 8 }t

When you run this through "gforth test/ttester.fs", you get right
away:

:2: Stack underflow
t{ 3 >>>floor5<<< -> 5 }t
Backtrace:
$7F062B481EF0 lit  

The backtrace points to one of the literals (6, 5, or 1, and actually
5 in this case), which is misleading; Gforth notices stack underflows
when the stack memory is accessed, and DROP does not access that
memory.  But anyway, once you get that error message, it's pretty easy
to find the error.

Even if you run it on a system that does not catch all stack
underflows (e.g., gforth-fast test/ttester.fs), you get

WRONG NUMBER OF RESULTS: t{ 3 floor5 -> 5 }t
WRONG NUMBER OF RESULTS: t{ 5 floor5 -> 5 }t
WRONG NUMBER OF RESULTS: t{ 6 floor5 -> 5 }t
WRONG NUMBER OF RESULTS: t{ 9 floor5 -> 8 }t

A static checker might say that the DROP and the - access a value that
is not present in the stack effect, so they would be a little more
precise at pinpointing the problem, but stack depth issues are easy
enough that nobody found it worthwhile to write such a checker yet.

BTW, I would write this function as:

: floor5 ( n1 -- n2 ) 1- 5 max ;

Quoted text here. Click to load it

No!  I have had lots of portability problems for C code when porting
between 32-bit and 64-bit systems, thanks to the integer type zoo of
C.  In Forth I have had very few such problems, thanks to the fact
that we only have cells and occasionally double-cells (and when you
get a double-cell program right on 32-bit systems, it also works on
64-bit systems).  If you want a FLOOR5 variant that works for integers
that don't fit in a cell, you write DFLOOR5.  And if it does not fit
in double cells (but would fit in 64 bits), you probably have the
wrong machine for what you are trying to do.  C did not acquire 64-bit
integer types until 32-bit machines were mainstream.

- anton
--  
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
Re: MCU mimicking a SPI flash slave
<SNIP>
Quoted text here. Click to load it

Interestingly, Java is supposed to be safe.  Yet I've seen dozens of
discussions of Project Euler problems by Java programmers who had run
into overflow and wasted time debugging it.
(Once you've solved a problem you're entitled to write about how you did it.)
Of course sometimes when you scale up a problem you get wrong
results in Forth caused by overflow too. That was never a time waster,
because that is the first thing to look at in such a case,
and it is easy to detect and correct in Forth.

Quoted text here. Click to load it

Groetjes Albert
--  
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
Re: MCU mimicking a SPI flash slave
On 6/19/2017 2:39 PM, Albert van der Horst wrote:
Quoted text here. Click to load it

I'm impressed with Java due to the constant stream of updates because of  
security issues.

Forth of course has its problems too; overflow is highly possible due to  
no error checking, so I'm in favor of 256-byte integers - less overflow and  
rounding issues there.

--  
Cecil - k5nwa

Re: MCU mimicking a SPI flash slave
Quoted text here. Click to load it
<SNIP>
Quoted text here. Click to load it

A constant stream of security issues looks like a good reason to
stay away from a language.

Quoted text here. Click to load it

I've made a strong case that overflow is a problem in Java and not
in Forth.

256-byte integers would get you nowhere in Project Euler,
where a 2-minute computing time is the norm, sometimes hard to
stay under.

Quoted text here. Click to load it

Groetjes Albert
--  
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
Re: MCU mimicking a SPI flash slave
On 6/20/2017 2:25 AM, Albert van der Horst wrote:
Quoted text here. Click to load it
I was being sarcastic in my whole post.

--  
Cecil - k5nwa

Re: MCU mimicking a SPI flash slave
snipped-for-privacy@cherry.spenarnc.xs4all.nl (Albert van der Horst) writes:
Quoted text here. Click to load it

Java has a BigInteger class, which seems ideal for dealing with big
integers.  I would not expect overflow problems when they use this
class.

Quoted text here. Click to load it

That may make the difference.  The Java programmer may not have
expected the overflow.

- anton
--  
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html
Re: MCU mimicking a SPI flash slave
On 19/06/17 16:44, Anton Ertl wrote:
Quoted text here. Click to load it

I would be much happier to see the language supporting such static  
checks in some way (not as comments, but as part of the language), and  
tools doing the checking.  Spotting such errors during testing is better  
than spotting them when running the program, but spotting them during  
compilation is far better.  (Better still is spotting them while editing  
- IDEs for C usually do a fair amount of checking while you write the code.)

Quoted text here. Click to load it

Yes, I realise that is a more compact version (and it's on the Wikipedia  
page too).  It was just an example function to illustrate a possible  
stack error.

Quoted text here. Click to load it

And there you have illustrated my point, to some extent - C has  
progressed as a language, to include new features for more modern  
systems, such as support for 64-bit types.  Now I can use a C compiler  
for an 8-bit microcontroller and have 64-bit datatypes.  (OK, not all  
implementations have such support - but that is a quality of  
implementation issue, not a language failure.)

And it is perfectly possible to write C code that is portable across  
32-bit and 64-bit systems - precisely because of the zoo of integer  
types.  When you want an integer that is exactly 32-bit, you have  
int32_t.  When you want an unsigned integer that is exactly big enough  
to hold a pointer, you have uintptr_t.  When you want something that can  
hold at least 16 bits of data and should be as fast as possible, you  
have int_fast16_t (or just "int").  This requires a bit of discipline to  
avoid making assumptions about different sizes just because they are  
true on the platform you are working on at the time, but it is entirely  
possible.

On the other hand, you don't seem to be able to write a FLOOR5  
definition that will handle 32-bit values efficiently on both 16-bit  
cell systems and 32-bit cell systems.


Re: MCU mimicking a SPI flash slave
David Brown wrote on 6/19/2017 4:59 PM:
Quoted text here. Click to load it

Jeff Fox was a hard core Forth programmer and a graduate from the school of  
minimalistic programming.  I was asking about debugging stack errors and his  
reply was that stack errors show that the programmer can't count.  In other  
words catching such stack errors only require a programmer to count.  No  
fancy tools needed.

After that I stopped asking how to debug such errors and learned to count.  ;)

As Anton indicated, rather than relying on tools to catch such trivial  
errors, every word is tested thoroughly before use.  Such errors show up  
trivially and with no real effort... even if the programmer can't count.


Quoted text here. Click to load it

So you think for a language to be modern it has to have hard coded data sizes?

--  

Rick C

Re: MCU mimicking a SPI flash slave
On 20.6.2017 07:12, rickman wrote:
 >....
Quoted text here. Click to load it

I know nothing about Forth but this is an excellent point in general
you make here.

Programmers should use their ability to count, and not just for stack
levels; it is a lot more effective than working to delegate the counting
of this and that to a tool which is a lot less intelligent than the
programmer.  Just let the tool do the heavy lifting; counting is not
part of that.

Dimiter




Re: MCU mimicking a SPI flash slave
On 20/06/17 09:25, Dimiter_Popoff wrote:
Quoted text here. Click to load it

If you are going to use it for low-level programming and embedded
development, then yes.

It is fine to have more flexible or abstract types (like "number" or
"int") for general use, but if I can't say "write this data to this
address as a 16-bit operation, then read from that address as a 32-bit
value", then the language won't work for me.  If I can't say "this
structure is built from a 16-bit value, followed by an array of 7 8-bit
values, then 3 padding bytes, then a 32-bit value", then the language
won't work for me.

A modern language (especially for low-level and embedded work) should
let you be precise when you need to be, and loose and flexible when the
details don't matter and can be picked by the tool for efficient results.

Quoted text here. Click to load it

Counting is one of the tasks I expect a computer - and therefore a
programming language and a toolchain - to do well.  I expect the tool to
do the menial stuff and let the programmer get on with the thinking.


