MCU mimicking a SPI flash slave

Jeff Fox was a hard-core Forth programmer and a graduate of the school of minimalist programming. I asked about debugging stack errors and his reply was that stack errors show that the programmer can't count. In other words, catching such stack errors only requires a programmer to count. No fancy tools needed.

After that I stopped asking how to debug such errors and learned to count. ;)

As Anton indicated, rather than relying on tools to catch such trivial errors, every word is tested thoroughly before use. Such errors show up trivially and with no real effort... even if the programmer can't count.

So you think for a language to be modern it has to have hard coded data sizes?

--

Rick C
Reply to
rickman

I don't follow. What can the XMega E do that the PSOC devices can't in terms of peripherals?

Is an Ethernet interface an adequate indication of functionality? I think that is more than "simple" operations, no?

--

Rick C
Reply to
rickman

On 20.6.2017 07:12, rickman wrote: >....

I know nothing about Forth, but this is an excellent point in general that you make here.

Programmers should use their ability to count, and not just for stack levels; it is a lot more effective than working to delegate the counting of this and that to a tool which is a lot less intelligent than the programmer. Just let the tool do the heavy lifting; counting is not part of that.

Dimiter

Reply to
Dimiter_Popoff

A constant stream of security issues looks like a good reason to stay away from a language.

I've made a strong case that overflow is a problem in Java and not in Forth.

256-byte integers would get you nowhere in Project Euler, where a 2-minute computing time is the norm and sometimes hard to stay under.

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS 
Economic growth -- being exponential -- ultimately falters. 
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
Albert van der Horst

It is a good while since I have looked at PSoC devices, and my comparison was with the older 8-bit and 16-bit core devices (not entirely unreasonable, since the XMega is an 8-bit device). The key difference is that the PSoC can have a couple of UARTs /or/ a few PWM timers /or/ a couple of SPI interfaces /or/ other digital interfaces with customisation. The XMegaE can have a couple of UARTs /and/ some PWM timers /and/ a couple of SPI interfaces /and/ a bit of custom hardware. Now do you see the point?

Now that I have looked at the PSoC website, I see that for their ARM Cortex devices, Cypress have figured this one out and provide dedicated standard peripherals in addition to the programmable blocks - because, as I have said all along, these are /far/ more efficient.

OK, I admit to being impressed by that possibility. It is 10 Mb, needs external RAM, and has few possibilities beyond simple UDP telegrams, but I am still impressed.

(For comparison, the XMOS can do a software 100 Mb NIC in about half of a cpu, letting you run lwip and network software in the other half. But if you really want Ethernet on an XMOS device, you are better off with the chips that have a dedicated hardware Ethernet interface.)

Reply to
David Brown

If you are going to use it for low-level programming and embedded development, then yes.

It is fine to have more flexible or abstract types (like "number" or "int") for general use, but if I can't say "write this data to this address as a 16-bit operation, then read from that address as a 32-bit value", then the language won't work for me. If I can't say "this structure is built from a 16-bit value, followed by an array of 7 8-bit values, then 3 padding bytes, then a 32-bit value", then the language won't work for me.
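Something like this minimal C sketch of the two cases above (the names, the address parameter and the 16-byte total are illustrative assumptions, not taken from any real register map):

    #include <stdint.h>

    /* "write this data to this address as a 16-bit operation, then
       read from that address as a 32-bit value" */
    void poke_then_peek(uintptr_t addr)
    {
        *(volatile uint16_t *)addr = 0x1234u;      /* 16-bit write     */
        uint32_t v = *(volatile uint32_t *)addr;   /* 32-bit read back */
        (void)v;
    }

    /* "a 16-bit value, then an array of 7 8-bit values, then 3 padding
       bytes, then a 32-bit value" - 16 bytes on a typical target */
    typedef struct {
        uint16_t value;
        uint8_t  bytes[7];
        uint8_t  pad[3];
        uint32_t word;
    } layout_t;

    _Static_assert(sizeof(layout_t) == 16, "unexpected padding in layout_t");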

A modern language (especially for low-level and embedded work) should let you be precise when you need to be, and loose and flexible when the details don't matter and can be picked by the tool for efficient results.

Counting is one of the tasks I expect a computer - and therefore a programming language and a toolchain - to do well. I expect the tool to do the menial stuff and let the programmer get on with the thinking.

Reply to
David Brown


No. If you don't know which datum is of which size and type, you are not up to programming. Delegating to the tool to track that only means more work for the programmer, sometimes a lot more, while tracking what the tool got wrong _this_ time.


If "counting" is too much of a workload is too much for a person he is not in the right job as a programmer (hopefully "counting" is not taken literally here, it means more "counting up to ten"). Delegating simple tasks to the tool makes life harder, not easier, as I said above. Often a lot harder.

Types, sizes etc. controlled by the machine are meant for the user, not for the programmer. They are supposed to make these work for the user; this *is* their job. Pretty much like a meal is supposed to be served nicely arranged on a plate for the consumer; for the cook, however, the ingredients are a lot more convenient in raw form.

Dimiter

====================================================== Dimiter Popoff, TGI

======================================================

Reply to
dp

Er. No.

David is suggesting, correctly IMNSHO, that sometimes it is necessary for me to specify exactly what the tool has to achieve - and then to let the tool do it in any way it sees fit.

He gave a good example of that, which you snipped.

With types such as uint8_t, uint_fast8_t and uint_least8_t, modern C is a significant advance over K&R C.
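For example (variable names are just for illustration):

    #include <stdint.h>

    uint8_t       exact8;  /* exactly 8 bits; only exists if the target has such a type */
    uint_least8_t small8;  /* the smallest type with at least 8 bits */
    uint_fast8_t  fast8;   /* the "fastest" type with at least 8 bits,
                              often a full register wide */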

I can only easily deal with three numbers: 0, 1, many :) All other numbers are a pain in the ass and I'm more than happy to delegate them to a tool.

Reply to
Tom Gardner

(I don't take it as rude - this has been a very civil thread, despite differing opinions.)

Yes, I know Forth is all about the words. But as far as I could tell, Forth 2012 does not add many or remove many words - it makes little change to what you can do with the language.

And - IMHO - to make Forth a good choice for a modern programming language, it would need to do more than that. As you say below, however, that is not "what Forth is about".

I think that we are actually mostly in agreement here, but using vague terms, so it looks like we are saying different things. We agree, I think, that 10 stack cells and 64 cells of RAM (which includes the user program code, as far as I can tell) is very limited. We agree that it is possible to do bigger tasks by combining lots of small cpus together. And since the device is Turing complete, you can in theory do anything you want on it - given enough time and external memory.

The smallest microcontroller I worked with had 2KB flash, 64 bytes eeprom, a 3 entry return stack, and /no/ ram - just the 32 8-bit cpu registers. I programmed that in C. It was a simple program, but it did the job in hand. So yes, I appreciate that sometimes "very limited" is still big enough to be useful. But that does not stop it being very limited.

I am probably just picking a bad example here - please forget it. I was simply trying to think of a case where your main work would be done in fast FPGA logic, while you need a little "housekeeping" work done and a small cpu makes that flexible and space efficient despite being slower.

Amdahl's law is useful here. Some tasks simply cannot be split into smaller parallel parts. You always reach a point where you cannot split them more, and you always reach a point where the overhead of dividing up the tasks and recombining the results costs more than the gains of splitting it up.
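As a rough sketch of that limit (p being the fraction of the work that can be parallelised, n the number of processors):

    /* Amdahl's law: overall speedup from running the parallelisable
       fraction p of a task on n processors; capped at 1/(1-p) as n grows */
    double amdahl_speedup(double p, double n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }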

Imagine, for example, a network router or filter. Packets come in, get checked or manipulated, and get passed out again. It is reasonable to split this up in parallel - 4 cpus at 1 GHz are likely to do as good a job as 1 cpu at 4 GHz. But what about 40 cpus at 100 MHz? Now you are going to get longer latencies, and have significant effort tracking the packets and computing resources - even though you have the same theoretical bandwidth. 400 cpus at 10 MHz? That would be even worse. If some data needs to be shared across the processing tasks, it is likely to be hopeless with so many cpus. And if you try to build the thing out of 8051 chips, it will never be successful no matter how many millions you use, if the devices don't have enough memory to hold a packet.

Or to pick a simple analogy - sometimes a rock is more useful than a pile of sand.

Again, I think our apparent disagreement is just a matter of using vague terms that we each interpret slightly differently.

You have been designing with FPGAs for decades - that can make it hard to understand why other people may find them difficult. I have done a few CPLD/FPGA designs over the years - not many, but enough to be happy working with them. For people used to sequential programming, however, they appear hard - you have to think in a completely different way. It is not so much that thinking in parallel is harder than thinking in serial (though I believe it is), it is that it is /different/.

I am not sure exactly what you are asking here, but if we are going to bring in other languages, I think perhaps that would be a topic for a new thread some other time. It could be a very interesting discussion for comp.arch.embedded (less so for comp.lang.forth). However, I feel this thread is big enough as it is!

Again, the tokens are nothing special. In most languages, the role is filled by keywords, symbols or other features of the grammar - but there is nothing here that is fundamentally different.

I haven't looked up a list of token types, but for the sake of argument let's say that there is one indicating that something is a variable shown in green, one indicating a word definition shown in red, and one indicating a compile-time action shown in blue. And you have a name "foo" that exists in all these contexts.

You can show the different uses by displaying "foo" in different colours. You can store it in code memory using a 4 bit token tag. You could write it using keywords VAR, DEF and COMP before the identifier "foo". You could use symbols $, : and # before the identifier to show the difference. You could use other aspects of a language's grammar to determine the difference. You could use the position within the line of the code file to make the difference. You could simply say that the same identifier cannot be used for different sorts of token, and the token type is fixed when the identifier is created.

The existence of different kinds of tokens for different uses is (at least) as old as programming languages. Distinguishing them in different ways is equally old.

Yes, the use of colour as a way to show this is not really relevant. However, it is not /me/ that is fussing about it - look at the /name/ of this "marvellous new" Forth. It is called "colorFORTH".

No, no - /C/ is not perfect. But that does not mean /I/ am not :-)

The people that come to us may use Arduino or Pi's for prototyping, but it is the industrial versions they sell (otherwise there would be no point coming to us!). But no, we don't sell as many units as mass produced cheap devices do.

Reply to
David Brown

It may be an improvement in C indeed. But this is not relevant to the main point: delegating simple tasks to the tool costs the programmer more effort, often a lot more, than it returns.

Well you snipped my example with the programmer and the cook, let me repost it:

In the case where you let the tool do all the cooking for you, you relegate yourself to the role of the waiter, if not the consumer. And if you have to do the job of the cook having kept yourself fit only in the skill set of a waiter, it will cost you a lot more time to do the job than it would have had you not let your cooking skills decay.

Dimiter

====================================================== Dimiter Popoff, TGI

======================================================

Reply to
Dimiter_Popoff

The reason I know exactly what size a datum is, is because the type has a fixed size!

If I need to know that "x" has 32 bits, I make sure of that fact by declaring "x" as a "uint32_t" or "int32_t" in C. I can do that, precisely because C supports such hard coded data sizes.

In a language like Forth, or pre-C99 C, you can't do that portably. An "int" might be 16-bit, or maybe 32-bit, or maybe something weird - the same applies to a Forth "cell". You need pre-processor directives, conditional compilation, implementation-specific code, etc., in order to know for sure what sizes you are using.

Are you seriously suggesting that sometimes compilers will get the sizes wrong? That if I ask a C compiler for an "int64_t", sometimes it will give me a different size?

Or are you talking about more complex types? If I define a struct that I know should match an external definition (hardware registers, telegram format, etc.) of a particular size, I can write:

typedef struct {
    uint16_t x;
    uint8_t ys[6];
    ...
} reg_t;

static_assert(sizeof(reg_t) == 24, "Checking size of reg_t struct");

The /compiler/ does the counting and the checking. It does so easily and reliably, handles long and complicated structures, is portable across processors of different sizes, and will always give a clear and unmistakable compile-time error message if there is a problem.

You said it above - but you were wrong (IMHO).

Think of having the compiler check sizes and do "counting" as like the thermostat and timer on the oven. The cook (or programmer) decides on the temperature and timing he wants, but the oven handles the boring bit of turning the elements on and off to get the right temperature, and warns the cook when the timer is done.

Reply to
David Brown

There are three sorts of people in this world - those that can count, and those that can't.

Reply to
David Brown

So C has improved. But this is only about C overcoming one of its shortcomings by adding more complexity for the programmer to deal with, which is exactly my point. You need to know what your compiler does with your data type only because you have to rely on it to deal with it, instead of just dealing with it yourself when you know what it is anyway.

Are you seriously suggesting that you have not spent well over half of your programming time figuring out what the compiler expects from you?

Then a month later the menu changes and you have to set the temperature to the one for the meal you did last month. Oops, what was it? Spend another week rediscovering that.

Dimiter

====================================================== Dimiter Popoff, TGI

======================================================

Reply to
Dimiter_Popoff

For the examples given you haven't demonstrated your point, and you have ignored the main point being made.

To repeat the main point being made: it is better for me to specify (in the source code) what the tool has to achieve, and let the compiler decide how to achieve it. It is worse for me to /ambiguously/ specify (in the source code) what is required, and to implicitly point to the compiler's man pages - and probably to hope the next program maintainer uses the correct compiler flags.

I ignored it because, like most analogies, its relevance is dubious and it encourages focusing attention on /inapplicable/ details.

That has already started happening in another response which has started to discuss menus/meals that people had last week!

It beats me how that is supposed to illuminate the benefits of stating the peripheral's structure and letting a compiler sort out how best to generate code for it.

Reply to
Tom Gardner

Why would that be? I can see that it's far better for programmers who don't test their programs, but what is the advantage for programmers who test their programs?

And I would especially hate it if an IDE is distracting me by nagging me about minor details while I am focusing on something else.

If the language does not require the 64-bit types, you can hardly claim them as a language feature.

Anyway, if 64-bit integers were needed on 16-bit-cell systems, we would add them to Forth. But in discussions about this subject, the consensus emerged that we do not need them (at least not for computations).

By contrast, Gforth and PFE have provided 128-bit integers on 64-bit systems since 1995, something that C compilers did not support until quite a while later. And once GCC started supporting it, it was quite buggy; I guess the static-checking-encouraged lack of testing was at work here.
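(For reference, the GCC/Clang side of this is the non-standard __int128 extension - a minimal sketch, assuming a 64-bit target that provides it:)

    #include <stdint.h>

    /* full 128-bit product of two 64-bit values, using the GCC/Clang
       __int128 extension (not part of standard C) */
    __int128 mul128(int64_t a, int64_t b)
    {
        return (__int128)a * (__int128)b;
    }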

There is a difference between "it is possible" and "it happens". My experience is that, in C, if you have tested a program only on 32-bit systems, it will likely not work on 64-bit systems; in Forth, it likely will.

Tough luck. Why would I need a double FLOOR5 on a 16-bit platform?

- anton

--
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html 
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html 
     New standard: http://www.forth200x.org/forth200x.html 
   EuroForth 2017: http://www.euroforth.org/ef17/
Reply to
Anton Ertl

I feel I use a fairly passive voice in conversations like this one. But sometimes people get torqued off about their perception of my rudeness.

So far you have only identified one thing Forth does not do that you would like: it doesn't have fixed-size data types. What other important things is it lacking?

There is at least one Forth programmer here who agrees with you about the data sizes. He feels many things in Forth should be nailed down rather than being left to the implementation. But people are able to get work done efficiently in spite of this.

I will say pointing out this issue is making me think. I can't think of a situation where this would actually create a problem. To allow the code to run on a 16 bit system that variable would need to use a double data type (double size integer, not a floating point type). It would then be a 64 bit type on a 32 bit system. Would that create a problem?

Yes, if you need more than a few k of RAM, the GA144 needs external RAM. But that can be accommodated. The point is there is more than one way to skin a cat. Thinking in terms of how other processors do a job and trying to make the GA144 do the same job in the same way won't work. It has capabilities far beyond what people see in it.

Sure, my app from 10 years ago would have been perfect to illustrate the utility of combining fast logic with a (relatively) slow CPU. At one point we were facing a limit to the available gates in the FPGA and the solution would have been replacing the slower logic with a small stack CPU, but it didn't come to that. I was able to push the utilization to around 90% without a problem.

Amdahl's law doesn't apply. Tasks aren't being split into "parallel" parts for the sake of being parallel any more than in an FPGA, where every LUT and FF operates in parallel. If you run out of speed in a GA144 CPU you can split the code between two or three CPUs. If you run out of RAM you can split the task over several CPUs to use more RAM.

You are not designing the code to suit the chip effectively. In the GA144 the comms channels allow data to be passed as easily as writing to memory. Break your task into small pieces that each do part of the task. The packets work through the CPUs and out the other end. Where is the problem?

Concrete with sand is better than rock any day.

I thought we were in agreement on this one. Lattice and others started making small, low power, low cost FPGAs over 15 years ago.

Different doesn't need to be hard. It is only hard if people won't allow themselves to learn something new. That's my point. Using FPGAs isn't hard, people make it hard by thinking it is the same as CPUs. It's actually easier.

The devil is in the details. Making up examples won't cut it. It's not about simple syntax highlighting. The important stuff is when something is executed. The fact that Forth can do this makes it very powerful.

Who said it is a "marvellous[sic] new" Forth?

So don't knock them. I'd love to be producing things like Arduinos that sell themselves rather than things that I have to pound the pavement to find users for.

I know there are lots of people who will never like Forth. It is more of a tool than a language. Its power lies in being very malleable, allowing things to be done that are hard in other languages. I'm not an expert Forth programmer, so I can't explain all the ways it works better than other languages. The main thing I like is that it is interactive, allowing me to interact with the hardware I build and construct interfaces from the bottom up, testing as I go. Some of the details of using it can actually be clumsy, but it is still very useful for what I do.

--

Rick C
Reply to
rickman

Java has a BigInteger class, which seems ideal for dealing with big integers. I would not expect overflow problems when they use this class.

That may make the difference. The Java programmer may not have expected the overflow.

- anton

--
M. Anton Ertl  http://www.complang.tuwien.ac.at/anton/home.html 
comp.lang.forth FAQs: http://www.complang.tuwien.ac.at/forth/faq/toc.html 
     New standard: http://www.forth200x.org/forth200x.html 
   EuroForth 2017: http://www.euroforth.org/ef17/
Reply to
Anton Ertl

Oh I did make my point all right, check my first post on the thread.

The examples are completely beside my point; they are about C specifics, more or less.

Like it or not, thinking is about making analogies. I realize my point is probably doomed never to come across to the vast majority of programmers today, sort of like trying to explain colours to a blind person (absolutely no insult meant here, I know I am talking to intelligent people, just wrestling to make a point).

What I see from where I stand is that C as a language - not as a compiler, toolchain quality etc. - costs the programmer a lot more work than needed. One of the reasons for that is the fact that the programmer has to delegate to the toolchain a lot of trivial "no brainer" work, and _this_ costs a significant, at times prohibitive, effort. How do I make my point to people who have never been really fluent in a lower-level language which does not have the ugliness of a poor underlying model etc... A lost cause, I guess.

It is not supposed to illuminate that. It is supposed to demonstrate that, while wrestling with the compiler to make it do this or that, one can often waste a lot more time than one would if this step could be omitted.

Dimiter

====================================================== Dimiter Popoff, TGI

======================================================

Reply to
Dimiter_Popoff

We are talking about a change in the language nearly 20 years ago...

And even before that, it was normal to have a header with things like:

typedef short int i16;
typedef unsigned long int u32;

etc.

Pre C99, you had to make such headers yourself, adapt them to fit a given implementation, and there was no standardisation of the names. But it was a job you did once for each platform you used. Because C is a typed language, you can make the definitions of the types depend on the platform, but the code using the types is then platform independent.
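Something along these lines (a sketch only; the names, the <limits.h> tests and the assumption of the usual 16-bit-int / 32-bit-long split are illustrative, not a standard convention):

    /* project "types.h", selected or edited once per platform */
    #include <limits.h>

    #if UINT_MAX == 0xFFFFu               /* 16-bit int target */
    typedef signed   int   i16;
    typedef unsigned int   u16;
    typedef signed   long  i32;
    typedef unsigned long  u32;
    #else                                 /* assume 32-bit int target */
    typedef signed   short i16;
    typedef unsigned short u16;
    typedef signed   int   i32;
    typedef unsigned int   u32;
    #endif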

/Where/ is this complexity you talk about? I cannot understand what you mean here, and why you think there is some extra effort. If I want a 32-bit variable, I make an int32_t or a uint32_t (signed or unsigned). I make the choice of the characteristics I need for the data, and write it clearly, simply, and quickly in the source code. There is no effort involved.

That makes no sense whatsoever. How do you "deal with it yourself"? You write source code, the compiler compiles it to machine code. You are relying on the compiler, just like when you write in assembly code you rely on the assembler.

You can get a difference when you don't care about the details. If I write "x * 5", I (usually) don't care if the compiler implements that with a multiply instruction, or shift and add instructions, or a "load effective address" instruction with odd addressing modes, or if it has figured out that "x" is always 3 at this point in the code, and can use 15 directly.
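A trivial sketch of that freedom:

    int times5(int x)
    {
        /* the source only states the required result; the compiler may use
           a multiply, shift-and-add, or (on x86) a single lea instruction */
        return x * 5;
    }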

And when I /do/ care about the details, such as the size of a piece of data, /then/ I write explicitly what I want. That, if anything, is "dealing with it myself".

The compiler expects me to write valid C code. Nothing more, nothing less. I have spent time learning how to write valid C code - but no, that has not taken half my programming career.

What on earth does that mean? When a cook sets the temperature on his oven thermostat, that does not somehow suck the information out his brain and erase it from all his recipe books!

Reply to
David Brown

Yeah. Been there done that :(

Oh, there we are in violent agreement about the end result! Overall C/C++ has (arguably) become too complex for simple things, and (unarguably IMNSHO!) become too poorly specified for complex things. Trying to be "all things to all people" is rarely successful.

Nowadays C (and even more so C++) is part of the problem rather than part of the solution. The abstractions, which were a useful and valid advance in K&R days, have become *very* leaky over the years with the advance of technology. Hell's teeth, it is only recently that C/C++ has recognised the need for a memory model to deal with all the subtle behaviour in SMP and NUMA machines. I reserve judgement on whether it will be a success; even starting from a clean slate, Java had to revise its memory model!

Reply to
Tom Gardner
