MCU mimicking a SPI flash slave

You have given absolutely /no/ indication in the slightest as to why that might be the case, or why you might think so. What do you mean by this? Can you give examples?

Again, this makes no sense and is contrary to common experience. What makes you think that it is more productive for the programmer to do trivial no-brainer work than to let the tools do it?

I have programmed in assembly for a dozen or more architectures - when I started my career, it was the language of choice for small microcontrollers. In fact, when I learned assembly, I even had to hand assemble the instructions to machine code on paper.

So assume that I am fluent in low level languages, and try to explain. I think the real problem is that you don't really know how C works or how it is used. (As with your posts, this is not meant as an insult in any way.)

Reply to
David Brown

Honestly? You can't see the advantage of spotting errors at as early a stage as possible?

Why would someone bother writing test patterns to catch possible errors that the tools can see automatically? That is just a waste of everyone's time, and it's easy to forget some tests.

Errors of various sorts can happen when you write code. They can be everything from misunderstandings of the specifications, to small typos, to stylistic errors (which don't affect the running code, but can affect maintainability and lead to higher risk of errors in the future), to unwarranted assumptions about how the code is used. Producing a correct program involves a range of methods for avoiding errors, or detecting them as early as possible. Testing (of many different kinds) is /part/ of that - but it is most certainly not sufficient. It is /always/ cheaper and more productive to spot errors at an earlier stage than at a later stage - and detecting them at compilation time is earlier than detecting them at unit test time or system test time.
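As a minimal C sketch of "detecting errors at compilation time": the check below fails the build immediately if its assumption is wrong, before any test is ever run. (The `scale` function is purely illustrative, not from any real codebase; `_Static_assert` is a C11 feature.)

```c
#include <stdint.h>

/* C11 compile-time check: if the assumption does not hold on this
   platform, compilation fails - the error is found before any test. */
_Static_assert(sizeof(int32_t) == 4, "int32_t must be exactly 32 bits");

/* The prototype lets the compiler check every call site: a wrong
   argument count or an incompatible type is a compile-time diagnostic,
   not a bug to be hunted down later in testing. */
static int32_t scale(int32_t x, int32_t factor) {
    return x * factor;
}
```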

Incorrect code is not a minor detail.

C /does/ require them (in C99), and they /are/ a language feature. (Technically, C requires an integer type that is at least 64 bits, but for most practical purposes, real implementations have exactly 64-bit types.) There are C compilers that don't support all of C99.

And I have used 64-bit integers on an 8-bit microcontroller. It is a rare requirement, certainly, but not inconceivable.

Static checking is an addition to testing, not an alternative.

I don't see a way to write portable code that works with known sizes of data in Forth. All I have seen so far is that you can use single cells for 16-bit data, and double cells for 32-bit data. This means if you want to use 32-bit values, your choice is between broken code on 16-bit systems or inefficient code on 32-bit systems. (And you can only tell if it is broken on 16-bit systems if you have remembered to include a test case with larger data values.)

I work on embedded systems. I need to be able to access memory with /specific/ sizes. I need to be able to make structures with /specific/ sizes.
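In C this is direct with `<stdint.h>`: a hypothetical register block, with invented names, sketched only to show that the field widths are pinned down regardless of the native int size.

```c
#include <stdint.h>

/* Hypothetical register layout - names and fields are invented for
   illustration, not taken from any real device. */
typedef struct {
    uint8_t  status;    /* exactly 8 bits  */
    uint8_t  control;   /* exactly 8 bits  */
    uint16_t divisor;   /* exactly 16 bits */
    uint32_t baud;      /* exactly 32 bits */
} uart_regs;

/* On common ABIs with natural alignment this struct is exactly 8 bytes,
   so it can overlay a memory-mapped peripheral whether the native int
   is 16 or 32 bits. */
```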

Can you show me how this is possible in Forth, in a clear, simple and portable manner?

That is not for you to worry about. Think of me as a customer asking for a piece of code written in Forth. I want a FLOOR5 function that handles 32-bit values, works correctly on 16-bit and 32-bit cell systems, and is efficient on both sizes of system. Can it be done?

Reply to
David Brown

Examples I've come across include such gems as...

"However, many C compilers use non-standard expression grammar where ?: is designated higher precedence than =, which parses that expression as e = ( ((a < d) ? (a++) : a) = d ), which then fails to compile due to semantic constraints: ?: is never lvalue and = requires a modifiable lvalue on the left. Note that this is different in C++, where the conditional operator has the same precedence as assignment."


"i = ++i + 1; // undefined behavior[in C] (well-defined in C++11)"


The ability to break a compiler's legitimate optimisations by "casting away constness and volatility" (IIRC that took several years of committee deliberation as to whether it was required or forbidden behaviour!)

And of course, the amusing C++ FQA.

All of which makes the simplicity of Forth seem appealing :)

Reply to
Tom Gardner

I was being sarcastic in my whole post.

--
Cecil - k5nwa
Reply to
Cecil Bayona

Yes, but you may not like it - use conditional compilation

0 invert 65535 u> [if]
   : floor5 ( n1 -- n2 ) 1- 5 max ;      \ 32 bit cells
[else]
   : floor5 ( d1 -- d2 ) 1. d- 5. dmax ; \ 16 bit cells
[then]
--
Gerry
Reply to
Gerry Jackson

Neither C nor C++ is a problem here. People writing absurd obfuscated nonsense in their code may be a problem, but that applies in any language.

"casting away constness and volatility" means writing code that explicitly tells the compiler "I know better than you do here, and I know it is safe to break rules about the code". Either that is true, and it lets you write the code you want, or it is wrong and you've made a mistake - as you can do in all languages.

Have you read it? It is mostly misunderstandings, repetitions, outdated information, or completely unrealistic code. There are a few good points in it, but you have to work hard to find them.

C is mostly simple and clear (if well written). C++ is a much bigger and more complex language - it has greater scope for writing good code, but also greater scope for making a mess. Simplicity of a language is not necessarily a good thing any more than complexity is - you don't get much simpler than a Turing machine, but I would not want to use it for application programming!

Reply to
David Brown

Conditional compilation is fine as a solution. But supposing you wanted a number of functions that were all 32-bit (let's say, floor6, floor7, and floor8 due to a lack of imagination). Is there any way to have a single conditional bit, and then use the features in other words? (Like defining the type "int32_t" once in C, and using it thereafter.) My stab at a solution would be:

0 invert 65535 u> [if]  \ 32 bit cells
   : -32   ( n1, n2 -- n3 ) - ;
   : max32 ( n1, n2 -- n3 ) max ;
   : to32  ( n1 -- n1 ) ;
[else]                  \ 16 bit cells
   : -32   ( d1, d2 -- d3 ) d- ;
   : max32 ( d1, d2 -- d3 ) dmax ;
   : to32  ( n1 -- d1 ) S>D ;
[then]

: floor5 ( 32x1 -- 32x2 ) 1 to32 -32 5 to32 max32 ;
: floor6 ( 32x1 -- 32x2 ) 1 to32 -32 6 to32 max32 ;
etc.

The equivalent C (without the C99 sized integers) is:

#include <limits.h>

#if UINT_MAX == 65535
typedef long int i32;
#else
typedef int i32;
#endif

i32 floor5(i32 v) { return (v < 6) ? 5 : (v - 1); }

i32 floor6(i32 v) { return (v < 7) ? 6 : (v - 1); }

(That is, like your Forth, assuming that you either have 16-bit cells / ints and 32-bit double cells / long ints, or 32-bit cells / ints.)
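With the C99 sized integers included, the conditional typedef disappears entirely - a sketch of the same two functions using `<stdint.h>`:

```c
#include <stdint.h>

/* int32_t is exactly 32 bits on every implementation that provides it,
   so no conditional compilation is needed at all. */
static int32_t floor5(int32_t v) { return (v < 6) ? 5 : v - 1; }
static int32_t floor6(int32_t v) { return (v < 7) ? 6 : v - 1; }
```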

Reply to
David Brown

Agreed, but the differences between the two languages are a big hint that there are surprising and unnecessary dragons lurking to catch people who haven't spent several decades following the differences and /newly introduced/ pitfalls.

Do you have any comment about the previous point about /some/ compilers apparently /choosing/ non-standard expression grammars? That seems remarkable to me.

That is problematic when a library is compiled and optimised assuming that the const statements are correct, and later on someone else in a different company uses that library in a way which violates those assumptions.

In those circumstances the user probably doesn't know better.

Indeed. But not all of the points can be "wished away"; many a truth is spoken in jest.

I completely agree :)

The major problem with C/C++ is that it can't make up its mind whether it wants to be simple low-level and near to the silicon, or an expressive high-level general purpose applications language. Either would be valid, but in trying to be both it misses both targets.

Fortunately the marketplace has decided that in most cases C/C++ isn't "the best" general purpose application language; Java, Python and similar are the future there.

Reply to
Tom Gardner

You are missing a lot. The original PSOC devices had a simple MCU which was not any standard device. It had truly programmable digital blocks and programmable analog blocks. So they were *much* more functional than other MCUs with fixed peripherals. If those fixed peripherals met your need, then the PSOC was still better in situations where you had modes where you would need these peripherals in this mode and those peripherals in other operating modes. I think an example they promoted was for making measurements in one mode and reporting the results in another mode.

The newer devices offer both 8 bit 8051 CPUs and ARM CM0, CM3 or CM4 devices. I have not looked hard at them lately, but the 8051 based PSOC3 has up to 24 digital blocks:

16 to 24 universal digital blocks (UDB), programmable to create any number of functions:
- 8-, 16-, 24-, and 32-bit timers, counters, and PWMs
- I2C, UART, SPI, I2S, LIN 2.0 interfaces
- Cyclic redundancy check (CRC)
- Pseudo random sequence (PRS) generators
- Quadrature decoders
- Gate-level logic functions

That totally blows away the totally wimpy XMega E programmability.

The Cypress web site has always been a PITA to find the info you want, but in searching this I see they have come out with a very wide range of new devices, including other custom CPUs such as a 128 MHz RISC, as well as a line of ARM CR4 and 240 MHz CR5 devices. These seem to be more conventional devices with no mention of the programmable hardware, analog or digital. But then that is not surprising. The programmable hardware allows a very cost competitive product. As mentioned in the Wikipedia article, they use PSOC in toothbrushes and Adidas sneakers. They have to be cheap to be used in sneakers. The CR5 devices are much higher cost parts.

I assume it can do a NIC in software because of the hardware assist for the I/O? Still, that's pretty good.

--

Rick C
Reply to
rickman

Yes that's a way to achieve it. However, for performance, I would use POSTPONE and IMMEDIATE to compile the small functions inline e.g.

0 invert 65535 u> [if]  \ 32 bit cells
   : -32   ( n1, n2 -- n3 ) postpone - ; immediate
   : max32 ( n1, n2 -- n3 ) postpone max ; immediate
   : to32  ( n1 -- n1 ) ; immediate
[else]                  \ 16 bit cells
   : -32   ( d1, d2 -- d3 ) postpone d- ; immediate
   : max32 ( d1, d2 -- d3 ) postpone dmax ; immediate
   : to32  ( n1 -- d1 ) postpone S>D ; immediate
[then]

Then, for example, TO32 for 32 bit cells compiles nothing in FLOOR5 etc.

But, as an aside, a warning, -32 is treated by Forth as an integer so after the above definitions you couldn't ever use -32 as a literal in another definition as it would compile a -. Better to call it, say, SUB32

With this technique you could also do:

0 invert 65535 u> constant 32bits
: -32 32bits if postpone - else postpone d- then ; immediate

and so on. This would result in the same compiled code for FLOOR5 etc with less source code noise. At the cost of more compiled code of course as -32 etc are bigger. But this wouldn't matter if you were cross compiling on a host for a target system.

Another alternative is to factor out the conditional part:

0 invert 65535 u> constant 32bits
: postpone-it ( xt1 xt2 -- )  \ xt1 is for 32 bit cells, xt2 for 16 bits
   32bits if drop else nip then compile, ;

: -32   ['] -   ['] d-   postpone-it ; immediate
: max32 ['] max ['] dmax postpone-it ; immediate
etc

but whether that is worthwhile depends on how many of these definitions there are.

--
Gerry
Reply to
Gerry Jackson

No, they were not much more useful. They had a few points where they were particularly good (I remember they were a good way of doing capacitive touch sensing when they were new). But you needed so many of the programmable digital blocks to do anything. If you wanted a UART, a 16-bit timer and an ADC, you had to get one of the bigger devices with more blocks - and that is for basic stuff that every other microcontroller had had for a decade.

Nope.

It is much easier and cheaper to have dedicated peripherals for the standard tasks. /Then/ you can make use of the interesting programmable blocks to make specialised peripherals.

The AVR in the XMega will do things like CRC and PRS in software - it is a much more powerful cpu than the 8051 of the early PSoC's. (But not nearly as fast as the Cortex-M cpus.) And since you have all these other peripherals in hardware already, you don't /need/ programmable blocks to implement them. (I am not suggesting that the XMega E has /programmability/ to compare with the PSoCs' - I have never suggested such a thing. I /am/ suggesting that they are more /useful/ than the early PSoC's because those PSoC's were far too limited.)

Now the PSoC's have enough blocks to be useful - they did not originally. And the newer devices (Cortex based) have finally got things right - they have a proper cpu rather than a core that was outdated 30 years ago, and they have a full selection of normal fixed peripherals. The programmable blocks are an /addition/ to a solid microcontroller base, rather than instead of it.

You see this process again and again. When the PSoC came out, the marketing was all about claims like yours - you don't need fixed hardware peripherals because you can use the flexible programmable blocks. Now modern PSoC's have lots of fixed hardware peripherals as well. When the XMOS came out, marketing talked about how their deterministic SMT and I/O blocks meant that you could make Ethernet and USB in software. Now you can buy XMOS chips with an Ethernet MAC or a USB interface. When FPGAs were younger, you apparently did not need a hard processor because soft processors could do such a good job. Now FPGAs with hard processor cores are much more common - even when the speed (such as SmartFusion2's 166 MHz Cortex-M3) would be achievable in a soft processor. And guess what? These devices come with a range of dedicated fixed hardware peripherals such as CAN controllers, I2C, SPI, Timers, etc. And that's on an /FPGA/ - making a timer block on an FPGA is about as simple a task for programmable logic as you can get.

Again and again it is shown - a selection of dedicated hardware standard peripherals is important, and vastly more efficient than doing everything in programmable blocks, programmable logic, bit banging, etc.

Yes, you basically have a SERDES system for each of the I/O pins, and a whole array of hardware timers that can trigger the transfers.

Reply to
David Brown

What pitfalls? Everyone who has ever been involved in C (or C++) knows that expressions like "i = ++i + 1;" are classic examples of undefined or unspecified behaviour. There is never a reason for writing such things in code, and it is always unclear to the reader (even in later C++ standards where some cases now have defined ordering). Just don't write such silly code - problem solved.
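A minimal sketch of the point, with the undefined form kept only as a comment and the well-defined equivalent as live code (the function name is invented for illustration):

```c
/* "i = ++i + 1;" modifies i twice with no sequencing between the
   modifications, which is undefined behaviour in C. The intended
   effect is simply to add 2, which is fully defined: */
static int add_two(int i) {
    /* i = ++i + 1;     undefined behaviour in C - never write this */
    return i + 2;       /* same intent, well defined */
}
```

GCC and Clang diagnose the undefined form with -Wsequence-point (enabled by -Wall), which is exactly the "let the tools spot it" point made earlier.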

Later C++ standards gave some such cases defined behaviour - that does not affect C, nor does it introduce new pitfalls to either C or C++. At most, it makes previously undefined code into defined code - that will either fix the broken code or leave it broken. It will not break code that was previously working.

If you are interested, the reason why these things are now defined in C++ is not because anyone would ever /want/ to write "i = i++ + ++i;". It is merely a side-effect of making certain other orderings defined, where they /are/ useful. In particular, people have assumed that chained expressions like "cout << f() << g()" evaluate left to right, and later C++ standards made that ordering defined.

No, I cannot see any problem here.

If a library exports some constant data, then it is absolutely fine that the library is compiled and optimised on the assumption that the data never changes. If user code casts away constness and tries to change that data, the user code is clearly wrong. It is wrong in the same way that code passing -1 to a square root function is wrong, or code that calls a sin() function but is expecting to get the results of cos().

If the user does not know that he should not be changing data that is specified to be constant, then the user is not qualified for the job as programmer.
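A minimal sketch of the scenario (the calibration table and accessor are invented for illustration):

```c
/* A library exporting constant data; the compiler may place it in a
   read-only section and fold loads at call sites. */
static const int calibration[4] = {10, 20, 30, 40};

static int get_cal(int i) {
    return calibration[i];
}

/* Writing through a cast, e.g.
       *(int *)&calibration[1] = 99;
   is undefined behaviour: on many targets it traps (the data sits in
   flash or a read-only segment), on others the write is silently lost
   because callers were compiled assuming the values never change. */
```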

The FQA is not particularly funny or truthful. (I have read it, as well as the original C++ FAQ - have you?).

C++ certainly has plenty of flaws - it is a /big/ language. Some of these flaws get fixed over time in newer standards, others remain, and yet more get introduced. And of course there is plenty that is a matter of taste or style. But wild exaggeration of the problems is no more helpful than any claim that the language is perfect.

One of the major problems with C/C++ is that there is no such language - but a lot of people seem to think there is.

A lot of people find C++ works fine for one or both targets. It is, I think, the only language that covers such a wide range. But it is not a /simple/ low-level language - it is a big language, whether you use it for low-level tasks or high-level tasks.

There is, however, no need to use /all/ of C++. If you are writing PC code, you will use the standard library a lot - you will use containers, strings, etc. But you don't use them on low-level code. Different parts of C++ are better suited for different needs.

C, on the other hand, is not well suited for higher level work at all. There was a time when it was one of the better choices, because there were so few alternatives. But not now.

Most of my embedded programming is in C, with a small (but increasing) part C++. Most of my PC programming is in Python.

Reply to
David Brown

I agree with Rick that this should be an FPGA project. We are working on something similar. Our first thought was to tap into the SD card. But the SD card only receives data in batches at pre-determined intervals. During that time, the device is suspended and not usable. We really want better real-time access. So, we opened up the box and found a couple of DIP32 sockets next to the surface-mounted SRAM. We would have to disable the onboard SRAM and wire up headers to a custom FPGA board.

Found a Cyclone II with 64K SRAM. Asking the seller if he can upgrade it to 128K. If so, that saves us half of the project time. Eventually, we can probably build a DIP32 header with FPGA and SRAM. I can't mount BGA myself, but I'm more than happy to pay someone to do it.

The FPGA code is just one page (incomplete with control lines mux), just to see if it will fit in the cheapest Max II CPLD.

---------------------------------------------------------

library ieee;
use ieee.std_logic_1164.all;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity ram is
  port(
    P7   : in std_logic;      -- preserve upper 9 bits for next shift
    P6   : in std_logic;      -- preserve upper 10 bits for next shift
    P5   : in std_logic;      -- preserve upper 11 bits for next shift
    ACLK, clear, pass : in std_logic;  -- Address serial clock
    ASI  : in std_logic;      -- Address serial in
    ASO  : buffer std_logic;  -- Address serial out
    A    : buffer std_logic_vector(16 downto 0);  -- Address register
    AA   : in std_logic_vector(16 downto 0);      -- Address parallel in
    DCLK : in std_logic;      -- Data serial clock
    DSI  : in std_logic;      -- Data serial in
    DSO  : buffer std_logic;  -- Data serial out
    D    : buffer std_logic_vector(7 downto 0);   -- Data register
    DD   : in std_logic_vector(7 downto 0)        -- Data parallel in
  );
end ram;

architecture arch of ram is
begin
  process (ACLK, clear)
  begin
    if clear = '1' then
      A

Reply to
edward.ming.lee

In the same way that C hasn't changed.

Just like C. And I read a lot of C.

Unless you use a library for the purpose. Just like C.

No, not me. We use 256 character significance. The base system has about 20 named namespaces.

You persist in believing that colorForth represents the state of Forth. It doesn't. It's a one-man system for one man's use and Chuck Moore does not pretend anything else.

Stephen

--
Stephen Pelc, stephenXXX@mpeforth.com 
MicroProcessor Engineering Ltd - More Real, Less Time 
Reply to
Stephen Pelc

I haven't used blocks for decades. You are deeply misinformed.

Stephen

--
Stephen Pelc, stephenXXX@mpeforth.com 
MicroProcessor Engineering Ltd - More Real, Less Time 
Reply to
Stephen Pelc

I'm calling applesauce on this one. Again, you are talking in vague terms that mean little and trying to make comparisons without specifics. The PSOC 1 parts were designed for *very* low cost. They had two advantages over other devices. They could use the same die for a wide range of peripheral combinations, making the part cost lower from higher production volumes. The other advantage is that the peripherals could be changed on the fly, acting as one set of peripherals in one mode of operation and another set in another mode, again keeping the part cost low because you can use the smallest possible part.

If you think there are other parts that can reach the same price point with the same capabilities, please point them out. Apple has used the PSOC 1 in their iPod Nano as well as other companies using them in high volume low margin applications where performance vs. cost is critical.

Your criticism regarding the peripherals just doesn't hold water.

The fact that you said "Nope" doesn't make it true. Dedicated peripherals are just that, dedicated. They can't be anything else. If you aren't using them they are wasted silicon. They tend to be added in proportion. Chips with more UARTs are likely to have more I2C and SPI as well, often wasted. PSOC 1 devices sold well enough to markets that needed to save every last penny.

The only real issue with PSOC 1 was the design software. I scheduled a live remote class once which turned out to be me and the instructors. lol They were willing to teach the tools one on one in the early days because they hadn't gotten things working well enough to convey the knowledge any other way. That's why they revamped the line to PSOC 3 and 5 and now all the others with all new tools. Personally I don't care for the tools because they isolate the designer from what is going on, but they work at least.

The "early PSOCs" didn't use an 8051, it was a custom M8C core. I don't recall the speed relative to other 8 bitters, but it runs at 24 MHz with a built in multiplier. To say the AVR blows it away I expect is rather an exaggeration.

You keep talking about devices purely from your perspective. The market says your criticisms are wrong. The original PSOC devices are still sold and are very cost effective. They offer a range of combinations that are much, much wider becoming optimal for a much wider range of applications.

None of this makes the XMega E a useful part. Programmability is highly useful allowing devices to reach an optimum price point. The XMega E would have only a very, very tiny niche.

I think we are starting to see why the XMOS devices are so expensive, very extensive dedicated hardware. Too bad they don't have a few programmable digital blocks that can actually do something useful.

--

Rick C
Reply to
rickman

You are showing your ignorance of Forth. The test code catches the errors at compile time same as C.

If you are going to forget tests you are doomed. Every piece of code should be written to a set of requirements, each of which must be verified, usually by testing. Forget a test and you have unverified code.

See above... You don't understand Forth.

I've never seen a C editor that would catch anything more complex than mismatched parentheses. Do editors look for missing variable declarations now?

Again, you don't understand Forth. You use single cells or double cells. Forth does not specify the cell size just as C does not specify the size of an integer.

Forth has cells (a word) and chars (a byte). I don't find it to be a problem, but then I don't typically jump back and forth between 32 and 16 bit processors. What 16 bit processors do you actually use?

No one has customers asking for simple functions.

--

Rick C
Reply to
rickman

No, but that has already been discussed and there is no need to repeat it.

No, because C is a typed language. It is not a particularly strongly typed language, and some people write their C code in very weak ways (making everything an "int"). But it is a big step up from a typeless language and lets the compiler do a good deal of checking.

C has different sized types as part of the language, not just a library.

Very good. I was commenting here specifically on Forth as found on the GA144 toolchain, but I am glad you have namespaces and long identifiers.

Again, the context has been Forth on the GA144 - and that /is/ colorForth. But again, I am glad to hear that it is not the state of Forth in general.

Reply to
David Brown

That was from information from the GA144 website.

Reply to
David Brown

The SERDES and timers are very simple, so I'm unconvinced they are the reason for any perceived expense.

Have a look in the "ports" reference I previously gave you

You will see that the
- SERDES is only a shift register plus buffer register,
- each /port/ timer is only 16 bits and is constantly ticking, plus a register and comparator to enable timed i/o,
- there's a pinseq/pinsneq register and comparator.

Scarcely enough to dominate the chip area, especially when you consider the memory and xCONNECT switch fabric.

Without knowledge, my guess is that the switch fabric will have a far larger area than the port logic, just as in FPGAs the interconnects are far larger than the IO cells.

Reply to
Tom Gardner
