Signals and Systems for Dummies

I am not convinced that we will see more of the XMOS type of architecture. (Though I do think that they are a very innovative idea, fun to play with, and ideal for some types of problem.)

Basically, it is easier, cheaper, lower power and more developer friendly to have dedicated hardware blocks for peripherals. Sure, an XMOS is capable of making a 100 Mbit Ethernet MAC in software - but at a price much higher than a dedicated hardware MAC. An XMOS has the flexibility to have multiple UARTs and SPIs in software - you choose exactly the number you want, and the pins you want to use. But it is cheaper for an ARM Cortex M microcontroller just to have 5 UARTs and 4 SPI units on the chip even though most people will only use one or two of them.

There are different kinds of application where different solutions are a better fit. The XMOS has a place where neither a normal microcontroller nor an FPGA is the ideal fit - but it is a small slot, as "not ideal but good enough" encroaches from both sides.

What we are seeing more of in newer devices is asymmetric multiprocessing. Rather than choosing between a Cortex A9 with massive processing power but unpredictable and costly interrupt response, or a Cortex M4 with mid-level processing, deterministic interrupts and good control of small peripherals, you can now pick a chip with both cores on board.

It is all about development time - that's what costs. Specialist devices cost more to use, in time and money. There is a decline in the general usage of DSPs, because they are too costly for development - people prefer to use standard microcontrollers or processors (even if the clock rate needs to be higher), or pre-packaged units with all the development work done before (an audio decoder chip, a microcontroller with a graphics unit with video acceleration, etc.).

Reply to
David Brown

No, that is not the case. Look it up in the docs.

12 cycles is the nominal latency - but it can be greater due to wait states or other complications. Faster Cortex M devices almost always have wait states on flash if your code (and interrupt vectors) are not in the flash pre-fetch buffers. You don't get 8 cycles when you are inside a lower-priority ISR - you simply save a few cycles if you are currently in the interrupt entry or exit stages of a lower priority ISR.

For a slower M3, where the flash speed keeps up with the core, you will get quite stable interrupt response timings. For faster devices, they are deterministic in having a guaranteed maximum delay (which can be calculated from the clock rates, wait states, flash details, etc.). And /most/ interrupts will have the same response time - but you can occasionally have ones that take longer (but within a known limit).
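A rough way to see the real numbers on a given chip (a sketch only - it assumes a CMSIS environment on an M3/M4 class core, and the TIM2 interrupt name and "device.h" header are just placeholders for whatever your device provides):

/* Assumes CMSIS on a Cortex-M3/M4 class core (the DWT cycle counter is
   not present on M0).  TIM2_IRQn is just an example interrupt. */
#include "device.h"                     /* your device's CMSIS header */
#include <stdint.h>

volatile uint32_t isr_entry_cycles;

void TIM2_IRQHandler(void)              /* handler for the interrupt pended below */
{
    isr_entry_cycles = DWT->CYCCNT;     /* cycle count at ISR entry */
}

void measure_irq_latency(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block */
    DWT->CTRL        |= DWT_CTRL_CYCCNTENA_Msk;      /* start the cycle counter */

    NVIC_EnableIRQ(TIM2_IRQn);
    DWT->CYCCNT = 0;
    NVIC_SetPendingIRQ(TIM2_IRQn);      /* pend it in software... */
    __DSB();                            /* ...and the ISR preempts us here */

    /* isr_entry_cycles now holds a rough entry latency in core clocks;
       repeat with flash wait states, a lower-priority ISR active, etc. */
}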

Reply to
David Brown

The DMA is not part of the Cortex core, and so varies from device to device. But most DMAs can handle circular mode or ring buffers. For some, the buffer must be aligned properly (a 2K ring buffer must be 2K aligned, etc.) - others are more flexible. Some DMAs give you an interrupt when they are halfway through - that can let you adjust the pointers and get a ring buffer effect.
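For illustration, the half-transfer trick usually ends up looking something like this (a sketch - the callback names and next_sample() are made up, since every vendor's DMA driver is different):

#include <stdint.h>
#include <stddef.h>

#define BUF_SAMPLES 512
static uint16_t dac_buf[BUF_SAMPLES];    /* circular DMA buffer the DAC reads from */

extern uint16_t next_sample(void);       /* hypothetical: produce the next output sample */

static void fill(uint16_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = next_sample();
}

/* Called from the (vendor-specific) DMA half-transfer interrupt: the DMA
   is now reading the second half, so the first half is free to refill. */
void dma_half_complete_callback(void)
{
    fill(&dac_buf[0], BUF_SAMPLES / 2);
}

/* Called at transfer complete: the DMA has wrapped back to the first half. */
void dma_full_complete_callback(void)
{
    fill(&dac_buf[BUF_SAMPLES / 2], BUF_SAMPLES / 2);
}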

Reply to
David Brown

Precisely.

Why have modern softies forgotten that "you can't inspect quality into a product"?

Precisely.

The standard battle-proven "design patterns" have been described elsewhere in this thread. Of course it isn't necessary to use appropriate design patterns, but avoiding them does increase risks.

Precisely. Such code is more brittle than necessary. It may well be satisfactory, but if it later mutates/expands or anything changes then the result may end up in WTF territory.

Reply to
Tom Gardner

There was a fad for a while for "provably correct" software. Obviously, it didn't last.

As far as min/max execution time goes, we can read the code and test it roughly 10 billion times over a weekend.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

That's all sane, therefore I don't disagree.

However I will note that outside the embedded arena the CSP-inspired hardware and software design patterns may become more important. There are signs of this with the enthusiasm with which Go and Rust are being taken up. I haven't used either, and history is littered with languages with marginal advantages.

I'll also note that asymmetric processing is scarcely a new phenomenon; even mainframes had their special purpose i/o processors, of which I am blissfully ignorant. But asymmetric processing has never taken off.

Another example is that many companies have invented i/o offload processors (especially for TCP and networking), and they have all fallen by the wayside. The usual reason is that the main processor ends up waiting for the under-powered i/o processor to do the work and synchronise memory buffers. That phenomenon seems pretty fundamental and unlikely to change.

Finally, even "general purpose" asymmetric processing hasn't taken off, e.g. the Cell processor. Perhaps it is a case of there being only three numbers that can be dealt with easily: 0, 1, and many. 8 is relatively difficult.

But maybe the time finally has come for asymmetric processing.

Reply to
Tom Gardner

On a sunny day (Mon, 23 Oct 2017 15:44:04 -0400) it happened bitrex wrote in :

SRAM is not the same as FIFO. If you use a circular buffer and use an interrupt at 44 kHz to send data from an 8 bit SRAM to the DAC, it could work.
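Something along these lines (a sketch only - DAC_OUT and sdcard_next_byte() are hypothetical placeholders for the real output register and card driver):

#include <stdint.h>

#define BUF_SIZE 256                       /* 8-bit indices wrap at 256 automatically */
static volatile uint8_t buf[BUF_SIZE];     /* circular buffer, filled by the main loop */
static volatile uint8_t head, tail;

extern volatile uint8_t DAC_OUT;           /* hypothetical memory-mapped 8-bit DAC register */
extern uint8_t sdcard_next_byte(void);     /* hypothetical: next audio byte from the card */

/* Timer interrupt at the sample rate (44 kHz here): one sample per tick to the DAC. */
void sample_timer_isr(void)
{
    if (head != tail) {
        DAC_OUT = buf[tail];
        tail++;
    }
    /* else: buffer underrun - hold the last value or output mid-scale */
}

/* Main loop: top up the buffer whenever there is room (leave one slot free). */
void refill_buffer(void)
{
    while ((uint8_t)(head + 1) != tail) {
        buf[head] = sdcard_next_byte();
        head++;
    }
}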

I have done lower sample rate audio with just a PIC using the PWM output, so no DAC needed:

formatting link

With an external FIFO however

16 bit is 2 bytes every 1/22k seconds, i.e. 44000 bytes per second. Your 1 K = 1024 bytes of memory gives you 1024 / 44000 = 0.023273 s, or about 23 ms, for your processor to do other things. That is perhaps just enough to cover the time of a Linux task switch. So it depends how fast your processor is, how fast you can read from that SD card, and what else it needs to do. I would not expect any problems on a normal 8 bit PIC for example.
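The same back-of-envelope sum in code form, if you want to play with other buffer sizes and rates (the numbers are just the ones above):

#include <stdio.h>

int main(void)
{
    const double sample_rate      = 22000.0;   /* samples per second */
    const double bytes_per_sample = 2.0;       /* 16-bit samples */
    const double fifo_bytes       = 1024.0;    /* 1 K external FIFO */

    double byte_rate   = sample_rate * bytes_per_sample;  /* 44000 bytes/s */
    double buffer_time = fifo_bytes / byte_rate;          /* ~0.023 s */

    printf("byte rate: %.0f bytes/s, buffer covers %.1f ms\n",
           byte_rate, buffer_time * 1000.0);
    return 0;
}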
Reply to
Jan Panteltje

On a sunny day (Mon, 23 Oct 2017 14:31:03 -0400) it happened bitrex wrote in :

That makes no sense unless you also inspect the code it generates. And if you do that, then you may just as well use inline asm, or write in asm directly. And anything that has ++ in it is a crime against humanity anyway and should not be trusted - it differs from one vendor to the other, or even within one vendor. Although there is such a thing as the 'fastest', or 'minimum number of', instructions, there may still be a hundred different ways you can code it in asm, for different reasons, that a plush plush crumpiler will not automagically do for you.

Reply to
Jan Panteltje

Yes and no.

There were manifestly overblown hopes and claims for it, usually from pure mathematicians that don't get their hands grubby with the real world. I've debated with far too many of those. In the simplest example, I remember showing a real-world FSM to someone with a tool for proving properties of FSMs. They threw up their hands in horror at the number of states!

But it is wrong to say "it didn't last", without further qualification. The attitudes and processes still exist, are still being actively developed for new important niche applications. SPARK Ada is merely one example.

That's the rationale snake oil salesmen are currently deploying w.r.t. neural net AI systems. I remember a tank/car image discriminator which worked wonderfully in the lab, but failed dismally on Lüneburg Heath. Finally they realised that all they had done was train the neural net to distinguish between sunny days (car pictures) and overcast days (tanks on heathland).

You can't test quality into a product.

Reply to
Tom Gardner

There is a vast difference between confirming that a C or C++ compiler has generated approximately the code you expect, and writing it yourself in assembly.

You forgot the smiley - it is not clear whether you are just joking, or really are that ignorant.

And there may be a hundred different ways the compiler can generate the assembly from simple, clear and maintainable high level code that would be impractical or impossible to write in assembly in a way that is simple, clear, and maintainable.

Reply to
David Brown

Yes. My worry - from experience - is that microcontrollers are rarely used for just one thing. I have no idea what else is on the OP's card here, but it is easy to imagine that there is more to do. The programmer's first job is to get a decent sine wave out of the thing, as that is the hard part of the coding. He has shown that works well enough. Then he is asked to add communication via a UART or CAN, handle monitoring of temperature and current levels, support different amplitudes on the outputs, etc. Suddenly his simple program with one interrupt, no critical regions and predictable timings is no longer so simple - and his "happens to work well enough" signal synthesis no longer works all the time.

(To save a couple of posts in this thread - yes, with the XMOS you can run these in another thread without affecting the first part of the code. And yes, I agree that /is/ a good reason to look at XMOS devices.)

The other worry is that problems that occur one time in a million are really difficult to find in testing - you have to design the system so they don't happen. To be fair on the OP here, he is not doing the software and can't be expected to know all the details of what the programmer is doing, but it does sound like a "happens to work" rather than "designed to work" solution.

Reply to
David Brown

It /has/ lasted, and it is still done. But it is usually only done at a more theoretical level - or in a few cases with extreme requirements on reliability and correctness. Basically, proving software correct is a huge amount of effort, and is rarely cost-effective.

What /is/ done, is /designing/ for correctness - and designing to cope with worst case scenarios.

And it is quite practical to write a lot of software with the knowledge that it /could/ be proven correct.

For comparison, you might decide to put a 22 uF capacitor and a 4k7 resistor on a signal line to filter it. You know you /could/ write up some differential equations and calculate the frequency responses. You know you /could/ fire up Spice and do the simulations. But you are happy to simply know that you /could/ prove it if you had to - there is no need to go through the motions.
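(And indeed that one works out on the back of an envelope - a sketch of the sum one /could/ do:)

#include <stdio.h>

int main(void)
{
    const double pi = 3.141592653589793;
    const double R  = 4700.0;     /* 4k7 resistor, in ohms */
    const double C  = 22e-6;      /* 22 uF capacitor, in farads */

    /* First-order RC low-pass: -3 dB corner at f_c = 1 / (2 * pi * R * C) */
    double fc = 1.0 / (2.0 * pi * R * C);
    printf("corner frequency: %.2f Hz\n", fc);    /* about 1.5 Hz */
    return 0;
}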

The same applies in programming. When I write a loop, I am thinking of the loop invariants and the loop variants, the starting prerequisites and the ending requirements. I know I /could/ prove that the loop terminates correctly - but I don't spend time doing it.
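As a trivial illustration of that style of thinking (a sketch, not from the original post) - the comments mark the obligations one keeps in mind without formally discharging them:

#include <stddef.h>

/* Sum of the first n elements of a[].
   Invariant at the top of the loop: sum == a[0] + ... + a[i-1]
   Variant: n - i, a non-negative quantity that strictly decreases,
   so the loop must terminate. */
long sum_array(const int *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        /* invariant holds here: sum covers a[0..i-1] */
        sum += a[i];
        /* invariant restored for i+1; variant n-(i+1) has decreased */
    }
    /* loop exit: i == n, so sum == a[0] + ... + a[n-1] - the postcondition */
    return sum;
}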

Provably correct programming was a major part of my university education. I have very rarely done such proofs since then - but the knowledge and understanding from that is vastly more useful to me and to practical software development than if my course had been about programming in C or Java, or whatever passes for "programming courses" in most institutions.

And you will never find the bugs that appear on average once every 6 months.

Testing is vital, but it is rarely done well enough. And it is not sufficient to show that something is bug-free.

Reply to
David Brown

On a sunny day (Tue, 24 Oct 2017 12:13:33 +0200) it happened David Brown wrote in :

I am not so sure about that. If you REALLY understand the code.

I am that ignorant.

start: What is code? Just similar to a step by step instruction on a map: go left, go right, go left again...

I agree that all that register juggling with RISC is a lot of work. Also I do not check gcc generated code....

But I do write a lot of asm, never had a problem understanding it years and years later.

We live in a time where high level languages .. well I have mentioned that before: the CEO will get an app, and all he has to do is say: I want ... where ... is what has to happen. His required education will be limited to saying the words 'I want'. It is top down.

But.. you cannot have top down without knowing the details bottom up, else disaster is the outcome. Politicians do top down without normally having a clue what they are 'designing'. But it can be read and 'understood' by the masses. There is no difference between the ape colony - where Big Ape screams orders to his 3 women and the rest of the group, and they follow - and human behaviour: advisers advise Big Ape, for a better place in the pecking order. I think humans call it lobbying. C++ is a crime against humanity.

Code is not objects. goto start

Reply to
Jan Panteltje

Although XMOS is heavily CSP-inspired, it is quite possible to use CSP ideas and methods in more conventional processor architectures. There are also plenty of OS's that rely heavily on message passing rather than locks or shared memory for IPC.
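A small example of that message-passing style on a perfectly ordinary OS (a sketch using POSIX message queues; the queue name and sizes are just illustrative):

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <mqueue.h>     /* POSIX message queues (link with -lrt on Linux) */
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Producer side: send one complete message - no shared state and no
       explicit locks in the application code. */
    const char *msg = "temperature=23";
    mq_send(q, msg, strlen(msg) + 1, 0);

    /* Consumer side: receive one complete message. */
    char buf[64];
    if (mq_receive(q, buf, sizeof buf, NULL) > 0)
        printf("got: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}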

Certainly systems have been made with multiple processors for different purposes since the earliest programmable computers. And there have been microcontrollers with specialist units for decades too - the 68332 with a m68k core and a programmable timer processor unit, or chips that combine DSP cores with a general purpose core. What is new, I think, is the current style of mixing two different general purpose cores to get different balances of characteristics. It might be to have a high speed device and a low-speed but deterministic device. Another combination is the "big-little" chips with fast ARM cores and slower but lower power cores.

The Cell was a rather specialised device. You had a main PPC core that was a "normal" cpu, but the cell nodes were very restricted - each had access to only a small section of memory, for example.

Reply to
David Brown

Then you are wrong.

Do you honestly think it takes no more time and effort to write some high quality assembly code, than it takes to /read/ that code?

It is fine not to like C++ - there are many people who don't like it, for good reasons or for bad ones. To say you don't like it because it has a "++" in the name, and that this "++" makes it "a crime against humanity" - don't you think that is perhaps a little OTT?

Reply to
David Brown

My experience too.

Curiously my lecturer went on to becoming the driving force behind SPARK Ada. I've always been sad I haven't had the occasion to use that language.

I've currently got a (rather good) SPARK textbook from a charity shop, and I'm using it quite effectively to mitigate insomnia :)

Just so.

Much of the testing I've seen is abysmal - it didn't even cover the "happy daze" scenarios, let alone the "what happens if" corner cases.

Reply to
Tom Gardner

Very much so. I've even architected and implemented telecom application-level systems like that :)

I really don't understand why so much attention is given to procedures, remote procedures, and stream communications. Maybe coming from a hardware+software background makes me naturally think in terms of events/messages.

I'd forgotten the big-little ARM concept, mainly because I've never used it. It is a reasonable example of a viable asymmetric processor, but I wonder how generally applicable it is. Time will tell.

In any case that's "merely" a hardware implementation; would I be right to presume inter-processor comms are bog-standard C/C++ RPC or shared memory? Or are there libraries with more interesting properties?

Indeed, but I would expect asymmetric processors to have limitations; provided the programming is easy, that shouldn't be a problem. But as we are both aware, programming is usually the problem.

Reply to
Tom Gardner

Different solutions work better in different circumstances. And some of it is habit, or experience - some is just due to bad design choices in the past.

Take TCP/IP for example. It is a stream protocol - you see if there is some data available by trying to read some, possibly with a timeout. You don't know at the start how much data might be there, and you don't know when it is finished. It's great for things like telnet, which is why it was developed - a network equivalent of a UART. But it is /stupid/ for most network communication where each end simply wants to send a lump of data to the other end in a nice reliable connection-oriented manner.

So why do we use streams? Because people have got too used to workarounds and manual coding with streams, and don't realise there are better ways to transfer data.
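The usual workaround is to put the message structure back on top of the stream yourself, e.g. a length prefix (a sketch; short-write handling and error reporting trimmed):

#include <stdint.h>
#include <unistd.h>        /* read(), write() */
#include <arpa/inet.h>     /* htonl(), ntohl() */

/* read() may return fewer bytes than asked for, so loop until we have them all. */
static int read_all(int fd, void *buf, size_t len)
{
    uint8_t *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0) return -1;          /* error or peer closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Send one message as a 4-byte big-endian length followed by the payload.
   (A real version would also loop on short writes.) */
int send_msg(int fd, const void *data, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (write(fd, &hdr, sizeof hdr) != sizeof hdr) return -1;
    return write(fd, data, len) == (ssize_t)len ? 0 : -1;
}

/* Receive exactly one complete message; returns its length, or -1. */
int recv_msg(int fd, void *buf, uint32_t maxlen)
{
    uint32_t hdr;
    if (read_all(fd, &hdr, sizeof hdr) < 0) return -1;
    uint32_t len = ntohl(hdr);
    if (len > maxlen) return -1;
    return read_all(fd, buf, len) < 0 ? -1 : (int)len;
}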

I've only "used" the big-little ARM devices in terms of "I've used a phone with one". (Not my phone either - I have a cheap "young person" telephone. It's the young folk in the family that have the expensive ones!).

I have briefly used a Cortex A + Cortex M chip, but I haven't used one seriously as yet.

I'd imagine shared memory is the main method.

It always bugs me that processors for SMP or ASMP don't have more efficient communication methods. It is so /simple/ in hardware to have something like a dedicated block of locks or hardware semaphores - and so much more efficient than load/store reserved, or bus locking protocols, and endless cache snooping. You can then build your software semaphores, message passing, or whatever on top of these hardware locks.
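From the software side it could be as simple as this (the register block and its address are entirely hypothetical - the point is that claim and release become single bus reads and writes):

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware semaphore block: reading a slot returns 1 if you
   just acquired it (and atomically marks it taken), 0 if it was already
   taken; writing releases it.  The base address is made up. */
#define HW_SEM ((volatile uint32_t *)0x40080000u)

static inline bool hwsem_try_lock(unsigned n)
{
    return HW_SEM[n] != 0;        /* single bus read: acquired or not */
}

static inline void hwsem_unlock(unsigned n)
{
    HW_SEM[n] = 0;                /* single bus write releases it */
}

/* Message passing or shared data can then be built on top of this, with no
   LDREX/STREX retry loops and no cache-line ping-pong. */
void shared_counter_increment(volatile uint32_t *counter)
{
    while (!hwsem_try_lock(3))    /* lock #3 guards this counter (arbitrary choice) */
        ;                         /* could yield or WFE here instead of spinning */
    (*counter)++;
    hwsem_unlock(3);
}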

Reply to
David Brown

He has the "Not Designed Here" mentality, which usually leads to the re-invention of square wheels.

The idea being that if you roll your own shitty implementation of a data structure or algorithm your code must somehow intrinsically be better than using a library call that dozens or hundreds of professionals have had input into and refactored over years or decades because you "really UNDERSTAND" the behavior of the square wheel you built yourself.

Reply to
bitrex

On a sunny day (Tue, 24 Oct 2017 13:02:23 +0200) it happened David Brown wrote in :

Indeed.

Not more than all the see plush plush fanatics.

Reply to
Jan Panteltje
