Signals and Systems for Dummies

The LPC1764 also does some unspecified branch anticipation and dual-ports the flash.

We use the LPC3250 in more serious applications. It runs at 266 MHz and has hardware floats.

It has 256K of SRAM, which is enough in most cases. It also has a DRAM interface and cache, for bigger apps.

--
John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

Yes, it's clearly better to jitter the DAC clocking and "tweak" the analog filters rather than removing the added distortion in the first place. No one here has bothered to do the simple math to show the clocking jitter is not significant. John has often said he is computationally averse, preferring to worry about the things he "thinks" are a problem rather than do the simple calculations.

The point is that the DMA approach is virtually free in recurring cost, requiring only a small investment in development effort to learn how to use the DMA. This is a tool that can be used over and over in the many designs John repeatedly does. I have no idea why he chooses not to learn this simple technique.

The world is full of noise. Do your designs actually suffer from a lack of noise? I know how to fix that problem, lol. Let John design them.

--
Rick C 

Viewed the eclipse at Wintercrest Farms, 
Reply to
rickman

On Monday, 23 October 2017 at 00:09:56 UTC+2, boB wrote:

I haven't tried it, but ST also has a Cortex with three 16-bit delta-sigma converters, the STM32F373.

-Lasse

Reply to
Lasse Langwadt Christensen

From reading the datasheet I believe the LPC1764 DAC has an optional double-buffering feature, i.e. you set a 16-bit timer value, and every time it expires the DAC is updated with the value in a register, the timer is reloaded, and an interrupt is issued so you can write a new value for the next update.

Reply to
Lasse Langwadt Christensen

Of course interrupt jitter can be a problem on RISC processors. How much of a problem depends on your needs, and on the cpu. On some AVRs instructions can take as long as 7 cycles (like far calls on the big devices). On a small ARM (Cortex-M), such as the OP's, the variation in the interrupt response from the cpu is going to be small and deterministic - but it still varies.

The main source of jitter, however, is from sections of code where interrupts are disabled, such as for accessing critical data, making atomic accesses, etc. And if you have more than one interrupt in the system then that will cause additional jitter (sometimes very large jitter).

The answer to low jitter on the hardware is to do as Rick suggests - set up the hardware to update the signal regularly from a timer, and use the interrupts to prepare for the next output or outputs (with DMA if you've got it, or just double buffering of some sort).

Reply to
David Brown

Bitrex is wrong - the typical jitter is going to be vastly more than a couple of clock cycles. Say, 10 cycles for the cpu + flash, 10 for other interrupt disabled periods (assuming that the timer interrupt here is top priority), and guessing randomly about your code in the interrupt routine, a variation of 30 cycles to handle things like moving from one sine wave cycle to the next. That gives a jitter budget of 50 cycles, or 1 us - 2% of your sample time. You will only see worst cases occasionally - sampling with a scope is a hopeless way to see them - but they can occur.
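(For reference, the conversion behind those figures appears to assume a 50 MHz CPU clock and a 20 kHz update rate, neither of which was stated explicitly: 50 cycles / 50 MHz = 1 us, and 1 us out of a 50 us sample period is 2%.)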

His technical point was clear and valid, with good advice.

Reply to
David Brown

It is all quite simple. Set up a cyclic DMA buffer. Have a hardware timer trigger this at regular intervals to take the next value from the buffer and pass it on to the DAC. In your main code, or software timer, or whatever, you regularly check the level of the DMA buffer. If it is running low, fill it up with some more samples. The bigger your buffer, the more efficient this will be and the less timing pressure there is on the code (which can then have tight timing for communication, user interface, or whatever) - but that also means slower reaction times when you need to change the frequency, amplitude, or phase.
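A minimal sketch of what that refill check might look like in C, assuming a 256-sample ring that the DMA hardware drains on its own; dma_read_index() and next_sine_sample() are placeholder names, not any particular vendor's API:

  #include <stdint.h>

  #define BUF_LEN   256            /* samples in the circular DMA buffer */
  #define REFILL_AT (BUF_LEN / 4)  /* top up once fewer than this remain */

  static uint16_t dac_buf[BUF_LEN];  /* DMA reads from here, paced by the timer */
  static uint32_t wr_idx;            /* next slot the software will fill        */

  extern uint32_t dma_read_index(void);    /* hypothetical: slot the DMA outputs next */
  extern uint16_t next_sine_sample(void);  /* hypothetical: DDS step or table lookup  */

  /* Called from the main loop (or a slow software timer). */
  void top_up_dac_buffer(void)
  {
      uint32_t filled = (wr_idx - dma_read_index()) & (BUF_LEN - 1);

      if (filled > REFILL_AT)        /* plenty queued, nothing to do yet */
          return;

      /* Fill to nearly full; one slot stays empty so that wr_idx equal to
         the read index only ever means "buffer empty". */
      while (filled < BUF_LEN - 1) {
          dac_buf[wr_idx] = next_sine_sample();
          wr_idx = (wr_idx + 1) & (BUF_LEN - 1);
          filled++;
      }
  }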

I had a system set up for a project with four independent sine waves of fixed frequency (1250 Hz, IIRC) via an SPI DAC. Samples were at about 80 kHz per channel (320 kHz total), with outputs of varying amplitude and phase. There were no interrupts, and no processor usage after setup - it all ran from DMAs, with jitter within a clock cycle or two. It just took a few processor instructions to change the phase or amplitude of a wave.

Reply to
David Brown

It means that you will sometimes see delays and wait states, but mostly have a throughput as though there were no wait states. In practice, it's probably a 2 or 4 way fully associative cache of perhaps 16 bytes each way, with read-ahead on instruction fetch.

It will depend on the application. You will get jitter that in the worst case would be easy to see on the scope if you can catch it - but the worst case will turn up very rarely. It's the kind of problem software developers know and loathe - it works fine when testing in the lab, but the customer sees a glitch during the demo. Folks used to hardware development rarely understand this kind of thing because their signals are usually much more repeatable - you turn it on and see the noise levels on your scope.

Reply to
David Brown

Code spikes are usually software errors, not hardware problems.

But on-board ADCs and DACs are never of great quality - you can't put a high precision, high stability analogue converter on the same die as a high speed microcontroller with hugely varying current needs. If you want quality analogue, put it external to the microcontroller.

Reply to
David Brown

I have found on some PICs that there is not even one cycle of jitter in the interrupt timing (with one source of interrupts from a timer, and never disabling them). Perhaps I am lucky and the compiler avoids using some multi-cycle instruction, or perhaps it is always like that.

You are assuming that interrupts get disabled. Nobody is forcing you to disable them. With careful thought, it can usually be avoided, especially if you have only one source of interrupts. You might need a flag that you set to let the ISR know that your main code is accessing some multi-word variable at the moment, but that isn't hard to do. Some hardware peripherals require magic sequences of instructions that must occur with no interruptions between them, and in that case you can ask the ISR to run those instructions for you, *after* it has finished its time-critical code on the next timer interrupt.
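A minimal sketch of that flag idea in C, assuming a single timer interrupt on a single-core MCU; the names (new_step, write_dac, next_sample) are illustrative, not taken from anyone's actual code:

  #include <stdint.h>

  extern void     write_dac(uint16_t value);                 /* hypothetical output call */
  extern uint16_t next_sample(uint32_t step, uint16_t amp);  /* hypothetical DDS helper  */

  static volatile uint32_t new_step;       /* parameter pair written by the main code */
  static volatile uint16_t new_amp;
  static volatile uint8_t  main_updating;  /* set while the pair is inconsistent */

  /* Main code: no interrupt disabling needed. */
  void set_parameters(uint32_t step, uint16_t amp)
  {
      main_updating = 1;     /* tell the ISR not to pick up a half-written pair */
      new_step = step;
      new_amp  = amp;
      main_updating = 0;
  }

  /* Timer ISR: time-critical output first, housekeeping afterwards. */
  void timer_isr(void)
  {
      static uint32_t step;
      static uint16_t amp;

      write_dac(next_sample(step, amp));   /* output using the last consistent parameters */

      if (!main_updating) {                /* only copy the pair when it is consistent */
          step = new_step;
          amp  = new_amp;
      }
  }

The volatile qualifiers stop the compiler reordering those accesses; on a single-core part with one interrupt source, that is generally enough for this pattern.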

That's nice if you have hardware available that can do exactly what you need to be done regularly. Sometimes what you need it to do isn't what the hardware is able to do by itself. One option would be an FPGA, or a more fancy micro, and another option is to do it in software, but it does take a bit more thought.

Reply to
Chris Jones

No, I think you are simply mistaken. On all the PICs I have used (which is a few types, but admittedly long ago), most instructions take one instruction cycle (4 cpu clock cycles) - but there are some that take 2 or perhaps more. Typically call and return, especially "far" versions (for the PICs with paged memory), take a little longer. There is very little jitter, certainly, but not quite zero.

Certainly it is not always necessary to disable interrupts - but it is common in a lot of code to have interrupts disabled briefly. We have not seen the OP's code - we can only guess about what is often done, not give an exact analysis.

If swapping your existing microcontroller for an FPGA is an option, then swapping to a microcontroller with DMA (or an XMOS, or an AVR X-Mega with its "event" system) is going to be much easier and much cheaper.

You can also just use an external DAC (solving the analogue noise problem) with a common update trigger pin, and connect that to a timer output from the microcontroller. Have your interrupt tied to that timer - your interrupt routine puts the next sample into the DAC, ready to take effect at the actual timer tick with zero jitter (beyond that of the cpu clock source, crystal, PLL, etc.). Simple, cheap, and reliable.
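As a sketch of how little the interrupt routine then has to do (the SPI and DDS calls below are hypothetical stand-ins, since the actual parts have not been chosen):

  #include <stdint.h>

  extern void     spi_send_to_dac(uint16_t code);  /* shifts the value into the DAC's input register */
  extern uint16_t dds_next_sample(void);           /* phase accumulator + sine table lookup          */

  /* Timer ISR.  The same timer's output pin drives the DAC's update/latch input,
     so the value written here is transferred to the analogue output at the *next*
     hardware timer edge - the ISR's own jitter never reaches the pin. */
  void dac_preload_isr(void)
  {
      spi_send_to_dac(dds_next_sample());
  }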

If you can't make decent hardware and pick appropriate components, then certainly careful software can help make things less bad. But just as it is better to avoid generating noise than to have clever ways of filtering it out, it is better to avoid jitter problems than to try to eliminate them afterwards.

Reply to
David Brown

I'll check this again, when I have time. It will either be no cycles of jitter, or it will be sometimes at least one cycle of jitter. It should not be half a cycle, pi-and-a-seventh cycles or something weird like that. Therefore I should be able to see any jitter, if I look at a pin that goes high at the start of each interrupt, sampled with enough timing resolution that I am sure that I can see one cycle of jitter. Of course I need to capture enough cycles that I would see some interrupts during each type of instruction that could be running when the timer causes an interrupt.

FWIW, the datasheet of the PIC18F2XK20/4XK20 says: "For external interrupt events, such as the INT pins or the PORTB interrupt-on-change, the interrupt latency will be three to four instruction cycles. The exact latency is the same for one-cycle or two-cycle instructions."

Perhaps they pad out the time of one cycle instructions so that it is just as slow as interrupting a two-cycle instruction.

I'm not sure why they specifically mention that this applies to external interrupt sources and they don't specify the latency for internal timer interrupts. I wonder if there are chains of synchroniser D flipflops on the input pins (to reduce the probability of metastability) that add more latency for the external interrupt sources, and that this is why they mentioned them specifically.

Reply to
Chris Jones

Or perhaps they are just contradicting themselves. The interrupt (at least in these simple microcontrollers) is much like a "call interruptRoutine" instruction (plus interrupt disable) injected into the instruction stream. This will take two cycles, like any other call on those chips. So the current instruction takes one or two cycles, then you get two cycles for the interrupt call, giving a latency of three or four instruction cycles - and therefore a jitter of one instruction cycle.

That sort of thing should already be hidden by the multiple internal clock cycles per instruction cycle (it is four cpu clocks per instruction cycle on earlier PICs - I am not sure about the PIC18. I think it may use different phases to have 2 cpu clocks per instruction clock).

Reply to
David Brown

There's no way you could bitbang software serial at 57600 baud out of a timer overflow ISR on a 16 MHz AVR, using the processor clock as the timing reference, with ~50 cycles of jitter - and yet work it does. And it works on ARM, too.

You just have to not write the code that executes within the ISR like a dumbass.

Reply to
bitrex

You certainly /can/ write interrupt based software with low jitter. But I don't know what kind of software the OP has written (I am guessing from this thread that it is okay quality, but missing a good deal of the details needed to make low-jitter outputs).

I did once make a 57600 baud software UART on an AVR - with a 9.2 MHz crystal, IIRC. Interrupts were never disabled elsewhere, and the software UART timer was the only interrupt. I used dedicated registers for the interrupt routine, and got full duplex communication with 4x oversampling while still having enough processing time left over for other jobs (including handling a hardware UART).

So yes, no doubt at all that it is /possible/. It gets harder on faster processors, however - there is a bigger variation in the cpu's latency, there is variable latency in the flash (unless you are careful to put the interrupt vectors and the interrupt code in ram), and you are usually doing more than just a dedicated timer interrupt. On a system like the OP describes, interrupt jitter /is/ a problem, even on a RISC processor designed for fast and predictable interrupt response (an ARM Cortex-M). It is not an unsolvable problem, but it takes more than just not being a "dumbass".

And it is vastly easier, more reliable, and will give better results, if the OP uses Rick's suggestion with DMA - or at least uses double-buffering on the DAC and ties it to a hardware timer for updates.

Reply to
David Brown

In another product, I have timed the IRQs of this uP with an oscilloscope. At 100 KHz interrupt rate, it looks perfectly periodic. Jitter is indeed tens of ns.

A DDS algorithm has no branches; it runs exactly the same code every time. We could load the DAC first thing in the IRQ, then compute the next point, but there's no need to do that.
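For anyone following along, a branch-free DDS step is only a couple of lines of C; the 1024-entry table and 32-bit accumulator here are just one reasonable choice, not necessarily what is in the product:

  #include <stdint.h>

  static uint16_t sine_table[1024];  /* filled once at startup with one cycle of sine, scaled to the DAC */
  static uint32_t phase;             /* phase accumulator, wraps modulo 2^32 */
  static uint32_t tuning;            /* f_out = tuning * f_update / 2^32     */

  uint16_t dds_step(void)
  {
      phase += tuning;                 /* unsigned overflow wraps - no branch       */
      return sine_table[phase >> 22];  /* top 10 bits of phase index the sine table */
  }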

There are no other interrupts.

A digital scope can be set to infinite persistence; the IRQs can be observed billions of times. Or our Agilent counter can measure min/max periods to picoseconds.

We've come a long way from 8088's with software DRAM refresh. These little ARM things rock.

--
John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

If you can use DMA, you should. It will load the DAC with deterministic timing.

AFAIR, interrupts on ARM Cortex-M have a latency of 5 to 12 cycles, depending on what the core is doing.

If it is not doing anything but spinning in a while(1) loop, the latency jitter is smaller.

My guess is that for a typical 50 MHz processor you will see jitter of 60 to 100 ns.

Cheers

Klaus

Reply to
Klaus Kragelund

Zero content, all whining. If the analysis is so simple, he should post it.

Maybe some people are still thinking about old, clunky CISC architectures with various numbers of slow clocks per instruction. [1]

A 120 MHz ARM, executing one instruction per 8 ns clock, with zero wait states, is different. Since my IRQ source is internal to the chip, everything is synchronous.

[1] but I could do a nice 400 Hz sine wave on a 68K too.
--
John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

"One experiment is worth a thousand expert opinions."

Wernher von Braun

--
John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

We could (I think) have a hardware timer clock the DAC, and (I think) the DAC has an available FIFO. I'll ask my ARM guy. In that case, we might not need interrupts at all, if the mainline loop can keep the FIFO full. If not, an occasional interrupt could do the DDS math on enough points to top off the FIFO. But we don't need to do any of that to generate a nice 400 Hz sine. Looks like the 10-bit DAC does more damage to the sine wave than IRQ timing.

We are simulating aircraft power to synchros and resolvers. Aircraft power is not pristine. The processing of the signals from the synchros is ratiometric, and gets arc-minute or arc-second accuracy and insane tracking speeds using crummy AC excitation. Maybe I should add a noisy-power option to the code. I could charge more for that.

Good LVDT signal processing is ratiometric too, but in the low KHz range. My Spice sim looks pretty good at 5 KHz. If I wanted to improve it up there, I'd add some poles to my lowpass filter.

--
John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin
