getting the AVR timer/counter to generate a 24kHz clock signal

Hi there,

I'm using an ATmega323 running on the 1MHz internal RC oscillator, and I want to generate a 24kHz clock signal using a timer/counter interrupt. My problem is in getting a 24kHz signal - according to my calculations:

1MHz clock -> 1 cycle every 1uS
24kHz clock -> 1 cycle around every 40uS

So I set the counter to count 1 every clock cycle (and clear the counter on compare match) using

TCCR0 = _BV(CTC0) | _BV(CS00);

then initialise the counter to 0;

TCNT0 = 0;

and set the compare register to 40;

OCR0 = 0x28;

Then I enable interrupts and inside the interrupt handler, I just toggle my PORTA pins.

There's no problem getting a signal, but it's just not at the frequency that I expect - it's about 8.7kHz instead of the expected 24kHz.

I'm wondering if I've totally missed something about how the timer/counter works.. does anyone know what I'm doing wrong?

Cheers, amanda c

Reply to
amanda c

"amanda c" schreef in bericht news: snipped-for-privacy@posting.google.com...

If you want 24kHz by toggling a pin, you need to toggle it at 48kHz. So, if you measure 8.7kHz the interrupt is running at 17.4kHz. I think you need a compare value of 0x14.

That should give 17.4kHz, still not the expected 24kHz, but it's a start ;)
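
For reference, a quick sketch of the arithmetic (numbers are mine, not from the post, and interrupt overhead is ignored; the later replies cover that):

    /* CTC period is OCR0 + 1 timer ticks, and one toggle per interrupt
       means the pin frequency is half the interrupt frequency. */
    #define F_CPU 1000000UL   /* nominal 1MHz internal RC */

    unsigned long isr_freq(unsigned char ocr0) { return F_CPU / (ocr0 + 1UL); }
    unsigned long pin_freq(unsigned char ocr0) { return isr_freq(ocr0) / 2UL; }

    /* OCR0 = 40 (0x28): ISR at ~24.4kHz -> pin at ~12.2kHz before overhead */
    /* OCR0 = 20 (0x14): ISR at ~47.6kHz -> pin at ~23.8kHz before overhead */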

How reliable is that 1MHz internal RC clock?

--
Thanks, Frank.
(remove 'x' and 'invalid' when replying by email)
Reply to
Frank Bemelman

Hi, I don't see how you get 24kHz from a 40us delay time. This is the wrong way to generate this kind of signal as it uses too much processor time. There are lots of micros with PWM generators.

Reply to
CBarn24050

Which compiler are you using and does it access the timer registers in the correct order? You must read those 16-bit registers in the order low byte then high and write them as high byte then low. It's possible that what you think you are writing, as what appears in C to be a single atomic operation, is not what is really being written to the registers. In addition, there's only one temp register that's used to hold the high byte value and there is a potential for this value to be changed if an interrupt does a 16-bit op in the middle of things.
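
As an aside, here is a minimal sketch of an interrupt-safe 16-bit timer read (this concerns the 16-bit Timer1 registers; the 8-bit Timer0 registers used above don't go through the shared TEMP byte; the function name is mine):

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <inttypes.h>

    /* avr-gcc emits the low-then-high read for a plain 16-bit access, but the
       shared TEMP byte is only safe if no interrupt does a 16-bit timer access
       in between, so briefly disable interrupts around it. */
    uint16_t read_tcnt1(void)
    {
        uint8_t  sreg = SREG;   /* remember the global interrupt flag */
        uint16_t val;

        cli();
        val = TCNT1;            /* reads TCNT1L, then TCNT1H via TEMP */
        SREG = sreg;            /* restore the previous interrupt state */
        return val;
    }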

You'll need the interrupt to be called every half cycle, not once per period [err, not that *I've* ever missed that point, no siree, not me, well maybe... ;-) ], so you need it to fire every 20.8 usec or about once every 20 clock cycles.

Note that the 323 needs a minimum of 4 clock cycles to service the interrupt and then another 4 on the back side when it returns. If your service routine is in C rather than assembly, it's possible that you're pushing/popping quite a few additional registers, particularly if you use an auto variable (on the stack) in the service routine. More cycles.

The bit-toggle code takes up a couple of cycles, too, since there's no built-in "xor a port bit" instruction.

So, while it may seem that a 1MHz processor should easily be able to bit-bang at a few kHz, it ain't necessarily so.

--
Rich Webb   Norfolk, VA
Reply to
Rich Webb

If I recall correctly, the clock is /4 of the master clock, so you'd have to set OCR0 to 21 or so. Remember that you need to set the interrupt to happen at TWICE the rate of your output (20.8us) so you can toggle in each direction, so you're really going for (1000000/48000). Also note that the internal RC oscillator is not very accurate, so you're going to be off no matter what you do.

Remember that time will pass between the time that your comparator fires, and the time that the interrupt is serviced, so you will lose a few cycles per interrupt (that probably accounts for the difference between 12kHz and 8.7kHz, on top of the fact that the interrupt needs to fire at twice the rate it is). It's best if you can do this:
  • Stop the timer
  • Take the current TCNT0 value and subtract 21, place it back in TCNT0
  • Reenable the timer

That should make things more accurate. Give that a shot.
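
A rough sketch of that reload idea (pin, vector, and value choices are mine and untested; it assumes a free-running Timer0 with the overflow interrupt rather than CTC):

    #include <avr/io.h>
    #include <avr/signal.h>

    #define HALF_PERIOD 21                  /* ~1MHz / 48kHz, in timer ticks */

    SIGNAL(SIG_OVERFLOW0)
    {
        TCCR0 = 0;                          /* stop the timer                   */
        TCNT0 = TCNT0 - HALF_PERIOD;        /* 8-bit wrap: the next overflow
                                               lands HALF_PERIOD ticks after the
                                               previous one, so the latency spent
                                               getting here is carried over      */
        TCCR0 = _BV(CS00);                  /* restart, clk/1                   */

        PORTA ^= _BV(PA0);                  /* toggle the clock pin             */
    }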

-->Neil

Reply to
Neil Bradley

It's more like 42uS.

Like others mentioned before, you need to toggle twice per period. That makes 21.

The counter counts from 0 to 40. That is 41 cycles.

However, you need 21. So set OCR to 20.

Let's see how long it takes to toggle the pin in software.

The toggle itself is "in", "ldi", "eor", "out" -> 4 cycles
Interrupt call + "reti" -> 8 cycles
Two registers and SREG are saved/restored -> 14 cycles

That is already 26 cycles. Most likely your compiler will save more than the (required) two registers to the stack. So you don't have any chance of getting near 24kHz this way.

From the numbers you gave, I would guess that your whole ISR takes something like 60 cycles ;-(

I haven't found a datasheet for the mega323, but you should check whether this chip supports toggle on compare match for timer0. If not, you have to use another timer (at least timer1 should work).
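
For the timer1 route, a minimal sketch (bit names as in the mega323 datasheet, values are mine and untested) that toggles the OC1A pin in hardware with no ISR at all:

    #include <avr/io.h>

    void clock_init(void)
    {
        DDRD  |= _BV(PD5);                  /* OC1A pin (PD5 on this part) as output */
        OCR1A  = 20;                        /* 1MHz / (2 * (20 + 1)) ~ 23.8kHz       */
        TCCR1A = _BV(COM1A0);               /* toggle OC1A on compare match          */
        TCCR1B = _BV(CTC1) | _BV(CS10);     /* clear counter on OCR1A match, clk/1   */
    }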

Also read about the "OSCCAL" I/O-register (calibrating the internal oscillator)

Good luck Jan-Hinnerk

Reply to
Jan-Hinnerk Reichert

Thank you to you all for your advice!

What I'm actually trying to do is to clock data in from an analogue-to-digital converter (I'm not using the ADCs on the AVR chip), and I need the AVR to provide a clock signal of around 24kHz to the ADC.

I'm coding in C and compiling with avr-gcc. I'm not too sure how accurate the 1MHz internal oscillator is, but I have calibrated it using OSCCAL.

Somehow, I am getting a 24kHz (ish) signal when I set the compare register to 1. This is when my interrupt handler does nothing but toggle the pin. However, I need my interrupt handler to do some other things (like read a clocked-in bit of data, keep track of some counters etc.), and when I add in this code, I don't get any clock signal at all. Even just one extra line.

Does anyone know why this might happen? I would have thought that if it was due to the extra clock cycles needed to execute that code, it would just slow down the clock signal, not make it disappear altogether.

Cheers, amanda c

Reply to
amanda c

Amanda,

May I suggest... become familiar with

formatting link
The name says it all. It's nicely done - enough so that I'd think it was subsidized by Atmel.

Check the datasheet on the Mega323. Under the Timers/Counters section you'll find a lot of info on PWM mode. PWM gets you out of the business of bit-banging the data strobe, and leaves your cycles to run real code.

In PWM mode, there is an OCx pin that can be automatically toggled when the counter matches the compare value. Use it for your clock strobe.

For your purposes, either Timer0 (8-bit) or Timer1 (16-bit) should work fine. Be sure to disable the prescaler so the counter feeds directly from the AVR system clock.

Some ideas:

1) If you need a consistent clock strobe length, do as suggested and set the OCR value to 1/2 cycle and set the OCx bit to toggle state at compare match. This will give a consistent 50% on/off cycle.

Run your code as the Compare Match ISR. Every other call, just reti immediately (you only process on high or low strobe states, not both). This will burn more program cycles than the method below.

2) If the strobe duration isn't too picky, set OCR to the full cycle length and config the PWM to toggle the OCx pin only high on compare.

Your ISR fires half as often, only when it's needed, does its business, then "manually" toggles the strobe low before reti. Strobe length will not be 50% duty cycle, but it's a much more efficient use of program cycles.

3) You mention counting in your ISR. To save code cycles, feed the data strobe output to a counter input pin. Use that hardware counter to do the counting - it'll increment one for each clock cycle, and you just read it when you need to (or set a compare threshold and have it trigger a separate event ISR).
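
On idea 3, a minimal sketch (assumed wiring, not from the post: the generated clock strobe is also fed to the T1 input pin, PB1, while another timer or the ISR produces the strobe itself):

    #include <avr/io.h>

    void strobe_counter_init(void)
    {
        TCNT1  = 0;
        TCCR1A = 0;
        TCCR1B = _BV(CS12) | _BV(CS11) | _BV(CS10);  /* clock Timer1 from T1,
                                                        rising edge            */
    }

    /* Read TCNT1 whenever the count is needed; as noted earlier in the thread,
       disable interrupts around the 16-bit access. */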

Now, on why your ISR stops working... you haven't posted any code snippets.

I'd expect that if your ISR ran too long, it would just miss an interrupt, which should fire upon the ISR's exit. In the case of #1 above, the ISR would fire progressively later in the strobe's pulse, eventually missing a data read cycle. In the case of #2, the on/off duty cycle would get progressively imbalanced until one became 0 and the other doubled, then it'd recover. Moral: make sure your ISR will fully execute in less than one interrupt period.

Are you doing a ret instead of a reti? (i.e., interrupts not getting re-enabled upon completion?)

Have fun!

Reply to
Richard

As Rich Webb pointed out, there is not much time to do anything besides servicing the interrupt. Another way may be to refrain from using the timer at all:

I suppose the program is an endless loop like: toggle clock, read something from the ADC, process data, output something, etc.

If you insert a toggle instruction, say, every 20 instructions, you would generate the clock without the overhead of interrupt and timer servicing. However, programming will be awkward, as you have to count the clocks for every instruction and write if's in a way that both branches take the same time. I suppose a good macro-assembler can take care of most of it. I used this technique to generate a 57 and a 19 kHz clock for an RDS encoder.
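
A very rough sketch of that shape in C (names are mine; the real spacing has to be verified against the compiler's assembler output and padded with NOPs, so treat this as an outline only):

    #include <avr/io.h>

    #define TOGGLE_CLOCK()  (PORTB ^= _BV(PB7))   /* a few cycles: in/ldi/eor/out */

    int main(void)
    {
        DDRB |= _BV(PB7);
        for (;;) {
            TOGGLE_CLOCK();
            /* ... roughly 20 cycles of work: shift in the data bit ...       */
            TOGGLE_CLOCK();
            /* ... roughly 20 more cycles: counters, bookkeeping, padding ... */
        }
    }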

Wim

Reply to
Wim Ton

Do you use "signal" or "interrupt" keyword for your ISR. With "interrupt" interrupts will be reenabled at the start of your ISR. This is

- causing a stack overflow if your ISR is too long (strange effects)

- wasting a cycle ;-)
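
A minimal sketch of the difference, using the old avr-libc macros discussed in this thread:

    #include <avr/io.h>
    #include <avr/signal.h>
    #include <avr/interrupt.h>

    /* SIGNAL(): the global interrupt flag stays cleared until the reti, so the
       handler cannot pre-empt itself. */
    SIGNAL(SIG_OUTPUT_COMPARE0)
    {
        PORTB ^= _BV(PB7);
    }

    /* INTERRUPT(SIG_OUTPUT_COMPARE0) would instead re-enable interrupts on
       entry; if the handler takes longer than one timer period it keeps
       interrupting itself and the stack grows until something breaks. */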

Have you turned optimisation on ("-O2")?

Have you studied the assembler output generated by "-S"?

Why do you want to use the internal oscillator?

Is coding in assembler an option?

You should also give a broad picture of your whole application.

/Jan-Hinnerk

Reply to
Jan-Hinnerk Reichert

Besides that, there are programmable clock generators; a CPLD would do that too.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

Which ADC? It sounds like "three wire interface" (aka Microwire (tm) somebody). If so, read up on the SPI interface in the mega323. You can set that up to get a byte and then interrupt. It takes care of the bit counting and shifts for you.

It's actually reasonably accurate. On the other hand, crystals and canned oscillators are relatively cheap, more accurate, and fairly easy to hook up. Why not just run at a higher speed?

If you re-enable interrupts while you're inside an interrupt handler and, before you do a return-from-interrupt, you get hit with that same interrupt again then you're in a death spiral. The code never gets a chance to get as far as a return statement, it just keeps being called.

The processor automatically resets the I-bit when it services an interrupt and sets it again as a side effect of the RETI instruction. This prevents the "death spiral" effect although the interrupt can get called again after executing the next instruction in the main routine.

Note that the I-bit may be explicitly set, with the SEI instruction, inside of a service routine to permit nested interrupts. Presumably, that's what your code is doing; check the asm dump.

In the most general sense (dons flame-proof skivvies) nested interrupts are "better" and not permitting nested interrupts (the default) is "safer." However, using nested interrupts REQUIRES that all possible combinations be checked to ensure that you don't get a stack blowout or enter a death spiral or a lock/race condition.

--
Rich Webb   Norfolk, VA
Reply to
Rich Webb

Hi, it's a pity you didn't properly explain your problem in the first instance. Most micros have hardware to implement serial interfaces; that's the only practical way to do it.

Reply to
CBarn24050

Some more background on my situation:

My ADC (the ADS8320) is wired to the SPI pins of the AVR (on a PCB, so I have to generate the clock signal on the SCK/PB7 line). The reason I'm not using the AVR's SPI interface is because the SPI clock signal generated by the AVR looks like this:

    _   _   _   _   _   _   _   _   _
 __| |_| |_| |_| |_| |_| |_| |_| |_| |_

This is how the ADC works:

- A falling ~CS initiates the conversion and data transfer

- the first 5 clock cycles are used to sample the input signal

- after the fifth falling DCLOCK edge, DOUT is enabled and outputs low for one cycle

- for the next 16 DCLOCK periods, DOUT outputs the conversion result (MSB first)

- subsequent clocks repeat the output data in LSB format (which I'll ignore, by pulling ~CS high)

A minimum of 22 clock cycles is needed for a 16-bit conversion.

so I'll have problems sampling data if I use the SPI clock signal (I think? I couldn't see a way around the problem, which is why I'm trying to do my own serial protocol now).

Okay, here's my code:

//
// Serial protocol for getting data from the ADC (incomplete atm)
// Generate a 24kHz clock using the timer/counter interrupt
// amandac
//

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/signal.h>

#define CHECKBIT(ADDRESS,BIT) (ADDRESS & (1 << BIT))

Reply to
amanda c

What's the problem with this? I had a quick look at the datasheet and it seems that the clock does not need to be stable. [...]

IMHO, you can just do 3 SPI-reads and do a little bit-shifting afterwards.

An alternative is to do the SPI transfer in software in the main program.

What is best depends very much on your application as a whole. So, please tell us what else your uC must do.

Nevertheless, here are some comments on your code.

Try SIGNAL instead of INTERRUPT

Way too long, and you haven't started reading, yet.

BTW: Use indentation to increase readability:

SIGNAL(SIG_OUTPUT_COMPARE0)
{
    if (CHECKBIT(PORTB,PINB7)) {
        if (cycles == CONV_END) {
            SETBIT(PORTB,PINB4);
        }
        CLEARBIT(PORTB,PINB7);
    } else {
        cycles++;
        if (cycles == POWDWN_END) {
            cycles = 0;
            CLEARBIT(PORTB,PINB4);
        }
        SETBIT(PORTB,PINB7);
    }
}

-----------

Try running the interrupt at half the rate and do

SIGNAL(SIG_OUTPUT_COMPARE0)
{
    CLEARBIT(PORTB,PINB7);
    SETBIT(PORTB,PINB7);
    // Do read here
    ...
}

The stack grows down, so you only have a one-byte stack. Everything else is pushed into the I/O registers ;-(

Stack pointer is initialised by gcc anyway. Never do it yourself, since you don't know where variables are allocated.

Not safe, an interrupt could occur between these instructions.

Reply to
Jan-Hinnerk Reichert

It's fine now!

Like several of you mentioned, it was the interrupt handler getting interrupted. Changing to SIGNAL made it all better.

Thanks to all of you for your patience and advice, it is really appreciated :)

Cheers, amanda c

Reply to
amanda c

So you mean there shouldn't be a problem, since data is clocked out on the falling edge and it doesn't matter whether this edge occurs regularly or not?

as in bit-shifting to get the 16-bit value out of the 3 8-bit reads?

The AVR is responsible for getting the data from the ADC and transmitting this out over the USART. That's all it needs to do at the moment, but in the future, it might want to do some digital filtering of the signal values it is getting from the ADC.

Thanks, that stopped the problem of the disappearing clock signal. I get happy too easily (referring to my earlier post). The frequency of the clock signal that I get is around 11.7kHz.

What do you mean by half rate?

Yep, that was silly of me. I saw the SP initialisation in the assembly when I was looking through that.

Cheers,

Reply to
amanda c

Yes. You can also try CPOL=CPHA=1. That way you will have the results one cycle earlier and MOSI is high in inactive state. However, you have to do 24 cycles anyway ;-) BTW: DORD must be 0.
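
A small sketch of that SPI setup (master mode, CPOL=CPHA=1, DORD=0; the clock divider and pin directions are my choices, not from the post):

    #include <avr/io.h>

    void spi_init(void)
    {
        DDRB |= _BV(PB4) | _BV(PB5) | _BV(PB7);   /* /SS, MOSI, SCK as outputs;
                                                     /SS must not float low in
                                                     master mode               */
        SPCR  = _BV(SPE) | _BV(MSTR)              /* enable SPI, master        */
              | _BV(CPOL) | _BV(CPHA)             /* mode 3: SCK idles high    */
              | _BV(SPR0);                        /* fck/16 -> 62.5kHz at 1MHz */
    }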

Exactly. Shift the received bytes, do some masking and put them together:

    sample = ((int) byte1 << 14) | ((int) byte2 << 6) | (byte3 >> 2);
    sample &= 0xffff;   // only required if int is larger than 16 bits

The typecast from uchar to int (or uint) before shifting is essential.

This source may produce slow code. You may have to try different approaches and take a look at the code produced.

You may have to adjust the shift values. You could also try to autodetect the start (if your MISO pin has a pull-up).
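
Putting the pieces together, a hedged sketch (function names, the /CS pin, and the exact shift counts are assumptions based on the framing described earlier in the thread, not tested; it assumes the SPI has been set up as above):

    #include <avr/io.h>
    #include <inttypes.h>

    static uint8_t spi_xfer(uint8_t out)
    {
        SPDR = out;                        /* start an 8-clock transfer        */
        while (!(SPSR & _BV(SPIF)))
            ;                              /* wait for it to finish            */
        return SPDR;
    }

    uint16_t ads8320_read(void)
    {
        uint8_t  b1, b2, b3;
        uint16_t sample;

        PORTB &= ~_BV(PB4);                /* /CS low starts the conversion    */
        b1 = spi_xfer(0xff);               /* clocks 1-8: sampling, null bit,
                                              two MSBs                         */
        b2 = spi_xfer(0xff);               /* clocks 9-16                      */
        b3 = spi_xfer(0xff);               /* clocks 17-24                     */
        PORTB |= _BV(PB4);                 /* /CS high again                   */

        sample  = ((uint16_t)(b1 & 0x03) << 14)
                | ((uint16_t)b2 << 6)
                | ((uint16_t)(b3 >> 2));
        return sample;
    }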

At what sample rate?

Just 24kHz not 48.

/Jan-Hinnerk

Reply to
Jan-Hinnerk Reichert
