Why should I (not) use an internal oscillator for 8-bit micros

This discussion is confusing the general term 'synchronized' with the comms term 'synchronous'. They are not interchangeable in this context.

Synchronous comms is where the data is clocked using a signal transmitted in parallel to the data stream; the receiver uses this signal to sample the incoming bits. Start & Stop bits are not used, and it often uses a packet format with leading "sync" bytes and a trailing checksum instead of parity bits. Receiver clock speed is irrelevant, so long as it can sample once per cycle of the incoming clock. (TX and RX baud rates are synchronized via the clock in the data stream, so the rate can drift without harm.)

Asynchronous comms uses no inline clock, and depends on the TX and RX operating at nearly the same baud rate, i.e., using the same I/O sampling frequency. Bytes are "clocked" individually, and the RX detects each byte via the Start bit that prefixes it. (TX and RX baud rates are set independently of the data stream, and they'd better be close.)

With async comms, a bad reference clock (MCU oscillator) causes sampling to miss entire pulses (or count some twice, depending on which end is running fast). IIRC, TX and RX baud rates have to be within 5-6% of each other or the last pulse in a byte starts getting sampled wrong (regardless of the baud rate).
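
To see where that 5-6% figure comes from, here is a small C sketch (my own illustration, not from any UART datasheet): an ideal receiver samples bit n at (n + 0.5) of *its* bit times after the start edge, so the drift at the stop bit of an 8N1 frame is 9.5 times the clock mismatch, and the frame fails once that drift reaches half a bit.

/* Sketch: how far the receiver's sample point drifts from the bit
   centre for a given TX/RX clock mismatch, for an 8N1 frame whose
   stop bit is sampled 9.5 receiver bit times after the start edge. */
#include <stdio.h>

int main(void)
{
    double mismatch[] = { 0.01, 0.02, 0.05, 0.06 };  /* TX/RX rate error */
    for (int i = 0; i < 4; i++) {
        double drift_bits = 9.5 * mismatch[i];  /* drift in bit times */
        printf("%4.1f%% mismatch -> off by %4.1f%% of a bit at the stop bit%s\n",
               mismatch[i] * 100.0, drift_bits * 100.0,
               drift_bits >= 0.5 ? "  (FAILS: past the bit edge)" : "");
    }
    return 0;
}

At 5% the sample point is 47.5% of a bit out (just barely workable on a clean line); at 6% it is 57% out and the stop bit is gone, which matches the 5-6% rule of thumb above.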

Reply to
Richard

Well, of course we all have slightly different definitions of what words mean. One person's "sloppy timing" is another's "tight timing". So, to be a bit more specific:

+/-1% of 2400 bps   = 2376 - 2424 bps
+/-1% of 9600 bps   = 9504 - 9696 bps
+/-1% of 115200 bps = 114048 - 116352 bps

So I'd say that 1% is "pretty damned close" in this case. ;-)
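
For anyone who'd rather compute those windows than trust my arithmetic, a trivial C snippet (illustrative only):

/* Sketch: the +/-1% windows quoted above, computed rather than typed. */
#include <stdio.h>

int main(void)
{
    long rates[] = { 2400, 9600, 115200 };
    for (int i = 0; i < 3; i++)
        printf("+/-1%% of %6ld bps = %.0f - %.0f bps\n",
               rates[i], rates[i] * 0.99, rates[i] * 1.01);
    return 0;
}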

It's also fair to say that "we've synchronized our watches". Of course it's not synchronized down to the millisecond. That's all I was saying. For that brief moment in time when a byte is transferred, both ends are operating independently but are synchronized, not locked together.

I'm having a tough time explaining it in a fashion that will be globally understandable, but here goes. The short answer is that the errors are cumulative.

Both devices start their respective internal baud clocking at the rising or falling edge of the start bit. That is the point where the "synchronization" (hehehehe) occurs. Most UARTs run off some sort of timer/counter/divisor arrangement, as does the 16550 or even the 8051. The standard clock feeding a PC's 16550 (or equivalent thereof these days) is a 1.8432 MHz part. There is a /16 prescaler in the 16550, followed by a programmable baud rate divisor beyond that. So, 1.8432 MHz / 16 = 115200 Hz internal master clock (that helps minimize jitter).

Let's divide it down even further - 2400 bps. That'd be a divisor of 48 (115200/48 = 2400). That assumes a perfectly divisible clock. For the sake of argument, let's swap in a 2 MHz crystal (roughly 8.5% off from the 1.8432 MHz clock). Say the sender is using the 2 MHz crystal and the receiver the 1.8432 MHz crystal - both with a divisor of 1, each aiming for 115200 bps.

The 2 MHz-fed 16550 will yield 125000 bps and the 1.8432 MHz-fed 16550 will yield 115200 bps. At that rate, there are 1.08506944 cycles of the sender for every 1 cycle of the receiver. Multiply that out over 8 bits and you're at 8.68055552 cycles on the sender for 8 on the receiver - roughly 8.5% out per bit, but still within the realm of possibility in terms of sampling, since the sender hasn't blasted more than a whole cycle past the receiver.

Okay, now let's take that down to 2400 bps. The 2 MHz-fed 16550 yields 2604 bps; the 1.8432 MHz receiver yields exactly 2400 bps. Remember we have a divisor of 48 to get 2400 bps, so we need to multiply everything by 48: 1.08506944 * 48 = 52.0833 Hz vs 48 Hz - you're off by more than 4 Hz, so anything past the first 4 bits of data is guaranteed to be sampled at the wrong time, ending up as incorrect data.
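
If you want to play with those numbers yourself, here is a quick C sketch of the same arithmetic (the divisor list is my own choice; rates follow the fclk/16/divisor formula above). Worth noticing: the *percentage* difference between the two clocks comes out identical at every divisor.

/* Sketch: actual baud rates from a 16550-style /16 prescaler plus
   programmable divisor, for the nominal 1.8432 MHz clock versus a
   (wrong) 2 MHz clock, at several divisors. */
#include <stdio.h>

static double baud(double fclk_hz, int divisor)
{
    return fclk_hz / 16.0 / divisor;   /* /16 prescaler, then divisor */
}

int main(void)
{
    int divisors[] = { 1, 12, 48 };    /* 115200, 9600, 2400 nominal */
    for (int i = 0; i < 3; i++) {
        double good = baud(1843200.0, divisors[i]);
        double bad  = baud(2000000.0, divisors[i]);
        printf("div %2d: %9.1f vs %9.1f bps  (%.2f%% fast)\n",
               divisors[i], good, bad, (bad / good - 1.0) * 100.0);
    }
    return 0;
}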

Okay, my head hurts now. ;-) I wound up running against this with a really poor crystal on a prototype 8051 weather station controller. I had already used the serial port for PC communications, so I had to do an interrupt-driven/bit-banged UART using INT0 (if you want the code I'll post it). I was interfacing with a windspeed/direction PIC that had a 20% tolerance on its internal clock. It communicated @ 1200 baud, so my sampling margin was smaller than it would have been with a reasonable crystal.
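
Neil's actual code isn't in the archive, but purely as a hedged sketch of the usual shape of such a receiver (assuming SDCC on a classic 8051 with an 11.0592 MHz crystal; all names and reload values here are my illustrative assumptions, not his), it might look like this:

/* Hedged sketch only - NOT Neil's posted code. One plausible shape for
 * an interrupt-driven, bit-banged UART receiver on an 8051: the RX line
 * sits on P3.2 so its falling start-bit edge fires INT0, then Timer 0
 * paces the sampling. Assumes SDCC; reload values assume 1200 baud at
 * 11.0592 MHz (768 machine cycles per bit) - all hypothetical. */
#include <8051.h>

#define ONE_BIT   768u                  /* machine cycles per bit          */
#define RELOAD(n) (65536u - (n))        /* Timer 0 counts up to overflow   */

static volatile unsigned char rx_byte;
static volatile __bit rx_ready;
static unsigned char bit_cnt, shift;

void uart_rx_init(void)
{
    TMOD |= 0x01;                       /* Timer 0: 16-bit mode            */
    IT0 = 1;                            /* INT0: edge-triggered            */
    EX0 = 1; ET0 = 1; EA = 1;           /* enable edge + timer interrupts  */
}

void int0_isr(void) __interrupt(0)      /* falling edge = start bit        */
{
    unsigned int first = RELOAD(ONE_BIT + ONE_BIT / 2);
    TH0 = first >> 8;                   /* first sample 1.5 bits from edge */
    TL0 = first & 0xFF;
    bit_cnt = 0;
    EX0 = 0;                            /* ignore data edges while busy    */
    TR0 = 1;
}

void timer0_isr(void) __interrupt(1)    /* fires at the centre of each bit */
{
    TH0 = RELOAD(ONE_BIT) >> 8;         /* re-arm for one more bit time    */
    TL0 = RELOAD(ONE_BIT) & 0xFF;
    if (bit_cnt < 8) {                  /* 8 data bits, LSB first          */
        shift >>= 1;
        if (P3_2) shift |= 0x80;        /* sample the RX line mid-bit      */
        bit_cnt++;
    } else {                            /* stop bit: byte complete         */
        TR0 = 0;
        rx_byte = shift;
        rx_ready = 1;
        IE0 = 0;                        /* discard edges seen mid-byte     */
        EX0 = 1;                        /* re-arm for the next start bit   */
    }
}

The key move is the same one a hardware UART makes: arm the first timer interrupt for 1.5 bit times after the start edge so every later sample lands mid-bit.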

-->Neil

Reply to
Neil Bradley

The phrase "when you are in a hole, stop digging" springs to mind...

In the context of serial communications, "synchronous" means there is a clock line (or an "embedded clock" in the data signal) going between the sender and receiver, while "asynchronous" means there is no directly shared clock, so each side uses its own time reference to clock the transfers. A "clock" signal does not have to be regular - for synchronous transfers, it can vary as much as you want. For asynchronous transfers, you normally have a regular clock - while you could theoretically have a varying one, it would be a challenge to implement reliably. But if the clocks on the two sides are not joined directly, they are not synchronous - it doesn't matter if their speeds match to 0.00001% accuracy.

For standard uart asynchronous communication, the limit for communication over a good line (noise-free, and nice sharp edges) is a 5% match between sender and receiver, so that by the 10th bit they are no more than 50% of a bit in difference. Normally, that means ensuring that each side is within 2.5% of the nominal baud rate. The absolute baud rate does not matter.

Reply to
David Brown

The sender and the receiver in uart communications are *never* synchronized. We are not talking about the general usage of a common word here (as in "let's synchronize our watches") - the term "synchronous" has a very specific meaning in the world of electronics, especially for communications. It means there is a shared clock. Two devices can't be "synchronized for a while" (unless you are switching the clock signal on and off, etc.). They can't be "closely synchronized" - they are either synchronized, or not.

When a uart receiver notices a start bit, it does not become synchronized to the sender - it synchronizes its state machine (using its single internal clock) to the start bit as sampled by its own clock. This is going to happen at roughly the same time as the sender transmits the start bit, but not exactly - it depends on transmission delays, over-sampling rates at the receiver, and so on. I.e., the sender and receiver are not synchronized.

This is a myth - a commonly believed myth, but a myth nonetheless. There are definitely factors that make serial communication harder at higher speeds, such as rounded edges, noise, cross-talk, etc., and closer clock matching might help marginally, but the main effect of the clock speed matching is a relative effect - a 5% error means a 50% *bit time* error after 10 bits, regardless of the length of a bit time.

Think about the sort of speeds async communication runs at. Suppose we say 5% is required for 9600 baud. Does that mean we can run 600 baud at 40% error? Or do we need 0.005% to run Profibus at 12 MBit? To say nothing of the 0.5 ppm accuracy crystals needed for gigabit ethernet...

It reminds me somewhat of when my father was once buying beer at an off-license. They were having a special offer of 5% discount for a crate of 12 bottles. On seeing this, my father said he would get two crates, and the shop assistant gave him 5% off for the first crate, then another 5% off for the second crate, giving him a 10% discount altogether. He was tempted to buy more, but didn't want to push his luck...

Reply to
David Brown

It's no wonder that there are so many framing errors on Async comms when some people don't understand the basic concepts of Async and Sync comms.

Sync transmits the clock either directly (on a clock line) or indirectly (embedded in the data) so that the receiver knows when to look at the data bits.

Async uses the "start" bit of each byte to tell the receiver to start timing to look at the bits of this one byte only.

The term Asynchronous here means that the sender can send data at any time without having to worry about whether or not the receiver is in sync.

How many designers try to send or receive at, say, 5% tolerance instead of aiming for as close to 0% as possible? It makes for interesting interfacing when one unit is 5% high and the other 5% low because they've been designed by "prima donnas" who know better than the rest of the world.

As for crystal versus baud rate tolerances - if the crystal is 1% low then the baud rate is 1% low. Simple as that - always allowing for the fact that you are using the "correct" frequency crystal to start with and allowing for any division errors in the baud rate generator/clock.
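
Those "division errors" deserve a number or two. Here is a hedged C sketch (the 12 MHz figure and the clk/16/divisor scheme are my illustrative assumptions, not from Alan's post) showing that even a perfect crystal misses some nominal rates once the divisor is rounded to an integer - and that the rounding error grows as the divisor gets small:

/* Sketch: baud error from integer divisor rounding alone, assuming a
   perfect 12 MHz clock and a 16550-style clk/16/divisor generator. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double fclk = 12000000.0;               /* hypothetical 12 MHz clock */
    long   rates[] = { 2400, 9600, 38400, 115200 };
    for (int i = 0; i < 4; i++) {
        double ideal  = fclk / 16.0 / rates[i];
        long   div    = lround(ideal);       /* nearest integer divisor  */
        double actual = fclk / 16.0 / div;
        printf("%6ld bps: div %4ld -> %9.1f bps (%+.2f%%)\n",
               rates[i], div, actual, (actual / rates[i] - 1.0) * 100.0);
    }
    return 0;
}

With these assumptions, 2400 bps comes out 0.16% off but 115200 bps is nearly 7% off - which is exactly why baud-rate-friendly crystals like 1.8432 or 11.0592 MHz exist.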

Alan

++++++++++++++++++++++++++++++++++++++++++
Jenal Communications
Manufacturers and Suppliers of HF Selcall
P O Box 1108, Morley, WA, 6943
Tel: +61 8 9370 5533   Fax: +61 8 9467 6146
++++++++++++++++++++++++++++++++++++++++++
Reply to
Alan

No, you're flatly wrong about that.

Um... well, originally the question was the tolerance of the *CRYSTAL* inside of microprocessors, not the "bit time" error:

"Overall the Philips LPC900 oscillators are specified with +/- 2.5% over temp range and voltage range"

But the error is *ADDITIVE* on a per-bit basis until the next start bit, though. I actually have working experience with this one. We had to swap crystals because lower baud rates wouldn't work between our embedded system and standard PCs - the divisors for the lower rates put us outside tolerable ranges. Yup, it got worse as the baud rate got lower. See my other response for a working example.

You're not taking into account that asynchronous communication *HAS* a byte synchronization method - a start bit!

-->Neil

Reply to
Neil Bradley

What difference does that make? If an oscillator is 2% out, then the bit time on a uart based on that oscillator is 2% out. That's how "relative error" works. Here's an example:

Suppose you have two microcontrollers, each with an oscillator (of any sort) running at a nominal 9.8304 MHz, and each with a uart running at a nominal 9600 baud (the divisor is therefore 1024, ignoring the typical x16 oversampling for now). Suppose micro1's oscillator is 2.5% too slow, while micro2's is 2.5% too fast. Then micro1 will run at 9.584640 MHz and generate a baud of 9360, while micro2 will run at 10.076160 MHz and a baud of 9840. You will note that 9360 is 2.5% slower than 9600, and 9840 is 2.5% faster.

Now, suppose micro1 sends out a standard 10-bit uart frame to micro2. Micro2 starts up its receive state machine on encountering the start bit, and aims to sample the first bit after what it believes is 1.5 bit times. In reality, micro2 samples at 0.1524 ms after the start bit, which is sooner than the middle of the first bit sent by micro1 (at 0.1603 ms). The absolute time difference here is 0.0078 ms, which is well within a bit time, so the receiver samples fine.

But by the stop bit, sampled at 9.5 bit times after the start bit's falling edge, things are different. Micro2 samples at 0.9654 ms, which is quite a bit out from when micro1 sends the middle of the stop bit, at 1.0150 ms. In fact, it is out by 47.5% of a nominal bit time (9.5 times the 5% error - my original "50%" had rounded up the half bit). Micro1's final transition from the last data bit to the stop bit occurs at 0.9615 ms - just before micro2 samples it. Assuming that the line is noiseless and has sharp transitions with equal delays on rising and falling edges, this is fine - the communication will work with up to, but not beyond, a 50% bit time error at the end of the frame.

Now, just for a laugh, go through that example again and replace "MHz" with "GHz", "baud" with "kbaud", and "ms" with "us". Lo and behold, the same calculations hold and show that 5% difference between the sender and receiver clocks is just on the edge of what can work, regardless of the absolute speeds. Now do you understand?
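
Here is a hedged C sketch of the same arithmetic (variable names are mine). It reproduces the 47.5% figure, and if you scale both baud rates by any factor, the millisecond column changes but the final column does not - the relative-error point in a nutshell:

/* Sketch: micro1 2.5% slow, micro2 2.5% fast, both nominally 9600 baud.
   Prints where the receiver samples each bit slot versus where the
   sender actually centres that bit. */
#include <stdio.h>

int main(void)
{
    const double nominal = 9600.0;
    double tx = nominal * 0.975;        /* micro1 (sender):   9360 baud */
    double rx = nominal * 1.025;        /* micro2 (receiver): 9840 baud */

    for (double n = 1.5; n <= 9.5; n += 1.0) {   /* sample slots        */
        double rx_t = n / rx * 1e3;     /* receiver's sample time, ms   */
        double tx_t = n / tx * 1e3;     /* sender's bit centre, ms      */
        printf("slot %3.1f: rx %.4f ms, tx centre %.4f ms, off %4.1f%% of a bit\n",
               n, rx_t, tx_t, (tx_t - rx_t) / 1e3 * nominal * 100.0);
    }
    return 0;
}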

Incidentally, there are no microprocessors (that I've ever heard of) with an internal crystal - internal oscillators come in various types (VCO, R-C, ring oscillators, etc.), but crystals are always external to the microprocessor.

Exactly - your 5% error adds up to a 50% (or 47.5%, to be exact) error over the ten bits transmitted. But this is completely independent of the bit time - it is a relative error.

Are you suggesting that your microcontroller's uart can divide its clock by a small number without problem, but fails to divide accurately by a larger number? I think you can be pretty confident that there is some other problem, such as incorrectly setting the divisor bits. I too have had to change crystals to get low baud rates, but that was merely because the uart in question (on an avr8515, IIRC) did not have enough bits in the baud rate divisor to reach down to 300 baud from an 8 MHz crystal.

Does that mean you think gigabit ethernet needs 0.5 ppm crystals and 600 baud modems can run with +/-40% tolerance crystals, or does it mean you agree with me (and the rest of the world - at least, the tiny part that cares :-) that the actual rate is irrelevant when discussing the percentage error?

Reply to
David Brown

Not quite - the term "asynchronous" here means "not synchronous" - i.e., the opposite of your correct definition of "synchronous". The receiver is *never* in sync with the sender in async communication, since it does not have a clock signal on which to synchronize.

The 5% tolerance is for the *total* error. Normally that means you need to be within 2.5% at each end, unless you happen to know for sure that one end has a much tighter tolerance. Of course, only a fool would aim for something that is only just within the theoretical limit of what is possible! A realistic rule of thumb is to aim for a 1% match - any crystal or ceramic oscillator will do, but an internal oscillator or an R-C oscillator would need trimming.

That is, of course, correct - I'm slightly stunned that there are people working in this field who apparently fail to grasp it. Hopefully, "apparently" is the operative word, and it is merely the wording of their posts that is ambiguous, rather than their understanding.

Reply to
David Brown

Perhaps I could have explained it better. But the point is that the Async receiver uses the leading edge of the start bit to trigger its own internal timing mechanism, which should produce sampling at the correct times for the incoming data. It is not, as you say, in sync with the incoming data, as it doesn't have a clock to sync to. However, the internal sampling clock needs to be less than 5% different from the clock that produced the data to reliably decode it.

This is always presuming that the receiving UART (or software) has been designed properly to sample in the middle of the data bit (in the case of a single sample per bit UART) or at the correct times for a multi-sample per bit UART.

In fact, multi-sample-per-bit UARTs "could" make the tolerance situation worse!

There is also a third type of synchronous data where the clock is not sent but the receiver and the transmitter have to have accurate clocks which are synchronised by a preamble only.

It's unfortunate that there appear to be a (large) number of people out there who don't seem to know the basics of data transmission and end up writing code that produces wrong baud rates - especially when it comes to bit-banging. I always try to get as close to 0% tolerance as possible with baud rates, to cater for all the funny ones.

Alan

++++++++++++++++++++++++++++++++++++++++++
Jenal Communications
Manufacturers and Suppliers of HF Selcall
P O Box 1108, Morley, WA, 6943
Tel: +61 8 9370 5533   Fax: +61 8 9467 6146
++++++++++++++++++++++++++++++++++++++++++
Reply to
Alan

That was my point earlier, although perhaps I did not make it well. Just because there is a point of synchronization at the edge of the start bit doesn't make it synchronous comms. Way early in this thread the OP referred to synchronous comms with reference to the synchronization at the start bit edge. Since then the thread has gone off into the weeds of error tolerance between TX and RX clocks.

Doug

Reply to
Doug Dotson

The only scheme I know of for multi-sample receivers is to take 3 samples in the middle of the bit (which is typically divided into 16 sub-bit time slots), and use majority voting to get the result. This shouldn't affect the tolerance (the middle sample is going to fall exactly half-way within the nominal bit - samples are taken *between* time slots) directly, as far as I can see. Having 16-times oversampling will add another +/-(1/16) bit time to the total error, which I suppose should also be taken into account. Certainly for 4-times oversampling receivers it would be a significant difference, reducing your total error margin to 25%, thus requiring about a 2.5% match between the sender and the receiver.
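
A minimal C model of that 3-of-16 vote (the slot numbering is my assumption; real parts do this in logic, not software):

/* Sketch: majority vote over the middle three of 16 sub-bit samples. */
#include <stdio.h>

/* samples: 16 line readings taken across one bit time */
static int vote_bit(const int samples[16])
{
    int sum = samples[7] + samples[8] + samples[9];  /* middle three */
    return sum >= 2;                                 /* majority wins */
}

int main(void)
{
    /* A '1' bit with a noise glitch on one middle sample still decodes. */
    int noisy_one[16] = { 1,1,1,1,1,1,1, 1,0,1, 1,1,1,1,1,1 };
    printf("decoded: %d\n", vote_bit(noisy_one));    /* prints 1 */
    return 0;
}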

Do you mean when the receiver's clock speed is adjusted to match a preamble (typically a 010101 pattern)? As far as I know, that is used for LIN communication, which is basically standard uart except that a preamble is used to compensate for clocks with greater than 5% mismatch (i.e., LIN slaves are typically cheapo devices with internal oscillators). There are plenty of other schemes for adjustments - CAN controllers adjust their sub-sampling clock on each bit, to avoid the error building up too much over an 80-bit frame.

David

Reply to
David Brown

Very true. My only point was that everything between the start and stop bits is susceptible to cumulative error, since there is no (hehehe) sync point between data bits.

The greater the baud rate divisor, the more cumulative the error becomes.

Let me try to explain clearly what I'm talking about:

A 1.8432 MHz clock comes into the 16550 and is internally divided down (by 16) to generate the 115200 Hz master clock. From there you can specify a divisor of 1-65535. To get a 2400 bps baud clock, for example, you program the divisor to 48 (115200/2400). That means for 1 cycle of the 2400 bps baud clock, you are incurring the cumulative error of 48 cycles of the master clock, multiplied by 8 bits (or however many you're sending). At 2400 bps in this example, there are 48*8 = 384 master clock cycles of cumulative error per 8 bits of data transferred. At 57600, there are 2*8 = 16 master clock cycles of cumulative error. For something like 75 baud, that's 1536 master clock cycles per bit, or 1536*8 = 12288 master clock cycles of cumulative error. It gets much, much worse as the divisor increases (and the baud rate lowers).

I'm saying the baud clock becomes more sensitive to cumulative error as the divisor increases.

Or a source clock that is far enough out of tolerance that the baud rates can't be used (see above).

I've had to change crystals due to lousy tolerance AND because of flat out incorrect rates.

Well, I don't know how ethernet works, so I can't answer that. ;-) I'll say yes, the rate is irrelevant when discussing percentage error - but only if it's clocked with no divisor of any sort.

-->Neil

Reply to
Neil Bradley

In absolute time, yes; in percentage of BAUD, NO.

This is what we call 'bass ackwards' thinking. The UART does NOT care ( or even know) how many master clocks it takes. ALL the UART sees is the BAUD clock, commonly 1/16 the bit time. The UART state engine starts sampling on the nearest 1/16 bit time to the START edge, and thereafter follows the BAUD clock, with the necessary half bit shift to get centre sampling. If all goes well, the stop bit arrives within the correct sampling window, and a valid byte is flagged as received.
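
For the curious, here is a hedged C model of the state engine Jim describes (a plain software simulation with made-up names - real silicon does it with counters). Feed it one line sample per 16x baud-clock tick; it hunts for the start edge, then samples each bit at its centre:

/* Sketch: a 16x-oversampling UART receive state engine in software.
   Call rx_tick(line_level) once per baud-clock tick. */
#include <stdint.h>
#include <stdio.h>

enum { IDLE, RECEIVING } state = IDLE;
static int     tick;       /* 0..15 position within the current bit  */
static int     bitno;      /* which of the 10 frame bits we're in    */
static uint8_t shift;

static void rx_tick(int line)
{
    if (state == IDLE) {
        if (line == 0) {               /* falling edge: start bit     */
            state = RECEIVING;
            tick  = 0;
            bitno = 0;
        }
        return;
    }
    if (++tick == 16) tick = 0;
    if (tick == 8) {                   /* centre of the current bit   */
        /* bitno 0 is the start bit; a real UART would verify the
           line is still low here before committing. */
        if (bitno >= 1 && bitno <= 8)  /* data bits, LSB first        */
            shift = (uint8_t)((shift >> 1) | (line ? 0x80 : 0));
        if (bitno == 9) {              /* stop bit                    */
            if (line) printf("got byte 0x%02X\n", shift);
            else      printf("framing error\n");
            state = IDLE;
        }
        bitno++;
    }
}

int main(void)
{
    /* Feed an 8N1 frame for 0x55, 16 ticks per bit, perfect clocks. */
    int bits[10] = { 0, 1,0,1,0,1,0,1,0, 1 };  /* start, data, stop */
    for (int b = 0; b < 10; b++)
        for (int t = 0; t < 16; t++)
            rx_tick(bits[b]);
    return 0;
}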

-jg

Reply to
Jim Granville

Wouldn't it be the first 1/n bit time *after* the START edge (therefore a maximum error of about 1/n bit time rather than 1/(2*n)), or is the UART presumed to be prescient?

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

:) - well spotted - when I wrote that, it did occur to me that just maybe someone would consider that 'nearest' might apply both before and after the START edge. Maybe this thread will now have another lease of life on this?

-jg

Reply to
Jim Granville

SPI is a synchronous protocol. This does not imply that the clock has to be free-running or at a fixed frequency.

A UART is asynchronous, because it requires no clock to be provided with the data. The clock is generated locally, and resynchronized at the leading edge of the start bit. "Asynchronous" is the "A" in "UART".

Reply to
Eric Smith

Thank you for stating in MUCH BETTER TERMS what I had meant to say originally!

-->Neil

Reply to
Neil Bradley

And cars get better fuel economy than trucks do, as a rule. What does that have to do with anything? That wasn't even a point I was trying to make!

-->Neil

Reply to
Neil Bradley

I don't think I ever stated that it did matter. However, empirically speaking, there is a divide-by-16 between the input clock and the bit rate. If you take a look at the 6.0 RCLK description in the 16550 document, it states:

"RCLK, Receiver clock, pin 9: This input is the 16 x baud rate clock for the receiver section of the chip"

This indicates to me that the clock fed to the receiver runs at 16x the baud rate, with a divide-by-16 after it (and the programmable divisor ahead of it). The chip behaves this way empirically as well. So I guess what you're saying is that it's really 16x the baud rate internally, and the sampling state machine uses that as a basis for adjusting when it samples for a 1 or 0. If this isn't entirely clear, please try to understand what I'm trying to say rather than blasting me for my words.

But the net effect is still the same even if the internal description isn't right. The lower the baud rate, the worse the cumulative error becomes.

For each bit, yes, but I never said it wasn't.

Again, I never said it wasn't.

No, this is one of those communication things where if I'm not 110% clear, people jump all over me rather than realizing by my descriptions that I really have a clue what I'm talking about but may not be describing it well. My problem is not one of lack of understanding, but rather not being detailed and clear enough in describing everything I'm thinking.

And the more I post, the more people pigeonhole me and look for things wrong in my statements and write me off as an idiot, insult me, or look for ways to shift the conversation to continue to make me look wrong about something. The more I try to describe something, the more scrutiny I receive. This is typical Usenet, though. Sheesh. Even after I very clearly laid out what I was talking about, people continued to address issues that I wasn't even debating!

I actually did develop modem and fax machine firmware for a couple of years, doing async and sync communication, so I really do have relevant experience even if my vernacular isn't 100% to spec.

-->Neil

Reply to
Neil Bradley

On Sun, 15 Aug 2004 17:16:31 -0400, "Doug Dotson" wrote: [top posting fixed]

One must distinguish between BIT synchronous/asynchronous and BYTE or MESSAGE synchronous. The "asynchronous" in UART refers to the fact that one or more bytes may be transmitted at any time. The time between bytes must be at least 1 or 2 stop bits, but can be as long as one wants. On the bit level, though, it can be either synchronous or asynchronous. I think that when a clock is transmitted together with a byte-level asynchronous protocol, it is referred to as isochronous.

Regards Anton

Reply to
Anton Erasmus
