I thought that for async you could be pretty sloppy, since the receiver is edge-triggered by the start bit, and you just have to sample somewhere in the interior of each of the 8 data bits going by. So the timing could be off by, hmm, well, several percent at least.
Or, if you have a spare timer capture pin, you could track the pulse widths and make the baud rate adaptable. Would not be appropriate for every situation, of course.
With one start and one stop bit plus eight data bits, just barely catching the leading or trailing edge of the stop bit means being within +/- 5%. That would be exceedingly marginal. If your hardware does the usual majority-of-three voting, that trims your margin some (because you want all three samples within the bit); if you allow for some lowpass filtering, that trims it further; and so on.
Just as a knee-jerk estimate I wouldn't be happy unless it's +/- 2.5% overall, which leaves a bit over 1% for each clock if such processors have to talk to each other and you can't assume that they'll track together with temperature.
--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?
Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
I've used SiLabs' C8051F061. It has two 1 MHz 16-bit ADCs on it -- nobody else has them that fast on-chip. But its internal oscillator is 2%, sadly. (You absolutely MUST use the DMA to get the data into memory fast enough.)
The C8051F5xx is surprising, having the 0.5% internal oscillator. Looking over the selection guide, I see that the C8051F381 datasheet says "Internal Oscillator: ±0.25% accuracy with clock recovery," and elsewhere, "Internal oscillator with ±0.25% accuracy supports all USB and UART modes."
Silabs has several products with the internal oscillator synchronised to USB, e.g. a USB-to-UART interface running from the internal RC oscillator. Very nice!
Having grown up with the 6502, and having used 68HC-something and Coldfire (rest in peace) for two decades, I hesitate (but don't refuse) to use 8051 controllers today. My preferred path is ARM Cortex.
I prefer visible solder joints if possible. The LPC111x are also available in TSSOP20.
Yes, that's great, but they seem to be rather picky regarding power supply: "The device might not always start-up if the power supply (on the VDD pin) does not reach 200 mV. The minimum voltage of the power supply ramp (on the VDD pin) must be 200 mV or below with ramp-up time of 500 ms or faster". That's very bad.
Another disadvantage (especially for small packages) is that the crystal oscillator pins can't be used for I/O.
If you like to check out (in a disassembly or compiler listing) the code produced by a C compiler, don't use an 8051. ;-( Well, even if you don't do this, the architecture isn't good for C (very limited stack space, only one or two pointers for accessing "external" memory, etc.). Sure, it works, kind of, but you can't write "normal C" for these chips.
ARM is good. A 4 GB linear memory space, 32-bit registers, etc. Your C compiler will thank you. ;-)
and not GCC - I strongly disliked its output for Coldfire. So much unnecessary re-ordering. And the mixed source/assembly output was completely out of sync (IIRC a long-standing bug).
Can't wait to check out the Cosmic compiler for ARM Cortex.
Oliver
--
Oliver Betz, Munich
despammed.com is broken, use Reply-To:
Which Coldfire core were you using? For the bigger cores, re-ordering can make a big difference to throughput, as instructions can be overlapped in the longer pipelines. gcc does not re-order instructions just for fun - it does so because it considers instruction scheduling and memory access patterns to get the fastest code. And of course, you can control the amount of instruction scheduling via optimisation switches (at least to some extent).
I found that for some gcc versions (I can't remember which versions or which targets), mixed assembly output could get out of sync - especially with "-fverbose-asm". Using the "-nopipe" option seemed to help, IIRC.
Unfortunately, gcc only knows about the core cpu, and (AFAIK) will only consider pure cpu cycles and latencies when scheduling. There is no way for a compiler to know about off-cpu wait states.
People often think they can force the timing and ordering of execution and data accesses by using "volatile" - typically, peripherals will use "volatile" accesses. But remember that "volatile" only influences ordering with respect to other volatile accesses - it does not affect ordering with respect to calculations or non-volatile accesses. So if you have a write to a peripheral and you want it to be dealt with /now/, you have to use a memory barrier as well.
I too used to be a big Coldfire fan (and 68332 before that), but it does not look like the architecture has a bright future. We use MPC cores for some parts, and are moving to Cortex for many others (in the 32-bit space).
I didn't expect this, but I can imagine a compiler not delaying accesses without benefit. If the value isn't needed later, what is the reason to delay writing it?
I'm aware of this.
Oliver
--
Oliver Betz, Munich
despammed.com is broken, use Reply-To: