I've implemented a simple software UART on a 50 MHz Parallax microcontroller, running at a baud rate of 2 Mbps. It talks to an FTDI FT232BM chip.
Most UART implementations use an oversampling method (I've also seen it called super-sampling or majority/center sampling) where the UART samples each bit between 8 and 16 times, then uses those samples to determine the transmitted bit value. My simple UART samples each bit just once.
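For anyone unfamiliar with the technique, here is a minimal sketch of the majority/center-sampling idea at 16x oversampling (a real UART does this in hardware at 16x the baud clock; the function name and the 3-of-5-style center vote are just illustrative):

```python
def majority_bit(samples):
    """Vote on the three samples nearest the bit center (indices 7..9 of
    0..15), so a single noise glitch can't flip the decoded bit."""
    center = samples[7:10]
    return 1 if sum(center) >= 2 else 0

clean  = [1] * 16
glitch = [1] * 8 + [0] + [1] * 7   # one noise spike right at the bit center
print(majority_bit(clean), majority_bit(glitch))   # both decode as 1
```

A single-sample receiver, by contrast, decodes whatever level it happens to catch at its one sample point.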
My pseudocode looks like this:
(interrupts are off)
    wait for transition high to low    ' look for start bit
    NOP delay x cycles to middle of 1st bit
    for 8 bits
        sample the bit
        NOP delay rest of bit period
    next
    NOP delay to eat stop bit
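To convince myself the single-sample scheme works, I find it helpful to model it. Here is a sketch (my own illustrative model, not the actual SX code): the line is a function of time in nanoseconds, and the receiver syncs on the start edge, then samples once at each bit center, exactly as in the pseudocode above. `clock_error` models the receiver's oscillator mismatch:

```python
BIT_NS = 500  # one bit time at 2 Mbps

def make_line(byte):
    """Line-level sampler for one frame: start bit (0), 8 data bits
    LSB-first, stop bit (1); idle high outside the frame."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    def level(t_ns):
        idx = int(t_ns // BIT_NS)
        return bits[idx] if 0 <= idx < len(bits) else 1
    return level

def rx_byte(level, clock_error=0.0):
    """Single-sample receiver: sync on the start edge at t=0, then
    sample at the middle of each data bit using the receiver's own
    (possibly wrong) idea of the bit period."""
    period = BIT_NS * (1 + clock_error)  # e.g. 0.01 = receiver 1% slow
    t = period * 1.5                     # middle of 1st data bit
    byte = 0
    for i in range(8):
        byte |= level(t) << i
        t += period
    return byte

line = make_line(0xA5)
print(hex(rx_byte(line)))          # 0xa5 with exact clocks
print(hex(rx_byte(line, 0.04)))    # 0xa5 even with a 4% clock mismatch
```

The last data bit is sampled 8.5 receiver bit-times after the start edge, so the decode only fails once the accumulated mismatch pushes that sample out of its 500 ns cell.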
What PRACTICAL problems could show up by not oversampling? My cable distances are about 3 inches. The FTDI converter uses a 6 MHz crystal, and the uC uses a 50 MHz Murata resonator.
By waiting for the transition, I'm in effect syncing to the start bit. So how much could my clock (or the FTDI's) really drift in 8 (or 10, if you include start/stop) bit times? Even if there is a slight change in the transmitted bit period, wouldn't it have to be a huge error, accumulating in the same direction, before I end up sampling in the wrong place? I'm not arguing with the time-tested methods that UARTs use --- I just don't understand them in the context of today's hardware.
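A back-of-the-envelope budget for that drift question (my own arithmetic, not a measured figure): since the receiver resyncs on every start edge, the error only accumulates over one frame, and the last data bit is sampled 8.5 bit-times after that edge. The sample may shift by at most half a bit before it lands in the wrong cell:

```python
bit_ns      = 500                       # one bit time at 2 Mbps
last_sample = 8.5 * bit_ns              # last data-bit sample, ns after start edge
margin      = 0.5 * bit_ns              # allowed shift: half a bit
tolerance   = margin / last_sample      # combined TX+RX fractional clock error
print(f"{tolerance:.1%}")               # prints 5.9%
```

So the combined clock error budget is roughly 5.9%. A ceramic resonator is typically on the order of 0.5% and the FTDI's crystal on the order of tens of ppm, which suggests frequency error alone is comfortably inside the budget; the practical risks I can see are more about edge-detection latency and quantization of the software delay loops than raw oscillator drift.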
Note that this is a hobby project, not a commercial or space shuttle application, so simplicity here is really the deciding factor for me.
Thanks
Keith