Extending UART queue capacity in software

I'm not sure this question is meaningful without a lot of extra data, but here goes anyway:

I have an application running on an older device with unbuffered UARTs, and am porting it to a newer device with 16550-compatible UARTs. There is a requirement to minimise program overhead for the data transfer in and out, and the buffering requirements are well in excess of 16 bytes both ways - the existing device uses 128-byte, code-controlled FIFOs.

Intuitively, having some queueing available in hardware, even if insufficient, has to be a good thing; however, I'm wondering what the best way is to combine it with the additional buffering I need. I'm always ready to code it up myself, but if there is a known good solution, I would appreciate some information. TIA

Reply to
Bruce Varley

FSVO "best"...

The usual approach is to just enable the FIFOs, and then at any interrupt drain the receive FIFO into the host's receive buffer and refill the transmit FIFO from the host's transmit buffer, as appropriate. Some care is required to kick things off properly when the transmit FIFO is empty and new data arrives to be sent. Setting the receive FIFO trigger level to 4 is a good compromise. Usually you just add a loop to the interrupt handler checking bits 0 (data ready) and 5 (transmit holding register empty) of the LSR, and continue processing while there's data to move.
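A minimal sketch of that handler loop, assuming hypothetical `uart_read_reg()`/`uart_write_reg()` accessors in place of your target's actual MMIO or port I/O. The register offsets and LSR bits are the standard 16550 ones; the simulated registers here exist only to make the sketch self-contained and are not part of any real driver:

```c
#include <stdint.h>
#include <stdbool.h>

/* Standard 16550 register offsets and LSR bits */
#define RBR 0          /* receive buffer (read)          */
#define THR 0          /* transmit holding (write)       */
#define LSR 5          /* line status register           */
#define LSR_DR   0x01  /* bit 0: receive data ready      */
#define LSR_THRE 0x20  /* bit 5: transmit holding empty  */

/* Hypothetical accessors -- on real hardware these would be MMIO or
   port reads. Here they drive a tiny simulated UART so the sketch
   compiles and runs on its own. */
static uint8_t  sim_rx[] = { 'h', 'i', '!' };  /* bytes "in" the RX FIFO */
static unsigned sim_rx_i = 0;
static uint8_t  sim_tx[64];                    /* bytes "sent" out THR   */
static unsigned sim_tx_n = 0;

static uint8_t uart_read_reg(unsigned off)
{
    if (off == LSR)
        return (sim_rx_i < sizeof sim_rx ? LSR_DR : 0) | LSR_THRE;
    if (off == RBR && sim_rx_i < sizeof sim_rx)
        return sim_rx[sim_rx_i++];
    return 0;
}

static void uart_write_reg(unsigned off, uint8_t v)
{
    if (off == THR && sim_tx_n < sizeof sim_tx)
        sim_tx[sim_tx_n++] = v;
}

/* 128-byte host-side ring buffers, as in the OP's existing design */
#define RING 128
static uint8_t  rx_ring[RING], tx_ring[RING];
static unsigned rx_head, rx_tail, tx_head, tx_tail;

static void uart_isr(void)
{
    for (;;) {
        uint8_t lsr   = uart_read_reg(LSR);
        bool    moved = false;
        if (lsr & LSR_DR) {                  /* drain one RX byte per pass */
            rx_ring[rx_head++ % RING] = uart_read_reg(RBR);
            moved = true;
        }
        if ((lsr & LSR_THRE) && tx_tail != tx_head) {
            /* THRE set means the TX FIFO is empty, so up to 16 bytes
               may be written back-to-back without re-checking LSR */
            for (int n = 16; n-- && tx_tail != tx_head; )
                uart_write_reg(THR, tx_ring[tx_tail++ % RING]);
            moved = true;
        }
        if (!moved)                          /* nothing left to move */
            break;
    }
}
```

The kick-off case mentioned above lives outside the ISR: when the transmit ring was empty and new data is queued, either write the first byte to THR directly or enable the THRE interrupt in the IER so the handler picks it up.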

This simple approach reduces the number of interrupts taken by the host, (usually) handles multiple characters per interrupt, and reduces the sensitivity of communications throughput to interrupt handling latency. You have to be a bit careful if you care about some of the low-level stuff (like the line-break-received indication, or exactly which character a parity error occurred on), but most applications don't.

Reply to
Robert Wessel

I would start by using all of the FIFOs in the 16550 *and* buffering on top of them. I believe the 16550 has a status indication that one or more characters are in the FIFO, separate from the indication that the FIFO has reached its trigger level - "triggered".

formatting link

Look at section 8.0, about page 18, "Character Timeout Indication". That'll cost you a few character times in latency.
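For illustration, decoding the interrupt cause from the IIR looks something like this. The encodings are the standard 16550 ones; the enum and function names are mine, not from any library. The key point for this thread is that "character timeout" should be handled exactly like "received data available" - drain the FIFO - which is what keeps a short tail of characters from going stale below the trigger level:

```c
#include <stdint.h>

/* Interrupt causes reported in the low four bits of the 16550 IIR */
enum iir_cause {
    IIR_NONE,         /* no interrupt pending            */
    IIR_LINE_STATUS,  /* receiver line status (errors)   */
    IIR_RX_READY,     /* RX FIFO reached trigger level   */
    IIR_RX_TIMEOUT,   /* character timeout: stale data
                         sitting below the trigger level */
    IIR_TX_EMPTY,     /* transmit holding register empty */
    IIR_MODEM,        /* modem status change             */
    IIR_UNKNOWN
};

static enum iir_cause iir_decode(uint8_t iir)
{
    switch (iir & 0x0F) {       /* bits 7:6 flag FIFO mode; ignore them */
    case 0x01: return IIR_NONE;
    case 0x06: return IIR_LINE_STATUS;
    case 0x04: return IIR_RX_READY;
    case 0x0C: return IIR_RX_TIMEOUT;
    case 0x02: return IIR_TX_EMPTY;
    case 0x00: return IIR_MODEM;
    default:   return IIR_UNKNOWN;
    }
}
```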

Once you get it basically working, you need the following metrics for testing:

- A count of dropped character runs (deduced from "packets" received). This'll depend on the message format in use. A CRC or LRC will help - on a CRC or LRC miss, use local heuristics to deduce the root cause of the failure.

- A histogram of the number of characters in the FIFO on a read interrupt.

Then you can tune the trigger level for the read FIFO. It might be worth an ioctl() to set the trigger level and have that be configurable during testing. Or just set the trigger level to the maximum and accept the character-timeout latency risk.
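The histogram metric is cheap to collect. A sketch, with names of my own invention: bump one bucket per receive interrupt with the number of bytes drained, then dump the table over a debug channel while testing. Mass piled at the trigger level means you're mostly trigger-driven; a long tail near 16 suggests you're flirting with overruns and should lower the trigger:

```c
#include <stdint.h>

/* Buckets 0..16: bytes drained per receive interrupt. The 16550 RX
   FIFO holds 16 bytes, so anything above that is clamped into the
   last bucket. */
static uint32_t rx_hist[17];

/* Call at the end of the ISR's receive drain loop. */
static void rx_histogram_note(unsigned drained)
{
    rx_hist[drained > 16 ? 16 : drained]++;
}
```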

If your target offers a select() or pselect() (or poll()), use that.

--
Les Cargill
Reply to
Les Cargill

What Robert said. You have to double down on taking care not to create race conditions &c, and you have to be very careful to set up the UART so you don't leave some characters waiting indefinitely in the queue (IIRC those things have a timeout, so if you're below the FIFO interrupt threshold it'll still interrupt after a finite amount of time).

It's a great opportunity for looking at a wedged system and asking yourself "what in heck happened THIS TIME?", but it generally works well once you get it working.

--
Tim Wescott 
Control system and signal processing consulting 
Reply to
Tim Wescott
