Greetings:
I am writing driver code for the serial communications interface (SCI) of the TI TMS320F2812.
So far I have been developing a set of functions analogous to the C stdio stream I/O calls, which let me communicate with a PC over an RS-232 link. The 16-byte receive and transmit FIFOs are used by the lowest-level getc/putc functions; the other functions merely call getc/putc.
These functions work OK:
int SCIb_getc(void); int SCIb_putc(int);
int SCIb_gets(char *s, int size); int SCIb_puts(const char *s);
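To make the layering concrete, here is a sketch of how gets/puts can sit on top of getc/putc. The loopback stubs standing in for the FIFO-backed getc/putc exist purely so the logic can be shown off-target; the real versions talk to the SCI registers.

```c
/* Loopback stubs: a small buffer stands in for the SCI hardware,
 * purely so the higher-level logic can be exercised off-target. */
static char loopback[64];
static unsigned lb_head = 0, lb_tail = 0;

int SCIb_putc(int c) { loopback[lb_head++ % 64] = (char)c; return c; }
int SCIb_getc(void)  { return lb_tail == lb_head ? -1 : loopback[lb_tail++ % 64]; }

/* puts: send each character, then a terminating newline. */
int SCIb_puts(const char *s)
{
    while (*s)
        if (SCIb_putc(*s++) < 0)
            return -1;
    return SCIb_putc('\n');
}

/* gets: read until newline, error, or the buffer is one short of
 * full; always NUL-terminate.  Returns characters stored. */
int SCIb_gets(char *s, int size)
{
    int n = 0, c;
    while (n < size - 1 && (c = SCIb_getc()) >= 0 && c != '\n')
        s[n++] = (char)c;
    s[n] = '\0';
    return n;
}
```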
These are not yet implemented, but will be used for binary block transfer:
uint16 SCIb_write(const void *ptr, uint16 size, uint16 count); uint16 SCIb_read(void *ptr, uint16 size, uint16 count);
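Since these aren't written yet, here is one possible fread/fwrite-style sketch built on getc/putc. The loopback stubs and the uint16 typedef are assumptions for off-target illustration; note that on the C28x a char is 16 bits, so "byte" here really means a 16-bit word on the target.

```c
typedef unsigned int uint16;   /* assumed 16-bit on the C28x; wider on a PC host */

/* Loopback stubs stand in for the FIFO-backed getc/putc. */
static unsigned char xfer_ring[256];
static unsigned xr_head, xr_tail;
static int SCIb_putc(int c) { xfer_ring[xr_head++ & 255] = (unsigned char)c; return c; }
static int SCIb_getc(void)  { return xr_head == xr_tail ? -1 : xfer_ring[xr_tail++ & 255]; }

/* fwrite-style block write: returns the number of COMPLETE items sent. */
uint16 SCIb_write(const void *ptr, uint16 size, uint16 count)
{
    const unsigned char *p = (const unsigned char *)ptr;
    unsigned long i, n = (unsigned long)size * count;
    if (size == 0 || count == 0)
        return 0;
    for (i = 0; i < n; i++)
        if (SCIb_putc(p[i]) < 0)
            return (uint16)(i / size);
    return count;
}

/* fread-style block read: returns the number of COMPLETE items received. */
uint16 SCIb_read(void *ptr, uint16 size, uint16 count)
{
    unsigned char *p = (unsigned char *)ptr;
    unsigned long i, n = (unsigned long)size * count;
    int c;
    if (size == 0 || count == 0)
        return 0;
    for (i = 0; i < n; i++) {
        if ((c = SCIb_getc()) < 0)
            return (uint16)(i / size);
        p[i] = (unsigned char)c;
    }
    return count;
}
```

Returning complete items only mirrors fread/fwrite semantics, so a short transfer is detectable by comparing the return value against count.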
There are some other functions for initialization, setting baud, control char handling, echoing, etc.
My next task is to make this driver codebase interrupt-driven. I am first trying to understand all of the reasons for using interrupts, and what the range of implementation possibilities might be.
So far I can see the following reasons to use interrupts:
- To implement flow control at the driver level. For example, the simplest case would be to have the FIFO threshold interrupt handler deassert CTS so that the DTE stops sending before the FIFO overflows. The SCIx_getc() code could then reassert CTS once the FIFO has fallen below some threshold.
Interestingly, I have just learned that the RS-232 standard does not provide for hardware flow control in the RxD direction. It would seem highly non-standard to use the DTR/DSR pair here. Thus, the transfer protocol level would have to ensure that the DTE doesn't get overrun. For my devices this shouldn't be an issue; they will do much more receiving than transmitting.
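The CTS policy in the first bullet might look like the following sketch, with the FIFO level passed in and the CTS pin modeled as a plain flag (the real code would read the FIFO count bits and drive a handshake output; the threshold values and names are assumptions):

```c
/* Hypothetical thresholds for a 16-byte receive FIFO. */
#define CTS_STOP_LEVEL    12   /* deassert CTS when the RX FIFO reaches this */
#define CTS_RESUME_LEVEL   4   /* reassert once drained below this           */

int cts_asserted = 1;          /* 1 = "clear to send" toward the DTE; models a GPIO */

/* Called from the RX FIFO-threshold interrupt handler. */
void rx_isr_flow_control(int fifo_level)
{
    if (fifo_level >= CTS_STOP_LEVEL)
        cts_asserted = 0;      /* tell the DTE to pause */
}

/* Called from SCIx_getc() after removing a character. */
void getc_flow_control(int fifo_level)
{
    if (fifo_level < CTS_RESUME_LEVEL)
        cts_asserted = 1;      /* safe to resume */
}
```

The gap between the two thresholds provides hysteresis, so CTS doesn't chatter when the FIFO level hovers near a single cutoff.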
- To increase the effective size of the FIFO buffer. If the receive interrupt handler moves incoming data into a buffer larger than the 16-byte FIFO, and the CTS state is based on the level of that larger buffer rather than on the FIFO, then user code can process data in larger chunks and less frequently.
This might increase throughput somewhat, but it doesn't solve the fundamental need for flow control; it just transfers the risk of overrun from the hardware FIFO to the software buffer. Thus, either the hardware must still throttle the sender via CTS, or the protocol must be able to stop the DTE. If the protocol is XMODEM, for instance, and the buffer is >= 132 bytes (one full XMODEM block), then I suppose this would guarantee no overruns.
One application will have a mixture of binary file transfer from DTE->DCE, which will be XMODEM-like, so this will be fine. However, it will also have a text command language, so it is conceivable that the buffer could overflow before the command processor had a chance to digest the commands if a machine were sending them rapidly. Thus, hardware flow control would be needed here, or the data rate would have to be limited.
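A minimal sketch of the enlarged software buffer, assuming a power-of-two ring fed by the receive interrupt and drained by getc (the names and the 256-byte size are hypothetical):

```c
#define RX_BUF_SIZE 256        /* assumed size; must be a power of two here */

static unsigned char rx_buf[RX_BUF_SIZE];
static volatile unsigned rx_head, rx_tail;   /* head: ISR writes, tail: user reads */

/* Characters currently buffered; valid because indices free-run and
 * RX_BUF_SIZE divides the unsigned wrap-around range evenly. */
unsigned rx_buf_count(void)
{
    return (rx_head - rx_tail) & (RX_BUF_SIZE - 1);
}

/* ISR side: called once per character pulled from the hardware FIFO. */
void rx_buf_put(unsigned char c)
{
    if (rx_buf_count() == RX_BUF_SIZE - 1)
        return;                              /* full: drop (or flag an overrun) */
    rx_buf[rx_head & (RX_BUF_SIZE - 1)] = c;
    rx_head++;
}

/* User side: called from getc; returns -1 when empty. */
int rx_buf_get(void)
{
    int c;
    if (rx_head == rx_tail)
        return -1;
    c = rx_buf[rx_tail & (RX_BUF_SIZE - 1)];
    rx_tail++;
    return c;
}
```

rx_buf_count() is then the natural input for the CTS decision, in place of the hardware FIFO level.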
- It is possible for the SCI interrupts to call user code. That is, the user can "register" a function with the driver, and could then be "forced" to process data before the buffer overflows.
I don't particularly like this, nor do I think it is typical.
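For comparison, the registration mechanism itself is small; all names below are hypothetical, and demo_cb is just an example user callback:

```c
/* User callback invoked from the RX interrupt handler with the
 * number of buffered characters available. */
typedef void (*sci_rx_callback)(int chars_available);

static sci_rx_callback rx_cb = 0;

void SCIb_register_rx_callback(sci_rx_callback cb) { rx_cb = cb; }

/* Would be called from the RX interrupt handler after buffering data. */
void rx_notify(int chars_available)
{
    if (rx_cb)
        rx_cb(chars_available);
}

/* Example user callback: just records the last notification. */
int last_seen;
void demo_cb(int n) { last_seen = n; }
```

The main drawback, consistent with the reservation above, is that the callback runs in interrupt context, so a slow user function delays every other interrupt.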
What can I expect from typical PC serial port drivers on Windows and Linux? What do they do with the RTS/CTS and DTR/DSR lines?
Obviously I am learning about this by doing, and for the first time. Comments regarding the direction I am taking, and my understanding of the purpose of interrupts in a serial driver, are welcome.