You will probably fail with this, because you need hard real-time operation. Even if it is not really fast, you need to sample at least 4 times per bit time to find the start bit and its center, and from there on sample the bits at each center point. Anything else will be too sensitive to minor variations from the sender. This timing must be met to within half a bit time. The sender is a bit easier, but you also have to meet the timing.
Things can get in the way even if you are running inside the kernel. It might work if your kernel is not doing much else.
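The 4x-oversampling receive described above can be sketched as a small state machine, independent of any OS. This is only an illustration under assumed conditions (a tick at four times the baud rate, 8N1 framing); `swuart_rx_sample()` and the struct are hypothetical names, not any existing API:

```c
#include <assert.h>
#include <stdint.h>

/* 4x-oversampled software UART receiver (sketch).
 * Feed one line sample per call (line idles high); returns -1 until
 * a full frame (1 start, 8 data LSB-first, 1 stop) is decoded. */
enum { OSR = 4 };                  /* samples per bit time */

typedef struct {
    enum { RX_IDLE, RX_START, RX_DATA, RX_STOP } state;
    int cnt;                       /* samples until next decision */
    int bit;                       /* data bit index */
    uint8_t shift;                 /* assembled byte */
} swuart_rx_t;

int swuart_rx_sample(swuart_rx_t *rx, int level)
{
    switch (rx->state) {
    case RX_IDLE:
        if (level == 0) {          /* falling edge: start bit */
            rx->state = RX_START;
            rx->cnt = OSR / 2;     /* wait to start-bit center */
        }
        break;
    case RX_START:
        if (--rx->cnt == 0) {
            if (level != 0) {      /* glitch, not a real start bit */
                rx->state = RX_IDLE;
            } else {
                rx->state = RX_DATA;
                rx->bit = 0;
                rx->shift = 0;
                rx->cnt = OSR;     /* next decision at bit 0 center */
            }
        }
        break;
    case RX_DATA:
        if (--rx->cnt == 0) {
            rx->shift |= (uint8_t)(level << rx->bit);
            rx->cnt = OSR;
            if (++rx->bit == 8)
                rx->state = RX_STOP;
        }
        break;
    case RX_STOP:
        if (--rx->cnt == 0) {
            rx->state = RX_IDLE;
            if (level == 1)        /* valid stop bit */
                return rx->shift;
        }
        break;
    }
    return -1;
}
```

In a driver this would be called from a periodic timer interrupt at 4800 Hz for 1200 baud; verifying the start bit again at its center rejects short glitches.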
I have tried to look an answer for this on the web, but I mainly got "no can do" (timing issue).
Kernel: 2.6.3x, MCU: Atmel AT91 (9263)
I wish to get or write a driver to bit-bang a couple of GPIOs to emulate a UART. The communication is with a remote PIC (PIC12F683), for which I have already written the software and checked it with a Linux terminal. The communication does not have to be fast at all (1200 baud would be fine); only a few bytes are exchanged.
Can someone point me in the right direction? Has someone already developed such a driver? Is there an example/guide I can follow to do it?
I disagree with the failure prediction. I did a bit-banged MIDI interface with a PIC16F84 more than a decade ago with great success. The real predictor of success or failure will be your requirements. It's true that hard real-time response is needed, however there are many ways to skin this cat. If there's no RTOS, your odds of success improve. If there are no other interrupts active, you will likely succeed. How many times you need to sample a bit cell will be determined by your operating environment, baud rate, reliability requirements, etc.
No I didn't do it from Linux. But I've been in the business long enough to know that a solution frequently can be found if you're clever enough.
For example, I propose this idea might work on Linux:
Interrupt on a GPIO pin to capture the start bit. Disable all interrupts and poll each bit for all bits that you expect at the operational baud rate. Then enable interrupts after the last bit is captured. At 1200 baud that takes about 10 msec. If the system requirements will tolerate that length of apparent dead time, you've solved the problem of UART receive.
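That interrupt-then-poll idea might look something like the sketch below. The `read_pin`/`delay_us` callbacks are hypothetical stand-ins for `gpio_get_value()`/`udelay()` in a real driver (where this would run from the start-bit GPIO interrupt with interrupts masked); a simulated line is included so the decode logic can be exercised off-target:

```c
#include <assert.h>
#include <stdint.h>

/* Poll-after-start-edge receive (sketch). */
typedef int  (*read_pin_fn)(void *ctx);
typedef void (*delay_fn)(void *ctx, unsigned us);

int poll_rx_byte(read_pin_fn read_pin, delay_fn delay_us, void *ctx,
                 unsigned bit_us)          /* 833 for 1200 baud */
{
    uint8_t byte = 0;

    /* Start edge already seen: skip to the middle of data bit 0. */
    delay_us(ctx, bit_us + bit_us / 2);
    for (int b = 0; b < 8; b++) {
        if (read_pin(ctx))
            byte |= (uint8_t)(1u << b);    /* LSB first */
        delay_us(ctx, bit_us);
    }
    return read_pin(ctx) ? byte : -1;      /* stop bit must be high */
}

/* Simulated 1200-baud line for testing the logic off-target. */
struct sim_line { unsigned t_us; uint8_t byte; };

int sim_read(void *p)
{
    struct sim_line *s = p;
    unsigned bit = s->t_us / 833;          /* which bit cell we are in */
    if (bit == 0) return 0;                /* start bit */
    if (bit <= 8) return (s->byte >> (bit - 1)) & 1;
    return 1;                              /* stop bit / idle */
}

void sim_delay(void *p, unsigned us) { ((struct sim_line *)p)->t_us += us; }
```

On target, the busy-wait ties up the CPU for the roughly 8-9 ms of the frame, which is exactly the "apparent dead time" trade-off described above.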
Unless you can hook a periodic interrupt (e.g., the jiffy), chances are, you won't be able to co-operate with the kernel (and its notion that *it* has exclusive control over when IRQ's are enabled/disabled/etc.)
I think there are some RT Linux implementations in which the Linux kernel actually runs as a task on top of a more deterministic microkernel. There, your UART emulator could run on that same microkernel ALONGSIDE the Linux kernel and, potentially, get the timeliness guarantees that it requires.
If you are *stuck* with the Linux kernel approach, you could possibly cheat by moving the timeliness issues into the remote PIC. To get you thinking in this direction (but not a genuine solution, per se), imagine the PIC is pushing bytes at the Atmel machine at some baudrate. CONTINUOUSLY (or nearly so).
As such, the serial port handler in the Atmel machine will be getting periodic interrupts each time a character arrives. Hook that driver so that the Atmel box uses those interrupts as "periodic events" (clocks) that govern how it shifts each successive bit out to the PIC.
[Flip this approach around to see how the Atmel can "drive" the PIC, instead -- so the Atmel "knows" when to examine the incoming serial data line!]
You can also change the comms scheme (is there a reason it must be a traditional UART?). E.g., you want to effectively distribute the "clock" IN the signal instead of it being *implied*.
If, for example, you can acquire the processor for small, contiguous chunks of time, instead of shifting *a* bit out, you can opt to shift a "bit boundary" out (i.e., toggle the level of the signal that you are sending) and then, after a delay (not too long that you are tying up the CPU needlessly... and, not too short that you introduce a high frequency transient to your comm line), toggle the signal *again* iff sending a '1'.
[you can come up with other schemes as well]
The device at the other end sees the first transition as marking a "bit boundary". Then, waits for that "delay" to reexamine the signal. If a second transition is detected, it knows that a '1' has been sent. Regardless, it waits for the NEXT transition to signal the next bit time...
[think about these approaches in concert]
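For what it's worth, that transition scheme is essentially biphase mark coding (differential Manchester): every bit cell begins with a transition, and a '1' adds a second transition mid-cell. A sketch of encode/decode over timestamped edges, with hypothetical names and an arbitrary cell length:

```c
#include <assert.h>
#include <stddef.h>

enum { CELL = 100, HALF = 50 };    /* bit-cell length in ticks */

/* Encode bits[0..n) into transition timestamps; returns edge count. */
size_t bmc_encode(const int *bits, size_t n, unsigned *edges)
{
    size_t k = 0;
    unsigned t = 0;
    for (size_t i = 0; i < n; i++) {
        edges[k++] = t;            /* cell-boundary transition */
        if (bits[i])
            edges[k++] = t + HALF; /* extra mid-cell transition = '1' */
        t += CELL;
    }
    return k;
}

/* Decode by measuring gaps between successive transitions. */
size_t bmc_decode(const unsigned *edges, size_t k, int *bits)
{
    size_t n = 0, i = 0;
    while (i < k) {
        /* edges[i] is a cell boundary; a following edge closer than
         * 3/4 of a cell must be the mid-cell transition of a '1'. */
        if (i + 1 < k && edges[i + 1] - edges[i] < 3 * CELL / 4) {
            bits[n++] = 1;
            i += 2;
        } else {
            bits[n++] = 0;
            i += 1;
        }
    }
    return n;
}
```

The 3/4-cell threshold is what buys the timing slack: the receiver only has to classify each gap as "half a cell" or "a whole cell", not hit an absolute sample instant.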
If you have something more than a RxD/TxD interface, you can drag other signals into the mix as well. (distances and any intervening kit will obviously affect what you can get away with!)
A standard 2.6 kernel on an at91 has a typical interrupt latency in the many tens of usec, and it will probably occasionally be 100-200 usec. [Though it varies a _lot_ depending on what other drivers you're using.]
On the transmit side, you're going to have to be able to toggle an output port pin every 833us +/- a few percent. You've got almost zero chance of that unless you can use a hardware timer that allows you to schedule pin transitions via a compare register. On the receive side, you need to sample an input pin at least once every 100us or so to detect the leading edge of a start bit and then sample a data bit every 833 us after that. The only possible way you could do that is using a hardware counter/timer with an edge capture feature.
_If_ you have a spare counter/timer register with output compare/toggle and input edge capture features, and it can generate interrupts on compare/capture events you should be able to do it. It's going to generate interrupts at a 1200Hz rate, so you'll have to be pretty careful with what you're doing.
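On the transmit side, the compare-register approach amounts to precomputing when the pin must toggle. A sketch under that assumption: build the match schedule for one 8N1 frame, which the compare ISR would then feed to the timer one entry at a time. Names and structure are illustrative, not any particular kernel or chip API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum { BIT_US = 833 };             /* one bit time at 1200 baud */

struct edge { unsigned t_us; int level; };

/* Returns the number of pin transitions for the frame (level
 * changes only); t_us is measured from the start-bit edge. */
size_t uart_tx_schedule(uint8_t byte, struct edge *out)
{
    /* Frame levels: start (0), 8 data bits LSB first, stop (1). */
    int levels[10];
    levels[0] = 0;
    for (int b = 0; b < 8; b++)
        levels[1 + b] = (byte >> b) & 1;
    levels[9] = 1;

    size_t k = 0;
    int cur = 1;                   /* line idles high */
    for (int i = 0; i < 10; i++) {
        if (levels[i] != cur) {    /* only schedule real toggles */
            out[k].t_us = (unsigned)i * BIT_US;
            out[k].level = levels[i];
            k++;
            cur = levels[i];
        }
    }
    return k;
}
```

Because the hardware compare does the actual toggling, interrupt latency only has to be under one bit time (to reload the next match), not under the few-percent tolerance of the edge itself.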
Most AT91 family members have both SPI and I2C controllers -- any chance you can add a UART with an SPI or I2C interface? Then you don't need to write any driver code at all: just use the standard user-space API for SPI or I2C.
Even hooking up an SPI or I2C UART to a couple GPIO pins and then bit-banging SPI or I2C is probably pretty straight-forward compared to bit-banging a UART.
Grant Edwards
I see no real problem in doing that, if no OS gets in the way. I have done such things many times on even smaller CPUs, e.g. full-duplex 9600 baud on a Z8 running at 6 MHz internally. The suggested sampling rate comes from experience doing it for an industrial environment.
But the OP mentioned Kernel: 2.6.3x which tells me there is a Linux "in the way" and it will very probably not have real time extensions by default.
Can you get an interrupt on that GPIO pin for both rising and falling edges?
If that is the case, read a free running high frequency counter (at least a few kHz) and save the transition time in ISR and later on, use user mode processing of pulse edges to determine how many 1s or 0s there are between transitions. As long as the _variation_ of the interrupt latency is shorter than a serial bit time, you should be able to extract the data.
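A sketch of that user-mode decode step, assuming the ISR has stored one counter timestamp per transition, starting at the falling edge of the start bit. Each gap is rounded to the nearest whole number of bit times, so latency variation up to half a bit is absorbed; the function name and layout are made up for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Decode one 8N1 byte from captured edge timestamps.  ts[] holds
 * free-running counter values at each line transition; the line
 * idles high, so levels alternate low/high from the first edge. */
int edges_to_byte(const unsigned *ts, size_t n, unsigned bit_ticks)
{
    int frame[10];                 /* start + 8 data + stop */
    int pos = 0, level = 0;        /* first edge: line went low */

    for (size_t i = 1; i <= n && pos < 10; i++) {
        /* Run length in bits; after the last edge the line simply
         * stays high through the stop bit, so fill the remainder. */
        unsigned gap = (i < n) ? ts[i] - ts[i - 1]
                               : (unsigned)(10 - pos) * bit_ticks;
        int bits = (int)((gap + bit_ticks / 2) / bit_ticks);
        while (bits-- > 0 && pos < 10)
            frame[pos++] = level;
        level = !level;
    }
    if (pos < 10 || frame[0] != 0 || frame[9] != 1)
        return -1;                 /* framing error */

    int byte = 0;
    for (int b = 0; b < 8; b++)
        byte |= frame[1 + b] << b; /* LSB first */
    return byte;
}
```

Note that only the relative spacing of the timestamps matters, which is exactly why interrupt-latency *variation*, not absolute latency, is the limit here.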
Trying to do it from Linux (and with a very old kernel on a slow processor) is going to be difficult unless he can dedicate a high priority timer interrupt to the job. With a timer interrupt with fairly low jitter, it should be fine, even with Linux running. If possible, the interrupt code and the relevant data should be put in internal ram so that cache misses do not slow it down.
My record for bit-banged uarts is 38.4 kBaud on a 9.8 MHz processor.
Someone suggested you use a high speed interrupt with a fast timer/counter. I think you can do better using the timer directly in interval measurement mode. I haven't looked at the details, but it should be possible for the timer to detect the falling edge of the input, measure the time to the rising edge and then repeat for the rising edge to the falling edge again. Each of these intervals will be some multiple of the bit period giving you the number of bits of that polarity. I'd bet this is less work than writing a proper UART in software using an interrupt on the input pin, and highly accurate.
On the output side you can use the PWM facility to generate the high and low pulses of the output bits. Since both of these approaches do the timing for you, the CPU has little to do except handle the logic.
If possible, the OP could try FIQ with highest priority and write his own small handler in assembler, using the FIQ banked registers for speed. No kernel API usage at all, just a plain FIFO (ring buffer) somewhere in memory for lowest overhead.
All the implications with kernel disabling the interrupts remain.
As a very quick test, I'd put short FIQ code that simply toggles GPIO, and then check visually how bad the jitter goes under load.
Agreed. It may also be possible to use the compiler but with restricted register access - he would have to check the details here.
Since it is Linux, and he has the source code, it is actually possible for him to modify the interrupt enable/disable code to make sure the FIQ (or other interrupt) is still enabled. Real-time Linux extensions that combine an RTOS with Linux running as the RTOS's idle task do that.