Capturing a remote control's IR signals - timing is sometimes wildly off.

Hello All,

I'm currently capturing a remote control's IR signal by using an IR receiver module and edge-triggered ISR callbacks (as provided by wiringPi), and it works ... somewhat.

The problem is that rather often my calculation of the time between two ISR callbacks (subtracting the previous gettimeofday() result from the current one) is *way* off (because other process tasks need their time too), which then, even when a generous timing deviation is allowed, causes the decoding of the signal to fail. And that simply won't do.
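For reference, this is roughly what that looks like at the moment (simplified; the BCM pin number and the printf stand in for my actual decoding):

    /* Simplified version of my current capture code (wiringPi).
     * IR_PIN is just an example BCM pin number. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <wiringPi.h>

    #define IR_PIN 18                     /* example pin */

    static struct timeval last_tv;

    static void ir_edge(void)             /* called by wiringPi on every edge */
    {
        struct timeval now;
        long usec;

        gettimeofday(&now, NULL);
        usec = (now.tv_sec  - last_tv.tv_sec) * 1000000L
             + (now.tv_usec - last_tv.tv_usec);
        last_tv = now;

        /* 'usec' gets compared against the expected IR pulse lengths;
         * this is the value that is sometimes wildly off. */
        printf("edge after %ld us, level now %d\n", usec, digitalRead(IR_PIN));
    }

    int main(void)
    {
        wiringPiSetupGpio();                          /* BCM numbering */
        gettimeofday(&last_tv, NULL);
        wiringPiISR(IR_PIN, INT_EDGE_BOTH, &ir_edge);

        for (;;)
            delay(1000);                  /* all the work happens in the callback */
        return 0;
    }

(compiled with -lwiringPi)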

My question: is there a way to determine, from within userland, the time between two events a bit more dependably than doing it in the ISR callback (wiringPi, BCM or otherwise)? Maybe the events, together with their time of occurrence, could be put into a FIFO?

Maybe an even simpler question: what is the commonly used method to capture such timed signals in Linux (Debian)?

By the way, I've seen LIRC mentioned, but that's currently not what I'd like to use (hobbyist programmer, wanting to know how everything works, and all that).

Reply to
R.Wieser

On Sun, 7 Jan 2018 17:20:07 +0100, "R.Wieser" declaimed the following:

Linux is not considered a "real-time" OS, although some work has been done to produce kernels that are more suited for RT applications.

Dedicated counter modules (which return the time between triggers as a cycle count)...

Auxiliary processors meant for non-OS/RT usage: the chip used on the BeagleBone Black has a pair of "PRU"s (programmable realtime units) with a fixed instruction cycle rate (all instructions take the same time), optimized for writing I/O protocols. There are means for transferring data from the PRU application memory to main (Linux) application memory.

An RPi-3 -- if you can set affinities for the cores -- might be an approach. Put your timing application on a dedicated core and leave the other cores for regular stuff. Use shared memory or another IPC mechanism to transfer the time-stamped/decoded IR data.
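Something along these lines (untested sketch; the core number is just an example) would pin the process from inside the program:

    /* Sketch: pin the calling process to core 3, leaving cores 0-2
     * for everything else.  _GNU_SOURCE is needed for CPU_SET et al. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(3, &set);                 /* core 3 only (example) */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* ... timing/capture code runs here, on core 3 ... */
        return 0;
    }

The same can be done from the outside with "taskset -c 3 <program>"; to really keep the kernel from scheduling other work on that core you'd also look at the isolcpus= kernel command-line parameter.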

Possibly tweak the "nice" level (into the "not-nice" range) to give your process more priority. This will tend to bog down regular operations unless your app spends a lot of time sleeping/blocked on an I/O event.
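For example (the -15 is arbitrary; negative nice values need root or CAP_SYS_NICE):

    /* Example only: raise this process's priority (nice -15) before
     * starting the capture loop. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        if (setpriority(PRIO_PROCESS, 0, -15) != 0)   /* 0 = this process */
            perror("setpriority");

        /* ... capture code ... */
        return 0;
    }

The shell equivalents are "nice -n -15 <program>" at startup or "renice -n -15 -p <pid>" afterwards.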

Or -- in the auxiliary processor realm -- wire in a PIC, AVR (Arduino UNO), or something with an ARM M-series processor (TIVA C123 [M4F], Arduino Due [M3] or Zero [M0]) and rig a serial port (or a fancier channel if you can support it) to pass decoded data and time-stamps (note: since these don't run a Time-of-Day clock, time-stamps would be system ticks since startup).

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

Dennis,

Yes. And that lies at the base of my problem. I was hoping that there would be a certain, standard way to deal with it.

That would be a good start. Though when the time between two time slices becomes too large, it's easy to lose multiple events in it (as, presumably, the later ones overwrite the earlier ones). Hence the FIFO idea.

That's certainly something to look into (I'm still pretty much a novice on Linux & the Pi, I'm afraid). But even though I can see that working for a single program, I think it would quickly become a problem when there are multiple (userland) programs running that all want to do some (near) realtime I/O capturing ...

In my case, I'm currently busy with IR reception, but I'm already thinking about how to do the same for, say, a 433 MHz RF receiver module.

:-) I've got a few Atmel controller chips lying about here and was already considering them too, but I would like to do the whole thing with as few external, special components as possible. Heck, the Pi is a full 'puter with more power than I could ever have imagined on such a board, and I'd like to (try to) use it to its fullest.

Update: after posting my initial question I stumbled over a remark on the LIRC page mentioning that some of its functionality has been absorbed into the kernel. Which led me to 'pigpio' and 'gpioSetAlertFunc' and family (not sure if that is what they meant, though).

... which will probably work a bit better than what I've got now (it delivers a 'tick count' to the callback), but will most likely suffer from losing events as described above. I'll just have to try it and see, I guess. :-)

Heck, maybe it already has some kind of buffering mechanism, ensuring that *all* events will be delivered to the (userland) program.

-- Doing some Googling for 'gpioSetAlertFunc' ...

Yup, it seems to do that:

[quote] The thread which calls the alert functions is triggered nominally 1000 times per second. The active alert functions will be called once per level change since the last time the thread was activated. i.e. The active alert functions will get all level changes but there will be a latency. [/quote]

It sure looks promising.
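From the description, usage would look something like this (untested sketch; pin 18 is just an example, and pigpio programs need to run as root):

    /* Sketch of how I understand gpioSetAlertFunc to work.
     * The 'tick' parameter is microseconds since boot. */
    #include <stdio.h>
    #include <stdint.h>
    #include <pigpio.h>

    #define IR_PIN 18                     /* example BCM pin */

    static uint32_t last_tick;

    static void ir_alert(int gpio, int level, uint32_t tick)
    {
        /* unsigned subtraction also handles the ~72 minute tick wrap-around */
        uint32_t usec = tick - last_tick;
        last_tick = tick;

        printf("gpio %d changed to %d after %u us\n", gpio, level, usec);
    }

    int main(void)
    {
        if (gpioInitialise() < 0)
            return 1;

        gpioSetMode(IR_PIN, PI_INPUT);
        gpioSetAlertFunc(IR_PIN, ir_alert);

        for (;;)
            gpioDelay(1000000);           /* just keep the program alive */

        gpioTerminate();                  /* not reached in this sketch */
        return 0;
    }

(compiled with -lpigpio -lrt -pthread)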

... Shucks ... Did I just answer my own question? Looks like it. :-D

Regards, Rudy Wieser

Reply to
R.Wieser
