Hi:
I am planning to evaluate four options for the architecture of a controller system for a research-laboratory optical internal-combustion engine. Basically, it takes a quadrature shaft-encoder signal and outputs 16 or more digital signals to control various gadgets attached to the engine and the experiment: fuel injectors, cameras, lasers, etc.
We currently use DOS and a program written a long time ago.
Approach 1: I am considering a different approach based on an embedded CPU that takes configuration parameters describing the waveforms to generate (start and stop encoder positions, basically) and fills a dual-port RAM with the actual waveforms. The encoder pulses are accumulated in a counter, and that counter addresses the RAM on the port opposite the CPU's memory bus. This scheme has no jitter and no substantial delay between an encoder tick and the output of the signal: hard real-time performance. There is also some chance the design can be adapted so that the embedded CPU knows where in the RAM the encoder is currently addressing. If it is desired to change the waveform on the fly, the CPU can write in new data by following behind the encoder count, so the next time around the waveforms will be different, without any discontinuity or glitches (a sketch of this follow-behind update appears below, after the upload step). Since an engine running at 3600 RPM with a 0.25-degree encoder produces 86,400 ticks per second (60 rev/s x 1440 ticks/rev), that is about the upper end of the event frequency to which the controller must respond.
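To make Approach 1 concrete, here is a minimal sketch in C of the table-fill step. The names and sizes (WaveformSpec, wave_ram, a 16-bit output word, the memory-mapped address) are hypothetical placeholders for whatever the real hardware map turns out to be:

    #include <stdint.h>

    #define TICKS_PER_REV 1440u   /* 0.25-degree encoder: 360 / 0.25 */

    /* Hypothetical dual-port RAM image: one 16-bit output word per encoder
       tick.  The encoder counter addresses the other port directly, so the
       CPU is out of the timing path once the table is filled. */
    static volatile uint16_t *const wave_ram = (volatile uint16_t *)0x80000000u;

    /* One waveform: output bit 'channel' is high from start_tick up to (but
       not including) stop_tick, both in encoder counts from the index pulse. */
    typedef struct {
        uint16_t channel;      /* 0..15 */
        uint16_t start_tick;   /* 0..TICKS_PER_REV-1 */
        uint16_t stop_tick;    /* 0..TICKS_PER_REV-1; may wrap past the index */
    } WaveformSpec;

    void fill_wave_ram(const WaveformSpec *specs, unsigned nspecs)
    {
        for (unsigned t = 0; t < TICKS_PER_REV; ++t) {
            uint16_t word = 0;
            for (unsigned i = 0; i < nspecs; ++i) {
                const WaveformSpec *s = &specs[i];
                int active = (s->start_tick <= s->stop_tick)
                           ? (t >= s->start_tick && t < s->stop_tick)
                           : (t >= s->start_tick || t < s->stop_tick);
                if (active)
                    word |= (uint16_t)(1u << s->channel);
            }
            wave_ram[t] = word;
        }
    }

Once the table is written, the counter side of the RAM does all the real-time work; the CPU only matters again when the waveforms need to change.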
Note: this is not a vehicle engine, it is a research engine, so the approaches described may seem unorthodox to those experienced with vehicle engine control; they are driven by the needs of an engine research lab experiment, not a vehicle. The research engine typically operates at fixed conditions, and for each condition data are taken, such as images of the in-cylinder combustion (these are *optical* engines) and various laser spectroscopy measurements.
My approach will likely use a PC as the interface to the embedded CPU. The user specifies the timing of the waveforms on the PC screen through some software, written in LabVIEW perhaps. This software is rather simple; it is basically just an editor for waveform parameters and does no real-time work itself.
The waveform data will be uploaded to the embedded CPU, which will fill the RAM and twiddle any bits relevant to the operation of the actual waveform generation hardware.
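Tying this to the on-the-fly update mentioned under Approach 1: if the embedded CPU can indeed read back the live encoder count, the refill might look like the sketch below. read_encoder_count() is hypothetical, and this assumes the CPU can comfortably outrun the roughly 86,400 ticks/s count:

    #include <stdint.h>

    #define TICKS_PER_REV 1440u

    extern volatile uint16_t *const wave_ram;  /* dual-port RAM image         */
    extern uint16_t read_encoder_count(void);  /* hypothetical count readback */

    /* Rewrite the table while the engine runs by staying strictly behind the
       live count: each entry is touched only after the counter has passed it,
       so the revolution in progress plays out the old waveform untouched and
       the next revolution starts cleanly on the new one. */
    void update_behind_counter(const uint16_t *new_table)
    {
        unsigned next = 0;                    /* next entry to rewrite */
        while (next < TICKS_PER_REV) {
            unsigned pos = read_encoder_count();
            if (pos < next)                   /* counter wrapped: the rest of */
                pos = TICKS_PER_REV;          /* the table is now behind us   */
            while (next < pos) {
                wave_ram[next] = new_table[next];
                ++next;
            }
        }
    }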
Approach 2: Generate the waveforms in software using a fast CPU running an OS capable of real-time response, such as RTLinux. There are then two options for the user interface. In the first case, the user interface runs on the same PC hardware as the RTLinux waveform-synthesis program; the waveform editor could be written as an X Window System application, or perhaps even in LabVIEW for Linux. In the second case, RTLinux runs on an embedded CPU in the hardware chassis that contains the shaft-encoder interface and the digital I/O buffering to the outside world, similar to Approach 1, but with a much more powerful CPU of course, capable of evaluating and outputting up to 64 bits of digital I/O within about 100 ns of an interrupt. The user interface would run on a PC and send configuration data, as in Approach 1.
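For scale, the per-tick work in Approach 2 is tiny if the output words are precomputed; here is a sketch of what the real-time handler would execute on each encoder edge. The 64-bit memory-mapped output register and the ISR hookup are assumptions, and the registration API would depend on the RTOS chosen:

    #include <stdint.h>

    #define TICKS_PER_REV 1440u

    /* Precomputed by the non-real-time configuration code: one 64-bit
       output word per encoder tick. */
    static uint64_t out_table[TICKS_PER_REV];
    static unsigned position;                 /* current encoder count */

    /* Hypothetical memory-mapped 64-bit digital-output register. */
    static volatile uint64_t *const DIO_OUT = (volatile uint64_t *)0xD0000000u;

    /* Hooked to the encoder-edge interrupt by the RTOS.  An increment, a
       compare, one load, and one store is about the least work possible
       per tick; anything more and a ~100 ns response is hopeless. */
    void encoder_tick_isr(void)
    {
        if (++position >= TICKS_PER_REV)
            position = 0;
        *DIO_OUT = out_table[position];
    }

    /* Hooked to the once-per-revolution index pulse so the software count
       cannot drift from the true shaft position. */
    void index_pulse_isr(void)
    {
        position = 0;
    }

Even with a handler this small, the interrupt entry latency of the CPU, chipset, and OS dominates the response time, which is really what the questions below come down to.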
Approach 3: Use Windows to do the same thing described in Approach 2, case 1.
Approach 4: Use Windows to set up commercial counter-timer PCI cards to count the encoder and output the waveforms. This is what the original DOS machine does, but it is inflexible and becomes very cumbersome when many channels of waveforms are attempted.
The question is: can Windows do the real-time control? That is, can a Windows machine be configured to respond to hardware interrupts with priority given to the interrupt handler, as in RTLinux or other RT OSes, so that there is negligible jitter and a consistently low delay from receipt of an interrupt to the output of data?
For that matter, can RTLinux or any other RTOS on a 32-bit CPU respond on the 100-200 ns timescale? (At 86,400 ticks per second the encoder edges are about 11.6 microseconds apart, so a 100-200 ns response would keep the timing error to roughly 1-2% of one 0.25-degree tick.)
Thanks for comments.