Can Windows do real-time control?

Hi:

I am planning to evaluate three options for the architecture of a research laboratory optical internal combustion engine controller system. Basically it takes a quadrature shaft encoder signal, and outputs 16 or more digital signals to control various gadgets attached to the engine and the experiment, like fuel injectors, cameras, lasers, etc.

We currently use DOS and a program written a long time ago.

Approach 1: I am considering a different approach based on an embedded CPU that takes configuration parameters describing the waveforms to generate (start and stop encoder positions, basically) and fills a dual-port RAM with the actual waveforms. The encoder pulses are counted, and the counter addresses the RAM through the port opposite the CPU's memory bus. This scheme has no jitter and no substantial delay between the encoder ticks and the output of the signal. This is hard real-time performance. There is also some chance that it can be adapted so that the embedded CPU knows where in the RAM the encoder is currently addressing. If it is desired to change the waveform on the fly, the CPU can write in new data by following behind the encoder count, so the next time around the waveforms will be different, without any discontinuity or glitches. Since an engine running at 3600 RPM with a 0.25-degree encoder will produce 86400 ticks per second, this is about the upper end of the frequency of events to which the controller must respond.
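
For concreteness, the fill step on the embedded CPU might look something like this rough C sketch (the RAM address, channel count, and parameter layout are all made up for illustration):

#include <stdint.h>

#define TICKS_PER_REV  1440u   /* 0.25 degree encoder: 360/0.25 counts */
#define NUM_CHANNELS   16u     /* one output bit per channel           */

/* Hypothetical dual-port RAM: one 16-bit output word per encoder tick.
 * The encoder counter addresses the same RAM through the other port. */
static volatile uint16_t *const wave_ram = (volatile uint16_t *)0x80000000u;

struct channel_params {
    uint16_t start;   /* encoder count where the output goes high */
    uint16_t stop;    /* encoder count where the output goes low  */
};

/* Rebuild the whole table from start/stop positions; once written,
 * the hardware streams these words out with no CPU involvement. */
static void fill_waveforms(const struct channel_params p[NUM_CHANNELS])
{
    for (uint16_t tick = 0; tick < TICKS_PER_REV; tick++) {
        uint16_t word = 0;
        for (unsigned ch = 0; ch < NUM_CHANNELS; ch++) {
            uint16_t s = p[ch].start, e = p[ch].stop;
            /* allow windows that wrap past the index pulse */
            int active = (s <= e) ? (tick >= s && tick < e)
                                  : (tick >= s || tick < e);
            if (active)
                word |= (uint16_t)(1u << ch);
        }
        wave_ram[tick] = word;
    }
}

On-the-fly changes would just be a variant of this that starts rewriting at a point safely behind the current encoder address.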

Note, this is not a vehicle engine; it is a research engine, so the approaches described may seem unorthodox to those experienced with vehicle engine control. The method described here is based on the needs of an engine research lab experiment, not a vehicle engine. The research engine typically operates at fixed conditions, and for each condition data is taken, such as images of the in-cylinder combustion (these are *optical* engines) and various laser spectroscopy data.

My approach will likely use a PC as the interface to the embedded CPU. The user can specify the timing of his waveforms on the PC screen, through some software written in LabView perhaps. This software is rather simple; it's basically just an editor for waveform parameters. It doesn't actually do any real-time activity.

The waveform data will be uploaded to the embedded CPU, which will fill the RAM and twiddle any bits relevant to the operation of the actual waveform generation hardware.

Approach 2: Generate the waveforms in software using a fast CPU running an OS capable of real-time response, such as RTLinux. There are then two options for the user interface. In the first case, the user interface could run on the same PC that runs the RTLinux waveform-synthesis program. The waveform editor could then be written as an X Window System application, or perhaps even LabView for Linux. In the second case, RTLinux would run on an embedded CPU in the hardware chassis that contains the shaft encoder interface and the digital I/O buffering to the outside world, similar to approach 1, but in this case the CPU would of course be much more powerful, capable of evaluating and outputting up to 64 bits of digital I/O within about 100ns of an interrupt. The user interface would run on a PC and send configuration data, similar to approach 1.
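
The real-time side of approach 2 is essentially a table lookup in an interrupt handler. A minimal C sketch follows (the output register addresses and table are invented, and the RTOS-specific interrupt registration and acknowledgment calls are left out since they differ between RTLinux versions):

#include <stdint.h>

#define TICKS_PER_REV 1440u

/* Hypothetical memory-mapped digital I/O card: two 32-bit output registers. */
static volatile uint32_t *const out_lo = (volatile uint32_t *)0xD0000000u;
static volatile uint32_t *const out_hi = (volatile uint32_t *)0xD0000004u;

static uint64_t wave_table[TICKS_PER_REV]; /* one 64-bit word per encoder tick */
static uint32_t tick;                      /* current encoder position          */

/* Handler attached to the encoder interrupt.  All of the work happens
 * here at interrupt level, so the only latency is interrupt entry;
 * no scheduler or user-space hand-off is involved. */
void encoder_irq(void)
{
    uint64_t w = wave_table[tick];
    *out_lo = (uint32_t)w;
    *out_hi = (uint32_t)(w >> 32);
    if (++tick >= TICKS_PER_REV)
        tick = 0;
    /* acknowledge the interrupt at the card and the PIC here */
}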

Approach 3:

Use Windows to do the same thing described in Approach 2, case 1.

Approach 4:

Use Windows to set up commercial counter-timer PCI cards to count the encoder and output waveforms. This is what is done on the original DOS machine, but it is inflexible and becomes very cumbersome when attempting many channels of waveforms.

The question is: Can Windows do the real-time control, that is, can a Windows machine be configured to respond to hardware interrupts with priority given to the interrupt handler like in RTLinux or other RT OSes, so that there would be negligible jitter and consistently low delay time from receipt of an interrupt to the output of data?

For that matter, can RT Linux or any other RTOS for a 32-bit CPU respond at the 100-200ns timescale?

Thanks for comments.

--
_______________________________________________________________________
Christopher R. Carlen
Principal Laser/Optical Technologist
Sandia National Laboratories CA USA
crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.
Reply to
Chris Carlen

Sounds interesting.

I've actually recently *used* a DOS-based PC approach in an in-line manufacturing system which calibrates and validates medical pump part assemblies (motor, cam, encoders). In this system, the encoder is 10,000 counts per rev and we operate the motor at up to 300 RPM (5 RPS), for a maximum of 50,000 quadrature changes per second. I sample the I/O in a tight assembly loop. The ISA bus I/O I use is the standard 6-cycle, 8-bit transfer, and the ISA bus operates at 8MHz (which is normal these days), so that is 750ns per I/O read. I emulate the HCTL2020 chip (we decided not to use it due to availability and cost) and use software to handle the quadrature changes. The instruction stream and my other tasks (ADC reading, too) in the middle still allow me to handle a motor speed of 600 RPM, which gives me a comfortable margin.
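
The software quadrature handling is nothing exotic, by the way; the heart of it is the usual two-bit state table, roughly like this C fragment (the actual ISA port read is left as a placeholder supplied by the caller):

#include <stdint.h>

/* Index = (previous AB << 2) | current AB.
 * +1 = one count forward, -1 = one count back,
 *  0 = no change or an invalid (skipped) transition. */
static const int8_t quad_step[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

static uint8_t prev_ab;     /* last sampled A/B pair (2 bits) */
static int32_t position;    /* software position counter      */

/* Called once per pass of the polling loop with freshly sampled
 * A/B bits (the real ISA port read goes where 'ab' comes from). */
static void quad_update(uint8_t ab)
{
    ab &= 0x3;
    position += quad_step[(prev_ab << 2) | ab];
    prev_ab = ab;
}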

You are, though, talking about an upper end which would justify some testing to validate the continued use of an x86-based system. However, you might just make it. The software development tools, the amount of RAM, the low cost, easy availability, and small size of PC-104 systems (if you want to use them) may argue for staying with a DOS-based system. Hard requirements may argue the opposite, of course, as might your own level of familiarity with DOS development.

But if I were modifying an existing system based already on DOS, I think I'd try and evaluate just how well I could approach these tougher "corners" under DOS. If it appears to work okay, I'd probably stick with DOS development. The tools are lots better, dirt cheap, and well vetted after all these years.

I guess I don't know what an *optical* engine actually is.

How is this handled? Is there a sapphire light pipe drilled and tapped through, via the spark plug ceramics, in order to gain entry into the engine cylinder without impacting the operation, for example? (I've actually worked with a guy who used to do this very kind of thing, by the way, and I still design and build instrumentation capable of monitoring the flame front in real time.)

"Windows" means a lot of things, these days. Everything from Win3.1 to Win95 to Win98 to WinNT to WinXP to Win2000 to...

But not as a practical matter. Yes, as an impractical one. In the versions I'm somewhat familiar with, you *can* write your own VxDs or ring-0 drivers and install them. You can get control once when Windows is booting and is still in "real mode" and again later as it boots when it has transitioned into protected mode operation. After those initialization events, you can pretty much sit there in ring 0 and intercept the interrupts as they happen. And if you do everything there, in the interrupt code, you can ensure reasonably low jitter and low phase delays in your responses. But if you use the messaging to start ring-3 processes acting on the events, I believe the answer is "forget it."
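
Just to show the shape of the "do everything in the interrupt code" idea on the NT side -- a hedged sketch only; the context structure, port, and table are invented, and all the DriverEntry/IoConnectInterrupt plumbing that hooks the routine up is omitted:

#include <ntddk.h>

/* Invented per-device context; in a real driver this would live in the
 * device extension set up at initialization time. */
typedef struct _DEV_CTX {
    PUSHORT OutPort;        /* I/O port of the digital output card */
    USHORT  Wave[1440];     /* one output word per encoder tick    */
    ULONG   Tick;
} DEV_CTX, *PDEV_CTX;

/* Interrupt service routine, connected with IoConnectInterrupt().
 * Everything happens here at DIRQL; nothing is deferred to a DPC or
 * a user-mode thread, which is what keeps the jitter tolerable. */
BOOLEAN EncoderIsr(PKINTERRUPT Interrupt, PVOID ServiceContext)
{
    PDEV_CTX ctx = (PDEV_CTX)ServiceContext;

    UNREFERENCED_PARAMETER(Interrupt);

    WRITE_PORT_USHORT(ctx->OutPort, ctx->Wave[ctx->Tick]);
    if (++ctx->Tick >= 1440)
        ctx->Tick = 0;

    /* clear the interrupt at the card here, then claim it */
    return TRUE;
}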

And DOS underneath Windows won't help. A straight DOS boot without EMM386.EXE will allow you full control, though. And there, you can get very good results.

This, I don't know.

However, I must have missed recognizing this requirement earlier. No x86 system with connections on the ISA bus can get anywhere near 100-200ns. PCI interrupts, similarly (they are mapped via the APIC bus to the CPU, but I doubt you'll see anything like that kind of interrupt response -- 33MHz PCI is 30ns signalling, and there are the chipset responses to it plus APIC communication delays and then the CPU response.)

Jon

Reply to
Jonathan Kirwan

Heh heh, oh, it's actually much more amazing than that! We have engines with windows in the piston, head, and even quartz cylinders. An extended piston allows a mirror under the piston to look right up into the combustion chamber. You can basically look right into these engines and watch them fire, or take photos.

Actually, we used fiber-instrumented spark plugs too, in some old experiments. Nowadays, all experiments image combustion through full-aperture optics installed in the heavily modified engines. We have about 5 optical engine labs right now: 3 diesel and 2 spark.

Good day!

--
_______________________________________________________________________
Christopher R. Carlen
Principal Laser/Optical Technologist
Sandia National Laboratories CA USA
crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.
Reply to
Chris Carlen

Labview RT: I have never used it, but you might want to look into it.

100-200ns sounds like a job for a control-oriented DSP such as the TMS320F24xx or ADSP-21xxx.

Cheers Martin

Reply to
Martin Riddle

Funny you should mention that. We have a new open JTAG programmer project that consists of an XC2S50 or XC2S100 FPGA and an FTDI245 on a small PCB (3.5" x 1.5"). The FPGA config is downloaded via the USB side. It has about 40 I/O bits available, so it should be easily capable of doing the task. Maybe not enough BlockRAM for a straightforward approach, but you could fill the BlockRAM with event locations and encoder counts so you only update the output and increment the BlockRAM address per event...
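
In C terms (purely to illustrate the logic the FPGA would implement; the sizes and names are arbitrary), the per-event scheme amounts to:

#include <stdint.h>

/* Each BlockRAM entry holds the encoder count at which something
 * changes and the new state of all output bits from that point on. */
struct event {
    uint16_t count;     /* encoder position of the event     */
    uint64_t outputs;   /* output word to drive from then on */
};

static struct event events[256];   /* sorted by count, wrapping */
static uint16_t     num_events;
static uint16_t     next;          /* BlockRAM address / event index */

/* Evaluated on every encoder tick: only when the counter reaches the
 * next stored event do the outputs change and the address advance, so
 * a small RAM covers an arbitrary number of ticks. */
static uint64_t on_tick(uint16_t encoder_count, uint64_t outputs)
{
    if (num_events && encoder_count == events[next].count) {
        outputs = events[next].outputs;
        next = (uint16_t)((next + 1) % num_events);
    }
    return outputs;
}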

Peter Wallace

Reply to
Peter C. Wallace

Well, then obviously you can do real-time "control" with "windows".

SCNR, Marc

Reply to
jetmarc

~10us per tick.

The short answer is "no" ... you can't simply write a program that will do that. The complicated answer is "yes" but it depends on the version of Windows and your skill at writing kernel mode drivers.

You would have to place all the waveform playback in a kernel mode driver. You could edit the waveform in your application and download it to the driver for playback.
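
The application side of that split is just an ioctl to hand the edited table to the driver -- something like this sketch (the device name and IOCTL code are invented, and the matching kernel-mode driver isn't shown):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Invented IOCTL and device name, just to show the application/driver split. */
#define IOCTL_WAVE_DOWNLOAD \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    unsigned short wave[1440] = { 0 };  /* filled in by the waveform editor */
    DWORD returned;
    HANDLE h;

    h = CreateFileA("\\\\.\\WaveGen0", GENERIC_READ | GENERIC_WRITE,
                    0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "driver not loaded?\n");
        return 1;
    }

    /* Hand the whole table to the kernel-mode driver; playback then
     * happens entirely at interrupt level inside the driver. */
    if (!DeviceIoControl(h, IOCTL_WAVE_DOWNLOAD, wave, sizeof wave,
                         NULL, 0, &returned, NULL))
        fprintf(stderr, "download failed\n");

    CloseHandle(h);
    return 0;
}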

Windows' main difficulties for RT are:

A) Kernel-mode interrupt response times are 5 - 50 us depending on CPU speed. Propagating the interrupt to the application level can take over 1 ms.

B) There is a thread priority inversion in the GDI code that interferes with context switching while the screen is being updated. Once the screen update begins, the program is stuck in the GUI thread until the update completes.

C) The thread scheduler is preemptive, but the algorithm is fixed and the timeslice quantum is not easily changeable (though it can be done).

All versions of Windows up to NT4 have the priority inversion fault in the GDI code. I haven't checked out ME, 2K or XP yet. For Windows 3.1, which is cooperatively tasked, switching tasks is not allowed during screen update. For preemptive versions, other programs continue to run normally ... only threads in the program updating the screen are affected.

Also note that DirectDraw does not have the priority inversion fault, but is much more complicated and harder to use than GDI.

I write industrial machine vision QA apps on Windows NT. They are hard RT, but the cycles are measured in 100s of milliseconds. Even so, I have to pack a lot of processing into those cycles and I do count milliseconds in my code. Not quite the same as counting nanoseconds, I know 8-)

George

Reply to
George Neuner

Chris, in response to your question "Can Windows do the real-time control, that is, can a Windows machine be configured to respond to hardware interrupts with priority given to the interrupt handler like in RTLinux or other RT OSes, so that there would be negligible jitter and consistently low delay time from receipt of an interrupt to the output of data?": the answer is yes, but the real question is how well. For example, I've used Windows and Visual Basic to do real-time data acquisition for up to 32 serial ports that have data acquisition equipment attached (fiber optic power measurements). In my case, I was happy to keep a 110 ms schedule, which is 2X the 55 ms resolution Windows normally provides. If you need hard real-time that is faster than 55 ms, then I would suggest a good real-time OS like QNX.

Dennis,

Reply to
Dennis

Another option would be to have the software cast into silicon together with a NIOS kernel in an FPGA.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar
