Linux question -- how to tell if serial port in /dev is for real?

I'll bite. I do most of my desktop support in Python as well, but always assume that at any time the combination of Python and the OS may wallop me for any number of milliseconds, and that I just need to suck up and deal, and find some other way to handle it when I need to ensure realtimeyness.

How do you guarantee microsecond level response from Python (and I assume Linux)?

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.

Actually I explained it in my original post: I'm running the software on a desktop, but it's written with a Linux serial driver and a cool GUI that are easy to lop off. At some point all of the (by then hopefully well-tested) stuff in the middle will get shoved into an itty bitty processor on a board with two buttons and a 2 line by 16 character display.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com

Yes I did -- thanks.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com

Linux has a realtime scheduler but guaranteeing microsecond response is not realistic because of nondeterministic cache misses and that sort of thing. For soft realtime maybe it's feasible. Milliseconds are easier than microseconds of course.

Python is interpreted and slower than C, but if the realtime loop is simple, then you can generally have at least a "soft" bound on the latency. Memory management is by reference counting so there should be no GC pauses unless you're building up large connected structures and freeing them in one piece.
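On that note, CPython's cyclic collector can be switched off entirely for the duration of a latency-sensitive loop, since reference counting alone reclaims acyclic garbage immediately. A minimal sketch (the loop body is a made-up stand-in for real work):

```python
import gc
import time

gc.disable()  # reference counting still reclaims acyclic objects immediately

try:
    for _ in range(1000):
        t0 = time.perf_counter()
        buf = bytes(64)          # stand-in for handling a 64-byte serial frame
        elapsed = time.perf_counter() - t0
        # no cyclic-GC pass can interrupt the loop while collection is off
finally:
    gc.enable()                  # restore normal collection afterwards
```

This only helps with GC-induced pauses, of course; the OS scheduler can still preempt you.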

--
Paul Rubin

Or you can use something like RTAI, which gives you hard real time on Linux.

Bye Jack

--
Yoda of Borg am I! Assimilated shall you be! Futile resistance is, hmm?

Is CONFIG_PREEMPT_RT dead?

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com

... provided, of course, the processor has neither instruction nor data caches. If either is present then the ratio of mean:max latency rapidly becomes very significant.

Even a 486 with its tiny caches showed a 10:1 variation in interrupt latency depending on what was/wasn't in the caches. (IIRC that was measured with a tiny kernel, certainly nothing like the size/complexity of a Linux kernel.)

--
Tom Gardner

Aren't interrupt routines in some permanently-cached portion of the MMU?

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com

There's no such thing on typical x86s.

--
Robert Wessel

The TAPI functions allow you to share a device with other applications. So if your PC had a modem that was also used for faxing, you could ask nicely if the modem was available, which is not possible with the traditional API - the fax program would always have the serial port/modem open for receiving faxes.

--
Robert Wessel

No, and once an MMU is involved all the paging information might or might not be cached. Double whammy.

--
Tom Gardner

So you're telling me that Intel made a processor that, by design, could not service interrupts in a deterministic fashion? Hard to believe.

Is that also the case for the present-day Intel architectures?

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com

I should add that real-time operation is therefore not possible on such processors, regardless of what operating system is used. This just doesn't sound right to me...

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com

I believe that is the case. They got down to millisecond-ish resolution.

I was using PPC boards at the time, so I didn't try it. ARM may be better, or not.

When people need fast and deterministic, the answer has generally been to use an FPGA or some high-speed PIC.

Yes. Everything is highly buffered, although with Windows there are some services in the multimedia sphere that may be better*. I've never seen an ASIO audio driver that gets much below 1 msec, but that may be partly to limit turnarounds on exchanging data with the card/bus device.

*may be true of Linux; dunno.

You should see some of the things online gamers have to deal with related to latency.

--
Les Cargill

The MinGW GCC port I use for 'C' programming on Windows has a "winsock.h" and a "winsock2.h" that both offer select().

I presume it's available for Microsoft toolchains but have no way of finding out for sure.
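For what it's worth, the same multiplexing is exposed in Python's standard select module (on Windows it only accepts sockets, which mirrors the winsock heritage). A small sketch using a local socket pair as a stand-in for whatever descriptor you're actually watching:

```python
import select
import socket

# socketpair() gives two connected sockets; one end stands in for a device
a, b = socket.socketpair()
b.sendall(b"ping")

# wait up to 1 second for 'a' to become readable
readable, _, _ = select.select([a], [], [], 1.0)
if readable:
    data = a.recv(16)   # -> b"ping"

a.close()
b.close()
```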

--
Les Cargill

It was even worse with CPUs like the Geode. The video system on the chip "stole" the CPU at every HSYNC for some microseconds. Really nice for real-time operation.

--
Reinhardt

Yes, that is correct.

There is always a tradeoff between deterministic real-time behaviour and high throughput. You can't optimise for both - either in a cpu or in an OS. So processors like desktop x86 devices have long latencies in reacting to interrupts, which is countered by using buffers, DMA, etc. Processors like Cortex M devices have short reaction times, but less throughput per clock. And the same applies to OSes.

You can change the compromises to some extent. Linux has options in the kernel to control the balance, such as by controlling the preemption of kernel calls (disabled preemption means smoother flow for greater throughput on servers, enabled preemption means faster reactions to user input on a desktop), and the OS and cpu can work together by locking interrupts or processes to particular cores in order to avoid cache flushes.
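On Linux those knobs are reachable from user space too. A hedged sketch (Linux-only calls; SCHED_FIFO needs root or CAP_SYS_NICE, and the core number and priority here are just example values):

```python
import os

# Pin this process to CPU 0 so migration between cores can't flush its cache state.
os.sched_setaffinity(0, {0})

# Ask for a realtime FIFO priority; silently fall back if we lack privilege.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
except PermissionError:
    pass  # still pinned, just running under the normal scheduler

print(os.sched_getaffinity(0))   # -> {0}
```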

--
David Brown

On 06/08/2014 04:38, Tim Wescott wrote:
> On Tue, 05 Aug 2014 16:15:44 -0700, Paul Rubin wrote:
>
>> Tim Wescott writes:
>>> All of the desktop serial-port stuff I've done in the last decade has
>>> been in support of embedded work, ...
>>> So I'm constrained to C or C++.
>>
>> If this is about embedded Linux, Python works great for that.
>
> Will Python run on an ARM Cortex M0 with 64k of ROM and 8K of RAM?
>
> With room left over for actual application code?

Linux on an ARM Cortex M0? Fantastic... could you give us more details about the board? Is it a custom board? Are you able to run a full Linux (not uClinux) on an M0-based board?

--
pozzugno

If we ignore bugs and errata (which we shouldn't) then the end result is deterministic but the time delay depends /significantly/ on the current state of the processor, MMU and memory system.

And all those are effectively unpredictable.

Yes, in spades.

The last Intel processor that made a nod to *hard* realtime was the i860, which had a small instruction cache and an instruction to lock down whatever was in the cache.

--
Tom Gardner

That depends on your requirements. Soft realtime certainly is possible. For hard realtime you will have to determine the mean:max latency and "derate" the processor appropriately.

As I noted, you needed 10:1 for the i486, and I have no idea whatsoever what you need for a current Intel processor.

The problem is not confined to Intel; it *must* occur wherever there are caches. After all, the whole point of caches is to speed up things *on average*, so by definition there must be some sequences that perform worse than average.

Your job, for hard realtime systems, is to determine the pessimal sequence :) (Optimal sequence be damned!)
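Measuring that derating factor is straightforward, if sobering. A rough sketch of characterising the mean:max ratio of a periodic wait (the numbers will vary wildly with load, kernel config, and platform):

```python
import time

samples = []
for _ in range(200):
    t0 = time.perf_counter()
    time.sleep(0.001)                 # ask for a 1 ms delay
    samples.append(time.perf_counter() - t0)

mean = sum(samples) / len(samples)
worst = max(samples)
print(f"mean {mean*1e3:.3f} ms  max {worst*1e3:.3f} ms  ratio {worst/mean:.1f}")
```

Run it on an idle machine and again under heavy load; the gap between the two max figures is exactly the derating problem described above.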

--
Tom Gardner
