Linux question -- how to tell if serial port in /dev is for real?

The only microcontroller I have used with a cache in which I wanted to be particularly careful about fast responses was a Freescale MPC5674F. This has two CPUs (which can be run independently, or in lock-step), each with its own cache. It could hardly be called easy to use, but it is a very powerful device. Cache lines can be locked in different ways, and you can do all sorts of fiddling with different rules for different memory areas (such as making some parts uncached, some parts write-back, and some parts write-through).

For more "normal" microcontrollers, such as fast M3/M4 cores with caches, you won't usually get quite as many features like that. But you will always have a solid chunk of static RAM (perhaps /all/ the onboard ram) that can be accessed quickly without caching - you put your critical routines and data there, and enable caching for everything else (in flash, off-chip ram, etc.).

Higher-end micros, such as those with Cortex-R or PPC cores (like the MPC5674F), will also have some fast-access RAM even if the main onboard RAM is slower than the CPU. Sometimes this will be combined with the cache - you can configure all or some of the cache to be static RAM. (Actually, I believe that is possible on at least some x86 CPUs - I don't know the details, but I have heard of them being used without any external memory!) Faster ARM cores also often have "tightly coupled" memories, which can be used for this sort of critical code and data.
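
As a concrete sketch of "putting your critical routines and data there": with GCC you would typically do it with section attributes. The section names below (.ramfunc, .fastdata) are assumptions - they must match whatever sections your particular linker script actually places in SRAM or TCM, and your startup code must copy .ramfunc from flash to RAM before it is called.

/* Sketch: pinning time-critical code and data into fast on-chip RAM
 * using GCC section attributes. Section names are hypothetical and
 * must match the linker script. */
#include <stdint.h>

__attribute__((section(".fastdata")))
static volatile uint32_t sample_buf[256];      /* data in uncached SRAM/TCM */

__attribute__((section(".ramfunc"), noinline))
void drain_fifo(volatile uint32_t *fifo_reg)   /* code executed from RAM */
{
    for (uint32_t i = 0; i < 256; ++i)
        sample_buf[i] = *fifo_reg;             /* deterministic access time */
}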

Reply to
David Brown

Is significant overestimation really that bad a thing?

It is of course a bad thing if you ship millions of units a year, but if you ship hundreds or a few thousand units a year, it is not so significant. For instance, being able to use the same hardware as in non-HRT applications simplifies the logistics.

Reply to
upsidedown

For hard real time, any estimation is A Bad Thing :)

Agreed, subject to the above.

Reply to
Tom Gardner

We're talking about Linux, which means there's not just caches, but also an MMU, preemptive multitasking, etc. I think microsecond HRT in this environment is simply not on the menu. The BeagleBone Black has a pair of real-time coprocessors built into the main CPU chip for exactly that reason.

Reply to
Paul Rubin

Usually the main memory (or at least the memory interface bandwidth) is very slow compared to cache and processor cycles. If dynamic RAM is used, loading a cache line would typically mean

1 x RAS cycle + n x CAS cycles. Depending on the memory bus width, and hence the size of "n", this will take a while. By pessimistically assuming that any memory byte access would cause a full DRAM cycle, you should be on the safe side, compared to any speculative execution issues.

Of course, if instructions are always on a word boundary and Word/DWord data accesses are properly aligned, you could use a single full RAS/CAS sequence time for Word/DWord accesses.
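
As a back-of-the-envelope illustration of that bookkeeping (all cycle counts here are invented example numbers, not any particular DRAM's datasheet values):

// Sketch: pessimistic memory-access timing in bus clock cycles.
// The RAS/CAS cycle counts and line size are made-up assumptions.
#include <cstdio>

int main()
{
    const int ras_cycles = 5;   // assumed row-activate (RAS) time
    const int cas_cycles = 3;   // assumed column-access (CAS) time per beat
    const int line_bytes = 32;  // cache line size
    const int bus_bytes  = 4;   // memory bus width, so n = line/bus beats

    const int n = line_bytes / bus_bytes;
    std::printf("cache line fill: %d cycles\n", ras_cycles + n * cas_cycles);

    // The pessimistic rule from above: charge every byte access a full
    // RAS+CAS sequence; a properly aligned word also costs one sequence.
    std::printf("worst-case single access: %d cycles\n",
                ras_cycles + cas_cycles);
    return 0;
}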

For any kind of WCET analysis, you really need some kind of program (a tool) these days. I have done pre-emptive scheduler task-switching worst-case performance analysis for the 6502/6800/6809 using manual (pen and paper) methods. Anything more complex and you can't do that analysis by hand :-).

Then use the cache lookup time plus the RAS/CAS sequence time for each memory access.

One should remember that in a soft/hard RT environment, you really do not want to load the CPU close to 100 %. For soft RT, I would consider anything above a 60 % short-term (1 s) average load as overloading.

If you have multiple RT tasks at different priorities, you can reliably predict only the highest-priority task's latencies (based on interrupt and kernel scheduler latencies).

The latencies for the next-highest task depend not only on those latencies but also on the worst-case execution time of the highest-priority task. In practice, you can have only one HRT task and multiple soft-RT tasks below it, unless you redo the worst-case execution time analysis after each HRT task software update.

Reply to
upsidedown

Most RT extensions are actually true RT kernels: you put Linux, Windows, or another desktop operating system into the NULL task, consuming the CPU cycles not needed by the RT tasks.

Of course, this Linux/Windows NULL task will schedule the various applications based on its own internal scheduling algorithm, such as priority-based or even time-sharing scheduling (nice). The RT kernel does not know anything about these low-priority activities.

Reply to
upsidedown

My first thought on this was, "Yeah! That's a cool way to crack this nut." But what about the tasks in the NULL task (i.e., kernel tasks) that disable interrupts? One of the requirements for hard real-time is that there is an application-specific limit on the maximum time interrupts can be disabled.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

FWIW, I've just started using the Qt Serial Port library, and it has some magic that does this. It apparently uses the udev library to interrogate the system for serial ports.

At this point I don't know or care about the details: as long as the magic works, I'll be a happy, ignorant magic-user. (Linux just has too many layers between me and the hardware, and I just don't CARE how it's done, as long as it works).
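
For anyone who does care, the non-magic version is roughly the following (a sketch against Qt 5's QSerialPortInfo; on Linux the backend apparently walks udev, so /dev entries with no device behind them should not be listed):

// Sketch: listing "real" serial ports via Qt's QSerialPortInfo.
// Assumes Qt 5 with the serial port module (QT += serialport).
#include <QCoreApplication>
#include <QSerialPortInfo>
#include <QTextStream>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QTextStream out(stdout);

    // availablePorts() queries the platform backend (udev on Linux),
    // so it reports the ports that actually exist on the system.
    const QList<QSerialPortInfo> ports = QSerialPortInfo::availablePorts();
    for (const QSerialPortInfo &info : ports) {
        out << info.systemLocation() << ": "
            << info.description() << " ("
            << info.manufacturer() << ")\n";
    }
    return 0;
}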

--
Tim Wescott 
Control system and signal processing consulting 
www.wescottdesign.com
Reply to
Tim Wescott

Tim,

I'm glad to see someone else here praising Qt. I've been using it for a few months and find it absolutely wonderful (98 percent of the time...).

E.g., a while back I used it for its audio interface abstractions. They worked on my desktop system and on a BeagleBone Black with a Sabre USB connected.

I've also written a couple of database-centric utility apps for my wife and me, for use around the house. They work beautifully!

Overall it's a powerful way to generate user interfaces and its abstractions save a lot of time.

/end{QtDrumBeating}

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

On the one hand it's bloatware.

On the other hand, if I'm careful about how I write things, I can write PC-side software using Qt for the GUI, with a boatload of software in the middle that also gets compiled into stuff that's embedded into customer products and has not a whiff of Qt about it.

So it works well for me.

I think that if I were _just_ writing for the PC I'd use Java or Python or something like that -- but I'm not, so I don't.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

How so? True, you're not going to get 2k executables, but in the days of 2TB drives, who gives a rat's behind?

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

The size of the created file, and the thought of all the signals and pointers and whatnot going on behind the scenes just to say "Hello World" pains my aesthetic sensibilities.

Who _should_ give a rat's behind? Probably no one. But I never claimed to be rational when it comes to my aesthetic sensibilities.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

Personally I'd prefer to have two processors, such as the dual cores in a Xilinx Zynq, running different operating systems: one a "real" RTOS and the other Linux. Communication between the two would be via trivial custom hardware in the FPGA fabric. Apparently that's possible, but I haven't done it (yet?).

Have to check about memory contention, though.

That's presumably equivalent to the coprocessors in the BeagleBone Black.

Reply to
Tom Gardner

Good point.

However, with any reasonable hardware (e.g. atomic test-and-set / read-modify-write instructions) there is very limited need for the non-RT OS to disable interrupts to handle mutual exclusion etc.
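
To illustrate the point (a minimal C++11 sketch, not tied to any particular OS): with an atomic read-modify-write primitive, the lock never has to touch the interrupt mask.

// Sketch: mutual exclusion with an atomic test-and-set instead of
// disabling interrupts (C++11 std::atomic_flag).
#include <atomic>

static std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void with_lock(void (*work)())
{
    // test_and_set compiles down to the CPU's atomic RMW instruction.
    while (lock_flag.test_and_set(std::memory_order_acquire)) {
        // spin; a real design would bound this wait or yield here
    }
    work();                                       // protected region
    lock_flag.clear(std::memory_order_release);   // release the lock
}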

More than a decade ago I evaluated some RTOS extensions for Windows and Linux, but the _soft_ real-time performance of the standard Windows/Linux OS was adequate (+/- 1 ms), provided that strict hardware (headless systems etc.) and strict software selections were used.

It seems that the old RMX/86 kernel is still alive and kicking in the form of the INtime kernel, running Windows applications in the null task.

Reply to
upsidedown

On Windows, interrupt routines themselves must be located in non-paged memory, but they don't have to be present in the cache.

Disabling caches only eliminates the non-deterministic behavior caused by caches. On modern PCs there are many other sources of non-deterministic timing behavior inside and outside the CPU: branch prediction may get it right most of the time but not always, access to main memory isn't constant-time, other devices interfere on shared busses (DMA), chipsets try to be clever at unexpected moments... etc.

There is no practical way to give a 100% guaranteed response time on a modern PC, even when using an RTOS, due to the complexity and the number of unknowns in a PC. Derating by the observed worst-case timing plus a significant margin can be an option if meeting the deadlines 99.9999% of the time is acceptable.

Reply to
Dombo

Yes, if it is "significant" :-)

Say you specify the amount of processing to be done and the processor to be used, aiming at a 50% CPU load. If your WCET tool over-estimates by a factor of 2, you are in trouble -- and you get into trouble even sooner if you must also satisfy margin requirements. If your tool over-estimates by a factor of 10, you must aim for at most a 10% CPU load, and so on.
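
The arithmetic, spelled out (numbers invented for illustration):

// Sketch: how a WCET overestimation factor k shrinks the usable load.
#include <cstdio>
#include <initializer_list>

int main()
{
    const double true_load = 0.50;   // intended real CPU utilisation
    for (double k : {1.0, 2.0, 10.0}) {
        std::printf("k = %4.1f: analysed load %5.1f %%, "
                    "max designable true load %5.1f %%\n",
                    k, 100.0 * k * true_load, 100.0 / k);
    }
    return 0;
}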

With the current high cost of cache misses, even a small increase in the fraction of memory accesses that the analysis tool cannot prove will be cache hits will increase the WCET bound considerably.

True, but the problem with choosing a very powerful processor is that it may be too complex for static WCET analysis, or there may be no such analysis tool available for that processor. You can then fall back to a hybrid static/dynamic analyzer such as RapiTime, but then you must have a good test suite, and you get an estimated WCET bound which has a certain (hopefully small) risk of being underestimated, unlike the static analysis case where (assuming the analysis tool has no bugs) the computed WCET bound is always safe.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

Quite right.

I've only pointed out that, in order to have hard real-time operation, it is /necessary/ to avoid caches. I have never claimed that avoiding caches is /sufficient/ for hard real-time operation.

Quite right.

The only difficulty is in adequately demonstrating that your chosen derating factor is sufficient to satisfy your objectives.

Reply to
Tom Gardner

Ok, but then the analysis assumes 100% cache miss rate, which can give a hugely overestimated and probably useless WCET bound.

The point is that the speculation may evict stuff from the caches, leading to later cache misses when the program accesses this evicted stuff; these misses would not have occurred if the initial memory access had been a cache miss which would have prevented the speculation from being done at all. So the relative slowness of main memory balances out and the timing anomaly remains.

There are several other forms of timing anomalies in many current processors. Designing fast processors without anomalies is not easy but the HRT academics are trying.

Some processors even have "domino effects" in which the occurrence of one "timing accident", such as a cache miss, typically within a loop, causes further cache misses or other effects which delay *every* later iteration of this loop. In other words, the initial timing accident is never "forgotten"; the processor never regains its original "balance".

I fully agree, but even such programs have combinatorial-explosion problems when the target processor has timing anomalies.

I did not claim that these processors and programs cannot be analysed; the question was just if there exist processors which are slower with caches than without. And for certain programs that happens.

Standard schedulability analysis methods such as Response-Time Analysis work for any number of HRT tasks at different priorities, assuming that you have WCETs for each task in isolation (and a suitably constrained model of inter-task interactions). These methods account for the pre-emption of lower-priority tasks by higher-priority tasks.

A difficulty here is that caches add to the delay caused by pre-emption, because a task that has been pre-empted and is then resumed has probably lost much of its cached data, and will run slower for a while before it has reloaded its working set into the cache. This is called Cache-Related Preemption Delay (CRPD). A number of ways to avoid CRPD or include it in WCET and schedulability analysis have been proposed, and some seem to work not too badly.
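
For concreteness, a sketch of the standard response-time recurrence, R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) x C_j, with an optional per-preemption CRPD cost bolted onto each interfering job (the task set below is invented):

// Sketch: Response-Time Analysis for fixed-priority tasks, with an
// assumed per-preemption CRPD penalty added to each interfering job.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Task { double C, T; };   // WCET and period, highest priority first

// Returns the response time of task i, or -1 if it misses its deadline
// (taken here to equal its period).
double responseTime(const std::vector<Task> &ts, std::size_t i, double crpd)
{
    double R = ts[i].C;
    for (int iter = 0; iter < 1000; ++iter) {
        double next = ts[i].C;
        for (std::size_t j = 0; j < i; ++j)     // higher-priority interference
            next += std::ceil(R / ts[j].T) * (ts[j].C + crpd);
        if (next == R) return R;                // fixed point reached
        if (next > ts[i].T) return -1;          // deadline missed
        R = next;
    }
    return -1;
}

int main()
{
    std::vector<Task> ts = {{1, 5}, {2, 12}, {4, 40}};  // made-up task set
    for (std::size_t i = 0; i < ts.size(); ++i)
        std::printf("task %zu: R = %.1f\n", i, responseTime(ts, i, 0.2));
    return 0;
}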

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

No, static WCET analysis works well for several forms of caches. Airbus jets use cached processors in their flight control systems, with WCET analysis tools.

But as has been said repeatedly in this thread, modern PCs have many other sources of hard-to-predict execution time. Some may be amenable to static WCET analysis, but I don't know of any off-the-shelf tools for it.

Some of the people working on "probabilistic WCET analysis" claim that the mathematical tools of "extreme-value statistics" can provide that demonstration. I am not convinced, but I may be wrong.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

There is more to it than aesthetic sensibilities, though most people seem to have stopped noticing it.

Bloated code runs slower - often much slower - simply because it takes more time to transfer to/from memory, and it wastes memory, thus causing swapping. Last but not least, programmers who routinely write bloated code are simply incapable of writing good code.

Today's OS code (etc.) is bloated by more than one order of magnitude - I'd say more than two orders really, often three and above. Most people, having not seen much else, just accept it and get on with it, I suppose.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI


Reply to
Dimiter_Popoff
