Linux serial port dropping bytes

Can you point us to the two systems you are comparing, one of which is yours, with the advantages you claim above?

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

David Brown wrote:

Reply to
Didi

Freescale have, on more than one Power-based part, a relatively new design they call a PSC (programmable serial controller). They generally offer more than one would ask for: 512-byte FIFOs with programmable request thresholds, and they can work as UARTs, codecs, AC97, you name it. The 5200 costs $18 at 1000+, not a bad deal for such a good design.
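To illustrate the idea of a threshold-driven FIFO, here is a minimal sketch. The psc_* helpers and buffer_put() below are made-up placeholders, not the real MPC5200 register interface, so treat this as the shape of a driver rather than working code:

#include <stdint.h>

extern void psc_set_rx_threshold(uint16_t level);  /* hypothetical */
extern uint16_t psc_rx_level(void);                /* hypothetical */
extern uint8_t psc_read_byte(void);                /* hypothetical */
extern void buffer_put(uint8_t c);                 /* hypothetical */

void psc_init(void)
{
    /* Interrupt only when the 512-byte FIFO is half full, so the
       CPU takes one interrupt per 256 characters instead of one
       per character. (A real driver also wants a receiver-timeout
       interrupt so short messages don't sit below the threshold.) */
    psc_set_rx_threshold(256);
}

void psc_rx_isr(void)
{
    /* Drain whatever has accumulated, in one pass. */
    while (psc_rx_level() > 0)
        buffer_put(psc_read_byte());
}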

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------


Reply to
Didi

This is basically what the common OS does.

Please don't strip attributions from your replies.

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.
Reply to
CBFalconer

I agree. The worst part is that it seems to be becoming more prevalent. The Philips LPC2000 series has these horrible UARTs, which have been badly implemented as well. If one needs 8 UARTs or more, it seems that only '550 UARTs are available these days. It is amazing how probably the worst UART ever designed has become the "Industry Standard".
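To put rough numbers on why the '550's 16-byte FIFO drops bytes (my illustration, not from the thread): with the usual 14-byte receive trigger level there are only 2 bytes of headroom left, so at 115200 baud the ISR has well under 200 us to start draining the FIFO.

/* Back-of-envelope latency budget for a '550-style UART.
   Assumes 115200 baud, 8N1 (10 bit times per character). */
#include <stdio.h>

int main(void)
{
    const double baud = 115200.0;
    const double byte_time_us = 1e6 * 10.0 / baud;   /* ~86.8 us */

    const int fifo_size = 16;
    const int trigger_level = 14;    /* typical '550 RX trigger */
    const int headroom = fifo_size - trigger_level;

    /* Once the trigger fires, the ISR must start reading before
       'headroom' more bytes arrive, or the FIFO overruns and the
       UART silently drops data. */
    printf("character time : %.1f us\n", byte_time_us);
    printf("latency budget : %.1f us\n", headroom * byte_time_us);
    return 0;
}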

Regards
Anton Erasmus

Reply to
Anton Erasmus

So the common OS stays in a sleep loop and does the bulk of its work while in an ISR. I suggest you stick to your anti-top-posting campaign... :-). (Hopefully you appreciate the smiley; I know you have valuable experience, it just does not seem to be in processing interrupts or serial interfaces.)

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------


Reply to
Didi

Except much of it has been. An OS runs until something, usually an interrupt, occurs and demands that the current process be changed (possibly for time expiration). Then it does some heavy work to change the visible memory and other things, including the code being executed, and resumes the new process. This all has to be carefully co-ordinated with such beasts as interrupt-driven serial ports. The present generation of multi-processor chips complicates this further.

Note the OS is never in any sleep loop. It is just a set of functions awaiting calls from the running process. Some things can't be called from there and need to be prevented, either by traps or by refraining from them.

I restored the attributions.

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.
Reply to
CBFalconer

Yep - also the Sharp ARMs (LH79520 etc.) and the Analog Devices ADuC7xxx-series ARMs. All the ones I work with, basically :(

--

John Devereux
Reply to
John Devereux

Dimiter,

I appreciate your contributions.

And I am not going to get drawn into an argument about it.

But for the record: your insistence on destroying the "threading model" of these posts is a real PITA...

--

John Devereux
Reply to
John Devereux

Polling code
============

main.c
------

#include <stdbool.h>
#include <stdint.h>

extern volatile uint8_t doSomethingFlag;
extern void sleep(void);        /* wait for the next interrupt */
extern void doSomething(void);

void main(void)
{
    while (true) {
        sleep();
        if (doSomethingFlag) {
            doSomethingFlag = false;
            doSomething();
        }
    }
}

interrupt.c
-----------

#include <stdbool.h>
#include <stdint.h>

extern void getData(void);
extern bool ready(void);
extern void wakeup(void);

volatile uint8_t doSomethingFlag;

void interrupt(void)
{
    getData();
    if (ready()) {
        doSomethingFlag = true;
        wakeup();
    }
}

Work done during interrupts code
================================

main.c
------

#include <stdbool.h>

extern void sleep(void);

void main(void)
{
    while (true)
        sleep();
}

interrupt.c
-----------

#include <stdbool.h>

extern void getData(void);
extern bool ready(void);
extern void doSomething(void);

void interrupt(void)
{
    getData();
    if (ready()) {
        // Re-enable interrupts here if required
        doSomething();
    }
}

Does it make sense now? The "doSomething()" function is also much simpler in the second version, since you don't have issues with locking or synchronising data (such as UART buffers) shared between the main loop and the interrupt function. If "doSomething()" takes a long time, then it's easy to enable interrupts while it runs.
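To make the locking point concrete, here is a minimal sketch (my illustration, not code from the thread) of the kind of shared ring buffer the polling version forces you to reason about. disable_interrupts()/enable_interrupts() stand in for whatever primitives the platform really provides:

#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 64

extern void disable_interrupts(void);   /* hypothetical */
extern void enable_interrupts(void);    /* hypothetical */

static volatile uint8_t buf[BUF_SIZE];
static volatile uint8_t head;   /* written only by the ISR       */
static volatile uint8_t tail;   /* written only by the main loop */

void buf_put_from_isr(uint8_t c)
{
    uint8_t next = (uint8_t)((head + 1u) % BUF_SIZE);
    if (next != tail) {          /* otherwise drop: buffer full */
        buf[head] = c;
        head = next;
    }
}

bool buf_get(uint8_t *c)
{
    bool ok = false;
    /* The lock guards against the ISR updating 'head' mid-read.
       On targets where a byte-sized read is atomic it could be
       dropped - exactly the sort of analysis the polling version
       obliges you to do. */
    disable_interrupts();
    if (tail != head) {
        *c = buf[tail];
        tail = (uint8_t)((tail + 1u) % BUF_SIZE);
        ok = true;
    }
    enable_interrupts();
    return ok;
}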

This is really nothing more than standard multi-threaded event-driven code - it's just that the interrupt "thread" is "woken" directly by the interrupt.

Since we are talking about Linux, it's worth noting that this second structure is *exactly* the way interrupts used to be handled in Linux. Modern kernels use the first version. The reasons for the changes are mainly scalability to SMP, scheduling issues under varying loads, making the kernel pre-emptable, and splitting the context for the critical interrupt code (which must be run by the kernel) and the "doSomething", which can often be done in user mode. If these issues don't apply - which is the case in most small embedded systems - then the second structure is often a better choice.
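One place this split is directly visible in later kernels is the threaded-IRQ API: the hard handler does only the critical work and returns IRQ_WAKE_THREAD to defer the rest to a schedulable kernel thread. A rough sketch, with hypothetical mydev_* helpers:

#include <linux/interrupt.h>
#include <linux/types.h>

extern bool mydev_irq_pending(void *dev);   /* hypothetical */
extern void mydev_ack_irq(void *dev);       /* hypothetical */
extern void mydev_process(void *dev);       /* hypothetical */

static irqreturn_t mydev_hardirq(int irq, void *dev)
{
    /* Critical part: must run in hard interrupt context. */
    if (!mydev_irq_pending(dev))
        return IRQ_NONE;
    mydev_ack_irq(dev);
    return IRQ_WAKE_THREAD;          /* wake the handler thread */
}

static irqreturn_t mydev_thread(int irq, void *dev)
{
    /* The "doSomething" part, now pre-emptable and schedulable. */
    mydev_process(dev);
    return IRQ_HANDLED;
}

/* Registered at probe time with:
   request_threaded_irq(irq, mydev_hardirq, mydev_thread,
                        0, "mydev", dev); */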

Reply to
David Brown

OK, I'll try something else - as in this message: providing a pointer to the complete context. If the link does not work, it will not be my fault, and I will at least have indicated that there is a complete context which may apply.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

In reply to:

formatting link

Reply to
Didi

You don't have to add a link to the complete context in a Usenet post (note that the google link is a link to an archive page - Usenet posts do not have URLs). Just snip the parts of the post that are not of interest in your reply, ensure that the attributions at the top of the post are intact (for levels that are not snipped away completely), and add your comments to the middle or bottom of the post as appropriate.

Then we can go back to disagreeing about interrupts and program structure :-)

Reply to
David Brown

I've programmed the delay-line software of the ESO telescope array in Paranal, Chile: 1000 interrupts per second, with a mirror that had to be at the correct nanometer at the correct microsecond (14 nm RMS). Does that count as experience? Pre-emptible multiple interrupt levels were essential (VxWorks): network < disks < motor < mirror.

The system spent a considerable part of its time (>10%) in the highest-priority ISR. A considerable floating-point calculation was needed at the highest interrupt level: the system's reaction to actual atmospheric fluctuations. At this point a simulation of the behaviour of the cart was taken into account, hence all the f.p., in double precision: words holding 80 meters expressed in nanometers. (On the nanometer scale carbon fiber looks like caoutchouc.) This had to be ready before the command to the mirror could be sent out. That message was under a real-time constraint.

That interrupt routines should only set a flag for a polling main system is just bunk. Proponents of such dogma wouldn't have got this system running on spec (as it did). Maybe in simple embedded systems you get away with adhering to the dogma. By the way: dogma it is. I have heard nothing from proponents except: "This is the proper way to do it."

Greetings, Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
Albert van der Horst

As frequently happens on this newsgroup, there is a broad spectrum of embedded experience represented here.

We usually wind up with the simple "it depends on your requirements."

In your case, you have what many of us would call a hard real-time system constraint. Your design solved the problem. Great!

Many of us, however, are not constrained so much by the real-time aspect; rather, we must deal with highly variable requirements across a range of products. For us, reliability and extensibility, along with reusable software components and architectures, are the norm. Any real-time aspects of the systems are often on a scale that does not push the limits of the available technology. The use of an RTOS and thread priorities is sufficient to meet our timing constraints.
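For a concrete picture of that approach, here is a minimal sketch of the usual RTOS pattern, using FreeRTOS names for definiteness (the uart_* and process_data() functions are hypothetical): the ISR stays short and simply releases a high-priority task that does the real work.

#include "FreeRTOS.h"
#include "semphr.h"

extern void uart_drain_fifo(void);   /* hypothetical */
extern void process_data(void);      /* hypothetical */

static SemaphoreHandle_t dataReady;  /* created at startup with
                                        xSemaphoreCreateBinary() */

void uart_isr(void)
{
    BaseType_t woken = pdFALSE;

    uart_drain_fifo();                       /* the urgent part */
    xSemaphoreGiveFromISR(dataReady, &woken);
    portYIELD_FROM_ISR(woken);               /* switch immediately if a
                                                higher-priority task
                                                was just woken */
}

void uart_task(void *params)
{
    (void)params;
    for (;;) {
        xSemaphoreTake(dataReady, portMAX_DELAY);
        process_data();     /* long-running work at task priority */
    }
}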

I suspect that Didi falls into this camp and you do not :-)

There is no one-size-fits-all in this business.

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Kennesaw, GA, USA 30144    http://mnmoran.org

"So often times it happens, that we live our lives in chains
  and we never even know we have the key."
"Already Gone" by Jack Tempchin (recorded by The Eagles)

The Beatles were wrong: 1 & 1 & 1 is 1
Reply to
Michael N. Moran

That, I think, is precisely Albert's and my point - in this business, there are few universal rules. That's why we object strongly to Dimiter's claims that small, fast interrupt routines are the *only* way to write interrupt functions.

I've used "do your work in the ISR" structures precisely to make the software simpler, more reliable, and more reusable (across similar applications) - the smaller, faster and more predictable code is a bonus. But of course, the best method depends on the situation.

Reply to
David Brown

So the rest of your system has been fine with 100 ms latency (10% of 1 second). Do you call this real time? I certainly do not.

You have misunderstood what I stated. Setting a flag to be processed by a polling loop is one of a vast variety of cases. The general rule is, again, "do in the ISR only what you cannot do reasonably elsewhere" (once you get unmasked you are no longer counted as in an ISR; this is just a variety of context switching). If you think you need a 100 ms long ISR, think again; this is never the case. If you do not need latency in the few-microseconds range, you can get away with a design like you describe, but do not call it good practice, nor call it real time. There is little if any dogma I can fall for. I am just being practical - you do get that after about 1/3 of the first million lines of code you have written.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Original message:

formatting link

Reply to
Didi

Just because 10% of 1 second is 100 ms does not mean that is the latency! Where did that come from? Why not say 10% of 1 microsecond is 100 ns? Then the latency is better :) !

More likely it is something like 10% of 1/1000 seconds, or 100us. But of course it could be anything without knowing the details.

[...]
--

John Devereux
Reply to
John Devereux

It's real time if it meets the requirements. "Real time" doesn't have a predefined limit of X milliseconds of interrupt latency.

--
Grant Edwards                   grante             Yow!  Wow! Look!! A stray
                                  at               meatball!! Let's interview
                               visi.com            it!
Reply to
Grant Edwards

You might want to think that one through once more. He has 1000 interrupts per second, with interrupts at that priority taking 10% of the time. That means that each interrupt function takes about 100 us, which is then the latency for the next level of interrupts. That doesn't sound too bad - I've worked with microcontrollers that have close to 100 us overhead for interrupt processing (preserving critical registers, handling the vectored jump, and restoring the context before exit).

In that case, you don't understand what "real time" means. It means his system must react to certain events within certain time limits. Even if your initial misunderstanding about latencies were correct, and lower-priority interrupts were disabled for 100 ms at a time, the system could still be "real time" if 100 ms reactions are good enough.

This is *your* general rule - it applies to the way *you* write your embedded systems. It also happens to apply to many other embedded systems (including many of those I write), but it is not some sort of law that all embedded programmers must follow. It is good enough advice unless you have reasons for doing something different for the system in hand.

You are changing your rule here - you've been claiming that the *only* way to do time-consuming non-critical tasks triggered by an interrupt is to set a flag (or similar mechanism) to be polled from the program's main loop or other non-time-critical code. Now you are saying that it is okay to do work in the interrupt function as long as interrupts are re-enabled, because you are no longer in the ISR? Well, of course I agree with you that you can often re-enable interrupts (or some interrupts) when doing the time-consuming part of the interrupt response, but you are still very much within the context of the ISR, and there are still plenty of situations (Albert's being an example) when you want to keep all other interrupts disabled during the time-consuming processing.
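For what it's worth, the pattern under discussion looks something like this sketch (enable_interrupts() and the uart_* calls are platform stand-ins, not any particular vendor's API):

extern void uart_read_fifo(void);     /* hypothetical */
extern void uart_clear_irq(void);     /* hypothetical */
extern void enable_interrupts(void);  /* hypothetical */
extern void do_something(void);       /* the long-running work */

void uart_isr(void)
{
    /* Urgent part, with interrupts still masked. */
    uart_read_fifo();
    uart_clear_irq();    /* ack first, or re-enabling re-enters us */

    /* Other interrupts may now nest, but we are still in the ISR's
       context: same stack, and the interrupted code stays blocked
       until do_something() finishes. */
    enable_interrupts();
    do_something();
}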

Real-time programming is about doing the time-critical processing within the required time limits - if you can do that most simply and reliably with long interrupt functions and no nested interrupts, then that's the correct way to do it.

If you think you know all about every different embedded system and its requirements, then think again - *that* is never the case.

I've written a system that spends over 90% of its time inside its uninterruptable interrupt functions. The system works perfectly (and is small enough that I did the cycle counting to prove it).

If your system needs interrupt latency to be in the range of a few microseconds, then you've got the wrong hardware (barring a few very specialised designs).

If the system can be shown to work well and reliably, is well coded in a maintainable way, and is at least roughly as good and efficient as any other design, then it is good practice - because it does exactly what is required.

If the system responds to its needs within the time limits, then it is real time - that's all that is required.

It is dogma when you can't give any explanation for your stance except that it is a "general rule" which you always follow.

I'm not suggesting that "keep your interrupt functions short and fast" is a bad rule - it is simply that many systems can be well designed and well implemented without following it, and indeed some systems will be significantly better if they ignore the "rule".

Reply to
David Brown

OK. 100 us is a huge latency nowadays as well. And since he has at least one more interrupt, make this 200 us. Still worse.

This argument is pointless as long as we take for granted that doing work in the ISR, beyond the work which must be done there, is easier. It is not. It just indicates misguided thinking in the planning phase.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Original message:

formatting link

Reply to
Didi

I agree, I should have written "low latency" rather than "real time".

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Original message:

formatting link

Reply to
Didi
