Linux serial port dropping bytes

Without knowing the precise requirements and constraints that have to be addressed, there is no such thing as 'the right way'. Though the general rule of doing as little as possible in the interrupt routine is sound, there are cases where this is not a viable or the best solution.
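
For the original question (bytes being dropped on a serial port), that general rule usually looks something like the minimal sketch below. The register names (UART_DATA, UART_STATUS) are hypothetical stand-ins for whatever the real part provides; the point is only that the ISR moves each received byte into a ring buffer and everything else happens in the main loop.

#include <stdint.h>

/* Hypothetical memory-mapped UART registers - substitute the real ones. */
#define UART_DATA    (*(volatile uint8_t *)0x40001000u)
#define UART_STATUS  (*(volatile uint8_t *)0x40001004u)
#define RX_READY     0x01u

#define RX_BUF_SIZE  256u                 /* power of two for cheap wrap */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint16_t rx_head;         /* written only by the ISR   */
static volatile uint16_t rx_tail;         /* written only by main loop */

/* Keep the handler short: grab the byte, stash it, get out. */
void uart_rx_isr(void)
{
    while (UART_STATUS & RX_READY) {
        uint8_t  byte = UART_DATA;        /* reading clears the request */
        uint16_t next = (uint16_t)((rx_head + 1u) & (RX_BUF_SIZE - 1u));

        if (next != rx_tail) {            /* drop on overflow rather than block */
            rx_buf[rx_head] = byte;
            rx_head = next;
        }
    }
}

/* Called from the main loop; returns -1 when the buffer is empty. */
int uart_getc(void)
{
    if (rx_tail == rx_head)
        return -1;

    uint8_t byte = rx_buf[rx_tail];
    rx_tail = (uint16_t)((rx_tail + 1u) & (RX_BUF_SIZE - 1u));
    return byte;
}

The main loop (or a driver task) calls uart_getc() at its leisure; as long as it keeps up on average, the buffer absorbs the bursts and nothing gets dropped.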

Reply to
Dombo

OK, so I'll correct myself: it is the *only* way to achieve the best latency a CPU is capable of in multi-interrupt systems.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
Didi

Try to understand this:

LATENCY DOESN'T FLIPPING MATTER.

You're obsessed with latency, when in the end, latency doesn't really matter _at_all_. Latency is simply one of many different measurements of a system. And it's not one that the end user cares about at all.

What matters is meeting the system's timing requirements.

You could design a system with very low interrupt latency, but if it doesn't meet the system timing requirements, IT'S A BAD DESIGN. While another system that does a lot of work in ISRs would have high latency, but if it meets the system timing requirement IT'S A BETTER DESIGN THAN YOUR "LOW LATENCY" DESIGN.

This has been explained to you over and over and over, but you seem incapable or unwilling to understand.

Since interrupt latency is the only metric you worry about, I hope your customers are happy with systems that have low interrupt latency regardless of whether they do what they're supposed to.

I can design a system with an interrupt latency of under 1ns. It won't do anything useful, but it's extremely low latency, so it must be great!

--
Grant Edwards
Reply to
Grant Edwards

Yes, I've been stating the obvious - because you seem incapable of understanding it (you've got it wrong *again* here).

When you *do* need speed, and you *do* have requirements for timing limits, you must choose the *best* method to meet those requirements. Can you agree on that?

The best method for dealing with a particular problem will depend on the exact circumstances and requirements of the problem. Can you agree with that?

In embedded programming, there is seldom a single solution that fits all problems. Still with me?

In Albert's case (as in some of my own systems), doing substantial work during an interrupt handler made the system and the program simpler, faster to write and test, and easier to confirm against the timing requirements. And contrary to your beliefs, programs structured this way (when used appropriately) are smaller and faster, and therefore run on smaller, cheaper and slower hardware.
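
As a rough sketch of that kind of structure (hypothetical registers and a made-up control law, not Albert's mirror system or any real code from this thread): a periodic control interrupt can sample, compute and write the output before returning, so the complete response happens at interrupt priority.

#include <stdint.h>

/* Hypothetical peripheral registers for a periodic control loop. */
#define ADC_RESULT   (*(volatile uint16_t *)0x40002000u)
#define DAC_OUTPUT   (*(volatile uint16_t *)0x40002004u)

static int32_t  integrator;               /* controller state   */
static uint16_t setpoint = 2048u;         /* illustrative value */

/* Timer interrupt, e.g. every 50 us: the complete control action
 * happens here, so the output is updated with the minimum possible
 * delay after the sample - nothing is deferred to a task. */
void control_timer_isr(void)
{
    int32_t error = (int32_t)setpoint - (int32_t)ADC_RESULT;

    integrator += error;                          /* simple PI controller */
    int32_t out = (error * 3) + (integrator / 64);

    if (out < 0)    out = 0;                      /* clamp to DAC range */
    if (out > 4095) out = 4095;

    DAC_OUTPUT = (uint16_t)out;
}

The cost, of course, is that everything below this priority waits until the handler has finished - which is exactly the trade-off being argued about here.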

So are you still trying to tell us that *you* know *your* method of structuring interrupts is always "better" than any other method? I understand entirely the principle of doing as little as possible during interrupts - and I've written programs that work that way. I also understand a variety of different ways of handling interrupts, multiple tasks, and different latency and throughput requirements. So when I design a program, I pick a suitable model for the job. Are you trying to tell me that I'm wrong, and instead of thinking about and analysing the task in hand, I should always aim for minimal interrupt functions, based solely on *your* unsubstantiated claims?

His first post said "a mirror that had to be at the correct nanometer at the correct microsecond". That sounds like micro-second accurate timing to me.

No, you haven't explained anything - you have *claimed* it a number of times. Albert has given details of a system where doing work during the interrupt was the best way to structure his system, and I have described systems of my own that use different structures. I even gave pseudo-code showing how it gives simpler and clearer source code, as you specifically asked for.

So what's next? Some guff about all embedded systems needing an RTOS, as that's the "only right way to do it"? Or perhaps a claim that the PowerPC is the "only right choice of processor"?

Reply to
David Brown

Thank you for restating the obvious as part of the chorus.

Latency needs to be addressed, as in this thread, WHEN IT IS NECESSARY TO BE ADDRESSED.

Why so many of you are keen to tell me how you live without caring about latency I do not understand (actually I do, but I want to keep on being nice). Now tell me how you don't need a processor at all to plant potatoes because this is how your grandparents did it, blah-blah-blah.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
Didi

Whatever you say. You seem to think that if the obvious has been hard for you to grasp, it must be so for the rest of the world. Trust me, it is not. You may want to stop digging.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
Didi

It is *not* hard for me, or for the rest of this group - everyone else in this thread understands perfectly well that you structure your interrupts to suit the task in hand.

You've been at this game for two decades, and judging by your other posts (such as recent ones in ppc threads), you are good at what you do. But you are at odds with everyone else here, and I know that at least some of these people have long experience and are good at what they are doing.

When someone disagrees so determinedly with everyone else around them, I can think of perhaps four possible reasons. One is that you're incompetent. I think we can rule that one out.

A second one is that you know something that few others know, despite our experience and knowledge. If that's the case, then I'd really like to hear about it - but so far, you've given no evidence or explanation for your viewpoint.

A third one is that you have simply got it wrong, but never realised it. This can happen to anyone - in a recent thread here, there was someone who had developed embedded systems for decades under the belief that serial speeds were normally given in bytes per second, rather than bits per second or baud. If that's the case, then I hope you can learn something here - this group is about sharing of information.
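
(To spell out why that particular confusion matters: with the common 8N1 framing each character costs ten bit times - a start bit, eight data bits and a stop bit - so a 115200 baud link carries at most roughly 11520 characters per second. Quoting that figure as "bytes per second" rather than baud overstates the throughput by about a factor of ten.)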

The most likely explanation, I think, is that this is a misunderstanding and that we are not really discussing the same thing, although I'm not sure where we could have lost track.

mvh.,

David

Reply to
David Brown

Another explanation would be that though he is very experienced, his range of experience is limited to a relatively narrow field, and that within that narrow field his way may very well be the only/right way to do it.

At the beginning of my career I had strong convictions about what was the best programming language, what was the best OS and what was the "right" way to do things; everyone who thought differently had just got it wrong. And I was right too, every time...until I switched companies and had to deal with different problems and different constraints. Over the years I have become much more reluctant to advise a certain solution when I don't fully grasp the problem and the context.

From both a technical and an application perspective, the embedded field is just too wide for a single solution to always yield optimal, or at least acceptable, results. I guess this is what keeps this line of work interesting.

Reply to
Dombo

It is wide indeed, and so is the world. And when we talk about one of the characteristics of processors - IRQ latency - we talk about that characteristic. Joining the chorus whining about how much else there is to it and how you can get away without caring about it only indicates that this particular side of programming is hard to understand for many programmers - apparently many of them manage to spend their entire career without ever understanding it. Feel free to dream that my experience is limited to only that field if that makes you feel better. But do not preach the wrong techniques just because it often does not matter that they are wrong - unless your purpose is to mislead some beginners willing to learn.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
Didi

... snip ...

Oh boy. Now I have become a horrible example used to scare the children. Actually that goof was limited to that thread - it had not persisted over the generations. :-) Really.

--
Chuck F (cbfalconer at maineline dot net)
Reply to
CBFalconer

I didn't mention any names... but it does serve as an example that anyone can make a mistake without realising it.

Reply to
David Brown

It would certainly be worse to make a mistake while realising it, wouldn't it?

Reply to
Lanarcam

David ... I think you went a bit too far with this statement. There certainly *appears* to be a vocal majority beating Didi around the ears for his unwavering opinions, but I think some of us understand his point of view quite well.

One difference that I can see between the two viewpoints is one of optimization priority:

1) Didi: Apply the general rule and optimize later.
2) Others: Optimize first ... we don't need no stinkin rulze.

If you happen to be one of those fortunate enough to know every system requirement before setting about the business of implementation, then either approach will work just fine.

However, for those of us who must plan for the unexpected requirement, this interrupt-latency rule-of-thumb is more than just a little important, as it helps prevent a major redesign caused by using a technique which, though perhaps convenient and allowing for simplification, assumed that a high-latency interrupt would not be a problem (regardless of who is to blame for the new requirement ;-)).

Not every embedded system is a well-defined static roll-your-own fixed purpose system, and planning for the unknown is just good practice.

--
Michael N. Moran
Reply to
Michael N. Moran

I can only judge by what others have posted in this thread - I welcome other opinions.

Note also that I am not beating Didi - I disagree with his opinion here, and can't understand his apparent unwillingness to take a wider viewpoint. In other cases, I have had nothing but respect for him and his helpful posts - which is why I find this case particularly hard to understand, and particularly frustrating.

That could well be an explanation for the differences in viewpoints. Certainly, if you must design your system with a view to expansion for unforeseen modifications, then you need to use more general design techniques than when you are confident that you know the full requirements.

But this group is about embedded development - we cover a wide area. Certainly there are occasions where you want a flexible and general system, but there are also occasions when you need a tightly specified design that does exactly what is required and no more. I've worked with systems where the program fills over 95% of the available code space - there is no room for using general rules, because that would mean bigger devices that are outside the budget. Rules-of-thumb are exactly what it says on the tin - you can use them when you don't have any better way of doing things.

Didi is apparently under the assumption that people whose code does substantial work during an interrupt are not interested in interrupt latency - as far as I can tell, he believes that doing work during interrupt routines leads to high interrupt latencies. In fact, the opposite is true for critical interrupts - when your critical interrupt routines do all the work required for an event, the latency for reacting to the event is as low as it possibly can be on the given hardware. Latency for non-critical interrupts is increased, but latency for the critical interrupts is minimised.

In many systems with time-critical interrupts, you only have one (or a few at most) critical interrupts - other interrupts are much lower priority (if that's not the case, you've probably got your hardware wrong). Doing the work during an interrupt routine is a method of sacrificing response time for the non-critical interrupts to improve the response of the critical interrupt.
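
To put some purely illustrative numbers on that trade-off: if the single critical interrupt does all of its work in a handler that runs for 20 microseconds, the critical event is fully dealt with 20 microseconds (plus entry overhead) after the interrupt is taken, while every lower-priority interrupt can now be held off by up to those 20 microseconds on top of its own latency. Split the same work into a 2 microsecond handler plus a deferred task, and the worst-case hold-off for the others drops to about 2 microseconds - but the critical event itself is not finished until the task gets to run.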

It is well known that "premature optimisation is the root of all evil". But it is equally well known that all generalisations are false. You should pick an interrupt structure that is appropriate for the design in question, which may well be "do as little as possible in the interrupt routine". But it may also be to respond completely to critical events during an interrupt routine - or any other structure that fits the design. The only wrong method is to pick the interrupt structure based on a rule-of-thumb, then try to see how to fit it into the design.

Reply to
David Brown
