Linux serial port dropping bytes

... snip ...

It's even simpler than that. If the requirement that an external signal S be replied to with suitable controlling adjustments within T seconds is met within that time REGARDLESS of what else is going on, you have met the requirement for a real time system. If that timing constraint can ever be missed, you have failed miserably.
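To make that concrete, here is a toy sketch in C of what such a hard deadline looks like in code. The names and the deadline value are invented for illustration, not taken from anyone's project:

    /* Hard real-time in miniature: signal S must always be answered
       within T.  All names here are hypothetical. */

    #define T_DEADLINE_US 1000        /* the required response time T */

    extern void apply_control_adjustment(void);   /* hypothetical */

    void signal_s_isr(void)           /* runs when S arrives */
    {
        apply_control_adjustment();
        /* The real-time claim only holds if the worst-case path from
           S asserting to this function returning is provably below
           T_DEADLINE_US - every time, not just on average. */
    }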

'real-time' != 'time-share'.

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.
Reply to
CBFalconer

Then you did not try the SAM7/9/AVR32 UARTs... Atmel wins many designs on the superior UARTs. I had one 700k units/year design which was won solely on one of the UART features.

I think they can be improved in many ways, but they are still quite good.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

We *do* know the details - he said the interrupt in question was at 1000 times per second - so 100us is correct.

It's only a "huge" latency if that is relevant in the system in question. Unless you actually know the needs of his system (which clearly does *not* need faster than 100us latency for other interrupts), your statement is as absurd as claiming "8 MHz is terribly slow nowadays" or "4K flash is tiny nowadays". I know that some people build embedded systems based around processors at several hundred MHz - it would apparently surprise you to learn that other people sometimes base them around processors at a few MHz, and may be perfectly happy with millisecond timings (and it's still "real time").

These numbers, of course, are totally fabricated just so that you can pretend the latencies are twice as high as his system description indicated. And even if the interrupt functions took 200 us - so what?
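For what it's worth, the arithmetic is easy to check. A quick sketch in C, using only the figures quoted in this thread (everything else is illustrative):

    #include <stdio.h>

    int main(void)
    {
        const unsigned tick_hz   = 1000; /* interrupts per second  */
        const unsigned isr_us_lo = 100;  /* quoted worst-case ISR  */
        const unsigned isr_us_hi = 200;  /* the doubled figure     */

        /* load (%) = interrupts/s * us-per-ISR / 10000 */
        printf("CPU load at 100 us/ISR: %u%%\n", tick_hz * isr_us_lo / 10000);
        printf("CPU load at 200 us/ISR: %u%%\n", tick_hz * isr_us_hi / 10000);

        /* Worst-case added latency for a lower-priority interrupt is
           ONE ISR execution (100-200 us), not the sum over a second. */
        return 0;
    }

That is 10% and 20% load respectively - hardly catastrophic if the system's requirements tolerate it.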

Those of us who write a variety of embedded systems, using a variety of structures according to the needs of the system in question, are perfectly aware that doing more work in an ISR can sometimes be the best solution. We don't take anything for granted - we *think* about the problem, and the best way to design the software. The only one taking anything for granted here is you in your sweeping assumptions that a particular design choice is ideal in all circumstances.

No, it's an indication of *thinking* in the planning phase, rather than trying to cram the problem into a single model.

Reply to
David Brown

It's certainly not huge on some of the platforms I work with (where the CPU clock frequency is in kHz rather than MHz). Even if it is "huge", there aren't any stone tablets handed down from God that say huge is bad. If the system requirements can tolerate interrupt latencies of hundreds of microseconds, then what's the problem?

I don't see how having multiple interrupts makes the latency a sum of the ISR lengths...

Sometimes it is. If it means you can eliminate a foreground task and the associated synchronization mechanisms, then it is almost certainly easier to just do the work in the ISR.
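As a hedged illustration (all function and variable names are invented, not from any poster's code), compare the two structures:

    /* Hypothetical UART receive handling, two ways. */

    extern unsigned char uart_read_byte(void);     /* clears the IRQ */
    extern void process_byte(unsigned char b);
    extern void wake_foreground_task(void);        /* semaphore etc. */

    /* (a) Do the work in the ISR: no task, no queue, no locking. */
    void uart_isr_do_it_all(void)
    {
        process_byte(uart_read_byte());
    }

    /* (b) Minimal ISR plus foreground task: now you also need a
       buffer, an index shared with the task, and a wakeup
       mechanism - all of which must be synchronized correctly. */
    volatile unsigned char buf[64];
    volatile unsigned head;

    void uart_isr_minimal(void)
    {
        buf[head++ & 63] = uart_read_byte();
        wake_foreground_task();
    }

If (b)'s machinery buys you nothing for the application in hand, (a) is simply less code to get wrong.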

Apparently the definition of "bad" and "misguided" is anybody who does things differently than you do.

--
Grant
Reply to
Grant Edwards

"Low latency" is a relative term. A system that reacts to user input within 50 ms is "low latency". A system that handles postal deliveries that reacts within 50 minutes may be "low latency". A gigabit Ethernet switch needs to identify packets within 500 ns to be "low latency".

Were all your millions of lines of code written for just the one project? Because you seem to be having difficulty appreciating the breadth of possible embedded systems, and how their requirements may vary enormously.

Reply to
David Brown

It is, relative to the current state of technology. Readers of this newsgroup are assumed to be aware of it.

No. You can see some of my projects at formatting link. Then take into account the fact that "the 1 project" contains 7 processor designs so far - two of them running DPS.

And there you will not see some which are too old - from the 1 MHz 6809 and 6800 times, some of which did latency related miracles of their own (1985 or so). These sources are not included in the line/byte counts I quote either.

I do not have this difficulty. You seem to have difficulty comprehending the fact that being able to afford overkill resources resulting in a working product does not mean your practice is worth being repeated by others. If you design a 100 tonne car which will take you places at a speed and cost you accept, you may have solved your immediate problem, but not many people would be well advised to repeat your design.

Dimiter

------------------------------------------------------
Dimiter Popoff, Transgalactic Instruments
formatting link
------------------------------------------------------

Reply to
Didi

I think you are looking at a very narrow view of the technology used in embedded systems. Just because fast devices are available cheaply and easily, does *not* mean that fast devices are appropriate for every embedded system. When designing a new system, I choose a microcontroller running at 150 MHz if that's what suits the system - but I might choose one running at 1 MHz if that's a better fit.

< Hits head repeatedly on the table...>

The whole point of picking your program structure to match the application is that you get smaller, faster, simpler, clearer, and more reliable code by picking the most appropriate setup for the application in hand.

Go back and re-read that paragraph.

You have been claiming it is *always* better to use the one single technique (minimal work in the ISR) - despite repeated examples from various other posters of when other methods can be better, and have been used successfully in real world projects. Spending time thinking about the design and using an appropriate interrupt structure results in a better program using fewer resources (developers' resources as well as run-time resources) - forcing your code to use a single structure regardless of the application can easily result in wasted resources.

Again - it is *your* dogma that will lead to wasted resources. I (and everyone else still following this branch) recommend using the best interrupt routine structure for the given application. It *cannot* be worse (assuming a competent developer!) than sticking to a general rule such as yours - if minimal interrupt routines are the best for the job in hand, then that's what we'll use!

To correct your analogy - *you* say that all vehicles must have four wheels. I say that four wheels is a good number in many cases, but that sometimes two or six wheels is a better choice, and you should pick the right number of wheels for a given type of vehicle.

Reply to
David Brown

The Atmel UARTs are quite nice, and easy to use, but they are within an MCU. Sometimes one requires an external UART which is connected to the external I/O or memory bus. The 68332 can be used as a peripheral only by strapping one of its pins. This disables the CPU32 core and makes all the internal modules available via the address and data bus. If the SAMs can be used in such a manner, they would be usable as an external peripheral.

Regards Anton Erasmus

Reply to
Anton Erasmus

You can use the AT91CAP7 and AT91CAP9 in this manner, but the interface is non-standard: it would require an FPGA containing a block to regenerate the ARM AHB bus, and you would have to design an IP block implementing a peripheral-bus-to-AHB master bridge.

A better way would be to use one of the UARTs as an uplink at high speed and the rest of the UARTs as downlinks.

Then you can use the GSM 07.10 multiplexing protocol to multiplex the downlink UARTs on the uplink UART.
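For the curious, here is a rough sketch of wrapping one downlink UART's data into a GSM 07.10 (3GPP TS 27.010) basic-mode UIH frame for the uplink. The field layout is from memory of the spec, and the FCS routine is only declared, not implemented - verify against TS 27.010 before relying on any of it:

    #include <stddef.h>
    #include <stdint.h>

    #define GSM0710_FLAG 0xF9   /* basic-mode open/close flag */
    #define GSM0710_UIH  0xEF   /* UIH control field          */

    /* CRC-8 per TS 27.010; implementation omitted in this sketch. */
    extern uint8_t gsm0710_fcs(const uint8_t *p, size_t n);

    /* Build one UIH frame carrying 'len' bytes (len < 128 assumed)
       for logical channel 'dlci' (one DLCI per downlink UART). */
    size_t build_uih(uint8_t *out, uint8_t dlci,
                     const uint8_t *data, size_t len)
    {
        size_t i = 0;
        out[i++] = GSM0710_FLAG;
        out[i++] = (uint8_t)(0x03 | (dlci << 2)); /* EA=1, C/R=1, DLCI  */
        out[i++] = GSM0710_UIH;
        out[i++] = (uint8_t)(0x01 | (len << 1));  /* EA=1, 7-bit length */
        for (size_t j = 0; j < len; j++)
            out[i++] = data[j];
        out[i++] = gsm0710_fcs(out + 1, 3);       /* addr+ctrl+len (UIH) */
        out[i++] = GSM0710_FLAG;
        return i;
    }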

You can also use SPI, since it is quite fast as well on the AT91s.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

I don't understand where you get 100 ms latency. If I process 1000 interrupts per second and spend 30% of my total computing budget in those interrupts, the latency is about 300 us worst case. Other parts of the system could live with that latency, such as the 50 motor commands given out each second.
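Worked through, on the assumption that the 30% is all spent inside the one 1000 Hz handler:

    /* tick period      = 1 s / 1000     = 1000 us
       worst-case ISR   = 0.30 * 1000 us =  300 us
       motor cmd period = 1 s / 50       =   20 ms  >> 300 us
       so a lower-priority event waits at most ~300 us, far inside
       the 20 ms period of the motor commands. */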

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
Albert van der Horst

100 uS latency used to be highish - but acceptable - on 1 MHz 8 bit systems. 300 uS was too high even back then - and is a huge figure nowadays. Your system is definitely no teaching example to be given in a "low latency" or "interrupt handling" context.

Dimiter

------------------------------------------------------
Dimiter Popoff, Transgalactic Instruments
formatting link
------------------------------------------------------

Reply to
Didi

Are you *still* having trouble understanding this one? 300 uS latency is perfectly good if that's what works in your system.

It's quite simple, really. In this guy's project, he has certain actions that occur 1000 times per second, and must be handled as fast as possible - new data must be acted upon within a few hundred uS. Other events can be delayed for perhaps several ms without a problem. So he has a timer running at 1000 Hz, and does the high priority work then. This work is higher priority than any other interrupts in the system, so the work is done in the interrupt handler.
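In sketch form (C, with invented names - this is the shape of the structure being described, not anyone's actual code):

    /* All time-critical work lives in the highest-priority 1000 Hz
       timer interrupt; hypothetical names throughout. */

    extern int  read_new_data(void);      /* hypothetical */
    extern void act_on_data(int sample);  /* hypothetical */

    void timer_1khz_isr(void)   /* highest priority in the system */
    {
        act_on_data(read_new_data());  /* must finish well inside 1 ms */
    }

    /* Everything else - lower-priority ISRs, the main loop - simply
       tolerates being delayed by up to one pass of this handler. */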

It's a good example of doing the right work, in the right place, at the right time, and of getting your priorities right for the job in hand.

Reply to
David Brown

If you think *I* am the one having trouble understanding this, think again.

That may well be. But it is not an example of how low latency interrupt handling is best done - any trouble getting that? If you think 300 uS latency is "low" and maintain it, well, I'll begin feeling like I'm trying to explain colours to a blind man - which I don't think is the case; you seem unwilling rather than unable to see.

Dimiter

------------------------------------------------------
Dimiter Popoff, Transgalactic Instruments
formatting link
------------------------------------------------------

Reply to
Didi

Are there others in this thread? There are no doubt other embedded developers who think that *their* way is the *only* way to structure interrupts. Most of us, I hope, think that the choice depends on the task in hand.

Exactly *who* has been describing 300 us as "low latency" in some sort of mythical absolute terms? I (and Albert) understand that 300 us is a perfectly acceptable latency for this project. I would never try to put some sort of figure on a phrase such as "low latency", because it is meaningless without context. As has been explained to you before, "low latency" for postal deliveries and "low latency" for gigabit Ethernet are totally different things - arguing absolute latencies is like arguing about the length of a piece of string.

Secondly - and again you seem to be the only one having trouble here - Albert's system has very fast and low jitter timing for the time-critical tasks that he is interested in. The latencies between his 1000 Hz timer interrupt and his system acting are probably less than a microsecond. The 300 us potential latency is for non-critical, low priority events.

Thus this *is* a good example of a way to get your high priority events handled quickly and with low latency and low jitter.

Reply to
David Brown

David,

whatever you say. I give up. If you don't get what I have been saying so far there is little if any chance you will ever get it.

No matter how many times you repeat how you can walk instead of run if you are not in a hurry, this will not make your point valid in the context of a running competition - unless we are talking Paralympics, that is. Or do you see 1 minute/100 metres as a running achievement?....

Dimiter


Reply to
Didi

And no matter how many times you try to tell people that a car is the *only* way to get from A to B, I will continue to use a bicycle, a car, or a boat as appropriate.

And I prefer to walk to my local shop rather than run - for the purposes of going shopping, I will "beat" anyone running there.

But as for giving up, I agree there (unless you change your mind :-)

mvh.,

David

Reply to
David Brown

The critical part of the computer system had about 1.5 us latency reacting to a hardware event, sampling the telemetry at ns precision. The existence of some parts with 100 us latency in the same system sort of demonstrates the viability of prioritized interrupts.

That you judge a system by the part that can tolerate the largest latency, and attach an absolute meaning to it, instead of looking at specifications, demonstrates your -- let's put it nicely -- lack of experience in certain fields.

I have never seen a requirement that a valve in a chemical plant has to close within 300 uS, "even back then". Real time requirements are imposed by the environment, and are not a self inflicted measure of performance that must get tighter with the availability of more powerful hardware.

Groetjes Albert

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
Albert van der Horst

I can only judge by the info you give. You mention 1.5 uS latency for the first time in this post. This is low latency indeed, but staying in the IRQ handler for hundreds of uS after that demonstrates your lack of experience with multi-task low latency systems. Just because it worked does not mean you deserve the credit; in this case all of it goes to the CPU designer. Had you needed a second low latency response, your approach would have been useless.

I considered your approach when I was a beginner - it is a classic beginner's mistake. I must say I understood I was making a mistake back then (20+ years ago) and got things right - even as a beginner.

Dimiter

------------------------------------------------------
Dimiter Popoff, Transgalactic Instruments
formatting link
------------------------------------------------------

Reply to
Didi

When you are in a hole, stop digging.

Albert made it clear from his first post that it was fast handling of critical events that meant long latencies for non-critical events. He didn't mention the actual figure of 1.5 us, but it was clear that the critical events were handled as fast as necessary.

Albert's handling of interrupts demonstrates clearly that he understands the requirements of his system, and writes code that is structured appropriately in order to handle events within the required time limits. If he had followed your approach of doing very little during the critical interrupt, and handling the calculations and control from a normal OS task, then it would be much harder to guarantee the results - task switching latencies, other interrupts (however short), non-pre-emptable code, locks, and other possible delays could conspire to delay the calculations that had to be done within a time limit from the critical events. Even if time limits can still be met, the calculations *proving* that are vastly easier when the critical work is done during the interrupts.

If Albert had needed a second low-latency response his approach would not have been useless - he would have had a different set of system requirements, and have written code appropriately (perhaps enabling only that second interrupt during the calculations - or perhaps using your structuring if that is the right choice).

You seem to be under the misunderstanding that we (Albert, myself, and anyone else still paying attention to this thread) are advocating always doing all your work in interrupt functions. We are, in fact, advocating picking the right structure for the job in hand rather than narrow-mindedly forcing a single structure onto all programs.

With all due respect to your experience, I find it hard to understand why someone in this branch would pick a single structure and defend it fanatically as the *only* way for *all* programs written by *all* developers for *all* tasks. If you have learned anything at all over those twenty years in embedded development, surely it must be that the answer is always "it depends" ?

Reply to
David Brown

So you may really want to do that. All you have been saying so far is stating the obvious: if you do not need speed it is OK to use fast parts to go slow because they are inexpensive anyway. We all know that.

He did not make anything clear - his numbers decreased from 1 second down to 1.5 uS over subsequent messages. His last post said something about sampling at nS precision with 1.5 uS latency...

It is by far not the only way. It is just the right way to do things when low latencies & interrupts are involved. I explained that as well a number of times already.

Dimiter

------------------------------------------------------
Dimiter Popoff, Transgalactic Instruments
formatting link
------------------------------------------------------

Reply to
Didi
