Recommend a good RT Linux?

Hi,

I am finally reaching the end of my patience in trying to keep test code running under DOS/Win98 etc. It is fast becoming a situation where I spend 10 times more time getting my test app going on a PC than I spend coding the embedded app in the first place. I am thinking of using a "Real Time" Linux on a PC for hosting my test apps. Most of the time I only need a number of reasonably accurate "timer tick" tasks in which I Rx/Tx to my embedded system. (Typically 100 Hz with 5 to 10% jitter is good enough; 1 kHz would be nice.)

Is it possible to load the RT components on top of RH9 or some other distribution? If so, which RT Linux would people recommend? I would like to purchase a reasonable starter kit with good docs to get me started. If some support exists for porting code from RT Kernel (16-bit), it would be a bonus.

If versions exist for ARM and PPC embedded SBCs, it would be even nicer.

Regards Anton Erasmus

Reply to
Anton Erasmus

With OS/2 Warp 3/Warp 4, you are slicing 1 second into about 32 slices, so you would be able to do something like 30 Hz somewhat okay. Win9x and NT are similar in structure... and possibly the other Windows series. You may have some luck with eComStation

formatting link
since I understand the OS/2 kernel was updated and you can slice into smaller portions (probably allowing you 128 Hz or 256 Hz accuracy... you would have to set up the timeslicing in CONFIG.SYS)

Linux/Unix, from what I understand, is more of a sequential OS and not an RT OS, so you may be able to load it down to the point that everything looks frozen and it still doesn't break, but real-time is not the priority... The latest kernels are more desktop oriented and have better response, so maybe the "real-time" factor you are looking for exists here... I'd look at Mandrake 9.x if you are not familiar with Linux, since it takes care of most of the bumps and hiccups. Since you're running Win98, you could make the same machine dual-boot; just scandisk, defrag, and back up your drive first before installing Linux.

There was an RT Linux distro/fork at one point called Turbo Linux, I believe... You are best off asking for an RT Linux in comp.os.linux.advocacy (watch out for trolls, there are many of them), plus I'd mention the 100 Hz requirement too.

If you really need the "100 Hz factor", realize that whatever OS you have, whether it's Linux, OS/2 or Windows, you are going to have "overhead" which likely makes the 100 Hz fairly jittery. I would prefer to remove the OS factor by running DOS on the machine which talks to your embedded project... This means you're going to have to roll your own serial I/O and use timers to keep accurate time, but at least "you" control timing accurately, versus getting jittery results due to OS overhead. If you don't have DOS any longer, you could test in Win98, then when it comes to "actual" test runs, use a FreeDOS bootdisk.

Linux may be your answer, but I'm not expert enough to recommend which distro at this point... check comp.os.linux.advocacy.

Reply to
Amused

I've read 12-bit data from the parallel port at ~22 kHz with no problems under Linux 2.2.x.

I believe the scheduler interrupt is 1 kHz now, compared to only 100 Hz back then.

Obviously when the scheduling happens you get glitching, but it's not bad at all.

--
Spyros lair: http://www.mnementh.co.uk/   ||||   Maintainer: arm26 linux

Do not meddle in the affairs of Dragons, for you are tasty and good with
Reply to
Ian Molton

Huh? Where did he ask about warp?

Real-time IS the priority for some developers.

Some history; also take a look at the test program (but you do not need to run with the maximum possible priority):

formatting link

Note: an accidental infinite loop in a SCHED_FIFO/RT thread WILL lock up your computer. (An even higher priority monitor can help you regain control.)

For more recent information take a look at

formatting link

No, Turbo Linux = server. Look at embedded or professional audio workstations instead. I think precompiled kernels exist for RedHat.

It is of course possible to run without DOS. Build the application and flash it over the computer's BIOS. :-)

comp.os.linux.embedded would be a much better choice...

/RogerL

--
Roger Larsson
Skellefteå
 Click to see the full signature
Reply to
Roger Larsson

I think the default is still 100 Hz. I used the 2.4 kernel on the Axis CRIS 100LX processor and it was 100 Hz, but it was easy enough to find the code that sets up the timer, and the time.h files, to push it to 3200 Hz.
Reply to
TCS

Roger Larsson wrote:

Hehehe. He didn't ask. Would you have replied here if I said FreeBSD?

When he said DOS/Win98... I made an assumption that it isn't a very fast machine by today's standards, so I threw in a comparison. I know Warp is about 30-something slices a second, so it won't be that far off for Win9x; the structures will be somewhat similar in some respects. That is, you can have about 30 processes running evenly per second, the same amount of time divided up per process. If you have fewer, then one or more of those slices will be given to a process that needs it.

So supposing you have 5 processes running, each process would get about 6 slices, and if it doesn't need them, it gives the rest of its time to other processes (assuming you don't give priority to one process over another). Now supposing he has only 2 processes running, that means about 15 slices per process. Okay, he's got 1 program running, that means 30 slices, right? Wrong! Windows is not running that DOS program 100% of the time; it's got to take care of other stuff like clocks, task managers, back-end stuff, whatever... So let's say that's about 29 slices for the DOS program and 1 slice for Windows. If Windows doesn't need it, it surrenders the rest of its time to his program... but it's that 1 slice that is screwing up his "evenly spaced" 100 Hz rate.

So, from what I understand of Warp 3 or 4, with about 30 slices, yes, I can understand that he has a major headache trying to keep 100 Hz, because there will always be that 1 slice that creeps in and messes things up for him. Warp and Win9x were created for 20 MHz machines in 1994 or thereabouts; 30 slices per second is probably a close guess. 100 Hz with a little jitter would be a headache for Windows 9x; 30 Hz would be realistic.

If he runs DOS, the OS overhead is a non-issue and he does as he pleases. If the OS allows 100 slices per second, he's sort of okay. If the OS has more slices, better, less jitter, but realize you need faster machines for more slices too, otherwise you get bogged down.

Yes it is, but I don't know what you want to say here, since most of my reply above is about the easiest way to get into Linux if he has never seen Linux before. Dual boot allows him to get to his old tools, assuming he's got just 1 machine. You don't simply throw out your old tools... you migrate to your new tools.

If the person who asked is an expert, then end of discussion, but supposing not, you have to suggest the easiest path. As there was no indication, I have to assume he is not an expert in Linux. My suggestion is: get Mandrake, not Debian, not Gentoo. Get Mandrake. You don't put a beginner skier on the black diamond runs... you put 'em on the gentle bunny hills first. ;-)

Priority is probably a new term to some Windows users, as it's not an option presented on a Windows menu. On OS/2 running DOS tasks, it's a settable option. Under Linux, I wasn't sure you could set it... it isn't something I found using KDE (not something you can right-click or anything), but the article also mentions instability on slow machines running tiny slices... 100 Hz or 10 ms slices on a 133 MHz machine may be pushing it. Tiny slices are great for RT as long as you use a decently fast machine for the task.

Excellent. I didn't know that precompiled RT kernels existed either; glad someone asked, otherwise this would not have occurred to me. Thanks for pointing out the article.

Yes it's possible. That would now be considered a "dedicated" machine until you flash it back. :-( Most of us tend to like to use a computer for more than 1 thing. ;-)

Better.

Reply to
Amused

Assuming that the PC is fairly modern and you do not have a lot of interrupt load from Ethernet, IDE, etc. cards, those jitter requirements (0.5 to 1 ms typical) do not seem too hard to reach with a patched-up 2.4.x kernel. Look for the low latency patch and the kernel preemption patch.

To get a higher tick rate, you would have to change the HZ constant and recompile, but of course you would not get 5-10% jitter at 1000 Hz :-).

If your external system contained a timer that generated an interrupt signal at regular intervals, then a device driver (in practically any OS) could service your device without messing around with the OS clock rate.

If you need better jitter performance than that, then you need some proprietary real-time kernel, doing all the time-critical things in various high-priority tasks. Instead of running the lowest-priority idle/null task when there is nothing else to do, this null task is replaced with an ordinary Linux (or Windows NT) operating system; thus the whole ordinary OS runs simply as the lowest-priority real-time task within the real-time scheduler. The ordinary OS then schedules its own processes when no real-time activity is going on in the high-priority RT tasks.

At least the Windows NT + RT kernel systems are quite expensive, often require programming the RT part of the application with proprietary system service calls, and may also run completely in kernel mode, thus without any user-mode memory protection.

Time slices and quantums are concepts usually associated with time sharing systems, in which a large number of equal priority runnable programs are waiting to get access to the CPU, usually using some kind of round robin system.

In a real-time system everything is executed strictly according to the different priorities. When the highest-priority task, after waiting for an event, becomes runnable, the execution of any low-priority task is suspended; the highest-priority task gets control and executes as long as it needs. When the highest-priority task no longer needs the CPU, the suspended low-priority task continues execution.

Of course, the high-priority task must be written in such a way that it does not consume much time in a single session. I would consider it a very bad design if the highest-priority task executed for more than a millisecond.

Typically, the highest-priority task just waits for interrupts or other signals, then executes a few hundred instructions before waiting for the next activation. During the time when the high-priority tasks are waiting for activation, the lower-priority tasks may continue.

Paul

Reply to
Paul Keinanen

See RTLinux from FSMLabs

formatting link
It will do everything you need.

Reply to
Ed Skinner

Use eCos

formatting link
which is a *real* real-time OS for embedded applications. Its API is compatible with Linux (which means most Linux software should compile for it).

There are also ARM and PPC versions available. And it's FREE!!

Reply to
Ultimate Buu

hehehe... of course at 1000 Hz there would be less jitter, but we're starting to assume a lot based on one posting now... there is a point of no return where our assumptions turn into ass-u-me ;-)

I know what you're talking about for the rest of your message since I've been there myself, but I think I'm going to safely leave this "as is" as we haven't heard any more details from Anton (the original poster). Anyhow, recent replies are pointing out good sources to look at.

Reply to
Amused

To clarify things.

When setting SCHED_FIFO or SCHED_RR with sched_setscheduler(), that process will continue as long as it wants to. (SCHED_RR will round-robin with other SCHED_RR processes at the same priority.)

In addition, whenever it is woken up, by an interrupt service routine or by another process, it will preempt any lower-priority process running, unless the lower-priority process is executing code inside the kernel.

Most execution paths in the kernel are short, but there are exceptions: a) When a new memory page is requested, the kernel might need to scan all pages. b) Stupid device drivers that busy-wait... (watch out for IDE in PIO mode) c) ...

This can be avoided in some ways: (1) Make the important task independent of the kernel (RTLinux, ...). Con: the task cannot use any kernel services. Pro: better RT than even writing an interrupt driver can provide. When to use: real RT, where you have to hit that deadline or else...

(2) Check if a higher priority process wants the CPU while executing those long kernel paths.

Low Latency patch. Lowers the maximum time.

Some of this work got into the 2.4.x kernel, but not enough (IMHO).

(3) Use SMP locks to open up the possibility of rescheduling inside the kernel. SMP lock = "I want to be alone here" = do not risk that any other process that might be scheduled ends up inside this region before I leave it.

Lowers the mean time, helps maximum time.

Is a compile time feature of 2.6.x

(2) and (3) work best together! They can also benefit the case where you need (1), since you often need some sort of service processes, like visualization.

And finally, the question of why you cannot set this in KDE:

formatting link

Linux tries to avoid the possibility of you shooting yourself in the foot, unless you are root...

An ordinary user can only lower the process priority (make it more nice). Note: a process with a negative nice value still has a time slice that can run out, to allow other processes to progress.

There are KDE processes that benefit from being run as SCHED_FIFO, like aRts (the KDE sound system). But this is considered too dangerous for ordinary distributions, so most do not use it (it opens a local denial-of-service hole, since you can run your own code as an aRts plugin; put a "while (1) {}" in a plugin and you have killed your machine). If the executable artswrapper is suid root, then the configure option to run aRts in RT mode will be effective.

/RogerL

--
Roger Larsson
Skellefteå
Reply to
Roger Larsson

For clarification... these are kernel calls... see the Unix manual pages, System Calls section.

Waiting for a really slow device to respond... for example, waiting for an input device to reply back. Take the assembler instruction in al,dx... it doesn't matter which high-level kernel you use; they would all have to wait an equal amount of time for the hardware to respond... meanwhile the CPU hangs in limbo-land until then.

DMAs, RAM refreshes, etc.... ;-)

Well, apparently the kernel calls you mention above are in the 2.2 kernel as per Mandrake 8, plus the manual pages, but I'm still at a loss to figure out how to prioritize a process. On OS/2, for example, you right-click the program and prioritize it before you start it, or just do it on the fly, so that makes it nice and easy. Windows... I don't think you have that luxury under Win9x. KDE... right-clicking doesn't bring up that option, user or root. Is there a GUI application to play with here, or do you have to write it into your own programs? Never mind, found "nice": TaskBar > K > Applications > Monitoring > "Process Management". That handles running tasks, but I'm still stuck on defaults... any way to default something as low or high priority?

This is where you start having to check version numbers if you're planning on releasing code out into the wild... ver >= 2.6?

Nobody wants an app to be using up a whole slice if it's just sitting there waiting for keyboard input..... then again, some stuff out there can make your eyes roll. :-(

Reply to
Amused

Since you are using KDE, write "man:sched_setscheduler" in the Konqueror location bar.

If the ISA bus runs at about 10 MHz, how many wait states do you need to get 1 ms stalls (10% jitter)?

Please, if you care about low-level stuff like this, you should not use a processor at all; use an FPGA!

You shouldn't!

Unless you write an application that requires these features to be able to execute at very regular intervals.

Ask yourself: why isn't aRts (advanced RT synthesizer) allowed to use it by most Linux distributors?

Ctrl-Esc (didn't Warp use something like that?), open the process menu with the right mouse button, renice. [I am running the CVS version of KDE]

Why do you want to do this? Do you have any specific problem?

No, it affects only the kernel. Your user process can (will) be preempted anytime (at the worst possible time). The only case you have to worry about is if your code is not SMP safe in the first place. Code that is not SMP safe is not UP safe either (only less likely to be struck).

This was true for Warp applications as well!

"Since a thread can be preempted at any time due to an interrupt or timeslice end, threads that are sharing resources with threads in the same process or with threads in other processes must protect critical sections where these shared resources are manipulated." From "The Design of OS/2", Deitel and Kogan, Chapter 5.4, SCHEDULING.

You are not using DOS, you see. Very few applications use polling, and if they do, they sleep too. Unix has been multiuser for a long time, and you'd better avoid wasting shared resources.

/RogerL

--
Roger Larsson
Skellefteå
Reply to
Roger Larsson

Now that's nice. Learn something every day. Thanks. ...The wording seems a bit off, can't quite place a finger on it, but it would appear that if you have, say, 1 program with priority 2, one with priority 1, and 3 programs with priority 0, and supposing the 0s are running, then the 1 comes along, then the 2 comes along... it would get scheduled as...

0a 0b 1 1 2 2 2 2 1 1 1 1 1 0c 0a 0b 0c 0a 0b 0c....

I tried looking for the OS/2 article, and it was worded more like sharing:

0a 0b 1 0c 1 0a 2 1 2 0b 2 1 2 0c 1 0a 0b 0c 0a 0b.....

If I'm wrong, I'm wrong, but it would appear the priority 0 programs would get put off until the 1s finished their business, or released the rest of their time due to waits of one sort or another.

We're getting into the assume -> ass u me area here ;-) based on the little info we got from the 1st posting, but I'll answer it.

Lots. Fewer if you're talking to some smart card that calculates something before returning a value... say some sound cards... some reply at 30k. If a program were written smartly to take care of time, as: while 1 { wait until the next 100th of a second; do next TX or RX } ...then it should work fine, except the Windows timer is fairly "chunky" to attempt 100 Hz... much better luck running under DOS without the overhead.

However, you only need to slow this particular program by 10%: while 1 { for i = 1 to 1000000: do nothing: next i; do next TX or RX }

I'm Joe Average, I draw something with GIMP, it looks pretty. Meanwhile, Bob the Graphics Artist has to produce these 10 ft by 2 ft posters to splash across the side of a bus. To me, speed ain't important, priority 0 is fine; on the other hand, Bob the artist needs the speed versus anything else he's going to run.

Repeat the scenario with a draftsman using CAD software... Repeat the scenario with an artist doing 3-D graphics... When we start saying "You shouldn't", it puts Linux no better than Windows. ...Just food for thought.

A self-imposed denial-of-service attack? Which goes back to the scheduler again... I hope we're not talking about a log-jam here

0a 0b 1 1 2 2 2 2 1 1 1 1 1 0c 0a 0b 0c 0a 0b 0c.... because this seems better at not plugging-up a pipeline 0a 0b 1 0c 2 1 2 0a 2 1 2 0b 1 0c 1 0a 0b 0c 0a 0b.....

Memorizing keys wasn't on my to-do list; maybe later. For now, consider me point-n-click. Thanks for the info.

Not myself specifically... at the moment, but back to the previous example: Bob the Graphics Artist needs speed to paint his billboards on buses. It would be important for Bob to have it fast. Joe Average, on the other hand, doesn't need speed. Both people use the same graphics package, yet different priority.

Sometimes you have access to the code and you can tinker with it like you mention; other times, all you have is an executable, so the only way to set priority is externally, via another method. Imagine Bob setting the nice on all the other programs to minus 10 every morning before starting work. Imagine, on the other hand, right-clicking the priority higher once when the program is installed: set and forget.

Sometimes there's some emergency or another where something just has to happen...

....snip.... The rest was interesting, but we're making mountains out of molehills. ;-)

Reply to
Amused

There are three classes. SCHED_FIFO: a process that gets the processor continues to run until: a) it is done, or b) some process with higher priority preempts it.

0a 0a 1 1 2 2 2 2 1 1 1 1 1 0a 0a 0b 0b 0b

SCHED_RR: round-robins processes at the same priority, but a process goes to the end of the process queue when its timeslice runs out. If it is the only process at that priority, it will continue to run.

as you indicated 0a 0b ...

SCHED_OTHER: it could be named SCHED_NORMAL, since this is where most processes belong.

Higher-priority processes get longer timeslices(!). The process with the highest priority and timeslices remaining is selected. Timeslices are recalculated when there are no remaining timeslices among runnable processes.

This results in: a) it will not be possible to starve a low-priority process forever; b) higher-priority processes will be able to use more CPU time.

Priority is related to niceness and dynamic behavior.

If Bob the artist were given higher priority than you, you would soon realize that priority 0 was not fine. You would get frequent user interface stalls, often for more than ten seconds! Priorities can be difficult; probably both of you would be better off with you having the higher priority. He is working on a big poster and expects things to be slow: a filter run will take more than ten seconds, and having a higher priority might save less than one. Bob does not need higher priority, he needs an Opteron! (big cache, 64-bit linear address space)

Windows can do this today: Ctrl-Alt-Del, right-click on the task, select scheduler class.

Exactly, and especially when running in a multiuser environment.

So why change before there is a need? When you and others have a specific need - yell at linux-kernel and it might change.

There have been suggestions to let all users set up to -5 in niceness. But I guess your example was one of the reasons it has not been done: all users (but you) would think that their own work is the most important...

And also to allow a SCHED_SOFTRR that works like SCHED_RR until the computer gets overloaded with SOFTRR tasks. This could be nice for applications like aRts!

But didn't you say that Bob should have a different priority than you on the same program?

/RogerL

--
Roger Larsson
Skellefteå
Reply to
Roger Larsson

Since you've gone into much detail, I tried again to look up the article, I'm fairly sure it's one of these:

formatting link
unfortunately, it's a looooong list. SCHED_OTHER, if it is the "normal" method, is the closest to what I was trying to describe, because it had (a) and (b) in it. From a "user" perspective, a user would like to see everything rolling, of course. The slicing method is a bit different too; instead of dealing with 30-ish slices and allocating them in some fashion, in this case you're talking thinner slices (faster hardware anyway), but giving one task, say, 10 slices while giving another task with higher priority, say, 25 slices... that's okay.

Ideally, yes, separate machines, and fast machines are relatively cheap for the hourly rate of today. Back in the 60s, it was 1 machine, many programmers (hardware expensive, programmers cheap by comparison). Today it's just logical to give Bob the artist his own machine (programmers expensive, hardware cheap), and if you look at many Fortune 500 companies, they are literally donating old hardware which is still useful by school or even basic user standards (forget gamers, they're always bleeding edge).

However, back to 1 machine only: you mention above "stalls" and not "major slowdown". A stall would be unacceptable if, say, Bob the Artist were running his software while I was running my word processor. With a stall, I just can't really do anything until I hit that tiny window of opportunity; with a major slowdown, well, it may look ugly, but you could still get something coherent happening... I remember doing number crunching and perhaps being able to "type ahead" an entire sentence in a word processor; nothing shows up on the screen, then a whole bunch shows up afterwards... That you can do with a slowdown, but a stall... sorry, dead keyboard too.

The 2.4 kernel is better, but I'm still running the 2.2 kernel, and I've done a few things that have locked up the machine where the only way out is to either wait half an hour or do a cold reset (nasty).

The key words were "not at the moment"; I do occasionally. I'm sure you run into creative genius spurts every now and then too, where you wouldn't mind squeezing a bit more speed into a certain program at the expense of other things running on your desktop. Since this is comp.arch.embedded, say you simulate a chip by hand by clicking the appropriate buttons for a couple of steps... No big deal, that's okay at priority 0. Have you ever run into occasions where you simply have to animate a long sequence? If yes, would you like to click in high priority on your simulator and come back in a couple of hours versus a couple of days? Annoyed your screensaver kicked in? Okay, turn it off... forgot to turn it back on? On the other hand, give the simulator high priority, and who cares if the screensaver runs like molasses? So what if everything else becomes slow? At least you don't need to fiddle with your settings, then re-fiddle them back again.

...and so it shall be!!! when we get the time!..... maybe.... later...

I guess these suggestions are from people who recognize some sort of need to set priority higher. From a single-computer / single-user perspective, Bob the artist would want to set his graphics package to, say, 10 (everything else by default is priority 0 out of 255); 255 would probably freeze everything else running, down to those 10-second stalls you've mentioned... So Bob, from watching his machine, would probably try 255 first, then realize that is way too high, then try 128, still too high, and eventually find a level that makes his package quick yet doesn't starve the other apps in the background... so back to that figure of, say, 10 again. This is a user-set priority, not program-imposed. Bob would tolerate a 9 or an 11, but by Bob's own choice, he decided that 10 fit the bill.

The -5 appears like an attempt in the right direction, but a bit of a kludge too if the nice doesn't return to the original level after that "priority" program quits. We'd hate to see poor Bob running everything slow because he tried to allocate more time to that 1 app. This, again, on the assumption that the nice levels are an across-the-board nice level, and not a per-user nice level.

0 is an ugly number to sum "per user", as a lot of 0s still adds up to 0. Perhaps take the lowest value in the list and call it a floor of 1. Right now, -20 appears as "my CPU", the current floor; perhaps if the kernel saw it as 1 (the user still sees it as -20) and all the 0 apps as 21 (the user still sees them as 0), then there's something to sum up to give each user a fair shake at CPU time (ideally 50% for me and 50% for Bob). If a computer could reasonably number-crunch, say, 10 apps, then that would add up to 10 applications x 21 = 210... maybe 210 is better to work with vs 0? ...Just thinking out loud...

From Bob's point of view, when he runs the program, yes, it would ideally run faster than all his other programs, so his other programs would take a performance hit relative to the graphics program, because, after all, he'd be shoving 10 ft worth of pixels around. In my case, it would run slower, but since it was a letter-size page and not a big 10 ft poster, it wouldn't be anything to fret about, since there are so few pixels; therefore, priority 0 would be fine from my point of view. This priority thing would have to be /home based, I guess.

Reply to
Amused

If you haven't used or programmed in Linux, you'll probably find the learning curve too long. If you have, it's simple. Lowish latency in Linux can be had on a non-loaded system by keeping your program out of swap using mlockall. If you want 2-5 us latency, install RTAI on any Linux (I use Debian). You can do tasks at tens of kHz regardless of system load. RTAI works between the Linux kernel and the hardware, intercepting interrupts before Linux sees them. For slightly more latency and *no* risk of locking the PC, you can run the same code in user-mode RTAI. RTAI is free. You can leave it permanently in your desktop PC without any effects until it's used.

Reply to
Russell Shaw

I have started to play in Linux, but I have only programmed some console apps that will work under almost any environment. I mostly program embedded systems without any OS: 8051 up to 68K, MCORE, ARM. I want to spend as little time as possible writing the host/test side of the code. DOS has been becoming more and more difficult to support, and my few attempts at using Windows were a frustrating exercise beyond belief. I would like to run most of my future test code on a "standard" Linux if possible; "standard" in the sense that all the normal Linux tools for general use are available as well. For now I have bought the UNC20 development board, which uses a NET+ARM with uClinux. It seems from the technical info I have seen that I should be able to develop for this setup quite easily using RH9 or any of the other mainstream distributions. I will have a look at RTAI. My main concern is to have enough docs/tutorials for a novice Linux programmer (not a novice programmer as such) to get going.

Regards Anton Erasmus

Reply to
Anton Erasmus

I have not used this, but it looks interesting.

formatting link

Reply to
Flipper
