Delta Queue Help - Paging Mr. Kirwan

One thought just occurred to me... what do you do if you utilize a watchdog timer? Do you make a process that clears it? Or do you integrate it as part of the operating system code? The latter seems more reasonable, as you couldn't guarantee any execution rate when implementing it as a process since you can't really predict the sum of the execution times for all tasks in the ready queue.


Reply to
eeboy

I'll answer this first, because it is easier.

There is no always-right answer about where to reset the watchdog timer. If you place it in a routine that always gets invoked based upon a timer, that won't tell you anything about whether or not the rest of the system is operational and working correctly. It will only tell you that the timer still works and that some loop used to execute ready tasks also still works. But there could be a lot still going wrong.

In fact, it is almost _always_ wrong to drive a watchdog reset off of some regularly timed event.

You might place it in a place where you know that everything else must be working okay. Or you might test various values and routines to make sure they are all working and only then, in some place, reset the watchdog. But where the 'better place' is will almost always be different from one design to the next, and it is the kind of thing that makes a great subject for a code walkthrough in a meeting room. Lots of minds get a chance to consider various alternatives, and I usually feel a LOT BETTER picking a spot that has been vetted in a meeting room with more than just me looking at it.
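
Just to make the idea concrete, one shape this often takes after such a walkthrough is a set of 'health' flags that every subsystem must set before the dog gets kicked. The sketch below is purely illustrative -- the flag names and wdt_reset() are stand-ins for whatever your part and compiler actually provide:

/* Illustrative only -- wdt_reset() and the subsystem bits are placeholders. */
extern void wdt_reset( void );          /* hypothetical 'kick the dog' call  */

#define SUBSYS_COMM    0x01u
#define SUBSYS_SENSOR  0x02u
#define SUBSYS_CTRL    0x04u
#define SUBSYS_ALL     (SUBSYS_COMM | SUBSYS_SENSOR | SUBSYS_CTRL)

static volatile unsigned char health_flags;

/* Each subsystem calls this from a spot where it can vouch for itself.      */
void subsys_checkin( unsigned char bit )
{
    health_flags |= bit;
}

/* Called from the one vetted spot in the code, NOT from a bare timer tick.  */
void watchdog_service( void )
{
    if ( health_flags == SUBSYS_ALL )
        {
        wdt_reset( );
        health_flags = 0;               /* everyone must check in again      */
        }
}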

You could also lay out the overview design here and ask for input about that, too. If you lay out a clear presentation, I think you will get some good thoughts on it. If it is important enough, of course.

A lot of folks just turn the darned thing off and don't worry about it. Others, much more concerned and wanting it to be absolutely the best possible, will not only do the code review thing but will also use an external watchdog timer rather than an internal one. It's the kind of thing that really should be outside the processor, if you really care a lot about it. A PIC12F609, which has a built-in BOR (brown-out detect), a separate watchdog timer, and enough pins to offer a separate manual reset function as well, makes a nice all-in-one external unit. But there are better devices for this, too. Just not as cheap or as likely to be around forever as this one may be.

Jon

Reply to
Jon Kirwan

Okay.

Okay. For this use, the delta-queue is of course quite good. That's what it is designed for.

However, that doesn't take away from the new concept I'm tossing out to you. Let me explain why.

Let's say all you want is the exact same feature that is in the delta queue case. Just set up a thread/function/process that will run some known time in the future and let it die afterwards. It did its job and that is over.

In this case, under the new proposal (the next pattern upwards), you simply do much the same thing. However, in this case you provide a separate stack for it since it will be in its own thread. It's more expensive in that way, I admit. However, you don't have to provide a big stack.

Here is how that might look:

In the code that starts the thread, it might say:

procID pid = pcreate( f1, 10 );
psleep( pid, 1000 );

That's it. The function f1() must exist somewhere, of course. And the '10' is just the number of stack words you want to associate with it. Doesn't have to be large. You could write a different create() function that accepts an array pointer for the stack, instead of having the create() function allocate it. Either way works.

(The example above of pcreate() doesn't include a priority. That could be included, if you want to support them of course.)

Anyway, now the thread exists and the function will start at the associated delay. Initially, the thread was placed in the run queue (at the bottom in this case since no priorities are given.) The psleep() function, let's say, accepts a pid parameter. If the parameter is 0, the current thread is put to sleep (the operating system always knows which pid is the currently running one.) But if a non-NULL pid is provided, then that pid is put to sleep (moved from the ready queue to the sleep queue.)
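
Just as a sketch of what psleep() itself might do internally -- the queue helpers here are the same hypothetical ones that show up in the pswitch() example further down this thread, and dq_insert() stands for the delta-queue insertion we've been discussing:

void psleep( procID p, unsigned ticks )
{
    auto GIstatus_t stat;

    if ( p == 0 ) p = pcur;            /* 0 means "the calling thread"      */
    stat = disable( );
    qunlink( p );                      /* off whatever queue it is on...    */
    dq_insert( sleepq, p, ticks );     /* ...into the delta (sleep) queue   */
    if ( p == pcur ) ctxsw( );         /* give the CPU away if it was us    */
    restore( stat );
}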

The function f1() might look like:

void f1( void )
{
    // do something simple....
    return;
}

In this case, when the 'return' is executed the operating system automatically kills the thread. This is easy to set up. What happens in the pcreate() call is that the new stack is set up so that a return will actually call pkill( 0 ). In short, a 0 and a return address that points at the pkill() function is placed on the new stack before the thread starts. That way, when the thread does a 'return' it will simply call pkill, in effect, with a 0 parameter. The operating system will see a NULL pid, go look at its current pid value instead, and kill that one. Which, of course, means that the process will NO LONGER exist. The stack will be freed up and the pid destroyed and everything will be as though the thread never existed.
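
In code, that seeding might look vaguely like this. Purely a sketch -- alloc_proc(), alloc_stack(), and the frame layout are made up, and on a real part the argument to pkill() may travel in a register (on the Cortex-M3/EABI it rides in r0) rather than on the stack, so the real thing belongs next to ctxsw():

typedef unsigned long word_t;          /* one stack cell; size per platform  */

procID pcreate( void (*entry)( void ), int stackwords )
{
    procID   pid = alloc_proc( );                   /* hypothetical helpers  */
    word_t  *sp  = alloc_stack( stackwords ) + stackwords;   /* stack top    */

    *--sp = 0;                     /* the argument pkill() will see: 0 = me  */
    *--sp = (word_t) pkill;        /* where entry()'s 'return' will land     */
    *--sp = (word_t) entry;        /* where the first ctxsw() will 'return'  */
    /* ...plus room for whatever callee-saved registers ctxsw() pops...      */

    pid->sp = sp;
    qinsert( readyq, pid, 0 );     /* no priority in this example            */
    return pid;
}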

Same as with the delta-queue.

As you can see, it's not a lot more complex to achieve the same capability. But it is far, far more flexible than the delta queue.

In that case, your routine can 'give' time back to the operating system by calling pswitch() when it is ready, but not hit the 'return' statement (which would kill it.) All pswitch() does is unlink the current pid from the ready queue, reinsert it back into the ready queue, and then run the top entry of the ready queue by restarting it.

Now... it's important for you to understand a detail. Let's say a thread looked like this:

void f2( void )
{
    extern volatile int j;
    for ( int x = 1; x <= j; ++x )
        {
        // ...do one slice of the work...
        pswitch( );   // give the CPU away; when f2 runs again, execution
                      // resumes right here with x and the locals intact
        }
}

In the delta queue case, it's _all_ about timing.

Note the above comments. I hope they make it clearer. But the basic idea is that it allows you to in effect consider each thread as a separate program. It's really slick.

Imagine there is thread f1 and thread f2, both of which are permanent and don't self-destruct. Imagine there are also two serial ports, in hardware. Both ports share the same interrupt and support code but have separate buffers. And let's say the sgetc() function accepts the port number. Something like this:

int sgetc( serialport_t sport )
{
    while ( sport->count <= 0 )
        pswitch( );
    sport->count--;
    return *sport->buf++;
}

Now, before you jump on me about that one, I'm just trying to keep it simple right now. That routine waits until there is a character. While there is no character that can be fetched, it just keeps calling pswitch(). But once a new character arrives, it fetches that character, advances the buffer pointer, and returns. No, it doesn't check for the end of the buffer, nor does it operate it as a circular buffer. But the point here isn't those problems, and writing the extra code for all that would distract from it.

So, now imagine thread f1 and thread f2 both use this shared function. Let's say that one of the serial ports is a debug port and the other one is the main command/query port. So the parsing is different because the commands and data used are different. So f1 handles the commands/queries and f2 handles the debug stuff. But they both call this function.

So, your main code does this:

procID pid_f1 = pcreate( f1, 100 );
procID pid_f2 = pcreate( f2, 100 );
...

And now the two threads are set up. Within the threads, they do a bit of initialization (open the ports, let's say, and some other unknown things we won't worry about here.) But each one eventually, within its command/query parsing code (which is different, as I said earlier), calls this function sgetc() with the associated port. What happens?

Well, there might be no characters in either buffer. In this case, f1 executes and calls sgetc which finds no characters and then calls pswitch(), which moves on to f2. f2 runs for a bit and calls sgetc, too, but with a different port. sgetc doesn't find any character in its buffer, either, and then pswitch() gets called again. But this time the operating system switches back (if there are only f1 and f2) to f1, but it does so as a return from the pswitch() call. So now you are back inside of sgetc() but with a different sport parameter -- the one that f1 had used when it called sgetc. That sgetc might, this time, find a character and this time return it. Or not. If it calls pswitch() again, then f2 will also return from sgetc's pswitch() call, but from the one where f2 had called it. So the sport parameter will be different, as it should be. And so on.
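
If it helps to see both ends of that, the two threads might be little more than this -- the port names and parse_* calls are only placeholders for whatever your application actually does:

void f1( void )                        /* command/query thread               */
{
    open_port( cmd_port );
    for ( ; ; )
        parse_command( sgetc( cmd_port ) );
}

void f2( void )                        /* debug thread                       */
{
    open_port( dbg_port );
    for ( ; ; )
        parse_debug( sgetc( dbg_port ) );
}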

It makes for very clean, clear, easy to read and maintain code. And helps reduce replication/copying of code.

Once you get it down, you will never want to go back!

The scheduler _always_ knows what the current pid is. The way I usually write it is that before the scheduler starts or restarts a thread, it first removes it from the ready queue and saves the pid in a single, static variable. At that point, the currently running process isn't on any queue anymore. When the current process calls 'pswitch()' the operating system reinserts the current pid into the ready queue, extracts from the top of the ready queue the next process/thread, and saves that into the current pid variable again. Like that.
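
In code form, that bookkeeping can be as small as this (a sketch only; the scheduler itself updates pcur whenever it switches, as described above):

static procID pcur;        /* the one and only "currently running" thread    */

procID pid( void )         /* what a thread calls to learn its own pid       */
{
    return pcur;
}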

So a thread can always call pid() to get its current pid value, if it wants to. For example, if it wants to kill itself, not use 0 to do it, and doesn't want to do it by simply returning, then it might do this:

if ( time_to_die ) pkill( pid() );

I wrote about this in detail near the top of this post. So that is covered, now.

Yes, but as always details are important and must be crafted.

Okay.

Yes. I think you are getting it!

Could be done that way. I sometimes convert the main code loop to something like this:

for ( ; ; )
    {
    // toggle LED
    psleep( pid( ), milliseconds( 500 ) );
    }

That way I can see that the LED is blinking okay.

Well, let's let you read this post and see. I think you are precariously close!!

Jon

Reply to
Jon Kirwan

Separate stacks make sense. Any benefit to providing an array pointer? Also, might it be useful in some situations to have a callback if ever a process is killed? I am thinking of cleanup scenarios...

The concept is becoming much less cloudy. I will not pretend I understand with 100% clarity but I think I have a good enough grasp to see some details. One thing that has me stumped is how we actually execute these functions on their own stacks. Seems to me that this might require some assembly.

Finally, somewhat off topic... this whole time I am wondering if there might be an efficient way to profile the CPU usage of each thread. Perhaps storing an elapsed time for each thread and comparing it to an overall elapsed time? This is just the sort of thing I need. I have no handle on what processes consume what. I'd then be able to see how inefficient my code really is! :)

Exciting! I want to start coding now...


Reply to
eeboy

Yes, it's the difference between using heap space, which is allocated at run time, and static storage, which can be determined at compile time. If you are working on an application where you'd like to have the compiler and linker produce an error if you set aside too much stack space, the arrays make that easier to do. If you are comfortable in your application using heap allocations at run time (and you can get them all allocated earlier rather than later, if you want), then doing it out of heap space (malloc, etc.) works okay.

Heap space can be returned, too. You can reuse static space, also. So it's hard to argue much one way or another there.

Compilers will generate somewhat better code for initializing arrays than via malloc'd pointers. But usually you wind up using a pointer either way, so that's mostly a wash, as well.

Finally, it's possible to do static initialization at compile time. If you know how many stacks you require and know how to set up the initialization values for each thread using c's initialization, then you probably can get a modest win there.

I don't put a lot of stock on one side or another. But if your application has a known set of threads that will exist over the lifetime of the application's execution, it is worth considering the use of arrays for a moment, at least.
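
As a concrete (if hypothetical) sketch of the array style, where the linker can complain at build time if you ask for too much -- pcreate_static() here is a made-up variant that takes the caller's array instead of allocating one:

#define STACKWORDS  64

static word_t f1_stack[ STACKWORDS ];      /* word_t: one stack cell         */
static word_t f2_stack[ STACKWORDS ];

void start_threads( void )
{
    procID pid_f1 = pcreate_static( f1, f1_stack, STACKWORDS );
    procID pid_f2 = pcreate_static( f2, f2_stack, STACKWORDS );
    /* ... */
}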

Easily installed, if you want. That can go into the proc structure. Make the pkill() function use it. You may need a function to get and set it, of course. Set it to NULL, to start, perhaps.
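
For instance, something vaguely like this -- the field and accessor names here are invented, not part of anything already shown:

typedef void (*killhook_t)( procID );

void pset_killhook( procID p, killhook_t fn )
{
    p->on_kill = fn;               /* assumes the proc structure gained this */
}                                  /* field; NULL (the default) = no cleanup */

/* ...and inside pkill(), just before the stack and pid are released:        */
/*     if ( p->on_kill )  p->on_kill( p );                                   */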

Neither c nor c++ supports nested functions (which would make thunking less important... another topic), nor tasks, threads, and so on, which would make this discussion less needed. Setting up a separate stack is really fairly easy to do. It's just memory, after all, used in a special way. But some c compiler options will automatically install "stack check" calls in the prologue of every function, and these calls may cause a fatal error routine to be called if the stack pointers "don't look right" to the code. This can easily be dealt with, but you need to be aware of it in case it happens to you.

The key here is switching. This is the central/core piece that you may need to understand, now. And it's not hard to explain. But as you suggest, it may require either some assembly code or else some very 'interesting' c code that probably isn't at all portable. But it isn't much and it may be the only assembly code you require. Here's an example of pswitch() written in c:

void pswitch( void )
{
    auto GIstatus_t stat;

    if ( !pcur ) return;
    stat = disable( );
    qunlink( pcur );
    qinsert( readyq, pcur, pcur->priority );
    ctxsw( );
    restore( stat );
    return;
}

The code above first verifies that pcur isn't null. That would be a bad thing, generally. But just in case main() calls this function _before_ initializing the thread system, it helps to check it anyway. Avoids a bad problem. Once that is over, it disables the interrupt system. It's not always necessary to disable the entire thing, but in this example I chose to do it for simplicity. Then the current process is unlinked. In this case, specifying the pid is enough since the pid references a node and that node is linked into some queue (doesn't matter which, but as you might guess, probably the readyq.) Then it reinserts it into the readyq, as shown. Once that is done, a context switch takes place. _THAT_ call is to assembly code. Then the interrupt system is restored to its prior state (never assume it was turned ON beforehand -- there is a serious reason for that and I could spend paragraphs talking about why, later -- but for now accept that it is important.)

And finally the function returns. However, it does NOT return to the caller you might think of. It returns to the thread that called this function, which is now at the top of the ready queue. Because once ctxsw() completes, the stack is no longer the old stack of the thread that entered here but is now the stack of the thread that we want to return into. ctxsw() does the work.

Inside ctxsw, is something like this:

    push important registers
    save stack pointer in proc structure of pcur
    set pcur to top of ready queue
    load stack pointer from proc structure of pcur
    pop important registers
    return from call

The "important registers" are those which the c compiler requires NOT to be changed within a c function. Many c compilers support a number of registers that may be scratched by the callee. Those do NOT need to be saved. But they also usually have some that must be preserved across a call. In those cases, you need to push them onto the current thread stack. Or else save them into the proc structure. Either way works. Then you switch stack pointers. Then you restore registers (either from that stack by popping them or from that proc structure.) Returning then simply uses the new stack's return address, which will take you back to where ever it was that pswitch() was called in that thread.

The mechanics are dead simple and hard to get wrong. You _do_ need to investigate the c compiler's requirements. But it's not complex. Sometimes, you will have hardware state or coprocessor state that must also be saved and restored. But the above is the basic gist of it. And in fact, you may find the ctxsw() function written in assembly to be little more than a dozen lines of assembly -- and even less, possibly.

In short, yes. But yes, and not much.

If you have a running counter that is available in the hardware, that could easily be used. You could set up your pcreate() function to vector to just about any initial function you wanted, which would always take a snapshot of it and place the value in the proc structure. Then, as you suggested earlier, set up so pkill() either calls an ending function or else does the snapshot itself before terminating the thread. You will still want to have some place to put the results. That could be handled any number of ways. One of them might be to leave dead threads with a node that simply isn't linked into anything, but since you have the pid for it you can access the final value with that. But your imagination can probably answer this many ways.
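
To give that a concrete shape: the snapshot-at-create/snapshot-at-kill idea measures a thread's lifetime; if what you really want is CPU time per thread, you could instead accumulate at every switch. A rough sketch only -- cycles_now() stands in for any free-running counter you have (on the Cortex-M3 the DWT cycle counter is one candidate), and the 'cycles' field is an invented addition to the proc structure:

static unsigned long t_in;             /* when pcur last got the CPU         */

void profile_out( void )               /* call just before pcur changes      */
{
    pcur->cycles += cycles_now( ) - t_in;
}

void profile_in( void )                /* call just after pcur changes       */
{
    t_in = cycles_now( );
}

/* usage = 100 * pid->cycles / total elapsed cycles, computed whenever you like */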

It's a lot of fun, I admit. You've got me thinking about writing a book that lists a great many different operating system patterns in a sequence from the most simple to ones that are fairly complex -- but written entirely for the novice. They could then select the features they are looking for and start at that chapter and if any questions about concepts come up return to earlier chapters to gather up that part. In the end of it, I could provide a fully featured operating system with the ability to #define in or out any particular feature and have the data and code footprint of it automatically resize itself to include only those parts that are necessary for the job. (Already have one here.)

Probably sell 5 copies, at least. ;)

Jon

Reply to
Jon Kirwan

I started to investigate this but I can't find anything in IAR's documentation... unless I am not using the correct terms.

I haven't used anything but IAR; however, I can't say that I am very impressed with their documentation.

I haven't looked to see what books exist out there for the novice, but that sounds like an excellent idea. One thing that always helps me is when examples relate to the real world. For example, the thing that really struck me a few posts back was realizing how this cooperative OS would really impact me by getting rid of any small delay loops where I just burn CPU, or by cleaning up the long waits which I implement with several callback functions.

Also, when you present the idea with a nice description and supplement each concept with just a bit of code (as you have done all along) then the idea seems to stick better (for me at least).


Reply to
eeboy

Not sure if this is everything but I think the following registers are placed on the stack in a typical call:

r0 r1 r2 r3 r12 lr pc xpsr

This info did not come from IAR though.


Reply to
eeboy

Just to be sure, you should note that ctxsw() is a hand-written routine you need to provide. Doesn't exist anywhere if you don't write it. But I think you are talking about finding out the c compiler "contract" about the registers' usage. I would imagine that IAR documents that somewhere. The place where such things are usually found is in the documentation about writing assembly code that will conform to their c compiler -- some kind of 'mixed language' document or else just the assembler guide. I checked these, to start:

formatting link
formatting link

In the second one above, there is a sub-section titled "Passing values between c and assembler objects" and an entire section called "mixing c and assembler." But it is on page 99 or so that they start getting to the meat of the matter.

  • R0 to R3, and R12, are considered scratch registers.
  • R4 to R11 are preserved registers.
  • R13 is the stack register.
  • R14 is the link register (return address).
  • R15 is the program counter.
  • R0, or R0:R1, is used for returning function values.

There may also be a PSR (status register) and other information needed, as well. As I said, I'm not familiar with the Cortex-M3, yet.

It's a little unusual, reading the above, to notice that there is no special register reserved for a 'call frame pointer.' But the guide appears to talk a lot about some "call frame" information that can be provided to the debugger -- discussed starting on page 103 -- that appears to suggest that there is no frame pointer reserved for the purpose but that the information needs to be provided to the assembler in some kind of pseudo ops. A little odd, and I'm still trying to consider exactly how/why that method was chosen. Most particularly, I am wondering exactly how a stack unwind might occur, given this lack.

Perhaps someone wiser about this might comment. (If anyone is listening at this point.)

Yes.

Thanks. I think that's a good idea to follow.

Jon

Reply to
Jon Kirwan

I am a little confused about this statement in the Cortex M3 Technical Document.

formatting link

In 2.3 they state "stack point alias of banked registers, SP_process and SP_main". Seems like there is more to the SP?

Also, on page 104 of the second document you provided a link to it states that R13 is also the CFA... which is the SP?


Reply to
eeboy

The DDI documents from ARM will _not_ define what compiler vendors do with general purpose registers. They will, of course, help determine what they do with special purpose registers. So I generally do NOT go to the ARM docs for compiler information, except as a supplemental understanding of what is going on and why.

Regarding this SP_process and SP_main, I think (and I've not read the doc you mentioned above, but am instead going upon long experience) it must be that there are two different stack pointers in the hardware -- the main one that starts out at reset and another one that is offered to an operating system for "untrusted" applications. Or possibly for interrupts. I wouldn't worry much about it just yet, though it is useful to know if you are writing a genuine, general purpose operating system. Then you'd need to know what is what.

But in the case of something like "just this program I'm writing" context, you probably don't have to know in order to make things like a simple thread-based operating environment work out okay. You mostly need to know what the compiler provides you.

The reason is simple. If there are two hardware stacks, let's say one for "privileged mode" and one for "application mode" then you will only really care about the one that the compiler vendor works with. Chances are, they must generate code that works under an operating system, such as Linux or FreeBSD or VMWare or who knows what else. Those operating systems will care, of course. But they will likely assign just ONE of them for use by the compiler. That's the one you are stuck with, unless you really want to go 'whole hog' and develop something bigger than just a thread-based environment.

For now, that's all we care about really. So that's why I don't think it matters today. But I think if you read those docs with what I said in mind here, it will probably help clear up something for you. I'll look myself a little later on.

Jon

Reply to
Jon Kirwan

Compilers should adhere to the ARM Procedure Call Standard. See

formatting link

Here's the heart of my context switch routine I use for Cortex-M3:

context_switch:
        push    {r4-r11, lr}
        str     sp, [r1, #TASK_SP]
        ldr     sp, [r2, #TASK_SP]
        str     r2, [r3, #SCHED_CURRENT]
        pop     {r4-r11, lr}
        bx      lr

r1 points to old task descriptor. The task descriptor is a struct containing (among a few other things) the stack pointer for the task at offset #TASK_SP

r2 points to new task descriptor that we are switching to.

r3 points to a scheduler struct, which contains a pointer to the current task at offset #SCHED_CURRENT.

r0-r3 and ip (r12) are the caller-saved registers, so they aren't preserved. Status flags are also not saved.

I save registers r4-r11 and lr (r14) on the stack. sp (r13) is saved in the old task struct. sp is then read from the new task struct. The descriptor for the current task is then stored in sched->current. Registers r4-r11 and lr are popped from the new stack, and we return into the new task.

During the context switch, interrupts are enabled (I don't do context switch in interrupt context).

Reply to
Arlet Ottens

Don't worry, there are still people listening. Keep up the good work! And do post here if you write that book, I may even buy it.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

There has been a little distress selling on the stock exchange.
		-- Thomas W. Lamont, October 29, 1929 (Black Tuesday)
Reply to
Stef

Interesting. Thanks. I had read, and forgotten about, that there was an ABI standard for the ARM7 and the implications of having one. Leaves open the idea of platform differences and that there may be several platform ABIs and not just one. As well as something somewhat different for c vs c++ (see the enumerated types, for example.) I note there is another term in the document, barely used or mentioned, an EABI. That's also interesting because that is the subject at hand here, in my opinion.

And I don't mean to confuse a binary standard (ABI) with a call standard (for c, or c++, or fortran, or ada, etc), but the implications seem ripe to me.

So are there variations that you are aware of that might be interesting to know about? Or does everything, in practice, conform to a single call standard, now? No matter the language, etc?

Okay. My pseudo code for the ctxsw() function looks a lot like that.

The parameters passed in don't need to be preserved. Understood about the status, as well. Most compilers assume that the status is destroyed across a call. I seem to recall running into one or two that didn't assume that, though. (Not ARM, obviously.)

Okay. So. What about a frame pointer? How do you do a stack unwind? Or does the compiler always keep track of exactly where, offset-wise, each local variable is as it generates code, so that a frame is not required, nor saved in the prologue? And if that is done, then I don't see an easy way to handle unwinding a stack until an exception handler is uncovered and usable, without a completely separate structure.

And thanks a lot for the example and explanations.

Jon

Reply to
Jon Kirwan

I'll try. I suppose I've got the OP (eeboy) convinced to proceed. So proceed, I will. I just wanted to make sure he was ready for more before going on.

It takes work to do well enough to bother. Hurdles are high and time is precious. But if I do write it, I'd probably provide it as a free Amazon book for the kindle or at cost for anyone wanting paper versions. I don't see a reason why I should tie up any emotions imagining an income from it.

Here is how a Preface for it might begin:

direction.

:)

Jon

Reply to
Jon Kirwan

Not sure about all the little differences. I've only carefully looked at the procedure call standard to understand the caller/callee saved registers. As far as I can see, all the versions are identical in that regard. The language shouldn't matter, as the purpose of the procedure call standard is to define a common platform so that different languages can be mixed in a single executable.

Basically I treat a context switch as if it's doing a regular subroutine/function call. When the context switch code is called from within a certain task context, it will eventually return to that same context, and to the same function calling it. As long as I return with the stack and registers looking the same as in a normal function call, it should be okay, no matter what the compiler uses the stack or frame pointers for. This means that the stack should be preserved, as well as the callee-saved registers.

Note that some OS/scheduler implementations run the context switch from both interrupt and user context, and in that case it would be important to save/restore the flags in the context switch code.

In my design (which is non-preemptive) I only run the scheduler in normal execution, not from interrupt handlers, so I don't have to worry about the flags, since they aren't preserved across function calls.

Reply to
Arlet Ottens

Thanks. But still no information about stack unwinds above. By this, I mean perhaps something like finding exception handlers on the way backwards through stack frames. Perhaps this isn't something you are aware of?

Anyway, thanks, Jon

Reply to
Jon Kirwan

If the program looks something like this:

- set up stack frame for exception - context switch - throw exception

I don't see why the context switch code needs to do anything special to allow proper stack unwinding in the exception handler. The stack before the context switch looks exactly the same as after the context switch, so it's like the context switch isn't there.

Reply to
Arlet Ottens

I agree. My question isn't about the context switch. It's about what I perceive as a lack in register assignments in the ABI standard (and perhaps in the compiler docs I read) for a 'stack frame pointer.' Wiki's 'Call_stack' entry discusses it a bit. There is even a nice picture under "Structure" you might look at, though it doesn't show the idea of saving the old frame pointer on the stack, which is what would usually be required for an unwinding process.

Anyway, it doesn't affect a context switch. I'm just curious about c/c++ compiler implementation details for the Cortex-M3.

Jon

Reply to
Jon Kirwan

Okay. So let's get started. I talked about the "next pattern" as being a small, cooperative operating system. Actually, that's not quite true.

As you now know, I spoke about the pswitch() function and the ctxsw() function that is written in assembly and another poster has added their own ARM-based cooperative task switch assembly code to close the deal, so to speak.

To be honest, the next step isn't necessarily to include the delta queue (for sleeping) and add separate stacks to the whole idea. A separate step in the direction of separate stacks and cooperative task switches -- WITHOUT the delta queues -- is technically next, probably. In such a case, there are no sleeping tasks at all. They are all ready to run all the time. All they do is switch between each other, cooperatively, in a round robin fashion. If they need time, they use it. If not, they pswitch() right away and continue the loop. That whole thing can be done without any queues, link-next and link-previous pointers, and so on. Just put the task nodes in an array and walk the index from one to the next and so on, and then back around again. So it's really very simple, and that lets you concentrate on understanding how the separate stacks work and how the core routine, that ctxsw() function, does its job. Plus you get a better feel for WHY you should care, too.
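
To make that concrete, the queue-less version might be as small as this. A sketch only -- here ctxsw() is imagined to take explicit old/new task pointers, much like the routine Arlet posted, and the names are illustrative:

#define NTASKS  3

struct task { void *sp; };             /* saved stack pointer, nothing more  */

static struct task tasks[ NTASKS ];
static int         cur;                /* index of the task now running      */

void pswitch( void )
{
    int prev = cur;

    cur = ( cur + 1 ) % NTASKS;
    ctxsw( &tasks[ prev ], &tasks[ cur ] );   /* save old sp, load new sp    */
}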

So before I head out on a broad lark, do you want me to divert over and spend time on helping you implement a cooperative task switch that doesn't support sleep, at all, so that you can focus your attention entirely on getting that working and well understood, first? (No need for queue insertion, deletion, etc. -- just a fixed set of threads.) Or should I just jump in as I'd suggested before and move towards a small, general purpose operating system that includes the task switch feature with separate stacks for each thread and also the delta queue for sleeping threads? That would require some attention to different queues, plus the context switch process.

(Were I writing a book, I'd take the delta queue and the context switch as two foundational blocks that are to be independently mastered before being combined.)

Anyway, your call. Let me know where you feel you stand on the context switch -- does it make sense, yet? Perhaps more information about the basic memory model for a program would help? (It helps me, anyway.)

Jon

Reply to
Jon Kirwan

In the older ARM standards, there was a special 'frame pointer' register, r11 (r7 in Thumb), but the new standard doesn't provide such a special role. I suspect that the usage of the frame pointer was too compiler dependent, so they've dropped it.

I've never really looked at how some compiler used it. I don't use C++, and I've always disabled frame pointers in C (I don't use the -g option either, so frame pointers only waste a general purpose register).

Reply to
Arlet Ottens
