Disk imaging strategy

Grrr... my bad! I kept thinking solely about the netmca and how *it* is used. Forgot entirely about your development environment! :-/

[My comments/reservations should make more sense in the netmca context...]
Reply to
Don Y

(sigh) You *really* should do your homework before shooting your mouth off.

And, if you *think* about it, it is EASY to come up with a "better gzip"! THINK ABOUT IT before you stick your foot in your mouth. If you can't come up with a compressor that achieves rates of 4000:1 ON THE BACK OF A NAPKIN then you shouldn't be writing code. ANY code!

(Remember, *you* can pick the data to be compressed! gzip has to live with whatever data it *encounters*! Be wary of "assumptions" as they'll always trip you up!)

I'll wait for you to post your 4000-fold compression algorithm...

Reply to
Don Y

Hah, nothing that bad about forgetting something, come on. BTW the netmca runs a complete DPS on it, shell windows and all. It even has the development software on it (not the data of course), not that users use it a lot, not to my knowledge at least. It just has no display controller; it relies on the network to be VNC accessible.

Dimiter

Reply to
Dimiter_Popoff

I've been most concerned with *appliances*, here, because they have the most restricted (human) interfaces. E.g., I can create, write and delete (somewhat) arbitrary files on the disks in my *printers*... but can't run executables there (well, this is a small lie but not a practical one!).

The same sort of thing is true of my NAS boxes... I can freely and easily -- even programmatically -- create and delete files (e.g., from a remote host mounting them as foreign filesystems). But, executing an arbitrary executable DIRECTLY on those boxes isn't possible (not least because the OS isn't openly documented -- just like the OS on the printers).

Ah, OK. So, you don't have an "embedded" version of it with reduced capabilities/features.

Reply to
Don Y

No need for that. Much of the functionality even fits in 2M flash... how thinkable is that (about 1.5 to 2M lines of VPA code, which is not generous with CRLF, unlike certain HLLs :-) ). But booting off flash is intended just to be able to restore your HDD via the net if you mess it up. I have smaller versions of course; e.g., I am now tortured by a small coldfire (mcf52211) which has a tiny derivative of dps (mainly the scheduler and some library calls, about 7 kilobytes total). Bloody thing won't go into low power mode, which is specified to at least halve the consumption; nothing of the sort, *zero* effect from the core entering that mode. Cost me two days so far, to zero result. Not that I can't live without that mode, but why it does not work drives me mad.

Dimiter

Reply to
Dimiter_Popoff

FYI: gzip is _not_ the last word in general purpose compression.

gzip uses on-the-fly LZ77+Huffman (DEFLATE) dictionary compression. gzip is pretty good, but 7z's LZMA usually does better.

However, no on-the-fly compressor can do as well as a tool that performs batch analysis of the file(s) prior to compression and creates a dictionary customized for the batch.

There used to be a number of batch oriented compression tools, but their 2-pass approach made them ever less suitable for handling ever larger batches. When LZ was introduced, streaming compression became "good enough" for general purpose and so the batch approach fell out of favor.
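[The batch/dictionary idea hasn't vanished entirely; for instance, zstd exposes it directly as a trainable dictionary. A minimal sketch, with the sample, file and dictionary names purely illustrative:]

  # pass 1: analyze a batch of sample files and build a shared dictionary
  zstd --train samples/*.dat -o batch.dict

  # pass 2: compress (and later decompress) each file against that dictionary
  zstd -D batch.dict somefile.dat -o somefile.dat.zst
  zstd -D batch.dict -d somefile.dat.zst -o somefile.dat.out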

While reserving judgment on whether Don could beat gzip for general purpose, he certainly should be able to beat it for his specialized purpose.

George

Reply to
George Neuner

So, the disk was used as cache, temporarily, while the file was being built... perhaps a different *part* of the media (so as not to interfere with files that were being built "correctly"?)

Yet another case of initial assumptions ("gobs of memory") being "off"! :>

Reply to
Don Y

Exactly. If you are compressing *once* -- and decompressing "often" (often > 1) -- AND can afford the time "up front", you can achieve better compression rates. E.g., brotli, zopfli, etc.

*AND*, if you can choose the data that you want to compress, you can obviously design a compander that exploits that knowledge for higher compression rates!

Goal isn't to be general purpose. Rather, to be good at *this* application!

E.g., gzip can't do better than ~1000:1 (on carefully constructed data sets). If you can choose the data that you expect to be encountering (e.g., even the same data that gzip compresses to 1000:1!), you can easily beat that!

gzip has to be all things to everyone and make tradeoffs because it ASSUMES it has no knowledge of the data. There's no reason to similarly encumber yourself when you *have* control of the data!

Then, fall back on gzip (or any other suitable archiver) for the data over which you have *no* control! The mix of controllable and uncontrolled data determines your OVERALL compression rate. If the controllable data is plentiful enough, then it beats gzip when applied "overall".

[In a few minutes, you can write a trivial compressor that will beat gzip (or any similar compander) even when the balance of controlled to uncontrolled is small! We're just waiting for Jan to take the time to write that code and come to that realization...]
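[For a concrete -- and deliberately silly -- illustration of that point, here is a "compander" for data *you* get to choose, in this case a long run of zeroes. A sketch only: the file names and the 64MB count are arbitrary, and it assumes ordinary dd/head/wc:]

  # "compress": the chosen data is 64MB of zeroes, so record only its length
  dd if=/dev/zero bs=1M count=64 2>/dev/null | wc -c > image.rle   # ~9 bytes on disk

  # "decompress": regenerate exactly the same bytes from that length
  head -c $(cat image.rle) /dev/zero > restored.bin

[That's better than 7,000,000:1 -- which is the whole point: once you control the data, 4000:1 stops being remarkable.]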
Reply to
Don Y

On a sunny day (Thu, 13 Nov 2014 13:30:18 -0500) it happened George Neuner wrote in :

Only if he knows something about the filesystem, but he claims 'agnostic'. Yes, if you make assumptions about the data you can beat gzip, IF your assumptions are right.

A long time ago there was a fun discussion in sci.crypt, and I wanted to make a joke about infinitely long files (of random numbers): 'Just zip it'. Now, if you think that was simple... I took that question to sci.math and, after finding out about the many types of infinities, replaced the whole file by one token, 00... There is more to it; let Don fight with it, humanity will appreciate his better compressor when it is released as open source. It is not only the filesystem, it is also the sort of data stored.

Reply to
Jan Panteltje

OK, I've accumulated data from most of the boxes that I have here, plus one (make/model) laptop from one of my pro bono gigs. These confirm that the "trivial" approach I mentioned will work without requiring any "filesystem-aware" code *or* a bulky OS installed solely to "restore" the image (i.e., format the potentially corrupt media, recreate empty filesystem(s), restore file content and any special "attributes", verify the filesystem(s)' integrity, etc.). I.e., *MY* restore algorithm is a few KB instead of many MB!

Typical data when processing "binaries" below. Data for the SPARC's and NAS boxes are similar. The trivial algorithm always yields the SMALLEST filesystem-independent image (dump/tar/partclone all require an *appropriate* filesystem to be recreated prior to restore):

+++++++++++++++ Executive Summary (KB rounded up) ++++++++++++++++

medium              /Archive    /Playpen    /Playpen    a laptop
                       notes     sources                NTFS only!
                                70% full    16% full

"as was"
  partition         82576160      524633      524633    72742320
  dd | gzip          2402297      242157      175348    42242153

filesystem aware
  live data est     13897220      328698       76335     8516172
  tar               13322170      330420       78670     8464310
  tar | gzip         2219766      154853       27264     5162615
  dump              14083020      332660       80460         [1]
  dump | gzip        2268348      155175       27447         [1]

fill w/big files
  dd | gzip          2404488      175370       48030     5532906
  walki             14499543      354469      105030     8661819
  walki | gzip       2322514      175216       47666     5480802

fill w/many files
  dd | gzip          2404631      175370       47932         [2]
  walki             14479387      354466      105032         [2]
  walki | gzip       2302045      175215       47564         [2]

Clonezilla           3005784      189120       50958     5168376

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[1] requires a fabricated fstab(5) in the live CD image to test; but
    dump(8)'s performance tends to be on a par with tar(1)
[2] skipped this in order to get the laptop off to a student sooner,
    since "fill many" results closely track "fill big" results.

[NB: This is just for a "magic string" of "all zeroes". Results for
walki don't vary when that magic string is changed. But, gzip's
efforts on the raw device worsen as the string becomes "less regular".
FWIW, maximum compression that gzip (without "-9") can achieve is
1029:1 on *long* stretches of "zeroes"; walki does 4096:1 on *512B*
stretches!]
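[For rough context on those two ceilings: DEFLATE's longest match is 258 bytes and a repeated match costs on the order of 2 bits, while walki's quoted figure works out to one bit per matching 512-byte sector:

  gzip   ~  (258 bytes/match x 8 bits/byte) / ~2 bits per match  ~ 1032:1  (1029:1 observed)
  walki  =  (512 bytes/sector x 8 bits/byte) / 1 bit per sector  = 4096:1
]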

The executables involved:

# for command in dd gzip tar dump; \
    do loc=`which $command`; \
       ls -al $loc; \
    done

-r-xr-xr-x 1 root wheel 27150 Apr 12 2014 /bin/dd

-r-xr-xr-x 4 root wheel 35991 Apr 12 2014 /usr/bin/gzip

-r-xr-xr-x 3 root wheel 133411 Apr 12 2014 /bin/tar

-r-xr-xr-x 2 root wheel 64878 Apr 12 2014 /sbin/dump

[NB: aside from dump(8), all executables (mine and the system's) are dynamically linked. However, my library reliance is essentially just fopen/fclose and fread/fwrite. The Windows executables (below) are linked as compact exe's]

Clonezilla is hard to "size" as it relies on a whole OS *under* it (many many megabytes) so it's a silly comparison. Deploying it risks latent bugs consequential to its sheer size! By contrast, walki is intentionally "trivial" for that reason (see below)!

Note tar(1) and dump(8) also rely on other capabilities being in place before a restore can take place. By contrast, walki reflects the complexity of its decompressor, as well! (on bare iron)

# ls -al ~

-rwxr-xr-x 1 root wheel 5839 Oct 21 00:43 const

-rw-r--r-- 1 root wheel 532 Oct 21 00:38 const.c

-rwxr-xr-x 1 root wheel 7757 Oct 21 10:51 fill

-rw-r--r-- 1 root wheel 16587 Oct 21 10:35 fill.c

-rwxr-xr-x 1 root wheel 7757 Oct 25 17:27 fillx

-rw-r--r-- 1 root wheel 3162 Oct 21 10:47 magic.h

-rw-r--r-- 1 root wheel 512 Oct 21 09:24 noise

-rw-r--r-- 1 root wheel 512 Oct 21 00:41 random

-rw-r--r-- 1 root wheel 3162 Oct 21 00:39 random.h

-rw-r--r-- 1 root wheel 512 Oct 21 00:41 regular

-rw-r--r-- 1 root wheel 2835 Oct 21 00:40 regular.h

-rwxr-xr-x 1 root wheel 6414 Oct 21 00:02 walki

-rw-r--r-- 1 root wheel 4312 Oct 21 00:02 walki.c

-rw-r--r-- 1 root wheel 512 Oct 21 00:43 zeroes

-rw-r--r-- 1 root wheel 2871 Oct 21 00:43 zeroes.h

C:\FFFF> DIR

10/21/2014  10:21 AM            48,858 bigfill.exe
10/25/2014  12:52 PM            48,893 manyfill.exe
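[The generic shape of such a "fill" step -- writing a file of the chosen pattern until the filesystem is full, then deleting it so the freed blocks become trivially compressible -- might look like the sketch below. This is only an illustration of the idea, not the actual fill/bigfill code, and the mount point is a placeholder:]

  # fill free space on the target filesystem with zeroes...
  dd if=/dev/zero of=/mnt/target/FILLER bs=1M    # runs until "disk full"
  sync

  # ...then release it; the now-"deleted" blocks are all-zero sectors
  rm /mnt/target/FILLER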

There are enough pro bono machines to make it worth (my!) while to automate this process. So, I would manually install the OS, drivers, updates, applications and configure the box prior to having the image created and "installed" on an unused partition (along with my "restorer"). A copy of the image can then be archived so that other identical make/model machines can be built, quickly (from the image for the first machine of its type)!

Guesstimating ~8GB for the compressed image, I can handle about 120 different images with a 1TB repository. At 20-40 different images per school year, I could probably cut the repository to a 500G and still be "good" for 3+ years (and migrate the oldest images off the repository each year thereafter).

[Note losing the "master image" is just an inconvenience. I could always manually recreate the original image from the notes in my logs. If push came to shove, I could chase down a laptop having that particular image and reclaim the image from its "restore partition". I'm not keen on chasing down homeless students so it may be better to mirror the repository and pray for the best]
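[The arithmetic behind those figures, rounding loosely:

  1 TB   / ~8 GB per image  ~ 120 images
  500 GB / ~8 GB per image  ~  60 images  ~ 3 school years at the ~20/year low end
]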

Creating an image requires a pass over the partition's contents followed by some crunching. While not a huge undertaking, it is, nonetheless, time consuming. And, much of that time is just spent "waiting" -- relatively easy to automate for UNATTENDED operation (I have no desire to stare at a screen waiting for a system prompt to reappear!)

But, I've got a GREAT opportunity to gather data to quantify the *actual* performance of this algorithm; I can record the size of the medium, amount of "live data" on it and the size of the final image. Of course, these will tend to be very similar numbers for "Windows machines"; other numbers for Mac's; laptops will differ from desktops; etc. And, there will be some slight differences from model to model owing to driver differences and other per-user "customizations".

This gives me another 20-40 data points (assuming I see 10 of each make/model in the 200-400 yearly donations) to add to the few dozen machines that I have, here.

[Also remember that the algorithm applies to *partitions*, not *machines* or *disks*. As most machines have more than one partition, nowadays, this adds to the number of datapoints.]

But, I can ALSO explore how the other approaches that I evaluated WOULD HAVE performed on each of these machines! I.e., show why my approach is "better" with hard numbers beyond the data that I have collected for *my* machines with *my* (biased?) disk contents.

However, stepping back a bit, I also have access to the machines in their *donated* conditions/configurations! These should exhibit much more variety than the machines that I will be "producing". At the very least, they will typically have *some* "user files" -- even if only things like browser caches! Even machines from the same (business) donor will have differences in the "user files" and usage history that are evident on the media. ESPECIALLY in the "empty" (deleted files) portions -- which pose the biggest problem for a filesystem-agnostic imager: the compressibility of "deleted data"!

So, I would like to run the same sorts of experiments that I've run on my machines on those donated machines *prior* to cleaning them up and formally imaging them. I.e., look at them "as is" and explore the different approaches that I've already evaluated. Then, tabulate the data from those machines in their original condition "as was" along with the resulting data after their formal imaging.

This gives me *another* 200-400 data points to add to those already mentioned! And, another 200-400 NEXT school year... and the year after that... etc.

Instead of just documenting my algorithm and its derivation, I can present data that puts its performance in (some) context... with machines with which I have had no previous influence (i.e., in the "as donated" state)! Instead of just hand-waving other potential performance scenarios as "your mileage may vary".

Given that this *images* drives (i.e., lossless), I can also collect samples from colleagues on additional "non-Windows" machines (e.g., see how it fares on Alphas, SGI's, oddball OS's, other appliances, etc.) -- it's non-destructive so no risk to the data there!

Now, the problem: Running the experiments that I've run on *my* machines is time consuming. You're not just imaging the medium ONCE but, rather, several times! With different algorithms, etc. Even though you are discarding the resulting images (i.e., after harvesting the statistics from them), the time to create them is something you have to live with.

Imaging 20-40 make/model donated machines, no big deal; 200-400, still manageable -- though painful. But, to evaluate *multiple* strategies on *each* of them is just a MONUMENTAL effort!

In a followup to *this* post, I'll post a sample of my notes from when I started exploring this problem. It illustrates the sorts of computational effort that goes into evaluating algorithms on *one* machine ("partition").

[It also shows the effort I put into a question *before* I post it!] [NB: I'm not really interested in any comments re: my process. I've already made note of most of the obvious improvements to speed things up and figure judicious use of some pipe fittings can make a significant impact in the amount of data moved and processed. E.g., since the compressors evaluated thus far operate in streaming mode, I can pull data off the medium and feed all compressors IN PARALLEL. This allows me to read the disk *once* and tabulate several results.]
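[A sketch of that single-read, parallel-feed arrangement, assuming a bash-style shell with process substitution; the device name and the particular compressors are placeholders:]

  # read the partition once; tee the stream into several compressors at once,
  # keeping only the compressed byte counts for the tables
  dd if=/dev/sda1 bs=1M 2>/dev/null | tee \
      >(gzip -c | wc -c > size.gzip) \
      >(xz   -c | wc -c > size.xz)   \
      > /dev/null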

Given those sorts of computational efforts JUST FOR THIS 'AS WAS' DATA COLLECTION, where is the best allocation of resources in the "test fixture" to minimize the cost (time) of gathering this data?

Remember, I can tailor the *final* production images to further reduce the size of the images by creating an "empty" partition (D:) in which the students can keep their "user files", thereby reducing the size of the *imaged* (system) partition. *But*, I can't do that with the media contents AS DONATED! I may be stuck imaging 160G, 250G or even larger "single partition" systems MANY times just to see how the algorithms perform! It's "wild data" so I want to exploit it before *losing* (discarding) it!

Keep in mind the costs of "manual intervention". E.g., if I have to pull a drive from a laptop and install it in a fixture, then that adds to the cost (considerably, because I can't do that if I'm not available at the instant that it needs to be done). Or, if I have to cycle power to the test fixture to install that drive then any data collection that is "in process" either has to be restarted or checkpointed.

I see two possible physical configurations:

- laptops/desktops tethered to test fixture via network cables

- (pulled) drives tethered in external USB drive enclosures

External enclosures can cheaply be replaced/discarded; installing drives *in* a hot/cold server bay leads to lots of wear-and-tear on the server's hardware. Network connection is relatively low wear-and-tear because the machine is replaced by its successor when done! (the connector on the machine never being needed in subsequent tests on OTHER machines!)

Each of these allows me to support SATA and PATA drives without letting that influence the choice/design of the test fixture. The external USB enclosure route has the downside that the number of such enclosures that I have available PER DRIVE TECHNOLOGY limits the number of drives that I can process "in parallel". And, servers tend not to be known for their "high performance" USB implementations!

(I will have already handled the SCA/SCSI/FC drives that I use *here* with other hardware, so those won't carry over to this fixture. Nor will I have to deal with different CPU families, endian issues, etc. -- just PC/Mac laptops/desktops)

Remember, I can't babysit this box. Ideally, I want to plug in a bunch of machines/drives and walk away -- returning when I am reasonably sure they are "done"... so *I* am not waiting for *it*! I budget 10 hours per week for pro bono stuff and am not keen on letting that number rise -- especially if it is because I am twiddling my thumbs *waiting* for a test to finish!

I lean towards the "tethered via network" approach as it is easily expandable (add another NIC on the server) in the test fixture. And, can offload some of the processing *to* the laptops. I.e., PXE boot them and let them run the tests on their own drives with their own data using their own CPU's!

But, thinking *harder* about this, laptops tend not to be as "resource-ful" as servers. Less RAM, slower processors, AND SLOWER DISKS! The number of *desktop* systems will dwindle to zero as they just aren't very portable for kids with no permanent place of residence. And, desktops would each require a keyboard/monitor (or, share *one* via KVM/sneakernet) to be available during this testing.

Given that this would require reading the disk multiple times, (i.e., laptops are not likely to have enough RAM to be able to cache the entire disk's/partition's contents!) leaving the only access to that media hiding behind the laptop's CPU may be a poor choice.

OTOH, shipping the data from the disk across the network could put a similar bottleneck in place (e.g., laptops that only have 100Mb NIC's are effectively moving data at USB2 speeds). As well as dramatically increase the amount of primary/secondary storage required on the server (imagine testing 3 or 4 laptops each with 160G drives -- in addition to the server's needs)

This *suggests* using a combination of approaches:

- PXE boot that *starts* the test process

- the first thing that it does is dd | gzip -> server; this makes a copy of the raw device available to the server and does so at a reduced network utilization factor (a rough sketch follows this list)

- server sucks up gzip'd images and starts *its* processing on them

- meanwhile, runs other tests on the system under test, as resources permit
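[A rough sketch of that first step, assuming a netcat-style transport -- hostname, port, device and file names are all placeholders, and netcat's listen syntax varies between implementations:]

  # on the PXE-booted laptop: stream the raw partition to the server, lightly compressed
  dd if=/dev/sda1 bs=1M 2>/dev/null | gzip -1 | nc imageserver 9000

  # on the server: catch the stream and queue it for later crunching
  nc -l 9000 > /spool/laptop42.sda1.gz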

Ideally, let the server and (each) laptop cooperate to determine which portions of the testing each should perform. Whether you:

- drive the testing from the laptop (i.e., the laptop RJE's tests to the server and, if it refuses to perform them, it performs them itself) OR

- drive the testing from the server (i.e., if it's too busy, RJE the test to the laptop)

This would allow the resources available (laptop(s) + server) to be reallocated dynamically.

What I envision as my "process" is:

- configure laptop for PXE boot and other "setup" options

- connect to (a standalone!) network

- initial application loads and presents *me* (at the laptop's keyboard) with a menu

- I specify/select an identifier/description for this machine (the server already has its MAC address from the DHCP exchange before PXE) The MAC could reduce the list of potential "known machines" that are offered to me as possible choices; I can create a new one.

- I indicate how I want this laptop handled:
  + get it into production state ASAP (so I can power it down and disconnect it and give it to a waiting student). Offload a copy of its disk image and process that on the server when time permits.
  + let the laptop do whatever part of the experiments makes sense given the current load on the server by other such laptops
  + let it do ALL of the experiments (because I think it has ample resources to do so effectively and would rather leave the server's resources for other laptops to use)
  + allow its resources to be used by others when its experiments are concluded (i.e., prior to installing its final image)

- I walk away

- as tests are completed on the data present on the laptop's media, the results are added to a database (running on the test server *or* another box) by whatever "ordered" the test (server vs laptop)

- when the tests are complete *and* the laptop's resources are no longer needed elsewhere, the laptop's image is created and/or loaded from the "image server" (which may be another box) and the laptop powered down

Now the question hinted at, initially:

So, what resources would be best to have on the *server*? Gobs of RAM? (i.e., keep an *entire* disk image resident in RAM so that different algorithms can run at "CPU speed" instead of being I/O bound) Raw horsepower (i.e., run the compression algorithms in the least amount of time)? Some balance of the two (e.g., SSD as "slow RAM/fast disk" with real RAM+CPU dedicated to the actual crunching)?

Remember that I want (need?) to process several laptops at a time. I figure I need to *complete* 10 machines each week -- so, probably copy the contents of the first machine onto the server so I can start manually building that "production system" while the other 9 machines are undergoing their "AS WAS" experiments. Then, install the production image on all 10 machines and "call it a day"! :>

[Keep in mind that a test fixture can be exercising them over the course of that entire week, if need be, and *my* time can still be capped at 10 hours! I'd prefer not running a power-hungry server any longer than necessary, though!]

Copying the disk contents from *all* machines onto the server would require a secondary store big enough for 10 "as was" images -- i.e., ten times the size of the disks in the laptops, regardless of how much/little of that disk eventually is used in the production image! This can be an issue for a server-side SSD to expedite the operations!

I'm looking for criteria to use in picking a suitable bit of kit to rescue for this job. My notes claim I've got a 64G DL580 tucked away and a 32G? R900. The BladeCenter has gobs of horsepower but the electric bill would be insane (though Winter is coming so we could possibly use it as an "electric space heater"! :-/ ) It might just be easier to find something else with this particular problem in mind instead of trying to fit it to kit-on-hand!

I'll run this by friends who run server farms to see what sort of guidance they can offer, as well. They tend to be *amazingly* good at counting hidden system calls and tweaking scripts to save all those little inefficiencies that creep in "between the keystrokes"! I guess, in their environment, when you're running code millions of times a day, EVERY day, all those "little things" add up!

Hopefully, I can have something in place after the holidays and get back to work on this after the New Year...

Now, back to my holiday baking! :>

Thanks!

--don

Reply to
Don Y

So, a network stack and some utilities...?

Heh heh heh... you'll find it. Then, curse yourself for OVERLOOKING it. Or, the manufacturer for not *documenting* it! :-/

Reply to
Don Y

Oh a lot more than that. The spectroscopy software works too - not its latest version but good enough to test newly baked boards without the hdd attached etc.

I found it. I rarely curse myself; there can always be someone else to blame, after all. This time the reason was that the period of the "force task out if it does not volunteer for a reschedule" timer was set to 100 (or was it 10?) uS rather than to the wanted 10 mS during initialization. Since no person on Earth can mess with my development tools over the net, nor has anyone been close to my keyboard, it must have been some alien bugger. Can't have been me.

:D

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI
------------------------------------------------------

Reply to
Dimiter_Popoff

"Gremlins". Had a friend ages ago who was convinced these things actually exist. As proof, she offered up all the *matches* that mysteriously "go missing". She concluded that the Gremlins are fascinated by fire and steal any matches they can find!

This, in her mind, was the ONLY way to explain the sheer volume of matches that she would lose in any given week! :>

I've had other friends similarly claim the existence of Gremlins... but, use *socks* as proof! Contending that they are amazed at these simple foot coverings and always try to steal them from laundry baskets -- hence the reason you ALWAYS have an oddball sock missing its mate!

I'll admit "changing timeout values" had never occurred to me as a similar rationalization for their existence...

Reply to
Don Y

On Sun, 23 Nov 2014 02:52:25 -0700, Don Y Gave us:

You're gonna want to see Mel Blanc's version of a gremlin.

Reply to
DecadentLinuxUserNumeroUno

I would not take that thing with the too-many missing matches lightly, mad as it may sound, you know. Gremlins or whatever, things do go missing sometimes in an inexplicable way for me, too -- usually only to reappear at the location I initially looked, after minutes -- sometimes hours -- of searching. Not very often, but often enough to rule out my just being in dreamland when looking there first. I have no explanation for it, but it does happen to me. Maybe not to everybody... Obviously, if someone else told me that, even I would consider it a mental issue, but... it does happen to me.

Who knows, maybe one day we'll discover that "changing timeout values" can also go into that inexplicable category :D :D :D.

Just now I had another one, which wasted an hour of mine. I wish it were inexplicable, but it was just Chinese... I bought a multimeter from ebay to measure the current consumption of things; I looked for an analog one which could do 1A at least and found one with 2.5A. It came with no 2.5A range, just 0.25A max. OK, no time to deal with this, just left negative ebay feedback and moved on. Used it at 250mA for the current thing (the one with the timeout values; it is an HV source). While messing with it (soldering, live, a 470uF at the incoming 12V past a tiny 10uH choke) something died; consumption went way above 250mA. Unsoldered quite a few parts from the board to see what did die, only to discover the dead man was the shunt resistor or something within the multimeter... Consumption had not risen at all. I must have shorted the incoming 12V briefly -- for a few tens of milliseconds -- and the thing had died... They must have used 0402 resistors for the shunt :D (no time to investigate, just made an external one for 1A, back to work).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI
------------------------------------------------------

Reply to
Dimiter_Popoff

Well, when you *need* a match/light, it can quickly lead to PANIC!!

I've recently become distressed over having too many pairs of *shoes*! Seems like I can never find the pair that I am *supposed* to be wearing; always one of the pairs that I'm *not*!

Actually, L has been moving things when you're not looking; then, placing them back just AFTER you've searched a place! If you listen carefully, while you're running around "hunting", she's quietly giggling in the other room!

If you didn't notice at the time, it could have been milliseconds or *weeks*! OTOH, when you *do* notice, it's within ohnoseconds!

Dunno. I've a couple of cheapie DVM's but rarely use them for measuring current (other than to verify charging current flowing into a battery, etc.). Always amazing how inexpensively they can make the things! There's a place, here, that frequently gives them away, so you'll find people with 5 or 6 of the same make/model lying around in their shop and you KNOW where they got them! :-/

Reply to
Don Y
