Bargain LTSpice/Lab laptop

formatting link

Last of the Japanese/German-made business-class machines; several years old now but AFAIK they're well-built and not a PITA to work on, and even a refurb should be good for a few more years of service...

Reply to
bitrex

I bought a 17" new laptop with just 12 GB of RAM as a second computer when my first died and I needed something right away to copy data to off the old hard drive. It was very light and nice to use in a portable setting. But the combination of 12 GB RAM and the rotating hard drive was just too slow. I ended up getting another 17" inch machine with 1 TB flash drive and 16 GB of RAM expecting to have to upgrade to 32 GB... but it runs very well, even when the 16 GB is maxed out. That has to be due to the flash drive being so much faster than a rotating drive. I've never bothered to upgrade it. Maybe if I were running simulations a lot that would show up... or I could just close a browser or two. They are the real memory hogs these day.

The new machine is not as light as the other one, but still much lighter than my Dell Precision gut buster. I ended up returning the 12 GB machine when I found the RAM was not upgradable.

Reply to
Rick C

I run 32 GB on my main desktop since I upgraded to Ryzen 5 this year, which seems adequate for just about anything I throw at it.

I'd be surprised if that Fujitsu can't be upgraded to at least 16.

Another nice deal for mass storage/backups of work files is these surplus Dell H700 hardware RAID controllers. If you have a spare 4x or wider PCIe slot you get 8 channels of RAID 0/1 per card; they were probably pulled from servers, but they work fine OOTB with Windows 10/11 and the modern Linux distros I've tried, and you don't have to muck with the OS's software RAID or the motherboard's software RAID.

Yes, a RAID array isn't a backup, but I don't see any reason not to have your on-site backup in RAID 1.

formatting link
Reply to
bitrex

Isn't a stand-alone backup "policy", rather

Reply to
bitrex

Rick C snipped-for-privacy@gmail.com wrote in news: snipped-for-privacy@googlegroups.com:

Could easily be processor related as well. Make a bigger user defined swap space on it. It would probably run faster under Ubuntu (any Linux) as well.

I have a now three year old 17" Lenovo P71 as my main PC.

It has an SSD as well as a spinning drive in it, but is powered by a graphics-workstation-class Xeon and Quadro graphics pushing a 4k display, and would push several more via the Thunderbolt I/O ports. And it is only 16GB RAM. It will likely be the last full PC machine I own. At $3500 for a $5000 machine it ought to last for years. No disappointments for me.

It is my 3D CAD workstation and has Windows 10 Pro Workstation on it. I keep it fully upgraded and have never had a problem, and it benchmarks pretty dag nab fast too. And I also have the docking station for it, which was another $250. Could never be more pleased. The only drawback is that it weighs a ton and it is nearly impossible to find a backpack that will fit it. I know now why college kids stay below the 17" form factor.

Reply to
DecadentLinuxUserNumeroUno

You use RAID for three purposes, which may be combined - to get higher speeds (for your particular usage), to get more space (compared to a single drive), or to get reliability and better up-time in the face of drive failures.

Yes, you should use RAID on your backups - whether it be a server with disk space for copies of data, or "manual RAID1" by making multiple backups to separate USB flash drives. But don't imagine RAID is connected with "backup" in any way.

From my experience with RAID, I strongly recommend you dump this kind of hardware RAID controller. Unless you are going for serious top-shelf equipment with battery backup, guaranteed response time by recovery engineers with spare parts and that kind of thing, use Linux software raid. It is far more flexible, faster, more reliable and - most importantly - much easier to recover in the case of hardware failure.

Any RAID system (assuming you don't pick RAID0) can survive a disk failure. The important points are how you spot the problem (does your system send you an email, or does it just put on an LED and quietly beep to itself behind closed doors?), and how you can recover. Your fancy hardware RAID controller card is useless when you find you can't get a replacement disk that is on the manufacturer's "approved" list from a decade ago. (With Linux, you can use /anything/ - real, virtual, local, remote, flash, disk, whatever.) And what do you do when the RAID card dies (yes, that happens)? For many cards, the format is proprietary and your data is gone unless you can find some second-hand replacement in a reasonable time-scale. (With Linux, plug the drives into a new system.)

I have only twice lost data from RAID systems (and had to restore them from backup). Both times it was hardware RAID - good quality Dell and IBM stuff. Those are, oddly, the only two hardware RAID systems I have used. A 100% failure rate.

(BSD and probably most other *nix systems have perfectly good software RAID too, if you don't like Linux.)

Reply to
David Brown

I've had only 17" machines since day one of my laptops and always find an adequate bag for them. I had a couple of fabric bags which held the machines well, but when I got the Dell monster it was a tight squeeze. Then a guy was selling leather at Costco. I bought a wallet and a brief bag (not hard sided so I can't call it a case). It's not quite a computer bag as it has no padding, not even for the corners. Again, the Dell fit, but tightly. Now that I have this thing (a Lenovo which I swore I would never buy again, but here I am) and could even fit the lesser 17 inch laptop in the bag at the same time! It doesn't have as many nooks and crannies, but everything fits and the bag drops into the sizer at the airport for a "personal" bag.

I've always been anxious about bags on airplanes. I've seen too many cases of the ticket guys being jerks and making people pay for extra baggage, or even requiring them to check bags that don't fit the outline. I was boarding my most recent flight and the guy didn't like my plastic grocery store bag, asking what was in it. I told him it was food for the flight and clothing. I was going from 32 °F to 82 °F and had a bulky sweater and warm gloves I had already taken off before the flight. The guy told me to put the clothes in the computer bag, as if they would fit!!! I pushed back, explaining this was what I had to wear to get to the airport without getting hypothermia. He had to mull that over and let me board the plane. WTF??!!!

I saw another guy doing the same thing with a family whose children had plastic bags with souvenir stuffed animals or something. Spirit wants $65 each for carry-on at the gate. He didn't even recommend that they stuff it all into a single bag. Total jerk! No wonder I don't like airlines.

Reply to
Rick C

I'm considering a hybrid scheme where the system partition is put on the HW controller in RAID 1, non-critical files that I want fast access to, like audio/video, are on HW RAID 0, and the more critical long-term on-site mass storage that's not accessed much is in some kind of software redundant-RAID equivalent, with changes synced to a cloud backup service.

That way you can boot from something other than the dodgy motherboard software-RAID, but you're not dead in the water if the OS drive fails, and can probably use the remaining drive to create a current image of the system partition to restore from.

Worst case, you restore the system drive from your last image, or from scratch if you have to. Restoring the system drive from scratch isn't a crisis, but it is seriously annoying, and most people don't image their system drive every day.

Reply to
bitrex

It's not the last word in backup; why should I have to do any of that? I'd just go get a new, modern controller and drives and restore from my off-site backup...

Reply to
bitrex

Depends, of course, on "what you throw at it". Most of my workstations have 144G of RAM, 5T of rust. My smallest (for writing software) has just 48G. The CAD, EDA and document prep workstations can easily eat gobs of RAM to avoid paging to disk. Some of my SfM "exercises" will eat every byte that's available!

RAID is an unnecessary complication. I've watched all of my peers dump their RAID configurations in favor of simple "copies" (RAID1 without the controller). Try upgrading a drive (to a larger size). Or, moving a drive to another machine (I have 6 identical workstations and can just pull the "sleds" out of one to move them to another machine if the first machine dies -- barring license issues).

If you experience failures, then you assign value to the mechanism that protects against those failures. OTOH, if you *don't*, then any costs associated with those mechanisms become the dominant factor in your usage decisions. I.e., if they make other "normal" activities (disk upgrades) more tedious, then that counts against them, nullifying their intended value.

E.g., most folks experience PEBKAC failures, which RAID won't prevent. Yet they are still lazy about backups (which could alleviate those failures).

I use surplus "shelves" as JBOD with a SAS controller. This allows me to also pull a drive from a shelf and install it directly in another machine without having to muck with taking apart an array, etc.

Think about it, do you ever have to deal with a (perceived) "failure" when you have lots of *spare* time on your hands? More likely, you are in the middle of something and not keen on being distracted by a "maintenance" issue.

[In the early days of the PC, I found having duplicate systems to be a great way to verify a problem was software related vs. a "machine problem": pull drive, install in identical machine and see if the same behavior manifests. Also good when you lose a power supply or some other critical bit of hardware and can work around it just by moving media (I keep 3 spare power supplies for my workstations as a prophylactic measure) :> ]
Reply to
Don Y

I'm sorry, but that sounds a lot like you are over-complicating things because you have read somewhere that "hardware raid is good", "raid 0 is fast", and "software raid is unreliable" - but you don't actually understand any of it. (I'm not trying to be insulting at all - everyone has limited knowledge that is helped by learning more.) Let me try to clear up a few misunderstandings, and give some suggestions.

First, I recommend you drop the hardware controllers. Unless you are going for a serious high-end device with battery backup and the rest, and are happy to keep a spare card on-site, it will be less reliable, slower, less flexible and harder for recovery than Linux software RAID - by significant margins.

(I've been assuming you are using Linux, or another *nix. If you are using Windows, then you can't do software raid properly and have far fewer options.)

Secondly, audio and visual files do not need anything fast unless you are talking about ridiculously high-quality video, or serving many clients at once. 4K video wants about 25 Mbps of bandwidth - a spinning rust hard disk will usually give you about 150 MBps - roughly 50 times your requirement. Using RAID 0 will pointlessly increase your bandwidth while making the latency worse (especially with a hardware RAID card).
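
For the arithmetic behind that "roughly 50 times" figure, here is a minimal Python sketch - the 25 Mbps and 150 MB/s values are just the round numbers quoted above, not measurements:

    # Rough headroom of one spinning disk for 4K video streaming.
    video_bitrate_mbps = 25      # typical 4K stream, megabits per second
    disk_read_mb_s = 150         # sustained read of a 7200 rpm disk, megabytes per second

    video_mb_s = video_bitrate_mbps / 8          # megabits -> megabytes
    headroom = disk_read_mb_s / video_mb_s
    print(f"one disk covers roughly {headroom:.0f} x a single 4K stream")  # ~48x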

Then you want other files on a software RAID with redundancy. That's fine, but your whole system now needs at least 6 drives and a specialised controller card, when you could get better performance and better recoverability with 2 drives and software RAID.

You do realise that Linux software RAID is unrelated to "motherboard RAID"?

Reply to
David Brown

If you have only two disks, then it is much better to use one for an independent copy than to have them as RAID. RAID (not RAID0, which has no redundancy) avoids downtime if you have a hardware failure on a drive. But it does nothing to help user error, file-system corruption, malware attacks, etc. A second independent copy of the data is vastly better there.

But the problems you mention are from hardware RAID cards. With Linux software raid you can usually upgrade your disks easily (full re-striping can take a while, but that goes on in the background). You can move your disks to other systems - I've done that, and it's not a problem. Some combinations are harder for upgrades if you go for more advanced setups - such as striped RAID10, which can let you take two spinning rust disks and get lower latency and higher read throughput than a hardware RAID0 setup could possibly do, while also having full redundancy (at the expense of slower writes).

Such balances and trade-offs are important to consider. It sounds like you have redundancy from having multiple workstations - it's a lot more common to have a single workstation, and thus redundant disks can be a good idea.

That is absolutely true - backups are more important than RAID.

Thus the minimised downtime you get from RAID is a good idea!

Having a few spare parts on-hand is useful.

Reply to
David Brown

These are pretty consumer 7200 RPM drives too, not high-end by any means.

Reply to
bitrex

OK.

On desktop Windows, "Intel motherboard RAID" is as good as it gets for increased reliability and uptime. It is more efficient than hardware raid, and the formats used are supported by any other motherboard and also by Linux md raid - thus if the box dies, you can connect the disks to a Linux machine (by SATA-to-USB converter or whatever is convenient) and have full access.

Pure Windows software raid can only be used on non-system disks, AFAIK, though details vary between Windows versions.

These days, however, you get higher reliability (and much higher speed) with just a single M2 flash disk rather than RAID1 of two spinning rust disks. Use something like Clonezilla to make a backup image of the disk to have a restorable system image.

Shocking or not, that's the reality. (This is in reference to Linux md software raid - I don't know details of software raid on other systems.)

There was a time when hardware raid cards were much faster, but many things have changed:

  1. It used to be a lot faster to do the RAID calculations (xor for RAID5, and more complex operations for RAID6) in dedicated ASICs than in processors. Now processors can handle these with a few percent usage of one of their many cores (a short parity sketch follows this list).
  2. Saturating the bandwidth of multiple disks used to require a significant proportion of the IO bandwidth of the processor and motherboard, so that having the data duplication for redundant RAID handled by a dedicated card reduced the load on the motherboard buses. Now it is not an issue - even with flash disks.
  3. It used to be that hardware raid cards reduced the latency for some accesses because they had dedicated cache memory (this was especially true for Windows, which has always been useless at caching disk data compared to Linux). Now with flash drives, the extra card /adds/ latency.
  4. Software raid can make smarter use of multiple disks, especially when reading. For a simple RAID1 (duplicate disks), a hardware raid card can only handle the reads as being from a single virtual disk. With software RAID1, the OS can coordinate accesses to all disks simultaneously, and use its knowledge of the real layout to reduce latencies.
  5. Hardware raid cards have very limited and fixed options for raid layout. Software raid can let you have options that give different balances for different needs. For a read-mostly layout on two disks, Linux raid10 can give you better performance than raid0 (hardware or software) while also having redundancy.
    formatting link
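
As a concrete illustration of point 1, the parity that once justified a dedicated ASIC is just a byte-wise XOR over the data blocks in a stripe. A toy Python sketch - illustrative only, not how md actually lays out stripes:

    # Toy RAID5-style parity: XOR the data blocks in a stripe; any single
    # lost block can then be rebuilt from the survivors plus the parity.
    def xor_blocks(*blocks: bytes) -> bytes:
        out = bytearray(len(blocks[0]))
        for b in blocks:
            for i, byte in enumerate(b):
                out[i] ^= byte
        return bytes(out)

    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # three data blocks in one stripe
    parity = xor_blocks(d0, d1, d2)          # computed on every write

    rebuilt = xor_blocks(d0, d2, parity)     # pretend d1's disk just died
    assert rebuilt == d1                     # the missing block comes back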

SATA is limited to about 500 MB/s. A good spinning rust can get up to about 200 MB/s for continuous reads. RAID0 of two spinning rusts can therefore get fairly close to the streaming read speed of a SATA flash SSD.

Note that a CD-quality uncompressed audio stream is 0.17 MB/s. 24-bit, 192 kHz uncompressed is about 1 MB/s. That is, a /single/ spinning rust disk (with an OS that will cache sensibly) will handle nearly 200 such streams.
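
Checking those audio numbers in code (stereo assumed, figures rounded the same way as above):

    # Uncompressed stereo audio bandwidth vs. one spinning disk's read speed.
    def stream_mb_per_s(rate_hz, bits, channels=2):
        return rate_hz * (bits / 8) * channels / 1e6

    cd = stream_mb_per_s(44_100, 16)       # ~0.18 MB/s (same ballpark as the 0.17 above)
    hires = stream_mb_per_s(192_000, 24)   # ~1.15 MB/s
    disk = 200                             # MB/s, continuous read of a good spinning disk
    print(f"one disk ~ {disk / hires:.0f} hi-res streams, ignoring seeks")   # ~170-ish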

Now for a little bit on prices, which I will grab from Newegg as a random US supplier, using random component choices and approximate prices to give a rough idea.

2TB 7200rpm spinning rust - $50
Perc H700 (if you can find one) - $150
2TB 2.5" SSD - $150
2TB M2 SSD - $170

So for the price of your hardware raid card and two spinning rusts you could get, for example (the totals are tallied in a short sketch after this list):

  1. An M2 SSD with /vastly/ higher speeds than your RAID0, higher reliability, and with a format that can be read on any modern computer (at most you might have to buy a USB-to-M2 adaptor, rather than an outdated niche raid card).

  2. 4 spinning rusts in a software raid10 setup - faster, bigger, and better reliability.

  3. A 2.5" SSD and a spinning rust, connected in a Linux software RAID1 pair with "write-behind" on the rust. You get the read latency benefits of the SSD and the combined streaming throughput of both; writes go first to the SSD, so the slow rust write speed is not a bottleneck.

There is no scenario in which hardware raid comes out on top, compared to Linux software raid. Even if I had the raid card and the spinning rust, I'd throw out the raid card and have a better result.

Reply to
David Brown

Not shocking at all; "the performance" that matters rarely resembles measured benchmarks. Even seasoned computer users can misunderstand their needs and needlessly multiply their overhead cost to get an improvement in operation.

Pro photographers, sound engineering, and the occasional video edit shop will need one-user big fast disks, but in the modern market, the smaller and slower disks ARE big and fast, in absolute terms.

Reply to
whit3rd

Ya, the argument also seems to be it's wasteful to keep a couple spare $50 surplus HW RAID cards sitting around, but I should keep a few spare PCs sitting around instead.

Ok...

Reply to
bitrex

I think the points are that:

- EVERYONE has a spare laptop or desktop -- or *will* have one, RSN!

- a spare machine can be used for different purposes other than the need for which it was originally purchased

- RAID is of dubious value (I've watched each of my colleagues quietly abandon it after having this discussion years ago. Of course, there's always some "excuse" for doing so -- but, if they really WANTED to keep it, they surely could! I'll even offer my collection of RAID cards for them to choose a suitable replacement -- BBRAM caches, PATA, SATA, SCSI, SAS, etc. -- as damn near every server I've had came with such a card)

Note that the physical size of the machine isn't even a factor in how it is used (think USB and FireWire). I use a tiny *netbook* to maintain my "distfiles" collection: connect it to the internet, plug the external drive that holds my current distfile collection and run a script that effectively rsync(8)'s with public repositories.

My media tank is essentially a diskless workstation with a couple of USB3 drives hanging off of it.

My DNS/NTP/TFTP/font/RDBMS/etc. server is another such workstation with a (laptop) disk drive cobbled inside.

The biggest problem is finding inconspicuous places to hide such kit while being able to access them (to power them up/down, etc.)

Reply to
Don Y

I don't know of any more cost-effective solution on Windows that lets me have easily-expandable mass storage in quite the same way. And RAID 0 at least lets me push to the limit of the SATA bandwidth, as I've shown is possible, for saving and retrieving giant files like sound libraries.

An M2 SSD is fantastic, but a 4 TB unit is about $500-700 each. With these HW cards and their onboard BIOS I just pop in more $50 drives if I want more space, and it's set up for me automatically and transparently to the OS, with a few key-presses in the setup screen that it launches into automagically on boot if you hit Ctrl+R.

A 2 TB M2 SSD is only about $170, as Mr. Brown says, but I only have one M2 slot on my current motherboard and 2 PCIe slots; one of those is taken up by the GPU, and you can maybe put one or two more drives on a PCIe adapter. I don't think it makes much sense to keep anything but the OS drive on the motherboard's M2 slot.

Hey, as an aside, did I mention how difficult it is to find a decent AMD micro-ATX motherboard that has two full-width PCIe slots in the first place? One that also doesn't compromise access to the other PCIe 1x slot, or to each other, when you install a GPU that takes up two slots.

You can't just put the GPU on any full-width slot either, because if you read the fine print it usually says one of them only runs at 4x max if both slots are occupied; they aren't both really 16x if you use them both.

I don't think a 4x PCIe slot can support two NVMe drives in the first place. A typical consumer micro-ATX motherboard still tends to come with 4 SATA ports, which is nice; however, if you also read the fine print, it tends to say that if you use the onboard M2 slot at least two of the SATA ports get knocked out. Not so nice.

I've been burned by using motherboard RAID before; I won't go back that way, for sure. I don't know what Mr. Brown means by "Intel motherboard RAID" - I've never had any motherboard whose onboard soft-RAID was compatible with anything other than that manufacturer's. Frankly, I'm not nearly as concerned about what look to be substantial, well-designed Dell PCIe cards failing as I am about my motherboard failing; consumer motherboards are shit!!! Next to PSUs, motherboards are the most common failure I've experienced in my lifetime; they aren't reliable.

Anyway, the point of this rant is that the cost to get an equivalent amount of the new hotness in storage performance on a Windows desktop built with consumer parts starts increasing quickly; it's not really that cheap, and not particularly flexible.

Y'all act like file systems are perfect; they're not. I can find plenty of horror stories about trying to restore ZFS partitions in Linux too, and if it doesn't work perfectly the first time it looks like it's very helpful to be proficient with the Linux command line, which I ain't.

Right, I don't want to be a network administrator.

Reply to
bitrex

And the reason I put a micro-ATX board in a full-tower case in the first place is that motherboards nowadays don't come with regular PCI slots anymore, but there are still PCI cards I want to use without having to keep a second old PC around that has them. But if you put a full-size motherboard in a full tower, there's nowhere to put an adapter riser to get them.

Consumer desktop PC components nowadays are made for gamers. If you're not a gamer you either have to pay out the butt for "enterprise class" parts or try to kludge the gamer-parts into working for you.

Reply to
bitrex

See for yourself, I don't know what all of this means:

formatting link

Got the power-on hours wrong before: 0xFBE1 = 64481.

SMART still reports this drive as "Good"
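
For what it's worth, that FBE1 figure is just the raw power-on-hours attribute shown in hex; the conversion is trivial (the years estimate is my own arithmetic, not something the SMART tool reports):

    # Raw SMART power-on-hours value, as displayed in hex by the tool.
    raw_hex = "FBE1"
    hours = int(raw_hex, 16)              # 0xFBE1 == 64481
    print(f"{hours} hours (~{hours / (24 * 365):.1f} years powered on)")  # ~7.4 years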

Anyone make an ISA to SATA adapter card? Probably.

Reply to
bitrex
