Want recommendations for SSD drive for Pi

Hello The!

Thursday January 13 2022 19:14, you wrote to Chris Green:

I have some experience of SSDs with Linux. My first try was with Crucial, only to find that their controller needs the system to be idle before it will do a full sweep for empty and unused clusters, which is somewhat difficult to arrange on a multi-user, multi-tasking system.

I switched to Samsung 850 and 950+ series SSDs, both for SATA and M.2 connections, along with using 'sudo fstrim -av'.

Works a treat. On a busy system, run it via cron, say at midnight and noon, and you will not have any problems; that is on a very busy system running FTP and web servers as well as a BBS.
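For illustration, a root crontab entry along those lines might look like this (the fstrim path is an assumption; check yours with 'which fstrim'):

# m h dom mon dow - trim all supporting filesystems at midnight and noon
0 0,12 * * * /usr/sbin/fstrim -av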

Not had a problem since, although if you really churn through a high volume of files you might want to run it more often.

Note that you must also run fstrim after a reboot for safety if doing the above.

For normal usage once per day 'should' be ok.

This is on normal computer systems and a media system (running Linux and MythTV) that can record up to 48 channels at once, but usually two at most.

SSDs are available with a USB 2 or 3 interface, so minimal connectors are required. I mention that as I have a 3B+ using a Geekworm X850 drive board and metal casing.

If I can ever find one, I will consider getting a 4B 8GB and using a USB SSD from - yes, you guessed it - Samsung.

As I said above, you MUST use fstrim, which is in the util-linux package on my platform, where it is standard, but who knows with Raspbian or Bullseye (Debian). And yes, I have upgraded to it over the last 7 days using the buster2bullseye.sh script via the forum.

Vincent

Reply to
Vincent Coen

I'm after getting a small SSD to boot my Pi 4 from, instead of from the micro SD.

Buying USB sticks and drives is fraught with danger from fake devices, so can anyone recommend a supplier? I don't need any spare space, so even a 32GB SSD would be fine, but nowadays 128GB is almost as cheap.

Reply to
Chris Green

I've been using a 128GB Sandisk SSD for the last 5 years without any problems, though it is SATA-connected rather than USB; I see they also sell USB-connected 128GB SSD devices. They're also my preferred SD card supplier.

If you install an SSD, do install and run "fstrim -a -v" as a weekly cron job to keep the SSD's block structure tidy.

Reply to
Martin Gregorie

Thanks, useful.

Doesn't it tend to second-guess the SSD's own wear levelling? Looking at the man page, I'm not at all clear what it does:-

fstrim is used on a mounted filesystem to discard (or "trim") blocks which are not in use by the filesystem...

So what does 'discard' mean in this context? An 'unused block' is surely just that; how can you discard it? Or does it mean that it discards blocks allocated because the minimum allocation by the OS is larger than the actual device block size?

... and further, it's run automatically by systemd on my systems.

Reply to
Chris Green

After a duff initial one, I've specialised in Kingston SSDs BOUGHT FROM KINGSTON. Any Chinese knock-off can have a Kingston sticker on it.

The smallest they do these days is 120GB, for £22.75 (UK sterling price).

Of course that will need a USB to SATA adapter of some sort

Reply to
The Natural Philosopher

Pass. My guess is that it compacts and possibly reorders the free block chain. However, I don't understand why they'd call that 'discarding' blocks, since no blocks are actually discarded, i.e. marked unusable.

Fairy Snuff. It wasn't run by systemd on my system, an old Lenovo R61i running Fedora Linux, where the SSD replaced a 120GB HDD when that died.

Reply to
Martin Gregorie

The more I look into SSDs, the more convinced I am that they have all the smarts inside them to make them last, and faffing around at the Linux level will only reduce performance/life.

I have retired my old desktop with a six-year-old SSD in it, used daily and for logfiles too, and it still had an estimated 97% of life left when I last switched it on. In a few more months, once I am sure I won't need it, I'll reinstall it and turn it into a server or something.

Reply to
The Natural Philosopher

As I understand it, fstrim essentially passes hints to the SSD based on the filesystem layer's knowledge of what's going on.

Once a block has been used, there is no way for the SSD to know that it has become unused unless the filesystem tells it, so fstrim exists to pass that information between the layers.
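As a quick check that those hints can actually reach the drive, lsblk can show the discard capabilities of a device; the figures below are purely illustrative, and all-zero DISC-GRAN/DISC-MAX columns mean TRIM requests are not getting through (a common problem with cheap USB-to-SATA bridges):

$ lsblk --discard /dev/sda
NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda         0      512B       2G         0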

Reply to
Ahem A Rivet's Shot

The more I read about it, the less I believe it does anything actually useful.

Reply to
Chris Green

Thanks, that's just the sort of info I'm after.

Reply to
Chris Green

Get a small(ish) M.2 drive and either an M.2-to-USB adapter or an M.2-to-SATA adapter. eBay is your friend.

Reply to
Chris Elvidge

Yes, so the idea is that deleted blocks are not just 'left' there, not being wear-levelled.

But modern SSDs swap blocks around even when they *aren't* being rewritten. All that matters is how often any block gets written to.

Otherwise you would end up with a pool of static data that was written once, years ago, with bags of life left in it, and a diminishing area of disk, say occupied by log files, getting hammered.

So the SSD must take static data and move it to higher-wear blocks. All fstrim does is make sure it doesn't do that to *deleted* blocks, although unless it's a very, very busy little disk, I am not sure how many of those there will actually be.

(How often it does that, and using what algorithm, is 'implementation specific' to the SSD.)

Log files perhaps.

To my mind it's a bit like Windows and defragging: yes, it was handy years ago on old FAT-based filesystems, and modern Windows does it to mechanical drives automatically, but it's seriously bad to do it to an SSD.

And once your SSD has a 'free this block' command, it's a piece of cake to make the operating system send that command every time it, e.g., unlinks an inode, and you can do that using the 'discard' option in fstab.
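A hypothetical fstab line doing exactly that for an ext4 root partition (device name made up) would be:

# /etc/fstab - 'discard' sends TRIM at unlink time rather than batching it via fstrim
/dev/sda1  /  ext4  defaults,discard  0  1

The batched fstrim approach discussed above is the other way of doing the same job.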

Anyway, mostly it's all enabled by default now, so we don't need to worry our pretty little heads about it.
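On a systemd distribution you can confirm that with:

$ systemctl status fstrim.timer
$ sudo systemctl enable --now fstrim.timer   # only needed if it reports disabled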

Six years ago I got precious about all this and did 'all the right things'. Six years later that drive is so far ahead of any mechanical drive in terms of life left that I gave up worrying.

Unless you are hammering SSDs in a data centre, the best advice is 'fit and forget'.

Reply to
The Natural Philosopher

I think it does, but just how useful it is remains very moot.

Reply to
The Natural Philosopher

My 128GB SSD gets fstrimmed once a week and typically shows that as affecting 1-2GB of storage on each run.
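A manual run reports the trimmed amount per filesystem, something like this (figures purely illustrative):

$ sudo fstrim -av
/boot: 120 MiB (125829120 bytes) trimmed
/: 1.9 GiB (2040109465 bytes) trimmed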

This is a machine that doesn't obviously do a lot: its typical week's workload is three items:

1) rsync backup (read-only task)
2) weekly dnf update run: it's a Fedora box
3) the rest of the time it's running the protein Folding@Home application.
Reply to
Martin Gregorie

I should have said that I use Sandisk SDs because they are among the few brands who make what they sell. They are now owned by Western Digital.

The manufacturing sources seem to be:

Kioxia - formerly part of Toshiba; Kingston, Samsung and Seagate now own part of them.

Micron - they own Crucial.

SK Hynix.

Western Digital - they now own Sandisk.

...which would *seem to show* that genuine Kingston, Crucial and Sandisk SD cards are fine, but who knows what's in other brands of SD cards.

Reply to
Martin Gregorie

Does it do log file rotation?

Even my Pi's SD card stores log files; that will kill it eventually.

See what 'ls -l /var/log' shows you... and 'du -sh /var/log'.

I see around 655MB of log files rotated nightly on this machine (Intel desktop with SSD).

fstrim seems to reveal no data to trim, so lord knows what is doing it... Oh. The evil systemd is.

$ more /lib/systemd/system/fstrim.timer

[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Reply to
The Natural Philosopher

Of course - I've never seen a Linux system that didn't, though if you only run it for an hour or so a day, its timers might not have got round to doing log management so soon after booting.

The systems here that run 24/7 swap logs around 01:00 while my usual laptop, which is always on but quiescent, lid shut, when not being used, usually doesn't release the previous day's logwatch report until a couple of hours after I've woken it up.

All RPis have the log management tools installed by default, but IIRC I had to install the logwatch system (log analyser and reporter) as well as a Postfix MTA so the Pi could, along with the other systems on my LAN, email a daily logwatch report to my laptop, so I see them when it's woken up the next day, or whenever.

Seems like a lot.

This laptop has around 50 different logs in /var/log and its subdirectories at present. These amount to 26 MB in total, so it seems that you don't have 'logrotate' installed and enabled. It is needed to keep the number and size of logfiles under control.

My logfiles are all organised as the current logfile plus the previous 4 generations and managed by the 'logrotate' overnight task, which gets automatically run when the machine is booted if it wasn't running at 1 AM last night or hasn't been run for more than 24 hours.
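That current-plus-4-generations scheme corresponds to a logrotate stanza like this hypothetical one for a custom logfile:

# /etc/logrotate.d/myapp (hypothetical custom log)
/var/log/myapp.log {
    daily
    rotate 4
    compress
    missingok
    notifempty
}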

If you want to see the daily reports, which I think is a good idea, you should also have an MTA installed on every system: I use Postfix. For convenience I have these Postfix instances configured to route all email through my house server, which runs 24/7. This way, all the machines on my LAN send mail to my house server's MTA, which also receives incoming mail from my ISP after passing it through the excellent SpamAssassin spamtrap. Incoming mail, whether received from my ISP or from other machines on my LAN, is held until it's wanted.
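The routing side of that is a single Postfix setting on each LAN machine (hostname made up; the square brackets tell Postfix to skip the MX lookup):

# /etc/postfix/main.cf
relayhost = [houseserver.lan]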

This laptop runs the Evolution MUA, which collects incoming mail from my house server's mail queue and routes outgoing mail through it.

Reply to
Martin Gregorie

Log rotation is set up automatically for most installed programs, but it's not if the user has enabled rsyslog logging for their own program, or has set up a filter of some kind to a different file to reduce the amount of crap in syslog. So it's worth checking /var/log for anything that's grown huge.

I make sure my nightly Raspberry Pi backups occur after a log rotate, as that normally accounts for most of the data written. I graph the rsync stats, as if the amount of data increases considerably, it's normally something thrashing the log files, which means something is wrong.
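Something along these lines (paths and host hypothetical) produces the per-run figures worth graphing; the --stats summary includes "Total transferred file size", which is the number to watch:

$ rsync -a --delete --stats /home/pi/ backup@houseserver:/backups/pi/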

---druck

Reply to
druck

True enough - done that myself: have the program make logging calls that append messages to the logfile and extend the logrotate configuration to manage the size of the new logfiles and only keep the appropriate number of old logfiles.

Same here.

Reply to
Martin Gregorie
