Running a Windows 7 firmware updater on RaspiOS

That's done and dusted.

The problem I see ahead for SSDs is the move to QLC (4 bits per cell), at the cheaper end, which has an order of magnitude less life than TLC (3 bits per cell).

Although the same was said for the move from SLC (1 bit per cell) and MLC (2 bits per cell) to TLC, that has proved pretty rock solid over the last 6 or 7 years, with the use of advanced controllers, wear levelling and over-provisioning.

---druck

Reply to
druck

That spec sheet is truly impressive. The prices are still a little higher than mechanical drives at 1TB, maybe 2X, but much better than I expected. Sustained write power close to 4W in an M.2 package does give some pause...

True, but a little unfair: those are 7200 RPM 3.5 inch drives. Still, it's implausible that a 2.5 inch 5400 RPM drive can come close to the figures for SSD unless the spindle stops.

Oddly enough, this brings me back to the dilemma of finding an enclosure that interfaces to USB3 with support for UASP and TRIM.
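On the Linux side there are quick ways to check what an enclosure actually delivers once it's plugged in; roughly like this (the device name is just an example):

# Did the bridge bind to the UAS driver rather than plain usb-storage?
lsusb -t | grep -iE 'uas|usb-storage'

# Can the kernel pass TRIM/discard through the bridge? Non-zero
# DISC-GRAN and DISC-MAX columns mean discard is usable.
lsblk --discard /dev/sda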

Thanks for rattling my cage, it looks like an SSD is worth a look.

bob prohaska

Reply to
bob prohaska

It might be useful to re-frame the issue in terms of failure mode. If a machine fails gracefully, how long it's going to last becomes much less worrisome. If a storage device gives warning that it's running out of write capacity I might not be afraid of QLC vs lower-density options.

If it just goes all bricklike, with no warning at all, it's a lot less attractive. Just the ability to test how much over-provisioning remains available would be a big help. SMART keeps some track of deterioration in mechanical drives; does it exist for SSDs?

Thanks for reading,

bob prohaska

Reply to
bob prohaska

Agree completely, which is why, so far, I've avoided SSDs apart from one case: I have a SanDisk 128GB one installed in an old Lenovo R61i laptop that I'd had from new. After its hard drive died I was going to junk it, because it turned out that the HDD interfacing electronics were incapable of supporting any HDD of more than 200GB, and at the time, 3 years ago, it was impossible to buy any HDD smaller than 320GB. So, I picked up the SanDisk, installed it, threw 64 bit Fedora onto it and it all just worked. The machine is now my "backup laptop" and has been running 24/7 running protein folding software since the start of the COVID lock-down in March. For desk-toppy stuff the machine is now pleasingly faster than it ever used to be, but for anything that needs grunt, let's just say the 1.6GHz Core Duo is less than sprightly. Not even a hint of a problem with the SSD though.

That said, unless you're a scrupulous backer-upper to offline drive(s), it's probably worth paying a premium for your SSDs: I've heard that the cheapies do suddenly lie down and die, i.e. lose or corrupt data, while the better, more 'professionally' oriented units switch to read-only mode once they've accumulated enough faults to prevent further writes without causing data loss.

Being paranoid about such things, my laptops and server all have smartd installed and configured to provide a weekly disk status report: I ran the disk in the Lenovo R61i into the ground and had the disk in my house server fail at almost the same time - both had around 50,000 hours on them according to smartd - and in both cases smartd gave me just enough warning to have replacement disks on the shelf when the live ones died - and I didn't lose any data either!
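In case it's useful to anyone: smartd itself mainly sends warnings when attributes cross thresholds, so the weekly report part is probably easiest done with a small cron job along these lines (a sketch only; the device list and mail setup are assumptions):

#!/bin/sh
# /etc/cron.weekly/disk-report -- hypothetical weekly SMART summary.
# Assumes smartmontools and a working local mailer are installed.
for dev in /dev/sda /dev/sdb; do
    echo "=== $dev ==="
    smartctl -H -A "$dev"    # -H overall health, -A vendor attributes
done | mail -s "Weekly disk status for $(hostname)" root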

BTW, another benefit of using SSDs is that they accumulate runtime hours a lot more slowly than HDDs. With my usage pattern, smartd shows that both my main laptop (active around 8-12 hours a day) and the house server (active 24/7) add around 20-30 hours a week to their disks' runtime, while the R61i (active 24/7 doing protein folding) adds 1 hour (sometimes as much as 2 hours) to its SSD runtime per week. If this is a general pattern, and I think it may well be, then SSDs may have a longer clock-time life than an HDD handling a similar workload.

This figures, since an HDD will tend to keep spinning after a burst of activity to minimise startup delays and head load/unload cycles, none of which is relevant for an SSD.

Needless to say, I'm very curious to know if any of you have had a similar experience of the rate at which an HDD accumulates runtime hours vs an SSD.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

I have had a mix of drives on many Ubuntu Linux desktops and Raspberry Pis over the last 15 to 20 years.

In all that time I think I have had two disk drives (both spinning ones) fail.

Disk drives are *amazingly* robust and reliable. For example, my first serious backup NAS was a WD 'My Book' with two 1TB drives; this ran continuously as my backup system with daily backups sent to it for six or seven years. I only retired it because the disks filled up. I recently booted it to try and find some old files of my daughter's; it booted fine (and I found the files, more than ten years old).

I have a Lenovo laptop which is SSD based and I've steadily migrated my desktop system to being SSD based. They're all carefully backed up, so if the SSDs die I'll be OK, but so far I've not had any SSDs fail; some must be quite a few years old now.

The current backup system is a Pi with an external, spinning, 8TB drive. This isn't very old yet so I've no data.

--
Chris Green
Reply to
Chris Green

Absolutely. It's more or less mandatory. It gives you error rates on all the things you need to worry about. This drive has been going a shade over 5 years of actual 'on' time as my Linux desktop boot drive: data is held on a server so it doesn't get much action, but logs are written to it, so it has in fact more writes than reads (9780GB versus 5255GB). It still reports 95% of its useful life left.

============================================

sudo smartctl -a /dev/sdb
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.19.0-32-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     SandForce Driven SSDs
Device Model:     KINGSTON SV300S37A120G
Serial Number:    50026B774A09C471
LU WWN Device Id: 5 0026b7 74a09c471
Firmware Version: 580ABBF0
User Capacity:    120,034,123,776 bytes [120 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS, ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Dec 18 00:31:42 2020 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x02) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 (   0) seconds.
Offline data collection
capabilities:                    (0x7d) SMART execute Offline immediate.
                                        No Auto Offline data collection support.
                                        Abort Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (  48) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x0025) SCT Status supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   095   095   050    Old_age  Always       -      2/30119639
  5 Retired_Block_Count     0x0033   100   100   003    Pre-fail Always       -      0
  9 Power_On_Hours_and_Msec 0x0032   047   047   000    Old_age  Always       -      47131h+15m+31.470s
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age  Always       -      552
171 Program_Fail_Count      0x000a   100   100   000    Old_age  Always       -      0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age  Always       -      0
174 Unexpect_Power_Loss_Ct  0x0030   000   000   000    Old_age  Offline      -      102
177 Wear_Range_Delta        0x0000   000   000   000    Old_age  Offline      -      96
181 Program_Fail_Count      0x000a   100   100   000    Old_age  Always       -      0
182 Erase_Fail_Count        0x0032   100   100   000    Old_age  Always       -      0
187 Reported_Uncorrect      0x0012   100   100   000    Old_age  Always       -      0
189 Airflow_Temperature_Cel 0x0000   029   045   000    Old_age  Offline      -      29 (0 235 0 45 0)
194 Temperature_Celsius     0x0022   029   045   000    Old_age  Always       -      29 (0 235 0 45 0)
195 ECC_Uncorr_Error_Count  0x001c   107   107   000    Old_age  Offline      -      2/30119639
196 Reallocated_Event_Count 0x0033   100   100   003    Pre-fail Always       -      0
201 Unc_Soft_Read_Err_Rate  0x001c   107   107   000    Old_age  Offline      -      2/30119639
204 Soft_ECC_Correct_Rate   0x001c   107   107   000    Old_age  Offline      -      2/30119639
230 Life_Curve_Status       0x0013   100   100   000    Pre-fail Always       -      100
231 SSD_Life_Left           0x0013   095   095   010    Pre-fail Always       -      1
233 SandForce_Internal      0x0032   000   000   000    Old_age  Always       -      23531
234 SandForce_Internal      0x0032   000   000   000    Old_age  Always       -      9780
241 Lifetime_Writes_GiB     0x0032   000   000   000    Old_age  Always       -      9780
242 Lifetime_Reads_GiB      0x0032   000   000   000    Old_age  Always       -      5255

SMART Error Log not supported

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%            26355  -
# 2  Short offline       Completed without error       00%            22020  -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

--
Of what good are dead warriors? ... Warriors are those who desire battle
more than peace. Those who seek battle despite peace. Those who thump
their spears on the ground and talk of honor. Those who leap high the
battle dance and dream of glory ... The good of dead warriors, Mother, is
that they are dead.
Sheri S Tepper: The Awakeners.
Reply to
The Natural Philosopher

I had one SSD go on me under warranty in my laptop. It didn't just drop dead; it started giving control errors. SMART was used by the vendor to validate my claim that it was in fact dying.

The other SSD I have in this machine replaced a flaky drive about 6 years ago. It is still faultless. 6 years is beyond what my (rusty) server drives normally do before being junked. I have an old server drive in this PC also, as the server got upgraded. SMART shows way more errors on it than the SSD.

In my limited experience SSDs are ALREADY more long-lived than disks, and more reliable, even under fairly heavy usage. They are just expensive...

--
"If you don?t read the news paper, you are un-informed. If you read the  
news paper, you are mis-informed." 

Mark Twain
Reply to
The Natural Philosopher

I'm inclined to agree. I use SSDs for boot discs in all my systems and haven't lost one yet (the oldest one in use doesn't even support TRIM).

That being said, I don't trust them (or anything else) unreservedly: my NAS uses refurbished 2TB 3.5" SAS drives[1] in four ZFS mirrors (rough sketch after the footnote), and all the boot discs are mirrored to zvols exported over iSCSI from the NAS.

[1] These are currently a reliable storage bargain because they were never used in high load environments (the 2.5" 15K rpm drives went there) and so spent their life mostly idle before being replaced due to age.
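For anyone who hasn't set one of these up, the shape of that pool is roughly as below. This is only a sketch: the pool name, device names and zvol size are all made up.

# Four two-way mirrors striped into one pool (names are invented).
zpool create tank mirror da0 da1 mirror da2 da3 \
                  mirror da4 da5 mirror da6 da7

# A fixed-size zvol which can then be exported over iSCSI; the iSCSI
# target configuration itself is OS-specific and omitted here.
zfs create -V 32G tank/bootdisc0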
--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Intel X25-E SSDs 32GB:

Power_On_Hours          54141
Media_Wearout_Indicator 98

Although to be fair I'm not sure all SSDs are as reliable. I did buy a number of OCZ SSDs (about 4), all of which failed catastrophically; I'm not sure what the wearout indicator was on them.

Reply to
Pancho

I think the consensus is that a reliable brand will these days outlast spinning rust, and that unless you have a CPU/DRAM failure, what happens is that blocks go bad and are mapped out, which is a very graceful failure mode.

So provided you check with SMART once a year you should not have *catastrophic* failure. My failure was pretty hard and happened in less than a year.
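For that once-a-year check, something like this sketch is enough (the device name is an example, and the exact attribute names vary by vendor):

sudo smartctl -H /dev/sda                                # overall health verdict
sudo smartctl -A /dev/sda | grep -iE 'wear|life|spare'   # wear/life attributes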
--
"Strange as it seems, no amount of learning can cure stupidity, and  
higher education positively fortifies it." 

    - Stephen Vizinczey
Reply to
The Natural Philosopher

You are describing a graceful/designed failure mode. As I remember it, my OCZ failure mode was that the device just failed to be recognised by BIOS, i.e. not blocks wearing out gracefully. I don't think anything showed up on SMART. By the time these disks failed they had been relegated to bin-able laptop boot drives, no important data, so I didn't investigate. My assumption is that a component in the controller board failed. But the effect was a sudden total loss of all data on the disk.

These OCZ drives were bought circa 2013. I've not had problems with any of my other SSDs: Intel, Crucial, Kingston, Samsung.

However, I think it is one of those questions where the diagnosis is irrelevant; we all know the correct treatment: good short term backups.

I've been using rsnapshot recently, which I have found great: simple enough for an incompetent like myself.

Reply to
Pancho

Is this a 2.5in or a 3.5in? If 3.5in what USB adapter do you use?

cheers Jim

Reply to
Jim Jackson

Mine's a 5TB USB3 Seagate Expansion Drive. Works to keep media files (on NTFS partition) and as backup drive for itself and 4 other Pis (on ext4 partition).

--
Chris Elvidge 
England
Reply to
Chris Elvidge

Currys in September. Complete drive in a box with USB interface etc.; it was cheaper than a bare drive of the same capacity!

It's 3.5" I assume.

So far it's worked well and I (quite) like it powering down when idle, though I'm not sure that this is really better for long term reliability.

--
Chris Green
Reply to
Chris Green

Yes, that was very similar to my failure, but it happened slowly enough to enable me to get SMART readings, which indicated some sort of failure to talk to the SATA bus rather than damage to the NAND flash.

Mine failed around 2018 I think. It was a regular branded Kingston SSD. It's been replaced with an (apparently) identical unit, which is still working fine.

The point I wanted to make is that, excluding this sort of failure, which isn't SSD specific, SSD technology now appears to be more reliable than spinning rust.

All my irreplaceable data is rsynced overnight. Handy when I accidentally delete something - last night's backup is still there.
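A mirror like that can be a single cron entry; a sketch, with made-up paths and schedule:

# /etc/cron.d/nightly-mirror -- paths and times are assumptions.
# A time-delayed mirror: deletions do propagate, but only on the
# next run, so last night's copy survives an accidental rm.
30 2 * * * root rsync -a --delete /srv/data/ /mnt/bigdisk/mirror/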

--
Renewable energy: Expensive solutions that don't work to a problem that  
doesn't exist instituted by self legalising protection rackets that  
don't protect,  masquerading as public servants who don't serve the public.
Reply to
The Natural Philosopher

I run incremental backups of /home and /etc. Firstly, I do hourly backups to a local (but separate) disk drive on my desktop machine; secondly, there are daily incremental backups to the Pi + USB system which is out in the garage (quite a long way from the house).

I used to use rsnapshot but then wrote my own incremental backup software using rsync's --link-dest option, which sends the backups to an rsync server process running on the Pi. This makes the backups more secure, because an intruder on the backed-up systems isn't able to (easily) get at the backups on the remote system. Since they're incremental backups they can't be overwritten remotely either.
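For anyone wanting to try the --link-dest approach, a minimal local sketch looks like this. All the paths here are assumptions, and the rsync-server transport described above is omitted for brevity:

#!/bin/sh
# Incremental snapshots with rsync --link-dest (sketch only).
SRC="/home /etc"                       # deliberately left unquoted below
BACKUPS=/mnt/backupdisk
NEW="$BACKUPS/$(date +%Y-%m-%dT%H%M)"

# Files unchanged since the previous snapshot are stored as hard
# links, so each run only consumes space for what actually changed.
rsync -a --delete --link-dest="$BACKUPS/latest" $SRC "$NEW" \
    && ln -sfn "$NEW" "$BACKUPS/latest"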

--
Chris Green
Reply to
Chris Green

Try rsnapshot: it's fast (it uses rsync to make backups) and lets you have more than just the last backup version available. I have it set to make daily backups and keep 4 weekly ones, which means it keeps 7 daily backups, combining the dailies every week into a new weekly backup, which is kept for a month. As each 'backup' is simply a set of pointers to timestamped file copies, the rsnapshot backup set is only about twice the size of a single rsync backup, and the weekly snapshot takes twice as long to make as the daily one, i.e. 8 minutes compared with 4 minutes for the daily run; the complete set is 183GB.
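The matching rsnapshot.conf fragment would be roughly this (the paths are assumptions, and note that rsnapshot insists on tabs, not spaces, between fields):

# /etc/rsnapshot.conf fragment -- fields must be TAB separated.
snapshot_root	/mnt/backupdisk/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
backup	/etc/	localhost/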

For comparison, when I previously used compressed gzip daily backups, these took 3.5 hours and I could only fit 4 daily backups on a 320 GB disk.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

To be honest, in the last 10 years while the system has been running, I have blessed it because two disks died in that time, but I have never cursed it because I needed some audit trail of changes. If I were writing software I'd use sccs/rcs or equivalent anyway.

"I have it set to make ..."

Too effin' complex for me.

When I set up this server back in the noughties, I looked at all the ways of backing up data, and concluded that a sodding big disk was way better than tapes or CDs or DVDs, and so that is what I bought.

It is simply a time delayed mirror. Because I am lazy it backs up EVERYTHING. Trying to work out what it didn't need to was more cost to me than the extra 100Mbytes of disk.

As the system data size grows disks get replaced with bigger ones. And enormous rsyncs restart the whole thing...

*shrug* it's good enough for SOHO. I am not managing terabytes of other people's data these days...
--
"First, find out who are the people you can not criticise. They are your  
oppressors." 
      - George Orwell
Reply to
The Natural Philosopher
