I read somewhere (forgotten where) yesterday that the SD card in a
RPi malfunctions because log files (which nobody reads anyway)
wear out the write count on the card, and so one has to keep
a stock of SD cards to hand with the OS already on them.
How true is this?
I've had my Pi 3B+ with a fairly bog-standard Sandisk card running for about
18 months. In addition to normal OS usage, there's also data being written
every few seconds. That's worked fine. I only had one failure to boot, after
a graceful shutdown, when something became corrupted. I reformatted the card
and CHKDSKed it (as FAT32) and no faults were found. Having copied NOOBS
onto it and installed Raspbian again and set everything up again, the card
is still in use. However, every month or so I shut the Pi down and take a
new disk image in case I need to restore the Pi to its initial state (i.e.
factory build of Raspbian with all my later mods and installations of
packages). I've never (yet!) had to rely on one of those safety disk images,
but at least I'd be up and running a bit quicker than if I had to reinstall
from scratch.
I *could* have used the SD card for writing large video files from the PVR
software that I use, but I decided to write those to a USB spinning disk
rather than to SD, partly to reduce the large number of writes and partly
because I didn't want to fill up 32 GB very quickly.
Over the past few months there have been a lot of non-graceful shutdowns due
to power glitches (crap mains supply to the village) and the Pi has always
been up and running again within a couple of minutes. If things don't
improve, I may need to get a UPS to drive the Pi and router, to cover
2-second outages which are just long enough to make everything reboot. I
think only once has there been a longer break than a couple of seconds, and
that was about 10 minutes: long enough for me to start feeling my way
through an unfamiliar house (we'd only moved in a couple of weeks earlier)
in the pitch black to the bedroom where I knew there was a torch.
That's not heavy use... I had a Pi with camera that took a 1920x1080
still every thirty seconds and added a text overlay and created
thumbnails. Every 6 hours it made a timelapse video of the previous
24 hours of stills. An 8 GB SD card could just about hold 3 days of
stills and the timelapse. The card became un-writeable after about 6
weeks. I tried moving the image store to a USB stick; that died in a
similar amount of time.
Which is probably correct with their definition of heavy use...
On 23-10-2019 at 14:51, Areligious Republican wrote:
There was a bug in Debian 6 or 7, around 2014/2015, that resulted in
continuous writes to the log files. It is nearly 2020 now, Debian is
already at version 10, and that bug has been resolved.
Careful. That document dates from 2004, when single-bit-per-cell devices
were all that was available. The most recent cards are 4-bit-per-cell
and with cell sizes much smaller to increase the capacity per chip. I
believe that around 1000 writes per cell is all that can be expected.
However like many others, I have never had an SD card fail. I have one
which has been in constant use for about 4 years as a Pi system disk.
A 32GB card written to 1000 times is 32TB. Writing 1 million bytes per
second to the card, 24/7/365, is 31.536 trillion bytes in a year. So at a
million bytes per second the card will last a year. At only a thousand
bytes per second the card will last a thousand years. Unless you are doing
something really strange you aren't going to write to it that much.
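Those back-of-envelope figures check out with shell arithmetic (assuming, as above, 32 GB of capacity and ~1000 writes per cell):

```shell
#!/bin/sh
# Back-of-envelope endurance: 32 GB card, ~1000 writes per cell.
capacity=$((32 * 1000 * 1000 * 1000))        # 32 GB in bytes
endurance=$((capacity * 1000))               # total writable bytes: 32 TB
per_year=$((1000000 * 60 * 60 * 24 * 365))   # bytes written per year at 1 MB/s
echo "at 1 MB/s: $((endurance / per_year)) year(s)"
echo "at 1 kB/s: $((endurance / (per_year / 1000))) years"
```

The 1 kB/s figure comes out at just over a thousand years, matching the estimate above.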
But I had a thought, I'm going to get a new card and set up a Pi on a
UPS and write to it as fast as I can till it dies. It won't really tell
us much but it could be interesting anyway.
On Wed, 23 Oct 2019 16:05:53 -0500, Knute Johnson
declaimed the following:
That presumes you are writing to all parts of the card. It also doesn't
account for the effects of using a journaling filesystem.
Take a card where only part is available for updating files, and in
which the journal updates too, and the number of writes increases rapidly.
Also, the way SD cards work is NOT simple byte counting -- opening a
file to add just one byte requires the card to allocate and erase a
complete allocation unit, then copy the file to the new unit, write the new
data, and put the old unit into the free list for erasure. The erase sets
all bits to "1" -- writing can only convert a "1" bit to a "0" bit.
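That one-way property can be demonstrated with a bitwise AND: an erased byte is all 1s (0xFF), and each program operation can only clear bits, never set them, so "overwriting" without an erase mangles the data. A sketch (the example byte values are arbitrary):

```shell
#!/bin/sh
# Erase sets every bit to 1; programming ANDs data in, so bits only go 1 -> 0.
erased=$((0xFF))
first=$(( erased & 0xA5 ))   # program 0xA5 into an erased byte
second=$(( first & 0x3C ))   # try to "overwrite" with 0x3C without erasing
printf 'after first write:  0x%02X\n' "$first"
printf 'after second write: 0x%02X\n' "$second"   # 0x24, not 0x3C
```

This is why any in-place update forces the allocate/erase/copy cycle described above.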
Streaming a 32GB file to a blank card, erasing it, and streaming
another 32GB is light usage. Instead, try creating 320 million 100KB files,
then start randomly deleting 8 million of them, then create 8 million more
files, repeat, randomly deleting files before creating new ones. Each of
these delete/creates will hit the journal first, before the file itself is
touched:
    deleting file X
    free file space used by X
    deletion committed (flag journal entry that this action has completed)
That's three write operations minimum just to delete one file. Creation is
at least as costly.
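Taking the figures above at face value -- 8 million deletes per round, three journal writes each -- and assuming (illustratively) that each tiny journal update burns one 4 MB allocation-unit erase, the wear adds up fast:

```shell
#!/bin/sh
# Illustrative write amplification: each small metadata update forces the
# card to erase and rewrite a whole allocation unit.
unit=$((4 * 1024 * 1024))     # assumed 4 MB allocation unit
deletes=$((8 * 1000 * 1000))  # 8 million deletes per round, as above
writes_per_delete=3           # journal: intent, free, commit
wear=$((deletes * writes_per_delete * unit))
echo "flash wear per round: $wear bytes"   # ~100 TB
```

Even with these rough assumptions, one round of deletions alone dwarfs the 32 TB byte-counting endurance estimate from earlier -- which is exactly why that estimate is so optimistic.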
On some cards, the allocation unit could be measured in multiple
megabytes. Cheaper cards, optimized for FAT (a non-journaling filesystem),
may only have the ability to "hold open" two allocation units -- one of
which is like the FAT itself. Anything that jumps from one file to another
could result in closing one allocation unit and opening another --
and as mentioned, if the opening is to add data, the card needs to obtain a
spare unit, erase it, and copy unaffected sectors to the new unit before
writing the modification. An open unit can continue to have modifications
written to the "erased" portion, but once the unit is closed the "knowledge"
of what is used vs what is writable is lost, and the next modification to
the unit requires a new allocate/erase/copy cycle. Better cards
can sustain maybe 4 to 6 open units -- so multiple files can be open in
different units without triggering allocate/erase cycles.
Class 10 cards are rated for single-file streaming to freshly
formatted/erased media (eg: video). Class 2/4/6 were rated for multiple
small files and fragmentation (eg: photos with some deleted in camera).
Hence, cheaper class 10 cards with 2 allocation units may perform poorly
relative to a class 4 card with 6 allocation units when used for computer
work.
Checking my archives gave me
which is now a dead link.
Still alive, but not as detailed as the above used to be...
Restrictions on open segments
One major difference between the various manufacturers is how many segments
they can write to at any given time. Starting to write a segment requires
another physical segment, or two in case of a data logging algorithm, to be
reserved, and requires some RAM on the embedded microcontroller to maintain
the segment. Writing to a new segment will cause garbage
collection on a previously open segment. That can lead to thrashing as the
drive must repeatedly switch open segments.
On many of the better drives, five or more segments can be open
simultaneously, which is good enough for most use cases, but some brands
can only have one or two segments open at a time, which causes them to
constantly go through garbage collection when used with most of the common
filesystems other than FAT32.
Only tangentially related:
implies that every file access can trigger changes at the allocation-block
level if the filesystem/OS is maintaining the time of last access for each
file.
Wulfraed Dennis Lee Bieber AF6VN
Writes to SD cards are not done per byte, but per block (it's not EEPROM,
but Flash). If such a block is just half a KByte (a single sector's worth)
and assuming random-access writes (a single byte per block), your "thousand
years" drops back to just two ...
Hence wear levelling etc.
Flash NVRAM is really pretty shit technology.
But it is the best we have
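The "thousand years drops to two" figure follows directly: a thousand random single-byte writes per second each burn a whole 512-byte block of endurance, so the card wears at 512 kB/s even though only 1 kB/s of data is written. Using the 32 TB endurance figure from earlier:

```shell
#!/bin/sh
# 32 TB of endurance, consumed 512 bytes at a time by 1000 random
# single-byte writes per second.
endurance=$((32 * 1000 * 1000 * 1000 * 1000))  # 32 TB total endurance
block=512                                      # bytes erased per write
rate=1000                                      # random 1-byte writes/second
wear_per_sec=$((block * rate))                 # 512 kB/s of real flash wear
seconds=$((endurance / wear_per_sec))
echo "lifetime: $((seconds / 86400)) days"     # ~723 days, i.e. about 2 years
```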
...flash memory has a finite number of program/erase cycles (typically
written as P/E cycles). Most commercially available flash products are
guaranteed to withstand around 100,000 P/E cycles before the wear begins
to deteriorate the integrity of the storage. Micron Technology and
Sun Microsystems announced an SLC NAND flash memory chip rated for
1,000,000 P/E cycles on 17 December 2008.
The guaranteed cycle count may apply only to block zero (as is the case
with TSOP NAND devices), or to all blocks (as in NOR). This effect is
mitigated in some chip firmware or file system drivers by counting the
writes and dynamically remapping blocks in order to spread write
operations between sectors; this technique is called wear leveling.
Another approach is to perform write verification and remapping to spare
sectors in case of write failure, a technique called bad block
management (BBM). For portable consumer devices, these wear out
management techniques typically extend the life of the flash memory
beyond the life of the device itself, and some data loss may be
acceptable in these applications. For high-reliability data storage,
however, it is not advisable to use flash memory that would have to go
through a large number of programming cycles. This limitation is
meaningless for 'read-only' applications such as thin clients and
routers, which are programmed only once or at most a few times during
their lifetime.
From the above, SSDs, which always have wear levelling, are now better than
SD cards that do not -- but some SD cards now DO.
Years ago people used to advocate disabling logging entirely on SSDs
to prolong SSD life.
systemctl stop systemd-journald.service
systemctl mask systemd-journald.service
SSDs have gotten better, and this is probably not necessary anymore.
Other tweaks are mounting / with the noatime option (default in Raspbian)
and using tmpfs as much as you can.
Raspbian's default of /tmp not being in tmpfs is not good for wear and tear.
Putting this in /etc/fstab should correct it.
tmpfs /tmp tmpfs nodev,nosuid,size=50% 0 0
Compiling, and browser profiles, are things you may want to put in
tmpfs, for performance and SD life.
HOME=$(mktemp -d) firefox
Also, Raspbian's default of a swap file on the SD card is not a good idea.
I'd disable it or put the swap file somewhere outside the SD card, like an
external spinning HD.
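On Raspbian the swap file is managed by the dphys-swapfile service, so disabling it looks something like this (a sketch; exact behaviour may differ between releases):

```shell
# Turn off and remove the swap file managed by dphys-swapfile (Raspbian).
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
# Stop it from being recreated at the next boot.
sudo systemctl disable dphys-swapfile
```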
To prolong SD card life I've tried to put /var, /tmp and /home on a USB
2 stick, but this became too slow for good desktop use.
It seems that the /home partition causes a lot of lag when put on USB 2.
The /tmp partition on USB 2 also causes some lag. It did not seem to matter
if /var was mounted on the SD card or on the USB 2 stick.
Your 'tmpfs' setting (for speed), with only /var (to prolong SD card life)
on the USB 2 stick, seems like it might be a good choice.
A good USB 3.1 stick is much faster than an SD card, even via the USB 2
ports of Pis prior to the 4. I recommend putting the whole of the root
partition on it, not just certain directories, leaving just /boot on the
SD card. It will be much quicker and the SD card will last indefinitely.
This is what I use
The cost has come down quite a bit recently, and they come in silver or
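One way to do that move by hand (a sketch; the device name /dev/sda2 and mount point /mnt/usbroot are assumptions, and the Raspbian SD-copier tool mentioned below can do much of this for you):

```shell
# Assumes the stick is /dev/sda with its root partition on /dev/sda2,
# already formatted and mounted at /mnt/usbroot.
# 1. Clone the running root filesystem onto the stick.
sudo rsync -axH / /mnt/usbroot/
# 2. Find the stick's PARTUUID.
sudo blkid /dev/sda2
# 3. In /boot/cmdline.txt, change root=PARTUUID=... to the stick's PARTUUID.
# 4. In the stick's /etc/fstab, update the / entry to match.
```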
Thanks for the tip, the large one seems to be the fastest.
I tried putting an Iomega zip drive with USB 3 on the Pi as the root
file system, but unfortunately I didn't get it to work. It hangs on
boot with "... waiting for root device PARTUUID=....". I used the Raspbian
desktop tool for copying the SD card to put whatever I had on the SD
card onto the zip drive. I had to redo the UUID with uuidgen and
tune2fs, because it seemed the same UUID had been copied, although
I had checked for new UUIDs. I guess I just messed up something ...
This Iomega zip drive seems to take a bit of power. When I plugged it
in the X-server crashed, which seems strange so maybe it was some kind
of power dip. Plugging it in while the Pi is not on gives no problems.
A Zip drive, from some sort of 1990's timewarp perhaps?
It will be far too slow to use as storage for a Raspberry Pi; it's maybe
1000s of times slower than the worst SD card you might find in a cheap
Christmas cracker. You won't be able to boot from it, as the kernel won't
wait a couple of microfortnights for it to eventually start dribbling data.
I would not plug it directly in to a Raspberry Pi, the motor in it is
like something out of a washing machine of the same era!