SDCardKiller

Following a discussion on another thread about the life limits of an SD card, I thought I would write a program to write one to death and see just how long it lasts.

I'll post the code below, but the general idea is that I have a random number generator creating file names. I calculate the number of possible files by taking the Java FileStore usable space, taking 90% of that, and dividing it by the file size, 409,600 bytes (100 blocks of 4,096 bytes, the block size from the Java FileStore). Each randomly selected file name is checked for existence: if the file doesn't exist, I write 409,600 bytes of 1s to it; if it does exist and has 1s in it, I write 409,600 bytes of 0s to it; and if it has 0s in it, I delete it. I do this 1,000,000 times, then I delete all the files in the directory and start over. I'm having the program send me statistics every hour, so it should be fairly obvious when the card dies or the usable space gets really small because of marked-off blocks.
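As a sanity check on that arithmetic: with the usable space from the first status report below, 11,028,582,400 bytes, the count comes out to (int)(11028582400 * 0.9 / 409600) = 24232, which matches the "Files" figure the program reports. A minimal sketch of the calculation (the class name is mine; FileStore and Path come from java.nio.file, as in the full program):

import java.nio.file.*;

public class FileCountCheck {
    public static void main(String[] args) throws Exception {
        FileStore fs = Files.getFileStore(Path.of("/"));
        long fileSize = fs.getBlockSize() * 100;  // 4096 * 100 = 409,600 bytes
        // use 90% of the usable space, in whole files
        int fileCount = (int)(fs.getUsableSpace() * 0.9 / fileSize);
        System.out.println(fileCount + " files of " + fileSize + " bytes");
    }
}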

I'm running some tests now on an old card I had lying around. I am going to order some new cards once I am sure I have the right test going.

I would appreciate comments on the testing algorithm and on my code if you have any.

Thanks,

package com.knutejohnson.pi.killer;

import java.io.*;
import java.nio.file.*;
import java.time.*;
import java.time.format.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;
import static java.util.stream.Collectors.*;

import javax.activation.*;
import javax.mail.*;
import javax.mail.internet.*;
import javax.mail.util.*;

public class SDCardKiller implements Runnable {
    private static final String dataDir = "/home/pi/bin/files";

    private final Random random = new Random(System.currentTimeMillis());
    private final DateTimeFormatter formatter =
        DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss");
    private final FileStore fileStore = Files.getFileStore(Path.of("/"));
    private final Thread thread;
    private final long blockSize;
    private final long fileSize;
    private final byte[] ones;
    private final byte[] zeros;

    private volatile long usableSpace;
    private volatile int fileCount;
    private volatile long filesCreated;
    private volatile long onesWrites;
    private volatile long zerosWrites;
    private volatile long filesDeleted;
    private volatile long deleteFailures;
    private volatile long ioExceptions;
    private volatile long totalBytesWritten;

    private final Timer timer = new Timer(true);

    public SDCardKiller() throws IOException {
        Runtime runtime = Runtime.getRuntime();
        runtime.addShutdownHook(new Thread(() -> {
            // empty in the posted code
        }));

        TimerTask task = new TimerTask() {
            public void run() {
                String temp = "";
                ProcessBuilder pb =
                    new ProcessBuilder("vcgencmd","measure_temp");
                pb.redirectErrorStream(true);
                try {
                    Process process = pb.start();
                    try (BufferedReader br = new BufferedReader(
                            new InputStreamReader(process.getInputStream()))) {
                        temp = br.lines().collect(joining("%n"));
                    }
                    if (!process.waitFor(10,TimeUnit.SECONDS))
                        System.out.println("timed out waiting for vcgencmd");
                } catch (IOException|InterruptedException ex) {
                    ex.printStackTrace();
                    temp = ex.toString();
                }

                String text = String.format(
                    "%s%n" +
                    "Usable Space: %d%n" +
                    "Block Size: %d%n" +
                    "FileSize: %d%n" +
                    "Files: %d%n" +
                    "Files Created: %d%n" +
                    "Ones Writes: %d%n" +
                    "Zeros Writes: %d%n" +
                    "Files Deleted: %d%n" +
                    "Delete Failures: %d%n" +
                    "IOExceptions: %d%n" +
                    "Total Bytes Written: %d%n" +
                    "Processor Temperature: %s%n",
                    LocalDateTime.now().format(formatter),usableSpace,blockSize,
                    fileSize,fileCount,filesCreated,onesWrites,zerosWrites,
                    filesDeleted,deleteFailures,ioExceptions,totalBytesWritten,
                    temp);
                System.out.println(text);

                Properties props = new Properties();
                props.put("mail.smtp.ssl.trust","*");
                props.put("mail.smtp.port","25");
                props.put("mail.smtp.host","xxxxxxxxxx.com");
                props.put("mail.smtp.starttls.enable","true");
                props.put("mail.smtp.protocol","TLSv1.2 TLSv1.3");
                props.put("mail.smtp.from"," snipped-for-privacy@xxxxxxxx.com");
                props.put("mail.debug","false");

                try {
                    InternetAddress[] recipients = InternetAddress.parse(
                        " snipped-for-privacy@cess172.com",true);
                    Session session = Session.getInstance(props);
                    MimeMessage mime = new MimeMessage(session);
                    mime.setRecipients(Message.RecipientType.TO,recipients);
                    mime.setSentDate(new Date());
                    mime.setSubject("SDCardKiller Status");
                    mime.setText(text);
                    Transport.send(mime);
                } catch (MessagingException me) {
                    me.printStackTrace();
                }
            }
        };

        timer.scheduleAtFixedRate(task,3600000,3600000);  // once an hour

        thread = new Thread(this);

        blockSize = fileStore.getBlockSize();
        fileSize = blockSize * 100;        // 4096 * 100 = 409,600 bytes
        ones = new byte[(int)blockSize];
        zeros = new byte[(int)blockSize];
        Arrays.fill(ones,(byte)1);         // every byte set to 0x01
        Arrays.fill(zeros,(byte)0);
    }

    public void start() {
        if (thread.getState() == Thread.State.NEW)
            thread.start();
    }

    public void run() {
        // delete all the files
        // (note: fileCount is still 0 at this point, so as posted this
        // startup cleanup deletes nothing)
        IntStream.range(0,fileCount).
            mapToObj(Integer::toString).
            map(n -> new File(dataDir,n)).
            forEach(File::delete);

        while (true) {
            try {
                // calculate usable space on the disk
                usableSpace = fileStore.getUsableSpace();
                // set file count to use 90% of usable space
                fileCount = (int)(usableSpace * 0.9 / fileSize);

                random.ints(0,fileCount).
                    limit(1_000_000).    // limit this to 1M operations
                    mapToObj(Integer::toString).
                    map(n -> new File(dataDir,n)).
                    forEach(file -> {
                        if (file.exists()) {
                            try (FileInputStream fis =
                                    new FileInputStream(file)) {
                                if (fis.read() == 1) {
                                    try (FileOutputStream fos =
                                            new FileOutputStream(file)) {
                                        // [the archived post is cut off here;
                                        // the rest is reconstructed from the
                                        // description at the top of the thread]
                                        for (int b = 0; b < fileSize/blockSize; b++)
                                            fos.write(zeros);  // 1s -> 0s
                                        zerosWrites++;
                                        totalBytesWritten += fileSize;
                                    }
                                } else {
                                    // file holds 0s: delete it
                                    if (file.delete()) filesDeleted++;
                                    else deleteFailures++;
                                }
                            } catch (IOException ioe) {
                                ioExceptions++;
                            }
                        } else {
                            // file doesn't exist: create it full of 1s
                            try (FileOutputStream fos =
                                    new FileOutputStream(file)) {
                                for (int b = 0; b < fileSize/blockSize; b++)
                                    fos.write(ones);
                                filesCreated++;
                                onesWrites++;
                                totalBytesWritten += fileSize;
                            } catch (IOException ioe) {
                                ioExceptions++;
                            }
                        }
                    });

                // per the description above: delete all the files,
                // then start over
                IntStream.range(0,fileCount).
                    mapToObj(Integer::toString).
                    map(n -> new File(dataDir,n)).
                    forEach(File::delete);
            } catch (IOException ioe) {
                ioExceptions++;
                ioe.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // main also reconstructed; the archive truncated the post here
        new SDCardKiller().start();
    }
}

Reply to
Knute Johnson

Here is the status report at hour 1.

2019/10/29 20:47:15
Usable Space: 11028582400
Block Size: 4096
FileSize: 409600
Files: 24232
Files Created: 25506
Ones Writes: 25506
Zeros Writes: 20987
Files Deleted: 13434
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 19043532800
Processor Temperature: temp=48.3'C
Reply to
Knute Johnson

I'm not convinced by this methodology:

- you're writing files, not blocks. The filesystem is almost certainly doing things behind the scenes (e.g. caching data, coalescing writes, updating metadata when it feels like it) that mean you can't see what's really going on.

- I don't see you syncing to force cached writes to complete (particularly an issue if the size of the writes is less than the memory size); see the sketch after this list.

- write amplification will expand writes to the native block size (some power of two). 409,600 bytes isn't a power of two, so you might end up actually writing (for instance) 512KiB. So in that instance your write count would be low by 20%. If your writes aren't aligned with blocks, you could actually be writing 1MiB.

- the data is eminently compressible. You didn't tell us the FS, but some will compress behind the scenes (not if it's FAT though).

- some bad SD cards increase wear levelling for the area where the FAT is stored. You won't observe the effects of that, or conversely of the FAT area wearing out faster.

- the usable space doesn't shrink due to the number of dead blocks. You formatted the thing as a 32GB partition, and it'll stay as a 32GB partition, even if some of those writes eventually fail. It might get marked as read-only eventually, but it'll never shrink to a 31GB partition. I'm not sure if there's a way to read the number of dead blocks like there is on a SATA device.
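(On the syncing point: a minimal sketch of what forcing the writes could look like in Knute's Java, assuming ext4 honours fsync(); the class, method and parameter names are mine, not from the posted code.)

import java.io.*;

class SyncedWrite {
    // write a buffer 'count' times, then block until the kernel has
    // actually pushed the data to the card (fsync(2))
    static void writeAndSync(File file, byte[] block, int count)
            throws IOException {
        try (FileOutputStream fos = new FileOutputStream(file)) {
            for (int i = 0; i < count; i++)
                fos.write(block);
            fos.getFD().sync();  // flush the page cache to the device
        }
    }
}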

Practically, to be useful something like this needs to work at the block not file level.
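(A minimal sketch of what block-level access could look like from Java, writing straight to the device node rather than through the filesystem. The device path, erase-unit size, and card size are assumptions; this needs root and destroys everything on the card.)

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.*;

class BlockKiller {
    public static void main(String[] args) throws Exception {
        final int unitSize = 4 * 1024 * 1024;            // assume 4 MiB erase units
        final long cardBytes = 16L * 1000 * 1000 * 1000; // nominal 16 GB card (assumed)
        ByteBuffer unit = ByteBuffer.allocate(unitSize); // all zero bytes
        try (FileChannel dev = FileChannel.open(Path.of("/dev/mmcblk0"),
                StandardOpenOption.WRITE)) {
            for (long u = 0; u < cardBytes / unitSize; u++) {
                unit.clear();
                dev.write(unit, u * unitSize);  // one aligned erase-unit-sized write
                dev.force(false);               // push it to the card before moving on
            }
        }
    }
}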

Theo

Reply to
Theo

No, you can't, but then you never can with an operating system in the way.

Not sure what you mean by syncing, but the files are closed, which should force the OS to flush the write buffers. In any case the Pi only has 500MB of memory, so the OS would have to flush sooner or later.

The OS block size is 4096 bytes. Not sure about the actual card. I picked 100 times the OS block size as that created about 24,000 files on a 16GB card. Maybe more files is better, I don't know.

The file system is ext4. The file size reported by the OS is 409600 bytes.

I'm not writing to the FAT partition.

I don't know. That is one of the reasons I tried this.

The original discussion was about log files and journaling killing the SD card from too many writes. So I devised this experiment with all its limitations to attempt to kill an SD card by writing to it.

I would love to hear some practical suggestions on how to improve the experiment.

Stats as of this morning:

2019/10/30 08:47:10
Usable Space: 11028582400
Block Size: 4096
FileSize: 409600
Files: 24232
Files Created: 230654
Ones Writes: 230653
Zeros Writes: 226492
Files Deleted: 218655
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 187246592000
Processor Temperature: temp=43.5'C
--

Knute Johnson
Reply to
Knute Johnson

On Tue, 29 Oct 2019 20:25:22 -0500, Knute Johnson declaimed the following:

Since most flash memory erases to 1s, the first half is mostly a no-op. The card obtains an allocation unit from its free list, erases it, and then essentially does nothing but declare the sectors in use, since writing all 1s to a unit already filled with 1s makes no changes.

1-bits can be changed to 0-bits but I doubt the SD card controller chip is smart enough to realize it can do an in-place update -- so again the card will obtain a free allocation unit, erase it to 1s (and copying any data belonging to other unopened files to the unit), then write 0s to the sectors of the opened file.

This is also why "secure file erase" can't really be done on flash memory. The common secure erase is to write a random pattern, then invert the pattern and write over the file, and finally write another random pattern -- but on flash memory, each write is going to a different part of the memory as part of wear-leveling and routine allocation unit assignments.
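(For illustration, a minimal sketch of that three-pass overwrite in Java; the names are mine. On a hard disk the passes land on the same sectors, but on flash each pass goes to freshly mapped blocks, which is the point above.)

import java.io.*;
import java.util.Random;

class ThreePassErase {
    // classic "secure erase": random pattern, inverted pattern, random
    // pattern -- defeated on flash by wear levelling
    static void overwrite(File file) throws IOException {
        byte[] pass1 = new byte[(int)file.length()];
        new Random().nextBytes(pass1);
        byte[] pass2 = pass1.clone();
        for (int i = 0; i < pass2.length; i++)
            pass2[i] = (byte)~pass2[i];          // invert the first pattern
        byte[] pass3 = new byte[pass1.length];
        new Random().nextBytes(pass3);
        for (byte[] pattern : new byte[][]{pass1, pass2, pass3}) {
            try (FileOutputStream fos = new FileOutputStream(file)) {
                fos.write(pattern);
                fos.getFD().sync();              // push each pass to the device
            }
        }
    }
}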

formatting link
formatting link
From the latter URL:

On a Beaglebone Black

debian@beaglebone:~$ cat /sys/block/mmcblk0/device/preferred_erase_size
4194304
debian@beaglebone:~$ cat /sys/block/mmcblk1/device/preferred_erase_size
4194304
debian@beaglebone:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              220096       0    220096   0% /dev
tmpfs              49496    5508     43988  12% /run
/dev/mmcblk0p1   7572696 2847372   4366564  40% /
tmpfs             247476       0    247476   0% /dev/shm
tmpfs               5120       4      5116   1% /run/lock
tmpfs             247476       0    247476   0% /sys/fs/cgroup
tmpfs              49492       0     49492   0% /run/user/1000
debian@beaglebone:~$

mmcblk0 is an 8GB SanDisk Edge C-4 SD card, mmcblk1 is the on-board 4GB eMMC.

For an R-Pi 3B+

pi@rpi3bplus-1:~$ cat /sys/block/mmcblk0/device/preferred_erase_size
4194304
pi@rpi3bplus-1:~$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/root       12028584 6626000   4768524  59% /
devtmpfs          469544       0    469544   0% /dev
tmpfs             474152       0    474152   0% /dev/shm
tmpfs             474152    6404    467748   2% /run
tmpfs               5120       4      5116   1% /run/lock
tmpfs             474152       0    474152   0% /sys/fs/cgroup
/dev/mmcblk0p6    258094   53034    205060  21% /boot
tmpfs              94828       0     94828   0% /run/user/1000
pi@rpi3bplus-1:~$

(16GB Kingston C-10 SDHC card)

If the systems aren't lying about the "preferred" size, then all of these are using 4MB erase sizes -- 10 times the arbitrary file size you've defined -- so on average, you are going to undergo 10 times the erase cycles since each erase unit has 10 files in it (ignoring Linux inode and journal updates).
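(If that's right, one easy improvement is to size the test files as whole erase units. A minimal sketch, assuming the sysfs path from the examples above; Files.readString needs Java 11+.)

import java.nio.file.*;

class EraseAlign {
    public static void main(String[] args) throws Exception {
        Path p = Path.of("/sys/block/mmcblk0/device/preferred_erase_size");
        long eraseSize = Long.parseLong(Files.readString(p).trim()); // e.g. 4194304
        long fileSize = 409_600;
        // round the test-file size up to a whole number of erase units
        long aligned = ((fileSize + eraseSize - 1) / eraseSize) * eraseSize;
        System.out.println("erase size " + eraseSize +
            " -> use files of " + aligned + " bytes");
    }
}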

I did kill an SD card on an R-Pi 3B many years ago -- running the HINT benchmark on it with a swap file on the SD card. (I later bought a 1TB USB hard drive to use for swap, and reran the benchmark on the R-Pi and then on a BBB.) I should spend time some day recompiling the benchmark. It does show the impact of Linux: at the time, the main usage was to benchmark a no-OS embedded board, so there was no overhead from an OS, and the benchmark would shut down when an attempt to malloc a large memory block failed. On Linux the OoM Killer, uhm, kills the benchmark before it can report results; a swap file/disk allows the benchmark to do one pass with swapping, at which point it determines performance is falling off too much and shuts down on its own.

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber

So would it make more sense to just write a file of 0s and then erase it? And to change the file size to 4MB? Or does every write to a file, whether the same data or not, cause an erase and a write to a different block? In which case we could just write 0s to the file over and over again.

--

Knute Johnson
Reply to
Knute Johnson

Hello Knute,

Your test consists of files that are always the same length, with the same contents. I do not think that is a realistic test. First, you could use random file lengths and contents, i.e. all values from 0 to 255, up and down, etc.

But there is a more drastic way of testing. Use two micro-SD cards of the same size and make, and two SD card readers. Fill one completely until it is almost full, except for a few bytes unused. Then make a copy with dd to the other card, i.e. a full backup. After that, low-level format the destination card so it is really empty, and make again a full backup of the first card, etc. Then count the number of perfect copies without errors, and also the time every backup copy takes. As soon as you get much longer backup times and/or many errors, you know the destination card has died. Then look at the backup counter: how many copies were made, and in how much time?

Does that sound more realistic? This way of testing almost completely bypasses the anti-wear techniques, as you only copy full discs and erase them and so on. After the test you know how many backups could be made safely with the same media. Good luck in testing.
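(A minimal sketch of that copy-until-it-dies loop, driven from Java with ProcessBuilder as in Knute's program; the device paths are hypothetical, and the low-level format step between passes is omitted. Run as root, and only on cards you are willing to destroy.)

import java.util.concurrent.TimeUnit;

class BackupCycler {
    public static void main(String[] args) throws Exception {
        int passes = 0;
        while (true) {
            long t0 = System.nanoTime();
            // copy card A to card B, then verify the copy byte for byte
            if (run("dd", "if=/dev/sda", "of=/dev/sdb", "bs=4M") != 0) break;
            if (run("cmp", "/dev/sda", "/dev/sdb") != 0) break;
            passes++;
            System.out.printf("pass %d ok, %d s%n", passes,
                TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - t0));
        }
        System.out.println("card died after " + passes + " good copies");
    }

    static int run(String... cmd) throws Exception {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}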

Henri.

Reply to
Henri Derksen

Den 2019-10-30 kl. 02:25, skrev Knute Johnson:

Looking in the code:

Arrays.fill(ones,(byte)1);
Arrays.fill(zeros,(byte)0);

Should that not be Arrays.fill(ones,(byte)255); if all bits should be 1? Or am I thinking of the wrong layer here?

And if the filesystem is the wrong layer, perhaps sticking the card in a PC and using a script fiddling with dd is better - that works at the block level.

Reply to
Björn Lundin

I don't know. If, as the other fellow said, the card erases the block to ones before it is written to, does writing (byte)1, which has 7 zero bits, cause more wear? There are a lot of unknowns here.

The basic idea is to write the SD card to death and, at some future point, take a measurement to get an estimate of real-use card life. That's why I am writing files.

knute...

Reply to
Knute Johnson
2019/10/30 15:47:18
Usable Space: 12762402816
Block Size: 4096
FileSize: 409600
Files: 28042
Files Created: 341120
Ones Writes: 341119
Zeros Writes: 334609
Files Deleted: 326833
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 276778188800
Processor Temperature: temp=44.5'C
Reply to
Knute Johnson

On Wed, 30 Oct 2019 10:41:31 -0500, Knute Johnson declaimed the following:

Since I don't think the card controller chips check for whether a bit will change, the simple view would be that writing even one bit to a file will require the card to pull a free "unit", erase it, then copy allocated sectors/block/clusters from the original unit that belong to /other/ files, then start adding the new data to what were unallocated sectors in that "unit".

Cards can keep some number of "units" "open" (cheap cards keep 2 units open -- which basically means for FAT format, the FAT/bitmap is in one unit, and one unit can be getting data written to it [typically, one output file, but on a freshly erased card, it might be possible to have multiple output files interleaving at the sector level]). Open "units" are buffered in RAM in the card controller chip. Journaling file systems will do a lot of unit swapping on such a cheap card -- on a card with a 6 open unit ability, it may be possible to keep the journal, bitmap, and multiple output files in different units and the card only really flushes /to/ the flash when it fills a "unit" and needs to fetch a new one.

formatting link
""" Instead of writing data to the disk's data areas directly, as in previous versions, the journal in EXT3 writes file data, along with its metadata, to a specified area on the disk. Once the data is safely on the hard drive, it can be merged in or appended to the target file with almost zero chance of losing data. As this data is committed to the data area of the disk, the journal is updated so that the filesystem will remain in a consistent state in the event of a system failure before all the data in the journal is committed. On the next boot, the filesystem will be checked for inconsistencies, and data remaining in the journal will then be committed to the data areas of the disk to complete the updates to the target file. """

The journal scheme basically means that writing to a file actually writes the data somewhere in the journal first, then writes the journal metadata that describes what is in the journal... Some time later, the OS merges that data with the real file, followed by clearing out the relevant metadata. So you have the possibility that even writing one byte to a file triggers up to four unit erase/rewrite operations (all handled by the card controller chip; a card with 6-unit capability may not have written the journal unit to flash, and may also have a unit for the journalled data and a unit for the actual file buffered).

""" EXT4 reduces fragmentation by scattering newly created files across the disk so that they are not bunched up in one location at the beginning of the disk, as many early PC filesystems did. The file-allocation algorithms attempt to spread the files as evenly as possible among the cylinder groups and, when fragmentation is necessary, to keep the discontinuous file extents as close as possible to others in the same file to minimize head seek and rotational latency as much as possible. Additional strategies are used to pre-allocate extra disk space when a new file is created or when an existing file is extended. This helps to ensure that extending the file will not automatically result in its becoming fragmented. New files are never allocated immediately after existing files, which also prevents fragmentation of the existing files. """

Note that EXT4 is actually designed to NOT create adjacent/interleaved files -- so expect to use one "unit" per open output file. Obviously the seek and latency concerns don't apply to an SD card, since wear levelling will scatter the "logical blocks" willy-nilly.

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber

On Wed, 30 Oct 2019 21:12:59 -0400, Dennis Lee Bieber declaimed the following:

Additional information:

formatting link
""" Delayed allocation, on the other hand, does not allocate the blocks immediately when the process write()s, rather, it delays the allocation of the blocks while the file is kept in cache, until it is really going to be written to the disk. This gives the block allocator the opportunity to optimize the allocation in situations where the old system couldn't. Delayed allocation plays very nicely with the two previous features mentioned, extents and multiblock allocation, because in many workloads when the file is written finally to the disk it will be allocated in extents whose block allocation is done with the mballoc allocator. The performance is much better, and the fragmentation is much improved in some workloads. """

So... to exercise the SD card, you may want to explicitly flush the data (presuming a flush will result in moving the cache contents to the media; just closing a file may leave it to the OS as to when to flush the cache).
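(In Java, besides an explicit fsync, the file can be opened so that every write is synchronous. A minimal sketch, assuming ext4 honours O_DSYNC; the class and method names are mine.)

import java.io.*;
import java.nio.file.*;

class SyncOpen {
    // every write() is forced to the device before it returns,
    // instead of relying on close() to flush the cache
    static void writeSynced(Path file, byte[] block, int count)
            throws IOException {
        try (OutputStream out = Files.newOutputStream(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.DSYNC)) {
            for (int i = 0; i < count; i++)
                out.write(block);
        }
    }
}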

""" Larger inodes: Ext3 supports configurable inode sizes (via the -I mkfs parameter), but the default inode size is 128 bytes. Ext4 will default to
256 bytes. This is needed to accommodate some extra fields (like nanosecond timestamps or inode versioning), and the remaining space of the inode will be used to store extended attributes that are small enough to fit in that space. This will make the access to those attributes much faster, and improves the performance of applications that use extended attributes by a factor of 3-7 times. """

Since the inode contains information about where the data of a file is stored, it will get updated when the file is closed (well, when the journal is committed). Note the size of the inode structure. That's a lot smaller even than the 4kB "block" size that is default. So the card "unit" holding the inodes will get lots of erase/copy/rewrite operations. Also take into account that Linux "directory" is just a special file containing the file name and a pointer/index to the inode associated with the file. Creating a file will result in opening the directory file, adding a record, with an inode index picked from the inode bitmap (so that's another part of the media that gets rewritten, as will the data [zone] bitmap to identify free/used data regions).

So... Assume an SD card with 6-unit capability.

One unit can hold the inode bitmap.
One unit can hold the zone/data bitmap.
One unit can hold the directory file inode.
One unit can hold the directory file data.

So long as those units are held "open" they do not trigger writes to the flash memory itself... that leaves two units that can be used for the journal inode and journal data...

If you are lucky, the journal units don't get written to the actual flash, but stay in the open "units" in the controller. But committing the journal contents will result in having to modify a data file inode unit and a data file data unit -- so that means having to close either the bitmap or the directory file units... How the card determines which to close/write is unknown. I'd hope it is the directory file, since that gets just the file name and initial inode index -- likely when the data file is named in the open call. The bitmaps will be affected as the data is journalled and committed.

In contrast, visualize a cheap SD card (one that was optimized for FAT operation with video cameras -- which are rated for streaming video to a freshly formatted card, no jumping around in the filesystem). Such a card only handles two open units. In my hypothetical example, one would have needed a card with 8 open unit capability to minimize unit erase/rewrite. With only 2 units available, a lot of unit activity will take place (I'm not even accounting for the journal in this list).

Open directory file inode (to find the directory data location).
Open directory file data.
Open inode bitmap to get a new inode for the new file (whoops, have to close one of the other units).
Write the file name/inode record to the directory.
Reopen directory file inode to update metadata (access time, etc. -- closing some other unit).
Open the new data file inode (close one of the other units).
Open the data/zone bitmap to get space for new data (whoops, have to close another unit).
Open the data space unit (close one of the open units).
Write the data and close the data file.
Reopen the data file inode to update the metadata.

EACH of those open/reopen operations could result in the card allocating a free unit, copying data from the original unit into the open buffer, where it can be modified, and written to flash when the unit is closed. So -- without the journal, that card has already cycled at least six erase units PER FILE CREATION. The other card, with 6 open units, assuming the journal [which was not included in the 2-unit example] and bitmaps stay in four of the open units and don't get flushed to media, undergoes the six erase-unit cycles initially, but then each subsequent file only cycles four units (2x directory, 2x data, ...). The journal and bitmaps only get flushed when the card is dismounted.

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber
2019/10/31 08:47:10
Usable Space: 12762402816
Block Size: 4096
FileSize: 409600
Files: 28042
Files Created: 581355
Ones Writes: 581355
Zeros Writes: 568338
Files Deleted: 551364
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 470914252800
Processor Temperature: temp=39.2'C
Reply to
Knute Johnson
2019/10/31 20:47:14
Usable Space: 12972314624
Block Size: 4096
FileSize: 409600
Files: 28503
Files Created: 744117
Ones Writes: 744117
Zeros Writes: 721517
Files Deleted: 695024
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 600323686400
Processor Temperature: temp=44.5'C
Reply to
Knute Johnson

I wanted to write as many bytes as possible and do it quickly because I figured it would take a while.

I wanted to use a scheme that was similar to regular use, except a lot more of it, so I could kill the card in a shorter period of time. I think your idea would certainly give you valid data, but I'm not sure how to apply it to what we want to find out, which is how long a micro SD card will last in normal use.

Thanks for looking at it Henri.

Reply to
Knute Johnson

This command pipeline writes 233MiB/s to /dev/null on my main home system (nearly a decade old). I suspect it would be faster than any practical SD card or USB stick could handle. It writes decimal text representations of numbers but with no repetition.

yes '' | cat -n | tr -d ' \t' | pv -prb

HTH

--
Robert Riches 
spamtrap42@jacob21819.net 
(Yes, that is one of my email addresses.)
Reply to
Robert Riches

Needs: sudo apt update && sudo apt install pv

Reply to
A. Dumas
2019/11/01 20:47:12
Usable Space: 12986429440
Block Size: 4096
FileSize: 409600
Files: 28534
Files Created: 1044754
Ones Writes: 1044754
Zeros Writes: 1011039
Files Deleted: 978312
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 842052812800
Processor Temperature: temp=45.1'C
Reply to
Knute Johnson
2019/11/02 13:47:13
Usable Space: 12986429440
Block Size: 4096
FileSize: 409600
Files: 28534
Files Created: 1246873
Ones Writes: 1246873
Zeros Writes: 1214867
Files Deleted: 1178886
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 1008328704000
Processor Temperature: temp=44.5'C
--

Knute Johnson
Reply to
Knute Johnson
2019/11/05 15:47:16
Usable Space: 12985520128
Block Size: 4096
FileSize: 409600
Files: 28532
Files Created: 2116325
Ones Writes: 2116325
Zeros Writes: 2055453
Files Deleted: 1991113
Delete Failures: 0
IOExceptions: 0
Total Bytes Written: 1708760268800
Processor Temperature: temp=48.3'C
--

Knute Johnson
Reply to
Knute Johnson
