JFFS2 - flash lifetime estimate

I'm using a JFFS2 filesystem on a NOR flash.

These are the features of my system:

- the flash size is 32MB, with 256 erase blocks;

- the block size is 128KB;

- each block is rated for 100,000 erase cycles;

- most files on the filesystem are static;

- there is only one dynamic file;

- this file has a fixed size (about 512KB);

- the file is binary and record-based;

- every record in the file is 44 bytes.

A single operation writes one record to the file sequentially. When it reaches the end of the file, writing wraps around to the first record.

If I have 10,000 operations per day, how can I estimate the lifetime of my system?

Reply to
gcartabia

A quick and dirty way is to add a counter in the block erase routine, and see how many records you can write for each block erase. You could even maintain a histogram and see how well the wear levelling is implemented.
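Before touching the kernel, the counting idea can be sketched in user space. This is a toy model, not JFFS2 internals: it assumes a simple round-robin log over the four 128KB blocks that back the 512KB file and ignores per-node overhead; both are assumptions made for illustration.

```python
from collections import Counter

BLOCK_SIZE = 128 * 1024
RECORD_SIZE = 44
NUM_BLOCKS = 4   # 512KB file footprint / 128KB blocks -- a modelling assumption

def simulate(total_records):
    """Append records round-robin; bump a per-block counter on each erase."""
    erases = Counter()
    block, used = 0, 0
    for _ in range(total_records):
        if used + RECORD_SIZE > BLOCK_SIZE:   # block is full: erase, move on
            erases[block] += 1
            block = (block + 1) % NUM_BLOCKS
            used = 0
        used += RECORD_SIZE
    return erases

hist = simulate(100_000)   # ten days of traffic at 10,000 records/day
print(dict(hist))
```

The real counter would of course live in the kernel erase path, where it also captures JFFS2's own node overhead and garbage-collection behaviour; the histogram idea carries over directly.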

Reply to
Arlet

Can JFFS2 not mark dying blocks and decline to use them in the future? If so, the filesystem would shrink gradually instead of suddenly dying. Or perhaps JFFS2 sets aside some blocks as replacements; in that case the count should be configurable, and you might be able to trade flash lifetime against capacity.

Maybe for your dynamic "file" it's a better idea to implement your own storage functions directly on top of MTD instead of using a filesystem.

-Michael

Reply to
Michael Schnell

Assuming the flash is full and there are no spare sectors, a very rough back-of-the-envelope calculation suggests that you can write a total of ~11,915 (512KB / 44) records before you need to erase all four blocks (512KB / 128KB) and start again. At 10,000 operations per day that's roughly one full cycle every 1.2 days; rounding up for safety, say you need to erase the sectors about 6 times in every 5-day period.

Since you've got 100,000 erases available, that suggests 100,000 / 6 = 16,666 five-day periods, or about 83,333 days (= 228 years).
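The arithmetic above can be checked directly. The figures are the thread's own; the 6-erases-per-5-days rounding is the conservative step:

```python
RECORD = 44
FILE_SIZE = 512 * 1024
ERASE_CYCLES = 100_000   # rated erases per block
OPS_PER_DAY = 10_000

records_per_cycle = FILE_SIZE // RECORD           # ~11,915 records per full pass
days_per_cycle = records_per_cycle / OPS_PER_DAY  # ~1.2 days per pass
# Conservative rounding from the post: 6 erase cycles per 5-day period.
periods = ERASE_CYCLES // 6                       # 16,666 five-day periods
lifetime_days = periods * 5
lifetime_years = lifetime_days / 365              # ~228 years
```

Note this ignores per-write metadata overhead; a version that accounts for it appears later in the thread, and the conclusion does not change.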

In practice it will be a little worse than this due to the per-node overhead; on the other hand, JFFS2 does compression. The zlib algorithm it uses works quite well even on small runs of bytes, so depending on the entropy of your data you may find you do even better. And if there are spare sectors on the flash, JFFS2 will do wear-levelling, so the lifetime will be better still.

In short, it sounds like what you're doing should work just fine.

GWC

Reply to
Geronimo W. Christ Esq

Yes, this could work. I have an on-chip debugger and I can set a breakpoint there, but I don't know the name of the erase routine.

Reply to
gcartabia

Looks like jffs2_erase_block in fs/jffs2/erase.c would do.

Instead of trapping it in the debugger, I would just put a printk() in there, and let it run for a while (possibly at increased speed), and capture the kernel log.

Reply to
Arlet

I forgot to say that I don't use compression. Anyway, I know that the wear-levelling algorithm in JFFS2, to optimise block lifetime, uses all the blocks of the filesystem, even the ones holding static data. So the lifetime could be longer, but I'm concerned about the overhead of every single write operation.

giovanni

Reply to
gcartabia

Compression is built into JFFS2. In the versions I've seen, it is not easy to disable. Did you hack the JFFS2 code in the kernel?

Try asking on linux-mtd, but I would be surprised if the overhead was much greater than the data size you're writing.
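One way to bound the effect of that overhead: each uncompressed data write costs roughly one JFFS2 node header on top of the payload. Assuming about 68 bytes of header per node (an assumption — check struct jffs2_raw_inode in your kernel headers), the earlier estimate can be redone:

```python
RECORD = 44
NODE_OVERHEAD = 68       # assumed per-write JFFS2 node header, uncompressed
LOG_SPACE = 512 * 1024   # flash space cycled by the dynamic file
ERASE_CYCLES = 100_000
OPS_PER_DAY = 10_000

bytes_per_write = RECORD + NODE_OVERHEAD         # 112 bytes of log per record
writes_per_cycle = LOG_SPACE // bytes_per_write  # ~4,681 records per erase pass
lifetime_days = ERASE_CYCLES * writes_per_cycle / OPS_PER_DAY
lifetime_years = lifetime_days / 365             # still well over a century
```

So even with the overhead more than doubling the bytes written per record, the lifetime drops from roughly two centuries to roughly one — still far beyond any realistic product lifetime.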

One possible way of estimating would be :

  1. cat the MTD device containing the JFFS2 filesystem to a file;
  2. execute one record update operation;
  3. cat the MTD device again, this time to another file.

By doing hexdumps of the files generated in steps 1 and 3, and diffing them, you should be able to see how many bytes changed when one record was updated. Rinse and repeat to get an idea of the average change.
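The comparison in that last step can be automated instead of eyeballing hexdumps. A minimal sketch, assuming the two dumps are equal-sized files (the paths are placeholders):

```python
def count_changed_bytes(path_a: str, path_b: str) -> int:
    """Count byte positions that differ between two equal-sized flash dumps."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    assert len(a) == len(b), "dumps should be the same size"
    return sum(x != y for x, y in zip(a, b))
```

Run it across several single-record updates and average the results; the per-update byte count, compared with the 44-byte record size, gives a direct measurement of the write overhead.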

Reply to
Geronimo W. Christ Esq
