CF/SD flash filesystem

Hi,

I'm trying to figure out what state-of-the-art Linux filesystem/scheme I could use if I have to use a CF or SD card as primary storage.

Part of my dataset rarely changes, so I can make a read-only partition for that. Another part is quite dynamic, with potential small updates every 10 seconds. I've gone quite far to make sure writes are only done when needed at user level. The device could lose power at any time and should recover quickly.

My naive approach is ext3 and sqlite with pragma synchronous = normal. Startup is the normal Linux startup plus sqlite's pragma quick_check(1). The failure rate of this approach is >5% per year :-(. Some of it may be due to elevated temperature, but it still sucks.
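For concreteness, here is a minimal sketch of that setup using Python's sqlite3 module; the table name and schema are hypothetical, and the demo runs against an in-memory database rather than the real file on the card:

```python
import sqlite3

def open_checked(path):
    """Open the database, set synchronous=NORMAL, run quick_check(1)."""
    con = sqlite3.connect(path)
    con.execute("PRAGMA synchronous = NORMAL")
    # quick_check(1) stops after the first reported problem, so a
    # healthy database answers quickly at boot
    (result,) = con.execute("PRAGMA quick_check(1)").fetchone()
    if result != "ok":
        raise RuntimeError("quick_check failed: " + result)
    return con

# demo on an in-memory database; on the device this would be the
# database file on the writable partition
con = open_checked(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, value REAL)")
con.commit()
```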

What would be better? Can LogFS be practically used on flash cards now? Btrfs with some special settings? Ext2? How do I make fsck fast? Vfat? Can sqlite manage on vfat? Something custom?

Should I split the card into partitions and RAID them? It would wear faster, but at least I would know about errors earlier...

Is it conceivable or even likely that errors in one partition could cause errors in another?

Is there any practical difference between CF and SD? Do consumer grade CF/SD cards still perform wear levelling in chunks of 1000/1024? Do consumer cards commonly use static or dynamic wear levelling? What about industrial grade?

previous discussion: [link]

Reply to
dimaqq

It has often been discussed here that most CF and SD cards are unusable for systems that can lose power: because of their wear-levelling handling, they need an unforeseeable amount of time to write any update to the internal flash medium. The result of a power loss can be unrecoverable data loss, not only of recently written data, but of all data on the card (even in another partition, if it holds multiple partitions). I have even been told about cards becoming completely unusable, so that not even partitioning/formatting them was possible.

There are some (far more expensive than usual) cards that are specified for "industrial applications" and seem to use an internal storage scheme and capacitor-backed hardware that avoids this problem. They are said to reduce the damage to trashing the most recently written sector(s).

The Linux file system that accesses the CF/SD card has no influence on this. In particular, neither a wear-levelling "flash" file system nor a journaling file system helps. With "industrial" cards, a journaling file system (such as ext3) might speed up recovery, but OTOH it increases the number of writes, and thus the potential for problems, a lot. So I don't know if this is really a good idea.

Of course it makes sense to do as few writes as possible (e.g. switching off "last access time" updates).
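As a sketch of what that could look like in /etc/fstab (device names, mount points and the ext4 choice are all assumptions, not from the thread); `noatime` suppresses the access-time updates mentioned above:

```
# hypothetical fstab entries for a two-partition card
/dev/mmcblk0p1  /data/ro  ext4  ro,noatime            0 0
/dev/mmcblk0p2  /data/rw  ext4  noatime,nodiratime    0 0
```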

If you use a database, then in addition to data and file system corruption, you can have database-internal corruption. IMHO it's not a good idea to use a database on _any_ system that is in danger of power loss. Some database brands may have recovery schemes for that, but I'm not an expert on this at all.

So (especially if you really want to use a database), IMHO the only decent way to go is to provide hardware (battery and controller) that always lets the system do a proper shutdown (first close the database and after that unmount the file system) when power loss is imminent.

If this is not possible, you should avoid a database and implement a kind of cyclic record-storing scheme (I once used layered cycles: fast ones, plus slow ones that compress the data of the faster ones in order to create a history), best in a flash file system on a raw flash chip, or on an "industrial" card.
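A minimal sketch of such a cyclic record store, assuming fixed-size slots written round-robin with a sequence number and CRC so that a torn write is detected on recovery (slot count, sizes and record layout are all made up for illustration):

```python
import struct
import zlib

SLOTS = 64       # assumption: ring of 64 fixed-size records
SLOT_SIZE = 64   # bytes per slot: 8-byte header + padded payload

# Record layout: little-endian sequence number and CRC32,
# with the CRC computed over the sequence number plus payload.
HDR = struct.Struct("<II")

def pack(seq, payload):
    assert len(payload) <= SLOT_SIZE - HDR.size
    payload = payload.ljust(SLOT_SIZE - HDR.size, b"\0")
    crc = zlib.crc32(struct.pack("<I", seq) + payload)
    return HDR.pack(seq, crc) + payload

def write_record(buf, seq, payload):
    """Overwrite the slot for this sequence number in the ring."""
    slot = seq % SLOTS
    buf[slot * SLOT_SIZE:(slot + 1) * SLOT_SIZE] = pack(seq, payload)

def newest(buf):
    """Recovery scan: highest sequence number with a valid CRC."""
    best = None
    for slot in range(SLOTS):
        rec = bytes(buf[slot * SLOT_SIZE:(slot + 1) * SLOT_SIZE])
        seq, crc = HDR.unpack(rec[:HDR.size])
        payload = rec[HDR.size:]
        if zlib.crc32(struct.pack("<I", seq) + payload) == crc:
            if best is None or seq > best:
                best = seq
    return best
```

A record torn by a power cut fails its CRC, so recovery simply falls back to the previous valid record instead of running a full fsck-style repair.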

-Michael

Reply to
Michael Schnell

Yeah, that's what I was afraid of.

Flash will most certainly be SD or microSD. I'll try to get some "industrial" flash cards if the budget allows.

I'll make sure that partitions are well aligned, and of course I'm not going to use access times and will try to change metadata as little as possible. And I'm going to verify the format of the db/files on every power-on, at least to some extent.
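For the read-only partition, one way to do that power-on verification is a checksum manifest built at image-creation time; this is a hypothetical sketch, not something from the thread:

```python
import hashlib
import json
import os

def build_manifest(root):
    """Map each file under root (relative path) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def verify(root, manifest):
    """Return the list of files that are missing or changed."""
    current = build_manifest(root)
    return [p for p in manifest if current.get(p) != manifest[p]]

# the manifest itself would be stored alongside the image, e.g.:
# json.dump(build_manifest("/data/ro"), open("manifest.json", "w"))
```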

For now my options for file system are:

  • ext4 without journal and with extent size = flash [logical] block size
  • exfat with cluster size = flash block size
  • logfs, although I'm not sure I can tweak that yet
  • raw if I really have to

And for the database:

  • sqlite3, although its disk page size is …
Reply to
dimaqq

I don't know much about logfs or nilfs, but btrfs is a copy-on-write filesystem. This has many advantages for reliability: if something goes wrong during writing, you typically have the old version of the file intact. But it also means a lot more writing when you are updating a file, since all, or at least large parts, of the old file must be copied while making the change. As far as I understand it, this makes it a poor performer with sqlite (and other databases, and also things like virtual machine disk images). And with SD cards you don't actually get the reliability benefits, since the card itself can decide to re-arrange things for wear levelling and thus leave the disk in an inconsistent state.

Given the problems and limitations of SD cards, I think your best bet is a big battery and a simple file system: ext2 (or ext3/4 with journalling disabled, since you want to avoid the extra writes) or perhaps fat32. Fat32 has the advantage that it is easier to transfer files to non-Linux machines. You would have to test the performance of sqlite on fat32 first, of course. Reliability is not an issue: by definition, an SD card system is unreliable, and if you lose power within a second or so after starting a write, things can go badly wrong.
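A quick way to run that performance test is a commit-latency micro-benchmark pointed at a file on each candidate mount; this is only a rough sketch, and the absolute numbers will depend heavily on the card's controller:

```python
import sqlite3
import time

def commit_latency(path, commits=50):
    """Average seconds per INSERT+COMMIT against the given db file."""
    con = sqlite3.connect(path)
    con.execute("PRAGMA synchronous = NORMAL")
    con.execute("CREATE TABLE IF NOT EXISTS t (ts REAL, v REAL)")
    t0 = time.monotonic()
    for i in range(commits):
        con.execute("INSERT INTO t VALUES (?, ?)", (time.monotonic(), i))
        con.commit()  # each commit forces a sync to the medium
    con.close()
    return (time.monotonic() - t0) / commits

# e.g. compare commit_latency("/mnt/ext2/t.db") against
# commit_latency("/mnt/fat32/t.db") on the actual card
```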

Reply to
David Brown
