Reliably using memory cards and file systems in embedded systems

Hello all,

What is the best way to use non-user-serviceable memory cards with file systems in embedded systems that can lose power at any time? The way I see it, there are two problems that need to be addressed:

1 - File system corruption due to sudden power loss immediately after the file system has erased a sector on a device.
2 - File system fragmentation (especially if you're not running an OS like Linux, and so don't have access to defragmentation software. In my case, my firmware is on the metal).

So far, I can only think of the following ways to solve the above two problems:

1 - Regarding the power supply issues, either use a UPS so that the system is never turned off, or have a short-term backup supply that can provide enough power for the system to safely close all file handles and write everything it needs to the memory card after it detects that main power has been lost (a rough sketch of this shutdown handling follows below).
2 - Regarding the fragmentation issues, the system could be programmed to regularly reformat the memory card at suitable times (e.g. for my application, a remote datalogger, once all the data files have been successfully sent to the server).
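A rough sketch of what that shutdown handling might look like on a bare-metal system, assuming ChaN's FatFs is used for file access and that the board exposes some kind of power-good signal. The names below are placeholders, not from any particular BSP:

#include "ff.h"          /* FatFs: FIL, f_sync(), f_close() */

extern FIL log_file;     /* log file kept open while the system is running */

/* Called from a power-fail interrupt, or polled from the main loop, once
   the supply monitor reports that main power is gone.  The backup supply
   only has to last long enough for the buffered data and the FAT/directory
   updates to reach the card. */
void on_main_power_lost(void)
{
    f_sync(&log_file);   /* flush cached data and the directory entry */
    f_close(&log_file);  /* release the handle; the filesystem is consistent */

    /* stop any further writes and wait for power to disappear or return */
}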

What other problems would one need to address when using memory cards and file systems in embedded systems, and how would one overcome them?

Regards,

Amr Bekhit

Reply to
amrbekhit


Use the maximum cluster size to minimize fragmentation. Also, if possible, pre-allocate logfiles to a large size after opening them (create the file, seek to the maximum size, and write a zero byte). Deleting all logfiles is enough to clear any fragmentation; you don't have to reformat the card.
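A minimal sketch of that pre-allocation trick, assuming ChaN's FatFs; the file name and size are only placeholders:

#include "ff.h"

#define LOG_MAX_SIZE  (16UL * 1024 * 1024)     /* placeholder: 16 MB log file */

FRESULT preallocate_log(FIL *fp)
{
    UINT written;
    BYTE zero = 0;
    FRESULT res;

    res = f_open(fp, "log.bin", FA_WRITE | FA_CREATE_ALWAYS);
    if (res != FR_OK) return res;

    res = f_lseek(fp, LOG_MAX_SIZE - 1);       /* allocate clusters up to max size */
    if (res == FR_OK)
        res = f_write(fp, &zero, 1, &written); /* write one byte at the very end */
    if (res == FR_OK)
        res = f_lseek(fp, 0);                  /* rewind, ready for real log data */

    return res;
}

Recent FatFs releases also provide f_expand() for allocating a contiguous area in one call, which does much the same thing more directly if the version in use has it.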

Reply to
Arlet Ottens

It doesn't just have to be in connection with an erase - non-journalled file systems can get corrupt if power is lost during any write.

Also note that some memory cards can get seriously corrupt from unexpected power loss, even if you have a good filesystem on it. In particular, I believe that SD-Cards can occasionally lockup completely from power fails during writes.

Filesystem fragmentation is virtually irrelevant on non-spinning media (and it is not nearly the issue it used to be on spinning media).

That's the only way to get reliability - avoid power fails while writing.

That's a waste of time. Ignore fragmentation.

Reply to
David Brown

Fragmentation on memory cards can be harmful for write access. The cards are typically optimized for large sequential writes. If you're writing random blocks, the card may have to erase large blocks (typically around 4MB) for every small write, which makes write access slower and will wear out the flash sooner.
Reply to
Arlet Ottens


That won't do anything useful on any modern filesystem that supports sparse files (e.g. anything designed in the last 20 years or so).

Reply to
Grant Edwards

FAT is still widely used, especially on memory cards. It's also a fairly good choice for embedded work: it's simple to implement, works well with memory cards, and is widely supported.

Reply to
Arlet Ottens

Fair enough, but I wouldn't use FAT (or anything similar) on a memory card unless the card vendor will guarantee that it does built-in wear-levelling and bad-block remapping. The last time I talked to CF vendors, none of them would...

Reply to
Grant Edwards

There are flash-specific filesystems that handle this sort of thing, but they are only good for raw flash devices - not memory cards with their own mapping.

You can do a bit better by using a decent filesystem with journalling - but if we assume that the target system here is a small embedded system rather than an embedded Linux system, a journalled file system is far more work to implement. And of course if you want to be able to take the card out of the system and read it from any PC, then the only sane choice is FAT (the insane option for a near-universally implemented filesystem being NTFS).

It would take all day to list the shortcomings of FAT - yet it is the only realistic choice.

Reply to
David Brown

Memory cards are seldom "optimised" for anything. And they are normally used for small random writes.

It is true enough that writes that match erase blocks (which are invariably much smaller than 4 MB - 128 KB is more realistic) will be a little faster, and will cause less wear on the flash. But it is a reasonable assumption that the OP is not looking for the fastest possible solution (since he didn't say so, and most embedded systems can accept slow writes), and no matter how hard you try you will sometimes get very long delays in writing. You have to deal with that anyway - being careful with block sizes won't change that much.

With FAT, you can't choose to align your blocks unless you want to write your own system - and even then it will only help a bit with some blocks. It's just not worth the effort.

And fragmentation doesn't come into this - it makes no difference that you will be able to measure.

Memory cards, of all kinds, are sub-optimal. FAT is the only realistic choice of filesystem, and it is definitely sub-optimal. But that's what you've got to work with - trying to tweak it is going to make negligible difference.

So either a memory card with FAT is good enough and you use it - or it is not, and you find a completely different solution.

But the problem of power failures while writing /is/ something you can fix - and something that you /should/ fix.

Reply to
David Brown

A major market for memory cards is digital cameras, which almost exclusively use big sequential writes.

I've found several references that mention the 4MB size. Here's an article, for instance:

formatting link

The article also explains the different access modes, and how the card optimizes multiple writes to the same allocation unit.

The easiest solution is to leave the manufacturer formatting in place, assuming they know what the block alignment is, and have formatted the filesystem accordingly.

Reply to
Arlet Ottens

Dear All,

Thanks for the useful replies so far. Just for clarification, in my system I'm using a microSD card formatted with FAT. The system is powered by an NXP LPC1752 with 64KB flash and 16KB RAM, so in my case space is at a premium. I ended up using the FAT file system because there are plenty of free embedded implementations on the web.

As I understand it, although some file systems are more robust than others, they are all at risk of corrupting the file system on a sudden power loss, so making sure the system has a reliable source of power is something that needs to be done regardless of the file system being used.

@Arlet: I like the idea of preallocating log file space. Since I do have a growing log file in my system, I can see this being useful in minimising fragmentation.

Amr

Reply to
Amr Bekhit

There are maybe some memory cards like this - but most NAND chips have much smaller erase block sizes.

That's certainly true - there is no benefit to be gained by re-formatting the card, and if it is one of these referred to in the article you mentioned, then changing the format might make it slower in theory, or marginally decrease its lifetime. (I say "slower in theory", because the OP's system can't write nearly fast enough to saturate large, fast memory cards.)

Reply to
David Brown

Good.

Just make sure the "free implementation" you use is actually licensed in a way that lets you use it - details are important. Here's one that I know you /can/ use:

Correct.

Don't bother - ignore fragmentation. You might make a few percent difference on average in speed - but timings will vary many times more than this anyway.

Reply to
David Brown


Just be aware that pre-allocation can take quite some time on some FAT systems. If you pre-allocate a few GB of data, you only have to write the one marker byte at the end, but your system may have to write out several megabytes of FAT data to the card. If your logger uses slow SPI-mode writes, that might take many minutes.

Mark Borgerson

Reply to
Mark Borgerson

I've also seen references to SD cards whose internal controllers are optimized for FAT filesystems, in that they do more frequent wear-leveling on the first part of the disk where the FAT(s) and directory sectors are located.

Whether such techniques are universally used is an open question. Most SD and Micro-SD cards are being sold to camera and cell phone owners who may only fill the memory a few times in the lifetime of the device.

Mark Borgerson

Reply to
Mark Borgerson

With a big cluster size (which I'd recommend anyway) it's not so bad.

Assuming a FAT32 system with 32kB cluster size, and pre-allocating a 2GB file, you need to initialize 64k FAT entries. Given 4 bytes per FAT entry, that's 256 kB worth of data.

Reply to
Arlet Ottens

OK. So extend that to a 32GB SDHC card and multiply by 4. (I recently developed a long-term logger that uses an array of 4 SD cards and collects several KB/second for up to 6 months.) So you get 256KB *16 *4 or about 16MB to initialize. That might well end up at several minutes! ;-) That's one of the reasons that I used a custom sequential file system. The downside is that it takes a special application using raw device reads to move the data to the PC.

I forget what the default cluster size is when you get a new 32GB card. Does anyone know? All my large cards have been reformatted many times, and I've lost track of the original configuration.

My own opinion is that 160MB of data per day times 180 days ought to add up to several MSc theses!

Mark Borgerson

Reply to
Mark Borgerson

FAT is limited to 4GB files, so you'd never have to initialize more than 500 kB per file. And you also don't have to pre-allocate the whole thing at one time. You can start logging to a file and extend it by multiple 1MB seeks as you go along. That way, fragmentation is still possible, but it will be very limited.

A few recent cards I've looked at had 32kB cluster sizes, but that's not a big sample.

Reply to
Arlet Ottens

That would work if the files could miss a few seconds (or more) between them. My problem was that the customer wanted 6 months of data with no breaks. I was using an MSP430 MPU for the logger, and it didn't have enough buffer memory to handle much more than the time needed to update the sequential file directory and start a new file at the end of each day.

Mark Borgerson

Reply to
Mark Borgerson

All you need for reasonable pre-allocation is some time to write a single sector worth of FAT entries. Combined with 32kB clusters, that's enough to write a 4MB chunk of data. When you get near the end of that 4MB chunk, you pre-allocate another 4MB chunk.
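A rough sketch of that chunked scheme, again assuming a reasonably recent FatFs; the chunk size and threshold below are chosen purely for illustration (one 512-byte FAT sector = 128 entries of 32 kB clusters = 4 MB):

#include "ff.h"

#define CHUNK_SIZE  (4UL * 1024 * 1024)   /* 128 FAT entries x 32 kB clusters */

static FSIZE_t allocated_end;             /* how far the file is pre-allocated */

FRESULT extend_if_needed(FIL *fp)
{
    FRESULT res = FR_OK;
    FSIZE_t pos = f_tell(fp);

    if (allocated_end < pos + CHUNK_SIZE / 4) {     /* nearing the allocated end */
        UINT written;
        BYTE zero = 0;

        res = f_lseek(fp, allocated_end + CHUNK_SIZE - 1);
        if (res == FR_OK)
            res = f_write(fp, &zero, 1, &written);  /* allocate the next chunk */
        if (res == FR_OK) {
            allocated_end += CHUNK_SIZE;
            res = f_lseek(fp, pos);                 /* go back to the log position */
        }
    }
    return res;
}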
Reply to
Arlet Ottens
