Throughput question with CF/DiskOnChip

Hello all, I have a small system with an Intel StrongARM processor running a 2.4.11 kernel. This board contains both a DiskOnChip and a CF card for non-volatile storage. The root file system is in a separate flash chip, the DOC is mounted as ext2, and the CF is mounted as vfat. The DOC and CF are used for applications and general-purpose data storage.

In my application, I need to store streaming data to disk (i.e. CF or DOC) at approximately 176 Kbits/sec. This typically takes the form of 16 separately opened files, with 11 Kbits/sec of data going to each file. I originally found that streaming the data to the DOC caused a large "blip" in CPU usage at various times (it could be after 1 minute, 10 minutes, or 45 minutes). This "blip" was detected by running "top" on the system; the DOC driver would end up using 99% of the CPU cycles during the blip, which lasted about 20 seconds. Since the DOC driver runs at a higher priority (-20) than my application (0), it essentially locked up the rest of the system, which eventually caused a failure due to data buffers overflowing.

I then attempted to store the streaming data to the CF card, thinking the IDE driver for the CF might behave better than the DOC driver. I am using a 4 GB CF card. However, I am finding the same anomaly: the system seems to lock up while data is being synced to disk, which consequently causes the collection buffers in my application to overflow.

I have tried several different methods of actually storing the data to disk (a sketch of the 4K method follows the list):

- calling fwrite() after every 16 bytes on each of the 16 files

- buffering up 4K per stream in a RAM buffer, then calling fwrite() for the 4K...one of the problems here is that all the writes happened almost simultaneously, which took a long time

- buffering up 64K per stream in a RAM buffer, then calling fwrite() for the 64K...same as above, but the disk writes were huge
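To make the second method concrete, it amounts to roughly this (simplified C; the file handles are opened elsewhere and the names are illustrative, not my actual code):

#include <stdio.h>
#include <string.h>

#define NSTREAMS 16
#define BUFSZ    4096

static FILE         *fp[NSTREAMS];          /* opened with fopen() elsewhere */
static unsigned char buf[NSTREAMS][BUFSZ];
static size_t        fill[NSTREAMS];

/* Append one sample to a stream; fwrite() fires only on a full buffer. */
void put_sample(int s, const unsigned char *sample, size_t len)
{
    memcpy(buf[s] + fill[s], sample, len);
    fill[s] += len;
    if (fill[s] == BUFSZ) {
        fwrite(buf[s], 1, BUFSZ, fp[s]);    /* one large 4K write */
        fill[s] = 0;
    }
}

Since all 16 streams fill at the same 11 Kbits/sec, their buffers hit the 4K mark at almost the same moment, which is exactly the simultaneous-write burst mentioned above.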

I am wondering if there are other ideas/solutions to enable me to stream ~176 Kbits/sec to either DOC or (preferably) CF. Stepping back for a moment...am I crazy for thinking this should work OK? The CF specs claim megabytes/sec of throughput, so raw speed shouldn't be the limiting factor. The system needs to sustain this transfer rate (176 Kbits/sec, broken up into 16 streams of 11 Kbits/sec each) for hours on end (it can go for a long time with a 4 GB CF card). What do you think?

If you've made it this far, my personal thanks for reading. Any help is appreciated here. Let me know if I can provide any other details that would help.

Regards, John O.


A CF can't be used for that at all unless you can make absolutely sure that power is always provided to the system until, say, a minute after the last write action. The internals of a CF sometimes need an unpredictable amount of time after the start of a write request. If power is removed within this time, the card may be damaged and left unusable.

A DOC should not be accessed with an ext2 file system. This will cause wear-out effects and damage part of the storage cells very soon, making the data unusable and the system unwritable.

Hopefully the DOC is just a Flash array accessible in memory blocks with no internal intelligence.

If so, you can use a dedicated flash file system (e.g. JFFS or JFFS2) on top of an appropriate flash media driver.

Such a file system takes care of wear-out effects and supposedly will not do the write buffering that causes the "blip" you describe.

-Michael



Hmmm...I have not read about this. Of course I can understand the possibility of losing a portion of a file that is in the process of being flushed to disk if the CF is removed with the write in progress (and this is an acceptable constraint for this particular application). However, I don't understand how the CF card could be damaged and made unusable if it is removed at the wrong time. My PDA has a CF card, and it has no knowledge of when I would remove the CF card...but I can remove the card whenever I want. How does it solve this problem?


Running an ext2 file system on top of the DiskOnChip appears to be a fairly common practice:

formatting link
formatting link
formatting link

Why exactly do you discourage this activity?

The DOC has considerable on-board intelligence.

IIRC, JFFS2 was in the works for supporting the DOC, but at the time the system was being developed it was not ready yet, so we passed on it...it could be considered again if there were a problem with this. We have been using the DOC with ext2 as-is for the last year or so, with variable throughput usage scenarios (scaling from a few Kbits/sec all the way up to the full 176 Kbits/sec), and we haven't seen the device fail (unless, of course, the "blip" we are seeing is due to the number of bad blocks increasing, and thus the DOC driver/hardware taking longer and longer to find valid flash blocks to write to).

Thanks for the insight...I'll be curious to hear your responses to the above questions. Any other ideas from anyone on the original questions?

-jro

Three points:

  1. Flash media is generally very slow, especially for writing data, and especially if it has to erase previously written data before writing.

  2. The DOC is probably not ideal for what you are trying to achieve; there is simply too much going on outside of your direct control.

  3. Using magnetic disk filing systems, especially journalling ones, with flash media is generally bad news.

When designing a system recording data onto flash media - MultiMedia Card (MMC) in my case - we took a very different approach. Rather than using files, we simply divided the MMC address space into a number of different areas and treated those to which data was being streamed as ring buffers. The application firmware writes directly to the media and does not go through a filing system.
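In outline, the ring handling is tiny (illustrative C; media_write_block() stands in for whatever raw block-write entry point the media driver exposes, and is an assumption rather than a real API):

#include <stdint.h>

#define BLOCK_SIZE 512

struct ring {
    uint32_t first_block;   /* start of this stream's region on the media */
    uint32_t num_blocks;    /* region length in blocks                    */
    uint32_t next;          /* next block to write, relative to start     */
};

extern int media_write_block(uint32_t lba, const uint8_t *data);

/* Append one block; when the region is full, wrap and overwrite the oldest. */
int ring_append(struct ring *r, const uint8_t block[BLOCK_SIZE])
{
    int rc = media_write_block(r->first_block + r->next, block);
    r->next = (r->next + 1) % r->num_blocks;
    return rc;
}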

We also wrote a static FAT-16 file system structure onto the media to make it easier for other applications to read the data. This structure is not used by our firmware.

A further performance enhancement is to pre-erase a number of blocks ahead of the data to be written. It is also a good idea to have the largest possible RAM memory buffer between the data source and the media.

All of this may not be necessary in your case. (We were working on a 16-bit 20 MHz microcontroller, communicating with the MMC over a slow SPI interface.) However, you could, for example, pre-allocate the space for your files in advance, to avoid repeated file extensions and the associated file structure updates. I suspect that this activity may be triggering wear-leveling in the DOC driver, to avoid wearing out the flash memory blocks holding the filing system data.
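Pre-allocation itself can be as simple as writing each file to its full size once, ahead of time, and then only ever overwriting in place at run time (illustrative C, not our actual code; size is assumed to be a multiple of the chunk for brevity):

#include <stdio.h>

int preallocate(const char *path, long size)
{
    static char zeros[4096];            /* zero-initialised chunk */
    FILE *f = fopen(path, "wb");
    long done;

    if (!f)
        return -1;
    for (done = 0; done < size; done += sizeof zeros)
        fwrite(zeros, 1, sizeof zeros, f);
    fclose(f);
    return 0;
}

At run time, open with "r+b" and use fseek()/fwrite() within the existing extent; the file never grows, so no new clusters need to be allocated and no directory update is triggered.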
--
Chris Isbell
Southampton, UK

This has been discussed several times in this forum. Please see the backlog. To handle wear-out effects, a CF internally monitors write and erase times and replaces dying blocks with fresh (spare) ones. To do this it uses some blocks as a reference table. If such a table is changed (which can happen within any normal write request), part of it is erased and then rewritten. If power goes down at that moment, the table is damaged and the CF is unusable, as it can't find the internal memory block associated with an external address range. It can't even be formatted by normal means. There may be special proprietary IDE commands that allow the CF to be revived by rewriting the address allocation table to a standard default allocation (losing all data).

I was speaking about a DOC without hardware intelligence like that of a CF, i.e. one that is a quite naked flash chip.

ext2 and all "normal" file systems rewrite some blocks of the "disk" over and over (e.g. the FAT with a DOS file system, more complex structures with e.g. ext2). Writing a single byte to a flash device causes a complete block (size depending on hardware: 128 bytes ... 128 K) to be erased and rewritten. A flash block can only be rewritten a certain number of times (depending on hardware: 10,000 ... 1,000,000 times). When using the chip for some kind of log file, this number can be reached quite fast. A dedicated flash file system knows about the flash blocks and (re)uses them cyclically to manage this "wear-out" effect. Moreover, it will automatically retire blocks that become unwritable in spite of the rotation scheme.
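As a rough illustration with mid-range figures from the above: a log that rewrites the same 128 K block once per second, on a part rated for 100,000 erase cycles, wears that block out after about 100,000 seconds, i.e. barely more than a day. A flash file system spreads those erases over all the blocks instead.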

If so, you are out of luck, as the same now applies as with the CF. With a "dumb" DOC you can use a flash file system, while with a CF-card type of device this does not help, as only the manufacturer knows about the block size and handling.

JFFS2 has been ready for ages, but of course you need a media access driver for the device in question, too. If that is available now and specified for JFFS2 usage, this seems the way to go.

I don't think so. (There should be a way to read the count of bad blocks from the media access driver.) I suppose the "blip" is just the cache write-back that ext2 does. This is not a good thing either, as a large cache will make you lose a lot of data when power fails before the file system is unmounted. A journaling file system could help a lot here, but it also creates even more write accesses and increases the wear-out problem.
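One way to test this theory is to flush each file yourself at a steady rate, so dirty data never piles up into one long burst. A quick, untested sketch:

#include <stdio.h>
#include <unistd.h>

/* Push the stdio buffer into the page cache, then force the dirty pages
 * out to the media. Called after every few writes, this spreads the
 * write-back out instead of letting it accumulate into a "blip". */
void flush_stream(FILE *f)
{
    fflush(f);
    fsync(fileno(f));
}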

-Michael


I did something similar to read and write a PC-compatible CF with a (non-Linux) embedded system. Here we needed a more complex structure, so multiple files and directories were necessary.

I wrote a driver that can read and write data in FAT-16 files, indeed using the file allocation table but not able to write it, so files can only be overwritten, not created, deleted, or changed in size. So we use standard Windows means to format the CF and create files filled with predefined data.
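In outline it comes down to something like this (illustrative C, not the real driver; media_write() stands in for the low-level access routine):

#include <stdint.h>

extern int media_write(uint32_t lba, const uint8_t *buf, uint32_t sectors);

struct fixed_file {
    uint32_t *cluster_lba;        /* LBA of each cluster, read from the FAT once */
    uint32_t  num_clusters;
    uint32_t  sectors_per_cluster;
};

/* Overwrite one whole cluster in place; the FAT is never touched,
 * so the file can neither grow nor shrink. */
int fixed_file_write_cluster(const struct fixed_file *ff, uint32_t idx,
                             const uint8_t *data)
{
    if (idx >= ff->num_clusters)
        return -1;                /* beyond the pre-created size: refuse */
    return media_write(ff->cluster_lba[idx], data, ff->sectors_per_cluster);
}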

On a CF you can't be sure what to write into a file to have the internal memory be filled with the "erased" pattern.

-Michael


Thanks for the heads-up on this. I went back through the archive and found some of the references. In my current system, there is no way to gracefully power down the board to ensure that power to the CF card is maintained for up to a minute after the last write (as you suggested). I still don't totally understand this, though. It's hard for me to believe the problem isn't solvable: if the cause is an update to a flash block that takes too long to write, the manufacturers could write the new table to a fresh flash block and, once it was successfully written, update the "pointer" to the bad-block table; that way, if power ever went down, there would always be a usable version of the table. But maybe I'm oversimplifying the issue (or don't totally understand it...). To mitigate this for now, I read that some manufacturers of CF cards put supercapacitors in their cards to provide juice when needed if power is removed. I looked around and couldn't confirm any manufacturer who does this. Does anyone know of one? ST Micro was mentioned in the thread where I found this...

I don't think a DiskOnChip exists without hardware intelligence...my understanding is that the DOC utilizes something called TrueFFS, which is a flash file system that provides wear-leveling etc. underneath any normal file system being used (ext2 in this case). It presents what appears to be a standard block device to the OS while still providing this capability. See the following for a brief overview (pg. 7 specifically):

formatting link

So, I guess it seems like ext2 on top of TrueFFS won't cause any burned-out flash cell issues. Am I wrong here? What am I missing?

Ok, I understand here that you are referencing the flash block that would be continually updated, such as the block containing the FAT in a normal DOS system. However, if this is on top of TrueFFS, it shouldn't be an issue (see above). Now, if this is on a CompactFlash card where there isn't a driver taking care of things like wear-leveling, I can understand that this would be a problem. I've looked around and tried to understand what smarts commonly come in a CompactFlash card, but there doesn't seem to be much out there (or it is totally up to the manufacturers, as long as they provide the required IDE interface).

I've got to believe that people have had the need to stream data to flash devices on the order of ~150 kbps before, and that they were able to make it happen. I'm still playing around, and will probably start doing a bunch of tests with dd, dumping /dev/zero to a test file at different block sizes (bs=2k, bs=4k, etc.) to see if I can find the sweet spot as far as efficiency goes.
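For example (with /mnt/cf standing in for wherever the CF is mounted):

time dd if=/dev/zero of=/mnt/cf/test.bin bs=4k count=1024

and then the same thing at bs=2k, bs=8k, and so on (adjusting count to keep the total size constant), comparing the elapsed times.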

You can also check out this cool calculator for the DOC, which gives me much more hope regarding wear on the flash device:

formatting link

Thanks for the input, and I'll be awaiting your collective responses :-)

John


BTW: there are new chips that combine RAM and flash functions. Here neither wear-out effects nor major data loss on power-down will hit you (when using a journaling file system and no big memory cache).

These chips work like a RAM (no wear-out at all), and when power goes down they automatically store all data into a flash area (good for a million power cycles before wear-out).

I'd use those for logging purposes.

-Michael


I'm not sure what the performance is for DiskOnChip, but comparing a linear MultiMedia Card (MMC) to ATA-based CF cards is a bit like comparing floppies to hard drives.

When choosing a CF card, you should pick either a SanDisk 'Extreme' or 'Ultra' card or a Lexar 80x (or better) device. With any CF card you can run your favorite disk drive benchmark utility (from your PC, of course) and see whether it is in fact capable of sustained transfer rates of 10 MB/s or better.

A word of warning here, though: some companies, including SanDisk, use multi-level cell (MLC) flash in products that do not advertise a speed advantage, and these products have nowhere near the performance of the above-mentioned cards. Additionally, MLC flash does not have the life expectancy of single-level cell (SLC) flash. Just another case of buyer beware!

Another surprise you will find if you look at the benchmark data is that small-file and random access on a CF card is a fraction of the performance of large multi-block access. This is just the nature of flash. Seek time is nothing, but each block must be erased before it can be overwritten, which makes for a lot of data getting moved around in the flash when you just want to update one line of data on the card. If you are doing a lot of small file accesses and have the power budget, look into a Hitachi Microdrive, which is a true HDD in a CF package. But be ready to support a 350-500 mA draw during disk activity. YIKES!!
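To put a rough number on it (assuming a 128 KB erase block and 512-byte sectors): rewriting a single sector forces the card to relocate the other 255 sectors in that block as well, so a one-sector update can move 256 times the data you actually changed. That is why small random writes look so bad in the benchmarks.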

Cheers,

--Alan


By the way, that figure is for transfers to the on-board cache only. We could never get confirmation from manufacturers of the ##x write speeds to flash.

Unfortunately, SLC CF cards beyond 128 MB are VERY expensive. Some of our clients use a small MLC card (64 MB or less) as the primary boot drive and mount an SLC secondary drive for additional storage.

--linnix
