I have an input data stream at 2 MByte/s that I need to write to a mass storage device. I need to store 10 GB worth of data. I guess that a modern 16 GB CF card, which states 20..30 MByte/s of write speed, could be used for this. Since I don't have much buffer memory, and I cannot afford to lose any data, I must be absolutely sure that the CF card will be able to handle the input data rate under all circumstances. Can I use a CF card like the Sandisk CF5000 for this application, or is this a disaster waiting to happen?
The title of my post is a bit misleading. Ideally, I would like to start reading out the CF card before the file has been completely written, which will result in interleaved reads and writes, each at a data rate of at least twice my input data rate. Something like this:
- write 32 sectors to address A at > 4 MByte/s
- read 32 sectors from address B at > 4 MByte/s
- write 32 sectors to address A+32 at > 4 MByte/s
- read 32 sectors from address B+32 at > 4 MByte/s
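As a quick sanity check on those numbers (32-sector chunks, the 2 MByte/s input, and the "> 4 MByte/s" per-transfer figure, which is an assumption, not a card spec): at exactly 4 MByte/s the interleave only just breaks even, so the per-transfer rate really does have to be strictly greater.

```python
CHUNK_SECTORS = 32
SECTOR_BYTES = 512
chunk = CHUNK_SECTORS * SECTOR_BYTES   # 16384 bytes per transfer

input_rate = 2_000_000   # bytes/s arriving from the data stream
card_rate = 4_000_000    # assumed rate of each individual write/read

t_fill = chunk / input_rate        # input produces one chunk in 8.192 ms
t_cycle = 2 * chunk / card_rate    # one write + one read also takes 8.192 ms

# The two times are equal: at 4 MByte/s per transfer the schedule has
# zero slack, so any pause at all makes the buffer grow without bound.
print(f"fill {t_fill*1000:.3f} ms, cycle {t_cycle*1000:.3f} ms")
```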
I forgot an important piece of information: there is no file system. I'm writing the card in "raw" mode using an FPGA. I'm writing only the data bytes, starting at sector 0, until the 10 GB file is complete. There is nothing else on the card.
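For reference, the raw numbers for that layout (assuming 512-byte sectors and decimal gigabytes):

```python
FILE_BYTES = 10 * 10**9      # 10 GB, decimal
SECTOR_BYTES = 512
INPUT_RATE = 2 * 10**6       # 2 MByte/s input stream

sectors = FILE_BYTES // SECTOR_BYTES   # sectors written, starting at LBA 0
duration_s = FILE_BYTES / INPUT_RATE   # total acquisition time in seconds

print(sectors, duration_s)   # ~19.5 million sectors over 5000 s (~83 min)
```

So the acquisition runs for well over an hour, and the card must not fall behind at any point during it; a 16 GB card leaves about 6 GB of headroom.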
You can expect a large delay (~100 ms) at the very beginning of the process, and random delays (~10..100 ms) during operation. This is typical behavior for CF cards, even though the average sustained transfer rate can be 20-30 MB/s or higher.
Vladimir Vassilevsky DSP and Mixed Signal Design Consultant
That's why CF cards don't specify minimum write speeds, right? What about SDHC cards? Do they exhibit similar random delays? If not, is there any technology (be it a hard disk drive) that will allow me to write these 10 GB of data without unpredictable interruptions?
On a hard drive, with some luck, yes. If you verify in advance that the areas you will write to are fast to write (i.e. contiguous, with no sectors reassigned by the drive), you can be fairly sure things will work for some time. But you won't know when your luck runs out, of course...
For a 100% uninterrupted write, I vaguely remember a discussion on the T13 reflector perhaps 10 years ago, when some new "streaming" features were being introduced (or were about to be). I have never needed them, so I don't know what became of them, or whether they suit your purpose; I believe they were merely error tolerant, in that the drive would not bother retrying a write if it failed. My memories of that are very vague, but it may be worth a glance. They are probably in the ATA standards.
But as Vladimir suggested, just buffering for 100+ ms and using the normal command set may be the simplest and sanest way to go.
The system is connected to a PC via a gigabit Ethernet interface. Ideally, the data will be transmitted in real time to the PC and logged to the card as a backup. In case something goes wrong with the network or the PC during the data acquisition, the data (which can be worth tens of thousands of dollars) will still be readable from the card (via Ethernet) after the acquisition has completed.
As you said, I could also use a few DIMM modules to store the whole file, but flash gives me additional immunity against power interruptions. Also, since this is for an embedded application that will be built for the next 10 years, I'm looking for solutions that will remain available rather longer than the few years DIMM modules typically are.
Are these random delays of up to 100 ms specified somewhere? To design a large-enough buffer, I need to know at least the maximum duration of these pauses, and the minimum interval between two consecutive pauses.
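To illustrate the arithmetic, here is the sizing that follows from those two numbers, using Vladimir's worst-case figures as placeholders (100 ms maximum pause, 20 MB/s sustained write); I haven't seen either figure specified anywhere, so treat both as assumptions:

```python
input_rate = 2 * 10**6    # bytes/s from the input stream
card_rate = 20 * 10**6    # assumed sustained write rate between pauses
max_pause = 0.100         # assumed worst-case stall, seconds

# Data accumulating in the buffer while the card is stalled:
backlog = input_rate * max_pause                 # 200 kB minimum buffer

# Once the card resumes, the backlog drains at (card_rate - input_rate):
recovery_s = backlog / (card_rate - input_rate)  # ~11.1 ms to catch up

print(backlog, recovery_s)
```

So with these figures a 200 kB buffer suffices only if consecutive pauses are at least ~11 ms apart, which is exactly why the minimum interval between pauses is the second number I need.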