DOSFS and 8GB SDHC card

Hi All,

I am writing data-logging firmware using dosfs. The data I am logging amounts to more than 10MB. I tested the firmware with a 2GB SD card and a 4GB SDHC card, and both cards work properly: when I put them into my PC I can read the filenames and creation dates. However, when I tested the same firmware with an 8GB SDHC card, DFS_WriteFile returned an error after about 1MB of data had been logged. When I then tried to read the card on my PC, it reported that the media was corrupted and would not open it, and I had to reformat the card (allocation unit size of 64K). I have formatted the 8GB SDHC card with different allocation unit sizes and the symptoms differ with each: with a smaller unit size, say 16K, the filenames and creation dates are corrupted, and although I can read the directory I cannot open the files.

Has anybody out there encountered the same problem? Can somebody give me some pointers on where I should look? I am quite sure it is dosfs that causes the problem.

Best regards and thank you in advance. Tony


Reply to
TonyDeng

If you are using dosfs then the file system is FAT. That file system has a limit of 2GB on the maximum partition size.

Andrew

Reply to
Andrew Jackson

Actually, I remembered that DOSFS does support FAT32, so the 8GB size should be fine. However, when I used dosfs there were a number of problems with it (certainly in the FAT12 implementation) which may also have affected the FAT32 implementation. If I get time I will look through my notes.

Andrew

Reply to
Andrew Jackson

There are indeed a number of problems with the FAT12 part. I have e-mailed some fixes, but never got any reply. Those errors are very specific to the FAT12 part, with its awkward 1.5-byte FAT entries, so I don't think they have an impact on the FAT16 and FAT32 parts. I also had a problem creating a file (the first file) in the root directory using FAT16 (wrong cluster number). My current fix works in the current application, but I still need to verify that it is always correct.

In my current application, I log data to 2GB SD cards. I just checked and have files over 2MB without error.

If you want to debug your dosfs problem, it's best to do it on the PC with an image of the card. You don't even have to use a complete image: I tested the FAT16 code with an image of only the first 16MB of a formatted 2GB SD card. That works fine as long as you don't write more than 16MB (minus the FAT) to it. :-)
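
For reference, dosfs leaves raw sector I/O to the host, so on the PC the glue functions can simply read and write the image file. A minimal sketch, assuming the dosfs 1.03 DFS_ReadSector()/DFS_WriteSector() signatures (the img handle and SECTOR_SIZE are names I made up):

    #include <stdio.h>
    #include <stdint.h>

    #define SECTOR_SIZE 512
    static FILE *img;   /* 16MB image of the card, opened elsewhere with fopen(..., "r+b") */

    /* dosfs expects the host to supply these two functions; return 0 on
       success, nonzero on error. The offset fits in a long for small images. */
    uint32_t DFS_ReadSector(uint8_t unit, uint8_t *buffer, uint32_t sector, uint32_t count)
    {
        (void)unit;
        if (fseek(img, (long)(sector * SECTOR_SIZE), SEEK_SET) != 0)
            return 1;
        return (fread(buffer, SECTOR_SIZE, count, img) == count) ? 0 : 1;
    }

    uint32_t DFS_WriteSector(uint8_t unit, uint8_t *buffer, uint32_t sector, uint32_t count)
    {
        (void)unit;
        if (fseek(img, (long)(sector * SECTOR_SIZE), SEEK_SET) != 0)
            return 1;
        return (fwrite(buffer, SECTOR_SIZE, count, img) == count) ? 0 : 1;
    }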

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Reply to
Stef

I also emailed various fixes but never got any sort of acknowledgement.

The problems that I found were:

[1] Writing to a blank drive caused memory corruption problems (DFS_GetNext didn't increment currententry).
[2] div() used instead of ldiv(), resulting in potential numerical overflow.
[3] DFS_SetFAT increments scratchcache but should be incrementing what it points to.
[4] Various issues with the FAT12 implementation.
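
To make [3] concrete, it is the classic pointer-increment slip. A contrived illustration of the shape of the bug, not the actual dosfs source:

    #include <stdint.h>

    /* scratchcache tracks which sector is currently in the scratch buffer;
       dosfs passes it around as a uint32_t*. */
    void advance(uint32_t *scratchcache)
    {
        /* Buggy: moves the local pointer, leaving the caller's value unchanged. */
        /* scratchcache++; */

        /* Intended: advance the value the pointer refers to. */
        (*scratchcache)++;
    }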

Andrew

Reply to
Andrew Jackson

[1] This is probably the same problem I found. I (temporarily) fixed it in DFS_OpenFile by removing the "-1" from currententry in the assignment to diroffset. Your fix to DFS_GetNext is probably better; I will check as soon as I have time for it.

[2] Need to check; there are both div() and ldiv() calls in the code. Should all div() calls be ldiv()?
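
The risk with div() is that it works on int, and on a 16-bit-int target (common for the MCUs dosfs is used on) 32-bit sector and cluster numbers no longer fit. A small standalone illustration with made-up values:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t sector = 70000UL;              /* > 32767, overflows a 16-bit int */

        /* div_t q = div((int)sector, 512); */  /* silently truncates where int is 16 bits */
        ldiv_t q = ldiv((long)sector, 512L);    /* long is at least 32 bits, so this is safe */

        printf("quot=%ld rem=%ld\n", q.quot, q.rem);
        return 0;
    }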

[3] Found that too, also in the FAT12 part.

[4] I found a few problems there, all missing ">> 8" operations when setting the FAT, besides the increment problem above.
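
For context on the missing ">> 8": FAT12 packs each 12-bit entry across a byte and a half, so two entries share three bytes. A simplified standalone store routine, my own sketch rather than the dosfs code:

    #include <stdint.h>

    /* Store a 12-bit FAT12 entry. The byte offset of entry n is n + n/2.
       The ">> 8" is what moves the top four bits of an even entry's value
       into the shared middle byte. */
    void fat12_set(uint8_t *fat, uint32_t cluster, uint16_t value)
    {
        uint32_t off = cluster + cluster / 2;

        if (cluster & 1) {      /* odd entry: upper 12 bits of the 3-byte pair */
            fat[off]     = (uint8_t)((fat[off] & 0x0F) | ((value & 0x0F) << 4));
            fat[off + 1] = (uint8_t)(value >> 4);
        } else {                /* even entry: lower 12 bits of the 3-byte pair */
            fat[off]     = (uint8_t)value;
            fat[off + 1] = (uint8_t)((fat[off + 1] & 0xF0) | ((value >> 8) & 0x0F));
        }
    }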

[5] Return value of DFS_GetFreeFAT() not checked in DFS_OpenFile()
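
A guard along these lines would cover [5]; this is only a sketch, where DFS_NOFREE and DFS_ERRMISC stand in for whatever sentinel and error code your dosfs version actually defines (check dosfs.h):

    /* Inside DFS_OpenFile(), when allocating the first cluster of a new file.
       DFS_NOFREE and DFS_ERRMISC below are placeholders for the real
       constants in your dosfs.h. */
    uint32_t cluster = DFS_GetFreeFAT(volinfo, scratch);
    if (cluster == DFS_NOFREE)
        return DFS_ERRMISC;   /* volume full: fail instead of chaining a bogus cluster */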

Would this be a good place to discuss fixes to DOSFS?

I think Lewin did a good job on it and was very generous in making it available to all. It's a shame that development has been on hold since 2006 (?) and that there is no response to e-mailed improvements. He probably has other priorities.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Reply to
Stef

Suddenly, "having time for it" changed to "must solve it now". :-(

We saw filesystem errors on some Windows versions. My copy of W7 never volunteered one, but I did see errors when I forced a check. It turned out my 'fix' now handled another case wrongly (non-empty cards with deleted files): it destroyed a file and left some lost clusters.

I have now added the increment in DFS_GetNext() like this:

    if (dirinfo->flags & DFS_DI_BLANKENT) {
        dirinfo->currententry++;   /// DOSFS 1.03 BUG, currententry was not incremented in this case
        return DFS_OK;
    }
    else
        return DFS_EOF;

But this fix alone meant no file was visible when creating one on an empty volume. Checking the card revealed a gap between the new directory entry and the volume label entry; since a directory entry whose first byte is 0x00 marks the end of the directory, Windows stopped reading at the gap. It turned out a fix was needed in DFS_GetFreeDirEnt() as well:

    //di->currententry = 1;   /// DOSFS 1.03 BUG -- since the code coming after this expects to subtract 1
    di->currententry = 0;     // tempclus is not zero but contains a FAT entry, so the next loop
                              // will call DFS_GetNext(), which will increment currententry.
                              // This is OK for the code coming after this, which expects to
                              // subtract 1. Starting with 1 would cause a 'hole' in the dir.

Do these fixes seem OK (or at least familiar ;-) ) to you? I'm testing now, and so far empty cards, cards with files, and cards with deleted files all seem OK. All testing has been in the root dir so far.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Reply to
Stef


Found this link with (part of) this solution as well:

formatting link

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Reply to
Stef

Yes, I had the first part of the fix but not the second. This may be because of the way in which I was using DOSFS, or because of my test environment.

Andrew

Reply to
Andrew Jackson
