SPI communication protocol for micro-SD

Hi friends,

I am doing a data logger project in which I have to log readings from a 16-channel ADC with a time and date stamp. This requires a large amount of memory, up to 3 MB, so I decided to use a micro-SD card.

I am using a Silicon Labs C8051F340 microcontroller to interface with the micro-SD card over SPI, so I need the SPI communication protocol.

Thanks in advance, Kishore.

Reply to
kishor

Google will find you dozens of sample implementations in C and a variety of assembly languages to access SD/MMC cards in SPI mode.

Reply to
zwsdotcom

LOL! One of my customers is looking for something like this:

- 16 channels at 16 bits
- Time stamps and digital compass data
- Continuous collection without interruption at 120 samples/second
- Logging duration from 6 months to 1 year
- Average power consumption under 15 mA
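As a sanity check on that spec, the raw ADC payload alone adds up fast over a year. A quick back-of-envelope in C (these figures ignore timestamp and compass overhead, so the real numbers would be somewhat higher):

```c
#include <stdint.h>

/* Payload rate for 16 channels x 16 bits x 120 samples/s.
 * Timestamps and compass data would add more on top. */
enum { CHANNELS = 16, SAMPLE_BYTES = 2, RATE_HZ = 120 };

static const uint32_t bytes_per_sec = CHANNELS * SAMPLE_BYTES * RATE_HZ; /* 3840 */
static const uint64_t bytes_per_day =
    (uint64_t)CHANNELS * SAMPLE_BYTES * RATE_HZ * 86400u;  /* ~332 MB/day */
/* bytes_per_day * 365 is roughly 121 GB/year, which is what makes the
 * "6 months to 1 year" duration the hard part of this spec. */
```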

One thing you soon discover is that the time to write a block of 512 bytes to the SD can vary quite a bit---especially when the card has to erase a new block of internal flash. If you're collecting at high rates, you will have to implement a fairly large buffer to account for those delays.
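A minimal sketch of the kind of buffer that rides out those write-latency spikes: a single-producer/single-consumer ring buffer with power-of-two size (plain C; the names and the 8 KB size are placeholders I chose, not from any particular implementation):

```c
#include <stdint.h>

#define RB_SIZE 8192u   /* must be a power of two so index wraparound is safe */

/* Single producer (ADC interrupt) / single consumer (SD writer).
 * head and tail are free-running; their difference is the fill level,
 * so no separate full/empty flag is needed. */
typedef struct {
    uint8_t  buf[RB_SIZE];
    volatile uint16_t head;   /* advanced only by the producer */
    volatile uint16_t tail;   /* advanced only by the consumer */
} ringbuf_t;

static uint16_t rb_count(const ringbuf_t *rb) {
    return (uint16_t)(rb->head - rb->tail);
}

static int rb_put(ringbuf_t *rb, uint8_t byte) {
    if (rb_count(rb) >= RB_SIZE) return 0;   /* full: caller loses the sample */
    rb->buf[rb->head & (RB_SIZE - 1u)] = byte;
    rb->head++;
    return 1;
}

static int rb_get(ringbuf_t *rb, uint8_t *out) {
    if (rb_count(rb) == 0) return 0;         /* empty */
    *out = rb->buf[rb->tail & (RB_SIZE - 1u)];
    rb->tail++;
    return 1;
}
```

Note that on an 8-bit part like the C8051F340 the 16-bit index updates are not atomic, so in practice the consumer side would briefly mask the ADC interrupt around them.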

Mark Borgerson

Reply to
Mark Borgerson

Depending on how the filesystem works, the worst case (for a non-application-optimized FS) is when ALL of the following need to happen in order to write the next byte:

1) Need to find the next free cluster, and it is not contiguous with the current cluster
2) Need to update the FAT chain for the file
3) Need to erase the block containing the new cluster
Reply to
zwsdotcom

I'm writing my file system to avoid #1 and #2. I think I can handle #3 with about 32K of buffer space.

In earlier tests, #1 turned out to be a real killer---especially near the end of an SD card, where the FAT may be a megabyte or more. The problem doesn't have reasonable bounds if you're looking for that one free cluster in the middle of a 2 GB disk filled with other files.
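For sizing that buffer: at the 3840 B/s payload rate implied by the spec upthread (16 channels x 2 bytes x 120 Hz), buffer bytes translate directly into tolerated card latency. A small helper (the name is mine):

```c
#include <stdint.h>

/* Milliseconds of card stall a RAM buffer can absorb at a given
 * sustained input rate. */
static uint32_t buffer_headroom_ms(uint32_t buf_bytes, uint32_t bytes_per_sec) {
    return (uint32_t)(((uint64_t)buf_bytes * 1000u) / bytes_per_sec);
}
```

32 KB at 3840 B/s comes out to about 8.5 seconds, which is why 32K of buffer space is plausible cover for even a slow erase.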

Mark Borgerson

Reply to
Mark Borgerson

Yes, I got some sample code and documentation too. I am very new to this micro-SD. I have some doubts:

  1. SPI is optional for micro-SD cards. How can I know which micro-SD cards support SPI?
  2. Is it compulsory to implement FAT for storing data in memory?

Thanks, Kishore.

Reply to
kishor

You can't, but in practice I have not yet seen any cards that don't support it.

No, but if you don't, you will have additional effort in extracting the data from the card and transferring it to a PC.
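To make the protocol side concrete: in SPI mode every SD command is a fixed 6-byte frame, and although CRC checking is off by default in SPI mode, CMD0 and CMD8 still need valid CRC7 values. A sketch in plain C, not C8051-specific (the function names are mine, not from any particular library):

```c
#include <stdint.h>

/* CRC7 (polynomial x^7 + x^3 + 1) over the 5 command bytes. The loop
 * keeps the 7-bit CRC left-aligned in a byte; OR-ing in 1 sets the
 * end bit, giving the final byte exactly as it goes on the wire. */
static uint8_t sd_crc7(const uint8_t *buf, int len) {
    uint8_t crc = 0;
    for (int i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            if (crc & 0x80) crc ^= 0x89;
            crc <<= 1;
        }
    }
    return crc | 0x01;
}

/* Build the 6-byte SPI-mode command frame: start bits 0b01 plus the
 * command index, 32-bit argument MSB first, then CRC7 + end bit. */
static void sd_build_cmd(uint8_t frame[6], uint8_t cmd, uint32_t arg) {
    frame[0] = (uint8_t)(0x40 | (cmd & 0x3F));
    frame[1] = (uint8_t)(arg >> 24);
    frame[2] = (uint8_t)(arg >> 16);
    frame[3] = (uint8_t)(arg >> 8);
    frame[4] = (uint8_t)(arg);
    frame[5] = sd_crc7(frame, 5);
}
```

The classic init sequence is roughly: 74+ clock cycles with CS high, then CMD0 (expect R1 = 0x01), CMD8, and ACMD41 in a loop until the card leaves idle state.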

Reply to
zwsdotcom

You may be able to implement a sort of dummy FAT, where you set up a directory structure containing one huge contiguous file, and then simply write your data into that without paying much more attention to FAT semantics.

You would need PC software to parse that file, and of course if anyone writes to the card from a PC they will ruin your structure, forcing your device to "reformat" it (uncompress an empty copy of your customized dummy FAT).
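With one huge preallocated contiguous file, log-time addressing collapses to pure arithmetic, with no FAT chain walking at all. A sketch under that assumption (the struct and field names are made up; the real values come from the boot sector):

```c
#include <stdint.h>

/* Hypothetical layout parameters, normally parsed from the boot sector. */
typedef struct {
    uint32_t data_start_lba;      /* first sector of the data region */
    uint8_t  sectors_per_cluster;
    uint32_t file_first_cluster;  /* first cluster of the preallocated file */
} contig_file_t;

/* Byte offset within the contiguous file -> absolute sector number.
 * FAT data-region cluster numbering starts at 2, hence the -2. */
static uint32_t contig_lba(const contig_file_t *f, uint32_t byte_offset) {
    uint32_t cluster_base = f->data_start_lba
        + (f->file_first_cluster - 2u) * f->sectors_per_cluster;
    return cluster_base + byte_offset / 512u;
}
```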

Reply to
cs_posting

This demonstrates the fundamental weakness of the FAT system. While it was nearly perfect for a 1.44 MB floppy disk (FAT12), when it comes to gigabytes a FAT32 is just too large. A filesystem which uses one bit per cluster for allocation purposes is 32 times more efficient (at least on some processors, those which have a "count leading 0s" opcode). For MS compatibility purposes the remote system can be shown a FAT image, built from the more efficient internal filesystem image (this is how I have been doing DPS FAT for > 10 years).
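For illustration, a bit-per-cluster scan of the sort described, using GCC's __builtin_clz as a stand-in for the "count leading 0s" opcode (this is my sketch, not DPS code):

```c
#include <stdint.h>

/* One bit per cluster: 1 = free, 0 = used, MSB-first within each word.
 * A 32-bit word covers 32 clusters, so most of the scan is whole-word
 * compares; the CLZ step finds the free bit inside the first non-zero
 * word in a single instruction on CPUs that have one. */
static int32_t find_free_cluster(const uint32_t *bitmap, uint32_t nwords) {
    for (uint32_t w = 0; w < nwords; w++) {
        if (bitmap[w] != 0u) {
            return (int32_t)(w * 32u + (uint32_t)__builtin_clz(bitmap[w]));
        }
    }
    return -1;  /* no free cluster */
}
```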

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/

Reply to
didi


Not in terms of storage use it isn't. The information about cluster linkage (which clusters, in which order, form a file's data) has to be kept *somewhere*. FAT systems keep that info in the FAT. Keeping the free-cluster information as an in-band signal inside data you need anyway means that for the majority of medium sizes you need _less_ than one bit per cluster to store it. I.e. you get it essentially for free, unless the medium has almost exactly 2^n clusters.

FAT has the potential to be spectacularly efficient in terms of storage. It's only the somewhat artificial limitation to just a few allowed FAT entry sizes (12, 16 or 32 bits) that breaks this.

Reply to
Hans-Bernhard Bröker

I did not say it was. In terms of storage there is no difference worth the comparison.

It is 32 times more efficient in the context I posted it in, i.e. during space allocation - which is where most, practically all, of a filesystem's CPU overhead goes.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/


Reply to
didi

One solution to this is to amalgamate all of the free clusters into a single file at start-up. Allocation and release then becomes a matter of push/pop to a linked list, rather than having to scan for a free cluster.
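A sketch of that scheme, with a next[] array standing in for the FAT chain (the names are mine): all free clusters are linked into one chain at start-up, after which allocation and release are O(1) pops and pushes.

```c
#include <stdint.h>

#define NCLUSTERS    16u
#define END_OF_CHAIN 0xFFFFFFFFu

/* next_cluster[] plays the role of the FAT: next_cluster[c] is the
 * cluster following c in its chain. */
static uint32_t next_cluster[NCLUSTERS];
static uint32_t free_head = END_OF_CHAIN;

/* One-time scan: chain every free cluster into a single list. */
static void freelist_init(void) {
    for (uint32_t c = 0; c + 1u < NCLUSTERS; c++)
        next_cluster[c] = c + 1u;
    next_cluster[NCLUSTERS - 1u] = END_OF_CHAIN;
    free_head = 0u;
}

static uint32_t cluster_alloc(void) {     /* pop from the free list */
    uint32_t c = free_head;
    if (c != END_OF_CHAIN)
        free_head = next_cluster[c];
    return c;
}

static void cluster_free(uint32_t c) {    /* push back onto the free list */
    next_cluster[c] = free_head;
    free_head = c;
}
```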

Reply to
Nobody

The problem is that FAT16 has very poor efficiency, needing 32KiB clusters for a 2GiB disk (it would be worse if you could use FAT16 on larger drives), while FAT32 requires either a lot of RAM (if you store the entire FAT in RAM) or very slow access.

Essentially, FAT is designed for sequential access, small disks, and few files. For modern systems, it's the worst filesystem still in use. Modern filesystems are designed for large disks, many files, and fast access without having to store an entire partition's metadata in RAM.

Reply to
Nobody
