But, you have *8* of these running. So, you're really dealing with
8ms, effectively. Are you going to insist the user *dedicate* the machine to your application? Never access the SD card while it is running (i.e., delete or preprocess *any* of the files there)? Never start a new desktop session... or serve up web pages from the same box? Insert a "new" USB peripheral? Etc.

Big, wired-down buffers will allow you to *approach* the card's maximum performance. You should tailor your accesses to the block size of the device so you can get the most out of the controller *in* the card. Perhaps even defer accesses until you *know* you can get a complete block in the "next access".

If the OS is performing the read/write, remember that it is copying into *its* local buffers before *you* see the actual data. E.g., if you request 37 bytes, chances are it will read some default amount (a "block" -- which will probably differ from the "block size" of the NAND FLASH in the card) and give you the first 37 bytes while holding onto the balance in its buffer. Your *next* request will first be satisfied from that buffer before the OS again turns to the physical device for any additional data required to satisfy your request (again buffering any "left overs").
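To make that buffering behavior concrete, here's a toy model of the OS's read path. BLOCK is an assumed value -- the real "default amount" is OS/driver specific -- and the class is an illustration, not anyone's actual driver code:

```python
import io

BLOCK = 4096  # assumed per-access "default amount"; the real value is OS/driver specific

class BufferedReader:
    """Toy model of the OS's read path: you ask for 37 bytes, the OS
    pulls a whole BLOCK off the device, hands you 37, and holds the
    balance in its buffer for your *next* request."""
    def __init__(self, dev):
        self.dev = dev      # the "physical device" (any file-like object)
        self.buf = b""      # the OS's local buffer
    def read(self, n):
        while len(self.buf) < n:
            chunk = self.dev.read(BLOCK)  # physical access, in BLOCK units
            if not chunk:
                break                     # device exhausted
            self.buf += chunk
        out, self.buf = self.buf[:n], self.buf[n:]
        return out
```

Note that the first 37-byte request triggers one full 4096-byte physical access; the next hundred-odd requests are satisfied entirely from the buffer with *no* device traffic at all.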
If it runs for days, you don't have a problem. The problem comes when it wants to run "much quicker" (i.e., higher sample rate)
Ever *expect* to "enhance" the product by allowing files to be sourced from "magnetic (or SS) disk"? I.e., if so, you should reflect those future changes in your initial design.
Well, a freeze on *one* channel could cause the others to be *intentionally* frozen in the same way. Depends on how you *want* it to perform (i.e., maintain lock sync between all channels REGARDLESS... *or* let each channel run however it can!)
Note that the possibility of different channels (and combinations of channels) "starving" can occur.
I'm not convinced that you can even make that guarantee. Your environment (OS) gives you none.
Think of it this way: you've got a BJT that can handle Icc of 20A. *YOU* only need to accommodate a 15A load.

But, someone *else* is pulling 0-20A *while* you are trying to control *your* load.

Granted, a protection circuit can keep the Q from cooking itself when you *two* guys aren't cooperating with each other. But, just because you were able to draw X amps *now* (before the clamp kicked in) doesn't mean you *will* be able to draw those X amps in the future! That "someone else" could elect to pull 19.8A at that unfortunate time...

[The "someone else" may not even have direct control over how much he is pulling; his *load* may be dynamic and "decide for itself" that it needs to do something different, *now*, that results in greater power consumption]

An RTOS makes guarantees to its "users": "This operation will happen in *this* manner. COUNT on it!" An MTOS (Windows, Linux, etc.) just says: "I'll try my best to make this happen as 'good' as possible" (for *some* definition of "good")
My "network speakers" are essentially doing the same thing that you are -- except over much larger physical distances. Pulling "samples" off the network and reproducing them (audio) in synchronization with each other -- despite the large distances involved. (imagine running your hardware on ten different PCs and expecting them all to remain in lock sync with each other in different offices!)
I can do this because my RTOS lets me *know*, a priori, what level of performance to *expect* from it. It *will* deliver data at the required rate -- unless there is a failure in the network fabric, "noise" on the line (corrupting packets such that the retries don't happen in the required amount of time), etc.
[i.e., your SD card is my "file server PLUS network transport"]

As I have local intelligence on the receiving end of the link, when any of those clients (speakers) sees that data is just not arriving "in time", it can shut down the audio cleanly (you definitely don't want the audio to sputter and pop as data trickles in sporadically while the system recovers).
[In my case, when the system recovers, you probably want the "playback" to resume. If "recorded content", then you can pick up from where you left off -- or, rewind a bit for some sense of continuity (remember it may not be *music* content so resuming in the middle of a spoken *word* is probably not as desirable as rewinding several seconds so the listener can recall what *was* being said leading up to the interruption. In *live* content, what's past is probably "past" (though even that can be negotiated)]
Gen can *be* the "file daemon" as well.
I.e., if you were building a single channel device, you could envision something like:
# gen file.dat 500KHz gobbledygook
Gen v1.0
Done!
to configure that one channel and start it.
Or, one command to configure and another to start. Or:
# gen file.dat 500KHz gobbledygook
Gen v1.0
Configured for 500KHz.
Loading file.dat; please wait...
(Jeopardy theme song plays)
File found.
Press ENTER to begin synthesis...
Whether your configuration stuff results in actual tweaks to the hardware *as* each command is executed (i.e., specifying the sample rate causes gen to *immediately* tweak a PLL/divider) *or* whether it causes those requirements to be *noted* somewhere (e.g., in a temp file) and *imposed* on the hardware the instant you type "go" is an implementation issue. The UX/UI doesn't really change.

[E.g., a user might specify a file name and defer specification of the sample rate until a later time. Perhaps you only preload the FIFO with 10K samples if the sample rate is 50KHz instead of 500K? So, that action can be deferred until it absolutely *must* happen.]

But gen can sit there AFTER it has filled the FIFO and wait for the "command" to "go".
[Note that gen can detach itself from the controlling terminal and run in the background *as* a daemon. So, the user types "gen" and gets a command prompt *immediately* -- or, after gen has preloaded the FIFO. You have to get used to thinking about more than "sequential commands" as the machine can "keep something running" even while it is allowing you to specify *other* actions.]

Welcome to *my* world! :>
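For the curious, the usual POSIX recipe for that detach is a double fork. A generic sketch (Unix-only), *not* gen's actual code:

```python
import os
import sys

def daemonize():
    """Detach from the controlling terminal (classic POSIX double fork)."""
    if os.fork() > 0:
        sys.exit(0)   # parent exits -- the shell hands the user his prompt back
    os.setsid()       # new session: no longer tied to the terminal
    if os.fork() > 0:
        sys.exit(0)   # session leader exits so we can never reacquire a tty
    os.chdir("/")     # don't hold any directory hostage
    # ... preload the FIFO, then block waiting for the "go" command ...
```

Whether you detach *before* or *after* preloading the FIFO is exactly the "prompt immediately vs. prompt after preload" choice mentioned above.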
Yup. But, you have to instrument ALL of the cases in which you expect the system to operate!
E.g., how does performance change if the SD card encounters an error and has to remap a bad block? Or, if the user elects to defragment his hard disk while running your board? Or, starts an OpenOffice session so he can type up his observations on the "experiment" that your board is running? Or, something starts hammering on his network interface while your "app" is trying to keep up with the FIFOs? Or...
I.e., deciding to pull a bit of Icc while *you* are expecting to have a certain GUARANTEED level of collector current available for your load...
My point is to "preread" data from the SD card to further decouple the SD card's performance -- and the OS which acts as your intermediary -- from the REQUIREMENTS of your hardware (FIFO size + sample rate).
I.e., imagine you could read that entire 128GB card into "RAM" in the PC *before* the user types "GO". Now, the speed of the SD card is not material to the REAL-TIME performance of your device. (It may ANNOY the user if he has to wait an hour for all that data to transfer, but that's a separate issue.)

By buffering data *in* the PC's memory, you enhance your operating margin wrt the SD card + OS performance. You've taken one piece of variability out of the equation (i.e., you've already got the data *off* the card... or, at least, have EFFECTIVELY enhanced the size of that 32KS FIFO by another 10K, 100K, 1MB, etc. -- whatever you can afford to set aside *in* the PC's memory space)
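A minimal sketch of that prereading: a reader thread banks a few MB of slack in PC memory, and the feed loop drains it toward the hardware. CHUNK and DEPTH are illustrative numbers, and write_fifo is a hypothetical stand-in for your real hardware interface:

```python
import threading
import queue
import io

CHUNK = 32 * 1024   # read granularity from the "card" (illustrative)
DEPTH = 64          # 64 * 32KB = 2MB of slack banked in PC RAM

def prefetcher(f, q):
    """Reader thread: stay as far ahead of consumption as the queue allows."""
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            q.put(None)   # EOF sentinel
            return
        q.put(chunk)      # blocks only once the full slack is already banked

def feed_fifo(q, write_fifo):
    """Consumer: drain the RAM buffer toward the (hypothetical) hardware FIFO."""
    while (chunk := q.get()) is not None:
        write_fifo(chunk)
```

The point is that a momentary SD/OS hiccup now has to outlast the *entire* banked slack -- not just the 32KS FIFO -- before your hardware ever notices.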
*You* don't have an option. The OS decides when/if swapping occurs (unless you literally eliminate the swap partition so there is no place *to* swap).

You want this to be VERY VISIBLE to the user! I.e., if he complains that the output waveform had "severe distortion... as if it had STALLED at points during playback", *he* wants to have SEEN a message on the console saying "buffer overrun on channel X" (whether he sees that while playback is occurring *or* after it has completed). I.e., you want his call to be "why did I get this message?" instead of "why did the output have so much distortion?". This saves you valuable steps in sorting out the problem.
"Is the big red LED blinking?" "Um, I didn't notice." "Can you rerun the experiment?" "Yeah, I just did. Now it SEEMS to be working..."
(Because you have no performance guarantees, you also have no NONperformance guarantees! I.e., it can fail now... and run correctly for the next 100 invocations. Then, suddenly, start choking again. Users can't reliably tell you what is happening *inside* the OS, applications, SD card's controller, etc. You don't want to give them an excuse to blame your hardware or *their* choice of OS!)

A friend has offered me an older Tek Logic Analyzer. Windows 98 based (!). You can bet the design either prevents the user from installing "foreign" (yet 100% valid) W98 applications, as they could interfere with the performance of the instrument. *Or*, the design takes into consideration the lack of guarantees from the OS and operates *independent* of it!