Hi.
For a while now I've been thinking about how to efficiently solve a specific error correction problem.
The problem:
An array of N data words is to be stored in M cells of a memory. Up to F cells are faulty, and the words stored in them might not be readable. Nevertheless, the solution must be able to recover all N data words.
Most parameters are not fixed yet; only F is given: 40. To keep storage efficiency high, I would like N to be at least ~70% of M. The word size is flexible (anything from single bits to blocks of data).
My internet research has turned up just one candidate solution: Reed-Solomon coding. However, I have found no examples of implementations with such a high fault correction capability, and benchmarks of other implementations (with less redundancy) suggest that execution time increases dramatically with the number of faults corrected.
I expect faulty cells as part of normal operation, i.e. the solution's execution time should not increase much as the number of faulty cells approaches the upper limit F.
The solution will be implemented on a 100 MHz 32-bit processor with several MB of spare memory (e.g. available for lookup tables). There is also a (small) FPGA that could be programmed to serve as an accelerator co-processor.
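To make the requirement concrete, here is a scaled-down sketch of the kind of scheme I have in mind, assuming the positions of the faulty cells are known (erasure decoding): a Vandermonde-style Reed-Solomon code over GF(2^8), with recovery by Gaussian elimination from any N readable cells. The sizes, field, and function names are just illustrative, not my actual parameters.

```python
# Scaled-down sketch: Reed-Solomon-style erasure coding over GF(2^8).
# N data words are encoded into M cells; any N readable cells suffice
# to recover the data, i.e. up to M - N faulty cells are tolerated.
# Toy sizes below -- the real problem would use larger M and F = 40.
# Assumes the positions of faulty cells are known (erasure decoding).

EXP = [0] * 512          # anti-log table, doubled to skip a modulo in gf_mul
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= 0x11D       # primitive polynomial x^8 + x^4 + x^3 + x^2 + 1
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[255 - LOG[a]]

def gf_pow(a, n):
    if n == 0:
        return 1
    return 0 if a == 0 else EXP[(LOG[a] * n) % 255]

def encode(data, M):
    """Store the N data words as evaluations of their polynomial at 0..M-1."""
    def eval_at(x):
        y = 0
        for c in reversed(data):   # Horner's rule
            y = gf_mul(y, x) ^ c
        return y
    return [eval_at(x) for x in range(M)]

def decode(survivors, N):
    """Recover the N data words from any N (cell_index, value) pairs.

    Solves the Vandermonde system by Gauss-Jordan elimination over GF(2^8);
    the cost depends only on N, not on how many cells actually failed.
    """
    pts = survivors[:N]
    A = [[gf_pow(x, j) for j in range(N)] + [y] for x, y in pts]
    for col in range(N):
        piv = next(r for r in range(col, N) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        inv = gf_inv(A[col][col])
        A[col] = [gf_mul(inv, t) for t in A[col]]
        for r in range(N):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [A[r][k] ^ gf_mul(f, A[col][k]) for k in range(N + 1)]
    return [row[N] for row in A]

# Example: N = 4 data words in M = 6 cells, cells 1 and 3 faulty.
data = [0x12, 0x34, 0x56, 0x78]
cells = encode(data, 6)
assert decode([(i, cells[i]) for i in (0, 2, 4, 5)], 4) == data
```

Note that this toy version is non-systematic (the data words are not stored verbatim in any cell); production erasure codes usually use a systematic layout so that reads are free when no cell is faulty.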
My questions:
- Do you know of any Reed-Solomon implementations with similar correction capabilities that I could have a look at?
- Do you know of other algorithms, besides Reed-Solomon, that might be able to solve my problem?
Thanks in advance!
Regards, Marc