I'm using OCR (optical character recognition) processing on a 3.2 GHz CPU. It is not fast enough (10 minutes for each single newspaper page). Do you think an FPGA solution could speed up the processing? Does the software have to be written specifically for FPGAs, or do solutions exist regardless of the software?
I don't understand this question. You want a hardware implementation, not a reimplementation of software. Most likely the algorithm will be completely different.
I recommend looking at dynamic programming algorithms and systolic array processors. If those work, FPGAs are a good choice.
"glen herrmannsfeldt" a écrit dans le message de news: cjes0g$842$ snipped-for-privacy@gnus01.u.washington.edu...
Why not a reimplementation of the software? Is it more difficult to develop an FPGA version of the software (assuming the publisher can do it) than to program the FPGA to fit the existing software?
Why? Is that comment true in general, or just for OCR problems?
I'd expect there are some sequential problems that would be much faster in an FPGA.
Suppose the clock on the CPU is N times as fast as the clock on the FPGA. Then any problem that takes more than N cycles per result on the CPU will be faster on the FPGA if the FPGA needs only one cycle per result. How about something like computing a CRC?
Better still would be doing a CRC-type calculation while searching for a pattern; I think the Fire codes use something like that for error correction.
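To put illustrative numbers on that break-even argument (the 200 MHz FPGA clock below is only an assumed figure, not a measured one):

    N = f_CPU / f_FPGA = 3.2 GHz / 200 MHz = 16

    speedup = (CPU cycles per result / FPGA cycles per result) * (1 / N)

So if the FPGA produces one result per clock, any operation costing more than about 16 CPU cycles per result already comes out ahead, before counting the FPGA's ability to run many such units side by side.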
In the announcement for the ERSA conference (Engineering Reconfigurable Systems and Architectures) they say:
The advances in reconfigurable computing architecture, in algorithm implementation methods, and in automatic mapping methods of algorithms into hardware and processor spaces form together a new paradigm of computing and programming that has often been called `Computing in Space and Time'.
On a CPU things happen more or less sequentially. Algorithms are coded fairly densely in terms of transistors used per operation: an add instruction might occupy 32 bits of instruction memory. In an FPGA, an add is implemented as a 32-bit adder. Efficient algorithms on an FPGA should do many things each clock cycle, and they tend to be very different from the algorithms that are efficient on CPUs.
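A rough C sketch of the same computation written in the two styles (only an illustration; on an FPGA the second form maps to a tree of parallel adders rather than to instructions):

    /* Sequential form: one add per step, the natural fit for a CPU. */
    unsigned sum_sequential(const unsigned char p[8])
    {
        unsigned s = 0;
        for (int i = 0; i < 8; i++)
            s += p[i];
        return s;
    }

    /* "Spatial" form, written as a balanced adder tree.  In hardware each
       level is a row of adders working in the same clock cycle, so the sum
       takes log2(8) = 3 adder delays instead of 8 sequential adds. */
    unsigned sum_tree(const unsigned char p[8])
    {
        unsigned a = p[0] + p[1], b = p[2] + p[3];
        unsigned c = p[4] + p[5], d = p[6] + p[7];
        return (a + b) + (c + d);
    }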
As for CRC, the table lookup works very well on CPUs, but there are some very different algorithms that work well for FPGAs, especially for larger word sizes.
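For reference, the byte-at-a-time table lookup referred to here looks roughly like this in C (standard reflected CRC-32; an FPGA version would instead evaluate the XOR network for 8, 32 or more input bits in a single clock):

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t crc_table[256];

    /* Build the 256-entry table for the reflected polynomial 0xEDB88320. */
    static void crc32_init(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
            crc_table[i] = c;
        }
    }

    /* One table lookup and a couple of logic ops per input byte. */
    static uint32_t crc32(const uint8_t *buf, size_t len)
    {
        uint32_t c = 0xFFFFFFFFu;
        while (len--)
            c = crc_table[(c ^ *buf++) & 0xFFu] ^ (c >> 8);
        return c ^ 0xFFFFFFFFu;
    }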
I believe this could be sped up by using an FPGA coprocessor. Your OCR software would definitely have to be recoded to run on the FPGA, but the existing code would provide an excellent starting point.
In order to achieve a significant increase (an order of magnitude or better) you have to do one of two things (or both):
Parallelize the existing code.
Pipeline the existing code.
I would suggest profiling your code to find out which part(s) are taking 90% of the execution time. It's probably a small fraction of the code, and that small part can then be moved to the FPGA for acceleration.
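A minimal profiling run, assuming a Unix-like system with gcc and gprof (the file names here are only placeholders):

    cc -O2 -pg -o ocr ocr_main.c        # hypothetical source file; -pg adds profiling hooks
    ./ocr scanned_page.tif              # run on a representative page; writes gmon.out
    gprof ./ocr gmon.out > profile.txt  # flat profile plus call graph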
By way of example, I am examining some atmospheric modelling code (written in Fortran). It runs to tens of thousands of lines, but the part that takes the bulk of the time is less than 100 lines. Accelerating that part could realize a big performance boost.
Besides VHDL and Verilog, you have the option of writing your FPGA code in C or Java. Feel free to contact me at work for more info (t h o m a s DOT s e i m AT p n l DOT gov).