Optimizations for embedded systems

Hi,

I'm reading a research paper that describes a loop optimization technique that removes nested conditional blocks. The authors claim that their transformation yields a speed-up of 30% on average when applied to some embedded-systems software (measured only on the code segments containing the nested loops). Since their approach duplicates the loop iteration space, they also report the resulting code-size increase, which is almost 80% on average.
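
If I understand the description correctly, the transformation resolves a conditional that depends only on the loop indices by splitting the iteration space and duplicating the loop code. The following is just my own simplified sketch of that idea (the array, bounds, and split point are made up, not taken from the paper):

#include <stdio.h>

#define N 8
#define M 8
#define BORDER 2        /* hypothetical split point */

static int a[N][M];

/* Before: the condition on the loop indices is re-evaluated in
 * every iteration of the inner loop. */
static void filter_before(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++) {
            if (i < BORDER)
                a[i][j] = 0;        /* "border" case */
            else
                a[i][j] = i * j;    /* common case   */
        }
}

/* After: the iteration space is split so each loop runs without
 * the test; the bodies get faster, but the loop code roughly
 * doubles. */
static void filter_after(void)
{
    for (int i = 0; i < BORDER; i++)
        for (int j = 0; j < M; j++)
            a[i][j] = 0;

    for (int i = BORDER; i < N; i++)
        for (int j = 0; j < M; j++)
            a[i][j] = i * j;
}

int main(void)
{
    filter_before();
    filter_after();
    printf("a[%d][%d] = %d\n", N - 1, M - 1, a[N - 1][M - 1]);
    return 0;
}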

What do you think, is this approach promising? An execution speed-up of 30% for the nested loops is a great improvement, but on the other hand the loop code size almost doubles. In my opinion, such a large code-size increase is a serious drawback for embedded systems with limited memory, and I wonder whether embedded-systems vendors would accept it or would rather forgo this optimization entirely.

Thank you for your opinion.

Best regards, Christian



It's going to depend on your application (like any other optimization technique). If a lot of execution time is spent in a small region of code, it's worth speeding that region up even at a cost in space. Doubling 256 bytes to 512 bytes on a 4 MB machine most likely won't matter much. OTOH, if you're dealing with 16 KB of loop code on a 64 KB machine, that is not the way to go.

Memory is usually a hard constraint: either the code fits or it doesn't. In some cases it can be soft, where larger code eats into data memory and the application can then handle less of whatever it does. Time is usually soft (things just run slower), though not always (all could be lost if a response doesn't arrive before some other event). But only you know your application's needs.

--
|---------------------------------------/----------------------------------|
| Phil Howard KA9WGN (ka9wgn.ham.org)  /  Do not send to the address below |
