I'm reading a research paper that describes a loop optimization technique that removes nested conditional blocks. The authors claim that their transformation yields an average speed-up of 30% when applied to some embedded-systems software (measured only on the code segments containing the nested loops). Since their approach duplicates the loop iteration space, they also report the resulting code-size increase, which is almost 80% on average.
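If I understand the paper correctly, the transformation sounds similar to classic loop unswitching: a loop-invariant condition is hoisted out of the loop and the loop body is duplicated once per branch, which removes the per-iteration test but roughly doubles the loop code. A minimal sketch of that idea in C (my own illustration, not the authors' code; function and variable names are made up):

```c
#include <stddef.h>

/* Original form: the branch on `scale` is evaluated on every iteration,
 * even though its outcome never changes inside the loop. */
void blend_orig(float *dst, const float *src, size_t n, int scale) {
    for (size_t i = 0; i < n; i++) {
        if (scale)                    /* loop-invariant condition */
            dst[i] = src[i] * 2.0f;
        else
            dst[i] = src[i];
    }
}

/* Unswitched form: the condition is hoisted and the loop body is
 * duplicated for each branch. The per-iteration branch disappears,
 * at the cost of roughly 2x the loop code size. */
void blend_unswitched(float *dst, const float *src, size_t n, int scale) {
    if (scale) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * 2.0f;
    } else {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
    }
}
```

Both versions compute the same result; the second just trades code size for branch-free inner loops.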
What do you think: is this approach promising? An execution speed-up of 30% for the nested loops is a great improvement, but on the other hand the loop code size almost doubles. In my opinion, such a large code-size increase is a critical concern for embedded systems with limited memory, and I wonder whether embedded-systems vendors would accept it or would rather forgo this optimization entirely.
Thank you for your opinion.
Best regards, Christian