Tweaking a code snippet

Hi,

Observe:

    x RELOP y     ==>  (x - y) RELOP 0
    -1 * (x - y)  ==>  (y - x)

i.e., (paraphrasing):

    -1 * [ (x - y) RELOP 0 ]  ==>  y RELOP x

(RELOP is ">", "<", "==", etc.)
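For concreteness, a throwaway check of the identity in C (values assumed well inside int range, so x - y can't overflow):

    #include <assert.h>

    void check(int x, int y)
    {
        assert((x > y) == ((x - y) > 0));  /* x RELOP y  <=>  (x - y) RELOP 0 */
        assert((x > y) == (y < x));        /* swapping operands reverses RELOP */
    }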

One of my server templates relies on this sort of thing to drive itself from tables of "constraints". E.g.,

    sense * (trial - reference) RELOP 0

where trial and reference are **expressions** indirectly driven from the table and sense is trivially related to a table entry.
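In C, a table entry might look roughly like this (a sketch; constraint_t, expr_fn, and satisfied are invented names, not the actual code):

    typedef double (*expr_fn)(void);   /* evaluates "trial" or "reference" */

    typedef struct {
        expr_fn trial;
        expr_fn reference;
        double  sense;                 /* +1.0 or -1.0 in the resource-rich port */
    } constraint_t;

    int satisfied(const constraint_t *c)
    {
        return c->sense * (c->trial() - c->reference()) > 0;  /* RELOP = ">" here */
    }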

In my first "instantiation" (port?), resources are plentiful. I.e., I can use doubles everywhere as space and time are not an issue. So, I can store "sense" as "+1.0" or "-1.0" *in* the tables, etc.

But, I will have to port this to a variety of other processors -- some being severely resource constrained (space *and* time).

Some of these "optimizations" are obvious: e.g., store sense as a signed char and cast it to a double when actually called upon for use (in fact, it is probably *more* logical to treat it as a bivalued *flag* than to assign special values to it -- like +1 and -1).
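E.g., something like this sketch (field and type names are made up):

    typedef struct {
        signed char sense;   /* +1 / -1 -- or just 0 / 1 as a flag */
        /* ... rest of the table entry ... */
    } entry_t;

    double apply_sense(const entry_t *e, double diff)
    {
        return (double)e->sense * diff;        /* widen only at the point of use */
        /* flag flavour: return e->sense ? diff : -diff; */
    }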

Other optimizations are a bit more involved -- e.g., using fixed point math in lieu of the doubles, etc.
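A minimal sketch of the fixed-point variant, assuming a Q16.16 representation (the fix16_t name and scaling are illustrative). Note that the comparison against zero needs no rescaling at all:

    #include <stdint.h>

    typedef int32_t fix16_t;                  /* Q16.16 fixed point */

    int constraint_holds(int sense, fix16_t trial, fix16_t reference)
    {
        fix16_t diff = trial - reference;     /* beware: can overflow int32_t */
        return sense ? (diff > 0) : (diff < 0);
    }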

Still other optimizations are at the whim of the compiler (common subexpression elimination, etc.).

As a rule, I *don't* like trying to outthink the compiler. It leads to more obscure code (which can ultimately lead to BROKEN code as your "organic optimizer" screws up!).

So, I am looking for a nice balance that goes "far enough" to help even poor compilers without unduly obfuscating what the code *wants* to do.

if (sense * (x - y) RELOP 0) { ... }

if ( (sense ? +1 : -1) * (x - y) RELOP 0 ) { ... }

condition = sense ? (x RELOP y) : (y RELOP x);
if (condition) { ... }

if (sense) {
    if (x RELOP y) { ... }
} else {
    if (y RELOP x) { ... }
}

etc.

None are particularly appealing :< (Note that commentary can clarify the intent of the code fragments; I am more concerned with "maintainers" introducing typographical errors that are harder to catch. E.g., mentioning x and/or y more than once invites a typo in one of those instances, introducing an asymmetrical error that requires additional testing to identify.)

I would *really* like to avoid littering the code with #ifdef's designed to tweak it for each particular port :-/

Thx,

--don

Reply to
D Yuniskis

The best thing you can do to have very portable and fast code is concentrate on the application code and not the implementation.

Keep the statements short and clear. This not only makes the code easy to maintain but will likely generate the best code from a compiler. When our customers send us benchmark code the easiest code to beat is code written to take advantage of obscure optimizations without regard to the overall effect for the application.

Regards,

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

Correct. That's why I spent the bulk of my efforts designing the server to be table driven and choosing consistent data types (since a compiler would never have been able to make those sorts of transformations).

But these criteria aren't always mutually compatible. E.g., people make mistakes -- compilers can't compensate for those.

And, changing the writing style to discourage those sorts of typos can make the code more difficult to read. E.g., "x - y" becoming "difference(x, y)", or introducing other assignments ("x = <expr>; y = <expr>") just to minimize the possibility of two instances of an <expr> ending up "different".

Reply to
D Yuniskis

The easy fix is just to leave everything in the data "type" of the values in question and abstract the "operators" that work on those (e.g., don't use infix notation *anywhere*).

I.e., write the code AS IF you were defining a new data type in C++ and rely on "operator()" throughout. (though, in this case, you can use an "operatorCOMPARE()" that returns the sign of the difference -- ignoring the magnitude).
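In plain C, that idea might look something like this (a sketch; the names are invented). Each type supplies a compare that returns only the *sign* of the difference, and the server never uses infix operators on the values:

    typedef struct value value_t;             /* opaque to the server */
    typedef int (*compare_fn)(const value_t *, const value_t *);
                                              /* returns <0, 0, or >0 */

    int constraint_check(compare_fn cmp, const value_t *trial,
                         const value_t *reference, int sense)
    {
        int c = cmp(trial, reference);
        return sense ? (c > 0) : (c < 0);     /* RELOP = ">" reversed by sense */
    }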

I'll finish recoding it and see how much that cleans it up...

Thx,

--don

Reply to
D Yuniskis

In article , D Yuniskis wrote:
}Observe:
}  x RELOP y     ==>  (x - y) RELOP 0
}  -1 * (x - y)  ==>  (y - x)
}i.e., (paraphrasing):
}  -1 * [ (x - y) RELOP 0 ]  ==>  y RELOP x
}
}(RELOP is ">", "<", "==", etc.)

I'm not too clear what exactly this is saying, but if I understand it correctly, it's not true.

    uint16_t x = 4, y = 5;    /* 16-bit int assumed, so no promotion to a wider type */
    a = x > y;                /* 4 > 5 == 0 */
    b = x - y > 0;            /* 4 - 5 wraps to 65535; 65535 > 0 == 1 */
    c = -1 * (x - y > 0);     /* -1 * 1 == -1, yet y > x == 1 */

Essentially, the problem is that "x - y" might overflow the representation. For unsigned values, this is well-defined but might not be what you want; signed or floating-point values are even worse.

Also, '==' and '!=' behave differently because they're symmetric: a == b is the same as b == a, while a < b is the same as b > a.

}E.g.,
}  sense * (trial - reference) RELOP 0
}where trial and reference are **expressions** indirectly
}driven from the table and sense is trivially related
}to a table entry.

How about:

    (trial RELOP reference) != sense

where sense is 0 or 1.

Alternatively, to help a simple compiler you might have:

    _Bool b = trial RELOP reference;   /* or char or bool */
    if (sense) b = !b;                 /* or: b ^= sense */

If the expression is selected at runtime, you could have a table of functions:

int cmp_gt(TYPE trial, TYPE ref) { return trial > ref; }

int (*cmp_table[])(TYPE trial, TYPE ref) = { cmp_gt, ... };

and call cmp_table[x](trial, ref) for the appropriate x.
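Filled out a little, such a table might look like this (TYPE and the particular set of entries are placeholders for whatever the application uses):

    typedef double TYPE;

    static int cmp_gt(TYPE trial, TYPE ref) { return trial >  ref; }
    static int cmp_lt(TYPE trial, TYPE ref) { return trial <  ref; }
    static int cmp_ge(TYPE trial, TYPE ref) { return trial >= ref; }
    static int cmp_le(TYPE trial, TYPE ref) { return trial <= ref; }

    static int (*cmp_table[])(TYPE, TYPE) = { cmp_gt, cmp_lt, cmp_ge, cmp_le };

    /* usage: ok = cmp_table[x](trial, ref); -- reversing "sense" is then
     * just a different index into the table, with no operand swapping */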

If they're selected at compile time, maybe a script which generates the code from your table would be best.

Reply to
Charles Bryant

Sorry, you missed my "paraphrasing" comment on that last step.

IGNORING BOUNDARY CONDITIONS (e.g., INT_MIN vs INT_MAX, comparing two floats for equality, etc.), do you agree with the first two "observations"?

    4 >  5  ==>  (4-5) >  0
    4 <  5  ==>  (4-5) <  0
    4 == 5  ==>  (4-5) == 0
    4 != 5  ==>  (4-5) != 0
    4 >= 5  ==>  (4-5) >= 0
    4 <= 5  ==>  (4-5) <= 0

    -1 * (4-5)  ==>  (5-4), i.e., -1 * (x - y)  ==>  (y - x).

I.e., this is grade school "arithmetic"

That would be a buggy implementation on my part, eh? :>

(i.e., not going to happen)

Correct. But, they still follow the rules above. I.e., if RELOP happens to be "==" at some point in the code, then all my "sense" variable did was swap the left and right sides of the operator. So, it deliberately converted "a == b" into "b == a". Note that I am not trying to "complement" the operator but, rather, "reverse it".

So, I want to convert ">=" into "<=" (operands swapped), not into "<".

Reply to
D Yuniskis

If you are trying to find a good way to write this sort of thing that produces optimal code on a range of compilers and targets, then you are out of luck. For smaller processors, code will be smaller and faster if you write "if (condition) ..." and branch on the condition; multiplying by +1 or -1 will be expensive. Larger processors have higher costs for branches (especially when mispredicted), so a formulation that can use arithmetic instead of conditionals will be smaller and faster.

/If/ you can be sure that users will use a decent compiler, and enable appropriate optimisations, then you can do as Walter suggested and write clear and simple code, and let the compiler sort it out. My suggestion would be to formulate the code in an "if (condition) ..." style - compilers for smaller targets will handle that well, and good compilers for bigger targets will be able to re-arrange the code into arithmetic, branchless form. To give the compiler the best chance, make sure you always give it as much information as you can - for example, keep your variables at the tightest possible scope, declare functions as "static" if possible, and use compiler enhancements such as __attribute__((const)) so that the compiler knows what optimisations are valid.
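For instance (gcc syntax; the function itself is a made-up example). "static" lets the compiler see every caller, and __attribute__((const)) promises the result depends only on the arguments, so repeated calls can be folded together:

    static int scale(int x) __attribute__((const));  /* prototype carries the attribute */
    static int scale(int x) { return x * 3 + 1; }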

Reply to
David Brown

No, not "optimal" but, rather, "non-terrible" (?) -- see above.

If the data type is simple enough (e.g., "int"), the cost of that multiply can still be low.

As I mentioned in another post, I "solved" the problem (actually, "side-stepped" is a better verb! :> ) by keeping all of the "processing" in the chosen data type instead of trying to promote it to something that could handle all types, etc. E.g., do the comparison *as* an operator supported by the "special type".

This lets me keep the invoking code clean and consistent (i.e., no need for anyone to muck with it). And, it also pushes the "work" required for making that implementation choice into the very hands of the developer opting to create that different data type! I.e., presumably, he/she understands the performance issues related to the design of that data type so, now, he/she can further optimize that implementation by dealing with clever ways to "do the comparison", as well.

(and, if the comparison can be suitably trivialized, it can always be inlined!)
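For a scaled-integer type, for instance, the trivialized compare might be nothing more than this (a sketch; the name is invented):

    #include <stdint.h>

    /* branch-free sign-of-difference with no overflow risk:
     * returns -1, 0, or +1 */
    static inline int fix_compare(int32_t a, int32_t b)
    {
        return (a > b) - (a < b);
    }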

Comes a time you have to "shoot the engineer" and see how things fall. On this little issue, I'm gambling that I'm already close enough to a "reasonable" solution...

Reply to
D Yuniskis

That's a reasonable aim!

That depends on how the toolchain implements it. Few toolchains, if any, will be able to see that one side is either a 1 or a -1, and optimise accordingly - they will do a normal int * int multiply. If you've got a nice fast processor that does that in a cycle or two, that's fine - if it's an eight bit processor with no hardware multiply, this means a long slow library call.

It's possible to do a few odd tricks when you know that one side of the multiply is -1 or +1 - but I don't think a compiler will see them. For example, you could do something equivalent to this:

    int multPMone(int pmone, int x)
    {
        pmone >>= 1;              /* turns 1 into 0, -1 into -1 */
        int carry = pmone & 0x1;  /* either 0 or 1 */
        return (x ^ pmone) + carry;
    }

For some processors, that's probably the fastest method.
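A quick sanity check of the trick (it assumes two's complement and an arithmetic right shift, since right-shifting a negative int is implementation-defined):

    #include <assert.h>

    int main(void)
    {
        assert(multPMone( 1, 42) ==  42);   /* * +1 is the identity */
        assert(multPMone(-1, 42) == -42);   /* * -1 negates */
        return 0;
    }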

It can be useful to define minimum requirements (such as a 16-bit cpu, or a hardware multiplier) - it makes it a lot easier to get the code right everywhere.

:-)

Reply to
David Brown

Hi David,

[8<]

>> If you are trying to find a good way to write this sort of thing that
>> produces optimal code on a range of compilers and targets, [...]

I can't see how it would be possible to find an "optimum" implementation for a wide variety of targets and a loosely constrained "problem". :<

What I strive for is a "reasonably good" implementation that discourages folks from "dicking with it" (otherwise known as REbugging!). E.g., the core code in this section of the algorithm is now *so* heavily documented that the idea of having to "update" the documentation to coincide with any changes a person *might* be tempted to make to the code should be "discouragement enough" -- especially when the code wraps around "stuff" that the developer is *intended* to tweak (i.e., "Change *this* instead of *that*!")

Yes. But, nowadays, even an integer multiply is reasonably commonplace/"cheap". OTOH, forcing everything to be double can eat your lunch in a heartbeat! :<

But, on an 8 bit processor, the developer could have opted to use a different data type -- short int, etc. (as it is now rewritten, the developer could even use a nonlinear encoding driven through yet another table, etc.)

Unfortunately, you can't protect against incompetence. But, if you make the hurdles along the "wrong" path HIGHER than those along the RIGHT path, you can *hope* to coerce folks down that right path! :-/

Reply to
D Yuniskis
