But I _thought_ that in C++ 'A' was a char. Looks like a compiler bug to me. The justification I once read for that change between C and C++ specifically said it was so that overloaded functions would do what you expected when passed a literal character constant like 'A'.
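Something like this sketch (the function name is made up) is the behaviour that justification is about:

  #include <cstdio>

  void show(char) { std::printf("char overload\n"); }
  void show(int)  { std::printf("int overload\n");  }

  int main()
  {
      show('A');   /* 'A' has type char in C++, so the char overload is an
                      exact match and gets called, exactly as intended.     */
  }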
I generally count it as a mistake to assume that the compiler is going to be an absolute stickler for conforming to standards. You can generally count on correct behavior on the well-trodden paths, but not in the corner cases.
It's sad, but that wariness has saved my ass numerous times.
If the char type is not exactly the same type as u8, and s16 is the same type as int on your platform, the compiler may conclude (it doesn't take much to trigger this) that, after the implicit promotion to int, the s16 version of foo is a better match than the u8 version.
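Roughly this is what I mean, assuming the usual typedefs and a 16-bit int on the target (I'm guessing at the exact definitions):

  typedef unsigned char u8;
  typedef int           s16;    /* on a 16-bit-int target, s16 is just int */

  volatile int called;

  void foo(u8 x)  { (void)x; called = 8;  }
  void foo(s16 x) { (void)x; called = 16; }

  int main()
  {
      /* 'A' is a char; char -> int is a promotion, char -> unsigned char is a
         conversion, and a promotion always outranks a conversion, so foo(s16)
         wins even though foo(u8) looks like the "obvious" match.             */
      foo('A');
      return called;    /* 16 */
  }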
I tried it with Visual Studio 2010 with the following code:
  void foo(unsigned char x);
  void foo(int x);
I see that foo('A') calls the int version. If I replace unsigned char with signed char it still calls the int version. Only if I leave out unsigned/signed does the char version get preferred over the int version.
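A runnable version of that test looks roughly like this (the printfs are just mine, to make the chosen overload visible):

  #include <cstdio>

  void foo(unsigned char) { std::printf("unsigned char version\n"); }
  void foo(int)           { std::printf("int version\n"); }

  int main()
  {
      foo('A');   /* prints "int version"; replacing unsigned char with signed
                     char gives the same result, and only a plain char overload
                     is an exact match for 'A'.                                */
  }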
I'm not a language lawyer, but I'm inclined to believe it is your mistake ;-)
Oops! Reading is *fun*damental - you said ECPP - in C++, 'A' is a char. I imagine the IAR compiler prematurely promoted it to a short. I don't trust *any* of the language features in *any* of the IAR compilers.
Might be worth a

  #ifdef __cplusplus
  #error YesC++
  #endif

to see what mode the compiler thinks it's in there. Also print out "sizeof('A')".
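Something along these lines should compile as either C or C++ and show both things at once (printf is just for illustration; substitute whatever output your target has):

  #include <stdio.h>

  int main(void)
  {
  #ifdef __cplusplus
      puts("compiled as C++");   /* or #error YesC++ if you just want the build to stop */
  #else
      puts("compiled as C");
  #endif
      /* In C, 'A' has type int, so this prints sizeof(int);
         in C++, 'A' has type char, so it prints 1.          */
      printf("sizeof('A') = %u\n", (unsigned)sizeof('A'));
      return 0;
  }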
A 'char' is most likely not a 'u8', which I assume to be 'unsigned char'; in C++ they are distinct types even when plain char is unsigned. If 'char' is signed, converting it to 'unsigned char' is a conversion, which is a worse candidate than the promotion to a 16-bit 'int' (presumably what your 's16' is), because the promotion cannot lose data.