As you have said yourself, it sounds like this could be an issue with the linker setup for this particular chip - it's nothing to do with the compiler as such. But you will get far more helpful answers from the avr-gcc mailing list - there is no need for workarounds like this until you have confirmed that it is a bug and not something else.
If it /is/ a bug in the linker setup, then the avr-gcc team would very much like to hear about it - that way it can be fixed for everyone.
And if a workaround of some sort is needed, then there may be better or more convenient alternatives - your fix here may be relying on the luck of the link order, and you could get the same problem later with something else.
Other possibilities are to use different -mmcu flags for compilation and linking - use "-mmcu=atmega32u2" during compilation (that ensures you get the right IO headers, etc.), and a different 32K micro when linking (to work around this possible bug in the atmega32u2 link setup). I've done similar things in the past to work with devices that were not yet supported.
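A minimal sketch of that flag split (hypothetical file names; assuming the usual avr-gcc command-line invocation):

```sh
# Compile with the correct MCU so the right IO headers and defines apply:
avr-gcc -mmcu=atmega32u2 -Os -c main.c -o main.o

# Link against a different 32K part's setup, as a workaround only,
# until the atmega32u2 linker issue is confirmed or fixed:
avr-gcc -mmcu=atmega32 main.o -o main.elf
```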
Every C compiler I've seen does not require a cast to get the integer part of a float. The compiler just generates the conversion code in the background for it:
myinteger = myfloat;
Of course, this implies that the C compiler is linking in its default runtime library (RTL) so that it knows how to handle floats, since AVRs, to my knowledge, do not have float operations (an FPU) in the CPU.
You said that negative float values convert to an unsigned value of 0. Here's what your sample program shows:
-5.627771
251
-9.410577
247
-9.602361
247
-6.129940
250
Negative float values do not convert to unsigned values of 0. IOW, it doesn't work the way you said it did, and the example program you posted proves it.
I agree that the OP's problem had nothing to do with float/int conversion. The OP's problem is that he ran out of code space.
No, it doesn't. That's been clarified days ago in this thread.
Yes, but that only applies once you _have_ an unsigned char. The problem under consideration doesn't have one --- it's trying to build one.
You think incorrectly, there.
No, it doesn't. It instructs the compiler to generate code that does exactly what the code says: convert the float to an unsigned char. That's another thing that has been clarified days ago.
Yes, and we will scale f to fit within 255 first. We are only interested in relative magnitude of an array.
The compiler is smart enough to understand "u = f" or "i = f" as well. The cast is for the human reading the code. It may be unnecessary, but it's not brain damaged.
It is mainly for storage. An unsigned char (8 bits) takes less space than a float (16 bits) or double (32 bits). We don't really care how the compiler does it, but storage is important on a chip with 1K of SRAM.
It works on any integer. Unsigned char's are supposed to work modulo their range.
Now I think u = (unsigned char) f; is brain damaged.
This tells the compiler to convert the float to an integer, then take it modulo 256.
The OP should have had:
{ int i; i = f; }
He would have discovered that his code was shorter and faster than his determined attempt to confuse the compiler. (Admitting that the compiler should never get confused.)
Groetjes Albert
--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
Last time I knew, that just cast the object into what you want, but it's not going to magically convert a float to an unsigned char. It's only going to instruct the compiler that the contents of the memory "f" occupies are an unsigned char, and simply move that data from wherever it is in memory into "u", simply speaking. u = f; isn't the same as u = (unsigned char) f; where I come from..
And the last time I knew, AVRs do not have native support for floating point math. Unless I've been out of the loop here? Instead, the compiler should be loading a default RTL that contains low-level code to give you those results. This of course is linked in by the compiler and thus makes it look natural to you, the coder.
And if the C compiler is standardized like most other compilers..
u = (unsigned char) f; simply tells the compiler to treat the object "f" as if it were a char instead. No conversion is taking place here. How the floats are implemented in this compiler's binary format will dictate what value of data gets returned..
I can only assume it may support the mantissa and other elements of a floating-point number, in which case the above will not work as you may think.. But who am I, I don't know anything..
Stick with "u = f;" and the compiler should be happy with that and call one of its float-to-integer conversion RTL functions.. But if you want to know how the float is constructed to represent a floating-point number, I guess you could hack it with the cast. I think only the authors of the compiler would understand its meaning..
You are getting this entirely wrong. When you assign an expression of one type to an lvalue of another type, there is always a conversion - exactly as if you had written the cast. These two assignments are always the same:
x = exp; x = (typeof(x)) exp;
The OP wrote out the cast explicitly to make his code clearer - that's good programming practice.
You are mixing this up with pointer casts. Casting a float to an int (of any size and signedness) causes a proper value conversion. But casting a pointer-to-float to a pointer-to-int and dereferencing it accesses the memory contents without any conversion. To get the effect you are worried about here, you must write:
u = *((unsigned char *) &f);
But of course that's not what the OP wanted, so it's not what he wrote.
It doesn't make any difference whether floating point is implemented in a hardware FPU or with software functions. The compiler (and library) give you the same functionality for floats and doubles regardless of the underlying implementation.
Clearly the implementation makes a difference for speed. It /may/ make a difference if your code relies on the underlying format and your toolset uses a non-IEEE format. But that would be a weird piece of source code, and an unusual compiler - most compilers, including avr-gcc, use IEEE format for their software floating point. It /may/ also make a difference if you are looking for IEEE corner-case functionality - NaNs, signed zeros, rounding modes, etc. But again, that's not the case in most software.
avr-gcc is as standard as it gets - but what you are describing is /not/ standard C behaviour.
I'm not going to generalise, but you are certainly getting this one wrong.
6.2.1.3 When a value of floating type is converted to an integral type the fractional part is discarded. If the value of the integral part cannot be represented by the integral type, the behaviour is undefined.
So undefined behaviour would be invoked for anything outside roughly (-1.0, 256.0) - once the fractional part is discarded, the result has to fit in 0..255.
Groetjes Albert
I have seen a block-scaled integer FFT that had almost no artifacts. I wish I could remember where I saw it. I think it must be related to some VAX or PDP-11 integer FFT code that I have seen. Must have been ...
Really? I've never run across a 16-bit float. Every compiler I've worked with uses 32 bits for float (sign, 8-bit exponent, 23-bit mantissa with implied MSB) and 64 bits for double. It's been a long time since I read IEEE-754 and it wouldn't surprise me if it allowed for other formats; I've just never seen them.
SHARC DSP from Analog Devices supports 16-bit float in the hardware; it is supported by tools as well.
Me, I use a self-made 24-bit float class (16-bit mantissa + 8-bit exponent) routinely; it is very handy when you work with an 8-bit or 16-bit integer machine.
Vladimir Vassilevsky DSP and Mixed Signal Design Consultant