Hello, group.
I'm having a heck of a time trying to figure out why my MicroBlaze implementation is presenting *supposedly* single-precision floating point values as double-precision values truncated to 32 bits.
Here's what I mean:
float test = 22;                    // for example

printf("test = 0x%08x\r\n", test);  // prints "test = 0x40360000"
printf("test = %d\r\n", test);      // prints "test = 1077280768"
printf("test = %f\r\n", test);      // prints "test = 22.000000" (or thereabouts)
22 in single-precision floating point (hex) is 0x41B00000, and in double precision it's 0x4036000000000000. So take the upper 32 bits of the double-precision value and you have exactly what's printed to the screen. I'm completely baffled. Am I missing something? A gcc flag? How is this even possible on a single-precision instruction set?
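The only mechanism I can think of is C's default argument promotions: a float passed to a variadic function like printf is converted to double before the call, and %08x would then pick up the high word of that double on a big-endian 32-bit target like MicroBlaze. That would also explain why %f prints the right value, since %f expects a double anyway. A minimal sketch of that theory (hosted compiler, names are mine):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float test = 22;

    /* Default argument promotion: float -> double at a varargs call. */
    double promoted = test;

    uint64_t dbits;
    memcpy(&dbits, &promoted, sizeof dbits);

    /* The full promoted value: expect 0x4036000000000000 for 22.0. */
    printf("as double = 0x%016llx\r\n", (unsigned long long)dbits);

    /* %08x consumes only 32 bits; on a big-endian target that's
       the high word of the promoted double: 0x40360000. */
    printf("high word = 0x%08lx\r\n", (unsigned long)(dbits >> 32));
    return 0;
}

If that's really what's happening, is there any way around it, or is copying the bits into an integer the only clean way to see a float's raw single-precision pattern?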