Is there a convention for negative hex numbers?
-0xFEEE
or
0x-ABCD ?

John
I've always just thought of them as register contents. What they mean depends on context. I have 8 LEDs connected to a port; I want the odd-numbered ones lit, so I write 0xAA to the output register. Is that negative or positive? Or, I do a SUBB and it returns a value of 0x80 in the AC -- is that positive or negative? Depends on the state of C.
Back in high school we did math with hex numbers, but we didn't use any radix notation since it was stated that all numbers were base(x). So it just would have been -FEEE then.
-- Gordon S. Hlavenka
I would use the first one, because this would be the convention for using in C programs. But it depends on the language. E.g. in Forth you can switch the context and any numbers are interpreted and printed with the new radix:
hex ok
-babe cafe + . 1040 ok
PS: -0xfeee is not possible, if you use 16 bit, because usually they are saved as two's complement.
-- Frank Buss, fb@frank-buss.de http://www.frank-buss.de, http://www.it4-systems.de
-0xfeee is not possible _at all_ if you use 16 bit, unless your 'unsigned' is negative, or some other bizarre mapping of bits to sign (I suppose the '1's bit could be the sign bit...). 0xfeee takes 16 bits to fully specify, and the sign bit makes 17...
-- Tim Wescott Wescott Design Services
Just remember to distinguish what you're doing on paper (-0xFEEE) from what's going on in digital-land. In a microprocessor the most likely signed-number coding is 2's complement, but that may not be what's done in an FPGA (sign-magnitude is, I'm told, often more sensible).
-- Tim Wescott Wescott Design Services
Thanks; I'll use the first one.
Sure it's possible. Poke 0xFEEE into a 16-bit register, then negate it. So -0xFEEE = 0x0112.
John
The 0x-ABCD form will likely confuse any parser, so do not use it. If you are subtracting two unsigned numbers, say 0xffff - 0xfff0, then the result would be +15 decimal. However, if you are subtracting two _signed_ numbers, say in two's complement notation, then what would 0xffff - 0xfff0 be? If your number somehow represents the US national deficit, then the - is likely implied. If it is a countdown to a black hole, then I think you can leave the sign out too:
Yes sir, I'll try to remember that.
In a microprocessor the
Microprocessors don't make value judgements about bits in registers; only people do that. 0xFEEE is just a pattern of bits; whether it's positive or negative is up to you. A uP negates by doing a bit complement followed by a register increment, which is a pure binary add of 1 lsb. Again, you can interpret all that any way you like. Most uPs and all FPGAs will cheerfully do anything you ask - like negating 0xFEEE - without throwing exceptions; they don't care.
I've never seen a uP or an FPGA design treat integers as sign-magnitude [1]. We do everything 2's comp, and Xilinx has some neat saturating-adder blocks that clamp overflows if you think that's appropriate.
John
The LINC minicomputer, 1960's, worked in sign-mag. So there were two zeroes, +0 and -0, and the biggest single-word integers were +2047 and -2047. Nice symmetry.
I have an instrument, a waveform generator, that's controlled by serial ASCII commands. Some things, like frequency, are always decimal. Commands look like...
3Freq 21.7692M sets channel 3 to 21... MHz
2Phase 359.9 sets ch 2 phase lag, degrees
5Amplitude 4.55 sets peak volts; short form is 5A 4.55

But there are pure integers, things like control registers with bit fields, and duty cycles (16 bits) and raw DDS frequencies (32 bits), so it's nice to optionally do them in hex. And since negative frequencies are allowed, I figured when I did the number parser I'd include negative hex.
7Raw -0x7FFFFFFF sets channel 7 frequency to 31.999999985 MHz
1Raw -0x20000000 sets channel 1 frequency to -8 MHz

I did the -0x80AB7F3E format, and I think I won't bother supporting the embedded - sign; you're right, that's a nuisance to parse. I'm pushing 6000 lines of code already.
John
Oops, correction
Doesn't 2's comp confuse everybody now and then?
John
Not really- years spent programming control algorithms etc. in assembly has made that just about impossible. These days I try to save temporary (?) confusion for fancier stuff like Kalman filters.
Best regards, Spehro Pefhany
-- "it's the network..." "The Journey is the reward" speff@interlog.com Info for manufacturers: http://www.trexon.com
Well, it depends on the context - in C, for example, there is an "int" data type, which is usually two's complement with a sign bit, and an "unsigned" data type, which is just 0 thru the machine's word size.
In other words, say we have an unsigned int:
0000h = 0d
8000h = 32,768d
FFFFh = 65,535d

vs. a signed int:

7FFFh = 32,767d
0000h = 0d
FFFFh = -1d
8000h = -32,768d

Hope This Helps!
Rich
Sign-magnitude is generally speaking faster to implement but more difficult to design. IIRC u-law and a-law telephony 'compression' formats use sign-magnitude. So you have 2 zero values.
I did some sign magnitude FPGA designs and I'm thinking about re-designing a two's complement IIR filter to sign magnitude to make it operate faster. Xilinx has some oddities in their multipliers which require you to extend the sign bit so all 18 bits are used when dealing with positive and negative two's complement numbers. This adds unnecessary delay in the multiplier if you are only interested in multiplying two 14 bit numbers.
-- Programmeren in Almere? E-mail naar nico@nctdevpuntnl (punt=.)
Same here, though in the first professional interaction I had with software engineers, I did not know what 1's complement was, and when I got them mixed up, one of the SEs asked,
"Do you even know what 1's complement is?"
I said, "No."
Lot of people heard and they all laughed.
-Le Chaud Lapin-
Yup. Almost all modern CPUs avoid sign-magnitude for precisely this reason, since it would add unnecessary logic to the ALU.
I more or less finished a big-integer library in C++ a couple of weeks ago for multiplying arbitrary-precision numbers. Operations include ADD, SUBTRACT, MULTIPLY, DIVIDE, POWER, POWER WITH MODULAR REDUCTION, LCM, GCD, etc. Typical numbers are between 1024 and 4096 bits, though
1,000,000 bits would not be unreasonable for experimentation. In DEBUG mode on a 2.4 GHz dual core, a million-bit x million-bit multiplication takes about 40 seconds in my (unoptimized) implementation.

Everyone who implements such a library quickly learns that, unlike in an ALU, 2's complement is extremely painful to implement in software, especially multiply/divide/etc. Instead, we use sign-magnitude for the entire 4096-bit number.
-Le Chaud Lapin-
Which is why I use 3's complement, which confuses everyone all the time. Much more consistent, don't you think?
John, I've never seen the neg sign preceding a hex number, as you've shown.
The usual convention is 2's complement numbers used for signed variables, so any time you specify a signed integer with the MSB set, you're specifying a negative number.
Anything beyond that would be implementation-specific.
Tom
PS... since I currently have MPLAB open, I just gave it a try.
In C18, I tried the following:
int testnum;
testnum = -0x0123;
To my surprise, that didn't generate a compile error. So even though I've never negated a hex number or seen anyone else do so, apparently it's an accepted convention.
The sim's debugger shows that value as 0xFEDD, or -291 when represented in decimal.
So, I guess it's acceptable to negate a hex number. It's just really weird and confusing to do so.
I'd recommend staying with decimal representation if you're going to do stuff like this.
Tom
Yes. So don't make it more confusing. :-)
Finally, some sanity strikes this thread! :-)
ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.