FFT Speeds

That's not the main data format on any machines I know of. No one uses floating point for logic or flags. It was an issue in Forth because of the way people perform logic using non-flags or math using flags. A one's complement or a signed magnitude data format breaks any assumptions you make about mixing the two.

Some people complain that this practice makes programs potentially non-portable, so it looks like they are going to standardize on two's complement so these programs will be compliant and the griping will stop. No one is complaining that the change will break anything.

Reply to
Rick C

In one's complement a number is negated by simply inverting all bits in a word. Thus, by inserting the inverters in one data path, both the inversion and some other operation can be performed in a single instruction.

In two's complement, on the other hand, a one must be added to the result, requiring an adder, and the ripple carry can propagate through all bit positions. Thus complementing a number needs an ADD instruction and takes as long as an ordinary ADD. Of course these days, with carry look-ahead logic, the ADD is nearly as fast as AND/OR instructions.

With linear ADCs the zero line can be freely selected. One could put the zero at 1/3 FSD and hence have a larger numeric range on one side.

However, in floating point ADCs the range just around zero is of most interest, thus sign/magnitude is more appropriate than biased unsigned or 2's complement. Floating point sign/magnitude is used e.g. in digital telephony, so that weak signals are reproduced better.

Reply to
upsidedown

Two's complement dominates entirely for simple signed integers. But other formats are used in a variety of different situations. IEEE floating point, for instance, uses sign-magnitude for the mantissa, and offset for the exponent.
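As a rough illustration (a sketch, not from the original post), a minimal C program can pull an IEEE 754 single-precision value apart into its fields, showing the sign-magnitude significand and the offset (biased) exponent in practice:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Decode a float into its IEEE 754 fields: sign bit, offset (biased)
   exponent and sign-magnitude significand.  Assumes 32-bit IEEE 754
   floats, which holds on practically every current platform. */
static void decode_ieee754(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* well-defined type pun  */

    unsigned sign     = bits >> 31;          /* 1 bit                  */
    unsigned exponent = (bits >> 23) & 0xFF; /* 8 bits, offset by 127  */
    unsigned mantissa = bits & 0x7FFFFF;     /* 23 bits, magnitude only */

    printf("%+g: sign=%u exp=%u (unbiased %d) mantissa=0x%06X\n",
           (double)f, sign, exponent, (int)exponent - 127, mantissa);
}

int main(void)
{
    decode_ieee754(1.0f);
    decode_ieee754(-1.0f);   /* only the sign bit differs from 1.0f */
    decode_ieee754(0.15625f);
    return 0;
}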

And internally in processors, various redundant forms are used in hardware to reduce latencies for carry chains.

In software, different representations are often used for extended arithmetic, as they can be much more efficient for some types of operations.

At the basic level - for types like "int" in C and the base types in Forth - two's complement has emerged as the undisputed winner.

(I think it was Knuth who said the biggest reason to stop using ones' complement is that so few people get the apostrophe in the right place!)

Reply to
David Brown


That made me recall that in the Forth language group there is a discussion of standardizing on 2's complement... finally.

I can't think of any computers running anything other than 2's complement. There is an architecture running 1's complement, but it is no longer a hardware implementation, only simulated on other machines, lol. There must be some commercial niche for software written for that machine.


"1"

Exactly, and the 2's complement doesn't have the various issues the 1's complement does. It only has a single value for zero. I can't recall how they deal with that. If you subtract 2 from 1 you would get an all ones word which is zero, so you have to subtract another 1 to get -1 which is all ones with the LSB zero. So it's not really simpler, just messy in different ways.
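For reference, a small C sketch (an illustration, not code from the thread) that simulates 8-bit ones' complement arithmetic, including the end-around carry and the negative-zero pattern mentioned above:

#include <stdio.h>
#include <stdint.h>

/* 8-bit ones' complement arithmetic simulated with ordinary unsigned ints. */
static uint8_t oc_neg(uint8_t x)            /* negate = invert all bits */
{
    return (uint8_t)~x;
}

static uint8_t oc_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + b;
    if (sum > 0xFF)                         /* carry out of the top bit...  */
        sum = (sum + 1) & 0xFF;             /* ...wraps around into the LSB */
    return (uint8_t)sum;
}

int main(void)
{
    printf("1 + (-2) = 0x%02X\n", oc_add(1, oc_neg(2)));  /* 0xFE = -1   */
    printf("1 + (-1) = 0x%02X\n", oc_add(1, oc_neg(1)));  /* 0xFF = "-0" */
    return 0;
}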

Actually the invert and add 1 is pretty easy to do in an ALU. You already have the adder, just add a conditional inverter in front and you have a subtract, oh, don't forget to add the 1 through the carry in. Easy and no different timing than the ALU without the inverter if it's done in the same LUT in an FPGA. In fact, I've seen macros for add/sub blocks.
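In C terms the trick amounts to the two's complement identity a - b == a + ~b + 1, with the +1 supplied through the carry-in. A minimal sketch (an illustration, not code from the thread):

#include <stdio.h>
#include <stdint.h>

/* Subtraction built from an adder, a conditional inverter and the carry-in,
   i.e. the a + ~b + 1 identity.  Unsigned arithmetic keeps the wrap-around
   well defined in C. */
static uint32_t add_or_sub(uint32_t a, uint32_t b, int subtract)
{
    uint32_t bb = subtract ? ~b : b;     /* conditional inverter on one input */
    return a + bb + (uint32_t)subtract;  /* carry-in = 1 selects subtract     */
}

int main(void)
{
    printf("%u\n", (unsigned)add_or_sub(100, 42, 0));  /* 142 */
    printf("%u\n", (unsigned)add_or_sub(100, 42, 1));  /* 58  */
    return 0;
}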

The same sort of trick can be used to turn an adder into a mux. In the instruction fetch unit of a CPU I designed there are multiple sources for the address, plus there is a need for an incrementer. This is combined in one LUT per bit by using an enable to disable one of the inputs, which then only passes the other input, like a mux. I think I use that for the return address.

In an FPGA there could be anything. Some ADCs are still signed magnitude, so that conceivably could be carried through the chip.

You are talking about u-Law/A-Law compression. Yes, very familiar with that. Not really relevant to the native data format on CPUs though. I know in u-Law there is a bias which can muck things up if you aren't careful. So it's not even signed magnitude either.

Reply to
Rick C

Sounds like an xkcd joke.

Reply to
Rick C

Yes - ones' complement has several complications, and not many benefits. Sign-magnitude also has the "negative zero" issue, but has nice symmetry and is good for multiplication and division. These formats also avoid the asymmetry of having a larger negative range than positive range, and odd things like "abs(-128) == -128" (using 8-bit to keep the numbers small).
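To make the asymmetry concrete, a short sketch of the 8-bit case (an illustration, not from the post), where the most negative value has no positive counterpart:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t x = -128;            /* INT8_MIN: the range is -128..+127 */

    /* Negating promotes to int and gives +128, but storing the result back
       into an int8_t wraps it to -128 again on two's complement machines -
       the "abs(-128) == -128" oddity in 8 bits. */
    int8_t y = (int8_t)-x;

    printf("x = %d, -x stored back in an int8_t = %d\n", x, y);
    return 0;
}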

Generally the "borrow" flag for subtraction is the inversion of the "carry" flag for addition. This means that a "subtract with borrow" instruction is implemented as an "add with carry" where the subtracted value is complemented first. That is, to do "A - B - borrow" you do "A

  • ~B + carry". That is (another) nice feature of two's complement that lets you re-use existing hardware without having extra carry propagation delays.
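The same identity is what extended-precision subtraction in software leans on. A minimal sketch (not from the post) of a two-limb 64-bit subtraction built only from "A + ~B + carry" on 32-bit words:

#include <stdio.h>
#include <stdint.h>

/* r = a - b over two 32-bit limbs (least significant first).  Each limb
   computes a[i] + ~b[i] + carry; the carry out of one limb (the inverted
   borrow) feeds the next, and the initial carry of 1 supplies the "+1"
   of the two's complement identity. */
static void sub_wide(uint32_t r[2], const uint32_t a[2], const uint32_t b[2])
{
    uint32_t carry = 1;                       /* no borrow coming in */
    for (int i = 0; i < 2; i++) {
        uint64_t t = (uint64_t)a[i] + (uint32_t)~b[i] + carry;
        r[i]  = (uint32_t)t;
        carry = (uint32_t)(t >> 32);          /* carry = NOT borrow */
    }
}

int main(void)
{
    uint32_t a[2] = { 0x00000000u, 0x00000001u };  /* 2^32 */
    uint32_t b[2] = { 0x00000001u, 0x00000000u };  /* 1    */
    uint32_t r[2];

    sub_wide(r, a, b);
    printf("0x%08X%08X\n", (unsigned)r[1], (unsigned)r[0]);  /* 0x00000000FFFFFFFF */
    return 0;
}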
Reply to
David Brown

I don't have a reference to it, but it is certainly believable as the kind of thing Knuth would say. He had both the sense of humour ("I can't go to a restaurant and order food because I keep looking at the fonts on the menu") and the obsession with typographic detail to have said something along those lines. And people regularly spell it incorrectly.

It is, of course, equally believable that the quotation is a myth, or a misattribution.

He certainly /did/ write about the apostrophe:

Donald Knuth, that doyen of computer science, says in The Art of Computer Programming, Vol. 2:

Detail-oriented readers and copy-editors should notice the position of the apostrophe in terms like "two's complement" and "ones' complement": a two's complement number is complemented with respect to a single power of 2, whereas a ones' complement number is complemented with respect to a long sequence of 1s. Indeed, there is also a twos' complement notation, which has radix 3 and complementation with respect to (2...22) (base 3).

Reply to
David Brown

Negative-zero can be a nice thing to have in floating point. Given that IEEE has both Infinities, I'm surprised they don't have proper infinitesimals too, but +ve and -ve zero is a close substitute.

Pointless in integer arithmetic, except to use up the asymmetrical extra code.
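As a side note (an illustration, not from the post, assuming IEEE-754 semantics), the signed zero in floating point does carry real information:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double pz = 0.0, nz = -0.0;

    printf("0.0 == -0.0  : %d\n", pz == nz);        /* 1 - they compare equal */

    /* ...but they remember which side of zero they came from. */
    printf("1.0 /  0.0   : %g\n", 1.0 / pz);        /* inf  */
    printf("1.0 / -0.0   : %g\n", 1.0 / nz);        /* -inf */

    printf("signbit(-0.0): %d\n", (int)!!signbit(nz));  /* 1 */
    return 0;
}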

CH

Reply to
Clifford Heath


At least with sign magnitude it is easy to detect zero, just ignore the sign bit.
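For instance (a sketch assuming a 16-bit word with the sign in bit 15), the zero test just masks the sign bit off, so both +0 and "negative zero" are caught:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Zero test for a 16-bit sign-magnitude word: 0x0000 and 0x8000 both
   count as zero once the sign bit is ignored. */
static bool sm16_is_zero(uint16_t w)
{
    return (w & 0x7FFFu) == 0;
}

int main(void)
{
    printf("%d %d %d\n", sm16_is_zero(0x0000), sm16_is_zero(0x8000),
           sm16_is_zero(0x8001));   /* 1 1 0 */
    return 0;
}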


I don't have a problem with asymmetry. I've never found a situation where it was a problem.

Reply to
Rick C

I've previously disallowed 0x8000 as a digitised 16 bit analog value so I could use it as an escape word in the data stream.
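Something along these lines, presumably (a hypothetical sketch of that approach with invented names, not the actual code):

#include <stdio.h>
#include <stdint.h>

#define ESCAPE_WORD 0x8000u        /* reserved: never appears as sample data */

/* Fold a raw 16-bit two's complement sample away from the 0x8000 pattern
   so that value can be reserved as an escape word in the data stream. */
static uint16_t fold_sample(int16_t s)
{
    if (s == INT16_MIN)            /* -32768 is the 0x8000 bit pattern */
        s = INT16_MIN + 1;         /* clamp onto -32767                */
    return (uint16_t)s;
}

int main(void)
{
    printf("0x%04X 0x%04X\n", (unsigned)fold_sample(INT16_MIN),
           (unsigned)fold_sample(-1));   /* 0x8001 0xFFFF */
    return 0;
}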

--
Cheers 
Clive
Reply to
Clive Arthur
