Fixed point vs. floating point

In some cases, especially in floating point math, division can be treated as a variant of multiplication and therefore will run at multiplier speeds.

Reply to
Richard Henry

With constants, the reciprocal can be used, but not for variables:

formatting link

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

Write a bit of code:

Fill a table with all ones

loop:

save a copy of the table
randomly increment or decrement one element
best fit a sine to the new table
if the fit is better keep the new table
if NOT bored yet goto loop
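The loop above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's production code: the table size (32), the step size (0.01), and the least-squares fit metric are my own choices.

```python
import math
import random

def anneal_sine_table(n=32, iterations=2000, seed=0):
    """Randomly perturb a table, keeping only changes that fit a sine better."""
    rng = random.Random(seed)
    ideal = [math.sin(2 * math.pi * i / n) for i in range(n)]
    table = [1.0] * n                          # fill the table with all ones
    err = sum((t - s) ** 2 for t, s in zip(table, ideal))
    for _ in range(iterations):
        i = rng.randrange(n)                   # pick one element at random
        saved = table[i]                       # save a copy of that element
        table[i] += rng.choice((-0.01, 0.01))  # increment or decrement it
        new_err = sum((t - s) ** 2 for t, s in zip(table, ideal))
        if new_err < err:                      # keep the change if the fit improved
            err = new_err
        else:
            table[i] = saved                   # otherwise restore the copy
    return table, err
```

With an all-ones start, the initial squared error is exactly n + n/2 = 48 for n = 32, and it only ever decreases.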

Reply to
MooseFET

The floating point division by a variable X is usually done as multiplication by 1/X. The 1/X can be computed either sequentially by Newton-Raphson iteration or as a parallel computation using series. Both methods are quicker than direct division. BTW, a mistake in the series computation was the cause of the famous Pentium division bug.
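For reference, here is a toy Python sketch of the Newton-Raphson reciprocal. The exponent-based seed is my own choice of starting point (it stands in for the table seed a real FPU would use), and it assumes x > 0.

```python
import math

def recip_newton(x, steps=6):
    """Approximate 1/x by Newton-Raphson: y <- y * (2 - x*y).
    Each iteration roughly doubles the number of correct bits,
    provided the seed satisfies 0 < x*y0 < 2.  Assumes x > 0."""
    m, e = math.frexp(x)           # x = m * 2**e, with m in [0.5, 1)
    y = math.ldexp(1.0, -e)        # seed y0 = 2**-e, so x*y0 = m in [0.5, 1)
    for _ in range(steps):
        y = y * (2.0 - x * y)      # quadratic convergence toward 1/x
    return y
```

Division is then just a multiply: a/b becomes a * recip_newton(b).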

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

That's just an inefficient successive approximation to the already-known sine curve, which it will eventually match exactly. The better way is to FFT every random iteration and see whether it improves the fundamental and minimizes the harmonics.

John

Reply to
John Larkin

You could also compute 1/x by bitwise successive approximation, especially if you have a fast multiply handy. Square roots, too.
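A minimal Python sketch of that bitwise successive approximation, using only multiply and compare. The normalization assumptions (x >= 1 for the reciprocal, 0 <= x <= 1 for the root) are mine, chosen so the result fits below 1; a real implementation would pre-scale into range.

```python
def recip_sar(x, bits=32):
    """Build y ~ 1/x one bit at a time, MSB first.
    Assumes x >= 1 so that 1/x <= 1; needs only multiply and compare."""
    y = 0.0
    bit = 0.5
    for _ in range(bits):
        if (y + bit) * x <= 1.0:   # keep the bit only if it doesn't overshoot
            y += bit
        bit /= 2
    return y

def sqrt_sar(x, bits=32):
    """Square root by the same trick: keep a bit if (y + bit)**2 <= x.
    Assumes 0 <= x <= 1 so the result fits below 1."""
    y = 0.0
    bit = 0.5
    for _ in range(bits):
        if (y + bit) ** 2 <= x:
            y += bit
        bit /= 2
    return y
```

Each pass resolves exactly one more bit, so 32 passes gives a result good to about 2^-32.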

John

Reply to
John Larkin

With bitwise successive approximation, the gain is one bit per iteration, whereas Newton-Raphson basically doubles the number of known bits at each step. So the number of iterations will be dramatically lower, at the cost of additional complexity, of course.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

The systematic approach to this problem is noise shaping and dithering. You can generate the table in such a way that the junk is either spread evenly across the spectrum or concentrated in a particular frequency band, whatever is required.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

I'm just this week (I hope) finishing up the firmware for this...

formatting link

and the follow-on version, one with channel-channel modulations. My sine table is just the ideal math points, rounded to 16 bits. That data goes through a digital gain multiplier, then a gain calibration multiplier[1], then an offset correction adder, and finally a 14-bit dac. I have no idea of how that math might be optimized for signal quality.

We did include an optional linear interpolation function on the memory data (thank you, Xilinx, for making all rams dual-ported!) which cleans up close-in DDS spurs pretty nicely.
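A software model of that linear interpolation looks something like the following. The table size (256 entries) and the 16-bit fractional phase split are my own example parameters, not the actual hardware's.

```python
import math

SINE_BITS = 8                       # 256-entry table, chosen for the example
TABLE = [math.sin(2 * math.pi * i / (1 << SINE_BITS))
         for i in range(1 << SINE_BITS)]

def dds_sample(phase, frac_bits=16, interpolate=True):
    """Look up sin(2*pi*phase / 2**(SINE_BITS + frac_bits)).
    The top bits of the fixed-point phase index the table; the bottom
    frac_bits select the position between adjacent entries."""
    idx = (phase >> frac_bits) & ((1 << SINE_BITS) - 1)
    y0 = TABLE[idx]
    if not interpolate:
        return y0                   # plain truncation: use the nearest entry below
    y1 = TABLE[(idx + 1) & ((1 << SINE_BITS) - 1)]  # dual-port: fetch the neighbor too
    frac = (phase & ((1 << frac_bits) - 1)) / (1 << frac_bits)
    return y0 + (y1 - y0) * frac    # linear blend between the two entries
```

Halfway between two table entries, the interpolated value is orders of magnitude closer to the true sine than the truncated lookup, which is exactly why it cleans up close-in spurs.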

I'm working on the serial protocol, the command parser, the realtime aspects of synchronizing waveform updates, BIST, the HELP system, managing modulation, the manuals, the calibration software, and testing the whole mess, so squeezing the last bit out of signal quality isn't too big on my radar.

But fact is, in the 30 MHz sort of range, the output amplifiers are the major source of distortion. It's not unusual to see commercial arbs, and even sine-only RF signal generators, that have THD values worse than 30 dBc. Certain parties have suggested futzing the waveform tables to correct for the amp distortions; that sort of insurrection must be sternly suppressed.

Oh well, back to the code. Or maybe a latte first.

John

[1] which I suppose ought to be shuffled from

SIG = SIG * GAIN
SIG = SIG * CAL

to

X = GAIN * CAL
SIG = SIG * X

I'll add that to the NEXT file. We keep a NEXT file on the server for every rev of every product. Anybody in the company can add wishes.

Reply to
John Larkin

Alas, not the 68020, nor the 68030; it was the MC68040 that sported onboard floating point (the hardware support for floating point on the '020 and '030 was in the form of coprocessor compatibility).

The floating point unit was omitted on the MC68LC040, so there were Macintoshes where a 'floating point unit' upgrade consisted of replacing one '040 CPU with another, more expensive one.

Reply to
whit3rd

Just to amplify on this: the numerator (A) and the denominator (B) are normalized (shifted) until each is in the range [0.5, 1), then one repeatedly computes

A := A * (2 - B)   and   B := B * (2 - B)

Note, A/B = A*(2-B) / (B*(2-B)), so the ratio is unchanged. Note also that B rapidly approaches 1 (this is a consequence of the normalization condition, in that B is fairly close to 1 to begin with).

At the first step, you can substitute a table value for the "2-B" approximation, to good effect. Typically, for a 24-bit result, use a 64-entry lookup table and make three pairs of multiplies, and B will be exactly 1 (to the first 24 bits), thus A is the ratio.

See, for instance, "An Algorithm for Division", Svoboda, Inf. Process. Mach. _9: 25-32 (1963)

The (infamous) Pentium divide bug resulted from a fault in transcribing the lookup table. They didn't test B, just took a number of steps based on the worst-case value.

This assumes there is a fast (integer) multiplier, which is true in most modern microprocessors...
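The scheme above can be sketched in Python. The table indexing and bucket-midpoint seed here are my own reconstruction of "substitute a table value for the 2-B approximation", done in floating point for clarity rather than with the integer multiplier the text assumes; the 64-entry table matches the one mentioned.

```python
import math

def goldschmidt_divide(a, b, table_bits=6, steps=3):
    """Divide a/b by repeatedly scaling numerator and denominator by the
    same factor, so the ratio is preserved while B is driven toward 1.
    A coarse 2**table_bits-entry reciprocal table seeds the first step.
    Assumes b > 0."""
    m, e = math.frexp(b)                # b = m * 2**e, with m in [0.5, 1)
    A = math.ldexp(a, -e)               # shift a by the same power of two
    B = m
    # first step: replace the "2 - B" factor with a table estimate of 1/B
    buckets = 1 << (table_bits + 1)     # 64 buckets across [0.5, 1)
    idx = int((B - 0.5) * buckets)      # index from the leading bits of B
    seed = 1.0 / (0.5 + (idx + 0.5) / buckets)  # reciprocal of bucket midpoint
    A, B = A * seed, B * seed
    for _ in range(steps):
        f = 2.0 - B                     # B is near 1, so (2 - B) squares the error down
        A, B = A * f, B * f             # A/B unchanged; B -> 1, so A -> a/b
    return A
```

After the seed, the error in B shrinks quadratically, so three multiply pairs take a table-limited 1% error down below double precision.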

Reply to
whit3rd

No, it is always integer.

The RMS error is the harmonics, so no FFT is needed. Once you fit to the sine, what is left is all harmonic content, because you sweep through the whole table exactly once per cycle of the fundamental. It is a very different situation from the case where the sample rate and the frequency are not related.

Reply to
MooseFET

Or given a fast byte multiply and the need to do a multi-byte divide, you can take advantage of these facts:

1/1 = 1 (kind of obvious, I know)
1/(2^N) = 2^-N
1/(1-e) ≈ 1+e when e is very small

A*B*C*D * (1/(A*B*C*D*X)) = 1/X

If this is integer, first you normalize by sliding the number up; for floats that is already done.

The first step needs a small table.

A is a small number of the form (1+N/256) that is obtained from a table lookup using the upper bits of X. The table values are set so that they move the value of X towards the 0FFFF...FF value, but are certain never to go beyond it. The actual math is done as X + X*N/256, so the bytewise multiply works nicely.

B is another small number, in this case of the form (1+N/8192) (IIRC); producing this number does not require a table, since the N is made by complementing the bits of X.

C is the same thing only 6 more bits down.

D is again 6 more bits down.

Now you use the 1/(1+e) ≈ 1-e rule to invert the number, which is by now very near 2^N.

Next you multiply the result by the A,B,C,D values.

I hope I explained that well enough that you can see the point. It works nicely on an 8051.
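The 8051 byte-multiply details aside, the core of the scheme (multiply X by successive correction factors until it reaches the all-ones value; the product of the factors, suitably scaled, is 1/X) can be shown with plain integer arithmetic. This is my own simplified sketch using full-width (2 - X) corrections rather than the byte-sized A, B, C, D factors; Q16 format and the step count are example choices.

```python
def fixed_recip(x, bits=16, steps=5):
    """Integer-only reciprocal.  x is an integer in [2**(bits-1), 2**bits)
    standing for the fraction x / 2**bits in [0.5, 1) -- i.e., already
    normalized by sliding the number up.  Returns approximately
    2**(2*bits) / x as an integer, using only multiplies and shifts."""
    one = 1 << bits
    acc = one                       # running product of correction factors, Q(bits)
    for _ in range(steps):
        f = (one << 1) - x          # the (2 - X) correction factor
        acc = (acc * f) >> bits     # fold the factor into the result
        x = (x * f) >> bits         # X marches toward the all-ones value
    return acc
```

Truncation in the shifts costs a couple of LSBs, but the result lands within a few counts of the exact reciprocal.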

Reply to
MooseFET

If the table is walked point-by-point, the RMS error from a perfect sine function will show up as various mixes of harmonics, depending on the values. I'd probably go for minimizing the worst harmonic, or maybe minimizing THD, which probably aren't exactly the same thing.

In the case of DDS, we don't walk the table in successive points, but sort of hop and skip all over the place, so math errors create harmonics, non-harmonic spurs, and the equivalent of wideband noise. That could be improved by some numerical analysis/iteration process, if we had a few billion years to spare.

I'm sure work has been done on optimizing DDS waveform tables for spectral content; I don't have any refs handy. It gets worse when downstream gain and offset factors are applied digitally.

We have found that hardware interpolation of "ideal" sinewave lookup tables improves the spectra a lot.

John

Reply to
John Larkin

The technique Analog Devices uses in their DDSes ("SpurKiller technology") should work fine in your FPGAs as well; see the link at

formatting link

Unfortunately, it requires measuring what the system does so that you can then go back and attempt to kill off the spurs optimally, a process that realistically you'd have to automate to use effectively.

Reply to
Joel Koltner

This problem is well known, and there are standard approaches to it. Add a random (or quasi-random) dither to the LSBs of the DDS phase to break the periodicity, and the trash will be distributed evenly over the bandwidth. The quantization of the amplitude is a different story; the way to deal with that is interpolation and noise shaping.
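A behavioral model of that phase dither, for illustration only: the phase-accumulator width, table size, and the choice to dither across the whole truncated-bit field (one common variant of LSB dither) are my own assumptions.

```python
import math
import random

def dds_with_dither(tuning_word, n_samples, phase_bits=24, table_bits=8,
                    dither=True, seed=1):
    """DDS output with optional random dither on the truncated phase bits.
    The dither breaks the periodicity of the truncation error, smearing
    discrete spurs into broadband noise."""
    rng = random.Random(seed)
    table = [math.sin(2 * math.pi * i / (1 << table_bits))
             for i in range(1 << table_bits)]
    mask = (1 << phase_bits) - 1
    shift = phase_bits - table_bits          # bits discarded by the table lookup
    phase = 0
    out = []
    for _ in range(n_samples):
        p = phase
        if dither:
            p = (p + rng.randrange(1 << shift)) & mask  # dither below the table index
        out.append(table[p >> shift])
        phase = (phase + tuning_word) & mask             # phase accumulator wraps
    return out
```

The trade is deliberate: the spur energy doesn't vanish, it just gets spread into the noise floor, which is usually the right trade for a synthesizer.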

You can optimize a DDS table for one particular frequency; you can't optimize a single table for every frequency.

It certainly does. Interpolation of the N-th order is equivalent to raising the number of table points to the power of (N+1).

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

Minimizing THD is the same as minimizing the RMS error if the waveform is really periodic. If I were buying a synth, I'd probably want to minimize the worst spur, which would probably be a SINAD spec. Minimax approximations like that usually have to be generated iteratively in some fashion, e.g. the Remez algorithm for polynomials and rational functions.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

Yikes, that's a lot of chip. It only nulls two selected harmonics; if you have access to the entire 360 degree sine table, or a little 4-bit or so adder ram off to the side, you could fix them all, more or less.

In our case, at higher frequencies, most of the THD comes from opamps, and amplitude is programmable, so there's no one best sine table correction that could be applied.

They say on the datasheet that best spur rejection results from

*manual* tuning.

Oh, in fig 48, shouldn't the spur fixes be added, not multiplied? And the multiplier has three inputs but no outputs!

John

Reply to
John Larkin

Yeah, I believe what they're really saying is, "When you turn on the SpurKiller, everything is going to move/change a bit, so we can give you pretty solid guidelines on how to ascertain the appropriate settings, but software to do this automatically is not entirely trivial and our summer interns were already working on other projects."

Good catch, I believe you're correct. I imagine the draftsman erroneously copied and pasted one of the multipliers for the "cancellation magnitude."

Wait for Rev. B? :-)

Reply to
Joel Koltner

Gosh, just after I named multiple ones. Go study the databooks before you respond.

Reply to
JosephKK
