Floating point calculations

Hi Friends,

I have tried a lot with this. I don't know if my mind is just not good at this.

I am trying to do an arithmetic calculation that involves multiplying a 32-bit integer with a floating point number. For example:

200 * 5.25.

How can I write assembly code for this on ARM7 LPC2292 boards? Please help me out with this. I am a starter in assembly language programming.

Thanks knight

Reply to
knightslancer

knightslancer wrote:

200 * 5.25 = 200 * 21 / 4

It's fastest if you can reduce your problem to integer operations. Please check if you _really_ need floating point operations.

--
With kind regards

Dipl.-Ing. Frank-Christian Krügel
Reply to
Frank-Christian Krügel

200 * 21

Right shift the answer by 2.

(remember, this is assembly...)

--
http://www.wescottdesign.com
Reply to
Tim Wescott


Do it in C and look at the assembly output.
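For example, a tiny C function like this (the name `scale` is just an illustration) can be compiled with `-S` on your ARM cross-compiler to inspect the code the compiler generates for the int-times-float multiply:

```c
/* Multiply a 32-bit integer by a floating point constant.
   Compile with e.g. "gcc -S" and read the generated .s file. */
float scale(int x)
{
    return x * 5.25f;
}
```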

Reply to
joepierson

Either the chip has floating point hardware or it doesn't.

If it does have floating point hardware, then you need to figure out how to use it (and if you have to be told, in detail, how to do it for this chip, then assembly language programming isn't for you). Most of the FP hardware that I've seen involves an engine that's sorta kinda separate from the chip's regular ALU. You'll need to load some special floating point registers with the numbers, then call a special floating point instruction* and do whatever you need to do to retrieve the result.

Note that floating point hardware is often quite loosely coupled to the rest of the processor, which can cause synchronization problems if you're not careful. If your assembly is coexisting with C, and if the hardware isn't tightly coupled, you may screw up the C floating point library's state machine by twiddling with the processor's floating point hardware -- I have _absolutely no idea_ how the ARM handles this particular task, so I can't say if this is an issue or not.

If it doesn't have floating point hardware then it's up to software to provide floating point operations. If you're lucky you'll have access to a floating point library (often these will be called "emulation" libraries), and you'll just have to figure out how to use it. If you're _not_ lucky you'll have to write your own floating point library. I would expect that there's _something_ floating around out there in Gnu-land for software floating point on the ARM, but you'll have to find it.

* I don't think this applies to ARM chips in general, but for less well integrated processors there may be no separate "floating point" instructions. Instead, you have to actually tickle the floating point hardware somehow (sometimes the write of the second operand will do it), and poll the hardware (or get interrupted) to see that it's done.
--
http://www.wescottdesign.com
Reply to
Tim Wescott


Hey guys, thanks for your posts. But I have to represent the data in Q24.8 format and the integer in 32-bit format, using only DCD directives in the data section, like this:

x DCD 100
a DCD 525E-2

Only after doing this can I manipulate them to make room for the calculations. Please help me.

Thanks knight

Reply to
knightslancer

Then it's not floating point! Say what you mean.

"Using only DCD directives" That can only be homework. What did your professor say?

--
http://www.wescottdesign.com
Reply to
Tim Wescott


Hmm... then what would be the best way to do it?

Reply to
knightslancer

If the processor does not have a floating point instruction set, just convert the integer to the same floating point notation as your floating point numbers (whatever notation you have chosen). Doing the actual floating point multiplication is just multiplying the significands and adding the exponents and correcting the bias.
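As an illustration of that recipe for IEEE-754 single precision (a minimal sketch; the function name is mine, and rounding, subnormals, infinities and NaNs are all deliberately ignored):

```c
#include <stdint.h>
#include <string.h>

/* Toy binary32 multiply: multiply the significands, add the
   exponents, correct the bias (127). Not a complete implementation. */
static float fmul_sketch(float a, float b)
{
    uint32_t ua, ub;
    memcpy(&ua, &a, 4);
    memcpy(&ub, &b, 4);

    uint32_t sign = (ua ^ ub) & 0x80000000u;
    int32_t  ea = (ua >> 23) & 0xFF, eb = (ub >> 23) & 0xFF;
    uint64_t ma = (ua & 0x7FFFFFu) | 0x800000u;  /* implicit leading 1 */
    uint64_t mb = (ub & 0x7FFFFFu) | 0x800000u;

    uint64_t m = ma * mb;        /* 48-bit significand product */
    int32_t  e = ea + eb - 127;  /* add exponents, correct the bias */

    if (m & (1ull << 47)) {      /* product >= 2.0: renormalize */
        m >>= 24;
        e += 1;
    } else {
        m >>= 23;
    }

    uint32_t ur = sign | ((uint32_t)e << 23) | ((uint32_t)m & 0x7FFFFFu);
    float r;
    memcpy(&r, &ur, 4);
    return r;
}
```

For 200.0 * 5.25 this yields exactly 1050.0, since the operands and the product are all exactly representable.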

But as others have said, the best thing in most embedded systems is getting rid of the floating point calculations entirely.

Paul

Reply to
Paul Keinanen

I have to disagree here. Although floating point math is typically some 15 times slower than the native math of an 8/16-bitter, it greatly simplifies development and makes the code much more readable and portable. As for code speed and size, they matter only in the few cases when they matter.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

An optimizing compiler should already convert /2^i to >>i. It's useless to make the code less readable if the compiler optimizes in the right manner. ;)

Bye Jack

Reply to
Jack

That would be true if he were using a compiler, but he wants to use assembly (for homework).

Reply to
David Brown

It is not just a matter of 15 times slower - software floating point can be a great deal slower than that compared to well-designed integer algorithms, especially on smaller micros. I still agree with your principle, however - there is no point in forcing an inherently floating point algorithm into integer maths if the size and speed of the code is not important. Correct code is more important than fast code!

Reply to
David Brown

Not true.

Still not true.

Peter

Reply to
Peter Dickerson

The code becomes slow and bulky on a small micro if you demand full IEEE-754 compliance :-).

In small systems it can be more practical to use some other format more suitable for the available simple instruction set. One example was the 6-byte Turbo Pascal real data type format, with an 8-bit exponent and a 40-bit mantissa.

For really small systems a 3-byte format with an 8-bit exponent and a 16-bit mantissa is often enough and easy to implement. Such a format gives about 4-5 significant digits, which is often enough when the application is interfacing with the external world through 12-16 bit A/D and D/A converters.
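A hypothetical C layout for such a 3-byte format, with multiplication done the same way as above (multiply mantissas, add exponents); the struct and field names are mine, the format is unsigned, and rounding is ignored:

```c
#include <stdint.h>

/* Hypothetical 3-byte float: 16-bit mantissa kept normalized in
   [2^15, 2^16), signed 8-bit exponent; represented value is m * 2^e. */
typedef struct { uint16_t m; int8_t e; } f24;

static f24 f24_mul(f24 a, f24 b)
{
    uint32_t p = (uint32_t)a.m * b.m;  /* product lies in [2^30, 2^32) */
    f24 r;
    if (p & 0x80000000u) {             /* top bit set: drop 16 low bits */
        r.m = (uint16_t)(p >> 16);
        r.e = (int8_t)(a.e + b.e + 16);
    } else {                           /* renormalize: drop 15 low bits */
        r.m = (uint16_t)(p >> 15);
        r.e = (int8_t)(a.e + b.e + 15);
    }
    return r;
}
```

With 200 stored as 0xC800 * 2^-8 and 5.25 as 0xA800 * 2^-13, the product comes out as 0x8340 * 2^-5 = 1050.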

Paul

Reply to
Paul Keinanen

Do you mean that it is not the case that the compiler will optimise a /2^i to a >>i instruction (or instruction sequence)?

*Roughly* speaking, any decent compiler *will* do that optimisation. If you want to be pedantic, then the appropriate strength reduction transformation for the /2^i is dependent on a number of factors, including the signedness of the operands (signed division involves a little more in addition to the shift), the CPU in question (maybe it has a fast divider), other parts of the code (maybe the result can be pre-calculated, or calculations can be combined or omitted), and so on.
Reply to
David Brown

Yes, such extra formats can be very efficient. However, you'll probably lose all benefits of having clear and understandable source code (which is often the reason to choose floating point in the first place), unless your compiler supports such formats directly.

Avoiding full IEEE-754 is almost always a good idea - embedded systems (and non-embedded systems) seldom have use for NaNs, etc., in real programs.

Reply to
David Brown

Yes, I want to be pedantic. A shift is not sufficient for signed integers unless the compiler knows it will be shifting a positive integer. As you say, there is an extra offset to add for negative integers, plus a test if you don't know which.

Peter

Reply to
Peter Dickerson

Are you sure they mean floating point, or do they mean fixed point, like 24 bits used for the integer part and 8 bits for the fractional part (of course with a smallest step of (1/2)^8)? In the fixed point case the problem is very easy. Just left-shift your 32-bit integer by 8 bits, multiply it by your 32-bit fixed point number as-is, and right-shift the 64-bit result by 8 bits.

5.25 as fixed point (24.8) would be something like

00000000 00000000 00000101 . 01000000 (bit 7 = 1/2, bit 6 = 1/4, etc., down to bit 0)

and 200 would be

00000000 00000000 00000000 11001000 === left shift by 8 bits ==> 00000000 00000000 11001000 . 00000000

So, at the end you multiply two 32-bit numbers:

  00000000 00000000 00000101 01000000
x 00000000 00000000 11001000 00000000
-------------------------------------------
....0 00000100 00011010 00000000 00000000

thus by right shifting by 8-bits

00000000 00000100 00011010 00000000

or as fixed point (24.8)

00000000 00000100 00011010 . 00000000

of which the integer part is 1050 (the upper 24 bits) and the fraction 0 (the lower 8 bits).

Note that this is for unsigned numbers and with your integer limited to 24 bits (because of the 8 fractional bits).

Best regards GM

Reply to
GM

That's why it's best to let the compiler do the optimisation - it will get it right!

But if you are writing critical code, it's good to know *how* the compiler will optimise the code, so that you can help it (for example by choosing signed or unsigned data appropriately).

Reply to
David Brown
