Any way you want to represent it. You have to store the numbers spread over several words of memory, and manipulate them carefully.
Double and triple precision arithmetic and floating point arithmetic are all perfectly practical (if tedious and bulky). For extra credit, try arbitrary precision arithmetic, where the numbers can get as long as they need to.
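To make "spread over several words" concrete, here is a minimal sketch in plain C (the names are made up) of multi-word addition: the same byte-at-a-time carry propagation a PIC16 routine would do with ADDWF and the carry flag.

    #include <stdint.h>

    #define LIMBS 4  /* 4 x 8 bits = 32-bit extended precision */

    /* Add two little-endian multi-byte integers: r = a + b.
       Returns the final carry (1 on overflow). */
    static uint8_t add_multibyte(uint8_t r[LIMBS],
                                 const uint8_t a[LIMBS],
                                 const uint8_t b[LIMBS])
    {
        uint8_t carry = 0;
        for (int i = 0; i < LIMBS; i++) {
            uint16_t t = (uint16_t)a[i] + b[i] + carry; /* fits in 16 bits */
            r[i]  = (uint8_t)t;          /* low byte is the result limb */
            carry = (uint8_t)(t >> 8);   /* high byte is the carry out  */
        }
        return carry;
    }

Wider precisions just mean more limbs; the loop body stays the same.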
Turing machines can do almost everything, though they may take a very long time, and a great deal of memory, to do it.
You don't. PIC16 chips do 8-bit addition, subtraction and logic and not much else; multiplication, division etc. have to be implemented as subroutines. 16-bit math subroutine libraries are common, but I've never needed floating point on such a small processor; if I did, I'd make up my own fixed-point format (or just store the value as an integer multiple). When I need true floats I use a PIC18F part like the PIC18F2525 and program it in C; the free SDCC compiler works for me.
You can find floating point libraries for the PIC16 family. Some use the IEEE 754 representation.
It might be less painful to write the application in C if you really need floating point. You can download Microchip's free XC8 compiler, though its free mode has deliberately poor optimization.
Fractional integers (integers with a radix point somewhere appropriate) are often better speed-wise, but there is little percentage in such optimizations in 2018.
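A sketch of that fixed-point idea, assuming a Q8.8 layout (radix point between the two bytes of a 16-bit word); the macro and helper names here are made up:

    #include <stdint.h>

    typedef int16_t q8_8;   /* 8 integer bits, 8 fraction bits */

    /* For compile-time positive constants only; the compiler folds it. */
    #define Q8_8(x) ((q8_8)((x) * 256.0 + 0.5))

    static q8_8 q_add(q8_8 a, q8_8 b) { return a + b; }  /* plain int add */

    static q8_8 q_mul(q8_8 a, q8_8 b)
    {
        return (q8_8)(((int32_t)a * b) >> 8);  /* re-scale after multiply */
    }

    /* 1.623 becomes round(1.623 * 256) = 415, stored as a plain integer. */

Addition and subtraction cost nothing extra over integer math; only multiply and divide need the re-scaling shift.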
In addition to the good advice given above, one should also consider whether a true binary floating point representation is justified. Especially with 4- or 8-bit data word lengths, primitive instruction sets and/or addressing modes, and only a few floating point operations required, one should ask whether a full decimal-to-floating-point conversion (and back) is worth it.
Many 1950s and 60s "commercial" (i.e. accounting) computers used decimal (BCD) representation internally. The same convention is still used in calculators.
For example, 1.623 can be represented as 0.16230000E+01, which could be stored in 8-bit bytes in BCD as an exponent byte followed by four mantissa bytes, two decimal digits per byte.
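As a concrete layout (sign handling omitted, naming assumed), that value packed two BCD digits per byte looks like:

    #include <stdint.h>

    /* 0.16230000E+01 packed two BCD digits per byte:
       byte 0 = exponent (+01), bytes 1..4 = mantissa digits 1 6 2 3 0 0 0 0 */
    static const uint8_t one_point_623[5] = { 0x01, 0x16, 0x23, 0x00, 0x00 };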
In fact, floating point addition/subtraction is the harder case, since you have to deal with normalization and denormalization: the operand exponents must be aligned before the add, and the result renormalized afterwards.
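A toy sketch of why that is, using an unpacked format with made-up field names (24-bit normalized mantissa, same-sign operands only):

    #include <stdint.h>

    typedef struct { uint32_t mant; int8_t exp; } ufloat;  /* toy float */

    /* a + b: align exponents, add mantissas, renormalize the result. */
    static ufloat uf_add(ufloat a, ufloat b)
    {
        if (a.exp < b.exp) { ufloat t = a; a = b; b = t; } /* a: larger exp */
        uint8_t d = (uint8_t)(a.exp - b.exp);
        b.mant = (d < 32) ? (b.mant >> d) : 0;  /* denormalize b to match a */
        a.mant += b.mant;
        while (a.mant & 0xFF000000u) {  /* renormalize if the sum overflowed */
            a.mant >>= 1;
            a.exp++;
        }
        return a;
    }

Multiplication, by contrast, just multiplies mantissas and adds exponents, with at most a one-bit normalization at the end.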
If an 8-bit processor is so primitive that it doesn't have any DAA (Decimal Adjust Accumulator) instruction, it might be easier to use base 100 (0..99) to store two decimal digits in a byte; in this case ordinary binary arithmetic works within each byte, with a carry whenever a byte exceeds 99, as sketched below.
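A sketch of that base-100 scheme in plain C (a real PIC16 version would be assembly, and the names are made up):

    #include <stdint.h>

    #define DIGITS 4   /* 4 bytes = 8 decimal digits */

    /* r = a + b in base 100, least significant byte first.
       Each byte holds one value in 0..99. Returns the final carry. */
    static uint8_t add_base100(uint8_t r[DIGITS],
                               const uint8_t a[DIGITS],
                               const uint8_t b[DIGITS])
    {
        uint8_t carry = 0;
        for (int i = 0; i < DIGITS; i++) {
            uint8_t t = a[i] + b[i] + carry;  /* max 99+99+1 = 199, fits */
            if (t >= 100) { t -= 100; carry = 1; } else { carry = 0; }
            r[i] = t;
        }
        return carry;
    }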
Although you *can* do this - today it would make much more sense to use an ARM core that has native floating point support!
PICs are better suited to ultra low power minor control functions.
It depends how much you know about the inputs and outputs that the system has to handle. Floating point is often used as a crutch by people who don't really know what they are doing to disguise the fact.
(I once wrote one of these for AVR, for decimal conversion. The divide-by-10 operation takes only ~30 cycles, thanks to the fast hardware (8x8) multiply, and to using this trick instead of long division. Yes, quotient and remainder are produced as usual. Downside: the operation had to be a full 24 bits to maintain exact results for all inputs. That is, a 16-bit input is multiplied not by 6553 but by a bit more, then shifted.)
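The trick being described is the standard multiply-by-scaled-reciprocal replacement for division. A sketch in C, using the textbook constant for a full 16-bit range (not necessarily the poster's exact AVR values):

    #include <stdint.h>

    /* Exact x/10 for any 16-bit x: one 16x16->32 multiply and a shift.
       52429 = ceil(2^19 / 10); the rounding error stays below 1 for
       all x < 2^18, so every 16-bit input divides exactly. */
    static uint16_t div10(uint16_t x, uint16_t *rem)
    {
        uint16_t q = (uint16_t)(((uint32_t)x * 52429u) >> 19);
        *rem = (uint16_t)(x - q * 10u);  /* remainder falls out for free */
        return q;
    }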
It is hard to justify using some of the cruder devices in the PIC line for new designs, but I expect this is not a new design. Who wants to redesign a board and/or retest and possibly rewrite code for a new processor?
If redesign is an option I would encourage looking at the MSP430, which is a very nice low power family as well. But yes, it is hard to want to work with anything other than an ARM these days. They cover a wide range of capability, although working with an application processor is so much different than working with a CM0 that the common points seem pretty limited. So I'm not clear why that is an advantage, I suppose.
On Tuesday, April 24, 2018 at 3:38:19 PM UTC-4, snipped-for-privacy@downunder.com wrote:
... assembly for pic16, for example if i wanna save a value of 1.623 in memory, how can i save it using the 36 assembly instructions in pic16?
I recall the 8080 had a "decimal add adjust" instruction (or some similar name) that would correct the result of an addition for adding BCD numbers in a byte. All in all, if the inputs and outputs are going to be decimal, then in this case using BCD arithmetic might be a useful way to go. Certainly it will be easier to debug.
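For processors without such an instruction, the same correction can be done in software. A sketch of adding two packed-BCD bytes (two digits each; names assumed):

    #include <stdint.h>

    /* Add two packed-BCD bytes, propagating a decimal carry in/out.
       This is the fix-up that DAA performs in hardware. */
    static uint8_t bcd_add(uint8_t a, uint8_t b, uint8_t *carry)
    {
        uint8_t lo = (a & 0x0F) + (b & 0x0F) + *carry;  /* low digit  */
        uint8_t hi = (a >> 4) + (b >> 4);               /* high digit */
        if (lo > 9) { lo -= 10; hi++; }     /* carry between the digits */
        if (hi > 9) { hi -= 10; *carry = 1; } else { *carry = 0; }
        return (uint8_t)((hi << 4) | lo);
    }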
I don't understand why anyone would want to write their own arithmetic routines for a processor. There have to be a number of sources for such code, mostly free I expect. Doesn't Microchip have floating point libraries?
That strongly depends on the requirements for the floating point. Most of the smaller ARM devices only support single precision floats in hardware, which have less precision (a 24-bit significand) than 32-bit integer arithmetic. If you need double precision floating point, you are back to using libraries, or you need a much larger and more power hungry device.
That was only possible/sensible because the status register saved the nybble carry-out.
Clifford Heath.
Although true, relying on double precision reals to make a bad algorithm work is not good engineering practice. Most signal processing can be done in scaled integers or fixed point if you know what you are doing.
But a PIC16's memory isn't going to hold very many double precision reals anyway, so I think it is a moot point.