Small, fast, resource-rich processor

Excellent! Sounds like the right solution - good system engineering.

--
Randy Yates 
Digital Signal Labs 
Reply to
Randy Yates

I hadn't noticed that - the double precision stuff would need to be done in software. Still, the iMX.6 is fast enough to handle that without trouble.

Reply to
David Brown

gcc for ARM has direct support for "long long fract", which is 1.63 fixed point. (Exact sizes vary according to target - stupidly, there is no equivalent of <stdint.h> for fixed point in C.)

Dropping the requirement of floating point doubles will make life much easier.

Reply to
David Brown

The requirement probably comes from the DSP work he will be doing. Because of the accumulate part of a MAC, 32 bits is just not enough - and 32-bit FP (where information begins to get lost beyond 24 bits) is even worse.

On the ColdFire parts I have used there was a MAC accumulator, though - I did not use it, but it was wider than the normal 32-bit registers. The TI DSPs I am familiar with (54xx) have 48 bit accumulators for their 16*16 MAC.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
dp

Yes, I know why he wants accurate values. But the simple matter is that hardware double-precision floating point is only available on a few relatively expensive microcontrollers and DSPs, while single-precision hardware floating point is quite common (such as on larger Cortex-M4 devices, and lots of MPC chips), and for many microcontrollers there is direct compiler support for 64-bit fixed point arithmetic. So if you can use 32-bit floating point, or alternatively use fixed point arithmetic, then the choice of microcontroller is far wider and far cheaper.

Of course, you can always do the double precision floating point stuff in software - that's fine if the processor is fast enough.

Reply to
David Brown

Dimiter,

This is not really accurate. A typical DSP signal path will be 16 bits, or perhaps 24 bits for processors that work like the old Motorola 56K. So even though the accumulators are large, you ultimately have to quantize back to 16 or 24 bits.

The reason for the large accumulators is so you don't overflow (or saturate) the integer accumulation, i.e., to afford a temporarily large dynamic range. Doing the operation in floating point (even single-precision FP) provides the necessary dynamic range.

However, it is true that the intermediate multiply-accumulate in a fixed-point machine is performed to several more bits of precision, so the resulting 24 bits (e.g.) is more accurate than the 24 bits resulting from the equivalent operation in single-precision FP.

Actually the TI TMS320C54x DSPs have 40-bit accumulators, 32 bits for the 16x16 multiply plus 8 "guard bits" for the accumulation.

--
Randy Yates 
Digital Signal Labs 
Reply to
Randy Yates

Why would there be such a thing? After all, there are no fractional integers in the language to begin with, so what use could a header documenting their non-existent properties be?

Fixed-point types exist in C only as non-standard extensions, like the proposed "Extensions for the programming language C to support embedded processors". It's up to the implementors of such extensions how they publish their properties. The above-mentioned proposal has <stdfix.h>; other approaches will have their own, similar headers.

Reply to
Hans-Bernhard Bröker

Which is exactly what I said.

No, single-precision FP is nowhere near sufficient for an accumulator - just 24 bits of mantissa. This is why on normal processors you need either double-precision FP or some specifically built accumulator.

Well, it's been over 10 years since I did this with a 5420, so I may have forgotten. Which is somewhat strange - I spent almost 3 months writing the assembler & debugger I used for the 5420 afterwards, so I would expect my memory to have served better.... :-).

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
dp

Can you point to documentation for that? I didn't find any reference to "fract" in the header files of my version (4.7.4) -- is it really built-in, like "int"?

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

Your typical DSP path, maybe. If you're doing IIR filtering with 16-bit input data you'd better use plenty-o-bits, and you'd better make sure that your "plenty" is plenty enough. I usually go by n = log_2(sample rate / filter bandwidth) as a guide for the minimum number of bits to add to the input data width for 1st-order filters, at least twice that for resonant 2nd-order filters.

I'm often doing control systems work where, between the required precision, the sampling rate, and the required integrator gain (low), even 24 bits is inadequate -- in that case I either use 32 bits (which _is_ adequate for nearly all control systems work) or double-precision floating point.

In this case, because of the Kalman filter, one could probably flog the hell out of it and get it to fit into 32-bit fixed point data, or easily use 64-bit fixed point.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

They are covered in the ISO TR 18037 "Embedded C" technical report. This doesn't quite hold the status of C11 or earlier C standards, but it is close, and compilers (such as gcc and clang) have implemented them on some targets. There may be some compilers that have previously implemented non-standard fixed point extensions and will keep these for compatibility, but TR 18037 is generally viewed as the way ahead for fixed point support in C. (Some other TR 18037 features, like named address spaces, are also seeing real-world use in compilers - while others, such as the I/O extensions, are less likely to be useful.)

It strikes me as strange ("stupid" is perhaps too strong) that the TR 18037 committee has not defined exact-size types for fixed point data. I understand that they want to give compiler developers freedom to pick sizes that are most efficient on a particular target, but it is a great disservice to users. Typedefs like these would be fine:
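(The names below are invented here for illustration - TR 18037 and <stdfix.h> define nothing like them, which is exactly the complaint - and the sizes shown assume one particular target's choices, so this fragment only compiles where gcc's fixed-point support is enabled.)

```c
#include <stdfix.h>  /* provides "fract"/"accum" spellings of _Fract/_Accum */

/* Hypothetical exact-size names, in the spirit of <stdint.h>.
   Sizes assume a target where fract = 16-bit, long fract = 32-bit,
   long long fract = 64-bit. */
typedef fract           fract16_t;   /* s.15 */
typedef long fract      fract32_t;   /* s.31 */
typedef long long fract fract64_t;   /* s.63 */
```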
Reply to
David Brown

It may depend on the build - I believe support for fixed point is a compile-time option on gcc (like support for different languages). In particular, it is only available on some targets.

Your gcc installation should have a directory such as "lib/gcc/arm-none-eabi/4.7.3/include" - look there for a "stdfix.h" file alongside "stdint.h" and "stdbool.h".

Disclaimer - I haven't actually tried using the fixed point types as yet.

Reply to
David Brown

Dayum -- there it be!

Too bad we're doing it the "project efficient" way.

Do you happen to know if the compiler is smart enough to see that you're doing MAC-ish sorts of things, and actually using MAC instructions?

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

Sorry, I don't know - I haven't done anything like that yet on an ARM. I am looking forward to trying it sometime.

I can certainly say that gcc is becoming even more popular for the ARM, and that there are big commercial backers - so we can expect such things to get into gcc faster than used to be the case. The official maintainers of the ARM port are CodeSourcery, who are now part of Mentor Graphics. ARM now supports gcc directly with money, expertise, and a ready-to-use toolchain build. And Freescale CodeWarrior now has gcc as well as their own compiler - and gcc is the default for new Kinetis projects. (But at the same time as making that great move, they cut Linux support as a host!)

Reply to
David Brown

AFAIK it does support double-precision *scalar* floating-point.

-a

Reply to
Anders.Montonen

"High-zoot"??? What is that?

Then why don't you use a small PC? They come in rather small units, not much larger than a Wifi router. ITX may be the form factor I am thinking of.

Lol, that's funny. Hardly a nightmare - and "time sink" is a relative thing. Anyone who knows what they are doing with FPGAs can most likely do the job rather easily. Why is this algorithm so easy on a PC but so hard in an FPGA?

Ok, there's your solution. You can get a USB ADC (although you didn't give any specs on the ADC) and serial ports, and maybe you won't need a JTAG debugger - you can just run the software you already have working...

--

Rick
Reply to
rickman

The requirement of double precision floating point, one presumes. What would the FPGA approach to that be? The DSP blocks in FPGAs are usually fixed point and narrow.

Reply to
Paul Rubin

It's got lots of zoot, of course. Look up "super zoot" in the dictionary. Maybe "zoot suit".

At the time of writing, because I couldn't talk my customer into it. That problem has since been solved.

For the same reason that PCs use processors and not FPGAs. Because there are some algorithms that just fit better onto one thing or the other.

Look up "Extended Kalman Filter" on Wikipedia, consider that your H matrix will change every cycle depending on the incoming data, and then you tell me.

--
Tim Wescott 
Control system and signal processing consulting 
Reply to
Tim Wescott

Do you know how floating point is calculated? The question says you don't. I assume you have only worked with floating point from the software perspective where the computation "just happens".

--

Rick
Reply to
rickman

If you don't know either, that's ok. :P

The reason PCs use CPUs is because the processors are highly optimized for... well, PCs actually. Duh.

How about you just tell me? You are the one who made the statement that FPGAs aren't suitable. I can't think of anything that can be done on a GP processor that can't be done using an FPGA. When DSPs were still in diapers and there was a need for sub-millisecond FFTs, FPGAs came to the rescue.

Why can't you change the H matrix in an FPGA? Everything else can change on every clock cycle, why not in an FPGA?

I don't mean to be insulting, but I get tired of just how much prejudice and ignorance there is about FPGAs. No, an FPGA may not be the best solution to every problem, but to call programming them a "nightmare" is just not engineering. It is superstition. Did you really work with FPGAs before? Did someone competent show you how to use them?

--

Rick
Reply to
rickman
