Program like it's 1982!
;-)
We use lookup+interpolation in our embedded stuff, for both thermocouples and RTDs. It's fast, and you can do it with scaled integer math.
We do the reference junction compensation with a backwards lookup table: measure the RJ temperature, convert to millivolts, use that as the ref junction offset. That's more accurate than some linear compensation.
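A minimal sketch of both ideas, lookup plus interpolation in scaled integer math and the "backwards" lookup for the reference junction. The table values and step size here are purely illustrative (roughly type-K-shaped, ~40 µV/°C), not calibration-grade data:

```c
#include <stdint.h>

/* Illustrative EMF table: microvolts at 10.0 degC steps starting at 0 degC.
   Not real calibration data -- just plausible type-K-ish numbers. */
static const int32_t emf_uv[] = { 0, 397, 798, 1203, 1612, 2023, 2436 };
#define STEP_C10 100                  /* table step: 10.0 degC, in tenths */
#define NPOINTS  (sizeof emf_uv / sizeof emf_uv[0])

/* Temperature (tenths of degC) -> EMF (microvolts), linear interpolation,
   integer math only.  Used to convert the measured RJ temperature into the
   reference-junction offset voltage. */
int32_t temp_to_uv(int32_t t_c10)
{
    int32_t idx = t_c10 / STEP_C10;
    if (idx < 0) idx = 0;
    if (idx >= (int32_t)NPOINTS - 1) idx = NPOINTS - 2;
    int32_t frac = t_c10 - idx * STEP_C10;     /* 0 .. STEP_C10-1 */
    int32_t lo = emf_uv[idx], hi = emf_uv[idx + 1];
    return lo + (hi - lo) * frac / STEP_C10;   /* scaled interpolation */
}

/* EMF (microvolts) -> temperature (tenths of degC): find the bracketing
   segment, then interpolate the other way -- the "backwards lookup". */
int32_t uv_to_temp(int32_t uv)
{
    int32_t idx = 0;
    while (idx < (int32_t)NPOINTS - 2 && emf_uv[idx + 1] < uv) idx++;
    int32_t lo = emf_uv[idx], hi = emf_uv[idx + 1];
    return idx * STEP_C10 + (uv - lo) * STEP_C10 / (hi - lo);
}
```

The compensated reading is then `uv_to_temp(measured_uv + temp_to_uv(rj_temp))`, which is the RJ scheme described above: convert the measured RJ temperature to millivolts and add it as an offset before the forward conversion.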
John
Here's a source for 'ya:
I'm calling it pre-mature optimization in that I'm expecting the optimizer to do it anyway and hence a human just forging ahead and doing it first is pre-mature.
But if I am expecting too much from an optimizer here, then I agree with you.
But Knuth, I think, would still suggest that Jan profile the code first before worrying about improving it. :-)
---Joel
They are. Graph voltage vs. temperature with a fat pen and you'll see something close to a straight line.
'twas me.
7th order: ±0.5°C up to 400°C, ±1°C usable below 399°C
Some (type B comes to mind) are not even monotonic at temperatures around 0°C.
You can always fit polynomials to narrower ranges of temperature, and that gives good results. The extreme of that is cubic splines or even piecewise linear approximation. Cubic splines fit a cubic polynomial to a narrow range and match the slopes at the seams with adjacent cubic splines.
For example, this NIST polynomial fits almost perfectly over the range from -270°C to 0°C
double horner_T(double temperature)
{
    const double coeff[] = {
        0.000000000000E+00, 0.387481063640E-01, 0.441944343470E-04,
        0.118443231050E-06, 0.200329735540E-07, 0.901380195590E-09,
        0.226511565930E-10, 0.360711542050E-12, 0.384939398830E-14,
        0.282135219250E-16, 0.142515947790E-18, 0.487686622860E-21,
        0.107955392700E-23, 0.139450270620E-26, 0.797951539270E-30
    };
    const int order = 14;
    double result = 0;
    int i;

    for (i = order; i > 0; i--) {
        result = (result + coeff[i]) * temperature;
    }
    /* coeff[0] is zero for this fit, so the loop can stop at i == 1 */
    return result;
}
You can evaluate the inverse by doing a binary search... you know the result is between -270°C and 0°C (from looking at the voltage), so to get it to within 0.06°C you need to evaluate the polynomial only 12 times.
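A sketch of that bisection, using a simple stand-in polynomial for the EMF curve (the real thing would be the 14th-order NIST fit above; the coefficients here are made up but monotonic over the range):

```c
/* Stand-in monotonic EMF polynomial, mV as a function of degC.  The real
   application would call the NIST fit instead; these coefficients are
   illustrative only. */
static double emf_mv(double t)
{
    return 0.039 * t + 2.4e-5 * t * t;
}

/* Invert by bisection: find t in [lo, hi] with emf_mv(t) == target_mv.
   Each step halves the interval, so 12 steps on [-270, 0] pins t to
   about 270 / 2^12 ~ 0.066 degC. */
double mv_to_temp(double target_mv, double lo, double hi, int steps)
{
    for (int i = 0; i < steps; i++) {
        double mid = 0.5 * (lo + hi);
        if (emf_mv(mid) < target_mv)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

One polynomial evaluation per step, so the step count is the whole cost: 12 calls gets you inside the 0.06°C figure quoted above.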
Or use:
temperature = a0 + x*(a1 + x*(a2 + x*(a3 + x*(a4 + x*(a5 + x*(a6 + x*(a7 + x*(a8 + x*a9))))))));
I have _some_ experience writing DAG-walking code (and basic block code, too) to perform certain optimization rules. I do believe it is possible to perform this optimization, and probably a lot more easily than some that are implemented. But I don't know of a C compiler that does it. And I'd be champing at the bit to find one and study the technique they applied.
So if anyone does know of a C compiler that does this, routinely, I'd appreciate your letting me know.
Well, no one is going to argue with going after big fish before small fry.
And I think there is nothing wrong with c code writers being aware of the obvious heuristic I mentioned.
Jon
On a sunny day (Mon, 20 Sep 2010 16:56:04 -0400) it happened Spehro Pefhany wrote in :
Ah, yes, and that is from
OK, I will try some more things in the software. 0.06 is a bit precise; superconductivity starts in a range of about 5 K, so 1 K accuracy is enough for me. Thank you for the help.
I agree, I just have a suspicion that if you surveyed 100 C programmers, probably fewer than 10% would immediately recognize it for what it is.
Granted, I suppose that doesn't really matter -- it's not like very many would recognize the library code for the transcendental functions either; at some point you have to just trust that the code does what the documents claim it does, I suppose.
---Joel
That's the one I was thinking about. Isn't it the same as this:
Thanks, Rich
Once you see it, it becomes immediately recognizable every time after. It's VERY easy to learn. And if you do any polynomial work at all, you will soon learn it.
I have written those transcendentals, decades ago, using Chebyshev and minimax methods. So I would recognize decent library code. (Indecent library code, too, such as code that relies upon Taylor series.)
And trust, but verify.
Jon
Ever seen this book:
"Hacker's Delight"
Within reason, yes, but realistically for every 100 people who use, e.g., the FFTW libraries, probably no more than a small handful have both the time and mathematical background to be able to thoroughly vet them. Even Intel learned this the hard way, with the Pentium FDIV bug and all...
---Joel
?
I have it, and have written some corrections into my copy, as well. There are some gems in there, and some rather mundane and meandering things that I could do without. All in all, though, it does help make people think more broadly, and I like that in a book.
Something they should have caught, though.
I run some sanity checks on new FP library code I'm about to use -- if it is important to the application. I think others should do that, as well. Part of getting the job done right.
It's kind of hard to tell a company's executive staff, and their customers as well, that an instrument has a serious flaw that could have been (and should have been) uncovered early on. One would at least like to be able to say that every 'reasonable' test had been made.
Jon
Agreed, Intel definitely has enough people and budget that they should have caught it. It ended up costing them some incredible amount (hundreds of millions), as I recall, so I fully expect they now do rather *more* testing... or at least they've made a lot more of the CPU core "soft" so that microcode updates from the BIOS can fix it. :-)
Yeah, it's just that many systems are complex enough these days that "reasonable" can be a gray area. E.g., there are plenty of commercial routers out there running Linux, and some happen to have old enough kernels that there are known exploits to the TCP/IP stack or the internal web server or similar. Yet for a product that might sell for, say, $20 you probably can't expect the guys who designed the router to have even looked at the TCP/IP source code...
In theory that's one of the differences between "consumer grade" and "business grade" computer equipment, but in practice it seems that often the "business grade" stuff isn't nearly as much better as the pricing premium would suggest. (Although perhaps this is due to consumer grade routers being designed by folks in, e.g., China costing $0.50/hr vs. business grade routers being designed by folks in the U.S. costing $50/hr...)
Or maybe you get the worst of both worlds when you have companies paying a manager without much of a technical background $50/hr to manage some off-shore programmers without the strongest programming backgrounds making $0.50/hr... :-)
---Joel
(a4 * x * x * x * x) + (a5 * x * x * x * x * x) + (a6 * x * x * x * x * x * x) + (a7 * x * x * x * x * x * x * x) + ...
Better to write x^n rather than n x's.
...and this is better when writing software: use common factor to speed up calcs.
I just LOVE sites that take a looooong time loading - most especially those that show a blank page (zilch) during loading time.
Make at least one of them a nova..
Article, schmarticle... it looks like they have little clue as to how to factor for minimal math ops.
You can write it but unfortunately a^b in C etc. is the exclusive-or of a and b.
Which "they"? You're not talking about the people who wrote this:
"Given a polynomial of degree n,
p(x) = a_n*x^n + a_(n-1)*x^(n-1) + ... + a_1*x + a_0
One might suspect that n+(n-1)+(n-2)+...+1 = n(n+1)/2 multiplications would be needed to evaluate p(x) for a given x. However Horner's Rule shows that it can be rewritten so that only n multiplications are needed:
p(x) = (...((a_n*x + a_(n-1))*x + a_(n-2))*x + ... + a_1)*x + a_0 "
are you?
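For the record, both forms really do compute the same thing; only the multiply count differs. A small self-contained check, with arbitrary sample coefficients (coefficients are stored low order first, a[0] = constant term):

```c
/* Naive power-by-power evaluation: recomputes x^i from scratch each term,
   costing 1 + 2 + ... + n = n(n+1)/2 multiplies for x powers alone. */
double poly_naive(const double *a, int n, double x)
{
    double sum = a[0];
    for (int i = 1; i <= n; i++) {
        double p = 1.0;
        for (int j = 0; j < i; j++)   /* i multiplies to form x^i */
            p *= x;
        sum += a[i] * p;
    }
    return sum;
}

/* Horner's rule: n multiplies and n adds total. */
double poly_horner(const double *a, int n, double x)
{
    double sum = a[n];
    for (int i = n - 1; i >= 0; i--)
        sum = sum * x + a[i];
    return sum;
}
```

For p(x) = 1 + 2x + 3x^2 + 4x^3 at x = 2, both give 49; Horner just gets there with three multiplies instead of six.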
Thanks, Rich