About thermocouples

n5 = n*n*n*n*n

How many multiplies? Four.

However, even fewer are needed:

n2 = n*n
n4 = n2*n2
n5 = n4*n

Three multiplies. Horner's method (or the unrolled version) is usually the cheapest way to evaluate non-sparse polynomials.
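In C, Horner's method for a dense polynomial is just this (a sketch; c[] holds the coefficients, constant term first, degree n):

/* Evaluate c[0] + c[1]*x + ... + c[n]*x^n with n multiplies
   and n adds (Horner's method). */
double horner(const double *c, int n, double x)
{
    double y = c[n];
    for (int i = n - 1; i >= 0; i--)
        y = y * x + c[i];
    return y;
}

For a sparse polynomial with odd terms only you can run the same recurrence in x^2 and multiply by x once at the end.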

However, these should be sparse - no even terms - but furthermore they cannot give the right answer.

--
¡spuɐɥ ou 'ɐɯ ʞoo˥
Reply to
Jasen Betts


Looking at those coefficients, and knowing that engineers generated them, it is an open question whether the fit obtained for a 9th-order polynomial is truly an optimum *least squares* fit or is systematically flawed from the outset. Without seeing the original calibration dataset it is impossible to say.

One symptom of least squares fitting is that the largest values get fitted best (which may not be helpful if the thermometer is intended to be used mostly at or near room temperature). Hence the rather large residual error when V=0. A minimum 1-norm fit may be better for this job, and nailing the origin to be zero by construction would be wise.
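To make "nailing the origin" concrete, here is a minimal C sketch that least-squares fits t = a*v + b*v^2 with no constant term, so the residual at v=0 is exactly zero by construction (the arrays stand in for whatever calibration data you have):

/* Fit t ~ a*v + b*v^2 through the origin by least squares:
   minimise sum (t - a*v - b*v^2)^2 over a and b by solving
   the 2x2 normal equations directly. */
void fit_through_origin(const double *v, const double *t, int n,
                        double *a, double *b)
{
    double s2 = 0, s3 = 0, s4 = 0, s1t = 0, s2t = 0;
    for (int i = 0; i < n; i++) {
        double v2 = v[i] * v[i];
        s2  += v2;            /* sum of v^2     */
        s3  += v2 * v[i];     /* sum of v^3     */
        s4  += v2 * v2;       /* sum of v^4     */
        s1t += v[i] * t[i];   /* sum of v * t   */
        s2t += v2 * t[i];     /* sum of v^2 * t */
    }
    double det = s2 * s4 - s3 * s3;  /* needs >= 2 distinct nonzero v */
    *a = (s1t * s4 - s2t * s3) / det;
    *b = (s2t * s2 - s1t * s3) / det;
}

The same idea extends to higher order, though past degree 3 or so you want a proper linear algebra routine rather than hand-solved normal equations.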

One of the commonly used scratchpad data analysis programs will get this sort of polynomial fit - where there is a dominant linear trend and small but significant higher-order terms - wrong unless the dependent variable is prescaled to exactly fill the range -1 to +1.

Reply to
Martin Brown

The answers it gives for full-scale voltage on some thermocouple types are a tadge on the whimsical side. For instance, under MS VC++ 2008 I get

v2t R 0.02 = -106551.80 (expected ~ 1685)
v2t E 0.07 =   42518.28 (expected ~ 870)

I don't think it is the compiler that is at fault. I suspect a typo in one or more of the coefficients.

You might want to go back to the original piecewise NIST polynomials.

formatting link

Regards, Martin Brown

Reply to
Martin Brown

On a sunny day (Wed, 06 Oct 2010 16:31:05 +0100) it happened Martin Brown wrote in :


I dunno what you are testing, but there were some early, unfinished test versions. These days (for weeks already), the program is named 'th' (for thermocouple), this one:

formatting link
It has been released officially. It also does temperature to voltage, which is what the whole recent discussion was about, using 'successive approximation'. The voltages are specified in millivolts these days.
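On a monotonic curve, 'successive approximation' can be as simple as bisection on the voltage-to-temperature conversion; a sketch of the idea in C (t_of_v stands in for the NIST inverse polynomial of the chosen type; not necessarily how th-0.4 does it):

/* Find the voltage giving a target temperature by bisecting
   the monotonic v->t conversion. v_lo and v_hi must bracket
   the answer for the thermocouple type in use. */
double v_of_t(double (*t_of_v)(double), double t_target,
              double v_lo, double v_hi)
{
    for (int i = 0; i < 50; i++) {      /* 50 halvings is plenty */
        double v_mid = 0.5 * (v_lo + v_hi);
        if (t_of_v(v_mid) < t_target)
            v_lo = v_mid;
        else
            v_hi = v_mid;
    }
    return 0.5 * (v_lo + v_hi);
}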

This is how to use it (simply typing th shows this usage message):

grml: ~ # th
th: no thermocouple type specified, try -a [E|J|K|R|S|T], aborting.

Panteltje (c) th-0.4
thermocouple voltage (in mV) to temperature and reverse calculator.
Usage: th -a [E|J|K|R|S|T] [-d] [-h] [-m C|K|F] -v voltage || -t temperature

-a type thermocouple type, either E, J, K, R, S, or T.

-c temperature cold junction temperature in units as described with -u.

-d debug mode, prints functions and arguments, and some variables.

-h help, this help.

-m mode temperature mode for temperature to voltage, either C, K, or F for Celsius, Kelvin, or Fahrenheit.

-v voltage voltage in mV, for voltage to temperature conversion.

-t temperature temperature.

Examples:
voltage to temperature for a type T thermocouple:
th -a T -v -5.35
temperature in Kelvin to voltage for a type K thermocouple:
th -a K -m K -t 1000

And it shows for your examples:

grml: ~ # th -a R -v 20
type R platinum-13% rhodium(+) versus platinum(-) -50°C to 1768°C
temperature is 1956.77 K 1683.62 °C 3062.52 °F

grml: ~ # th -a E -v 70
type E, nickel-10% chromium(+) versus constantan(-) -27°C to 1000°C
temperature is 1188.98 K 915.83 °C 1680.49 °F

Your value of ~870 is wrong. This table shows mine is correct:

formatting link

You should read the whole thread before doing things :-) But I understand MS software kept you too busy ...

Reply to
Jan Panteltje


Lots of commonly used equations, and web site calculators that use them, are silly. The most commonly used microstrip equation gives negative impedances for wide traces.

John

Reply to
John Larkin

On a sunny day (Wed, 06 Oct 2010 08:52:46 -0700) it happened John Larkin wrote in :


He is using an early test version of my soft. The NIST poly-things are correct. See my reply to him with examples.

Reply to
Jan Panteltje


But that is a breakdown of slender body approximations taking the model outside its range of valid application. Same problem if you apply the laws of fixed wing aircraft to a bumble bee or test gravitational stability of a star on a human. You get a daft answer because the question being posed is not the right one - other effects matter.

The code in this thread (posted 18/9, with few comments apart from yours about the DC term that I can see from here) is supposed to be valid over a particular range. Testing the full-scale end of the range is a common sanity check, and two of the answers were crazy.

NIST have used an equal ripple error fitting method to derive the polynomials so every detail of every coefficient has to be right. It is instructive to look at the individual terms of the expansion when the input voltage is at maximum range. I prefer it when they specify the basis functions used and show their more physical coefficients.
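For example, a quick C helper that dumps the size of each term at full scale; a mistyped coefficient usually shows up as an out-of-family term (c[] and vmax are placeholders for whichever type is under test):

#include <stdio.h>

/* Print the size of each term c[i]*v^i at full-scale input;
   a mistyped coefficient shows up as an out-of-family term. */
void dump_terms(const double *c, int n, double vmax)
{
    double p = 1.0, sum = 0.0;
    for (int i = 0; i <= n; i++) {
        double term = c[i] * p;
        printf("term %2d: %+.6e\n", i, term);
        sum += term;
        p *= vmax;
    }
    printf("sum    : %+.6e\n", sum);
}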

I think NIST modellers favour Chebyshev polynomials which generate a much better conditioned matrix problem for the polynomial fit. They turn it back into a classical polynomial in x^n as the final step.
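If you keep the fit in Chebyshev form, the evaluation itself is well conditioned too; the standard tool is Clenshaw's recurrence. A minimal C sketch, with x already rescaled onto [-1,1]:

/* Evaluate sum_{j=0..n} c[j]*T_j(x) for x in [-1,1] using
   Clenshaw's recurrence - no explicit powers of x needed. */
double cheb_eval(const double *c, int n, double x)
{
    double b1 = 0.0, b2 = 0.0;
    for (int j = n; j >= 1; j--) {
        double b0 = 2.0 * x * b1 - b2 + c[j];
        b2 = b1;
        b1 = b0;
    }
    return x * b1 - b2 + c[0];
}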

The resulting polynomial goes wild outside its valid domain!

Regards, Martin Brown

Reply to
Martin Brown


As do many common transmission-line equations. That's because they are not models of any physical reality; they are just hacked curve fits.

John

Reply to
John Larkin


It's like the polynomial is squirming to get away from the constraints of the least-squares points. Occasionally I've added a couple of fake points outside where the real data ends (and refitted the resulting data set) so that instruments will behave in a reasonable fashion when you slightly exceed the official range limits.

Reply to
Spehro Pefhany

On a sunny day (Thu, 07 Oct 2010 10:37:59 -0400) it happened Spehro Pefhany wrote in :


I just do a range check. Out of range gives an out of range error.

Reply to
Jan Panteltje

That is unkind to the modellers. They construct an Nth order polynomial fit over a specified range that guarantees maximum error never to exceed epsilon. The price you pay is a residual error that oscillates round the target with magnitude epsilon.

The analytic power series expansion, when it exists, tends to diverge away from reality as x increases, and various tricks are used to restore or obtain good-enough behaviour for engineering purposes.

You can do a lot better than either of these polynomial methods with rational approximations (better accuracy for fewer stored coefficients) and that is how almost all FPUs do it these days.

Taking Exp(-x) as a concrete example (and allowing 7 coefficients):

Standard power series expansion: error O(x^8/8!) ~ 1/40320 ~ 25 ppm (and all concentrated at the end of the range, when x=1).

Chebyshev optimised: global error orders of magnitude smaller, spread evenly over the whole range.
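To put some numbers on that, here is a self-contained C sketch (my own toy, not NIST code) that builds a same-length Chebyshev approximation of exp(-x) on [0,1] from function values at the cosine nodes (the chebft construction in Numerical Recipes) and compares worst-case errors against the truncated power series:

#include <stdio.h>
#include <math.h>

#define N  8                /* coefficients kept in both fits */
#define PI 3.14159265358979323846

static double f(double x) { return exp(-x); }

int main(void)
{
    /* Chebyshev coefficients of f on [0,1] via the cosine-node sum */
    double c[N];
    for (int j = 0; j < N; j++) {
        double s = 0.0;
        for (int k = 0; k < N; k++) {
            double y = cos(PI * (k + 0.5) / N);   /* node in [-1,1] */
            s += f(0.5 * (y + 1.0)) * cos(PI * j * (k + 0.5) / N);
        }
        c[j] = 2.0 * s / N;
    }

    double emax_taylor = 0.0, emax_cheb = 0.0;
    for (int i = 0; i <= 1000; i++) {
        double x = i / 1000.0;

        /* power series truncated to N terms (up to x^7) */
        double t = 0.0, term = 1.0;
        for (int k = 0; k < N; k++) { t += term; term *= -x / (k + 1); }

        /* Chebyshev sum via Clenshaw, x mapped onto [-1,1];
           c[0] enters halved in this construction's convention */
        double y = 2.0 * x - 1.0, b1 = 0.0, b2 = 0.0;
        for (int j = N - 1; j >= 1; j--) {
            double b0 = 2.0 * y * b1 - b2 + c[j];
            b2 = b1;
            b1 = b0;
        }
        double ch = y * b1 - b2 + 0.5 * c[0];

        double et = fabs(t - f(x)), ec = fabs(ch - f(x));
        if (et > emax_taylor) emax_taylor = et;
        if (ec > emax_cheb)   emax_cheb  = ec;
    }
    printf("max |error|, truncated Taylor : %.3e\n", emax_taylor);
    printf("max |error|, Chebyshev fit    : %.3e\n", emax_cheb);
    return 0;
}

By my arithmetic the Taylor error lands around 2.5e-5 at x=1 and the Chebyshev one several orders of magnitude below it, spread across the range; run it and check.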

Reply to
Martin Brown


Sure, but sometimes the customer would prefer to see an instrument keep working to maybe 2550°F rather than quit and shut down the output exactly at 2500°F, so it can be calibrated at full scale, for example, or used near the limits even if there is a bit of over/undershoot. Analog instrumentation doesn't typically have these issues.

Reply to
Spehro Pefhany

Usually they minimise the sum of squared errors over every data point, which is in conflict with the goal of minimising the maximum error at any point. It's the old bit about setting the derivative of the error to zero, and squares are easier to deal with.

formatting link

You can fiddle things a bit by adding extra points where the error is more important. For example, if you know an instrument will be used at 500°C and checked at room temperature, you can add a bunch of extra points around those values.
Reply to
Spehro Pefhany

On a sunny day (Thu, 07 Oct 2010 12:26:53 -0400) it happened Spehro Pefhany wrote in :

You are right. It is a bit like digital audio. Old analog audio people used to go way past what is now 100% in digital, and called it 'headroom'. Too bad that once they started using digital it all clipped...

The better and more precise our instruments, the sharper the borders will be defined. Better get used to it, and educate the users on that. Even the cheapest digital multimeter will show 'overrange'. Old analog ones would bend the needle a bit; sometimes the needle would stay bent :-). On the other hand, humans are neural nets, based on neurons with a hysteresis function, so by nature they bend a bit in perception too.

Reply to
Jan Panteltje


There are polynomials (Legendre polynomials) that cover a range and blow up outside. The Tchebyshev polynomials do NOT blow up immediately outside; that's their advantage.

The problem of extrapolation beyond endpoints is mathematically insoluble (Cauchy radius-of-convergence offers a good treatment of the limitations, with real theorems... if you go for that stuff). Tchebyshev polynomials, though, are the choice for most-benign behavior at the fit boundaries.

Reply to
whit3rd

There are other ways and means - and Chebyshev polynomials are one of them. NPL and NIST use them for certain modelling tasks. These polynomials have the signature of an equal-error solution, as opposed to a least-squares L2-norm fit of coefficients for x^n. Afraid the Mathworld entries for Chebyshev and minimax fitting are dumbed down to oblivion :(

formatting link

Numerical Recipes is somewhat better, but do not under any circumstances rely on their sample code.

formatting link

Wiki has a bit on approximation theory and illustrates the closeness of Chebyshev to ideal minimax solutions for a couple of common functions.

formatting link

This stuff is all related to the mathematics of optimised filter design theory and the Remez exchange algorithm.

L1-norm tends to work better on experimental datasets with noise, but is harder to compute. Minimising |model - data| is a lot less sensitive to outliers than least squares (same for median vs mean).
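The median-versus-mean point is easy to demonstrate; a toy C example with one wild reading among ten (the mean is the least-squares estimate, the median the minimum sum-of-|error| one):

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

int main(void)
{
    /* nine sensible readings and one wild outlier */
    double x[10] = { 1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 50.0 };
    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum += x[i];
    qsort(x, 10, sizeof x[0], cmp);

    printf("mean   = %.3f\n", sum / 10.0);          /* dragged to 5.9 */
    printf("median = %.3f\n", 0.5 * (x[4] + x[5])); /* stays at 1.0   */
    return 0;
}

The single outlier drags the mean (the L2 estimate) out to 5.9 while the median (the L1 estimate) stays at 1.0.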

Chebyshev gives something very close to optimal with a lot less effort.

Regards, Martin Brown

Reply to
Martin Brown


Everything I've tried (including Martin's examples) gives results very close to the tables, in fact better than the claimed accuracy.

--
"For a successful technology, reality must take precedence 
over public relations, for nature cannot be fooled."
                                       (Richard Feynman)
Reply to
Fred Abse
