Thermocouple and RTD linearisation question

Omega has some nice stuff.

The tables do change now and then, every few decades maybe.

Note that there are two standard platinum RTD tables, the usual 385 curve and the pure-platinum lab grade 392.

It's easy to linearize a platinum (or nickel!) RTD in hardware, but that's rarely necessary these days.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
jlarkin

How many heater watts for, say, a 16-channel t/c acquisition system?

"Lessened computation" only appeals to the math-phobic. I'd rather have a half page of code, than design a thermally insulated, heated junction box, and design the heater controller.

Our t/c products let the customer choose their own reference-junction location and configuration, which they sometimes do. Sometimes it's literally a bump in a cable. We only ask them to add an RTD out there, which we will process. They seem to like that. If we asked them to heat the junction box to an accurate temperature, they would rightfully think us insane.

How would you propose to measure the t/c voltage and convert to temperature without a processor?

This one has the ref junction RTD on the same PC board as everything else.

formatting link

Reply to
jlarkin

Three.

Not true; roundoff errors for things like high-order polynomials are NOT minor concerns. I'm not math-phobic, so I often propagate an error estimate through the formulae. The maybe-four-digit accuracy of a thermocouple requires the coefficients to be specified to seven or eight decimal places. It's a tad ugly.
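The precision point is easy to demonstrate. A sketch using the ITS-90 type K reference-function coefficients for -270..0 C quoted later in this thread (the rounding experiment and the Horner evaluation are mine, not NIST's):

```python
# ITS-90 type K reference-function coefficients for -270..0 C (mV),
# as quoted later in this thread.  The high-order terms are individually
# large at low temperature and only cancel if the coefficients are exact.
C_FULL = [0.0,
          0.394501280250E-01, 0.236223735980E-04, -0.328589067840E-06,
          -0.499048287770E-08, -0.675090591730E-10, -0.574103274280E-12,
          -0.310888728940E-14, -0.104516093650E-16, -0.198892668780E-19,
          -0.163226974860E-22]

def emf_mV(T, coeffs):
    """Evaluate sum(c_k * T**k) by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * T + c
    return acc

# Round every coefficient to 8 decimal places; c4 through c10 vanish.
C_ROUNDED = [round(c, 8) for c in C_FULL]

full = emf_mV(-200.0, C_FULL)     # about -5.891 mV, the tabulated value
bad = emf_mV(-200.0, C_ROUNDED)   # about -4.31 mV, off by over 1.5 mV
```

Eight decimal places of coefficient buys you millivolts of error, because the answer is a small difference of large terms.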

The polynomial coefficients from NIST had some typing errors; you might want to review those 'half page of code' items after looking here:

Your anguish at putting a sock over the thing is noted. Anguish at designing electronics... well, this is a support group for that; we'll all help.

Reply to
whit3rd

And the copper has to be the same. Different copper wire alloys can easily have 100-nV/K TC slopes.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Cubic splines are the bomb for this sort of job.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

We almost always use table lookup and interpolation at runtime. Some of our uPs don't have hardware float.
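A minimal sketch of that approach, integer-only so it suits parts without hardware float. The table values here are hypothetical, not a real thermocouple curve:

```python
# Runtime linearization by table lookup + linear interpolation, done
# entirely in integer arithmetic.  Table values are hypothetical:
# microvolts in, hundredths of a degree C out.
from bisect import bisect_right

TABLE_UV = [0, 397, 798, 1203, 1612]    # thermocouple output, uV
TABLE_CC = [0, 1000, 2000, 3000, 4000]  # temperature, 0.01 C units

def uv_to_centideg(uv):
    """Piecewise-linear interpolation using integers only."""
    i = bisect_right(TABLE_UV, uv) - 1
    i = max(0, min(i, len(TABLE_UV) - 2))   # clamp to table range
    du = TABLE_UV[i + 1] - TABLE_UV[i]
    dc = TABLE_CC[i + 1] - TABLE_CC[i]
    # rounded integer interpolation of dc * (uv - u0) / du
    return TABLE_CC[i] + (dc * (uv - TABLE_UV[i]) + du // 2) // du
```

On a small micro the same thing is a few lines of C with a `const` table in flash; the only care needed is that the intermediate product fits the integer width.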

Anguish? I don't recall much anguish. Our thermocouple and RTD gadgets (both acquisition and simulation) all work.

Probably the worst parts were in the RTD simulators. It's nontrivial, sometimes a real PITA, to accurately simulate a floating resistor. Some people use a lot of real resistors and relays, but that has issues too.

Reply to
John Larkin

Many years ago I went through this linearization exercise for an application using an embedded processor. While thumbing through an OMEGA temperature products handbook, I noticed the linearization tables for their thermocouples. I bet they are still available, and for RTDs too. IMHO, a lookup table is the easiest & fastest way to go.

Reply to
jjhudak4

It's only a problem if you're forming the polynomials as sums of powers.

You can approximate x**N using a polynomial of order N-2 to a relative accuracy of 2**(1-N). This is a cute result that follows directly from the explicit formula for Chebyshev polynomials--they have a maximum amplitude of 1 on [-1, 1], and the leading coefficient of Tn is 2**(n-1):

Tn(x) = 2**(n-1) * x**n + (terms of order n-2 and lower)

so x**n = Tn(x)/2**(n-1) + (terms of order n-2 and lower).

Now |Tn(x)| <= 1 on [-1, 1], so the dropped Tn term costs at most 2**(1-n).
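A quick numerical check of that economization result, using numpy's Chebyshev helpers (the order and grid here are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 8
# Express x**n in the Chebyshev basis; the T_n coefficient is 2**(1-n).
cheb = C.poly2cheb([0.0] * n + [1.0])

# Zero the leading T_n term; what's left is a polynomial of degree n-2
# (by parity, x**8 expands in even-order T's only).
trunc = cheb.copy()
trunc[-1] = 0.0

xs = np.linspace(-1.0, 1.0, 2001)
max_err = np.max(np.abs(xs**n - C.chebval(xs, trunc)))
# max_err equals 2**(1-n) = 1/128, since |T_n| <= 1 on [-1, 1]
```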

Reply to
Phil Hobbs

I didn't really want to know that.

Reply to
John Larkin

The last terms are probably covering for rounding error in the regression computation, and the few before that for imprecision in the input data.

--
  When I tried casting out nines I made a hash of it.
Reply to
Jasen Betts

Exactly what I said two days ago.

Use interpolating splines, however, since they pass through the control points; not B-splines (despite their advantage of smooth derivatives).
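A minimal sketch of the distinction, with made-up table points (scipy's CubicSpline is an interpolating spline; a least-squares B-spline fit would not hit the knots):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical (temperature C, emf mV) control points -- a smooth,
# monotonic curve for illustration, not a real thermocouple table.
T = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
V = np.array([0.0, 2.02, 4.10, 6.14, 8.14])

spline = CubicSpline(T, V)   # interpolating: passes through every knot
# A least-squares B-spline fit would only pass *near* the control
# points; for a definitive manufacturer's table that's the wrong tool.
```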

CH

Reply to
Clifford Heath


That's the point of the isothermal connections. As long as it is all isothermal other than the joint you are measuring, it doesn't matter what the other wires are; it all falls out of the equations. So for all the joints other than the isothermal connection, either keep them the same materials or keep them at the same temperatures.

--

Rick C.

  • Get 2,000 miles of free Supercharging + Tesla referral code -
    formatting link
Reply to
Rick C

Just use cubic splines instead of high-order polynomials. Otherwise, if you really have to, at least choose the nodes properly (Chebyshev, if possible). A set of equidistant points? This is all wrong. How dare you.

Best regards, Piotr

Reply to
Piotr Wyderski

Some of them are not even convergent over their claimed range of validity! I find it rather worrying that they used N=10 polynomial fitting combined with a numerical algorithm that was incapable of delivering accurate results.

Excel's own charting polynomial fit works OK on the same dataset out to N=6.

Here is the result of applying that more stable polynomial fit, N=4 and forced through zero, to the table of data in the link given elsewhere:

y = 3.0828574638606400E-11*x^4 - 9.1070253343747400E-08*x^3 + 3.1204293829190100E-05*x^2 + 3.9589043715018600E-02*x

It remains numerically stable for N=5 and N=6 but makes very little improvement to the quality of the fit.

Note that the above coefficients are decreasing fast enough that the resulting series converges to a sensible answer even at T=-270. Successive terms are ~1000x smaller, so the non-linearity corrections are convergent and sensible.

The ITS-90 table gives:

emf units: mV
range: -270.000 to 0.000, order 10

 c0  =  0.000000000000E+00
 c1  =  0.394501280250E-01
 c2  =  0.236223735980E-04
 c3  = -0.328589067840E-06
 c4  = -0.499048287770E-08
 c5  = -0.675090591730E-10
 c6  = -0.574103274280E-12
 c7  = -0.310888728940E-14
 c8  = -0.104516093650E-16
 c9  = -0.198892668780E-19
 c10 = -0.163226974860E-22

The above are clearly total gibberish when T=-270!

--
Regards, 
Martin Brown
Reply to
Martin Brown

This

formatting link
suggests that simple linear interpolation with just 32 segments gets you to within about 0.05 C.
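A rough check of that figure against the ITS-90 type K polynomial quoted elsewhere in the thread (negative range only; the segment count matches, but the -200..0 C range and the test grid are my choices):

```python
import numpy as np

# ITS-90 type K coefficients for -270..0 C (mV), quoted elsewhere in
# this thread.
C_K = [0.0,
       0.394501280250E-01, 0.236223735980E-04, -0.328589067840E-06,
       -0.499048287770E-08, -0.675090591730E-10, -0.574103274280E-12,
       -0.310888728940E-14, -0.104516093650E-16, -0.198892668780E-19,
       -0.163226974860E-22]

def emf(T):
    return np.polyval(C_K[::-1], T)   # polyval wants highest power first

# 32 equal segments over -200..0 C, checked on a fine grid.
knots = np.linspace(-200.0, 0.0, 33)
grid = np.linspace(-200.0, 0.0, 20001)
err_mV = np.max(np.abs(np.interp(grid, knots, emf(knots)) - emf(grid)))
# err_mV comes out around 1e-3 mV, i.e. a few hundredths of a degree
# at type K sensitivities -- consistent with the ~0.05 C figure.
```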

Reply to
Peter

Pure chronological snobbery. ;)

You want to use least-squares cubic splines to fit data, though--both X and Y of each knot are fit parameters. That gives you a useful amount of noise rejection without causing artifacts the way a long polynomial does. Nelder-Mead works well for finding the best fit. I usually run an interpolating spline through a subset of the data points for an initial guess.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

It's just plain stupid to use a long polynomial in powers of T, on account of the creeping linear dependence of the basis functions. Search for "Hilbert matrix".
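The ill-conditioning is easy to see numerically. A small sketch (the [0, 1] interval and monomial basis are illustrative):

```python
import numpy as np

# Normal-equation least-squares fitting with the monomial basis
# 1, x, x**2, ... on [0, 1] produces the Hilbert matrix
# H[i][j] = 1/(i+j+1), whose condition number explodes with order.
def hilbert(n):
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

conds = [np.linalg.cond(hilbert(n)) for n in (4, 8, 12)]
# roughly 1e4, 1e10, and on the order of 1e16 -- at which point the
# computed condition number is itself unreliable in double precision,
# which is rather the point.
```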

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Yes, when fitting measured data. But we were talking about manufacturer's theoretical curves. By definition they are not noisy data, they're supposed to be definitive. If you're not going to calibrate to a better standard, your curves should hit all the points.

CH

Reply to
Clifford Heath


Sadly, they are defined by experimental data, which is always noisy.

And since they are printed numbers of finite length, there is also rounding error.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

+1

I know that and you know that, but unfortunately engineers do not. A useful sanity check: if the high-order coefficients are not decreasing faster than (1/MAXRANGE)^N, you are in serious trouble.
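That sanity check is mechanical enough to automate. A sketch (the helper name is mine), applied to the head of the ITS-90 coefficient list quoted earlier:

```python
def terms_decreasing(coeffs, max_range):
    """Return True if |c_n| * max_range**n shrinks as n grows (c0
    ignored).  If it doesn't, the fit relies on cancellation between
    huge terms and is fragile."""
    terms = [abs(c) * max_range**n for n, c in enumerate(coeffs)]
    return all(a >= b for a, b in zip(terms[1:], terms[2:]))

# First few ITS-90 type K coefficients, from the table quoted earlier:
ITS90_K_HEAD = [0.0, 0.394501280250E-01, 0.236223735980E-04,
                -0.328589067840E-06]
# Over a 270-degree range the x**3 term is already bigger than the
# x**2 term, so the check flags trouble:
flagged = terms_decreasing(ITS90_K_HEAD, 270.0)   # False
```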

I am astonished that ITS-90 contains such complete and utter junk. Even back then it was not beyond the wit of man to rescale the problem onto -1 to 1 and then use Chebyshev polynomials, or better, rationals.

I suspect somehow we did the same course (or the same course material).

I have never been all that keen on splines for wide-range calibration, since most of the systems I needed to calibrate had well-behaved deviations from ideal (at least if you did the calibration properly). There were a very limited number of nearly monoisotopic elements to choose from, and we needed 6-sig-fig calibration from 7 amu to 250 amu.

Actually, I was curious and just modelled it in Excel, using the tables referenced elsewhere to determine the fit. I think what they did was almost certainly a least-squares Chebyshev fit (which converges OK) and then converted the coefficients to engineering polynomials, which do not. I can reproduce their curve shape almost exactly, except that my early Gaussian-shaped misfit residual is centred on 145C rather than 169C. The rest of the curve is relatively well behaved, apart from the extreme end.

It stems from the non-linear behaviour near absolute zero and near the melting point: fitting a hockey-stick curve concentrates power in one of the highest available polynomial coefficients, ~T6 in this case.

ANOVA shows there is almost no point in going beyond T5.

Term  Residual   Coeff_N       ChiSq        MaxError
V     raw data                 1445199.38   54.886
T0    Vres0       27.98034061  357227.8113  27.98034061
T1    Vres1       27.72757892  248.24666    0.821919528
T2    Vres2      -0.515081267  72.94431375  0.374740382
T3    Vres3      -0.312621842  2.039427885  0.077484025
T4    Vres4       0.020342186  1.760139883  0.094468194
T5    Vres2       0.038069482  0.774091766  0.056627856

Where T0 = 1, T1 = x = (2T/1372) - 1, and T[n] = 2x*T[n-1] - T[n-2].
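That recurrence transcribes directly into code. A sketch using the T0..T5 coefficients from the table above, cross-checked against numpy's Chebyshev evaluator:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_eval(T, coeffs, t_max=1372.0):
    """Sum c_n * T_n(x) with x = (2*T/t_max) - 1, via the recurrence
    T[n] = 2*x*T[n-1] - T[n-2], with T[0] = 1 and T[1] = x."""
    x = 2.0 * T / t_max - 1.0
    t_prev, t_cur = 1.0, x
    total = coeffs[0] * t_prev + coeffs[1] * t_cur
    for c in coeffs[2:]:
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
        total += c * t_cur
    return total

# T0..T5 coefficients from the fit table above.
coeffs = [27.98034061, 27.72757892, -0.515081267,
          -0.312621842, 0.020342186, 0.038069482]
# cheb_eval(T, coeffs) agrees with numpy's chebval at the same x.
```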

The T4 term isn't doing much good either. The worst case deviation can be improved by deleting it but with a larger least squares error.

If you push too hard and use N=10 this is what happens:

Term  Residual   Coeff_N       ChiSq        MaxError
V     raw data                 1445199.38   54.886
T0    Vres0       27.97517699  357230.2563  27.97517699
T1    Vres1       27.72827116  250.5769622  0.817448158
T2    Vres2      -0.526050181  72.76262183  0.374846308
T3    Vres3      -0.311791205  1.991300561  0.086330497
T4    Vres2       0.006474322  1.882873757  0.091531603
T5    Vres2       0.039344529  0.878821435  0.052193998
T6    Vres2      -0.029108975  0.149404482  0.030724305
T7    Vres2       0.003952056  0.127703384  0.033284386
T8    Vres2       0.00969461   0.096817906  0.025513741
T9    Vres2      -0.006133809  0.071203158  0.019940176
T10   Vres2       0.007501157  0.032705474  0.012619076

Excel's solver doesn't get the absolute optimum least-squares solution.

T7 through 10 offer no worthwhile improvement in fit whatsoever.

For some calibration purposes equal ripple is better.

Reply to
Martin Brown
