Somebody probably already thought of this...

I have a pyrometer (a thermocouple and an analog gauge). The setup takes a fairly long time to settle at its final temperature, and I'd like a much faster response.

I was thinking that if whatever is reading the thermocouple analyzed the RATE of change of the temperature, it could figure out what the FINAL temperature would be. The time constant is a predictable thing, right?

I.e., it sees where the current reading is on the exponential "settling" curve and can therefore predict where it will end up.

Is this possible? Is this done?

Reply to
acannell

Yeah, they're called *anticipators* as in potators...

Reply to
Fred Bloggs

wrote in message news: snipped-for-privacy@d4g2000prg.googlegroups.com...

In my opinion the answer is yes.

To first order, it is a system with time constant tau:

T_final = tau * dT/dt + T

The only problem is that tau won't be the same in water as in free air.

Another problem is that dT/dt will be hard to extract from the noise.
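A minimal sketch of that formula in Python (the smoothing window, the function name, and the sample arrays are my inventions, and tau is assumed known for the medium you're in):

import numpy as np

def predict_final(t, T, tau, window=11):
    """Predict T_final = tau*dT/dt + T for a first-order probe.

    t, T   : sample times and raw temperature readings (arrays)
    tau    : probe time constant, same units as t (assumed known)
    window : smoothing length, because differentiation amplifies noise
    """
    kernel = np.ones(window) / window
    T_smooth = np.convolve(T, kernel, mode="same")  # knock the noise down first
    dTdt = np.gradient(T_smooth, t)                 # then differentiate
    return tau * dTdt + T_smooth

The moving average is the crudest possible answer to the noise problem; a longer window buys back noise rejection at the cost of some of the speed you were trying to gain.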

Reply to
vincent.thiernesse

A smaller thermocouple is the obvious choice, but it might not work (accurately) with your analog gauge.

No, it depends on the contact with the probe. Also, you are assuming that there is only one time constant and that the response is a simple exponential rise; IOW, you are assuming a certain model for the system, and that might not be very accurate.

Yes, possible, but not preferable.

Best regards, Spehro Pefhany

--
"it\'s the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

Not with pyrometers, but a few years back I wrote a least-squares predictor for thermal test results. It calculates tau, Tfinal and R^2. At about 0.2 tau the predicted result was within 5% of the actual result.

This was when I did a whole bunch of thermal tests on a large box with a ~4 hr thermal time constant, so I could do multiple experiments per day instead of one. It worked well.
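A sketch of that kind of predictor, using scipy's curve_fit for the least squares (the three-parameter single-exponential model and all names are my assumptions, not Terry's actual code):

import numpy as np
from scipy.optimize import curve_fit

def first_order(t, T_final, T0, tau):
    return T_final + (T0 - T_final) * np.exp(-t / tau)

def fit_settling(t, T):
    """Fit (T_final, T0, tau) to a partial settling record; return fit and R^2."""
    p0 = (T[-1], T[0], (t[-1] - t[0]) / 3.0)       # crude starting guesses
    popt, _ = curve_fit(first_order, t, T, p0=p0)
    resid = T - first_order(t, *popt)
    r2 = 1.0 - np.sum(resid**2) / np.sum((T - T.mean())**2)
    return popt, r2                                # popt = (T_final, T0, tau)

With data only out to ~0.2 tau the fit is poorly conditioned, so low measurement noise and plenty of samples matter.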

Cheers Terry

Reply to
Terry Given

Sure it's possible. In fact you can do a deconvolution and speed up the response as much as you like (provided you know the impulse response *very* accurately). The problem you run into is noise, due to having to apply high gain at frequencies where there isn't much signal.

Estimating the settling transient isn't as noisy as trying to speed up the response overall--you can do it using a least-squares fit of several samples to the (assumed known) impulse response, and fitting the height of the temperature step. That's a one-parameter fit, and quite well conditioned. The key is that you need to know the shape of the curve very accurately.
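Because only the step height is free, the least-squares answer is closed-form, a projection onto the known curve shape. A sketch (the single-exponential shape and the names are my assumptions; substitute your measured response):

import numpy as np

def fit_step_height(t, y, tau, T0=0.0):
    """One-parameter least-squares fit of the step height A in
    y(t) = T0 + A*(1 - exp(-t/tau)), with the shape assumed known exactly."""
    s = 1.0 - np.exp(-t / tau)            # the known settling shape
    A = np.dot(s, y - T0) / np.dot(s, s)  # closed-form least-squares solution
    return T0 + A                         # predicted final temperature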

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

This is certainly possible; however, if you try to speed things up by a factor of N, then all errors will also be multiplied by a factor of N.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

If the time constant, or more likely time constants, of your setup are stable and reproducible, then there are feedforward filter designs around to compensate for a step change. These are commonly used to compensate for the non-ideal behaviour of the very high resistances (10^9 ohms) in Faraday current amplifiers for mass spectrometry.

They have to be very carefully trimmed to match the actual behaviour of each amplifier resistor combination and so unless you have very rigorous control of the system it is probably a lost cause. You might still be able to improve the time to settle a bit by applying an undercorrection with about the right time constant though.
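For a single dominant pole that kind of feedforward compensator is just a zero cancelling the probe's pole, and the undercorrection is a blend between the full inverse and the raw reading. A sketch (tau and alpha are assumptions you would trim to the actual probe):

import numpy as np

def compensate(y, dt, tau, alpha=0.8):
    """Partially undo a first-order lag with time constant tau.

    alpha = 1.0 is the exact inverse (right only if tau is exact);
    alpha < 1.0 is the safer undercorrection.
    """
    a = np.exp(-dt / tau)                  # discrete pole of the lag
    x = np.empty_like(y)
    x[0] = y[0]
    # invert y[n] = a*y[n-1] + (1-a)*x[n]:
    x[1:] = (y[1:] - a * y[:-1]) / (1.0 - a)
    return alpha * x + (1.0 - alpha) * y   # blend toward the raw reading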

Swapping the thermocouple for an optical pyrometer would give the fastest response - no thermal inertia at all.

N.

It should not be quite that bad unless you do something silly.

Regards, Martin Brown

Reply to
Martin Brown

I think in this case the increase in errors will be larger than that. There is likely more than just one time constant at work.

You can get better settling without increasing the noise by removing as much of the RC-type filtering as you can and then applying an FIR filter. This is because an FIR can achieve the same cutoff without having to include data from a very long time ago.
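A sketch of the comparison (the cutoff, sample rate, and the choice of a moving average as the FIR are my assumptions):

import numpy as np
from scipy.signal import lfilter

dt, fc = 0.01, 1.0                        # sample period (s) and cutoff (Hz)
a = np.exp(-2 * np.pi * fc * dt)          # one-pole RC-style smoother
N = int(round(0.443 / (fc * dt)))         # moving average with ~same -3 dB point

x = np.ones(1000)                         # unit step input
y_iir = lfilter([1 - a], [1, -a], x)      # exponential tail: never quite settles
y_fir = lfilter(np.ones(N) / N, [1], x)   # FIR: exactly settled after N samples

The IIR output creeps toward 1 forever; the moving average hits 1 exactly at sample N, which is the finite-memory point above.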

Reply to
MooseFET

Yes, the error increases with the order of the function, and this is a fundamental property. I assumed the first-order contribution would be dominant.

To me, it looks like a good application for a Kalman-like estimator. Maybe Tim Wescott would suggest some better ideas.
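A sketch of such an estimator, assuming the probe really is first order with a known tau (the state model, noise variances, and all names are my assumptions):

import numpy as np

def kalman_final_temp(T_meas, dt, tau, q=1e-6, r=0.05**2):
    """Kalman estimator of the final temperature.

    State x = [T, T_final]; dT/dt = (T_final - T)/tau with tau known,
    T_final modelled as a slow random walk. q, r are process and
    measurement noise variances -- tuning knobs.
    """
    a = np.exp(-dt / tau)
    F = np.array([[a, 1 - a], [0.0, 1.0]])   # exact discretization of the model
    H = np.array([[1.0, 0.0]])               # we only measure T
    Q = np.diag([0.0, q])
    x = np.array([T_meas[0], T_meas[0]])
    P = np.eye(2) * 10.0
    est = []
    for z in T_meas:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                   # innovation variance
        K = (P @ H.T) / S                     # Kalman gain
        x = x + (K * (z - H @ x)).ravel()     # update
        P = (np.eye(2) - K @ H) @ P
        est.append(x[1])                      # running estimate of T_final
    return np.array(est)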

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Forests have died to make papers discussing the deconvolution problem. It's a member of the family of "ill-posed problems."

A faster thermocouple, or an IR thermometer, might be a better approach.

John

Reply to
John Larkin

Depends on the transfer function you're trying to deconvolve. Gaussians and suchlike, where the high-frequency information really goes away fast, are evil, evil, evil to deconvolve. Polynomial things are easier -- you can get a factor of 2 or 3 speed increase for a one- or two-pole rolloff pretty easily, at a cost of 6 to 20 dB of SNR in the worst case (where all the noise is added after the low-pass operation). That's sometimes a pretty good trade.
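A sketch of that trade in the frequency domain -- a Wiener-style regularized inverse of a one-pole response, where the constant lam sets how much SNR you give back for speed (all the numbers here are placeholder assumptions):

import numpy as np

dt, tau, lam = 0.01, 1.0, 1e-2            # sample period, pole, regularization
t = np.arange(4096) * dt
h = np.exp(-t / tau)
h /= h.sum()                              # one-pole impulse response, unit DC gain

x = np.zeros(4096); x[:1024] = 1.0        # a pulse the probe smears out
y = np.convolve(x, h)[:4096]              # the slow measured response (noise-free here)
H = np.fft.rfft(h)
G = np.conj(H) / (np.abs(H)**2 + lam)     # ~1/H where |H| is large, rolled off above
x_est = np.fft.irfft(np.fft.rfft(y) * G, n=4096)

With lam -> 0 this becomes the exact inverse and the noise gain blows up; larger lam is the SNR-for-speed trade described above.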

Other times, the thing is slow because its low frequency response has been artificially boosted, e.g. the carefully insulated pixels in my Footprints sensor. Deconvolving that gave a big SNR _improvement_ compared with just increasing the thermal conduction to make it faster--more signal and less thermal conduction noise (like Johnson noise).

Another time deconvolution helps a lot is if you're starting with a transfer function with a cusp or a discontinuity of some sort, which gives rise to ugly long settling tails. Back when I was a grad student, I got a factor of 2 improvement in the lateral resolution of a laser microscope by measuring the complex amplitude (mag and phase) vs position and applying various deconvolutions. Getting rid of the settling tail was a big win all by itself, and the noise gain of the filter was 1 dB.

I agree entirely with your basic point that there's no substitute for good data. It's only when sensor improvements are exhausted that deconvolution ought to come in.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

Maybe that's why we have such a wood pellet shortage right now ;-)

--
Regards, Joerg

http://www.analogconsultants.com/
Reply to
Joerg

You know what happens when you assume.

Some years back I designed a system with a very simple thermistor and heater servo. When I characterized the "plant", I found that it had a huge transport-delay-style phase lag. It also had more poles and zeros than you could shake a stick at. One that really surprised me depended on how the thermistor was epoxied in place. Epoxy on top of the thermistor turns out to be bad, because the heat has to flow through the thermistor to get there.

Using a model of the system and then correcting the model for any small errors it may have would likely give better results.

Reply to
MooseFET

Internal model control. Great stuff.

Cheers Terry

Reply to
Terry Given

Is "internal model control" the same as what I would call "adaptive model reference control", with continuous monitoring and statistical analysis of the fit of the model to measured system response, and continuous adjustment of the model for best fit?

Reply to
Glen Walpert


Still, the fundamental problem remains: the error grows in proportion to the order of the model. Different methods can only give more or less accurate estimates.

VLV

Reply to
Vladimir Vassilevsky


Nope. There is a great description of it in the CRC Press Control Systems Handbook. OTTOMH, the basic idea is to develop an ideal model of the plant, use the ideal model to provide feed-forward control, then apply a feedback controller to correct any errors. That is similar to MRAC without the adaptation, except that the implementation of IMC involves subsuming the model into the controller itself.
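For a first-order plant the whole structure fits in a few lines: run the model in parallel with the plant, feed back only the model error, and invert the model through a robustness filter. A sketch (the deliberate plant/model mismatch and the filter constant are my assumptions, not Terry's design):

import numpy as np

dt = 0.001
tau_m, tau_p, tau_f = 0.05, 0.06, 0.01   # model, actual plant, filter taus
am, ap, af = (np.exp(-dt / k) for k in (tau_m, tau_p, tau_f))

r = 1.0                                  # setpoint step
y = ym = ef = 0.0
for _ in range(2000):
    e = r - (y - ym)                     # only the modelling error is fed back
    ef = af * ef + (1 - af) * e          # robustness filter F(z)
    u = (ef - am * ym) / (1 - am)        # Q = F * (exact model inverse)
    ym = am * ym + (1 - am) * u          # internal model, runs in parallel
    y = ap * y + (1 - ap) * u            # the real, mismatched plant
# y settles to r despite the mismatch; shrinking tau_f trades robustness for speed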

I came across this at the turn of the century, when I was working on 250kW 3-phase regen rectifier control. We had a very low sample rate (IGBT Fsw) and a very small line inductor - 2% - so a 2% voltage error caused 100% current error, which made closing the loop and getting it all right quite tricky. I spent a lot of time working on dead-time compensation (DTC; an unrelated paper was published in IEEE TIA just recently which describes in detail how I did it), because I needed to very accurately control the inverter output voltage.

The low sample rate meant getting high bandwidth was quite tricky (hence doing ATAN calculations for pre-warping BLTs etc.), so I turned to IMC. My first IMC current controller performed as well without DTC as my sync-ref-frame PI controller did with DTC, but with higher BW. By getting smarter with the plant model, and tossing in DTC, I got very good results. Never again shall I use anything else :)

HTH

Cheers Terry

Reply to
Terry Given

Thanks, sounds like something worth learning. The CRC Control Systems Handbook is about to come out in a second edition, so I reckon I'll wait for it.

Glen

Reply to
Glen Walpert

the IMC section, IMNSHO, paid for the book.

Cheers Terry

Reply to
Terry Given
