KiCad Spice, Anyone Tried It?

Ahhh, the old bang a bit on the I/O port trick to get timing measurements in code threads... brings back memories... then there is the write different values to a DAC trick. Or use a logic analyzer.

Used it on more than one occasion to convince the SW types that 'yes, your code is not correct and my CPU board really does work'.... I *really* like my Keysight 4164.... :)

Reply to
jjhudak4

That's the sort of example DB has in mind, I think. Polynomials do have certain limitations as fit functions, mostly having to do with fitting functions that don't look like polynomials--things with discontinuities or asymptotes, for instance. No matter what you do, you can't fit 1/x or exp(-x) over any reasonably wide range with a polynomial.

The reason is that if you apply Taylor's theorem with remainder, the relative error gets much worse as |x| increases. For an empirical fit, extrapolating outside the fit region makes it worse, fast, because the higher order terms increase without bound.

However, for gentle nonlinearities such as ADC INL and RTDs, where the high order terms are easily bounded and you don't extrapolate, polynomials are terrific, as you say.

The fact that the thermocouple fit functions are ridiculous has nothing to do with the mathematical properties of polynomials per se. Orthogonal polynomials are super useful for representing functions, and in fact expanding a function in Chebyshev polynomials is identical with the real-valued Fourier transform of the same function evaluated on a warped X axis.

The big problem is with the representation in terms of the basis set {X^N}, which is badly ill-conditioned, so that as |x| becomes large, the number of decimal places required to maintain accuracy explodes.

Forman Acton's classic "Numerical Methods That Work" is still an excellent read on that stuff. It's full of lore, but as readable as a young adult novel.

Cheers

Phil Hobbs

Reply to
pcdhobbs

Unwise because it makes you look ignorant and/or foolish and/or arrogant.

"Better to keep your mouth closed and be thought a fool than to open it and demonstrate you are a fool".

What's the relevance of that to the subject being discussed, viz modern large software systems?

Basically it is mere flak: true but irrelevant and designed to deflect attention.

Sorry, it has the opposite effect.

Reply to
Tom Gardner

Some of the thermocouple tables have two regions, with different coefficients. That's impressive, since thermocouple voltages change smoothly with temperature.

Nobody here wants to argue with the ITS equations. We either code them full-tilt, or sometimes use them to build lookup tables that we can interpolate into.

No comment.

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

Why would you discuss that in an electronic design forum? Take that somewhere else.

Good book:

formatting link

Pretty smart guy.

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

Software and electronics have been completely entwined for almost half a century.

You made globally applicable statements based on your understanding of a small part of the software ecosphere that existed decades ago. Technology has moved on since then.

Hence it is unsurprising that your statements were, um, inadequate.

I'm not sure what relevance that has to the similar characteristics of software and electronics.

I almost got to see Feynman in 1986, when he came to Cambridge to give a talk. But the queue was too long :(

Reply to
Tom Gardner

You're quite right, and it's a pointer to the real cause: the badness of {X**N} as a basis set.

If you take a long Taylor series around T=0K and do the math to centre it at T = 300K, you wind up with a lot of high-order terms of the form T**N x 300K**(M-N), some with very large coefficients. Even after collecting terms, that'll make the coefficients of powers of T move around quite a bit just due to roundoff. Powers of T really aren't the right basis set except in intervals that include absolute zero.

Shifted Chebyshev polynomials are the bomb for that, especially with the Clenshaw recurrence, which makes their evaluation very nearly as efficient as the usual Horner rule for the powers-of-X basis, and of course dramatically better conditioned in general.

I haven't tried it myself, so I'll happily defer to your experience within the temperature range you care about. IIRC there was a discussion here some months ago about the badness of high-order expansions in powers of X**N, at least in situations where the contributions of high order terms weren't provably small.

As a physicist, I've had orthogonal polynomials up the wazoo, so the badness of the basis set {X**N} is really ingrained. I haven't needed to do a really high accuracy functional approximation in some years, but when I do, my go-to technique is to use ratios of Chebyshev polynomials computed via FFTs of the function sampled on the Chebyshev abscissae. It's really pretty--I learned it from the first edition of Numerical Recipes over 30 years ago when I was young and keen. I coded it up and still have it.

You sample the function at the appropriate Chebyshev abscissae, FFT to get the Nth order Chebyshev expansion (where N >> the number of orders you need), then construct rational functions of the form

             sum(i=0 to N) A_i T_i(x)
f_N,M(x) = -----------------------------
           1 + sum(j=1 to M) B_j T_j(x)

By equating this to the computed FFT expansion, multiplying through by the denominator, and applying the Chebyshev orthogonality rules up to order N+M, you come up with a unique, generally well-conditioned expression for the numerator and denominator of the rational function. You can trade off N vs M to get the best accuracy, but in general functions with asymptotes fit best when M > N.

For functions I've used it on, this simple direct method gets within a decibel or so of the best minimax rational function approximations.

It's really a great read, despite being 50 or so years old, which is antediluvian for most computing books. I hope you like it.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Feynman was ingenious, which is to say he did unexpected things that worked.

John Larkin likes to think that the unexpected aspect is crucial.

I made it. Good talk, but not exactly life-changing.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

Congruences between large scale software "using modern techniques" and hardware? Interesting! Since you're obviously expert in both, I'd like to learn more.

What are they, exactly?

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Raise a port pin at the start of the routine, and drop it at the end. Maybe invert it a couple of times, at interesting points.

We did one test on a Zynq, dual 600 MHz ARMs running the usual Linux. One program just toggled a pin as hard as it could, and we looked at that on a scope to see how much other things suspended the loop. We'd see occasional long highs or lows, "long" being numbers like 20 us. I was impressed. The linux periodic interrupt only took a few microseconds.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

I've never used a logic analyzer. I think one of my guys occasionally uses a little USB analyzer pod. He can also build a LA into an FPGA and load a trace RAM with an event. We're doing that now to look at the startup transient of a very complex digital PLL, saving a couple thousand ADC and DAC and integrator samples in a burst.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

I've been waiting, with a basically sour attitude, for someone to try to reduce PCB-level electronic design to some sort of Verilog-like, C++-like, structured, abstract, object-oriented language. I think it has been tried, to some extent.

What ever happened to the NI attempt to reduce logic design to hooking up little boxes on the screen, the Labview for FPGAs thing?

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

d say

e wrong -

But the ITS equations are a temperature scale, and apply to all thermocouples of a type. Calibration includes instrument peculiarities that aren't known except after construction of the whole instrument. Not all calibrations have a continuous, well-behaved functional nature, but all polynomials do.

The general class of calibrations isn't automatically fit to a polynomial.

Reply to
whit3rd

That is an extremely important point, and you explained it well.

The polynomials given for things like thermistors and thermocouples are expressed in the power basis, whose higher-order coefficients get very small but can be very sensitive to errors. Orthogonal polynomials or other basis sets give more stable functions.

Reply to
David Brown

I liked his metaphor of having different tools in his toolbox, which he could whip out and use on other people's problems.

Crucial? Not usually.

I didn't expect it would have been - he was ageing at that point. The interest would have been a pale Erdos-number sort of thing.

Reply to
Tom Gardner

More of a jack of all trades and master of none, by choice. That has advantages and disadvantages.

That would take a /long/ time to discuss, especially if half the concepts are unfamiliar to the audience. To take /one/ example - how application server frameworks are designed and implemented:
- design patterns in the applications' components and the way they are composed to form an application
- design patterns used to communicate with the outside world
- antipatterns found in politics, development and design

It is a good topic for a conversation in pubs.

I normally start it by provocatively asking someone to distinguish between hardware and software. They normally claim it is obvious and come up with a definition. Then I introduce them to examples that don't fit their definition. Rinse and repeat.

Sometimes it can take a long time to realise what the proponents of a particular technique are doing, because they use unfamiliar terms to describe things that are standard in other fields.

Some examples:
- Jackson's Structured Programming
- Harel StateCharts
- many of the more esoteric UML diagrams
- Hatley and Pirbhai's Strategies for Realtime Systems Specification, as successfully used for the Boeing 777
- and, obviously, object oriented programming

Reply to
Tom Gardner

Yes, it isn't rocket science.

If the test point reflects the "idle" time, you can even put a voltmeter (preferably a moving-coil one!) on it to visualise the processor utilisation. Dirty, but quick.

Did you form an opinion as to what caused the pauses?

Did the "other stuff" manage/avoid interfering with the caches?

Reply to
Tom Gardner

I've used analogous techniques in telecoms applications running both in and out of JAIN servers. It was great for:
- astounding people at how fast/efficient my design strategy and code was compared with their previous products
- demonstrating to other companies exactly where their code was faulty
- thus avoiding CEOs and lawyers becoming involved :)

The biggest problem was getting softies to understand why the mean value is so crude, and that a 95% value was much more useful.

Reply to
Tom Gardner

And how do you know that you are raising it at the /real/ start of the routine, and dropping it at the /real/ end of the routine? I've seen people write code like :

    ...
    set_pin_hi_macro();
    x = big_calculation();
    set_pin_lo_macro();
    ...

without realising that the compiler (and processor, for more advanced cpus) can re-order a lot of that and make the timing seriously unrealistic.

Using pins and scopes to measure software performance is a useful technique, but you do have to understand how to avoid re-ordering issues and make sure you are measuring what you /think/ you are measuring. (This applies to any kind of instrumentation, not just pin/scope measurements.) Not all programmers understand the details here, and end up with something that looks okay but does not do what they want.

With care (especially processor affinity and interrupt control), you can get real-time tasks on an SMP Linux system running with extremely low latencies and jitter. But it is still very difficult to be /sure/ of the maximum latencies.

Reply to
David Brown

The same thing that happened to their idea of using Labview for anything real outside somebody's lab.

As an old pal put it, "Ah, Labview--spaghetti code that even *looks* like spaghetti." ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs
