EE rant

formatting link
probably does it better than you can.

There are a whole bunch of position sensors you can buy. A linear variable differential transformer (LVDT) is a pretty reliable way of sensing absolute position.
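[A minimal sketch of the usual ratiometric LVDT readout - synchronously demodulate the two secondaries and take (Va - Vb)/(Va + Vb), so the answer doesn't care about drive amplitude. The sample rate, excitation frequency, function names, and amplitudes are illustrative assumptions, not from the post:]

# Ratiometric LVDT readout sketch (illustrative only).
import numpy as np

fs = 50_000.0          # sample rate, Hz (assumed)
f_exc = 2_500.0        # excitation frequency, Hz (assumed)
t = np.arange(0, 0.02, 1.0 / fs)
ref_i = np.sin(2 * np.pi * f_exc * t)      # in-phase reference
ref_q = np.cos(2 * np.pi * f_exc * t)      # quadrature reference

def amplitude(v):
    """Synchronously demodulate one secondary; returns its amplitude."""
    i = 2.0 * np.mean(v * ref_i)
    q = 2.0 * np.mean(v * ref_q)
    return np.hypot(i, q)

def lvdt_position(va, vb):
    """Ratiometric position estimate, insensitive to drive amplitude."""
    a, b = amplitude(va), amplitude(vb)
    return (a - b) / (a + b)

# Example: secondary amplitudes corresponding to a core offset of +0.3 in ratio units.
va = 1.3 * np.sin(2 * np.pi * f_exc * t)   # secondary A
vb = 0.7 * np.sin(2 * np.pi * f_exc * t)   # secondary B
print(lvdt_position(va, vb))               # ~0.3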

Load cells frequently rely on strain gauges. They can be pretty good, but the LVDT is easier to understand.

So use a stacked pair of non-progressively wound toroids as your sensor. You drive one, and the only coupling to the second toroid is the current circulating in the fluid that threads both of them. No exposed surfaces to corrode.

It's a bit bulky and you have to completely immerse both of them so that you have a well-defined and stable current path through the fluid, but it is a neat solution when you can use it.

Reply to
Anthony William Sloman

No, the hydrogen atom is analytically solvable in the nonrelativistic picture. You don't need asymptotic methods for that. (I expect that they don't put art majors through all the higher math classes.)

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Yup. The power of stat mech is precisely in the asymptotics, though--that's how you get the thermodynamic quantities to be properly _extensive_. (That is, things like temperature and entropy density don't depend explicitly on N, the number of atoms in the system.)

When you go to compute the partition function for the given ensemble (canonical for fixed N, grand canonical for variable N), one of the very first moves is to apply a truncated Stirling asymptotic expansion for the gamma (factorial) function.
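[For the ideal gas in the canonical ensemble, that step looks like this - standard textbook form, not quoted from anyone in the thread:]

\ln N! = N \ln N - N + \tfrac{1}{2}\ln(2\pi N) + O(1/N)

so with Z_N = (1/N!)\,(V/\lambda^3)^N,

F = -k_B T \ln Z_N \approx -N k_B T \left[ \ln\frac{V}{N\lambda^3} + 1 \right],

and the free energy per atom depends only on the density N/V and the thermal de Broglie wavelength \lambda, i.e. it comes out properly extensive.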

The higher order corrections, such as the Mayer cluster expansion, are also asymptotic.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Once you've formed an intensity-only image on a detector, a lot of the possibilities are lost. It's hard to avoid in optical astronomy, of course, but in instruments you have more optical tools in the bag, notably interferometry, so linear deconvolution is more fruitful.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Why get out? Hot showers are a key component of design.

You have been insulted by Sloman, so are officially on your way to being an electronics design engineer.

Let me know if I can help.

Have you played with Raspberry Pi?

Reply to
John Larkin

That's what I mean, it's one of the few that is.

The math for introductory QM isn't horrible; I took AP calculus and was exposed to at least some differential equations in high school, y'know. Compared to classical EM it seems easier, really - does anyone actually _enjoy_ vector calculus?

Reply to
bitrex

That statement makes no sense. There are lots of academic papers about this method, with various kluges to keep the divides under control.

Deconvolution is an "ill-posed problem" so is publication-rich.

I built a roughly 40 ps TDR just for fun, as part of another proto board.

formatting link
It worked, although I haven't commercialized it yet. My idea is to make something that's fast but ugly, which isn't hard these days, and make it have beautiful step response by passing it through a software equalizer algorithm.

Here's my deconvolution thing:

formatting link
The yellow trace is the assumed ratty TDR step, purple is the filter impulse response, and the white trace is the convolved result.

This program will pretty-up some seriously nasty waveforms. It looks like it can, in real life, make a horror into a beautiful step with about half the 10:90 risetime of the original.
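[Not Larkin's program, but the frequency-domain version of the equalizer idea looks roughly like this: design a correction G ~ D/Y from the desired step D and the measured step Y, with a regularization term so near-zero bins don't blow up, then apply it. The toy waveforms, function names, and eps value are all assumptions:]

# Regularized spectral-division equalizer sketch (illustrative only).
import numpy as np

def design_and_apply(ratty, ideal, eps=1e-3):
    """Build and apply an equalizer that maps 'ratty' toward 'ideal'.

    eps trades noise gain against edge sharpness - too small and the
    near-zero bins of the measured spectrum blow up, too large and the
    corrected step stays slow.
    """
    n = len(ratty)
    Y = np.fft.rfft(ratty, n)                 # measured (ratty) step spectrum
    D = np.fft.rfft(ideal, n)                 # desired (clean) step spectrum
    G = D * np.conj(Y) / (np.abs(Y) ** 2 + eps * np.max(np.abs(Y)) ** 2)
    g = np.fft.irfft(G, n)                    # equalizer impulse response
    cleaned = np.fft.irfft(Y * G, n)          # equalized step (circular conv, for simplicity)
    return g, cleaned

# Toy data: an ideal step smeared by an RC-ish response plus a little noise.
rng = np.random.default_rng(0)
ideal = np.ones(256)
ideal[:64] = 0.0
smear = np.exp(-np.arange(40) / 8.0)
smear /= smear.sum()
ratty = np.convolve(ideal, smear)[:256] + 0.01 * rng.standard_normal(256)

g, cleaned = design_and_apply(ratty, ideal)
# Compare the 10-90% risetimes of 'ratty' and 'cleaned'; smaller eps sharpens
# the edge but amplifies noise, larger eps does the opposite.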

The program is fun to play with. Keep iterating and eventually things explode in cool ways.

Reply to
John Larkin

I did, although it was a bit of a culture shock at the time.

My physics course was extremely mathematical, and it was taught in the first-term maths-for-scientists course. It doesn't seem so bad looking back. Grad and div were easy enough and had some nice physical interpretations; curl was a bit of a handful. Ricci tensor calculus can still be a bit hairy.

I never did get the hang of Green's functions. I suspect it was the lecturer's fault. I could understand what they were and how they were supposed to work, but I never found a problem where using one made any sense.

Reply to
Martin Brown

Ah, okay, read too fast. Sorry about that.

Sure. The div, grad, curl thing is fun. You just need the cheat sheets of vector identities inside the front and back covers of Jackson. ;)

Once you get past undergrad stuff, quantum gets much, much more involved than electrodynamics. I had to drop my last quantum field theory class when my study partner (who'd cajoled me into taking the class so he'd have somebody to study with) bailed out on me for the shockingly frivolous reason that he needed to graduate. ;)

The Dirac equation and Feynman diagrams are fun too--I actually learned about them in a nonrelativistic many body physics class taught by my favorite physics prof, Sandy Fetter, who also taught graduate electrodynamics. Feynman diagrams are an organized way of writing out perturbation integrals of higher and higher order.

The main drawback of that formalism is that it tempts the mystically-inclined to talk about "virtual photons" and so on, whereas those only exist in the perturbation math and not in real life. (See e.g. "The Tao of Physics" by Fritjof Capra and "The Dancing Wu Li Masters" by Gary Zukav, may their tribes decrease.)

For instance, it's hard to understand how you could ever get an attractive force if all you're doing is bouncing virtual billiard balls off things.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

It's just an impulse response. We use them all the time in electronics.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

[quoting John Larkin: "That statement makes no sense. There are lots of academic papers about this method ..."]

That statement is perfectly sensible; the FFT algorithm has no mechanism to accept data with non-constant significance, which is obviously what happens when there's a divide-by-almost-zero step in the data processing. It's gonna give you what the 'signal' says, not what the 'signal' and the known signal-to-noise ratio tell you. That means using an FFT for the inverse is excessively noise-sensitive. There are OTHER ways to do a Fourier inversion that do allow the noise estimate its due influence.
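[One textbook way to give the noise estimate its due influence is a Wiener-style inverse rather than a bare 1/H; the notation here is generic, not something from the thread:]

\hat{X}(f) = \frac{H^*(f)}{|H(f)|^2 + N(f)/S(f)} \, Y(f)

where Y is the measured spectrum, H the system response, and N/S the noise-to-signal power ratio. Where the data are good (N/S small) this reduces to Y/H; where they aren't, the gain rolls off instead of dividing by almost zero.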

Reply to
whit3rd

Yeah, clamping on the sync was a really bad habit of some "engineers".

It sure was, while it lasted. My master's project at university was creating a CCD camera with a Philips CCD sensor, because their own version wasn't very good and could not be used for serious measurement work.

What's next for TV? Nothing, IMHO. It had its day and the world is moving on. Other than the evening news, the last time I really watched TV was ... well ... heck, it's so many years ago that I can't even remember.

The switch to digital pretty much killed it for where I live because it became unreliable. The topper was when some stations gave up precious VHF channels for UHF. That was not smart at all.

If I ever want to watch something interesting it is on the Internet or a DVD from the library. However, now that I re-started ham radio I haven't seen any movie in months. No time.

Reply to
Joerg

The problem has nothing to do with the FFT, and everything to do with what you're trying to do with it. Dividing transforms is a perfectly rational way to deconvolve, provided you take into account the finite-length effects and prepare the denominator correctly.

When you transform back to the time domain, if you do that step correctly, you get exactly the same thing you'd get by doing it in the time domain (apart from different roundoff).
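[A tiny self-contained check of that claim, assuming a noiseless convolution and a divisor whose transform has no zeros; the data and names are illustrative:]

# Spectral division with proper zero padding matches time-domain deconvolution.
import numpy as np
from scipy.signal import deconvolve

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])   # "true" step
h = np.array([1.0, 0.5, 0.25])                            # system impulse response
y = np.convolve(h, x)                                      # measured record, length 10

# Frequency-domain route: pad to the full linear-convolution length before
# dividing, so circular wraparound doesn't corrupt the quotient.
n = len(y)
x_fd = np.fft.ifft(np.fft.fft(y, n) / np.fft.fft(h, n)).real[:len(x)]

# Time-domain route: polynomial long division.
x_td, _rem = deconvolve(y, h)

print(np.allclose(x_fd, x))   # True
print(np.allclose(x_td, x))   # True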

Cheers

Phil Hobbs

Reply to
Phil Hobbs

But these days graduates do know the colors of the LGBT flag :-)

Reply to
Joerg

As if this was something new. I knew a bio-girl back then who liked to call herself HerrMann. In German you cannot modify THAT name into something more male in 8 letters. At the same time you could discuss with her the Q and phase-noise properties of dielectric puck oscillators.

Frankly, I find it much harder to understand being a true follower of a transmogrified Bronze Age weather god while claiming to be scientifically oriented at the same time. Cognitive dissonance must hurt.

Gerhard

Reply to
Gerhard Hoffmann
[about FFT/divide/inverseFFT deconvolution]

Think again; an FFT essentially implements least-squares fitting: the inverse transform reproduces the original data exactly, and zero difference is obviously the minimum of the sum of squares of differences.

But it's not correct if the standard deviations of the elements are not identical, because it IS minimizing the sum of squares of differences, rather than the (correct) sum of (squares of differences / sigma-squared of each element).
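[In symbols, restating that point with generic notation:]

unweighted:  \sum_i (d_i - m_i)^2

weighted:    \chi^2 = \sum_i \frac{(d_i - m_i)^2}{\sigma_i^2}

where the d_i are the data, the m_i the fitted (inverse-transformed) values, and the \sigma_i the per-point standard deviations; the two minimizations agree only when all the \sigma_i are equal.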

Reply to
whit3rd
