EE rant

It depends on where your interests lead you. I know, and have worked with (and currently work with a subset of), 7 physics graduates (masters and PhDs). They do the following:

1 teaches at a top-3 US university (physics).
1 became a photonics researcher/engineer at a research lab, then moved to a defense contractor.
1 became a digital and analog engineer, and later an engineering manager, at a company that makes high-end pharmaceutical analysis equipment (an office mate of mine in grad school).
2 became 'computer scientists' and 'principal researchers' at a federally funded R&D lab (teammates of mine).
2 became researchers at Livermore (invented ways to do laser etching of semiconductors); one of them then became president of an electronics company and made enough money to retire at 50, and the other hooked up with a VC company as a founding member and made enough money to retire at 55. (They both still 'work' and are way beyond financially secure.)

(They all had degrees from the top 10 academic institutions in the US, so name recognition, plus the fact that they are academically gifted people, played a big role.)
Reply to
three_jeeps

I see, in a "we can solve the hydrogen atom exactly & that's it" sense.

Reply to
bitrex

Advice that should be taken with a grain of salt, IMHO. If you think you know it all when you get your undergrad degree, you are badly mistaken. If you turn off your learning, you are outdated in less than 10 years. IMHO, graduate school tends to be a hedge against obsolescence. Also, getting connected with the right kind of company lets an EE develop and be successful. Working in a sweatshop-mentality, top-heavy company can stifle one's creativity and learning (as well as keep your talents from getting noticed).

It has been my experience over a number of (tens of) years that EE grads consistently get job offers with salaries in the top 30% of the offers made to graduates across the board at a given university. As a member of an EE department faculty advisory board I've seen these numbers for many, many years. It is one of a set of metrics used to evaluate the program, as well as for comparisons to other universities.

Reply to
three_jeeps

Yes that's the one. I don't understand much beyond part II, maybe someday, but the material about ODEs, difference equations, and asymptotic expansions is worth the price of admission alone.

I'm taking an online course in statistical mechanics. It's pretty cool, connecting the quantum-mechanics micro scale to the PV = nRT macro scale.

Reply to
bitrex

I use an ancient Corning lab hotplate (the kind with the magnetic stirrer) from eBay, with a piece of 1/2-inch aluminum jig plate on top of it. (It's about 6 x 9 inches, one of their standard sizes.) I have a cheapish Extech thermocouple thermometer (3-channel) to let me set the plate temperature, which should be about 250C.

Leaded ICs are a win, but some of them also have power pads. QFNs are really fun in high-vibration environments. :(

Cheers

Phil Hobbs

Reply to
Phil Hobbs

On 1/2/2023 2:08 AM, Jan Panteltje wrote: [...]

[...]

In other words, the school really sucked at predicting who could handle the work. I.e., their admissions office sucked. 😉

Reply to
Bob Engelhardt

Get an assortment of cheap surface-mount adapters.

formatting link
formatting link

I think building and testing circuits is engineering. We're not scientists. "Engineer" meant guys with crowbars and oily rags, not equation provers.

Power pads and dpak type things can be soldered with a good iron and maybe a bit of pre-heating with a heat gun.

I don't want any engineering techs. Engineers should be realtime hands-on. Solder. Probe around. See the scope waveforms changing yourself. Feel the parts to see if they are hot.

Reply to
John Larkin

Given an actual waveform a(t) and a desired waveform d(t), we can fix a to make d with an equalizer having impulse response e(t):

d(t) = a(t) ** e(t), where ** denotes convolution.

Finding e is the reverse convolution problem.

The classic way to find e(t) is to do complex FFTs on a and d and complex divide to get the FFT of e, then reverse FFT. That usually makes a bunch of divide-by-0 or divide-by-almost-0 points, which sort of blows up.
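
A minimal sketch of that classic recipe (Python/numpy assumed; the function and array names are invented for illustration), which reproduces exactly the blow-up described:

    import numpy as np

    def deconvolve_naive(a, d):
        """FFT both records, complex-divide, inverse-FFT -- the classic recipe."""
        A = np.fft.fft(a)
        D = np.fft.fft(d)
        E = D / A  # explodes at bins where |A| is zero or nearly zero
        return np.real(np.fft.ifft(E))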

I do it in time domain.

Reply to
John Larkin

Getting a proper surface mount heating station is a much better idea. Do it right rather than futzing around with half-baked stop-gaps.

<snipped the half-baked stop-gaps>

Some of us are both.

Engineers think about what they are doing. John Larkin doesn't, so he ignores that aspect of the job.

You need to think about what you do with the crowbar and the oily rag - and the numerous other tools that engineers use.

You don't "prove" equations - and only someone as ignorant as John Larkin would imagine that you might. Engineers can use them from time to time.

But getting a tool designed to do the job makes it a lot easier.

It was.

I did it a couple of times. It's pretty tedious too, but for tricky layouts it was quicker than persuading the printed-circuit layout draftsperson to get it right.

But quite a bit quicker and easier.

You've still got to review what they've done and take the time to persuade them to do it right.

yourself. Feel the parts to see if they are hot.

If you are too dumb to do it any other way, that is the only approach you can rely on.

Crawling beats running, if you can't run. It isn't quick.

Reply to
Anthony William Sloman

Not exactly; as any least-squares fit formula shows, a set of data points does NOT have to be equally weighted. The divide-by-almost-zero points might all be insignificant in the statistical-weight sense during that reverse FFT. After the division, you have to reconsider the FFT step: the d(omega) part of the inversion integral should be replaced by the reciprocal of the square of the expected precision (which is low for the result of a division by near-zero), and the results renormalized accordingly. Instead of just an FFT on a list, it has to be a weighted integral, with weights that are either forced to sum to 1 or similarly normalized.
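
A hedged sketch of that weighting idea (Python/numpy; the noise level sigma and the exact weight formula are assumptions): each bin's weight collapses toward zero where |A| is tiny, so the divide-by-almost-zero bins contribute almost nothing to the inverse transform.

    import numpy as np

    def deconvolve_weighted(a, d, sigma=1e-3):
        A = np.fft.fft(a)
        D = np.fft.fft(d)
        # statistical weight per bin: ~1 where |A| >> sigma, ~0 where |A| ~ 0,
        # i.e. the reciprocal-variance idea applied to the complex divide
        w = np.abs(A)**2 / (np.abs(A)**2 + sigma**2)
        A_safe = np.where(np.abs(A) < 1e-30, 1e-30, A)  # guard the divide itself
        return np.real(np.fft.ifft(w * D / A_safe))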

If there's a division by zero in the f-domain, why isn't there one in the t-domain?

Reply to
whit3rd

"Doctor, Doctor! It hurts when I go like this!"

"So don't go like that."

;)

It's quite possible to do deconvolution badly as you say, but it's generally not hard to do it well, and it's often very profitable.

Of course, how much that gets you depends on the situation. You just have to keep an eye on the noise gain. In my long-ago thesis work, I got a factor of two in resolution in an interferometric laser microscope by deconvolving the analytically-known transfer function to something more nearly rectangular.

The limitations had more to do with ringing than with noise gain or singularities.

On the other hand, trying deconvolution to undo the effect of a Gaussian lowpass is going to run out of gas very soon, because the noise gain grows faster than exponentially with bandwidth.
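
A quick numerical illustration of that claim (Python/numpy; the Gaussian form and cutoffs are assumed for illustration): the inverse of a Gaussian lowpass H(f) = exp(-(f/f0)^2/2) grows like exp(+(f/f0)^2/2), so the gain needed at the band edge climbs faster than any exponential in bandwidth.

    import numpy as np

    f0 = 1.0  # assumed Gaussian corner frequency
    for B in (1, 2, 3, 4, 5):
        inv_gain = np.exp((B / f0) ** 2 / 2)  # inverse-filter gain at f = B*f0
        print(f"deconvolving out to {B}*f0 needs gain ~ {inv_gain:.3g}")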

Cheers

Phil Hobbs

Reply to
Phil Hobbs

For starters, there are no divides.

Reply to
John Larkin

Sure, gain can't overcome extreme attenuation, or equivalently you can't create bandwidth much beyond what you already have. The noise explodes.

A good equalizer chip can take what looks like a brillo pad and make it into a nice eye diagram.

Reply to
John Larkin

On a sunny day (Wed, 4 Jan 2023 10:43:20 -0800) it happened Joerg <snipped-for-privacy@analogconsultants.com> wrote in <snipped-for-privacy@mid.individual.net>:

Broadcast studio cameras clamp on black (not on sync bottom); there were hundreds if not thousands of those circuits in my time, first with tubes and then with transistors. We had equipment/modules from Fernseh GmbH in those days, Philips Plumbicon color cameras, Ampex video recorders...

Analog video was fun.

I did this project to get my digital hands-on skills up to date, for hardware and software:

formatting link
Now we have DVB-S2 via satellite and DVB-T2 terrestrial here. A new standard every few years; everybody had to buy a new DVB-T2 box a few years back...

Wonder what's next :-) DVB-S2 is close to the Shannon limit, but I did get surprised again today:

formatting link

Reply to
Jan Panteltje

On a sunny day (Wed, 4 Jan 2023 10:30:35 -0800) it happened Joerg <snipped-for-privacy@analogconsultants.com> wrote in <snipped-for-privacy@mid.individual.net>:

Yes, too much mathematicians' work treated as God. Same for Einstein. In the case of gravity we need a mechanism, likely a Le Sage-like particle; it seems I can explain much of what we observe with that, including dark matter. And if EM radiation is a state of, and carried by, Le Sage particles, then uniting everything is not that hard; it comes naturally.

But I am but a neural net made of grey matter... Or all is linked by kwantuuum coupling... ;-)

We are just like an ant creeping up a wall, not knowing what the wall is made of, what it was built for, or who or what the architect is and what he intended by building it. Lots of things to discover!

Need to work on my create-life, make-your-own-dino kit.

Reply to
Jan Panteltje

Right, because in the frequency domain, the denominator becomes small. That causes time domain troubles too. Any operation that fails in the frequency domain can only succeed in the time domain if it's not really the same operation.

For instance, windowing an impulse response to a finite length is equivalent to convolving its transform with the transform of the window function, which can smear out misbehaviour.
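
A numerical check of that equivalence (Python/numpy; the sizes and the Hanning window are arbitrary choices): the spectrum of the windowed response equals the circular convolution of the two spectra, scaled by 1/N.

    import numpy as np

    n = 256
    rng = np.random.default_rng(0)
    h = rng.standard_normal(n)        # stand-in impulse response
    w = np.zeros(n)
    w[:64] = np.hanning(64)           # truncating window
    H, W = np.fft.fft(h), np.fft.fft(w)
    k = np.arange(n)
    # circular convolution of the two spectra, bin by bin
    conv = np.array([np.sum(H * W[(m - k) % n]) for m in range(n)]) / n
    assert np.allclose(np.fft.fft(h * w), conv)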

A lot of that is nonlinear, though. For instance, decision-feedback equalization (DFE) uses previous values of the accepted data to change the threshold voltage in real time. (It's nonlinear because it's the previous decisions, not the previous voltage values, that are used.)
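
A toy sketch of that DFE idea (Python; the tap values and function signature are invented): the slicer threshold for each new sample is shifted by a weighted sum of the decisions already made, not the raw voltages, which is what makes it nonlinear.

    def dfe(samples, fb_taps=(0.3, 0.1)):
        """Slice each sample against a threshold fed back from prior decisions."""
        decisions = []
        for v in samples:
            recent = decisions[-len(fb_taps):][::-1]      # newest decision first
            threshold = sum(t * d for t, d in zip(fb_taps, recent))
            decisions.append(1 if v > threshold else -1)  # hard decision, +/-1
        return decisions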

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Which is why no one apart from an EE who skipped all the advanced maths classes would ever try to do it that way.

Effective deconvolution algorithms have been known since the late 1970s, when computers became powerful enough to implement them. The first big breakthrough in applying nonlinear constraints, like positivity of a brightness distribution, was Gull & Daniell, Nature 1978, 272, 686-690 (the implementation was mathematically a bit flaky, but it still worked OK).

formatting link
Prior to that you would always have nonsensical rings of negative brightness around bright point sources, caused by the truncated Fourier transform.

Slightly later, more mathematically refined versions were widely used:

John Skilling & Bryan's Maximum Entropy Image Reconstruction

formatting link
Tim Cornwell & Evans' VM at the VLA

formatting link
Prior to that there were still some quite respectable linear deconvolution methods that involved weighting down the higher frequencies with a constraint (an additive, frequency-dependent term in the denominator) - effectively a penalty function that prevents wild changes between adjacent pixels by constraining the second derivative.
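
A sketch of that kind of linear constrained deconvolution (Python/numpy; the regularization weight lam and the second-difference operator are assumptions): the penalty shows up as an additive, frequency-dependent term in the denominator, as described.

    import numpy as np

    def deconvolve_constrained(a, d, lam=1e-2):
        n = len(a)
        A = np.fft.fft(a)
        D = np.fft.fft(d)
        L = np.fft.fft([1.0, -2.0, 1.0], n)  # discrete second derivative
        # |L|^2 grows with frequency, so high frequencies are weighted down
        E = D * np.conj(A) / (np.abs(A) ** 2 + lam * np.abs(L) ** 2)
        return np.real(np.fft.ifft(E))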

Later, Maximum Entropy deconvolution methods became routine and could solve very difficult problems, albeit at high computational cost. They were the way that deconvolved images from the flawed HST were made.

The fault in the primary mirror was determined using code from Jodrell Bank intended for adjusting the panels on the big dish for focus.

Feed-forward compensation for step changes in an input signal is as old as the hills. Mass spectrometers have used it since their invention. It is a one-trick pony and only works in very limited circumstances.

10^11 ohm resistors were anything but pure resistors.

There was a whole year when the one guy in the world who made the best ones finally retired and the new guy really hadn't got the knack.

Reply to
Martin Brown
