High fashion crystal oscillator?

On Jan 18, 2019, Kevin Aylward wrote (in article):

math

going to work, unless small-signal approximations are good enough.

I know people with access to these.

I bet that Demir also has access to these tools at Bell Labs, and the reason

to figure it out. He might appreciate an email pointing out a possible cause.

By "shooting method" I assume that you mean a standard way to solve differential equations. It is certainly often the only way for nonlinear differential equations.

Joe Gwinn

Reply to
Joseph Gwinn

I don't understand your point.

The Hajimiri-Lee approach assumes linear, but time-varying, components. So, sure, that doesn't work.

This shows why, without the maths :-)

formatting link

The Collins paper assumes that only noise near the crossing point matters. This is not true with non-linear capacitance (or time constants), where the waveform depends significantly on that capacitance. If the capacitance changes due to noise, far away from the switching point, it changes the time that it takes for the signal to reach the next crossing point. This means that all noise, everywhere, generates jitter/phase noise.
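
A quick numeric illustration of that point (my own sketch with made-up values, not from the Collins or H-L papers): an RC node charging toward a threshold, where a 1% capacitance change applied only early in the ramp, long before the crossing, still shifts the crossing time.

Vdd, Vth = 1.0, 0.5
R, C0 = 1e3, 1e-9          # 1 kohm, 1 nF -> tau = 1 us
dt = 1e-10                 # forward-Euler time step, s
eps = 0.01                 # 1% capacitance perturbation ("noise")
t_early = 0.2 * R * C0     # perturbation only during the first 20% of a tau

def crossing_time(perturbed):
    v, t = 0.0, 0.0
    while v < Vth:
        C = C0 * (1.0 + eps) if (perturbed and t < t_early) else C0
        v += (Vdd - v) / (R * C) * dt   # dv/dt = (Vdd - v)/(R*C)
        t += dt
    return t

t_nom = crossing_time(False)
t_pert = crossing_time(True)
print(t_nom, t_pert - t_nom)   # the crossing moves (~2 ns here) even though C was nominal near it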

This is so trivially obvious that it is somewhat stunning that the H-L approach gained, and retains, so much credibility.

Well, I do, as does any IC designer.

Again, you are confusing me here. Demir's approach *is* the method that shows that H-L is off by 50 dB. That is what Demir was specifically addressing.

The Demir paper just didn't explain *physically* why/how, i.e. in a way we engineers could understand. It was all maths. "Orthogonal perturbation to the tangent of the limit cycle" isn't actually intuitive as to what it means.

Yes.

formatting link

The shooting method is another way to get the steady-state time-domain response. It solves in the time domain, whereas harmonic balance does it in the frequency domain. It gives a "cleaner" waveform: no harmonic ripples on square waves.

Phase noise calculations don't care how the steady-state waveform is calculated. They just use that result to compute the effect of noise at every point in the waveform and feed their own number cruncher.
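
For what it's worth, a minimal sketch of the shooting idea in Python (a hypothetical driven one-node nonlinear RC, nothing to do with any particular simulator): pick the initial condition so that the state repeats exactly after one drive period.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Hypothetical circuit: sine drive through R into a node loaded by C and a diode.
R, C, Is, Vt = 1e3, 1e-6, 1e-12, 0.025
f = 1e3
T = 1.0 / f                    # period of the drive = period of the steady state

def dv_dt(t, v):
    vin = np.sin(2 * np.pi * f * t)
    i_r = (vin - v) / R                  # current in through the resistor
    i_d = Is * (np.exp(v / Vt) - 1.0)    # nonlinear diode current out to ground
    return (i_r - i_d) / C

def residual(v0):
    # Shooting residual: integrate one period and compare end state to start state.
    sol = solve_ivp(dv_dt, (0.0, T), [v0[0]], method="LSODA", rtol=1e-9, atol=1e-12)
    return [sol.y[0, -1] - v0[0]]

v0_ss = fsolve(residual, [0.0])          # initial condition of the periodic steady state
print(v0_ss)                             # integrating from here gives one clean period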

-- Kevin Aylward

formatting link
- SuperSpice
formatting link

Reply to
Kevin Aylward

On 20.01.19 at 00:00, Kevin Aylward wrote:

I have made such a DMTD system to compare a "Space Hydrogen Maser" to a cesium standard, also in space. Being in microgravity makes it easier to concentrate on / interrogate only a small number of atoms. Easier than a fountain, and good for precision.

Low-frequency shielding is not much of a problem. Keeping the coupling areas small is key, and there is a lot of milled metal around to get rid of the heat.

The low frequency stages are handled in the digital domain anyway in a corner of a largish Virtex.

All the world is not a VAX, and all oscillators are not on a tiny chip. Collins-style filtering / slope gain requires serious, time/input-invariant filters, and you do not have them on your chips. There is no way to get the 100 MHz signal properly filtered and also the 1 Hz version without resorting to external resources. A 30 pF SiO2 compensation capacitor was already a royal pain back in 741 times; the situation has not significantly improved. On-chip you have just small caps that vary with everything else: temperature, supplies, bias, amplitude. There are no inductors that allow easy gain at RF and no gain at 1/f. No gain in the 1/f region is a good thing, because then there is nothing to be up-converted to begin with.

There are no oscillator chips in a BVA oscillator.

The real problem with HL is that it does not give a recipe for design. In the best case you can see why your design failed.

That does not sound like engineering; it looks more like staggering blindly through the solution space and keeping the best one it happens to find.

regards, Gerhard

Reply to
Gerhard Hoffmann


That's the first time I've seen Kevin Aylward put in the same category as John Larkin. That would be ruder to Kevin than it is to John, if Kevin weren't working in a much more restricted design space.

--
Bill Sloman, Sydney
Reply to
bill.sloman

-----Original Message-----

Ahmmmm.......

Simply not true. Maybe for AI that's an adequate description.

Sure, one can do it "blindly", but that isn't what is done.

I have explained this many times before. That view shows a lack of understanding of how modern designs with billions of transistors are actually done today. Running 100,000s of simulations is the only way to produce a reliable product that is competitive in the market place. It's what everyone designing ASICs does. Period. End of story.

For starters, even a single-transistor amplifier is analytically intractable in closed form. Even the idealised one-transistor Widlar source gets you into transcendental functions. Add in Early effect and base spreading resistance, and you're knackered.

formatting link
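
As a concrete example of the transcendental bit (my own sketch with made-up values, not taken from the linked page): the ideal Widlar relation Vt*ln(Iref/Iout) = Iout*Re has no closed-form solution for Iout, so you solve it numerically.

import numpy as np
from scipy.optimize import brentq

Vt = 0.02585          # thermal voltage at ~300 K, V
Iref = 1e-3           # programming current, A
Re = 5e3              # emitter degeneration resistor, ohms

# Ideal Widlar source (no Early effect, no base resistance): Vt*ln(Iref/Iout) = Iout*Re
f = lambda Iout: Vt * np.log(Iref / Iout) - Iout * Re

Iout = brentq(f, 1e-9, Iref)   # bracket the root between ~0 and Iref
print(Iout)                    # roughly 20 uA for these values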

It is why SPICE was invented: to solve vast arrays of non-linear systems of equations that are impossible to solve in the mind, Stephen Hawking notwithstanding.

For example, an optimised LDO circuit with 50 transistors has no chance of being reliably designed on paper, today. That's just a fact.

Let's look just at stability. Small-signal analysis isn't enough. The loop phase/gain changes all over a transient pulse waveform, such that small-signal analysis often fails to show real instability. Transient is the only way to ensure a reliable design. "Sound engineering" is designing for worst case over all process variations and operating conditions. That is simply impossible without running large numbers of simulations, e.g.

formatting link

So, for example, an LDO needs to satisfy noise, stability, regulation... over all process conditions and temperatures.

Just considering max/min extremes: it's max/min RC, max/min npn, max/min pnp, max/min nmos, max/min pmos, max/min vsupply, max/min temp, max/min rload, max/min cload. That's 2^9 = 512 combinations for starters.
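
Counting those corners explicitly (a trivial sketch, just to show where the number comes from):

from itertools import product

# The nine max/min axes listed above.
axes = ["RC", "npn", "pnp", "nmos", "pmos", "vsupply", "temp", "rload", "cload"]

corners = list(product(("min", "max"), repeat=len(axes)))
print(len(corners))                   # 2**9 = 512 corner runs, before any component sweeps
print(dict(zip(axes, corners[0])))    # e.g. everything at "min"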

Now, say there are two compensation caps and two compensation resistors, and you want to locate the optimum values for those components over all those operating conditions. Let's say the initial sweep of values runs from, say, 10 ohms to 100k and 1 pF to 100 pF, at 100 ohm and 5 pF steps; that's 10,000s of runs. Then, after homing in on the optimum solutions, runs at finer steps are made.

It is standard to get one-off corners that are unstable, where moving the component a small amount makes another corner unstable. It is usually a fine balancing act. Sure, if one only wanted 1 Hz BW and 20 dB PSRR, stick a 1 nF cap on a node. Now try and sell the part with that performance.

Anyone that actually does ASIC design is going to be ROTFLMAO at the idea that this is some sort of idiot's way of designing. It is mandatory at any mainstream commercial company. It's part of the sign-off process. There is no other way of doing it.

Of course, to design the LDO, you need to understand topologies qualitatively: how the number of stages, device sizes (W/L), voltage ratings, etc. affect the performance specifications being targeted. You can't start from a blank slate.

You appear to be confusing a designer who knows the one or two places to connect a compensation capacitor, how noise varies with current, how BW varies with current and capacitance, and when to use a folded cascode or an emitter follower, with a monkey at a typewriter.

The method we all use is the Darwinian algorithm of replication, random variation, and selection, because, embarrassing or not, that is how the brain actually works. This algorithm is not the same one the monkey uses, despite creationists believing it is.

Kevin Aylward

formatting link
- SuperSpice
formatting link

Reply to
Kevin Aylward

To put it another way:

"Correlation does not imply causation", but it strongly hints at it. And enough correlations (and non-correlations) between all known parameters of the system, together, can prove causality between the few variables that are of interest.

Most crap science you see reported in the news is just finding correlations, with a few important and well-designed studies going to the trouble of actually proving their point (where "proof" is a measure of statistical confidence, as it is in all experimental sciences).
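
A small illustration of what that statistical-confidence figure looks like in practice (synthetic data, hypothetical 0.3 slope; not from any real study):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.3 * x + rng.normal(size=200)   # a weak real effect buried in noise

r, p = pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.3g}")   # p is the confidence number a study would report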

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Design 
Reply to
Tim Williams

You got that wrong, as so often. I don't do ad hominem attacks; we discuss technical issues, with maybe pointed words but with respect, on a somewhat elusive theme where even both of us might come under fire from accepted gods like Rohde, each of us for different reasons.

And JL is one of the few here who posts interesting stuff with technical content. I don't have much need for politicians.

regards, Gerhard

Reply to
Gerhard Hoffmann

I agree.

Engineers use statistics, for the most part, correctly, and have total control over the variables in the simulations. They don't work with an agenda of "this is the answer I want; what tests can I do to create that result?" An engineer goes, "well, that looks like the correct result; what do I have to do to break it?"

If the product doesn't work when it's built, it's the $hit hitting the fan.

-- Kevin Aylward

formatting link
- SuperSpice
formatting link

Reply to
Kevin Aylward

Yes. I try my best to keep personal comments out of it, and as such, I won't comment on Bill's approach shown here...:-)

-- Kevin Aylward

formatting link
- SuperSpice
formatting link

Reply to
Kevin Aylward

Comedy depends on getting stuff wrong in a particular way. Some people do take comic riffs seriously - they shouldn't.

It happens.

His stuff is less interesting if you've done much the same thirty years earlier. The most interesting part of his technical content is the tricks he's missed.

--
Bill Sloman, Sydney
Reply to
bill.sloman

On Jan 19, 2019, Kevin Aylward wrote (in article):

As I said, something has drifted into circularity. What triggered my comment was the point that modeling nonlinear limiters using linear theory will get one 50 dB errors, which would seem to be true almost by default.

Leeson for the backs of envelopes, and expensive simulators for detailed analysis.

Yep.


Yes, but do you think that Demir knows the physical reason for the 50 dB error, and if not, that he does not care to know?

at the same place on every run around the limit cycle.

As for the parallel debate on the necessity of Monte Carlo simulation for design sign-off, I completely agree. In my experience, people work ideas out on paper, but develop and finalize the design in simulation. This is true in both the electronic and mechanical domains, not to mention cooling and heat flow.

Joe Gwinn

Reply to
Joseph Gwinn

On Jan 19, 2019, Gerhard Hoffmann wrote (in article ):

Interesting. References?

In analog hardware, the problem with 1 Hz beatnotes is that the current spreads out in the ground plane and causes synchronous feedback to the oscillators being compared, slightly modulating them. Even masses of milled metal don't fully prevent this. When looking for effects below the -100 dB level, even tiny back-coupling can be devastating.

Solutions include multiple isolated ground planes, to confine the currents, bridged by RF transformers and differential or optical links spanning the moats between isolated areas.

Making the 1 Hz beatnote a digital dream in the FPGA - no messy 1 Hz analog stuff.
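
A toy version of that digital beatnote, in case it helps (made-up, scaled-down frequencies, nothing to do with the actual FPGA implementation): both oscillators are mixed against a common offset LO and the phase difference is read from the beats, so the LO's own contribution cancels in the difference.

import numpy as np

fs = 1e5            # sample rate, Hz (scaled down for the sketch)
f0 = 10e3           # nominal oscillator frequency, Hz
df = 1.0            # LO offset -> 1 Hz beat
AVG = 5000          # boxcar length used to remove the 2*f0 mixing image

t = np.arange(0, 1.0, 1 / fs)
a = np.cos(2 * np.pi * f0 * t)                 # reference oscillator
b = np.cos(2 * np.pi * f0 * t + 1e-3)          # device under test, 1 mrad ahead
lo = np.exp(-1j * 2 * np.pi * (f0 + df) * t)   # common "transfer" LO

def beat_phase(x):
    z = np.convolve(x * lo, np.ones(AVG) / AVG, mode="same")  # mix down, low-pass
    return np.unwrap(np.angle(z))

dphi = beat_phase(b) - beat_phase(a)     # the 1 Hz ramp and common LO noise cancel
print(np.mean(dphi[AVG:-AVG]))           # ~1e-3 rad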

It is blind for sure, but with a computer working overnight to explore the solution space, one can handle analytically intractable problems.

The Monte Carlo method was invented by physicists trying to solve equations too complicated to solve any other way.
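
The flavour of it, in one short sketch (my example, a toy stand-in for the genuinely intractable integrals they faced): estimate the volume of the unit ball in 10 dimensions by throwing random points into the enclosing cube.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
pts = rng.uniform(-1.0, 1.0, size=(n, 10))
inside = np.count_nonzero(np.sum(pts ** 2, axis=1) <= 1.0)   # points inside the ball
volume = (2.0 ** 10) * inside / n                            # cube volume * hit fraction
print(volume)     # analytic value is pi**5 / 120 ~= 2.55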

Joe Gwinn

Reply to
Joseph Gwinn
