design of analog circuits using genetic algorithm

Absolutely. It took tens or hundreds of millions of years, with a tested population of probably billions, before anything robust enough to be called a distinct species got started: the archaea. It's not a process you could use to produce a working FPGA design.

However, if you include the FPGA connectivity rules and timing characteristics in the constraints, an emerging design would be transferable to another instance. I expect it would just take a lot longer to emerge.

The haphazard design of the machine is one argument against the possibility of people downloading their minds into a computer, for replication in an artificial brain at some time in the distant future.

In many senses you could view "higher" organisms as failed archaea/bacteria. They haven't had to do any great rearrangement for billions of years; we have to keep elaborating, Heath Robinson* style, just to keep going.

Anyone for Eukaraoke?

Paul Burke

* or Rube Goldberg over there?
Reply to
Paul Burke

In message , John Larkin writes

And so does circuit design. Although the intuitive creative step of defining the overall circuit architecture is still well beyond modern computational power, optimising component values in an existing design is now quite practicable, even on a PC, given enough time.

17 dimensions is no real challenge to modern optimisers.

No modern least squares (or 1-Norm) optimiser should ever diverge (and that was true of the good ones even a couple of decades ago). What tends to happen is that they get trapped in steep diagonal valleys or at local minima and never find the true global optimum. The solution should never be worse than the initial guess.

Simplex isn't too bad if you already have some idea of how big a range of parameter space you have to cover. Conjugate gradients will handle the most difficult problems fairly well given a suitable starting point and something like simulated annealing is about as good as it gets for global optimisation irrespective of the initial starting point. Genetic algorithms are similar to the latter, but rely on an ensemble of simulations with parameters that are allowed to breed according to their success rating. They are harder to make work than simulated annealing codes though fun to watch on toy problems. I reckon simulated annealing is easier to use than GA. YMMV.
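As a minimal sketch of the simulated annealing approach, in Python; the objective here (hit a 1 kHz corner frequency with an RC low-pass) and every value in it are made up purely for illustration:

import math, random

def cost(params):
    # Hypothetical objective: hit a 1 kHz corner with a simple RC low-pass.
    R, C = params
    f_c = 1.0 / (2.0 * math.pi * R * C)
    return math.log10(f_c / 1000.0) ** 2   # zero when f_c == 1 kHz

def neighbour(params, scale):
    # Perturb each value multiplicatively, since component values span decades.
    return [p * math.exp(random.gauss(0.0, scale)) for p in params]

def anneal(start, temp=1.0, cooling=0.995, steps=20000):
    current, best = list(start), list(start)
    c_cost = b_cost = cost(current)
    for _ in range(steps):
        cand = neighbour(current, scale=0.1)
        cand_cost = cost(cand)
        d = cand_cost - c_cost
        # Always accept downhill moves; accept uphill ones with Boltzmann probability.
        if d < 0 or random.random() < math.exp(-d / temp):
            current, c_cost = cand, cand_cost
            if c_cost < b_cost:
                best, b_cost = list(current), c_cost
        temp *= cooling   # cool down: gradually stop accepting uphill moves
    return best, b_cost

print(anneal([10e3, 1e-6]))   # start from a rough guess: 10 kohm, 1 uF

A GA would replace the single "current" state with a whole population, breeding the better-scoring parameter sets together; the extra bookkeeping is much of why I find annealing easier to drive.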

It should not be if you know how the free parameters are interrelated. Filter design is one case where diddling individual parameters in a naive one-dimensional search strategy will almost never get you what you want. There are specialised codes around for optimal filter design.
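To see why, here is a toy cost surface (made up purely for effect) with a steep diagonal valley, i.e. two strongly interrelated parameters. Adjusting either one on its own always looks worse, even though moving both together goes straight down the valley:

def f(x, y):
    # Steep diagonal valley: minimum at x = y = 1, walls rise fast whenever x != y.
    return (x + y - 2.0) ** 2 + 1e6 * (x - y) ** 2

def one_at_a_time(x, y, step=0.01, sweeps=1000):
    # Diddle one parameter at a time, keeping any move that helps.
    for _ in range(sweeps):
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if f(x + dx, y + dy) < f(x, y):
                x, y = x + dx, y + dy
    return x, y, f(x, y)

print(one_at_a_time(0.0, 0.0))    # stays put at (0, 0): every single-axis step is uphill
print(f(0.5, 0.5), f(1.0, 1.0))   # yet moving both together improves all the way to the minimum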

There is probably a faster way to do that, but if it is fast enough then fine.

Regards,

-- Martin Brown

Reply to
Martin Brown

Does it? What machine-executable rules would create the schematic of, say, a spectrum analyzer, or a cell phone, or a laser printer?

As far as I've heard, only trivial circuits can be optimized by evolutionary techniques, and some produce (and publish!) preposterous results. Some things, like filters, can be successfully diddled, but only starting from a very-close-to-working initial design.

Where did that initial guess come from? Where did the circuit topology come from?

You are talking about minor tweaking of the values of a network that already exists and is close to the desired performance. That's not circuit design, that's a tiny, specialized piece of it.

John

Reply to
John Larkin


We make one VME board that has 1100 parts, including two FPGAs and a uP running a few thousand lines of code. Machine-optimize that!

But "start with a decent guess" is 99.99999% of the problem in electronic design.

John

Reply to
John Larkin


I think that automated circuit design appeals to some, especially academics, who are uncomfortable with the reality of circuit design, namely that ideas bubble up from the human unconscious, by processes unknown, and that some people are good at it and others aren't.

Circuit design can be taught, as tennis can be taught, but there's no algorithm for either. Let them try something simpler first, like a tennis-playing robot.

John

Reply to
John Larkin

*Seventeen dimensions* is no problem? Riiight. That is, assuming you're doing a linear or nearly linear optimization, or you start with a decent guess. For more general problems with lots of 15-dimensional flat spots and local minima, you could be there for a while, even with an exaflop machine.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs


Same basic problem as with neural nets: who's going to go into production with a design that nobody understands? How can anyone have confidence in the silly thing working on all the relevant edge conditions--temperature, supply voltage, EMI, the odd bypass cap failure, process corners, ..... Good luck persuading your foundry to make you another lot if your unconnected gate doesn't work 'correctly'. I can hear the laughter all the way to NYC.

Conceptually a neat idea, if you haven't got the time or opportunity to learn the craft, and have a lawyer-like mentality that tends to look down on what it doesn't understand--"I may not know much about design, but I know what I like."

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

In message , John Larkin writes

Various tools embody some of the known domain-specific rules. But since you ask about partially automated design methods, the following and the references therein will do for a start (the ACM wants money to download the article, but the abstract and references are free to access).

[link]

Incidentally, although chess has rules, a human GM working with a decent computer engine (so-called freestyle chess) can still wipe the floor with even the strongest chess programs. They lack certain types of long-range planning that humans excel at (and we miss certain types of close-in blind spots). GM Kramnik left a mate in one on the board for Fritz in a game that he should have won, or at the very least drawn.

[link]

At least his temperament was much better suited to playing against a machine. Kasparov was convinced that IBM had cheated when the superior chess architecture played human-like moves. These days you would be hard pushed to find any serious PC chess engine that doesn't play the key moves in those Deep Blue games exactly right.

I am no great fan of evolutionary GA techniques. They are over-hyped, much like AI was in the '60s.

Global optimisers are a lot better these days than you seem to think. There is a lot of experience in the crystallographic community solving for structures and in other inverse problems that is directly applicable to this.

Humans are still needed to guide computer-aided design. We are way better at long-range planning and overall strategy. Where we fall down is on excessive minor detail and being swamped by all the permutations; even the best of us make fencepost errors sometimes.

Simulated annealing is a lot more powerful than that.

Regards,

--
Martin Brown
Reply to
Martin Brown

Not at all. A few dozen dimensions was about the limit two decades ago. These days I understand the bleeding-edge non-linear solvers can handle a few hundred free parameters and stand a good chance of finding a useful practical solution. No guarantee that it is the true global optimum, but a workable and useful solution nonetheless.

You can easily construct pathological cases that these codes cannot handle, but for a large number of important practical problems they are adequate. They may not find the true global optimum solution, but a good solution will often be good enough to use.

If you keep the simulation space for circuit design in the continuous domain, then search-direction optimisation codes will work nicely. If you insist on forcing all component values to E12 or E24 at the outset, then there will be problems. But even then simulated annealing will find a solution; it just takes longer.
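The round-at-the-end variant is trivial to bolt on: optimise in the continuous domain, then snap each value to the nearest preferred value. A small Python sketch (the E24 mantissa table is the standard series; everything else is illustrative):

import math

# E24 preferred-value mantissas.
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def nearest_e24(value):
    # Snap a continuous component value (ohms, farads, ...) to the nearest E24 value.
    # Note: values just under a decade boundary (e.g. 9.6) are really closer to the
    # next decade's 1.0; this simple version ignores that corner case.
    exponent = math.floor(math.log10(value))
    mantissa = value / 10 ** exponent
    best = min(E24, key=lambda m: abs(math.log10(mantissa / m)))
    return best * 10 ** exponent

print(nearest_e24(10.7e3))   # snaps to the 11 kohm E24 value

Snapping after the fact can of course push you off the continuous optimum, which is exactly why annealing directly over the discrete values, slow as it is, is sometimes worth the wait.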

I understand they are using some of these methods to help optimise the horrid design problem of the next generation of self powered RFID tags.

Regards,

--
Martin Brown
Reply to
Martin Brown

If you have 17 dimensions, then in one minute on an exaflop machine, you can explore at most (6*10**19)**(1/17) or about 14.6 values on each coordinate axis, over the full range of values available (whatever that may be).

Even for an unconstrained optimization, your problem doesn't have to be very pathological for that not to be enough.

I use a clusterized optimizing FDTD program (one that took me the best part of six months' work to write) to design antenna-coupled tunnel junction (ACTJ) infrared detectors and modulators. You can see from that that I'm not against numerical optimization. It works great, _if_ the problem is of reasonable size, or _if_ the problem domain nearly decomposes into a reasonable number of smaller regions (i.e. you don't have strong nonlinear interactions among all the parameters), or _if_ you aren't too picky about global optimization. (My problem falls into the last category.)

Those special cases cover a lot of practical problems, it's true. However, it's nonsense to say that the general nonlinear optimization of a strongly interacting problem with dozens of variables, without good *a priori* information, is within the state of the art. It isn't, and it's never going to be, because of the combinatoric explosion. (If we ever have a 1000-exaflop machine, in 1 minute it'd be able to check a whole *22* values per axis of your 17-dimensional problem.)
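To make the arithmetic explicit (one flop per function evaluation is absurdly optimistic, since a real circuit simulation costs enormously more, so these figures are upper bounds):

def values_per_axis(flops, seconds, dims, flops_per_eval=1):
    # Grid points per axis if the whole compute budget goes on an exhaustive grid search.
    evaluations = flops * seconds / flops_per_eval
    return evaluations ** (1.0 / dims)

print(values_per_axis(1e18, 60, 17))   # exaflop machine, one minute: about 14.6
print(values_per_axis(1e21, 60, 17))   # 1000-exaflop machine: about 22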

Cheers,

Phil Hobbs

Reply to
Phil Hobbs


Ooooh, nice choice. Sufficiently constrained to keep safety problems from blowing up the early solutions. And sufficiently difficult to break the non-working academics. Those false academics will, of course, dismiss it as having neither practical application nor academic rigor, as if they knew either.

Reply to
JosephKK
