# How to solve NP problems using the hydrodynamics of electrons

• posted

Let's take some NP problem: we have a verifier that can quickly say whether a given input is correct, but there is a huge number of possible inputs and we want to tell whether a correct one exists (and find it).

Imagine we have a chip with an 'input' and an 'output' that implements (e.g. on an FPGA):

IF 'input' satisfies the verifier

- THEN send 'input' to 'output'

- ELSE send the next possible 'input' (cyclically) to 'output'

such that this chip uses only basic logic gates - the computation is done in a small number of combinational layers - IT DOESN'T USE A CLOCK (this can be done for some NP problems like 3SAT).

Now connect its 'output' back to its 'input' to make a loop.
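The construction above can be sketched as a toy discrete simulation (my own illustrative sketch, not the poster's design - a real clockless circuit would evolve continuously, not in discrete steps). The combinational block maps a candidate assignment to itself if it satisfies the formula, otherwise to the next assignment cyclically, so the loop's only fixed points are satisfying assignments. The example 3-SAT formula is arbitrary.

```python
# Toy model of the feedback loop for a tiny 3-SAT instance.
# Clauses are lists of (variable index, negated?) literals; this
# particular formula is an arbitrary example, not from the original post.
CLAUSES = [[(0, False), (1, False), (2, True)],
           [(0, True), (2, False), (1, False)],
           [(0, True), (1, True), (2, True)]]
N = 3  # number of variables; assignments are encoded as N-bit integers

def verifies(state):
    """The 'verifier' block: True if the assignment satisfies every clause."""
    bits = [(state >> i) & 1 for i in range(N)]
    # A literal (i, neg) is satisfied when bits[i] != neg.
    return all(any(bits[i] != neg for i, neg in clause) for clause in CLAUSES)

def step(state):
    """One pass through the combinational logic: hold if verified, else advance."""
    return state if verifies(state) else (state + 1) % (1 << N)

state = 0
for _ in range(1 << N):          # at most 2^N steps before cycling
    nxt = step(state)
    if nxt == state:             # fixed point = the loop has settled on a solution
        break
    state = nxt

print(state, verifies(state))
```

In this discrete picture, the loop can only settle once `step(state) == state`, i.e. once the verifier passes; the open question in the thread is whether continuous electron dynamics would reach such a fixed point quickly.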

Such a loop is stable only if it has found a solution (which can then be read out). With a clock, one input would be checked per cycle. But without a clock, it should be pure hydrodynamics of electrons: if a solution exists, all other states would create fluctuations in time

- they should be less energetically preferred than the stable one - so physics should quickly solve our problem.

I know - there could be a problem with the nonlinearity of transistors. If so, there are plenty of technologies; maybe some of them could handle it?

This loop-computer idea is practically a simplification of the time-loop computer idea:

• posted

Had a similar thought with a friend. Challenge: implement the Mandelbrot set using an analog computer. So you have two inputs, perhaps -1 to +1 V, representing the real and imaginary axes (or if you want to get really proper, you could do the same with a phasor per se... who cares). You put this into a function box, which eventually tells you if the point is in the set (it doesn't diverge within some time T), or if outside, how many "iterations" it took.

Now, the implication of this being an analog computer is that the computation can be performed continuously (there are algorithms which produce a continuous set, incidentally) and in the time-voltage domain. Settling time then corresponds to iterations, or if it never really sits still, it's in the set or something.

The crazy thing is, desktop computers are easily able to compute to a depth of a few million iterations, rendering a full screen (a megapixel, let's say) in maybe a minute or less (maybe seconds on the video hardware). If you assume an fT around 1 GHz, that's very roughly 1 ns per "iteration", so a single pixel takes about 1 ms to a depth of 1 million - and that's if you have the required accuracy in that time (which you wouldn't; closed-loop settling time for a normal amp is a couple of fT's).

Even with the possible problems, an analog Mandelbrot computer could be really interesting...

Tim

```
--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
```
• posted

About the Mandelbrot set, you might be interested in the Blum, Shub & Smale model of computation over the real numbers. But it's rather a purely theoretical model and, as I remember, its main motivation is the belief that it could lead to a P != NP proof. I have to admit, though, that I don't see how to create this set using an analog computer.

What I'm proposing is something different - the structure of this computer is completely digital, but time is analog. We can think of it as a dynamical system that wants to minimize its energy. It can do that only by finding a proper valuation of the variables - that is, by solving our problem.

If it works, the cryptosystems we currently use are in danger - it could break RSA, or find a key which, when applied to the beginning of an encrypted file, produces output with significant correlations...

• posted
[snip]

Another gmail/googlegroups nutcase that _will_not_ make it onto the whitelist ;-)

...Jim Thompson

```
--
| James E.Thompson, P.E.                           |    mens     |
```
• posted

Can't work for the general case. Google 'local minima' for the reason why. If this class of problems didn't have strong local minima, they wouldn't be NP problems! There are some arguments that a fully quantum computer could work that way; however, your computing device will tend to bounce around the solution space, cycling through a limited (albeit possibly very large) set of incorrect solutions in an orderly fashion that may appear chaotic over a sufficiently short timescale.
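A classic concrete instance of an unclocked feedback loop that never settles is a ring of an odd number of inverters (a ring oscillator). This little simulation (my own illustration of the objection, with unit gate delays) shows the state cycling through a limit cycle with no fixed point at all:

```python
# Ring of 3 inverters: each stage outputs the negation of the previous
# stage's output. With an odd number of stages there is NO stable state,
# so the loop oscillates forever instead of settling - the behavior the
# post warns about for combinational feedback loops.

def step(state):
    """One unit gate delay: stage i reads the previous stage's old output."""
    n = len(state)
    return tuple(1 - state[(i - 1) % n] for i in range(n))

state = (0, 0, 0)
seen = []
while state not in seen:         # follow the trajectory until it repeats
    seen.append(state)
    state = step(state)

# The trajectory revisits an earlier state without ever reaching a fixed
# point (no s with step(s) == s): a limit cycle, not a solution.
print(len(seen), any(step(s) == s for s in seen))
```

The proposed NP-solver loop would need every non-solution state to be transient, whereas this example shows that asynchronous feedback logic can instead fall into persistent cycles.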

• posted

The per-pixel routine merely consists of addition and multiplication, which are quite easy to do with analog building blocks. The x + jy values are bounded (if they start going unbounded, you know you're not in the set!), so picking voltages is very easy. The only funny number you get is the depth count, which can get pretty gnarly after a while (a million is a lot of voltages to check).
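For reference, the per-pixel routine being discussed is the standard escape-time iteration: repeat z -> z^2 + c and count iterations until |z| exceeds 2 (provably divergent) or a depth limit is hit. As the post says, only additions and multiplications are involved:

```python
# Standard Mandelbrot escape-time routine for one pixel.
# c is the complex point (the "x + jy" inputs in the analog version).

def escape_time(c, max_iter=1000):
    z = 0j
    for n in range(max_iter):
        z = z * z + c          # one multiply and one add per iteration
        if abs(z) > 2.0:       # |z| > 2 guarantees divergence
            return n           # "how many iterations it took"
    return max_iter            # never escaped: treat as in the set

print(escape_time(0 + 0j))     # origin stays bounded -> 1000
print(escape_time(1 + 1j))     # escapes on the second iteration -> 1
```

The depth count here is just a loop counter; the analog version would have to represent it as a settling time or a staircase of voltages, which is exactly the "gnarly" part.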

Instead of a clock cycle making everything orderly, you must deal with propagation delay. A clock period is necessarily longer than the propagation delay, so without the clock you can chug faster, but you can't necessarily do the same computations the same way.

Tim

```
--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
```
• posted

It's not a classical computer: by removing the clock/synchronization we destroy the ordering and release all its statistical, quantum properties, letting physics do what it does best: solve its partial differential equations. The system becomes continuous, so it can follow the energy gradient to some local minimum - but the only local minima are the stable states (solutions). Every other state is extremely unstable - the electron fluid won't rest until it finds a solution. The statistics and the differences between propagation times will create an extremely fast, chaotic search for it. I'm not sure, but it might find a solution qualitatively faster than a classical computer.

I don't know too much about analog computers... It's easy to add, take averages, integrate and differentiate signals... but how do you even multiply? Use a transistor? It's extremely nonlinear... The other thing is that if you divide the picture into blocks - discretize it - you get practically the same thing as in the digital approach...

Jarek

• posted

Transistors have a very good exponential relationship over several orders of magnitude, so if you stick a transistor in the feedback loop of an op-amp you can create a pretty good logarithmic function. Then, to multiply, you just use a*b = exp(ln(a) + ln(b)) - everything on the right-hand side is now doable.

With MOSFETs and an op-amp you can similarly build a "square rooter"...
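A quick numeric check of the log-antilog identity behind that multiplier (the circuit exploits the BJT's exponential I-V law for the ln() and exp() stages; note the identity only works for positive inputs, just like a one-quadrant analog multiplier):

```python
import math

def analog_multiply(a, b):
    """Multiply via logs, the way the log-antilog circuit does.

    Valid for a, b > 0 only - the same restriction a simple
    one-quadrant log-antilog multiplier has.
    """
    return math.exp(math.log(a) + math.log(b))

print(analog_multiply(3.0, 4.0))   # ~12.0
```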

• posted

Sounds like a quantum box could do it in a couple of nanoseconds.

I owned "Fractools" way back when EGA and VGA were terms that meant something graphical. It was one of the first apps authored for that hardware. Now even the free stuff blows it away in many areas. I liked how it calculated each pixel, though, as opposed to all these today using bitblts. The current crop is grainier than my old app was, because of the calculation method and the final resolution of the image. Nothing these days gives a sharp focus on the final image - all the fringes get munged by 'soft math'. Fractools did EVERY pixel. Some pictures took a whole day to calculate. Running it today would likely take minutes. I should dig it up and run it on a virtual machine.

GeForce Gold or Platinum is a good color-organ app for your media player. It overlays graphical creations on top of your photos and such, and has several fractal routines that run. It has some really far-out Sierpinski routines.

Sort of analog, considering that it is analog music that stimulates it into action. Way better than simple fractals.

Get the trial, and then wait out a long time, and he'll offer you the full package for like \$24 instead of \$35.

• posted

I didn't realize that transistors could have such precise behavior. Thanks. But I still prefer digital computers for calculating fractals...

It was pointed out to me that instead of following some energy gradient, this loop computer might just oscillate at high frequency... But maybe it's a matter of technology?