secret of AI from a noob

Hi,

AI (artificial intelligence) seems complex? OK, here is an example to show how simple it really is, along with some insight into the similarity between AI and true intelligence.

So here is what makes a perfect AI:

  1. learn from past experience

Specifically, this means that the AI's behaviour, ie its outputs, is determined solely by past experience, ie all records of input to the AI.

  2. maximize variance

Specifically, this means that the AI chooses its behaviour, ie its outputs, so that, based on past experience (ie the recorded inputs to the AI), the expected future path of the AI's measured inputs has maximum variance from what has previously been input. In essence, the AI is attempting to maximize its knowledge of the outside world, and, for a sophisticated enough AI, potentially to increase variance in the outside world as seen by the AI.

That's about it! The hard part is all the processing required to determine the path of maximum variance from past experience when dealing with complex behaviour. If a path of maximum variance isn't found, it is at least good to find a path that doesn't repeat already-discovered experience; otherwise the AI is learning more slowly than it ideally could. Humans who need to study the same material repeatedly before understanding it (or read a complicated thing multiple times) are also learning at a sub-ideal rate, from not having or applying enough processing power. Nothing wrong with that, unless it becomes a broken record of never learning.
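The two rules above can be sketched as a toy novelty-seeking loop (my own illustration, not something from the thread; the action/outcome table is made up): the agent picks whichever action's predicted outcome lies farthest from anything in its input history.

```python
def novelty(history, outcome):
    """Distance from a predicted outcome to the nearest past observation."""
    if not history:
        return float("inf")
    return min(abs(outcome - h) for h in history)

def pick_action(history, actions):
    """Pick the action whose expected outcome is most novel.

    `actions` maps an action name to its predicted observation -- a
    stand-in for whatever model the AI uses to forecast its inputs."""
    return max(actions, key=lambda a: novelty(history, actions[a]))

history = [1.0, 1.2, 5.0]
actions = {"repeat": 1.1, "explore": 9.0, "tweak": 4.8}
choice = pick_action(history, actions)  # "explore": 9.0 is farthest from history
```

The hard part Jamie mentions is hidden in the `actions` table: predicting outcomes from the full input history is where all the processing goes.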

A perfect AI would attempt to maximize variance in the external environment, which I believe could potentially be a good thing. Putting safeguards on a perfect AI is not possible, except for containment with an on/off switch, and in the right environment a perfect AI could spread like wildfire, since spreading like wildfire is also a good way to increase variance.

An AI made this way is perfect up to the limit of its capacity to process the full input history.

sidenote:

It is interesting to think about how a perfect AI would try to maximize variance in its observed inputs if it were deployed on the internet, in a humanoid robotic form, or in a video game environment. This type of AI would solve any video game by finding the longest path of variance in the game.

cheers, Jamie

Reply to
Jamie M

1) How do you define a "perfect" AI? There are all sorts of intelligent creatures on earth with all sorts of capabilities seemingly well-suited to their environmental niche, and cognitive science researchers sometimes seem to have difficulty agreeing on even what the definition of "intelligence" is, so it would seem difficult to make a "perfect" artificial intelligence without a definition of what a "perfect" non-artificial one is.

Pro tip: There's no such thing as a "perfect" anything, not in the real world.

It sounds like you're describing some variant on a neural network, which has been under investigation for the better part of half a century. Neural networks have found some successful applications, but so far, to my knowledge, no artificial mind whose capabilities could rival the intelligence of even a simple insect has been constructed.

Ah. Okay.

Reply to
bitrex

A person that learns based solely on their own past experience is so mentally defective that they will not survive. Machines likewise. I suggest a rethink there!

So it's a try-everything type of approach. Again, a human or computer simply would not survive. If by some miracle it did for a while, a human would kill it.

This AI would be eliminated from the gene pool quickly.

NT

Reply to
tabbypurr

Jamie M should be >:-} ...Jim Thompson

--
| James E.Thompson                                 |    mens     | 
| Analog Innovations                               |     et      | 
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    | 
| STV, Queen Creek, AZ 85142    Skype: skypeanalog |             | 
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  | 
| E-mail Icon at http://www.analog-innovations.com |    1962     | 

       "If you don't read [a] newspaper, you're uninformed.  
              If you do read it, you're misinformed"   
                        -Denzel Washington
Reply to
Jim Thompson

they will not survive. Machines likewise. I suggest a rethink there!

Hi,

What other sources do you learn from that aren't derived from past experience?

miracle it did for a while, a human would kill it.

Not a try-anything approach, but an evaluate-everything approach: only try what is evaluated as having the best chance of success (ie success = survival time * average variance experienced).
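That score can be written out directly. This is just a literal (hypothetical) reading of "survival time * average variance experienced", taking survival time as the number of samples times the sampling interval and variance over the logged inputs:

```python
def success(observations, dt=1.0):
    """Jamie's proposed score: survival time times variance experienced.

    Survival time is (number of samples) * dt; "variance experienced"
    is the population variance of the logged inputs."""
    n = len(observations)
    if n == 0:
        return 0.0
    mean = sum(observations) / n
    var = sum((x - mean) ** 2 for x in observations) / n
    return (n * dt) * var
```

Note the two failure modes it encodes: dying early truncates the log (small n), and a repetitive life gives zero variance, so both score poorly.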

cheers, Jamie

Reply to
Jamie M

seriously?

Evaluating everything but only learning from its own past experience necessarily means trying everything. The thing would die multiple times in its first year.

NT

Reply to
tabbypurr

"Experience" also means what effects actions have on inputs; that is, learning is recursive.

Knowing when to collapse the recursion loop is inbuilt in living organisms.

No. Living organisms *throw away* most input data to extract the *useful* part.

"Useful" is defined as "that which contributes to surviving long enough to reproduce" and is mostly hardwired, for instance the fight-or-flight reflex triggered by hardwired pattern recognition.

The problem with AI is really to decide whether or not to give it the survival/reproduction imperative. If not, what *is* its fundamental imperative?

(rest snipped)

Mark L. Fergerson

Reply to
Alien8752

Only Jamie could miss the "their own" part in "their own past experience" in the sentence he was replying to.

Most of us learn from books and other compilations of other people's past experience. Even Jamie is probably up to "sitting next to Nellie", though he's probably too dumb to realise that he was learning by imitation when it happened.

--
Bill Sloman, Sydney
Reply to
bill.sloman

Hi,

Yep, simple question or maybe you have no good answer? :)

would die multiple times in its first year.

Ya, but the past experiences where it dies (if remembered) would be recorded as low success, and only explored again if success wasn't found elsewhere.

cheers, Jamie

Reply to
Jamie M

Hi,

I think this is a similar idea to hierarchical structures, ie visual processing that feeds into a higher level decision making circuit.

Really though that hierarchical structure is only created as it is the most efficient form to do processing with limited resources.

If an AI has near-infinite processing ability, there doesn't need to be any hierarchy, and the learning can continue to be recursive based on the inputs until no more information can be processed, I think.

hardwired frinst fight-or-flight reflex triggered by hardwired pattern recognition.

If not, what *is* its fundamental imperative?

Its fundamental imperative is simply to maximize variance based on the fully processed history of inputs, I think.

cheers, Jamie

Reply to
Jamie M

I learn lots from books and such. For most of life a lot of "learning" is in the genes, if you could accelerate that time... George H.

Reply to
George Herold

I think that may be true for first -- and even second -- order effects. But, meatware seems to have inherent limitations that are probably designed to "favor the immediate" (potentially at the expense of the future). I.e., come up with a "good" plan to handle NOW... and worry about THEN, when *it* becomes the new NOW.

This suggests we abort our solution search possibly prematurely -- if we are truly looking for optimum (whatever THAT means) solutions.

Some of Watson's conclusions on problems that meatware has attempted to tackle (often THINKING they were doing a "good" job) have been surprising. A "mechanical" AI has the advantage that it can focus all of its resources on a particular task (assigned TO it). And, it can bring far more resources to bear than most organic approaches can "manage" (let alone appropriate).

Again, immediacy tending to be favored over long-range goals.

I think we attribute "special insights" to "intuition"... visions folks have that others have prematurely aborted.

Some of the classic AI algorithms make these inherent limitations obvious (e.g., abandoning a strategy if it looks like it is MORE expensive than some other strategy -- even though the next step along that path might find a serendipitous outcome!).

Some control systems actually WORSEN the process they are attempting to control (increase its standard deviation). This is often counterintuitive to the folks trying to apply such "solutions" (how could doing LESS result in MORE??)

When I've recoded algorithms in favor of "expert systems" (over procedural approaches), it has been amazing to see how much simpler the solutions become -- and how much clearer the factors governing the strategy adopted! Instead of deep decision trees that inherently steer a solution one way or another, it's just individual *weighted* criteria that drive the result.
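The weighted-criteria style described above can be sketched as follows (the criteria names and weights are invented for illustration): instead of a deep decision tree that steers the solution one way or another, each candidate is scored by independent weighted criteria and the best score wins.

```python
def score(candidate, criteria):
    """Sum of independent weighted criteria; no branching, no decision tree."""
    return sum(w * f(candidate) for w, f in criteria)

def choose(candidates, criteria):
    """Pick the candidate with the highest weighted score."""
    return max(candidates, key=lambda c: score(c, criteria))

criteria = [
    (3.0, lambda c: c["speed"]),   # favour fast options...
    (1.0, lambda c: c["safety"]),  # ...weigh in safety...
    (-2.0, lambda c: c["cost"]),   # ...and penalize cost
]
options = [
    {"name": "A", "speed": 1.0, "safety": 0.9, "cost": 0.5},
    {"name": "B", "speed": 0.4, "safety": 1.0, "cost": 0.1},
]
best = choose(options, criteria)  # "A": roughly 2.9 vs roughly 2.0 for "B"
```

Tuning the strategy then means adjusting weights, not restructuring nested if/else logic, which is why the factors governing the result stay visible.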

Does it have to have one -- beyond what the developer has set forth?

I played with using bogus "currency" to allow competing tasks to "bid" for resources in a system -- letting the value of the resources effectively be set by the (current) highest bidder. In theory, it works well -- tasks that are willing to PAY more GET more.

In practice, however, the engineering problem shifts to one of "how do I design the optimal bidding strategy?" Which is kinda silly when the other bidders have been created by the same developer(s)!
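A minimal sketch of the bogus-currency auction idea (the task names and the single-resource framing are my own simplification): each task bids from its budget, the highest valid bidder gets the resource and pays its bid.

```python
def run_auction(bids, budgets):
    """bids and budgets are dicts keyed by task name.

    A task can't bid more than its remaining budget; the highest valid
    bidder wins and pays its bid. Returns (winner, updated budgets)."""
    valid = {task: min(bid, budgets[task]) for task, bid in bids.items()}
    winner = max(valid, key=valid.get)
    budgets = dict(budgets)          # don't mutate the caller's dict
    budgets[winner] -= valid[winner]
    return winner, budgets

winner, left = run_auction({"logger": 2, "control": 5},
                           {"logger": 10, "control": 10})
# winner is "control", which pays 5 from its budget
```

The bidding-strategy problem Don mentions shows up immediately: each task's bid function is where the real design effort would go.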

Reply to
Don Y

How did you learn: Not to jump off tall buildings? Which end of a firearm to shun? That carbon-monoxide is poison?

Some experiences are final.

--
This email has not been checked by half-arsed antivirus software
Reply to
Jasen Betts

Or from Zork "Do not press this button again".

Reincarnation is useful if you are forced to learn only by experience.

The impressive thing about Google's machine learning of Go was that it trained itself up to beyond human-player standard: it used a database of top games as a bootstrap, then played against itself from that starting point, keeping minor evolutionary variations of the network weights that resulted in a better outcome.

But the machine-learning heuristic of maximising the distance to a loss and minimising the distance to a win is as old as the hills, dating from the early days of computer games. It is basically how the first chess endgame tablebases were done, since the early codes were hopeless at endgames!
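That old heuristic can be sketched as follows (the (outcome, depth) encoding is my own illustration, not how real tablebases store positions): among winning lines prefer the shortest distance to the win, and among losing lines prefer the longest distance to the loss.

```python
WIN, LOSS, DRAW = 1, -1, 0

def preference(outcome, depth):
    """Orderable score: wins sooner beat wins later; losses later beat losses sooner."""
    if outcome == WIN:
        return (1, -depth)   # win: fewer moves to mate is better
    if outcome == LOSS:
        return (-1, depth)   # loss: drag it out as long as possible
    return (0, 0)            # draw sits between the two

def best_move(moves):
    """moves maps a move to its (outcome, depth-to-end) evaluation."""
    return max(moves, key=lambda m: preference(*moves[m]))

moves = {"a": (WIN, 7), "b": (WIN, 3), "c": (LOSS, 20)}
# best_move(moves) picks "b": the fastest win
```

Python's tuple ordering does the work here: the first element ranks outcome, the second breaks ties by depth.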

More complete heuristics allow for recognising how strong your opponent is and sometimes playing slightly unsound moves that they are unlikely to be able to see how to exploit to get a quicker win.

Even as recently as 1985 there were still plenty of chess puzzles obvious to a human that a computer simply could not grok. Today it is the other way around -- although human-guided computer play (aka freestyle) is still the strongest of all.

--
Regards, 
Martin Brown
Reply to
Martin Brown

A person that learns based solely on their own past experience is so mentally defective that they will not survive.

There was a tolerably serious book that proposed that the alternation of ice ages and inter-glacials changed the environment too fast for gene-based evolutionary adaption, so our ancestors invented language, culture and learning by imitation, because it allowed quicker adaption to sudden and frequent changes in climate.

formatting link

says something similar, but doesn't explicitly name culture-propagated adaption as the specifically human trick.

I bought the book a few years ago, but it's probably still in Nijmegen.

--
Bill Sloman, Sydney
Reply to
bill.sloman

well it may not be PERSONAL experience..

but reading and observing others can be considered part of your experiences...

...which I take to mean all input data.

m

Reply to
makolber

Hi,

Ya that's what I meant thanks.

cheers, Jamie

Reply to
Jamie M

Hi,

Past experiences don't have to be personal experience, ie see the other post about any type of input being a past experience.

cheers, Jamie

Reply to
Jamie M

OK, that's too stupid for me.

How on earth is it going to try anything again once it's died?

Maybe you're trying to construct an idea after your brain has died.

NT

Reply to
tabbypurr

You've got the wrong picture. Think evolution: if it dies, no offspring; only those that don't die go on. It'll be a very random thing, so you might want to play the game (run the simulation) several times.
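That selection picture can be sketched as a tiny loop (everything here, from the one-number "genomes" to the death condition, is made up for illustration): the dead leave no offspring, and survivors reproduce with small mutations.

```python
import random

def evolve(population, survive, generations, rng):
    """Survivors reproduce with small mutations; the dead leave no offspring."""
    for _ in range(generations):
        survivors = [g for g in population if survive(g)]
        if not survivors:
            return []  # the whole line died out; rerun the simulation
        population = [g + rng.gauss(0, 0.1)
                      for g in survivors for _ in range(2)][:len(population)]
    return population

rng = random.Random(0)  # seeded: each "run of the simulation" is repeatable
# genomes are single numbers; "death" is falling to zero or below
pop = evolve([0.5, 1.0, -0.2, 0.8], lambda g: g > 0, 5, rng)
```

Rerunning with different seeds is George's "play the game several times": some lineages die out entirely, others drift on.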

George H.

Reply to
George Herold
