cybernetics

God and The Universe are identical concepts (pantheism).

Hope this helps.

Helps what?

Reply to
Don Stockbauer

Oh no, not another transfinite debate so soon. s.e.d. had an 11,000-post thread recently.

Reply to
JosephKK

Which will certainly be a bigger failure than Raygunomics. (A compound ripoff)

Reply to
JosephKK

Only if you have no sense. ;-)

--
You can't have a sense of humor, if you have no sense!
Reply to
Michael A. Terrell

Are you really up for one? Are your will and all your shots up to date? It can be quite stressful. :)

--
You can't have a sense of humor, if you have no sense!
Reply to
Michael A. Terrell

There are only 2 forms of affinity:

  1. Potential.

  2. Actualized.

  3. Gay love.
Reply to
Don Stockbauer

To quote a former US Pressydent now serving time near a small Texas town, grubbing out mesquite brush by hand with an adze, confined by the barbed-wire of his ranch:

BRING 'EM ON!!!!!!!!!!!!!!!!!!!!!!!!

Reply to
Don Stockbauer

Willem states: The concepts of 'before' and 'after' are aspects *of* existence, so that question is meaningless.

So how does this differ from... Bill Sloman posting: No. I believe that we haven't got a clue about what created the universe, and I doubt that it is a question worth asking.

Well, at least both will agree that they cannot explain the origin of all this mass and energy. (My, such a marked difference though from the previous, lengthy "science trash" force-taught to grade-schoolers! No? ---and is this still going on, though?) But their stubbornness persists as well. Whether by a remark of: "Oh no, not another transfinite debate so soon. s.e.d. had an 11,000-post thread recently", or by: "I do believe that we have some fairly clear ideas about what created us from pre-existing life-forms, and a divine creator doesn't seem to be a useful hypothesis in that area." While one has to admit that the first remark (of 11,000 posts) indicates, at the very least, genuine irresolution, the second remark is intentionally biased, as science still hasn't resolved the origins of life. Come on, can we not remain open-minded until then?

I personally have always maintained that the simultaneity in DNA is not from origin, but from ensuring that DNA does not get contaminated. (Funny, as this is now beginning to be the real concern of today's MODERN geneticists? ---So do they now know something more than that stubborn Bill refuses to ever know? ---But as well, people should begin to realize that Darwinism should always only remain a theory. Must we expose the "fears" Darwin had, which caused the delay in the "officialdom" of his theory?)

Reply to
Mikus E_

Wolf k. must get hives whenever he watches the TV show "Are You Smarter than a Fifth-Grader?" As... how dare they hint that there is nothing wrong with the English language when it comes to teaching.

Does wolf k. then himself dare to claim that the real obstacle to AI is the natural human language itself? So thinking and talking in Esperanto won't cut it either? And what of the programming language LISP? ---Has it merely developed an artificial "speech" lisp when it comes to intelligence? BTW wolf k., there is something called the "future perfect tense".

Mikus E.

P.S. Intelligence seems invisible to a wiry wolf k.? Heck, I would think current chat-bots could fool wolf. (Unless of course he used "sixth-grade" English.) ---Hmmm, but could a basic chat bot ever be able to register to vote if registration were allowable over the phone? (I would say yes.) So let's stop with the periodical slew of "new" articles that desperately promote that silly Turing test. (---Let's stop artificially extending the "life" of it.) Let's also realize that the "halting" problem relates to something entirely different. (And that it took someone other than Turing himself to "fix" those "halts".)

P.P.S. So wolf k., what is the solution to this AI problem? Should all the people adopt the way of the "new" Democrats? Then no matter what the subject/question is, no real intelligence would be required for the reply, as only a pre-recorded "sound bite" would be needed to meet the "test". ---Such a simple task for the "AI" programmer then. ..."Houston, we have no problems with that New Democrat AI!"

P.P.P.S. So why is it that the likes of Turing (and wolf k.) befuddle the adults and not the fifth-graders?

---And no, I am not looking for "11,000" posts, ...only that he will formally acknowledge that he himself thinks the problem is in the language used to abstract AI. (---Oh, one step at a time!)

Reply to
Mikus E_

I don't watch that program.

Yes, if your categories of "knowledge" and "reasoning", etc., are no more than what your language tosses up (several puns there).

Esperanto is just a mish-mash of European languages. If you want a better sense of how languages constrain thinking, try to learn a non-IE language, such as Mandarin, or one of the Inuktitut dialects. What the hell, study Klingon. Its inventor tried very hard to create a language as unlike any Terran language as possible. He's a linguist, BTW, which means he knows stuff about language that you don't even know you don't know.

LISP is a version of symbolic logic, which is a subset of human language. Human languages do not capture all-there-is-to-know; logic captures even less.

The AI community has historically overestimated the value of symbolic processing, and of logic in particular. Some practitioners are beginning to understand that symbolic processing (of any kind), while a sine qua non, is nevertheless only part of what we call "intelligence." Exactly how symbolic processing figures in "intelligence" is IMO nowhere near a settled question.

Argument by pun is fun, but gets you no ware, artificial or know.

Not in English. In English it's a mode. There are only two tenses in English, both of them used to express indefinite time most of the time (as every sentence in this paragraph does). In English we use compound verb phrases and modals to express most time relationships.

"Time" is always expressed in relation to the time of utterance, BTW.=20 All languages express time of utterance ("present"), time before=20 utterance ("past"), time after utterance ("future"), and anytime in=20 relation to utterance (indefinite time). I don't know of a language that =

violates this rule - it's a universal feature of human languages, IOW.=20 The only variations are in what verbal constructs are used to express=20 them (which in turn results in mandatory and optional expressions of=20 some time relationships.) In English (and other Germanic languages)=20 narrative, present, past, and future must be expressed, and this rule is =

so strong that when the indefinite present is used by the narrator to=20 describe what is happening, we call it the "narrative present".=20 Normally, the indefinite present is used for general (anytime)=20 statements, a rule that is so strong that most English speakers will add =

"now"/etc to indicate that they are expressing an actual present. -=20 It's these matters of grammar and usage, BTW, that 6th grade grammars=20 ignore, thereby misleading the smarter students who are understandably=20 confused when the Teacher's exposition doesn't reflect their experience=20 of the language.

There are very few universals in human languages. That "time" is one of them is, as they say, suggestive.

[snip glibness]
Reply to
Wolf K

Just to bust your bubble: there are languages with time senses so different from that of English that they effectively have nothing in common. It could seem that the other language had no time sense at all. Compare the time senses of Hopi, Maori, Mbuntu, Cantonese, Xhosa, Mandarin and others to that of English.

Reply to
JosephKK

Hey, even you acknowledged its pun-ness factor.

Not in English. In English it's a mode. There are only two tenses in English, both of them used to express indefinite time most of the time (as every sentence in this paragraph does.) In English we use compound verb phrases and modals to express most time relationships.

I quote from a published dictionary: future perfect --

  1. the future perfect tense
  2. a form in this tense

(Wolf k. spending way too much time with those "knowledgeable" linguists? Hey, didn't they come up with that Esperanto; you know, the one you surveyed as "just a mish-mash of European languages"? Didn't they try to target the scientific community? And yes, yes, those same linguists have been tampering with the grade-school books as well?)

Hmmm, by your: "It's these matters of grammar and usage, BTW, that 6th grade grammars ignore, thereby misleading the smarter students who are understandably confused when the Teacher's exposition doesn't reflect their experience of the language." So just what are you trying to say here? That linguists are inept when it comes to reality? That teachers are merely reciting? That unguided humans are a natural for a "better" language, perhaps making life simple for the AI constructor? ---So let's start using baby talk. (I am not trying to be facetious here! ---As a quality of human idea inter-reaction is the attempt to understand the other's intent whenever the speech is not as precise as desired.) Would that pass the Turing test or would it result in a bigger "farce"?

Since you seemed way too anxious to insert "[snip glibness]" ... I ask again (as I did both at the start and end of my previous post): Do you consider language a stumbling block in the quest for AI? ...Or do you feel you have adequately answered that via "the AI community has historically overestimated the value of symbolic processing"? But are not all languages symbolic by nature? So why even consider one over another?

Hmmm, so to make the task seem easier again, maybe we should only consider a subset of the human language, creating yet another "better" one that this time should be more appropriate to logic or computation? (But would the Vulcan language then always fare better? ...Or are they too much into self-logic, thus making their language less suitable for AI? ---Ironically, most would think that it would be very well-suited for AI.) But as wolf k. might now express, still again too much emphasis on logic? (...Still, it seems so ideal though. And far better than Klingonese! Or should that be labeled Klingonian, Klingonic, or Klingo? Surely not plain Klingon!)

Now, assuming that wolf k. was not being sarcastic when he indicated that linguists are "experts", first via his: "He's a linguist [referring to the creator of the Klingon "language"], BTW, which means he knows stuff about language that you don't even know you don't know.", ...because by his previous breath: "It's these matters of grammar and usage, BTW, that 6th grade grammars ignore, thereby misleading the smarter students who are understandably confused when the Teacher's exposition doesn't reflect their experience of the language." I would suggest that what is needed for AI "language" is not the linguist, but the philosopher!

Recall that the early Greeks (with their hordes of self-appointed philosophers) came up with a precise language to better express/explain their thoughts/culture. So if wolf k. feels that language has been an "AI hindrance", perhaps the AI community should consider employing philosophers over linguists.

Yes, the spoken AI words should then always be very precise (thus making life easier for the AI coder), as transitory hip talk would not be in its memory banks. (...Don't the French boast that their language from the 1500s can be fully understood today! ---So is this one country that really has supported its linguists?! But isn't it also the French who often blame other linguists for WWII? ...For does not historian Jacques Barzun convincingly point his finger towards most pre-WWII German linguists?)

But hey, gone will be the (representative) phrase "bad is good", for now when the AI machine talks of passion, it will always have to qualify itself with either "plain" passion or "bad passion". (---With the normal/"plain" only lacking the implied and redundant qualifier of "good".) Hmmm, but doesn't this bode ill with the "change-for-the-sake-of-change" group? Will they not try to always impose their "badness"? ------Yes, but given AI base "sound logic" routines, never a problem.

Mikus E.

P.S. Puns [the good ones], I hope, will still always be considered and then understood by AI.

Reply to
Mikus E_

Dictionaries just tell what people think they are talking about. They don't tell you what things actually are.

Drivel.

wolf k.

Reply to
Wolf K

's'awlright.

One of my favorite quotes. Though the one I remember was worded differently and I had no author for it.

No surprise, I did not (maybe could not) learn English grammar until I took 3 years of German in high school.

I would like to study Hopi, Ethiopian, Mandarin, and perhaps M'buntu.

Whenever you make the time you can find me in s.e.d.

Reply to
JosephKK

Curt Welch replies: Well, I guess the answer has to be yes. That is, you have to "teach" the machine one way or another. You either teach it by writing millions of lines of code, or you teach it by sending it to school for years. Either way, you are starting with a machine that doesn't have the knowledge, but then ends up with it after we do a lot of work putting the knowledge into it.

---end of quote

Herein lies the problem... as you are wanting to start with a machine that doesn't have any known "reason"?

Are you not trying to make a distinction between "lines of code" and the following stage of expectant input (knowledge) for AI? (So bring on AI, as experience by itself will define it!)

---That AI reasoning can exist without knowledge? That basilar AI code, despite being drawn from knowledge, will be the AI "reasoning" part? (Okay, this does seem a "cheap" shot --- in that AI cannot realistically exist without code.)

But let's see if we can draw any comparisons with a human newborn...

The newborn enters the real world with biological systems that are preset to run "automatic". ---For doesn't he/she (its mind) really usually have nothing to worry about living/existing other than showing reflexive squawks whenever one of its free-running systems demands attention? (So, even though they are "automatic", they still demand the attention of their host? ---And as well, of their mothers and fathers?) But SO what! Animals conduct their early existence in the same manner! But here is the kicker: it's the human reasoning that separates us from the plain-survival animal "minds". (---And thus it is very possible for humans to override their automatic systems.) And so, one has to always really wonder if one human newborn is more predisposed to be a survivalist than another. (...the ones meant to be?)

Is it still appropriate to bring up Locke's "tabula rasa"? Well, yes of course, as Curt Welch should agree that AI can start with a blank slate.

But whereas computer intelligence is often expected to start off with a truly blank slate (as with no regard to reason), human intelligence never has been in that same sense! (Hey, something had to create that clean "slate".)

So while Curt's "experience" notion often works very well with animal behavior, it does not with human behavior.

Mikus E.

P.S. Curt Welch poses this "litmus" test: "To test this, why don't you try to explain to me what you think knowledge is and how you would go about building AI so it could learn, and use, knowledge, like a human does. Do you think you understand how to give a machine a mind so it can use knowledge like a human can use knowledge? Do you even have an idea where to begin?"

Knowledge is knowledge. And as I had indicated, the "students" of it may not ever make use of it. ---But keep this in mind as well: all learned knowledge will always patiently await the host's reasoning. Hmmm, does it usually take a "trigger" to actually bring "resolve" to lingering problems that are solved by long-stored knowledge? So do I know of a way to ensure that AI makes full use of ALL knowledge? Possibly, despite the fact that most humans cannot manage that themselves! (Hey, but isn't the sole trick in knowing how to use knowledge?)

Reply to
Mikus E_

You are quite right. You can't start from nothing and "teach" it everything. It must have something built into it to start with. This is the blank slate argument. But just like with a real blank slate (a chalk board), you can't start with nothing (a vacuum). You have to actually start with a piece of slate with very real innate properties already in it. It has the physical ability to hold chalk marks.

Same thing for a learning machine. If you program a learning algorithm into a computer, it very much doesn't start with nothing. It starts with an innate ability to learn. And we can get more specific about what those abilities are, but the point is you are right - you can't start with nothing - the machine must have some innate abilities hard-wired into it.

If you want to call that initial innate ability the power to "reason", I guess you could do that, but I think that word really doesn't have much to do with what we actually need to build into our child AI - aka an AI machine built for the purpose of learning.

There was actually no problem in what I wrote - only in how you chose to interpret it. I take for granted that machines built to learn have innate learning powers, so I don't bother to say it every time. So when I wrote "You either teach it by writing millions of lines of code, or you teach it by sending it to school for years", what I obviously meant (obvious to me, but not to everyone) was: "You either teach it by writing millions of lines of code, or you teach it by [writing only enough code to allow it to learn and then] sending it to school for years."

We were talking about the problems of language for AI, so I just have to pause here to point out what you have written above and how that type of talk leads to so much confusion in AI work. You wrote "the [systems] demand the attention of their host". There's an implication in those words that there is a "host" somewhere in there that is somehow separate from the rest of the system - like a human inside a car, controlling it.

Well, we can certainly justify that language by picking a system, like the brain, and saying the brain is the host, and the rest of the body makes up the "systems" you were talking about. And if that's what you meant, then that's fine.

However, the standard English language concepts we use to talk about humans are filled with the assumption of duality - that we have a soul-like thing in us that is separate from our physical body. We generally call it the mind - but in using English correctly, we have to talk about the mind as if it were something separate from the body. But there is in fact nothing in us that is separate from the body. There is only the body. The heart is separate from the foot, which is separate from the brain, but nowhere in the anatomy book will you find the "mind" organ. Which brings us to the confusion built into English. We talk about the "mind" controlling the body, and yet it's not a thing you can find in the body.

And we say things like "the automatic systems demand the host to pay attention to them" because it's a common fallout from all the confusion created by the use of the word mind and everything we talk about happening in the mental domain (like our thoughts and memories and feelings and intentions and all the rest of the concepts that fall on the mind side of the duality instead of the body side of the duality).

But enough of that, back to the tabula rasa question...

So you say. What's the real objective evidence to back that up, however, and what does that objective evidence really say about what the brain is doing differently in us than in other animals?

Well, you see, I can't help but see stuff in your words that I think is at the core of the confusion these invalid English concepts instill in most people.

First off, the ENTIRE BODY and all the systems in it are "automatic systems". So how exactly do you justify talking as if "humans can override their automatic systems"? A car is nothing but a lot of automatic systems, but would we ever talk like this about a car - "My car has the power to override its automatic systems"? Not likely.

And it really makes no sense to talk about the body having the power to override its automatic systems either. But yet we talk like you did above all the time because English teaches us to believe in the error of duality. It teaches us to believe (well, at least talk as if) we have a magical soul that is in charge of the hardware. And in trying to understand what the hardware is actually doing - all that talk only creates massive amounts of confusion that some people manage to escape from, but many others never do.

Many people make the argument that the blank slate is impossible because it must start with something. But to me, that's a silly and absurd argument. It's the same thing as trying to argue that it's impossible to carve a statue from a blank rock because you can't start from nothing. Of course you can't start from nothing - you start with the rock and transform it into a statue. We all know that, which is why no one would make the argument that carving a statue from a blank-slate rock is impossible.

And when we say the "mind is a blank slate", we should all know that what we are really talking about is a brain that has innate powers to learn new behaviors, and that the "blank" part does not mean the brain starts with no behaviors, but that instead, it starts with innate low level primitive behaviors which are combined in new ways to create behaviors that did not exist in the machine before they were learned. In other words, "blank slate" doesn't mean "no slate" and it should never be argued as invalid as if it meant "starting with no slate".

Yes, our genetics creates the clean slate. Humans will create the clean AI-slate.

Well, it works just fine to explain all human behavior as well. But that's not a debate to get into in sci.electronics.design and comp.programming. Come over to comp.ai.philosophy if you want to debate that with me. :)

:)

Well, let me quickly outline how I think AI will be solved just for reference and tie that into your talk about knowledge.

I think AI will be solved by building a reinforcement trained recurrent temporal reaction machine. Such machines are conceptually (at the high level) trivial. They are nothing more (at a high abstract level) than a reinforcement trained associative memory system. They learn, by reinforcement (that is, from a single internal reward signal), how to best react to the context defined by their sensory inputs in order to maximize the reward signal. I call this type of hardware "generic learning hardware" simply because how the system reacts to the environment is not in any way hard-coded into the system. It does have to start with some set of initial weights that defines how it reacts, but those weights are not set to specific values to achieve a specific behavior - they are effectively random numbers producing random behavior. What such a machine knows nothing about at "birth" is the _value_ of any behavior. It has no clue which behaviors are more valuable (are likely to produce more rewards) than any other behavior. That is what such a machine goes about learning through experience - the value of one reaction vs. another.
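
Here is a minimal sketch, in Python, of that kind of "generic learning hardware" as described above. The names (ReactionLearner, react, reinforce) and the toy contexts are purely illustrative, not Curt's actual design, and the recurrent/temporal side is left out - it only shows the reinforcement-trained associative-reaction part: a table of (sensory context, reaction) value estimates that starts as random noise (random behavior) and is nudged toward a single scalar reward signal through experience.

import random

class ReactionLearner:
    # Reinforcement-trained associative reaction system (sketch).
    # One learned value estimate per (sensory context, reaction) pair.
    # Initial estimates are small random numbers, so early behavior is
    # effectively random - nothing about "what to do" is hard-coded.
    def __init__(self, reactions, learning_rate=0.1, explore=0.1):
        self.reactions = list(reactions)
        self.lr = learning_rate
        self.explore = explore
        self.value = {}  # (context, reaction) -> learned value estimate

    def _v(self, context, reaction):
        key = (context, reaction)
        if key not in self.value:
            self.value[key] = random.uniform(-0.01, 0.01)
        return self.value[key]

    def react(self, context):
        # Usually pick the reaction currently believed most valuable for
        # this sensory context; occasionally explore something else.
        if random.random() < self.explore:
            return random.choice(self.reactions)
        return max(self.reactions, key=lambda r: self._v(context, r))

    def reinforce(self, context, reaction, reward):
        # Nudge the value estimate toward the scalar reward that followed.
        v = self._v(context, reaction)
        self.value[(context, reaction)] = v + self.lr * (reward - v)

# Toy run: the machine learns, from reward alone, which reaction pays off
# in which context.
agent = ReactionLearner(reactions=["duck", "reach"])
for _ in range(2000):
    ctx = random.choice(["ball incoming", "food nearby"])
    act = agent.react(ctx)
    reward = 1.0 if (ctx, act) in {("ball incoming", "duck"),
                                   ("food nearby", "reach")} else 0.0
    agent.reinforce(ctx, act, reward)
print(agent.react("ball incoming"))  # almost always "duck" after training

The point of the sketch is that only the ability to associate context, reaction and reward is built in; the value of any particular reaction is learned entirely through experience.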

All human behavior, including reasoning, memory, thought, emotions, (even "the mind") can be explained by such generic learning hardware.

However, if you take the concepts of what a human is that are given to us by standard English (a body with a mind that can use knowledge to reason), and you look at the type of hardware I just outlined above, it's hard to see a connection. Anyone who thinks that the solution to AI is to build a machine that can hold knowledge and use that knowledge to reason is likely to build a very different type of machine than what I just outlined above. They are likely to build a machine like Cyc - which, in my view, is a perfect example of the problems English has created for AI and why so much AI work has been wasted on building the completely wrong types of machines.

The machine I outlined above, however, does have the power to hold knowledge and does use it to "reason". But the internal form (the actual implementation) just so happens to look very different from how we like to talk about what the human mind does. It looks so different that most AI researchers over the past 60 years don't even think to look at such a design - which, in my view, is why AI has failed so badly so far (failed, that is, to produce a machine equal to a human).

The problem with AI is that we humans think we are working from a privileged position. We think that since we are human, we must have intimate knowledge of what the human mind is all about - and as such, should simply be able to sit down and code it. After all, we have this huge vocabulary that we use to talk about what's happening in the human mind and what we do. But that huge vocabulary we use to talk about the human mind is about as intelligent and correct as all the talk of gods throwing lightning bolts down from their chairs up in the clouds. It's mostly nonsense we made up to make us feel like we weren't as stupid as we really are about what the human mind is. And when you try to code that nonsense, you get just about as far as when you try to create thunder and lightning by building a chair in the clouds for the god to sit in.

What we have to code looks nothing like what we describe with all this talk about using knowledge to reason with. What we have to code is a reinforcement-trained, real-time, temporal, distributed, associative-memory-like reaction system.

--
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
Reply to
Curt Welch

Well, since Microsoft primarily invented Intel label makers, it's also the people who understand engineering and electronics who invented USB, HDTV, All-In-One Printers, Desktop Publishing, Home Broadband, On-Line Publishing, Cyber Batteries, Self-Assembling Robots, Flat Screen Software Debuggers, Distributed Processing Software, PGP, GPS, Digital Terrain Mapping, Data Fusion, Atomic Clock Wristwatches, Light Sticks, Compact Fluorescent Lighting, mp3, mpeg, Electronic Books, non-C++ Pointers, and Holograms, rather than Virtual Photons, Virtual Memory, China, idiot Transistors, or Quantum Mechanics.

Reply to
zzbunker

Does your list ever change?

Reply to
Don Stockbauer
