gonk
Aw crap. I was supposed to have quit Gaia for a while, but I heart this discussion.
Vryko Lakas
Sotek
If we want to have a human-like AI, we need to program metaphors in.
I've come to this same conclusion after doing some research into the use of symbols. It seems to me that a great deal of what goes on inside human minds is symbolic: thinking and rationalizing by analogy and association. From my readings, it would seem the importance of symbols in the fields of psychology and cognition cannot be overstated.
As a computer science fellow, do you know of any ways to effectively put this into practice?
No. Symbolicism, or GOFAI (Good Old Fashioned AI), has failed. The easiest thing in the world is to write a program that uses symbolicism: you can give it the propositions "All fathers are male" and "Joe is a father", and the program can easily make the deduction that "Joe is male". We can write these sorts of programs in Prolog or Lisp, but nothing intelligent comes out of them.
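To show just how mechanical that kind of deduction is, here's a minimal sketch (in Python rather than Prolog or Lisp; the function and predicate names are mine, not from any real system) of a tiny forward-chaining rule engine:

```python
# Minimal sketch of symbolic forward chaining. Facts are (predicate, argument)
# tuples; rules say "if premise(x) holds, conclude conclusion(x)".
# Nothing here is "intelligent", which is exactly the point.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(facts):
                if pred == premise and (conclusion, arg) not in facts:
                    facts.add((conclusion, arg))
                    changed = True
    return facts

facts = {("father", "Joe")}    # "Joe is a father"
rules = [("father", "male")]   # "All fathers are male"

derived = forward_chain(facts, rules)
print(("male", "Joe") in derived)  # True: "Joe is male" falls out mechanically
```

The deduction is valid, but the program has no idea what a father is; it's just shuffling tokens.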
Sotek
(Idly, I got into that idea from Lakoff's works.)
You seem to be a Lakoff fan.
blaugh Lakoff this and Lakoff that.
Vryko Lakas
On another note, I was wondering what you think of the Top Down versus the Bottom Up approaches to developing AI. From what I've seen so far, the TD method runs full force into the complexity barrier and has the limitation of only working off our current knowledge of both cognition and computer programming.
On the other hand, in terms of developing intelligence BU, it seems that larger strides are made when exploring insect-like intelligences and studying the interaction between base-level drives and instincts, à la Rodney Brooks's earlier work in his pre-Cog days.
I agree with BU approaches. TD methods can easily be implemented with symbolicism, but as you said, they only work off the knowledge already programmed into them.
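To make the BU idea concrete, here's a hypothetical sketch (in Python; the layer names are mine) in the spirit of Brooks's subsumption architecture: simple layered behaviors, where a higher-priority layer subsumes (overrides) lower ones, with no world model and no symbols:

```python
# Hypothetical sketch of a Brooks-style subsumption architecture.
# Each layer maps raw sensor readings to an action, or defers (None).

def wander(sensors):
    # Lowest layer: always proposes moving forward.
    return "forward"

def avoid(sensors):
    # Higher layer: fires only when an obstacle is sensed.
    if sensors.get("obstacle"):
        return "turn"
    return None  # defer to lower layers

LAYERS = [avoid, wander]  # ordered highest priority first

def act(sensors):
    """Take the action of the highest-priority layer that produces one."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle": False}))  # forward
print(act({"obstacle": True}))   # turn
```

Insect-like competence emerges from the interaction of dumb layers with the world, rather than from any programmed-in knowledge base.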
*goes back on hiatus*