

There are two ways that an AI could be developed: try to make it think and act the same way we do, or try to pioneer another method that doesn't resemble the way we behave.

Argument for the first method: if we are trying to make something like us, it should do things the way we do. Otherwise, it wouldn't respond to events the same way a person might.
For the second: Airplanes were inspired by birds. There were numerous attempts to make an airplane with flapping wings, but they never took off - literally and figuratively. In the end, only a fixed-wing airplane would work.

I'd put my vote in for the first one because I agree with that statement. What do you think and why?
Keda-kun
There are two ways that an AI could be developed: try to make it think and act the same way we do, or try to pioneer another method that doesn't resemble the way we behave.

Argument for the first method: if we are trying to make something like us, it should do things the way we do. Otherwise, it wouldn't respond to events the same way a person might.
For the second: Airplanes were inspired by birds. There were numerous attempts to make an airplane with flapping wings, but they never took off - literally and figuratively. In the end, only a fixed-wing airplane would work.

I'd put my vote in for the first one because I agree with that statement. What do you think and why?


I think AI would be a spectacularly bad idea, at least if we use the first method. If we program a sentient machine to behave like humans, it will most likely either fail horribly or work so well that such machines start to replace humans, ruining the economy. Besides, I believe the best idea is to have human minds encoded into machines, a la Ghost in the Shell.
I support the first one, which means I think the current approaches are crap.

We're beginning to learn how people think, and it's via metaphorical reasoning, not pure rationality.

If we want to have a human-like AI, we need to program metaphors in.

And it'll hardly be the same, because of the differences in the 'body'; bodies do, in fact, shape minds.
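
(For concreteness, here's a minimal Python sketch of what "programming metaphors in" could even mean. It assumes a Lakoff-style mapping from a source domain onto a target domain; the ARGUMENT IS WAR mapping and every name in it are made up for illustration, not a real cognitive model.)

# Toy sketch of a conceptual metaphor: facts about a source domain (WAR)
# are projected onto a target domain (ARGUMENT) through a mapping.
# All names here are invented for illustration.

# Facts the system "knows" about the source domain.
war_facts = {
    ("attack", "weakens"): "opponent",
    ("defend", "protects"): "position",
}

# The metaphor ARGUMENT IS WAR, as a concept-to-concept mapping.
argument_is_war = {
    "attack": "criticize",
    "defend": "justify",
    "opponent": "other debater",
    "position": "claim",
}

def project(facts, mapping):
    """Carry source-domain facts over into the target domain."""
    projected = {}
    for (verb, relation), obj in facts.items():
        projected[(mapping.get(verb, verb), relation)] = mapping.get(obj, obj)
    return projected

for (verb, relation), obj in project(war_facts, argument_is_war).items():
    print(f"{verb} {relation} the {obj}")
# criticize weakens the other debater
# justify protects the claim

The inferences live in the source domain and get carried across, which is roughly what Lakoff says we're doing all the time.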
William_Black
Keda-kun
There are two ways that an AI could be developed: try to make it think and act the same way we do, or try to pioneer another method that doesn't resemble the way we behave.

Argument for the first method: if we are trying to make something like us, it should do things the way we do. Otherwise, it wouldn't respond to events the same way a person might.
For the second: Airplanes were inspired by birds. There were numerous attempts to make an airplane with flapping wings, but they never took off - literally and figuratively. In the end, only a fixed-wing airplane would work.

I'd put my vote in for the first one because I agree with that statement. What do you think and why?


I think AI would be a spectacularly bad idea, at least if we use the first method. If we program a sentient machine to behave like humans, it will most likely either fail horribly or work so well that such machines start to replace humans, ruining the economy. Besides, I believe the best idea is to have human minds encoded into machines, a la Ghost in the Shell.

We're talking about computers, not robots. They can't replicate themselves until we put them into something they can control. Until they are installed in robots, they will only be programs on a computer.
Keda-kun
We're talking about computers, not robots. They can't replicate themselves until we put them into something they can control. Until they are installed in robots, they will only be programs on a computer.


Somewhat irrelevant. If they're good enough AIs, they'll be able to affect the physical world once they work at it.

Of course, I find the argument silly; do we worry about our children replacing us?

Wheezing Gekko

Sotek
If we want to have a human-like AI, we need to program metaphors in.


I've come to this same conclusion after doing some research into the use of symbols. It seems to me that a great deal of what goes on inside human minds is symbolism: thinking and rationalizing by analogy and association. The importance of symbols in the fields of psychology and cognition cannot be overstated, or so it would seem from my readings*.
As a computer science fellow, do you know of any ways to put this into practice effectively?



*
An Illustrated Encyclopedia of Traditional Symbols by J.C. Cooper
The Secret Language of Symbols: A Visual Key to Symbols and Their Meanings by David Fontana
Symbol and Myth in Modern Literature by F. Parvin Sharpless
Not yet. But I don't know enough about the cognitive science side of it to have a full understanding of metaphorical reasoning.

I feel perfectly confident that if I understood metaphorical reasoning the way I understand logical reasoning, I could program it into a computer with enough time.


(Idly, I got into that idea from Lakoff's works.)

Wheezing Gekko

Sotek
Not yet. But I don't know enough about the cognitive science side of it to have a full understanding of metaphorical reasoning.

I feel perfectly confident that if I understood metaphorical reasoning the way I understand logical reasoning, I could program it into a computer with enough time.


On another note, I was wondering what you think of the Top Down versus the Bottom Up approaches to developing AI. From what I've seen so far, the TD method runs full force into the complexity barrier and has the limitation of only working off our current knowledge of both cognition and computer programming.
On the other hand, in terms of developing intelligence BU, it seems that larger strides are made when exploring insect-like intelligences and studying the interaction between base-level drives and instincts, a la Rodney Brooks's earlier work in his pre-Cog days.
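
(If it helps, here's a minimal Python sketch of the BU flavour I mean: a Brooks-style stack of simple behaviors with no central world model, where a higher-priority drive subsumes the ones below it. The sensor names, thresholds, and actions are all made up for illustration.)

# Minimal bottom-up control loop: behaviors are checked from highest
# priority down, and the first one that fires decides the action.
# Sensor names and thresholds are invented for illustration.

def avoid_obstacle(sensors):
    if sensors["range"] < 0.2:      # something is very close: back off
        return "reverse"
    return None

def seek_light(sensors):
    if max(sensors["light_left"], sensors["light_right"]) < 0.3:
        return None                 # too dark to care
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    return "turn_right"

def wander(sensors):
    return "forward"                # default drive when nothing else fires

layers = [avoid_obstacle, seek_light, wander]

def act(sensors):
    for behavior in layers:         # higher layers subsume lower ones
        action = behavior(sensors)
        if action is not None:
            return action

print(act({"range": 0.5, "light_left": 0.9, "light_right": 0.1}))  # turn_left
print(act({"range": 0.1, "light_left": 0.9, "light_right": 0.1}))  # reverse

Nothing in there knows what a goal is, but behavior falls out of the interaction between the drives, which is the BU selling point.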
I don't remember any specific instances of attempts at applying either offhand...

...but I'd say that the way we're failing at producing anything we can really call AI, as opposed to a 'knowledgeable' system or a quasi-rational actor, shows that all our current approaches are grossly flawed and we need a paradigm shift.
The basic processes of the human brain, especially the way neural nets work, don't automatically give rise to human-like intelligence. Essentially, you can set a goal, and a neural net will learn which combinations of action lead to fulfillment of that goal; but such single-minded determinism isn't something that appears very intelligent.

Human beings are nuanced, having many conflicting desires. We also act, illogically, upon our emotions. Intelligence isn't about acting on our emotions, though; it's about acting in a way that lets us feel the emotions we want to feel. We think a lot about a lot of crap which doesn't even seem that important to us, because we aren't the people we want to be (most of the time). In short, for an AI to be intelligent in the sense that we are, we need to make it care about lots of different things, even if they aren't necessary.
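
(To make the first paragraph concrete, here's a toy Python sketch of that "set a goal, learn which actions fulfil it" loop. A lookup table stands in for the neural net, and the actions, reward rule, and numbers are all invented.)

# Toy goal-directed learner: try actions, keep a running estimate of
# each action's payoff, and drift toward whatever fulfils the goal.
import random

actions = ["press_lever", "wait", "wander"]
value = {a: 0.0 for a in actions}     # estimated payoff of each action
counts = {a: 0 for a in actions}

def reward(action):
    return 1.0 if action == "press_lever" else 0.0   # the one fixed "goal"

for step in range(500):
    if random.random() < 0.1:                        # explore occasionally
        a = random.choice(actions)
    else:                                            # otherwise exploit
        a = max(actions, key=lambda x: value[x])
    counts[a] += 1
    value[a] += (reward(a) - value[a]) / counts[a]   # running average

print(value)    # press_lever ends up with by far the highest value

It converges on whatever fulfils the goal and ignores everything else, which is exactly the single-mindedness I mean.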

Wheezing Gekko

gigacannon
The basic processes of the human brain, especially the way neural nets work, don't automatically give rise to human-like intelligence. Essentially, you can set a goal, and a neural net will learn which combinations of action lead to fulfillment of that goal; but such single-minded determinism isn't something that appears very intelligent.

Mark Tilden, founder of the B.E.A.M. approach to robotics, said of his early works (experiments with analog robot brains) that once they had found sufficient power sources, it was very difficult to get them to actually do anything - mainly because unless you give them a specific task or set of objectives, they really don't want to waste power doing anything.

Here's a video (20 meg DIVX or 33 meg MPEG, 15 minutes play time) explaining his basic concepts and findings with building intelligent analog robots.
*Edited for spelling and links and then again for spelling.*
And that, Vryko, is why any AI ought to include boredom.
gonk Aw crap. I was supposed to have quit Gaia for a while, but I heart this discussion. gonk

Vryko Lakas
Sotek
If we want to have a human-like AI, we need to program metaphors in.


I've come to this same conclusion after doing some research into the use of symbols. It seems to me that a great deal of what goes on inside human minds is symbolism: thinking and rationalizing by analogy and association. The importance of symbols in the fields of psychology and cognition cannot be overstated, or so it would seem from my readings*.
As a computer science fellow, do you know of any ways to put this into practice effectively?

No, symbolicism, or GOFAI (Good Old-Fashioned AI), has failed. The easiest thing to do is write a program that uses symbolicism. You can have the propositions "All fathers are male" and "Joe is a father", and the program can easily make the deduction that "Joe is male". We can easily write these types of programs in Prolog or Lisp, but nothing intelligent comes out of it.
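
(Here's that exact deduction as a few lines of Python, since a toy forward-chaining engine makes the point just as well as Prolog or Lisp; the predicate names are only there to mirror the example.)

# The "Joe is male" example as a tiny forward-chaining rule engine.
facts = {("father", "joe")}
rules = [("father", "male")]          # "all fathers are male"

def apply_rules(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                    # keep firing rules until nothing new appears
        changed = False
        for if_pred, then_pred in rules:
            for pred, arg in list(derived):
                if pred == if_pred and (then_pred, arg) not in derived:
                    derived.add((then_pred, arg))
                    changed = True
    return derived

print(("male", "joe") in apply_rules(facts, rules))   # True

The deduction comes out fine, and that's the problem: it will only ever rearrange what you hand it, which is why nothing intelligent comes out of it.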

Sotek
(Idly, I got into that idea from Lakoff's works.)

You seem to be a Lakoff fan. blaugh Lakoff this and Lakoff that.

Vryko Lakas
On another note, I was wondering what you think of the Top Down versus the Bottom Up approaches to developing AI. From what I've seen so far, the TD method runs full force into the complexity barrier and has the limitation of only working off our current knowledge of both cognition and computer programming.
On the other hand, in terms of developing intelligence BU, it seems that larger strides are made when exploring insect-like intelligences and studying the interaction between base-level drives and instincts, a la Rodney Brooks's earlier work in his pre-Cog days.

I agree with BU approaches. TD methods can be easily implemented with symbolicism, but as you said, they only work off the knowledge already programmed into them.

*goes back on hiatus*
Da_Ish
No, symbolicism, or GOFAI (Good Old-Fashioned AI), has failed. The easiest thing to do is write a program that uses symbolicism. You can have the propositions "All fathers are male" and "Joe is a father", and the program can easily make the deduction that "Joe is male". We can easily write these types of programs in Prolog or Lisp, but nothing intelligent comes out of it.
I agree with that; I think we need to re-evaluate our beliefs about how we think.

I mean, we don't normally think that way, quite frankly.

Quote:
You seem to be a Lakoff fan. blaugh Lakoff this and Lakoff that.
The man seems to be extremely right, and you only see me when I'm talking about thought... which is his field, so. wink

Yeah, I'm a fan of Lakoff. His ideas just seem so right.
Nobody does symbolicism anymore as part of AI. I think neural nets or at least bottom-up approaches are the way to go, since we've been successful with the robo-insects and that freaky Sony walking robot thing.

*really goes on hiatus* sweatdrop

Note to self: write-up due tomorrow! Four things due next week!
