Posted: Tue May 24, 2005 3:54 pm
Hokay! I guess SanguineV is our moderator, so if dezro agrees to it we're ready to go!
Topic: Does intelligent thought imply a soul?
Moderator: SanguineV
Side 1: TheBeatnik
Side 2: Socrates in Disguise
Format: Sides shall take turns posting their views on the topic to be discussed. If there is an affirmative and a negative, the poster taking the affirmative position posts first. From then on, the sides take turns, and the debate shall run until one party wishes to end it, in which case the negative (or other member) shall make a final post with no new concepts/arguments.
Notes: Within this debate a soul is defined as: an eternal, undying part of a person that defines who they are and transcends death to move on to other places.
The debate is designed to focus on whether human thought is really distinct from machine thought, and on whether, if the two cannot be shown to be significantly different, machine thought may also be guided by a soul.
If at any time you wish to end the debate, or have another poster join your side or take over please PM the moderator. Please do so if you know you will be absent for some time as well.
Posters not listed in this post as on a side or as the moderator may not post - anything you do post will be deleted. If you have a problem with this, see the rules.
- SanguineV
Posted: Tue May 24, 2005 11:32 pm
((I would like to argue against this issue, so according to the rules you go first.))
Socrates in Disguise Captain
Posted: Wed May 25, 2005 7:14 pm
I stand to affirm the resolution that intelligent thought implies a soul.
Thought and sentience: the human brain is the only one intelligent enough to study itself. Truly a sacred thing, and truly possessing a soul, if souls exist. From this brilliant mind a new, artificial one is born, in its image. The true question behind this debate, in my opinion, is how to define "intelligent." Intelligent life, I believe, does not allow for a robotic mind only intelligent enough to react and respond to what its human master programs into it. A truly intelligent mind is able to learn for itself, able to react and respond to everything of its own accord. At that point, which science has not yet reached, a robotic mind should be no different from a human one, and is therefore no less capable of having a soul. Science has shown that a mind that can study something can replicate it. Why not, then, a human mind?
Posted: Thu May 26, 2005 11:57 pm
**EDIT** ((whoops, forgot that was up there...))

The average chess master can think up to 30 moves ahead before moving a simple pawn... a strategic move to open the path for the queen while blocking the knight and preparing an attack on the bishop... checkmate in two. A good move, an intelligent player. In the late 60's a machine was developed that played chess... and was sent into competition with the world's reigning chess champion... The machine won, thinking over 100 moves ahead... intelligent... but no soul.

A soul is not something you can grasp, not something that is even that comprehensible... Programmed algorithms could never understand such rash and strange illogical human thought. An officer is ordered to open fire at a group of protesters yelling at his squad. He hesitates and refuses to comply, realizing the wrong in firing at innocent people. A robot would not think twice... it would follow orders like a good soldier. A soul signifies some sort of conscience - a rational knowledge of what is good and what is wrong. The difference between humans and machines is that we have reason... Intelligent thought is almost never logical... a machine can understand logic... but can it really understand fear? Or hope? Or guilt? Love?

Quote: Science has shown that a mind that can study something can replicate it.

Anything can be replicated... simple as copy and paste in this computer world. But computers don't have individuality... Say someone owns a Dell Dimension Desktop 4500 with XP, top of the line everything... while that computer is sweet... there are others like it... but say there is a teenage boy... blonde curly hair, 6'3", 130 lbs, Caucasian, with a backwards L-shaped scar on his left forearm... people may come close to being like him, but there will never be a complete replica of that person. Humans are diverse because of their mind and soul.
Socrates in Disguise Captain
Posted: Fri May 27, 2005 12:04 am
Quote (dezrosatweaker): Webster defines a soul as: "The animating and vital principle in humans, credited with the faculties of thought, action, and emotion and often conceived as an immaterial entity." ((www.dictionary.com))

We have a definition of a soul in the opening post of the discussion. While it is not significantly different from the one above, please use the one within the thread as the basis - unless TheBeatnik is happy to take the one from www.dictionary.com, in which case it may stand.
Posted: Sat May 28, 2005 12:51 pm
So you agree with my statement that a mind that can study something in enough detail can replicate it... but you disagree in that there are certain things that cannot be replicated, like the soul, correct? You compare a human's frame of mind in chess to a robot's... are you saying that a robot cannot be taught to consider human deceit and trickery, just like it is taught to look 100 moves ahead? Could robots not be designed to compare to humans so well that they too could trust their inner judgment before that of their superiors, like the police officer? It seems that if this could be, then in the case of the boy one might attempt to replicate, the one with the scar, that robot could completely assume the identity and personality of that person.

If you mean that a robot with no individuality could never have a soul, let's take an example like this: two parents decide they want to have a robot built for them to be their child. Scientists use facets of the personality of both the mother and the father, use statistics to determine the most prominent idiosyncrasies, and then base the new roboperson off of these two parents... a person like that has never been before; that robot HAS individuality. Doesn't that give it a soul, by your reasoning?
Socrates in Disguise Captain
Posted: Sat May 28, 2005 10:02 pm
What is the distinct line between good and bad? A human is in constant battle with its subconscious, fighting along that very, very fine line... Computer programming is all about absolutes. If it's not this, then it's that... how exactly do you decide? Whose opinion do you take? And even if you did... could you trust it? In order to give a machine a soul you have to program a conscience... "Cerebra-Technical Liposuction." Utterly impossible.

You say that the "child" is individualistic. Think of this... a child is merely influenced mentally by its parents... but a lot of what it does is what it thinks for itself... how could you decide what the "kid" wants to be when it grows up? I want to be a musician and an animator when I finish college, because that's what I love to do. What does your... machine love to do?
Posted: Sun May 29, 2005 11:09 am
Quote (Socrates in Disguise): What is the distinct line between good and bad? A human is in constant battle with its subconscious, fighting along that very, very fine line... Computer programming is all about absolutes. If it's not this, then it's that... how exactly do you decide? Whose opinion do you take? And even if you did... could you trust it? In order to give a machine a soul you have to program a conscience... "Cerebra-Technical Liposuction." Utterly impossible. You say that the "child" is individualistic. Think of this... a child is merely influenced mentally by its parents... but a lot of what it does is what it thinks for itself... how could you decide what the "kid" wants to be when it grows up? I want to be a musician and an animator when I finish college, because that's what I love to do. What does your... machine love to do?

Well, I think robot children, in time, could be programmed to a point where they can be put through various levels and decide for themselves what they like to do more, based on statistics of how their "parents" reacted during that time in their lives. Sure, most children go through phases where they are entirely different from their parents; that's granted. My dad was a math major, but I myself hate math with a passion... why? I can't explain it! It's as if we are naturally programmed at birth to like or dislike different things. Robots can be programmed as such too, and no one would tell the difference. Who'd be the wiser? I think we humans are just organic machines, and the brain is no exception to the rest of the body. Our parents, as they raise us, define good and bad for us; they tell us what is right and wrong, and a machine's creators could do the same for it. These opinions on what's right and wrong will inevitably lead to conflicts throughout the robot's life, and it will struggle with itself, just as we struggle with ourselves, defining what's right and wrong...

I believe an artificial mind can be created to the level where, rather than just shutting down when it encounters a moral dilemma, it will weigh the pros and cons and make a decision... you can simply program it to force it to do so. To one who might ask "What if it makes the wrong decision?" I respond: first, who defines what's right? And second, we all make mistakes. A robot choosing the wrong thing just makes it more human. A robot that messes up at certain points, that is "forced" to have free will (like we are) - now THAT'S a robot with a soul.
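((Editor's aside: the "weigh the pros and cons and make a decision" procedure above can be sketched as a toy program. Everything here - the action names, the reasons, and the weights - is an invented illustration, not a model of any real cognitive system:))

```python
# Toy sketch of a pro/con weigher that always returns a choice
# instead of halting on a dilemma. All weights are illustrative.

def decide(options):
    """Pick the option whose pros outweigh its cons the most.

    `options` maps an action name to (pros, cons), where pros and
    cons are lists of (reason, weight) pairs.
    """
    def score(pros_cons):
        pros, cons = pros_cons
        return sum(w for _, w in pros) - sum(w for _, w in cons)

    # Rather than refusing to choose, always return the best-scoring action.
    return max(options, key=lambda name: score(options[name]))

# The police-officer dilemma from earlier in the thread, with made-up weights:
dilemma = {
    "obey the order": ([("duty", 2)], [("innocents harmed", 10)]),
    "refuse the order": ([("lives saved", 10)], [("insubordination", 3)]),
}
print(decide(dilemma))  # -> refuse the order
```

((With these weights the "conscience" side wins; flip the weights and the toy robot obeys - which is the crux of the disagreement above.))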
Socrates in Disguise Captain
Posted: Sun May 29, 2005 2:04 pm
"forced" to have free will? That's an oxy-moron. How do you figure that's how we are? At how many algorithims would you stop computing the information? The human geneology is like pi ((sorry I know you hate math)) it continues endlessly with out repeats or ends. The endless possibilities and the unrandomness of computer generated random selection would make such statistics quite unaccurate. Quote: ...a soul is defined as: An eternal, undying part of a person that defines who they are and transcends death to move on to other places. Ok then, how does a robot's mind transcend death? A simple magnet will erase all knowledge and intelligence and even life. A person who gets amnesia still knows they exist and has the ability to make choices on their own. Theology speaking...on the basis of most religions when a person dies there soul lives on in some sort of after life. what is the fate of a robot after death? If it even does die.
Posted: Sun May 29, 2005 5:01 pm
Quote: "forced" to have free will? That's an oxy-moron. How do you figure that's how we are? Well, before I get at this, tell me, do you think we really have free will? Because if we don't have free will, the soul doesn't really make a difference in our choices, does it? At times everyone in their life has to make a choice. What I mean by "forced to have free will" is, there are times in a person's life when they HAVE to choose, and they can't just back off, choose not to choose. If a robot is faced with a moment like this, and responds, rather than malfunction, that constitutes a soul. Quote: At how many algorithims would you stop computing the information? The human geneology is like pi ((sorry I know you hate math)) it continues endlessly with out repeats or ends. The endless possibilities and the unrandomness of computer generated random selection would make such statistics quite unaccurate. I don't believe humans are all THAT complicated, actually. If YOU were a police officer ordered to shoot at an innocent crowd and you were contemplating whether or not it was ethical, how much would you think? Only really basic ideas, all of which come down to the root of being "programmed" into us. Like, "Death is bad". Who says? A person generally only thinks that far when contemplating, and a robot can have similar limitations... or maybe even greater ones, deeper contemplations... making them, dare I say, "more human"? Quote: Quote: ...a soul is defined as: An eternal, undying part of a person that defines who they are and transcends death to move on to other places. Ok then, how does a robot's mind transcend death? A simple magnet will erase all knowledge and intelligence and even life. A person who gets amnesia still knows they exist and has the ability to make choices on their own. Okay, put a magnet to a robot's head, it dies. Put a gun to a human's, it dies. Different weaknesses, that's all. 
The type of death doesn't matter, both are undergoing the destruction of all they know. A magnet does destroy the mind and it's goings on, but the soul is eternal, as we define it. Just like when a human dies, all their knowledge and all their brain's goings on die, but their soul survives. The human and robot soul are the same, after all. When a robot is built, equal to a human, how can this soul tell the difference, and why would one not then choose to inhabit it? Quote: Theology speaking...on the basis of most religions when a person dies there soul lives on in some sort of after life. what is the fate of a robot after death? If it even does die. I'll only touch this slightly because I've already answered it in the previous paragraph. It makes sense that since, by our definition, souls are eternal, and only choose to inhabit humans as shells before moving onto the next one. If a robot is created with such accuracy as to completely simulate a human, there would be no problem with a soul choosing to inhabit it.
Socrates in Disguise Captain
Posted: Mon May 30, 2005 2:12 pm
Say a person is raised in a strict military family... their first instinct is to follow orders without thinking about it. Take the same person and raise them in a good Christian family... they will think before they shoot on a whim. The human conscience is in constant battle with the mind along the line between good and bad. Yet you still didn't answer my question: how do you give a robot a conscience? How would a robot ever feel guilt, remorse, hate, love? The line between right and wrong is NEVER clear. You say we could set it up to have some sort of decisive reasoning... logic... but the way we think is very illogical. Ever seen some flick where the protagonist screams, "I will save her! Even if I have to kill all these people"? That's human illogic... A robot wouldn't kill a crowd of people just to rescue one.

Or here's another example: a robot is asked something that would harm someone else. Hostage situation: he has to lie, or innocent customers at the local Bank of America will die. But lying is bad, so all those people are dead.

A human knows what it thinks and has a pretty good idea of what it will do in each situation... that is why thought is minimal in these cases... robots might think longer because they DON'T KNOW WHAT THEY WANT. Their internal protocol has to tell them.

It is my belief that the soul is born within a person and transcends to other places or planes of existence, not to other people/things. How do you create a soul?
Posted: Tue May 31, 2005 6:50 pm
Quote (Socrates in Disguise): Say a person is raised in a strict military family... their first instinct is to follow orders without thinking about it. Take the same person and raise them in a good Christian family... they will think before they shoot on a whim. The human conscience is in constant battle with the mind along the line between good and bad. Yet you still didn't answer my question: how do you give a robot a conscience? How would a robot ever feel guilt, remorse, hate, love? The line between right and wrong is NEVER clear. You say we could set it up to have some sort of decisive reasoning... logic... but the way we think is very illogical. Ever seen some flick where the protagonist screams, "I will save her! Even if I have to kill all these people"? That's human illogic... A robot wouldn't kill a crowd of people just to rescue one. Or here's another example: a robot is asked something that would harm someone else. Hostage situation: he has to lie, or innocent customers at the local Bank of America will die. But lying is bad, so all those people are dead. A human knows what it thinks and has a pretty good idea of what it will do in each situation... that is why thought is minimal in these cases... robots might think longer because they DON'T KNOW WHAT THEY WANT. Their internal protocol has to tell them. It is my belief that the soul is born within a person and transcends to other places or planes of existence, not to other people/things. How do you create a soul?

To answer your question of how you give a robot a conscience: it would be easy. You could design a pair of programs, the first of self-precedence, which bases choices on one's personal benefit, the second based around the good of the people around you. Based on what kind of personality you want the robot to have, you can give a slight boon to one over the other, to affect the most logical choices it reaches and maybe change the tides a bit.

What are our consciences but a series of contemplations that we undergo, followed by a well-thought-out decision and a pang of guilt over the choice not chosen? A robot could easily simulate this. A robot could easily feel the entire range of emotions, including love, hate, guilt, and remorse; it would just be conditioned into them. Naturally, from birth, we humans always feel a bit of remorse over the choice we decided against. If a robot were conditioned, as we are, to keep other choices in mind even after the choice has been made - well, that right there, that's regret. Every function of the human mind can be reduced to a series of calculations and patterns we all undergo, and the finished products, in the end, turn out to be somewhat randomized, depending on mood at the time and other immediate effects. Something you don't understand is that, unlike the robots from sci-fi movies, they wouldn't necessarily obey whatever a human says to them... they'd be similar to humans in every way, shape, and form. Just as YOU would lie to save a bunch of people, a perfectly designed robot would too. A robot can be programmed to realize that lives are of utmost importance, and it would be willing to disobey its teachings to save them. Just like "pushing is bad": if it's "push or be shot," there's no question as to what the robot would pick. Every pair or group of decisions is first put into syllogistic form, the different choices being put in order similar to the way a human would order them, but with small differences depending on the programmed personality of the artificial mind: first by personal emotion (provided it is very strong, like love, which holds sway over everything else), then by value (let them take out the frigates, save the cargo freights), then by quantity (kill one man to save a hundred), etc. There is no decision a human must face that could not fall into this list.

To your last point/question: I believe the definition states souls are eternal... that each human is a shell for a soul. This being the case, bodies and minds could be completely replicated, therefore being no different from real ones, and souls would not be hesitant to harbor them...
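((Editor's aside: the "pair of programs" conscience described above - a self-interest score and a common-good score blended by a personality bias, with strong emotion overriding everything else - amounts to a weighted score under a lexicographic ordering. A toy sketch, in which every name, number, and the `altruism` parameter are invented for illustration:))

```python
# Toy sketch of the two-program conscience: blend self-benefit and
# common-good scores by a personality bias, and let a strong personal
# emotion sort ahead of everything else. All values are illustrative.

def conscience(choices, altruism=0.5):
    """Pick the choice ranked highest by (emotion, blended score).

    `choices` maps a choice name to a dict with keys `self_benefit`,
    `common_good`, and `emotion` (0 means no emotional pull).
    `altruism` in [0, 1] is the "boon" given to the common-good program.
    """
    def key(name):
        c = choices[name]
        blended = (1 - altruism) * c["self_benefit"] + altruism * c["common_good"]
        # Tuple comparison makes emotion a lexicographic override:
        # any strong emotion beats any blended score.
        return (c["emotion"], blended)

    return max(choices, key=key)

# Invented version of the frigates/freights/loved-one ordering above:
situation = {
    "save the cargo freights": {"self_benefit": 2, "common_good": 6, "emotion": 0},
    "save the frigates": {"self_benefit": 5, "common_good": 3, "emotion": 0},
    "save the loved one": {"self_benefit": 1, "common_good": 1, "emotion": 9},
}
print(conscience(situation))  # -> save the loved one
```

((Tuning `altruism` is the "slight boon to one program over the other"; the emotion field is the claim that love "holds sway over everything else."))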
Socrates in Disguise Captain
Posted: Wed Jun 01, 2005 7:46 pm
Quote (TheBeatnik): To your last point/question: I believe the definition states souls are eternal... that each human is a shell for a soul. This being the case, bodies and minds could be completely replicated, therefore being no different from real ones, and souls would not be hesitant to harbor them...

To other "places," not other people. A soul is eternal, but it had to originate somewhere... hence the human body. The replication process is not hard... bodies, simple; minds, child's play. But to replicate a soul - that takes some work. When a person dies, their soul moves on to higher planes... robots are not on such planes.

((This debate is starting to go in circles. Would you like to put in your final statements and end this debate?))
Posted: Wed Jun 01, 2005 8:33 pm
Quote (Socrates in Disguise): ((This debate is starting to go in circles. Would you like to put in your final statements and end this debate?))

TheBeatnik, you may make one closing statement without raising new arguments - rebuttals or reiterations are fine. Then we shall consider the debate closed.
Posted: Fri Jun 03, 2005 12:33 pm
Quote (SanguineV): TheBeatnik, you may make one closing statement without raising new arguments - rebuttals or reiterations are fine. Then we shall consider the debate closed.

Cool. Sorry, yeah, it did start going in circles. Good debate! I bow to you. Now then, for my closing statement: I don't have much more to say than I already have. I believe that everything really important about the human condition can be conveyed into a robot. The human body and mind, though most don't like to admit it, is just a very complex natural robot. We are all driven by a series of tiny, easily replicable processes. As children we are raised to believe certain things, to react in certain ways, just as a robot is programmed to. Our emotions of regret, anger, love, etc., are all just reactions trained into our system by our parents and by society. If this is what constitutes having a soul, there is no reason why a vessel that completely replicates a human, but is composed by human hands, would not succeed in having a soul.