

Machiavelli, The Prince
It is a sound maxim that reprehensible actions may be justified by their effects, and that when the effect is good, it always justifies the action. For it is the man who uses violence to spoil things, not the man who uses it to mend them, that is blameworthy. A Prince should therefore disregard the reproach of being thought cruel where it enables him to keep his subjects united and loyal. For he who quells disorder by a very few signal examples will in the end be more merciful than he who from too great leniency permits things to take their course and so result in chaos and bloodshed; for these hurt the whole state, whereas the severities of the Prince injure individuals only. It is essential therefore, for a Prince who desires to maintain his position, to have learned how to be other than good, and to use or not use his goodness as necessity requires.
This is one of the classic defences of the simple claim that "the ends justify the means", in the context of utilitarianism (the greatest good for the greatest number).

Personally, however, I'm a little confused as to how the ends, which are always intentional and always undecided (let us discard the fantastical notions of fate and destiny, seeing as they have no grounding or proof), are supposed to justify the means, which are rather more substantial (in that the present is decided, whereas the future isn't), and which will have effects upon the actual results (i.e. will affect the likelihood of the ends occurring).

However, the utilitarian notion of the 'greater good' seems far more fallible. For example:

Say that terrorists have, somehow, gotten hold of a bargaining chip. Hostages, key data, maybe a dangerous weapon. You are presented with a simple choice - they make use of their chip, or you kill one single innocent person.

The basic moral instinct of most is to jump at the word "innocent" and say "no." Utilitarianism, on the other hand, declares that the individual should be sacrificed for the good of those who will be affected by the chip. However, let us modify the scenario - how about 10 innocents, or 100 (yes, Swordfish)? At what point does the 'greater good' cease to be greater? How can you determine what is more important out of a selection of lives, the majority of which you are unaware of? Are the hostages all tramps, or members of the UN? Are those who you would have to kill potential presidents, or drop-outs?

This argument isn't particularly sound, but it does demonstrate some of the basic flaws in utilitarianism, and thus weakens the principles of the Machiavellian example. There are, of course, other factors, which is why I'm not going to write out everything now, but rather put it up for debate as it is.

Is utilitarianism justifiable, on either a local, universal or case-by-case basis, and does it justify the Machiavellian ideals (create good by any method, no matter how bad)?
Felle
Personally, however, I'm a little confused as to how the ends, which are always intentional and always undecided (let us discard the fantastical notions of fate and destiny, seeing as they have no grounding or proof), are supposed to justify the means, which are rather more substantial (in that the present is decided, whereas the future isn't), and which will have effects upon the actual results (i.e. will affect the likelihood of the ends occurring).

Even though the future is undecided, it would make no sense to take no action simply because we cannot be certain that the outcome will be exactly what we predict. I fail to see how this is an objection to utilitarianism, as all you have demonstrated is that there is always an element of uncertainty in any decision-making process.

Felle
The basic moral instinct of most is to jump at the word "innocent" and say "no." Utilitarianism, on the other hand, declares that the individual should be sacrificed for the good of those who will be affected by the chip. However, let us modify the scenario - how about 10 innocents, or 100 (yes, Swordfish)? At what point does the 'greater good' cease to be greater? How can you determine what is more important out of a selection of lives, the majority of which you are unaware of? Are the hostages all tramps, or members of the UN? Are those who you would have to kill potential presidents, or drop-outs?

Again, what is the objection here? All you have done is present questions that, given the appropriate information, could indeed be answered by a utilitarian. None of these examples actually prove that there is any "fallibility" in utilitarianism as none of them actually demonstrate a fundamental flaw in the calculations it uses to arrive at its conclusions.

Felle
This argument isn't particularly sound, but it does demonstrate some of the basic flaws in utilitarianism, and thus weakens the principles of the Machiavellian example. There are, of course, other factors, which is why I'm not going to write out everything now, but rather put it up for debate as it is.

No it's not, and no it doesn't.

Felle
Is utilitarianism justifiable, on either a local, universal or case-by-case basis, and does it justify the Machiavellian ideals (create good by any method, no matter how bad)?

The surest test of your objection is this: what would you propose as an alternative? Surely you are not just attacking a moral system without an alternative?
Hmmm, I say it is justifiable on an individual basis, and should be used if you really don't care what others think, because obviously many will voice dissent. Though the danger with the whole "the end justifies the means" is that you are screwed if it doesn't end well.
Tangled Up In Blue
Even though the future is undecided, it would make no sense to take no action simply because we cannot be certain that the outcome will be exactly what we predict. I fail to see how this is an objection to utilitarianism, as all you have demonstrated is that there is always an element of uncertainty in any decision-making process.
I feel this is quite obvious. A plan is a step further away in time than an action, and is thereby a step more fallible: it is doubly fallible, because it relies upon the action, whereas the action relies upon nothing.

Hence, a bad action might cause a bad result, or it might cause a good result, but more importantly, the bad action will certainly be bad. You are therefore aware of the nature of the action you undertake, but unaware of what the nature of the result will be.

Tangled Up In Blue
Again, what is the objection here? All you have done is present questions that, given the appropriate information, could indeed be answered by a utilitarian. None of these examples actually prove that there is any "fallibility" in utilitarianism as none of them actually demonstrate a fundamental flaw in the calculations it uses to arrive at its conclusions.
Obviously, it is anecdotal. Utilitarianism refuses to specify how exactly you define 'the greater good' in any situation. There are numerous greater goods you could choose from: the greater good of everyone, everywhere, ever; or a narrower perspective, say, everyone in your country, etc. Given that utilitarianism only provides a basic ideal, and does nothing to justify or explain the practical application of that ideal, it leaves itself open to standard realistic problems, whereas systems which take more care to specify their nature will not experience such problems.

Tangled Up In Blue
The surest test of your objection is this: what would you propose as an alternative? Surely you are not just attacking a moral system without an alternative?
Perhaps I am, but if you do want an alternative, there is an obvious one:

All means must be justified before any ends be considered. Emergent properties cannot be used to justify individual breaches of codes of conduct. All doctrines must be strict both to the individual and the group; they should not make exceptions in the case of one solely on a rule of thumb that the other is more important.
Felle
I feel this is quite obvious. A plan is a step further away in time than an action, and is thereby a step more fallible: it is doubly fallible, because it relies upon the action, whereas the action relies upon nothing.

Hence, a bad action might cause a bad result, or it might cause a good result, but more importantly, the bad action will certainly be bad. You are therefore aware of the nature of the action you undertake, but unaware of what the nature of the result will be.

Are you suggesting that actions can be judged in any way other than by their results? If so, you assume that there is inherent morality or immorality in certain actions, which leads to an inability to act.

For example, let's apply your logic and assume that killing is a "bad" act. Now, say I am in a room where a man with a gun is threatening two other people. He will shoot them if something is not done. I too have a gun. If the act of killing is inherently bad, then the more moral action would be to do nothing, as doing nothing would not be good per se, but at least neutral. If, however, I do nothing and so commit the less bad act, the man shoots and kills the other two people. Thus I am presented with an impossible choice: commit a bad act, or allow someone else to commit a bad act.

Utilitarianism solves this problem by allowing me to make a calculation based on the situation. If I kill this man, I can say, the result will be bad insomuch as I will have, as a result of my actions, killed another human being. On the other hand, the alternative is that he will kill two human beings, which, we must assume, is worse than killing one. Thus I can safely say that I am doing the more moral thing by killing one person instead of allowing two to be killed. Your idea of moral and immoral action only leads to deadlock and indecision.

Felle
Obviously, it is anecdotal. Utilitarianism refuses to specify how exactly you define 'the greater good' in any situation. There are numerous greater goods you could choose from: the greater good of everyone, everywhere, ever; or a narrower perspective, say, everyone in your country, etc. Given that utilitarianism only provides a basic ideal, and does nothing to justify or explain the practical application of that ideal, it leaves itself open to standard realistic problems, whereas systems which take more care to specify their nature will not experience such problems.

I still fail to see your point. What system are you referring to?

Tangled Up In Blue
Perhaps I am, but if you do want an alternative, there is an obvious one:

All means must be justified before any ends be considered. Emergent properties cannot be used to justify individual breaches of codes of conduct. All doctrines must be strict both to the individual and the group; they should not make exceptions in the case of one solely on a rule of thumb that the other is more important.

See my above example. Means are justified by their results, not by themselves.
