Author Topic: Asimovian Thought Experiment  (Read 12201 times)


Offline Wobble73

  • 210
  • Reality is for people with no imagination
    • Steam
Re: Asimovian Thought Experiment
Humans sometimes don't have free will; they sometimes act on instinct, something robots could never have!
Who is General Failure and why is he reading my hard disk?
Early bird gets the worm, but the second mouse gets the cheese
Ambition is a poor excuse for not having enough sense to be lazy.
 
Member of the Scooby Doo Fanclub. And we're not talking a cartoon dog here people!!

 You would be well advised to question the wisdom of older forumites; we all have our preferences and perversions

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
You've never really dealt with having a whole spectrum of certainty. Generally your logic has seemed to follow that if a robot has the appearance of free will, we ought to act as though it has free will. You've never differentiated between a robot having half the appearance of free will, as opposed to 90% of the appearance of free will. (Feel free to correct me on this if I missed something. That is one of my objections to your stance.)

Did you not say that you didn't want to get into yet another discussion with me over the testing? :p

I don't see where testing comes in here. In this thread, I do not remember reading any point where you talked about a 'spectrum' of free will prior to this point.

Quote
For starters, I should say that I don't propose that every single robot be tested in such a manner. If it's obvious that it's not simply a fluke ala Short Circuit, and it's actually a general property of a robot design (or a particular robot brain) then I think it's good enough to assume that such robots have free will. It would be impractical to do otherwise.

I never said every single robot would have to be tested either.

I never said you did.

However the fact remains that you could have thousands of robots off the production lines long before you had any kind of definitive test. Bear in mind that there is no real reason to think that free will wouldn't happen as the result of an accident either. Sci-fi is full of examples of smarter and smarter computers achieving sentience.

I'm not sure what the point of this paragraph is beyond adding additional information.

Quote
I believe that humans are assumed to have free will because the assumption works. I think that if we had good evidence that humans could not control themselves, it would be evident in the way that our society functioned. I think that because humans have built a society that assumes free will, and then successfully lived within a society that assumes free will, it has proven beyond reasonable doubt that humans have free will.

Let's assume that humans don't have free will but think they do. They would build a society based on free will because they think they have it. Then, because they think they have it, they would live successfully within it. By your definition that's proof beyond a reasonable doubt that they have free will, but in the end you've reached the wrong conclusion.

Then prove to me that humans don't have free will, but think they do. Present some actual evidence that supports your point.

Human society has to be based on free will in order to work. That's true, but we still quite often say that humans have lost their free will. The mentally ill, for instance, perform actions that look perfectly free-willed to them from their own narrow perspective, but those of us without the same affliction can look at the person's actions and say that he no longer has his free will and is acting differently because of altered brain chemistry. But altered from what? Who's to say that what we consider sane equals free will? Perfectly sane people do little crazy things all the time and then say, "I have no idea why I chose to do that. It seemed like a good idea at the time."

You're making the assumption that the baseline equals free will, but you haven't got any evidence to prove it. What if someone who is a habitual thief simply has a brain chemistry that makes him like stealing? If someone does that all the time you call them a kleptomaniac and treat them for it, but if they only do it from time to time you call them a thief and lock them away.

What if a sufficiently high intelligence can see that we're all a little mad and are deluding ourselves into thinking we have free will because of it?

I say that the baseline is free will because by and large that is what our government, justice system, etc. is based on. I think most people would say that our justice system has progressed since the Salem witch trials, or when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than say that they are possessed, we may diagnose them with some kind of mental illness, or find some kind of psychological quirk.

Now what is free will? Free will as defined by dictionary.com (For consistency) would be: "free and independent choice; voluntary decision". At this very instant I can snap my fingers if I choose to do so. I need not consult with anyone else. It is an entirely voluntary act, I may choose to do so or not do so when I desire, whether I am told by someone else or not.

Do most of the people around you choose when to eat, even if they're hungry? Do they sometimes choose not to sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate the ability for this kind of behavior. All of it implies that they have free will.

Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?

While I'm at it, here's another argument to further muddy the waters. In Larry Niven's Protector, the Brennan-monster frequently says that he no longer has free will since he's now too intelligent. Whenever he is presented with a choice he knows which is the best course of action to take and thus takes it. Only a mental defective would willingly take a course of action which didn't appear to be the best thing to do at the time, after all.

Now who says that AIs won't fall into that category? What if humans are smart enough to have free will but stupid enough not to lose it again? An AI like that might actually fail several of our tests for free will since it would be unable to make sub-optimal choices. We happen to regard being able to make sub-optimal choices as an example of free will but yet again we could be deluding ourselves. It could be that we're simply too dumb to make the best choice in every situation.

The best decision...for who? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?

It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that we humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.

So this doesn't seem like an argument against free will, but rather, an argument that robots don't even need apparent free will to be a part of human society.

You seem to be arguing based on the assumption that free will is binary. You either have it or you don't. It might be nothing of the kind.

Then pay more attention to my posts where I acknowledge just that:

You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

Bear in mind you can't prove that any human on the planet won't go mad. So it would have to depend on the chance of it losing control.

That seems like a wise course of action.

Quote
Nor do we try animals for killing and eating other animals, or even humans, at least not generally. Here the reasons are less well-defined, but I think most people figure that the animal doesn't know better, it's natural behavior for the animal, and that is the way that the animal gets nourishment. Perhaps the animal is a carnivore and so it is biologically geared towards killing and eating other complex animals. But nor do we grant them rights such as freedom of speech or the right to vote. Again, we have applied some arbitrary mental test to animals and decided to exclude them from human society. Some animals are believed to be highly intelligent; but we still treat them the same as other animals.

Need I point out that there are several people involved in trying to get chimps rights on the grounds that said tests are flawed?

Quote
If we're only 50% sure that a robot has free will, but we're 99.9% sure that humans have free will, it seems kind of self-evident. We would not grant full rights to somebody unable to control themselves half the time. Most likely we would lock them up and separate them from society, if we considered them a threat to themselves or others.

I didn't say that the robot was uncontrollable 50% of the time. I said that we aren't certain the robot has free will or not because the tests aren't good enough. Using the example above a free will test for an AI smarter than humanity might fail to give conclusive evidence in either direction simply because the test is wrong.

I didn't say that the robot was uncontrollable 50% of the time, either. When I wrote that, I envisioned a robot that runs the risk of having a human give him orders and being forced to obey them. After all, by the very definition of free will, a robot would have to be uncontrollable.

I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?
-C

 

Offline TrashMan

  • T-tower Avenger. srsly.
  • 213
  • God-Emperor of your kind!
    • FLAMES OF WAR
Re: Asimovian Thought Experiment
Really... arguing against humans having free will makes as much sense as claiming they have no hands and breathe liquid nitrogen...

As for robots, I really don't think they will EVER be complex enough to compete with humans... for both technical and practical reasons.
Nobody dies as a virgin - the life ****s us all!

You're a wrongularity from which no right can escape!

 

Offline Colonol Dekker

  • HLP is my mistress
  • Moderator
  • 213
  • Aken Tigh Dekker- you've probably heard me
    • My old squad sub-domain
Re: Asimovian Thought Experiment
I would honestly like a Futurama robot. But it's a lottery whether we'll have R2-D2s, T-800s, Benders or Sonnys.

It'll be fun regardless......... :D
Campaigns I've added my distinctiveness to-
- Blue Planet: Battle Captains
-Battle of Neptune
-Between the Ashes 2
-Blue planet: Age of Aquarius
-FOTG?
-Inferno R1
-Ribos: The aftermath / -Retreat from Deneb
-Sol: A History
-TBP EACW teaser
-Earth Brakiri war
-TBP Fortune Hunters (I think?)
-TBP Relic
-Trancsend (Possibly?)
-Uncharted Territory
-Vassagos Dirge
-War Machine
(Others lost to the mists of time and no discernible audit trail)

Your friendly Orestes tactical controller.

Secret bomb God.
That one time I got permabanned and got to read who was being bitxhy about me :p....
GO GO DEKKER RANGERSSSS!!!!!!!!!!!!!!!!!
President of the Scooby Doo Model Appreciation Society
The only good Zod is a dead Zod
NEWGROUNDS COMEDY GOLD, UPDATED DAILY
http://badges.steamprofile.com/profile/default/steam/76561198011784807.png

 

Offline Roanoke

  • 210
Re: Asimovian Thought Experiment
Doesn't the existence of laws, any laws, contradict the possibility of free will (whether it's Asimov and robots or, on a more abstract note, humans too)?

What if a robot doesn't want to "live"? A human can throw themselves off a bridge if they choose to. A robot wouldn't have the choice.

I also agree with K. though. Until we come up with a test that can't be cheated, we'll never really know.

 

Offline Flaser

  • 210
  • man/fish warsie
Re: Asimovian Thought Experiment
I think a whole lot of you are afraid of the possibility that the soul is a phenomenon associable with the physical world. I prefer - and see the most evidence for - the Shirowian, 'Ghost in the Shell' approach.

Namely, any sufficiently complex system's behaviour will eventually reach such a chaotic state that you can't deterministically predict its exact reactions with any usable certainty. You can still make accurate predictions about the average of those actions and about long-term trends.

This is what you could call 'will', as it has characteristics (long-term trends) that make it immediately personal, but it is still not a simple program that could be run time and again to receive the same results.
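A quick toy sketch of that claim (mine, not Shirow's; the function name and numbers are purely illustrative): the logistic map is a fully deterministic one-line system, yet in its chaotic regime two almost-identical starting states quickly become unrecognizably different, while the long-run average stays stable.

```python
# Illustrative sketch only: the logistic map, deterministic but chaotic
# at r = 4. Exact futures are unpredictable; long-term averages are not.
def logistic_trajectory(x0, r=4.0, steps=5000):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # starts a millionth apart

# Sensitive dependence: within a few dozen steps the two runs bear no
# resemblance to each other, so exact prediction is hopeless.
print(abs(a[50] - b[50]))           # typically an order-1 difference

# But the statistical trend is robust: both time-averages sit near 0.5.
print(sum(a) / len(a), sum(b) / len(b))
```

In Flaser's terms, the per-decision behaviour is the unpredictable part and the stable long-term statistics are the 'personal character'; whether minds actually work this way is of course the open question.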

This is a structuralist approach. Shirow goes beyond it by stating that the current structure is built upon earlier versions, and that said structures manifest themselves in base functions you could call instinct.

Philosophy of Ghost in the Shell

Therefore any robot, AI, or other system with enough complexity and redundant base functions would have a soul by my definition, and while its nature and evolution would be radically different from ours, I don't believe that as individuals or as a species they would have any less legitimacy or genuineness than humans in my eyes.
"I was going to become a speed dealer. If one stupid fairytale turns out to be total nonsense, what does the young man do? If you answered, “Wake up and face reality,” you don’t remember what it was like being a young man. You just go to the next entry in the catalogue of lies you can use to destroy your life." - John Dolan

  

Offline Mefustae

  • 210
  • Chevron locked...
Re: Asimovian Thought Experiment
R2-D2s, T-800s, Benders or Sonnys.
Disclaimer: If you can identify all four (4) of the aforementioned robots in under 10 seconds, you qualify as a geek.

 

Offline TrashMan

  • T-tower Avenger. srsly.
  • 213
  • God-Emperor of your kind!
    • FLAMES OF WAR
Re: Asimovian Thought Experiment
R2-D2 = Star Wars
T-800 = Terminator
Bender = Futurama
Sony = ???? Isn't this a company rather than a robot?


Humans can't (and won't) produce something they cannot comprehend themselves.
The human brain is so ridiculously complex that it takes a human lifetime to study it, and you are still only scratching the surface.

To manufacture a robot with such a complex "artificial" brain would require ENORMOUS amounts of knowledge, work hours, technology and, most of all, money...

So I see it as a Dyson Sphere thing: possible in theory (lol... even though the theory is on a very shaky foundation) but something that will NEVER be built.

 

Offline Ghostavo

  • 210
  • Let it be glue!
    • Skype
    • Steam
    • Twitter
Re: Asimovian Thought Experiment
Sonny is from a story by the same author this thread is about; that should give you enough clues.
"Closing the Box" - a campaign in the making :nervous:

Shrike is a dirty dirty admin, he's the destroyer of souls... oh god, let it be glue...

 

Offline Janos

  • A *really* weird sheep
  • 28
Re: Asimovian Thought Experiment
Doesn't the existence of laws, any laws, contradict the possibility of free will (whether it's Asimov and robots or, on a more abstract note, humans too)?
No, because laws are sociological constructs which outline rules for living in a society, and they are frequently broken.

Quote
What if a robot doesn't want to "live"? A human can throw themselves off a bridge if they choose to. A robot wouldn't have the choice.

Why not? Wouldn't that only depend on the flexibility of said robot's coding, which dictates its behaviour?

Quote
I also agree with K. though. Until we come up with a test that can't be cheated, we'll never really know.
I love free will vs. determinism debates; they never go anywhere. Maybe, just maybe, because we cannot simply perform wide empirical research on humans (the same reason why biologists and sociologists often clash about human behaviour).
lol wtf

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
However the fact remains that you could have thousands of robots off the production lines long before you had any kind of definitive test. Bear in mind that there is no real reason to think that free will wouldn't happen as the result of an accident either. Sci-fi is full of examples of smarter and smarter computers achieving sentience.

I'm not sure what the point of this paragraph is beyond adding additional information.


It's yet another example of how a robot with free will could be created before any test to prove free will even exists.

I say that the baseline is free will because by and large that is what our government, justice system, etc. is based on.


That's very circular logic. The legal system is based on the assumption of free will because only the presence of free will creates the need for a legal system.

Quote
I think most people would say that our justice system has progressed since the Salem witch trials, or when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than say that they are possessed, we may diagnose them with some kind of mental illness, or find some kind of psychological quirk.

Ironically I was about to raise a similar point myself. Why are you assuming that this is where the progress ends? What if, 400 years from now, we don't have prisons at all because humanity has proved that all criminal behaviour (and not just some of it) is due to psychological problems? We already use criminal profiling to catch murderers, and such profiling always talks about the murderer's need to do x or y. Yet when we catch them we decide that they were 100% responsible for their actions and thus have to be sent to jail for them. What if in 400 years prison is viewed as being as stupid a notion as demonic possession?

What if a large number of things attributed to free will are actually not due to free will at all?

Quote
Do most of the people around you choose when to eat, even if they're hungry? Do they sometimes choose not to sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate the ability for this kind of behavior. All of it implies that they have free will.


Again with the assumption of binary free will that is either on or off. What if whether you snap your fingers is all your choice, when you sleep is 90% your choice, and whether or not you murder the pretty girl who just walked past is down to the unique set of psychological quirks you've built up during your lifetime and is something you have little control over?

Quote
Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?


In the same way that a sufficiently intelligent being could see that the Salem witch trials were bull****.

The best decision...for who? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?

It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that we humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.


But why on earth would he "act mentally defective"? You're taking a very anthropomorphic view of the subject. Why would a highly intelligent being deliberately choose to do that? Again, I'm not assuming that AIs will be comparable to us in intelligence. What if the first AI is to us what we are to chimps? Cats? Woodlice, even? Would you still expect the same systems that govern our existence to necessarily be relevant at all to such an AI?

Humans are stupid. We can choose to make the wrong choice. But that doesn't mean that every intelligent being has to be like that.

Quote
You seem to be arguing based on the assumption that free will is binary. You either have it or you don't. It might be nothing of the kind.

Then pay more attention to my posts where I acknowledge just that:

You admit that you don't know and you go from there. Perhaps you grant it some sort of partial rights based on what you can prove. But if you can't prove that a robot will not lose control of itself arbitrarily, you've already hit a definition that would exclude a human from certain aspects of society.

Nope. As far as I can see from that, you're still assuming binary free will and simply assigning a percentage chance that the robot has it or not. I'm talking about something different: there being a spectrum of free will, with certain actions under your complete control and others only partially or not at all under your control. A robot under Asimov's laws does not have free will. Yet this does not mean it would act uncontrollably, because we understand exactly what the limits on its free will are and have determined what danger they present.
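To make the "known limits" point concrete, here's a hypothetical sketch (every name and field here is invented for illustration, none of it from Asimov's actual stories) of the Three Laws as a strict priority filter over candidate actions:

```python
from dataclasses import dataclass

# Hypothetical sketch: each candidate action is tagged with the ways it
# could violate Asimov's laws.
@dataclass
class Action:
    name: str
    harms_human: bool = False      # First Law: injure, or allow harm
    disobeys_order: bool = False   # Second Law: ignore a human order
    risks_self: bool = False       # Third Law: endanger the robot

def choose(candidates):
    """Apply the Three Laws in strict priority order."""
    # First and Second Laws are hard vetoes, in that order.
    legal = [a for a in candidates if not a.harms_human]
    legal = [a for a in legal if not a.disobeys_order]
    if not legal:
        return None
    # Third Law is only a preference among the surviving actions:
    # self-preservation yields to the first two laws.
    return min(legal, key=lambda a: a.risks_self)

picked = choose([
    Action("push bystander", harms_human=True),
    Action("obey order, through the fire", risks_self=True),
    Action("obey order, around the fire"),
])
print(picked.name)   # "obey order, around the fire"
```

The point being that the robot's non-choices are legible in the code itself: you can read off exactly which actions it can never pick, so its "free will" is bounded in a way we can state precisely, without the robot being uncontrollable.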

Besides, you keep bringing up the law. And the law does make a binary distinction between free will and insanity. A man who committed a murder is either guilty or insane. There is no "well, he was mostly insane, but he could have chosen not to do x, so we give him a reduced prison sentence for that minor mistake and treat him for the insanity which is mostly to blame." Suppose we have a person who would medically qualify as a psychopath today but wouldn't have a year ago because the tendencies weren't quite strong enough yet. Would you say that a murder committed 6 months ago was 100% free will? What about 3 months ago? Yesterday? At which point does the choice suddenly flip between insanity and free will?

Quote
I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?

You've missed the point completely if you think I need to propose an alternative. That's like saying "Well science might not give us the right answer so are you proposing voodoo as an alternative?" I'm simply saying that science may not give you a conclusive answer. So making statements like "I'll give a robot rights based on how well I can determine if it has free will" means little if your tests are not conclusive in the first place. 

Asking me for an alternative simply proves that you're not paying attention to what I'm saying. There is no alternative (at least no sensible one). I'm asking what you do when the tests ARE inconclusive, not what other tests you would do instead.

I asked you what you do with a robot that you're 50% certain has free will. What you're suggesting so far sounds a little too much like this:

Man 1: I think he's dead. But I'm not a doctor. I can't tell
Man 2: How certain are you he's dead?
Man 1: I'm 50:50. He could be in a deep coma.
Man 2: Well then. If you're 50% certain he's dead we'll only bury him up to his waist.

What do you do when your 50% robot is accused of a crime?
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Roanoke

  • 210
Re: Asimovian Thought Experiment
Doesn't the existence of laws, any laws, contradict the possibility of free will (whether it's Asimov and robots or, on a more abstract note, humans too)?
No, because laws are sociological constructs which outline rules for living in a society, and they are frequently broken.

Quote
What if a robot doesn't want to "live"? A human can throw themselves off a bridge if they choose to. A robot wouldn't have the choice.

Why not? Wouldn't that only depend on the flexibility of said robot's coding, which dictates its behaviour?

Quote
I also agree with K. though. Until we come up with a test that can't be cheated, we'll never really know.
I love free will vs. determinism debates; they never go anywhere. Maybe, just maybe, because we cannot simply perform wide empirical research on humans (the same reason why biologists and sociologists often clash about human behaviour).

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I appreciate your point, but in the Asimov text it looks pretty clear-cut to me.

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: Asimovian Thought Experiment
I think a whole lot of you are afraid of the possibility that the soul is a phenomenon associable with the physical world. I prefer - and see the most evidence for...

That's very similar to what I was saying earlier; no one paid any attention to it then, either.
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline Flaser

  • 210
  • man/fish warsie
Re: Asimovian Thought Experiment
I think a whole lot of you are afraid of the possibility that the soul is a phenomenon associable with the physical world. I prefer - and see the most evidence for...

That's very similar to what I was saying earlier; no one paid any attention to it then, either.

"Logically sound? How laughable. The only thing that people use logic for is to see what they want to see and disregard what they do not." - my sig in a lot of other forums

 

Offline Colonol Dekker

  • HLP is my mistress
  • Moderator
  • 213
  • Aken Tigh Dekker- you've probably heard me
    • My old squad sub-domain
Re: Asimovian Thought Experiment
Logic is all about point of view anyway.

Bear with me on this one: a species like the black widow or praying mantis eats the male after mating, correct?

To them it seems logical because it stops the male breeding with other females, but other species bond with the male and prevent him from "throwing it about like muck" (angler fish).

 

Offline Janos

  • A *really* weird sheep
  • 28
Re: Asimovian Thought Experiment
Logic is all about point of view anyway.

Bear with me on this one: a species like the black widow or praying mantis eats the male after mating, correct?

To them it seems logical because it stops the male breeding with other females, but other species bond with the male and prevent him from "throwing it about like muck" (angler fish).

Actually, the male is easy meat and a good source of protein, so hell, why not? The male's possible sex life, something that is completely unheard of here on the internet, does not interest the female in the slightest.

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Quote
I say that the baseline is free will because by and large that is what our government, justice system, etc. is based on.


That's very circular logic. The legal system is based on the assumption of free will because only the presence of free will creates the need for a legal system.

If *only* the presence of free will creates the need for a legal system, and we have a need for a legal system, what does that suggest?

Surely that's not what you meant.

Quote
I think most people would say that our justice system has progressed since the Salem witch trials, or when people attributed criminal behavior to demons or spirits. In that time we have held people more accountable for their actions - rather than say that they are possessed, we may diagnose them with some kind of mental illness, or find some kind of psychological quirk.

Ironically I was about to raise a similar point myself. Why are you assuming that this is where the progress ends?

I didn't say this was where progress ends.

What if, 400 years from now, we don't have prisons at all because humanity has proved that all criminal behaviour (and not just some of it) is due to psychological problems? We already use criminal profiling to catch murderers, and such profiling always talks about the murderer's need to do x or y. Yet when we catch them we decide that they were 100% responsible for their actions and thus have to be sent to jail for them. What if in 400 years prison is viewed as being as stupid a notion as demonic possession?

What if a large number of things attributed to free will are actually not due to free will at all?

Are you actually making the claim that all crime is due to psychological problems which are completely outside the criminal's control? Or are you just spewing extraneous crap because you don't feel like directly refuting my point?

I can elaborate. By moving the blame from unprovable entities such as demons or spirits to more provable things like psychological afflictions, we've shifted from believing that people are affected by unprovable forces that _might_ interfere with free will (which you keep relying on to support your argument) to believing that people are affected by (arguably) provable forces that we can show would interfere with free will.

Even if we were to decide that, 400 years from now, all crime is caused by psychological illness - we would still be relying on evidence.

Quote
Do most of the people around you choose when to eat, even if they're hungry? Do they sometimes choose not to sleep or nap, even if they feel tired? Do they perform tasks only when directly told to, or are they capable of doing things simply because they want to get something out of it? Do they do things for other people, even though they don't want to? Most humans I've seen demonstrate the ability for this kind of behavior. All of it implies that they have free will.


Again with the assumption of binary free will that is either on or off. What if whether you snap your fingers is all your choice, when you sleep is 90% your choice, and whether or not you murder the pretty girl who just walked past is down to the unique set of psychological quirks you've built up during your lifetime and is something you have little control over?

A) You're not even talking about the same point I was.
B) You're not even trying to give any evidence for your "What if" statements.

Quote
Now, what evidence does your sufficiently high intelligence offer which proves we are all deluding ourselves?


In the same way that a sufficiently intelligent being could see that the Salem witch trials were bull****.

Nope. Not good enough.

Quote
The best decision...for who? By what criteria? Is it the morally best course of action? Is it the most beneficial course of action? Is it the most logical course of action? Is it the most consistent course of action?

It's an interesting example, but as long as he can act mentally defective, he hasn't lost free will. I believe that us humans are capable of knowing the best course of action but, even so, we can still decide to ignore it and do something else instead.


But why on earth would he "act mentally deficient"? You're taking a very anthropomorphic view of the subject. Why would a highly intelligent being deliberately choose to do that? Again, I'm not assuming that AIs will be comparable to us in intelligence. What if the first AI is to us what we are to chimps? Cats? Woodlice, even? Would you still expect the same systems that govern our existence to necessarily be relevant at all to such an AI?

Humans are stupid. We can choose to make the wrong choice. But that doesn't mean that every intelligent being has to be like that.

Simply because a being doesn't choose to take an action doesn't mean that it can't.

If an intelligent being can't make the wrong choice, it doesn't have free will.

Nope. As far as I can see from that, you're still assuming binary free will and simply assigning a percentage chance that the robot has it or not. I'm talking about something different: there being a spectrum of free will, with certain actions under your complete control and others only partially or not at all under your control. A robot under Asimov's laws does not have free will.

Now who's assuming binary free will?

In virtually every one of Asimov's stories, robots have displayed a certain amount of free will, even while following the laws. Bicentennial Man, for instance.

Yet this does not mean it would act uncontrollably because we understand exactly what the limits on its free will are and have determined what danger they present.

No, we haven't. Even when the laws were known in Asimov's robot stories, robot behavior was not 100% predictable.

Besides, you keep bringing up the law. And the law does make a binary distinction between free will and insanity. A man who committed a murder is either guilty or insane.

That's not completely true. A man who committed a murder may be guilty of first degree murder, second degree murder, or (in some states) third degree murder, each with their own varying definitions of how responsible the murderer was for the murder. (EG were their actions completely considered or an emotional outburst?)

There is no "well, he was mostly insane, but he could have chosen not to do x, so we give him a reduced prison sentence for that minor mistake and treat him for the insanity which is mostly to blame." Suppose we have a person who would medically qualify as a psychopath today but wouldn't have a year ago because the tendencies weren't quite strong enough yet. Would you say that a murder committed 6 months ago was 100% free will? What about 3 months ago? Yesterday? At which point does the choice suddenly flip between insanity and free will?

You'd have to ask someone who actually wants to argue about the fine points of mental illnesses and the proper treatment of them.

Quote
I don't see why you choose to be obsessed with the tests being wrong. Of course they'll be inaccurate. So what do you propose as an alternative?

You've missed the point completely if you think I need to propose an alternative.

If you are going to argue against determining whether a robot has free will via a scientific process, you had damn well better have an alternative. You have completely failed to provide any substantial evidence to support your point. Hell, you seem to keep on changing your point. First you were saying that free will was irrelevant, then you started differentiating between the appearance of free will and actual free will, and now you're starting to complain that I haven't suitably addressed partial free will (when, prior to that point, neither had you).

That's like saying "Well science might not give us the right answer so are you proposing voodoo as an alternative?" I'm simply saying that science may not give you a conclusive answer.

And I've already explicitly stated that I believe that a test for free will could give you a wrong answer. Now you're arguing that the tests might be inconclusive? Decide what you're actually objecting to. :doubt:

Asking me for an alternative simply proves that you're not paying attention to what I'm saying.

No, it just means that I have more respect for myself than you apparently do.

I've stated my solution. I've stated my supporting evidence. I've made an effort to provide substantial evidence.

You have stated no solution. You haven't come up with any supporting evidence. You've only made an effort to come up with unprovable thought experiments and somebody else's fictional characters.

There is no alternative (at least no sensible one). I'm asking what you do when the tests ARE inconclusive, not what other tests you would do instead.

I've acknowledged that the tests might not be right. That the tests might not be conclusive is your argument, one that I'm not participating in until you can prove your point. (And actually have a consistent point.)

I asked you what you do with a robot that you're 50% certain has free will. What you're suggesting so far sounds a little too much like this:

Man 1: I think he's dead. But I'm not a doctor, I can't tell.
Man 2: How certain are you he's dead?
Man 1: I'm 50:50. He could be in a deep coma.
Man 2: Well then. If you're 50% certain he's dead, we'll only bury him up to his waist.

I'm sorry that you think it sounds like that's what I'm saying. That's not what I'm saying.

What do you do when your 50% robot is accused of a crime?

You should figure out if the accusation is true or not. :)
-C

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Yet again you're deliberately misunderstanding me. So I'm out.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: Asimovian Thought Experiment
how exactly are we defining free will in this... discussion?
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline Wobble73

  • 210
  • Reality is for people with no imagination
    • Steam
Re: Asimovian Thought Experiment
how exactly are we defining free will in this... discussion?

 :nervous:

What discussion??  :nervous:

What was the question again?

 :drevil:  :lol:
Who is General Failure and why is he reading my hard disk?
Early bird gets the worm, but the second mouse gets the cheese
Ambition is a poor excuse for not having enough sense to be lazy.
 
Member of the Scooby Doo Fanclub. And we're not talking a cartoon dog here people!!

 You would be well advised to question the wisdom of older forumites, we all have our preferences and perversions