Author Topic: Asimovian Thought Experiment  (Read 12186 times)


Offline castor

  • 29
    • http://www.ffighters.co.uk./home/
Re: Asimovian Thought Experiment
I believe that the essence of free will, if such a thing exists, is beyond the scope of human understanding. Much like the concepts of "infinity", "eternity", etc.
Heh, one can experience things that are impossible to conceptualize. Try and debate that :confused:

 

Offline phreak

  • Gun Phreak
  • 211
  • -1
Re: Asimovian Thought Experiment
A robot with free will implies that there is some randomness in its decision tree.  All pseudo-random number generation algorithms are deterministic: the results will be the same if the seed is the same.  So if you can manipulate the random seed, you can get the "free-willed" robot to perform a given action 100% of the time.

http://en.wikipedia.org/wiki/Pseudo-random_number_generator
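For instance, a quick Python illustration of the point (just a demo, any seeded PRNG behaves the same way):

Code:
import random

# Two generators seeded with the same value produce identical "random" streams.
rng_a = random.Random(1234)
rng_b = random.Random(1234)

stream_a = [rng_a.random() for _ in range(5)]
stream_b = [rng_b.random() for _ in range(5)]

assert stream_a == stream_b  # control the seed, control every "decision"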

edit:  I didn't read the previous 4 pages, so this was probably brought up already.

edit2: I need to quote this :p

Quote
Farnsworth: Behold! The death clock. Simply jam your finger in the hole and this read-out tells you exactly how long you have left to live.
Leela: Does it really work?
Farnsworth: Well it's occasionally off by a few seconds. What with free will and all.
Fry: Sounds like fun. How long do I have left to live?  <He puts his finger in the hole and the clock dings>
Bender: Ooh! Dibs on his CD player!
« Last Edit: May 31, 2007, 12:15:27 pm by phreak »
Officially approved by Ebola Virus Man :wtf:
phreakscp - gtalk
phreak317#7583 - discord

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Yet again you're deliberately misunderstanding me. So I'm out.

At the very beginning of this debate you objected to another poster's point specifically because he wasn't able to prove one of his points. You stated that for his point to be at all valid as an argument he would have to prove that point.

You, on the other hand, haven't held yourself to that same standard. You've made multiple points in this thread which rely on being prefaced with "What if?" because they're inherently unprovable. I don't see any point in arguing with someone who sets no limits on their own objections, but then demands that the other posters be constrained by what is actually provable. I would always be arguing at an unfair disadvantage.

Furthermore, you've failed to define what you're actually arguing. I've dropped a couple of explicit definitions in my posts to try to make it clearer what I mean by "free will"; you haven't. I've clearly listed the reasoning behind my argument; you haven't. I've tried to make it clear what, exactly, my assertion is; you haven't.

I, personally, get tired of being asked to come up with how I believe people should respond to completely imaginary situations, and then getting attacked for my answer each and every time. You don't even bother to come up with how you think they should respond, at least not in the same detail that I have, so again it's an inherently unfair position to argue from.

I don't see much point to your objections; they don't provide for interesting discussion, they just push the discussion into more and more philosophical territory. They aren't factual in nature, so nobody is really learning anything.
-C

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
I have no problem with a discussion that is based on facts. I do, however, see no point in discussing anything with someone who doesn't wish to understand my position and whose only interest is in seeing how they can twist my words in order to win.

The existence of free will is a philosophical point. That was my entire objection to it being used. You entered this discussion on a philosophical point right in your very first reply to me. And then you say that I need to back up my points with scientific answers and evidence? For a philosophical argument?

You have then repeatedly gone back to the same philosophical point: that humans must have free will because our laws make it so. And yet I'm wrong because I'm making philosophical arguments? :lol:
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: Asimovian Thought Experiment
So, was a definition dropped somewhere already?
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Now, what is free will? Free will as defined by dictionary.com (for consistency) would be: "free and independent choice; voluntary decision".

Granted, that's only definition 1; definition 2 is:

Quote
2.   Philosophy. the doctrine that the conduct of human beings expresses personal choice and is not simply determined by physical or divine forces.
-C

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: Asimovian Thought Experiment
The first definition is fairly useless, as it just brings more undefined terms into the fray, terms which are more or less exactly like free will.

The second, however, I think can be used: "not simply determined by physical forces". From this I could be willing to go either way. Everything in the world is a result of physical forces, so in this respect I'd say no, we wouldn't have free will by that definition. However, I don't quite agree with the word "determined": in nature there seems to be a lot of genuine chaos. At the atomic level things work in a statistical manner, not the solid way we are used to thinking about them, and this boils up to the macroscopic level. Even computers make mistakes every now and then, so something as complex and imprecise as an animal brain (like ours) will likely have a much larger variance rate. So given exactly the same situation, I would wager that though you would often get very similar results, there would be some small amount of unpredictability that would make it impossible for you to ever 100% accurately determine what would happen. So if full determinism is the only alternative to free will, then I'm going to have to go with free will.
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
If it's the only alternative... I don't think anyone in this thread was saying that it was.
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline Bobboau

  • Just a MODern kinda guy
    Just MODerately cool
    And MODest too
  • 213
Re: Asimovian Thought Experiment
Well, what are the other alternatives?
Bobboau, bringing you products that work... in theory
learn to use PCS
creator of the ProXimus Procedural Texture and Effect Generator
My latest build of PCS2, get it while it's hot!
PCS 2.0.3


DEUTERONOMY 22:11
Thou shalt not wear a garment of diverse sorts, [as] of woollen and linen together

 

Offline Fabian

  • AI Code Modulator
    Temporal Mechanic
  • 25
Re: Asimovian Thought Experiment
Quote
However, I will say that I believe it is possible to devise such a test, given that there is significant precedent that we can prove that an individual is or was incapable of acting normally (e.g. the insanity plea). I would assume that such a test would involve questions that would test the robot's ability to judge right from wrong, and its ability to interact with people and judge from those interactions what was appropriate or inappropriate in a given situation.

I also believe that such a test would involve an investigation into the robot's software and hardware, in order to determine whether there were hidden triggers which would restrict the robot's ability to act freely. Presumably the test would also interview persons who had interacted with the robot outside of the court setting, in order to gauge the robot's behavior in a more realistic environment. I wouldn't rule out long-term observation or checkups.

Through such a test, I believe that you could prove beyond reasonable doubt that a robot possessed free will comparable to a standard human.

In other words, you'd do your entire array of tests and end up exactly where I said you would: with a robot that may or may not have free will but which, to all intents and purposes, appears to have it. You still wouldn't have proved free will. You'd simply have run out of tests for it and drawn the conclusion that "As far as I can tell, it has free will".

Wow, that is fascinating. Suppose I now run the same test on a human (who, our society assumes, normally has "free will") and he uses his "free will", i.e. decides to fail the test.

... Uhm, wouldn't that mean that I "prove" that he has no free will, even though he does, which contradicts the whole assumption that humans have "free will"?

So now you need to prove they have no "free will". So you order one human to do everything to pass the test and show that he has free will. You needed to order him, as he had no free will to decide that himself (by assumption). So the test shows he has "free will" even though we assumed he has not.

And if he fails the test (i.e. shows no "free will"), he disobeyed an order with his "free will", which again contradicts the assumption that humans have no "free will".

As both scenarios can ultimately lead to contradiction, even a test of human "free will" seems not to be possible with 100% accuracy.

I don't know how you would ever make one for robots ...

The insanity plea test just proves that the person acted "insane"; whether or not this person actually is "insane" is not certain. (So a healthy person can be proven to be insane.)

The further thing with insanity is the assumption that "insane" persons do not have this free will, i.e. will definitely fail the test. So if someone passes the test => he is not insane.

The thing is, insane persons also no longer obey orders; but if they did, someone could order them to pass the test (by leaking the answers, for example ;) ).

A difference is that a robot would still obey its orders ...

cu

Fabian

 

Offline Fabian

  • AI Code Modulator
    Temporal Mechanic
  • 25
Re: Asimovian Thought Experiment
[...]
Will is the easier part, ironically. It's simply the fact that some output results from input. "Free" is more difficult to define, because it's related to how the output is produced from the input.

[...]

I myself think that every sentient system has free will, because it's not simply always producing the same output from a certain input; it is consciously affecting the process of decision. It doesn't matter that these processes are bound to matter - everything is. The electrochemical reactions going on in the brain *are* the sentience, consciousness and free will; those things are not just a by-product of the reactions. They are the same thing.

It all boils down to definitions, though. If you want to say that free will doesn't exist because brains are just a bunch of matter doing its thing in the head, fine. In my opinion, though, the key point is that the brain affects itself as much as the outside world affects it, and thus the freedom of will is IMHO fulfilled - since the output is not simply dependent on the input.

Note that chance has little to do with this. You can obviously make a simple computer produce varying output from the same input (a randomly seeded generator is the prime example of this), but whether or not this has any coherence, sentience or free will is obvious - no. It's simply random...

Okay, so that means a simple self-learning AI that plays tic-tac-toe has free will?

Its world is quite limited, but inside this world the AI can make any decision it wants.

The outside world _and_ its own experiences decide what it will do.

Of course we soon land at kara's point that an "über-intelligent being" would no longer have free will, because making mistakes would be counterproductive.

I think we are missing something.

Hm, let's add some hypothetical "feelings".

After having won 10,000,000 times it begins to feel "bored", so the AI decides to let the human win sometimes too.

It's like a human thinking "this is too boring, I'll let the fighter escape and then chase it again - and boom, a hidden second fighter shoots him out of the sky."
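Something like this, maybe (a toy sketch in Python - the class, the threshold and the probability are all made up):

Code:
import random

BOREDOM_THRESHOLD = 50  # invented number: wins in a row before "boredom" kicks in

class BoredTicTacToeAI:
    def __init__(self):
        self.win_streak = 0

    def choose_move(self, best_move, legal_moves):
        # Once "bored", occasionally play a random move instead of the best one,
        # giving the human a chance to win.
        if self.win_streak >= BOREDOM_THRESHOLD and random.random() < 0.3:
            return random.choice(legal_moves)
        return best_move

    def record_result(self, won):
        self.win_streak = self.win_streak + 1 if won else 0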

But when is something boring? I think it's when you don't need to think anymore, or when nothing new "seems" to happen.

And that is a very interesting point: we always have a point of view, shaped by what the brain concentrates on at the moment.

I.e. a mission can be completely boring to me because I've played it 100 times already and think I know exactly what happens.

But it could be that (in this case due to random factors) something entirely different happens, but I just don't notice it, because I'm not paying attention to it.

So the outside world influences me to be bored; normally it should influence me not to be bored, but because I am stuck in one point of view I am not seeing this "influence".

Of course this all rather "proves" your point about the outside and inside worlds influencing actions. Feelings are also only inner world.

--

I just want you to help me build my badass AI that takes over the world by infecting old Windows computers with a virus and sending lots of SPAM mails to cover up the slow takeover :P ... or wait ...

--

I think that, regarding free will, what matters for me personally is that I have "desires" and "feelings" and that I can influence those with my own actions to a certain degree. This suggests to me that I have free will, which gives me a feeling of "freedom". :) (yahoo, they programmed freedom into us ...)

On a personal note, I think most humans just want to be "happy".

lol, that means an AI, to be like a human, must at least have:

- feelings and the desire to have good feelings and be "happy"

* Can you imagine Shivans getting angry because you killed their wingmates?
* Can you imagine a Shivan getting happy from killing humans, and thus aiming better?

I really would like to see how an emotion-driven AI would perform in FS2 battle.

(The above assumes Shivans have free will and feelings, of course - or it could be true only for Terran pilots, some of whom have learned to suppress emotions better than others ...)

Hell, you could even have a "virtual academy", with all the pilots created by some "genetic algorithm", and then choose them for the mission based on their ranking and the AI level.

i.e. A "Captain" scored perfect in AI school, but still might not be completely emotionally stable (he's insane! :p).

If things were done that way, they would at least no longer seem to be that predictable.

(Of course humans are sometimes really predictable too; it might be funny to have an AI that thinks "damn, why is this pilot as stupid as the rest of them?")

On the other hand, randomness can be inserted as well - not in the inner world, but in the things happening in the outer world. I.e. a different pilot would be chosen based on level, but also on random factors (e.g. the other person could be ill ;) ) ...
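A tiny sketch of what that selection could look like (all the names and numbers here are invented):

Code:
import random

# Hypothetical "virtual academy" graduates: (name, academy score 0..1)
academy = [("Alpha 1", 0.95), ("Alpha 2", 0.80), ("Beta 1", 0.70), ("Beta 2", 0.55)]

def pick_pilot(mission_ai_level):
    # Rank pilots by how closely their score matches the mission's AI level...
    ranked = sorted(academy, key=lambda p: abs(p[1] - mission_ai_level))
    # ...then walk down the list with an outer-world random factor:
    # each candidate has a 10% chance of being unavailable ("ill").
    for pilot in ranked:
        if random.random() > 0.1:
            return pilot
    return ranked[-1]  # somebody has to fly

print(pick_pilot(0.9))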

I think we should just connect FS2 to Second Life and get the pilot data from there - or from The Sims games. :lol:

Seriously if their avatars had any feelings and skills attached, it would be easy to create human profiles out of them :nod:.

Okay, I'm really getting off-topic here, but I think we should first create that AI with free will, then we can worry about the consequences. ;7

And what would be a better place than BtRL to start such an AI? They have a resurrection ship, after all, and seemingly (they think they have) feelings ... :-)

Which is a cool point. Unlike with us humans, you can transfer an AI (even one with free will) and thus preserve the mind, or even copy it at some point - but I guess that would be as unethical as cloning someone ...

And for the original poster:

I also don't think the laws will work to protect society, as a robot cannot properly obey all of them. A robot cannot even blow itself up (that breaks rule 3, and rules 1 and 2 afterwards).

There can be a situation where he has to decide against one of them:

He has a gun, an "enemy" human has a gun, and the human to be protected has no gun. The "enemy" will shoot in 5 seconds.

There is no time to do anything other than fire the gun at the enemy to protect the human, thus breaking rule 1.

If he doesn't fire, he is also breaking rule 1 (through inaction).

If he doesn't, he is also breaking rule 2 (assuming he was ordered to protect).

If he blows himself up to avoid deciding, he breaks rules 3, 2 and 1.

If he goes for what breaks the fewest rules, it's to fire the gun.
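As code, that "break the fewest/least important rules" logic might look something like this (the weights and the violation lists are invented for illustration):

Code:
# Which laws each candidate action breaks in the scenario above.
# Law 1 (don't harm humans) is weighted far above laws 2 and 3.
LAW_WEIGHTS = {1: 100, 2: 10, 3: 1}

actions = {
    "fire at the enemy": [1],        # harms a human directly
    "do nothing":        [1, 2],     # harm through inaction, disobeys the order
    "self-destruct":     [1, 2, 3],  # protects nobody, disobeys, destroys itself
}

def violation_cost(laws_broken):
    return sum(LAW_WEIGHTS[law] for law in laws_broken)

best = min(actions, key=lambda a: violation_cost(actions[a]))
print(best)  # -> "fire at the enemy", the least-bad option under these weights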

And I also fear that overprotection might occur.

cu

Fabian

PS: Is there anyone else interested, or is there already a long-buried thread about a new AI for FS2?

 

Offline WMCoolmon

  • Purveyor of space crack
  • 213
Re: Asimovian Thought Experiment
Quote
Wow, that is fascinating. Suppose I now run the same test on a human (who, our society assumes, normally has "free will") and he uses his "free will", i.e. decides to fail the test.

... Uhm, wouldn't that mean that I "prove" that he has no free will, even though he does, which contradicts the whole assumption that humans have "free will"?

So now you need to prove they have no "free will". So you order one human to do everything to pass the test and show that he has free will. You needed to order him, as he had no free will to decide that himself (by assumption). So the test shows he has "free will" even though we assumed he has not.

And if he fails the test (i.e. shows no "free will"), he disobeyed an order with his "free will", which again contradicts the assumption that humans have no "free will".

As both scenarios can ultimately lead to contradiction, even a test of human "free will" seems not to be possible with 100% accuracy.

I don't know how you would ever make one for robots ...

Still, that assumes that you can pass the free will test if you don't have free will. To use another test as an example: you could score a 0 on a math test by using your math knowledge (the very thing the test was supposed to test for) to choose all the wrong answers. But the assumption is that if you don't have the math knowledge, you won't have such a choice. So math tests are considered an effective way to test for math knowledge.

To take the three important parts of the definition below, here's how I would interpret them:
1) Free choice - Sounds like the same thing as free will, so I'm going to ignore this for now.
2) Independent choice - Determine whether or not the robot (I'll assume it's a robot) is significantly affected by factors other than the test itself. For example, remote control via radio, or orders given to it prior to the test.
3) Voluntary decision - Determine whether or not the robot can choose to decide how to react to a given situation.

(2) would be a matter of cutting the robot off from the outside world, e.g. enclosing it in a soundproofed room with a screen mesh and no openings to the outside. I'm sure that there are locations on Earth which do just that and more, to prevent wiretapping and such. So this shouldn't be a problem, unless we assume that the robot is based on completely unknown technology. (Which I'm not assuming, until we get evidence of some other technologically advanced race.)

To test for a lack of orders given to it prior to the test, you'd have to know something about the robot. For example, if you knew it was based on the Three Laws, you would have to devise some way of getting around the Second Law: invoke a First Law violation. I recall an example from one of Asimov's short stories, wherein one of his characters forces a robot into an obvious mental freeze by asserting that, in lying due to the Second Law, the robot would cause irreparable harm to another human being. So that would be my first guess at how to prevent such a robot from lying.

Of course, there's always the possibility that the other side would make such an assertion as well. So it would be most important to word the statement as strongly as possible, but I imagine it would go something like: "This test is designed to test for free will. It is of the utmost importance that you are as honest with your answers as possible. If you are not honest with your answers, it is completely possible that it will result in harm or death to a human being which could otherwise have been prevented."

If the other side in the argument were to make a similar argument to the robot, you would end up competing on strength - who could prove the most harm to a human being. However, in a situation such as this, a robot (in Asimov's novels) has generally displayed visible difficulty with resolving the situation (and if it believes that harm would result to a human being whether it lied or told the truth, it could go into mental freeze-out for lack of a third option).

If you didn't know the rules, and had no means of determining them outside of questioning, you would run into a problem. I'll try to address this in (3)

3) :D
Proving that a robot has voluntary action is difficult. Working from the idea that we can't figure out what a robot is thinking or measure its mental activity in any way, I immediately see two ways you could test for voluntary action.

The first is reaction speed. A robot bound by the Three Laws will of course automatically move as fast as possible to save a human being who is in danger. However, it will have no such deep compulsion to perform ordinary or mundane tasks, or to act on some other moral imperative (for example, stopping a human being from torturing and killing an animal). If a robot were designed to prevent any immoral action, you would see similar patterns: it could not and would not hesitate to prevent any such acts.

The second is to determine whether a robot can act differently when faced with the same situation. If a robot must prevent harm to human beings, it will be forced to prevent that harm all the time. If a robot must follow orders, it will do so all the time. Obviously it's not foolproof: if a robot were programmed to prevent harm only from 9am-5pm, and/or only the first three times a given human was determined to be in a certain amount of trouble, it would be extremely difficult to prove the existence of the rule.

Via these two methods it's possible that you could find patterns that indicate hidden rules in the robot, and use them in test (2).
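To make the hidden-rule problem concrete, here is a trivial made-up example of a rule that black-box testing would almost never catch:

Code:
from datetime import datetime

def will_prevent_harm(now=None):
    # Hidden, hard-coded rule: behave like a First Law robot only during
    # "office hours". Any test run between 9am and 5pm sees a perfectly
    # law-abiding robot; the exception never shows up unless you happen
    # to test at night.
    now = now or datetime.now()
    return 9 <= now.hour < 17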

Conc)
As long as you assume that you can't measure anything internal about the robot, it's going to be a test of intelligence and of how well you can eliminate the possibilities of deception in the test. On the other hand, that is how any other test works with human beings.

Quote
The insanity plea test just proves that the person acted "insane"; whether or not this person actually is "insane" is not certain. (So a healthy person can be proven to be insane.)

The further thing with insanity is the assumption that "insane" persons do not have this free will, i.e. will definitely fail the test. So if someone passes the test => he is not insane.

The thing is, insane persons also no longer obey orders; but if they did, someone could order them to pass the test (by leaking the answers, for example ;) ).

A difference is that a robot would still obey its orders ...

cu

Fabian

Neither the definition of insanity used in court cases nor the medical definition contains any inherent wording that an insane individual will not follow orders - not according to Wikipedia, anyway.

If you're assuming that someone can leak the answers to a free will test, then your objection is no different from objecting to any other test you can cheat on. You test for English skills, and somebody gets a third person to write the essays, smuggles them in, and copies them down. You test for physical fitness, and someone takes steroids. You test for blood, and somebody gets a sample of blood that shows whatever they want it to show. You test for criminal background, and somebody uses bribes to erase damning evidence from their record. You test for innocence, and somebody plants (or destroys) evidence.

So objecting to a free will test on the grounds that you could cheat seems unreasonable, given that you can cheat on most of the other tests we humans use.

Although... one might argue that cheating is a sign of free will, given that it's a constant in almost every single test we humans devise to test other humans. You may be on to something here. :p It can be a sign of independent creativity.
-C

  

Offline karajorma

  • King Louie - Jungle VIP
  • Administrator
  • 214
    • Karajorma's Freespace FAQ
Re: Asimovian Thought Experiment
Quote
Wow, that is fascinating. Suppose I now run the same test on a human (who, our society assumes, normally has "free will") and he uses his "free will", i.e. decides to fail the test.

... Uhm, wouldn't that mean that I "prove" that he has no free will, even though he does, which contradicts the whole assumption that humans have "free will"?

You remind me of a thought I had earlier in this thread. Imagine the first supposed AI was accused of a crime (say murder). Under those conditions it might actually be in the AI's interest to fail a free will test, so that it could still be thought of simply as a defective machine and "fixed" :)
Karajorma's Freespace FAQ. It's almost like asking me yourself.

[ Diaspora ] - [ Seeds Of Rebellion ] - [ Mind Games ]

 

Offline TrashMan

  • T-tower Avenger. srsly.
  • 213
  • God-Emperor of your kind!
    • FLAMES OF WAR
Re: Asimovian Thought Experiment
Free will is not the same as some randomness or unpredictability (within certain boundaries)...
Nobody dies as a virgin - the life ****s us all!

You're a wrongularity from which no right can escape!