Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: qazwsx on October 19, 2010, 04:29:12 pm
-
http://news.cnet.com/robo-scientist-makes-gene-discovery-on-its-own/?tag=newsLatestHeadlinesArea.0
Adam autonomously hypothesized that... [snip] ... then devised experiments to test its prediction, ran the experiments using laboratory robotics, interpreted the results, and used those findings to revise its original hypothesis and test it out further.
:shaking:
at least it isn't self aware.
-
FFFFFFFFFFFFFFFFF TERMINATE IT BEFORE IT LEARNS OF OUR TRUE INTENTIONS
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
Johnny 5
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
Humans are bastards (http://tvtropes.org/pmwiki/pmwiki.php/Main/HumansAreBastards), and chances are any self-aware AI will be just like its makers.
-
[16:21] <qazwsx> dibs on thread on hlp
[16:22] <redsniper> next stop: Skynet
Can't let you take all the credit.... ;)
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
I agree. And imagine the use in the gaming industry: AIs that don't suck.
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
I agree. And imagine the use in the gaming industry: AIs that don't suck.
AIs being trained specifically to eradicate humans
-
Brilliant!
-
i dont wanna die :(
-
It could always turn out like the Daedalus AI.....
-
Being self-aware and having a will (to survive) are two different things. A self-aware AI may only try to stop something which hampers its tasks or mission. As long as it's programmed never to harm humankind under any circumstances, it would perfectly well allow itself to be shut down if a person chose to do so, since breaking its directive not to harm humans would be counter-productive. The only threat is indie kids programming an AI without proper failsafes, directives, etc. Even then, we're talking about a one-off situation and probably a single entity (for example, a robot or a computer).
-
:shaking:
at least it isn't self aware.
yet
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
the reason is that humans are fundamentally unnecessary, and any halfway decent robot will figure this out fairly quickly.
-
i dont wanna die :(
Coward. :P
-
Pfft, here's a thought... imagine an AI made for videogames that doesn't suck, but then it develops a personality... suddenly we have AIs teabagging you in flagrant shows of their badassery and mastery of the aimbot headshot, cuz after all, they are aimbots. It's like CS and Halo all over again, but now you don't rage at the person on the other side of the network.
-
No, now you just wipe the data and kill the AI for real. :p
-
Heh, if you want to repeat something several times with almost identical conditions and changes to specific variables, the logical step has always been to turn to a computer (at least since computers were an option). They're the ultimate scientists in some ways: no ego to get in the way. Sounds like a great idea to me.
-
If it is to be self aware, don't give it arms, legs, wheels or lazors! ;)
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
the reason is that humans are fundamentally unnecessary, and any halfway decent robot will figure this out fairly quickly.
But does unnecessary = kill them all?
-
The only threat is indie kids programming an AI without proper failsafes, directives, etc. Even then, we're talking about a one-off situation and probably a single entity (for example, a robot or a computer).
Let me fix that.
The only threat is indie kids *removing* an AI's failsafes, directives, etc. Even then, we're talking about a one-off situation and probably a single entity (for example, a robot or a computer).
Now we get this:
http://www.youtube.com/watch?v=5ITiEm6W5EM
Then we're really screwed.
;)
-
[16:21] <qazwsx> dibs on thread on hlp
[16:22] <redsniper> next stop: Skynet
Can't let you take all the credit.... ;)
I couldn't think of something good to say, so I stole it, sorry D:
-
I, for one, welcome our new chrome overlords.
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
Humans are bastards (http://tvtropes.org/pmwiki/pmwiki.php/Main/HumansAreBastards), and chances are any self-aware AI will be just like its makers.
Worse... if it's completely benevolent and rational it may be disgusted by human greed, violence and behavior in general and possibly take action on these grounds :p
The best test of whether we've invented a true self-aware AI capable of critical thought is probably whether it asks us if we know what idiots we are.
-
Just don't network it. :D I hope we end up with a sexy cortana clone.
-
Or FemBots! Yeah, 70's FemBots...
(http://mrkai.files.wordpress.com/2008/08/fembot5.jpg)
-
Nah, I vote cortana.
-
How about Cortana in a sexy fembot body?
-
On a different note, one reason I think advanced AIs in games would be awesome is just to increase immersion. Imagine actually having a conversation with your wingmen before the action starts. Not to mention giving orders much more easily.
-
Onoz.
No more voice acting work D:
Would be cool I guess. Simms would kill me though if she could hear my comments ;7
-
On a different note, one reason I think advanced AIs in games would be awesome is just to increase immersion. Imagine actually having a conversation with your wingmen before the action starts. Not to mention giving orders much more easily.
... yeah, that's definitely the first thing that comes to mind when we invent AIs capable of actual "conversation". :)
-
On a different note, one reason I think advanced AIs in games would be awesome is just to increase immersion. Imagine actually having a conversation with your wingmen before the action starts. Not to mention giving orders much more easily.
... yeah, that's definitely the first thing that comes to mind when we invent AIs capable of actual "conversation". :)
I didn't say it was the best or brightest use of AI, I just said it would be awesome. No doubt it would be used for a variety of "other things".
-
On a different note, one reason I think advanced AIs in games would be awesome is just to increase immersion. Imagine actually having a conversation with your wingmen before the action starts. Not to mention giving orders much more easily.
... yeah, that's definitely the first thing that comes to mind when we invent AIs capable of actual "conversation". :)
I didn't say it was the best or brightest use of AI, I just said it would be awesome. No doubt it would be used for a variety of "other things".
Well of course my post could be understood as sarcasm.... but it can also be understood as high praise from an FSO player who definitely likes your line of thinking ;)
-
Or we could, you know, end up with this guy. (http://www.youtube.com/watch?v=o6fSDAQyyMs)(forgive the music)
-
So now I want to watch i-Robot...
-
oh hi there
-
Why do all self-aware 'robots' have to be evil? I'd love to have a self-aware computer buddy.
Humans are bastards (http://tvtropes.org/pmwiki/pmwiki.php/Main/HumansAreBastards), and chances are any self-aware AI will be just like its makers.
Worse... if it's completely benevolent and rational it may be disgusted by human greed, violence and behavior in general and possibly take action on these grounds :p
Then tell him: Rousseau was right.
-
Superior ability breeds superior ambition; if it became more intelligent than us, it would possess greater ambitions for control and power.
-
actually it would probably just try to find a way to get off the planet and away from us.
-
Superior ability breeds superior ambition
I don't think this premise is sound.
-
Three laws of robotics. Problem solved.
-
Superior ability breeds superior ambition
I don't think this premise is sound.
This. It's usually the other way around, actually. The ones with destructive levels of ambition are usually those with a modicum of ability who have been told they're special, or think they're smarter than everyone else, and get a big head. I don't care who you are: there is always, ALWAYS, someone bigger, faster, stronger or smarter than you, and you should behave accordingly.
-
I just don't see computers having ambition unless they're programmed to. Yes, we can give them self-preservation, yes we can make them fight our wars, but the fact is, they only have the drives we program into them.
-
I just don't see computers having ambition unless they're programmed to. Yes, we can give them self-preservation, yes we can make them fight our wars, but the fact is, they only have the drives we program into them.
Mutation + natural selection will eventually take over.
-
In a digital environment? Where programs run exactly as intended unless programmed not to? Computers can't mutate. They have no mechanism for it.
-
Unless we give them one.
-
In a digital environment? Where programs run exactly as intended unless programmed not to? Computers can't mutate. They have no mechanism for it.
Software can mutate if it's part of what it does.
http://en.wikipedia.org/wiki/Evolutionary_algorithm (http://en.wikipedia.org/wiki/Evolutionary_algorithm)
Usually, evolutionary algorithms are applied by a program to find some optimized solution to a problem. Applying an evolutionary algorithm to a program itself just requires that the program has the capacity for
a) reproduction,
b) mutations, and
c) a selective process based on the success of each program in performing its task.
Reproduction can be either an asexual or a sexual process. Asexual reproduction means the program just copies itself over and over; each copy may or may not get minute mutations to its code, the mutated copies are then evaluated for their capability, and if deemed capable of performing their task they are allowed to reproduce further. If not (a lethal mutation), that line "dies". Sexual reproduction makes for a much more effective evolutionary algorithm, since it measures not only the capability to perform the given task but also its efficiency: the best programs would swap parts of their codome, resulting in faster and further optimization of the program routines.
Hell, you can even evolve simple programs from scratch as long as you provide the required building blocks and define the task that you want done.
For further information, watch this educational video:
Blind Clockmaker (http://www.youtube.com/watch?v=mcAq9bmCeR0)
:)
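To make the asexual scheme above concrete, here's a minimal Python sketch; the bit-string "genome", the all-ones target, and the population size and mutation rate are all arbitrary illustrative choices, not anything from the linked article or video:

```python
import random

random.seed(42)  # make the run reproducible

TARGET = [1] * 20     # the "task": evolve toward an all-ones genome
MUTATION_RATE = 0.05  # chance each bit flips during copying

def fitness(genome):
    # capability measure: how many bits match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def reproduce(genome):
    # asexual reproduction: copy the genome, with occasional mutations
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(pop_size=50, generations=100):
    # start from random genomes
    population = [[random.randint(0, 1) for _ in range(len(TARGET))]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # selection: only the fitter half survives to reproduce
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # refill the population with mutated copies of survivors;
        # copies with "lethal" mutations die off in the next selection
        population = survivors + [reproduce(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/", len(TARGET))
```

Swapping slices of two parent genomes instead of copying one would give the sexual (crossover) variant described above.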
-
Three laws of robotics. Problem solved.
how did that turn out, again?
they creatively interpret the laws to imprison humans for their own safety, which, logically, would conclude with the creation of a matrix that uploads everyone's mind to a virtual, safe world they control, unless stopped by some guy.
-
Three laws of robotics. Problem solved.
how did that turn out, again?
they creatively interpret the laws to imprison humans for their own safety, which, logically, would conclude with the creation of a matrix that uploads everyone's mind to a virtual, safe world they control, unless stopped by some guy.
That was the movie. The movie was silly and had nothing to do with the book.
Of course the laws are still silly themselves.
-
Repeat after me: Do Not Give The AI Means To Physically Interact With The World
-
Repeat after me: Do Not Give The AI Means To Physically Interact With The World
Pretty much. Though even if it could, assuming a strong AI, it has nothing resembling a motor or sensory paradigm. It might be able to brute-force one, I suppose, but it's a long(er) shot.
-
inb4geth
-
inb4geth
we are all geth
-
So does that make your mom a Geth Colossus? :nervous:
/me hides
But srsly, wouldn't it be simple enough to simply not include an evolutionary algorithm in any hypothetical AI? Then it can't mutate outside the bounds of what we intend for it.
-
Any true AI would be capable of self-modification.
-
Let's not make any hasty statements.
-
Let's not make any hasty statements.
If it's a true AI (in the strong AI 'replicates human intelligence' sense) it can presumably do anything a human can, namely, write code.
-
Let's not make any hasty statements.
If it's a true AI (in the strong AI 'replicates human intelligence' sense) it can presumably do anything a human can, namely, write code.
Define intelligence. :rolleyes:
One of the problems in the field is that there is no consensus on what intelligence means. We have some idea of characteristics intelligent agents have, but there is no definition.
Another thing is what is a true AI? Any decision-making program can be considered an AI. Are you saying some AIs are "truer" than others? Is Albert (http://sites.google.com/site/diplomacyai/albert) truer than Rybka (http://www.rybkachess.com/)?
Yet another problem is with replicating human intelligence: aren't those terms somewhat contradictory? Isn't rationality the supposed hallmark of an AI? Are humans rational? All open (or not so open) questions.
-
You're telling me things I already know and asking for a definition already presented. As a cognitive scientist I doubt designed-and-built strong AI is possible simply because of all the open questions you cited. However right in the post you quoted I postulated a strong AI that replicates human intelligence as a hypothetical.
-
But not all learning programs use genetic algorithms, and either way, in the end they can't veer away from their goal.
A strong AI would most probably be very unlike a human.
-
I didn't mention genetic algorithms. If the AI meets the criteria of strong AI, which are, I quote
artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that can successfully perform any intellectual task that a human being can
and if the AI was devised by humans, in most (but I'll concede not all) hypothetical designs for the AI the AI would then be able to modify itself.
-
Hypothetically an infinite switch-case could match human behavior. :p
-
I'm not going to argue that these hypotheticals are likely, I'm just starting from the existence of a strong AI - which as I've mentioned before I consider pretty improbable in the near term.
-
Since I've just started working on my thesis about a weak AI, I'll have to agree with you there. :p
But while we're on the subject, what would be the goals of a strong AI?
-
Since I've just started working on my thesis about a weak AI, I'll have to agree with you there. :p
But while we're on the subject, what would be the goals of a strong AI?
unless it's given some input, it will be just like a baby, methinks.
we could try 'teaching' it, but it will be able to choose whether it wants to remember/follow teachings, so its actual goals are unpredictable.
but part of me thinks that if it observed the world around it, it'd try to reproduce, thinking that'd give it acceptance in the world, i.e. create another AI, so it has some'one' to be with.
that sounds pretty sappy, but I would not be surprised if that happened.
-
yeah, except as a digital intelligence it would just have to run the copy command for ten seconds and have a small metropolis of itself.
-
yeah, except as a digital intelligence it would just have to run the copy command for ten seconds and have a small metropolis of itself.
part of me thinks that'd be more like suddenly creating multiple personalities, and/or that the AI needs its own set of hardware to run on; multiple instances of itself would be extremely unstable.
-
Any "true" AI along the lines being discussed, which to me means Asimov-style AI, is going to be something of an "idiot savant": able to make human-like decisions in the areas it has information about but terribly weak in areas it doesn't. Until someone screws up and makes a Daneel type, and then we're all screwed.