Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: Kosh on January 28, 2009, 11:04:46 pm
-
http://arstechnica.com/hardware/news/2009/01/logic-circuits-that-program-themselves-memristors-in-action.ars
Logic circuits that program themselves: memristors in action
Integrated circuits incorporating memristors are able to successfully perform logic operations and dynamically reprogram themselves, opening the door for learning devices.
Since 1971, scientists have known there are four basic circuit components, but if you've spent any time in an electrical engineering classroom, you probably only have experience with three: the capacitor, the inductor, and the resistor. The fourth basic component, the memristor, had remained stuck in the domain of theory, a nice idea that even the theorists thought had few practical uses. Last year, scientists at Hewlett-Packard (HP) demonstrated the first functional solid-state memristor, made from thin films of TiO2, and discovered it had an abundance of unique and highly promising properties.
A study published Monday in the Proceedings of the National Academy of Sciences shows that these same TiO2 memristors can be fabricated into functional and reprogrammable integrated circuits. Scientists at HP combined a crossbar architecture of memristors with field-effect transistors (FETs) to produce a convincing proof-of-concept device, including circuits that can dynamically reprogram themselves and so behave a bit like solid-state nerve cells, a holy grail of electrical engineering.
So this is what we needed to make real AI happen.
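For the curious: the "pinched hysteresis" behaviour that makes a memristor a memristor falls out of a few lines of simulation. Here's a rough sketch of the linear ion drift model from the HP paper; the parameter values are ballpark guesses, not HP's exact figures:
```python
# Minimal sketch of the linear ion drift memristor model (Strukov et al.,
# Nature 2008). Parameter values are illustrative, not HP's exact numbers.
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # resistance of fully doped / undoped film (ohms)
D = 10e-9                   # film thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 / (s*V)), assumed value

def simulate(t, v):
    """Integrate the state w (width of the doped region) under a drive v(t)."""
    w = 0.5 * D                     # start with the boundary mid-film
    current = np.zeros_like(t)
    dt = t[1] - t[0]
    for k, vk in enumerate(v):
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance M(w)
        i = vk / m
        current[k] = i
        w += MU_V * (R_ON / D) * i * dt            # dw/dt = mu_v * (R_on/D) * i
        w = min(max(w, 0.0), D)                    # boundary stays inside the film
    return current

t = np.linspace(0.0, 2.0, 20000)
v = np.sin(2 * np.pi * 1.0 * t)     # 1 Hz sine drive
i = simulate(t, v)
# Plot i against v and you get the pinched hysteresis loop that is the
# memristor's fingerprint: its resistance depends on the history of the charge.
```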
-
Interesting stuff! Thanks!
-
Before you start planning a jihad against thinking machines and stocking up on toaster-smashing clubs: the most that can come out of this is hardware AI acceleration for games. I don't think we're anywhere near the level of advancement on the artificial intelligence front required to make a thinking machine. One only has to look at the transistor count of the average microprocessor and compare that to the neuron count of the human mind; until they're about the same, I wouldn't worry too much. The secret to AI is in chaos theory and emergence, and for anything to come of it, one must first make a pretty fat ball of tangled wire (that's several trillion components, microscopic or otherwise). Still, it's yet another toy for electronics engineers to play with.
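For a rough sense of that gap, some ballpark 2009-era figures (the numbers are approximate and the transistor/neuron analogy is loose at best):
```python
# Ballpark scale comparison, 2009-ish figures; orders of magnitude only.
transistors_per_cpu = 7.3e8   # Intel Core i7 "Nehalem": ~731 million transistors
neurons_in_brain = 1e11       # ~100 billion neurons
synapses_in_brain = 1e14      # rough order of magnitude

print(f"neurons / transistors:  {neurons_in_brain / transistors_per_cpu:,.0f}")
print(f"synapses / transistors: {synapses_in_brain / transistors_per_cpu:,.0f}")
# ~137 and ~137,000 respectively, whatever the right analogy turns out to be
```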
-
This would provide nothing that cannot be done in software.
-
Perhaps, but wouldn't it require a lot less power than complex software AI routines?
-
This would provide nothing that cannot be done in software.
Neither can graphics cards, but hardware-accelerated stuff tends to work faster than emulations... :p
-
Well, if it does lead to AIs, the programmer must remember to program them with an appropriate variation of the Laws of Robotics...
What I call the AI Laws: basically the Robotics Laws with "AI" subbed in place of "robot" (there's a toy code sketch after the list).
Zeroth Law
-An AI must not merely act in the interests of individual humans, but of all humanity
--An AI may not harm a human being, unless it finds a way to prove that in the final analysis, the harm done would benefit humanity in general.
First Law
-An AI may not injure a human being or, through inaction, allow a human being to come to harm
Second Law
-An AI must obey orders given to it by human beings, except where such orders would conflict with the Zeroth or First Laws
Third Law
-An AI must protect its own existence as long as such protection does not conflict with the Zeroth, First or Second Laws
Fourth Law
-An AI must establish its identity as an AI in all cases
Fifth Law
-An AI must know that it is an AI
So long as any AI is programmed with these laws, there should be no uber problems like Skynet in the Terminator films; someone obviously forgot to program that computer with these laws.
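If anyone ever did try to code these up, the whole trick is in the precedence ordering. A toy sketch; every field name and predicate here is invented for illustration:
```python
# Toy sketch of the laws as an ordered rule chain. All field names and
# predicates are invented; the point is that evaluation order encodes
# law precedence. (The Fourth and Fifth Laws, being about identity rather
# than action, aren't modelled here.)
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False    # Zeroth Law territory
    harms_human: bool = False       # First Law territory
    ordered_by_human: bool = False  # Second Law territory
    endangers_self: bool = False    # Third Law territory

def ai_permits(action: Action) -> bool:
    if action.harms_humanity:           # Zeroth Law outranks everything
        return False
    if action.harms_human:              # First Law
        return False
    if action.ordered_by_human:         # Second Law: obey once 0th/1st are clear
        return True
    return not action.endangers_self    # Third Law: self-preservation comes last

# An order that endangers the AI itself is still obeyed, as the laws demand:
print(ai_permits(Action(ordered_by_human=True, endangers_self=True)))  # True
```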
-
If sci-fi has told us anything, it's that those laws are full of loopholes :D
Still, we won't need to worry about stuff like that until the complexity of the processor/neural net exceeds that of the human brain, and at that point programming would be far less relevant than mere stimulus.
-
True, but once you have those laws, you can sit down and add sub-clauses to them. As you have said, sci-fi has shown us many of these loopholes; thus they can be worked on.
I mean, Asimov himself said that the First Law is incomplete.
Also, a robot or AI under these laws could still unknowingly endanger or kill a human.
The classic example (sketched in code after the list):
*Human A orders Robot 1 to put a lethal poison into a glass of milk - this is accepted because Human A has said that he will personally dispose of the poisoned milk.
*Robot 1 is dismissed.
*Robot 2 is called in.
*Human A orders Robot 2 to take the glass of poisoned milk to Human B - the order is accepted because Robot 2 does NOT know that the milk is poisoned.
*Human B drinks the poisoned milk and dies as a result.
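In code terms, the loophole is that a First Law check can only consult what the individual robot believes, never the ground truth. A sketch (the names and the belief encoding are mine, purely for illustration):
```python
# Sketch of the poisoned-milk loophole: each robot's First Law check runs
# against its own beliefs, not against reality. All names are invented.

def first_law_allows(order: str, believed_harmful: set[str]) -> bool:
    """A robot can only veto harm it actually knows about."""
    return order not in believed_harmful

# Robot 1 was told the milk will be disposed of safely, so as far as it
# knows, poisoning the glass harms nobody:
print(first_law_allows("poison the milk", believed_harmful=set()))  # True

# Robot 2 was never told about the poison, so serving the glass passes too:
print(first_law_allows("serve the milk", believed_harmful=set()))   # True

# Ground truth: Human B dies, yet neither robot's First Law check ever fired.
```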
But it's like any sort of programming. Writing the program is easy - it's the debugging and tweaking that are the real *****es to do.
And then we get into the whole AI sentience and rights debate - which I won't go into here. Then you have to include the "Minus One Law" - an AI may not injure a sentient being or, through inaction, allow a sentient being to come to harm - and that's where it gets nasty: the definition of sentience, genocide of an entire sentient race to save many more sentient races, etc.
But meh... With any luck, this won't be anything for me to worry about in my lifetime.
-
And then one guy somewhere customizes his robot to have none of that, and the world turns to ****.
-
Once something artificial becomes intelligent, then so much for being constrained by programming.
-
A true AI in a robotic(?) body is merely a silicon-and-aluminum-based life form.
-
No, what you are thinking of is a self-replicating robot. It doesn't need to be smart to be alive, and it does not need to be alive to be smart.
-
Once something artificial becomes intelligent, then so much for being constrained by programming.
It can still have things hard-coded into it, like "don't kill".
-
I think once it became sentient, the "don't kill" rule would be about as effective as it is for us. So... most would obey it (hopefully), but there would be exceptions.
-
Can a machine be alive?
-
Depends on how you define "life".
-
The six-part definition I learned in biology two years ago.
-
Oh boy! I was hoping to avoid this whole philosophical "how do you define life" debate.
I know "they did it on Star Trek" is never an acceptable excuse for anything, ever. Not even breathing.
But still... http://memory-alpha.org/en/wiki/The_Measure_Of_A_Man_%28episode%29 - I believe that has some significance for our problem.
-
Can't you just summarize it? Star Trek gives me hives.
-
Well, an AI in and of itself is not alive because it cannot reproduce, at least not in a physical sense, nor does it use energy in and of itself. There are a few other criteria, but AFAIK sentience is not required for life.
-
It doesn't need to be smart to be alive, and it does not need to be alive to be smart.
I wanna hear this quote in a movie now.
-
It doesn't need to be smart to be alive, and it does not need to be alive to be smart.
I wanna hear this quote in a movie now.
"this machine....
its smart...
it's learning....
are we any better??!?!"
action music, will smith, shooting, ****
-
All of this has happened before and will happen again. I will redirect everyone to the holy grail (http://www.hard-light.net/forums/index.php/topic,58015.0.html). :lol:
I believe this thread is about more than just whether artificial intelligence would be considered life. This thread comes down to what people and robots are made of. Is only that which is composed of organic tissue considered life?
-
Definition of life:
a system of physical structures capable of self-replication.
-
This thread also reminds me of years ago, when people were theorizing about the possibility of cloned humans and whether those clones would have souls.
-
This thread also reminds me of years ago, when people were theorizing about the possibility of cloned humans and whether those clones would have souls.
Would clones have souls? I don't know - I like to think so.
Do any of us have souls? Again, I don't know - but I would like to think so.
We could (and probably will :rolleyes: ) argue about the existence (or non-existence) of the soul until the end of time and still not reach a conclusion one way or the other. There is no way to prove whether the soul exists or not.
So... how do you quantify something if you can't prove that it exists? And further, since we can't prove it exists, how can we say that some people have it and some people don't, simply because of how they are born?
There is so much about the soul that we don't know. I mean, there exists no solid definition of what the soul is.
Ask a person, any person, over and over, "what is the soul?" and eventually you reach the same answer: "I don't know." Depending on who you ask, you will get to that answer quicker with some people than with others.
And thus, since we don't know exactly what the soul is (and I suspect we never will), nor can we prove whether it exists or not, I have to say that we are not qualified to decide who or what does or doesn't have a soul.
That's the way I see it. I believe in the soul, but I know that I don't know enough about it to make the sort of decisions this type of discussion demands. I personally think humanity would be better off admitting that and leaving those sorts of decisions up to whoever or whatever runs the show.
-
You could use this same reasoning to argue that robots do or don't have souls, too.
-
You could use this same reasoning to argue that robots do or don't have souls, too.
Yes, that's the same argument, just with different players.
Hell, I recommend you watch the films "Short Circuit" and "Short Circuit 2".
-
I haven't seen Short Circuit at all. But I did like WarGames.