Author Topic: Automation and robots, the perspective of a former HLPer  (Read 7226 times)


Offline The E

  • He's Ebeneezer Goode
  • 213
  • Nothing personal, just tech support.
    • Steam
    • Twitter
Re: Automation and robots, the perspective of a former HLPer
Quote
General strong AI will be here within 20 years, at most.

As has been tradition for the past 40 years.

So I guess we'll get a package deal of strong AGI and commercial fusion reactors?
If I'm just aching this can't go on
I came from chasing dreams to feel alone
There must be changes, miss to feel strong
I really need life to touch me
--Evergrey, Where August Mourns

 

Offline Luis Dias

  • 211
Re: Automation and robots, the perspective of a former HLPer
That was too facetious, especially coming from you, The_E, from whom I am not generally expecting that kind of thing.

Fusion is harder than expected, and AI will probably bring unexpected difficulties as well. However, I see no real barriers here. AI "already" exists in very fragmented, low-brow applications (like Siri, search, or auto-driving), and it will only improve exponentially. We do not need "strong AI" for AI to be useful. It is useful throughout the entirety of its "ramping up", which is a very big incentive to R&D it.

"Strong AI" is a simple name for a complex feat, and we will slowly recognize that the practical AIs we have on our hands and around us are increasingly smart and adaptive, until we suddenly realise we can actually call them "strong AI".

I do think such an AI will be built in the lab before 2030; it will probably only reach commercial applicability ten years later or so (just as "Watson" will only reach the rest of us in a few years, perhaps ten).

 

Offline The E

  • He's Ebeneezer Goode
  • 213
  • Nothing personal, just tech support.
    • Steam
    • Twitter
Re: Automation and robots, the perspective of a former HLPer
I'm a strong believer in the saying that "If we know how to do it, it isn't AI".

Yes, we can expect our software agents to get ever more helpful and sophisticated, but equating that with strong AI is, in my opinion, the wrong way to think about it. (There are a lot of really interesting ethics debates to be had on whether developing an AI would be a moral thing to do, and a lot of ancillary questions about whether an AI patterned on human consciousness would even be willing, or able, to do useful work.)

AI, as in "superhuman, superfast intelligence that improves itself as needed", is imho a pipe dream; whether it is a dream or a nightmare depends entirely on how dystopian you're feeling.

 

Offline Luis Dias

  • 211
Re: Automation and robots, the perspective of a former HLPer
I think that's the Hollywood vision of what an AI is. Let me express better what I mean. I envision that current AIs are going to get better and better. We will laugh at calling them anywhere near "strong AIs". All along the way we will just say, "Oh, this isn't AI at all, it's just Siri, a 'natural language user interface'", "Oh, that ain't AI, it's just an auto-driver", "Oh, that ain't AI, it's just a program that manages the backstore of Amazon".

We will get ever-increasingly smart and intuitive AIs, ever more aware of what we mean when we say things and why we say them, and increasingly able to respond to our more complex queries, and all through that ramp we will still laugh at the prospect of calling these things "strong AI". Of course not! This is just the maid bot that knows how to clean my house from start to finish. That is just the pilot of the plane; it only knows how to fly planes! Etc., etc.

And then you'll have more general-purpose AIs that serve you as a kind of software butler, that do understand your personality and tastes, that know many of the general things humans know too. Even then people will not call them "strong AI".

And then suddenly you'll have strong AIs, and you'll wonder how on Earth they did that so quickly and unexpectedly. :D Well, they did it by fooling you with the "unsmart" things AIs are doing today and will do tomorrow.

 

Offline Nuke

  • Ka-Boom!
  • 212
  • Mutants Worship Me
Re: Automation and robots, the perspective of a former HLPer
Quote from: Luis Dias
That was too facetious, especially coming from you, The_E, from whom I am not generally expecting that kind of thing.

Fusion is harder than expected, and AI will probably bring unexpected difficulties as well. However, I see no real barriers here. AI "already" exists in very fragmented, low-brow applications (like Siri, search, or auto-driving), and it will only improve exponentially. We do not need "strong AI" for AI to be useful. It is useful throughout the entirety of its "ramping up", which is a very big incentive to R&D it.

"Strong AI" is a simple name for a complex feat, and we will slowly recognize that the practical AIs we have on our hands and around us are increasingly smart and adaptive, until we suddenly realise we can actually call them "strong AI".

I do think such an AI will be built in the lab before 2030; it will probably only reach commercial applicability ten years later or so (just as "Watson" will only reach the rest of us in a few years, perhaps ten).

the only reason fusion is taking so long is because all the money got dumped on the most expensive, most complex system possible: the mother****ing tokamak. i have a feeling small fusion (polywell/dpf/the unnamed thing the skunkworks is working on) will have a working reactor in less than iter's timeframe (much less iter and demo's). oh, and it will be an american reactor that goes to the military first.

i dont think ai will come as quickly either. all the predictions require moore's law to remain accurate for another 25 years. i have a feeling we will hit the semiconductor wall first. we dont know if the transition to, and miniaturization of, quantum computers will follow moore's law. it might take a while to catch up to semiconductor tech, so we might have a period of time without significant computer tech increases.
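A rough sketch of what that 25-year assumption amounts to (the 2-year doubling period is my assumption; the post doesn't state one):

```python
# back-of-the-envelope: what 25 more years of moore's law would imply,
# assuming a 2-year doubling period (an assumption; the post gives none)
def density_multiplier(years, doubling_period=2.0):
    """transistor-density multiplier after `years` of steady doubling"""
    return 2 ** (years / doubling_period)

growth = density_multiplier(25)   # ~5800x more transistors per area
# density goes as 1/feature_size^2, so linear feature size shrinks as sqrt
shrink = growth ** 0.5            # ~76x smaller features
feature_nm = 22 / shrink          # starting from a 22 nm process: ~0.29 nm
```

Under those assumptions the feature size ends up near atomic scale, which is exactly the wall being predicted here.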
« Last Edit: September 27, 2013, 01:05:51 pm by Nuke »
I can no longer sit back and allow communist infiltration, communist indoctrination, communist subversion, and the international communist conspiracy to sap and impurify all of our precious bodily fluids.

Nuke's Scripting SVN

 

Offline Unknown Target

  • Get off my lawn!
  • 212
  • Push.Pull?
Re: Automation and robots, the perspective of a former HLPer
Quote from: Luis Dias
I think that's the Hollywood vision of what an AI is. Let me express better what I mean. I envision that current AIs are going to get better and better. We will laugh at calling them anywhere near "strong AIs". All along the way we will just say, "Oh, this isn't AI at all, it's just Siri, a 'natural language user interface'", "Oh, that ain't AI, it's just an auto-driver", "Oh, that ain't AI, it's just a program that manages the backstore of Amazon".

We will get ever-increasingly smart and intuitive AIs, ever more aware of what we mean when we say things and why we say them, and increasingly able to respond to our more complex queries, and all through that ramp we will still laugh at the prospect of calling these things "strong AI". Of course not! This is just the maid bot that knows how to clean my house from start to finish. That is just the pilot of the plane; it only knows how to fly planes! Etc., etc.

And then you'll have more general-purpose AIs that serve you as a kind of software butler, that do understand your personality and tastes, that know many of the general things humans know too. Even then people will not call them "strong AI".

And then suddenly you'll have strong AIs, and you'll wonder how on Earth they did that so quickly and unexpectedly. :D Well, they did it by fooling you with the "unsmart" things AIs are doing today and will do tomorrow.

I've been thinking along these lines as well. I agree.

  

Offline AdmiralRalwood

  • 211
  • The Cthulhu programmer himself!
    • Skype
    • Steam
    • Twitter
Re: Automation and robots, the perspective of a former HLPer
Quote from: Nuke
all the predictions require moore's law to remain accurate for another 25 years. i have a feeling we will hit the semiconductor wall first. we dont know if the transition to, and miniaturization of, quantum computers will follow moore's law. it might take a while to catch up to semiconductor tech, so we might have a period of time without significant computer tech increases.
Well, they just built the first carbon nanotube computer...
Ph'nglui mglw'nafh Codethulhu GitHub wgah'nagl fhtagn.

schrödinbug (noun) - a bug that manifests itself in running software after a programmer notices that the code should never have worked in the first place.

When you gaze long into BMPMAN, BMPMAN also gazes into you.

"I am one of the best FREDders on Earth" -General Battuta

<Aesaar> literary criticism is vladimir putin

<MageKing17> "There's probably a reason the code is the way it is" is a very dangerous line of thought. :P
<MageKing17> Because the "reason" often turns out to be "nobody noticed it was wrong".
(the very next day)
<MageKing17> this ****ing code did it to me again
<MageKing17> "That doesn't really make sense to me, but I'll assume it was being done for a reason."
<MageKing17> **** ME
<MageKing17> THE REASON IS PEOPLE ARE STUPID
<MageKing17> ESPECIALLY ME

<MageKing17> God damn, I do not understand how this is breaking.
<MageKing17> Everything points to "this should work fine", and yet it's clearly not working.
<MjnMixael> 2 hours later... "God damn, how did this ever work at all?!"
(...)
<MageKing17> so
<MageKing17> more than two hours
<MageKing17> but once again we have reached the inevitable conclusion
<MageKing17> How did this code ever work in the first place!?

<@The_E> Welcome to OpenGL, where standards compliance is optional, and error reporting inconsistent

<MageKing17> It was all working perfectly until I actually tried it on an actual mission.

<IronWorks> I am useful for FSO stuff again. This is a red-letter day!
* z64555 erases "Thursday" and rewrites it in red ink

<MageKing17> TIL the entire homing code is held up by shoestrings and duct tape, basically.

 

Offline Nuke

  • Ka-Boom!
  • 212
  • Mutants Worship Me
Re: Automation and robots, the perspective of a former HLPer
well, have fun making connections smaller when wires are already an atom thick. a silicon atom is 111 picometers across; intel's current process is 22nm. thats only about two orders of magnitude of room in which to shrink. we will reach physical limits at some point. its a limitation of matter, and it will kill moore's law.
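The headroom can be checked directly from the two figures quoted (a 111 pm atom against a 22 nm process); it works out to roughly two orders of magnitude:

```python
import math

atom_pm = 111        # silicon atom size quoted above, in picometers
process_pm = 22_000  # intel's 22 nm process, in picometers

headroom = process_pm / atom_pm  # ~198x linear shrink before atomic scale
orders = math.log10(headroom)    # ~2.3 orders of magnitude
```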

even if you do get a nanotube computer, you will need to play catchup for several years before you can surpass the capabilities of silicon devices. it does however allow for more 3-dimensional cpu design, and cnts are really good at getting rid of heat.
« Last Edit: September 28, 2013, 02:24:33 am by Nuke »