Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: Mobius on September 25, 2013, 12:59:02 pm
-
Article written by High Max, what do you think about it? Here's the link: http://thelightblueribbon.wordpress.com/2013/09/25/automation-robots-and-our-society/
It is becoming more and more obvious that automation, robots, mechanization, and other forms of productivity increase will warrant a change in our economic system. Though outsourcing and people lacking adequate skills for certain jobs are also part of the reason, technological unemployment, eventually leading to structural unemployment, is not given much attention and will become more of an issue as we move into the future. If we don't prepare for it and allow the issue to grow while trying to preserve the obsolete model, there will be a system collapse. We can either do this the easy way or the hard way: we can transition smoothly and the issues will be minimized, or we can continue with business as usual and things will surely get worse before they get better.
[...]
As automation increases, we would continue to raise the basic income guarantee, or stipend for each citizen, while shortening work weeks. As Marshall Brain puts it, we could distribute the wealth and also gradually shorten work weeks, similar to how we changed our system during the Great Depression, when the 40-hour work week became standard in the USA and Social Security and retirement came into being. Increasing the basic income guarantee too much may discourage people from seeking employment, so you would try to raise it to correspond with the decrease in demand for human workers per capita. Maybe one day, far enough into the future, you would have a world similar to Star Trek in the sense that the number of volunteers would exceed the number of positions available, even if each worked only 2 or 3 hours a day; with all our basic needs supplied to us, as well as most luxuries, and people owning nano-fabricators and nanobots that can self-replicate, we would have close to unlimited free time. The motivation for working, for those who want something to do, would just be to find meaning, not the need to be a wage slave or corporate slave, and the dehumanization and objectification that is so common in our system, as the terms 'human capital' or 'civil servant' make apparent, would be drastically reduced. Maybe we would tax the robots in order to put money into people's pockets so they have the purchasing power to buy the stuff that robots produce. We would likely have to find ways to change the tax code too; automation in government might also reduce the taxes needed, perhaps. But at this point, our idea of economic growth would have to become obsolete, since big corporations run by robots would have a difficult time growing unless demand increased; of course, corporate owners can trade among themselves as well if demand in one company increases at the expense of another. But the fact is that growth cannot continue forever, assuming we are stuck on this planet for a much longer time; it isn't sustainable, and that would collapse the system as well. Also, factories may become obsolete to a large degree when people have advanced 3D printers, or better yet nano-fabricators.
[...]
The rate of change and technological advances makes this an interesting time to be alive and it would be nice to live as long as possible to see where it is all going. The future is a question mark waiting to be unveiled.
Marshall Brain speech on technology and unemployment: http://www.youtube.com/watch?v=DxxL0EcpvdQ
Why a basic income guarantee should become a human right: http://www.thenorthwindonline.com/?p=3867309
Problems with current system video and recent robotic advancements: http://www.youtube.com/watch?v=p8ZzMGuPtRo
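As a rough illustration of the scaling rule proposed above (raise the basic income guarantee and shorten the work week in step with how much paid work has been automated away), here is a minimal sketch in Python; the function names and every number in it are invented for illustration and are not from the article.

```python
# Toy sketch of the article's scaling rule: the basic income phases in and
# the standard work week phases out in proportion to how much of the former
# paid workload is now done by machines. All figures are invented.

def basic_income(automation_share: float, full_income: float = 24_000.0) -> float:
    """Annual basic income that grows as automation displaces paid labor."""
    share = min(max(automation_share, 0.0), 1.0)  # clamp to [0, 1]
    return full_income * share

def work_week_hours(automation_share: float, baseline_hours: float = 40.0) -> float:
    """Standard work week shrinking in step with automation."""
    share = min(max(automation_share, 0.0), 1.0)
    return baseline_hours * (1.0 - share)

if __name__ == "__main__":
    for share in (0.1, 0.3, 0.5, 0.8):
        print(f"automation {share:.0%}: basic income ${basic_income(share):>8,.0f}/yr, "
              f"work week {work_week_hours(share):4.1f} h")
```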
-
I agree with what is in the article. Ever since I heard about the wonders of 3D printing, I've wondered what effect it would have on our manufacturing sector, and the economy in general. It seemed to me that we'd have to rethink how we run the economy.
I also like the idea of a basic income guarantee. Rewarding hard work is one thing, but forcing poverty as a punishment? As an artist, I know that I work hard on my artwork. I know of artists who spend more time working on art than most people put into their day job. Yet they're forced to live in poverty because in our system what they do doesn't make money. I know a lot of artists would be happy not being rich, and just making art. The only problem is that our system will kill you if you try that.
BIG would allow people to make the things that most societies claim to value (art, volunteering, motherhood) yet don't actually support in our current system.
-
A little bump:
Canada did a basic income guarantee experiment in the '70s, before it got shut down. The results were never analyzed, apparently.
http://www.dominionpaper.ca/articles/4100
-
This has been an interest of mine for a long time. As a society, we need to confront the fact that the economic model we have been pursuing is about to fail. Now, the specific model that's going to fall over is debatable - certainly the post-WW2 consumer- and growth-driven model, arguably everything all the way back to the industrial revolution. But the reality is that there simply will not be the kind of (often) low-paying, low-skilled jobs through which people A) enter the workforce and B) support themselves if they aren't well educated, or educated in a relevant, useful skill (Hello, people with BAs!).
Think about the kinds of jobs we're going to lose just in the next 10-20 years. Anything related to driving, for one - self-driving technology is already pretty good; it's regulation that needs to catch up. Cars can also self-park and navigate - once regulation catches up, taxi and truck drivers will go very quickly. Train drivers too, I suspect - trains are likely much easier than cars, though they may take longer due to the inertia associated with much larger infrastructure. Driverless mining vehicles (dump trucks, water trucks etc.) are working right now in mines across the world - when they become the standard, the unskilled mining workforce will plummet, which will have a massive impact on my home state, and lots of other mining areas.
Automated checkouts at supermarkets are pretty much standard now - how long before they become standard everywhere? Sure, a lot of retail is about the shopping experience, so things like clothes stores and maybe electronics and so on will still have people, but the corner store? The petrol station? The cinema? There'll still have to be a person or two floating around to make sure people aren't just walking off with stuff (although there are technological ways to solve that issue too, at least in theory), but the numbers will be way down. Add that to the ever-growing online market share and you're looking at big and accelerating job losses over a long period of time. Even stuff like food service - do you really need someone to take your McDonalds order, or can you press the buttons on the screen yourself? Sure, a high-end restaurant will still have waiters, but the definition of high end may well vary - does someone really need to take your order at the cheap(ish) restaurant you take your date to before a movie? I don't really think so.
Even high-skilled jobs aren't immune. In my own field, geology, you normally get started straight out of uni doing a lot of logging of core and RC chips. But the CSIRO's new HyLogger is getting better and better at doing that hyperspectrally. There's still a place for geos looking at core, but if you can get accurate mineralogy from a machine, you need far fewer lower-level guys in the field.
Those are just the examples I can come up with off the top of my head in five or ten minutes. Think about it for a while and there aren't many low-level jobs that can't be replaced (or have their human staff levels slashed) by improving technology. Build a better Roomba, lay off thousands of cleaners worldwide. Build better hospital robots (and they already have them), lay off thousands of orderlies worldwide. Some jobs will take longer - anything requiring rapid or dextrous manipulation, or uniquely human interaction - but millions of low-skill jobs will go in the next several decades.
And this is a massive problem. Think about how many young people rely on jobs like these to get themselves through uni, so they can be educated up to get one of the remaining high-skill jobs. Or how many start in these low-level roles to gain the kind of workforce experience that is essential for more advanced skills. Or the millions of people who have perfectly respectable jobs and then feed into the economy with what they earn. Without a solid working-class base, the economic cycle grinds to a halt. Worse yet, history shows that when you have large numbers of economically disenfranchised, unemployed people - especially men - violence doesn't take long to follow.
In a lot of ways, the "Industrial Revolution" that we associate with the 18th and 19th centuries never really ended. It's come in fits and starts, but now we've reached a point where we can see (on the realistically close horizon) the day when the combination of mechanical muscle and digital intelligence will make most human labour utterly irrelevant. Without some kind of economic reformation, our society simply cannot survive that. I don't know if this is the answer, necessarily, but we need an answer, and sooner rather than later.
-
'High Max'?
You know this guy?
-
I've been thinking about these things for some ten years now. I am both in agreement and disagreement. There is some economic naiveté in some of the thoughts here. I'll try to express myself later today or tomorrow.
-
why "former" HLPr?
-
Good post Black Wolf.
Unfortunately, I think we all need to wait for the current system to collapse; otherwise we will simply be fighting it (i.e. revolution) if we try to change it now. You can try to work your way up, but that will take years that we don't really have.
That being said, it's all continuous; collapse, rebuilding, etc, it's not going to be a stop/start affair. I think we'll just end up muddling through it all. The system seems to be too large with too much momentum, and the lack of any clear leadership means that we'll just clumsily plow through these social issues until we get somewhere on the other side. It's messy but I'm not sure if I see any other options.
High Max used to be an active community member, I forget if he was on the Robotech mod project or not, but my memory is telling me yes. Haven't seen him around in a few years, but then, I'm not the most active member. :)
-
i feel like it's less awkward in the long run if i just point out now that he was banned ages ago for editing all his posts into smilies
-
why "former" HLPr?
Because he acted like a complete moron at the slightest provocation, including editing all his posts repeatedly (both while discussions were ongoing as well as later, when he decided that he would just replace every. single. post. with various smilies).
Anyway, that's enough about him; let's go back to discussing the merits of his opinion piece here.
-
Ok, just a quickie here and I'll flee again.
The "naiveté" criticism I feel obligated to make is that I sense a real lack of Economics 101 in all those who are writing this kind of thesis. I do not entirely disbelieve that the "endgame" is something along the lines of "machines vs humans", and that capitalism will eventually weed a lot of people out into something like unemployment.
However.
The same has been said innumerable times throughout history. Take agriculture. There was a time when 90-95% of people worked in the fields. Now that number is somewhere around 2%. It did not happen that 90% of people became unemployed. What happened instead is that the machines that substituted for human labor made agricultural products incredibly cheaper, and those people just went on to make less "important" things (what is more important in life than food?) and be paid for it. All the subsequent industrial revolutions were always steps toward rationalizing and automating work, so that the general population went on to do other work. The products made in this automated part of the economy are then pulled into a very "nasty" deflationary trend (for machines do not worry about not being paid at all), and disappear for the most part from the economic metrics (agriculture goes from being the most important economic activity to being the 2-5% it is now, for instance).
So, all this process has been going on for the past 200-300 years, and we are better for it, not worse. No one cares right now that we are mostly "unemployable" in the agricultural business. We do other things, for humans now have the resources and the wealth to care about other things.
In the future, what this robotic revolution means is that the robots will substitute for, again, a lot of work currently done by humans. The wealth that is now paid to and shared by humans to fuel these jobs and activities will *not* go to the robots though; it will go to the larger economy to do *other* things, for the robots will have, as I said, very few demands of their own (except maintenance, etc.), and the things they produce will deflate in value quite a lot.
So we see here the problem: the only thing we really need to care about is whether there is still any economic activity worth pursuing that will create sufficient jobs for everyone. People in the 19th century, had they been aware of the agricultural revolution coming, would probably have written essays similar to the one linked above, about how we would all be better off just working 1 day per week or something. This is probably because they would not have had the imagination required to picture the 20th-century economy, with the kinds of jobs that existed then and didn't in the 19th. Likewise, there might well be new "job landscapes" that we aren't quite picturing yet; it might well be that things like art or entertainment will grow bigger and bigger to become the main economic driver of the wealthiest economy on the planet, or it might be a combination of something else.
Measures like the basic income are neat, and they might eventually come to pass, but I do not think such measures will arrive due to sheer unemployment.
-
[long post]
Well, you might have a point in some areas, but you're ignoring perhaps the most basic reason for human operators: flexibility. You already could fly a plane on autopilot from takeoff to landing, without any problems. Yet, pilots still fly. The reason is that no matter how advanced, no automaton is capable of reacting to anomalies as well as a human. A modern train driver pretty much only operates a throttle, brakes and a deadman. In theory, you could hand the controls off to the computer, yes. However, if there's an anomaly the computer didn't detect for some reason, or if the connection with the central command computer is cut, you'd better have someone out there to take manual controls. Same with cars and trucks, though taxi drivers can indeed get hit if automated cars become commonplace. Since the passenger could hold the deadman and operate the manual override if needed, a taxi driver would be only needed when the car has to move without passengers, something that only happens out of cities. While ranks of low-level manual laborers might thin somewhat, it's always good to have somebody to watch the robots, especially if they're directly interacting with people (automated mining vehicles do not, hence why they can be completely driverless).
I agree that staffing levels will drop, and very low-level jobs could indeed be replaced, but I think that in general, tech will make jobs easier (and more boring) rather than replacing humans completely. Also, it won't happen very quickly, because hiring a college student to take orders in McDonald's is incredibly cheap, while electronics are expensive and also require expensive maintenance. Prices will drop, but it'll take some time before manual labor gets more expensive than automatons.
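A rough back-of-the-envelope sketch of that break-even point - a kiosk only wins once its amortised yearly cost drops below the fully loaded cost of the staff hours it replaces. Every figure below is invented purely for illustration:

```python
# Toy break-even comparison: cheap counter labor vs. an expensive self-order
# kiosk amortised over its lifetime. All numbers are made up for illustration.

def annual_labor_cost(hourly_wage: float, hours_per_week: float = 40.0,
                      overhead: float = 1.25) -> float:
    """Yearly cost of one staffed position, including payroll overhead."""
    return hourly_wage * hours_per_week * 52 * overhead

def annual_kiosk_cost(purchase_price: float, lifetime_years: float = 4.0,
                      yearly_maintenance: float = 5_000.0) -> float:
    """Yearly cost of one kiosk, amortised over its service life."""
    return purchase_price / lifetime_years + yearly_maintenance

if __name__ == "__main__":
    labor = annual_labor_cost(hourly_wage=8.0)           # cheap student labor
    kiosk = annual_kiosk_cost(purchase_price=80_000.0)   # pricey hardware today
    print(f"one cashier ~ ${labor:,.0f}/yr, one kiosk ~ ${kiosk:,.0f}/yr")
    print("kiosks already cheaper" if kiosk < labor else "manual labor still cheaper")
```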
-
Dragon's correct, there will be quite a bit of infrastructure inertia to overcome before it's robots, robots everywhere.
-
Ok, just a quickie here and I'll flee again.
The "naiveté" criticism I feel obligated to make is that I sense a real lack of Economics 101 in all those who are writing this kind of thesis. I do not entirely disbelieve that the "endgame" is something along the lines of "machines vs humans", and that capitalism will eventually weed a lot of people out into something like unemployment.
However.
The same has been said innumerable times throughout history. Take agriculture. There was a time when 90-95% of people worked in the fields. Now that number is somewhere around 2%. It did not happen that 90% of people became unemployed. What happened instead is that the machines that substituted for human labor made agricultural products incredibly cheaper, and those people just went on to make less "important" things (what is more important in life than food?) and be paid for it. All the subsequent industrial revolutions were always steps toward rationalizing and automating work, so that the general population went on to do other work. The products made in this automated part of the economy are then pulled into a very "nasty" deflationary trend (for machines do not worry about not being paid at all), and disappear for the most part from the economic metrics (agriculture goes from being the most important economic activity to being the 2-5% it is now, for instance).
So, all this process has been going on for the past 200-300 years, and we are better for it, not worse. No one cares right now that we are mostly "unemployable" in the agricultural business. We do other things, for humans now have the resources and the wealth to care about other things.
In the future, what this robotic revolution means is that the robots will substitute for, again, a lot of work currently done by humans. The wealth that is now paid to and shared by humans to fuel these jobs and activities will *not* go to the robots though; it will go to the larger economy to do *other* things, for the robots will have, as I said, very few demands of their own (except maintenance, etc.), and the things they produce will deflate in value quite a lot.
So we see here the problem: the only thing we really need to care about is whether there is still any economic activity worth pursuing that will create sufficient jobs for everyone. People in the 19th century, had they been aware of the agricultural revolution coming, would probably have written essays similar to the one linked above, about how we would all be better off just working 1 day per week or something. This is probably because they would not have had the imagination required to picture the 20th-century economy, with the kinds of jobs that existed then and didn't in the 19th. Likewise, there might well be new "job landscapes" that we aren't quite picturing yet; it might well be that things like art or entertainment will grow bigger and bigger to become the main economic driver of the wealthiest economy on the planet, or it might be a combination of something else.
Measures like the basic income are neat, and they might eventually come to pass, but I do not think such measures will arrive due to sheer unemployment.
I think you're missing two critical points.
Firstly, I don't think anybody's saying that this will end the world, or completely collapse society. Society and the human species will make the best of it and go on, obviously. But. Think about the closest analogue we have to the (almost inevitably) coming AI/robot/computerized workforce - the industrial revolution and mechanization meant that huge swathes of the workforce were no longer required. This led to the events you're talking about - increased productivity, no money going to the new machines, and the surplus spreading out into the economy to do other things (mostly supporting the extravagant lifestyles of the super rich). But it didn't enhance the lives of the lower classes, not by a long shot. As a direct result of the industrial revolution, England saw the Dickensian nightmare of poorhouses, extreme poverty and social disaster that took decades - generations, really - to sort itself out. And that was without the other likely serious social issues we're going to have to deal with in the next fifty to one hundred years (massive overpopulation, global warming etc.).
I have no doubt whatsoever that we'd eventually sort ourselves out, find a new balance. What I'm worried about is the period between then and now - it's not likely to be pretty.
The second point I think you're missing is that while the coming changes are similar to previous technological leaps (agricultural mechanization, the industrial revolution etc.), this will be the first time (with a few exceptions) that we're replacing the human ability to do things like think, solve novel problems and interact with other people. The machines of the future won't need us to do that. Sure, it's primitive now, but NASA already has self-diagnostic and repair systems on things like Curiosity. It won't be that long before similar stuff makes its way into the industrial world, with the machine pre-programmed to solve 95% of the issues it'll come up against in the real world. Sure, people will still be needed to solve the truly novel, unanticipated problems, but how many of them are there likely to be, really? Certainly not enough to require anything like the workforce today.
And remember, we aren't talking just about robots here, but a general technological tide that is affecting everything, all at once. Online shopping, 3d printing, self-driving cars and self-operating machinery, 3d-printed food, possibly even lab grown meat - all of those, plus all the dozens I can't think of, plus the hundreds of innovations that haven't been invented yet, all happening within the same short time period. Our technology is advancing way, way faster than our society, and people - lots and lots of people - will inevitably pay some kind of a price for that.
[long post]
Well, you might have a point in some areas, but you're ignoring perhaps the most basic reason for human operators: flexibility. You already could fly a plane on autopilot from takeoff to landing, without any problems. Yet, pilots still fly. The reason is that no matter how advanced, no automaton is capable of reacting to anomalies as well as a human. A modern train driver pretty much only operates a throttle, brakes and a deadman. In theory, you could hand the controls off to the computer, yes. However, if there's an anomaly the computer didn't detect for some reason, or if the connection with the central command computer is cut, you'd better have someone out there to take manual controls. Same with cars and trucks, though taxi drivers can indeed get hit if automated cars become commonplace. Since the passenger could hold the deadman and operate the manual override if needed, a taxi driver would be only needed when the car has to move without passengers, something that only happens out of cities. While ranks of low-level manual laborers might thin somewhat, it's always good to have somebody to watch the robots, especially if they're directly interacting with people (automated mining vehicles do not, hence why they can be completely driverless).
Remember that Spanish train derailment a few months back? The one that happened because the driver was on his phone and speeding through a curve? A train-driving program wouldn't have had those distractions. In fact, the drivers had been advocating an increase in the automation of the trains to prevent things like that. Do you think the families of the victims would support AI trains in future? What about that Korean airliner that crashed a bit before that because the pilots were going too slowly? An automated landing system wouldn't have had that issue.
Remember, society doesn't demand perfection. We have acceptable risks, social tolerances, acceptable limits to collateral damage - all we need is for the AI to be better than people. And that's so much closer than you think.
Really, what you're demonstrating is classic short-term, small-minded thinking, in line with famous comments like "640k should be enough for anybody" (apocryphal or not, as it may be). Sure, right now we need people watching the AI. But do you honestly think things will be like that in 2020? 2050? How long before the tech gets good enough to be acceptable? Or, imagine an intermediate step, where the tech isn't perfect, but could be contacted wirelessly by a trained operator in an emergency situation. If you had a next-generation, super-advanced AI autopilot, would you really need a pilot in every plane? Or would you just need one pilot sitting at air traffic control somewhere, ready to jump virtually into the cockpit of any plane that's having trouble?
Sure, we're not there yet for every situation. Maybe humans are still better in a lot of places. But that is changing, and quickly.
I do agree with Starslayer that infrastructure inertia will be a factor that slows this sort of thing down. But nothing is going to stop it. We have to adapt.
-
heres a little experiment. watch an episode of how its made, and count the number of workers. you will not, depending on the episode, be able to use all your fingers counting workers. we are already quite autonomous as far as production goes. in some countries like china, it becomes economically viable to use men instead of machines, because laborers there are willing to take a job with a low wage, and the cost of hiring all the needed employees is less than the cost of the design, construction and maintenance of machines to do the same job. eventually a fully mechanized china will come into being as their economy continues to build. automation tech will continue to improve in the meantime, and the cost of automated factory equipment will drop. and this will lower the economic threshold of entry into automated production.
we have already seen this happen in the us. so a lot of the jobs are jobs handling first world problems, such as health care, information, banking, service, and sales. places where we want people in the loop. banking could be totally automated if it didnt need constant monitoring. nobody wants to pay a fortune on health care to go visit a robot doctor. service usually requires some troubleshooting, which has yet to be automated (and all attempts to do so fail horrifically). sales (and advertising) also panders to human nature and is not really something you can automate. the human interaction in information tech is mostly troubleshooting the systems we use, policing them, so it is mostly a maintenance job, a thing hard to automate. there are plenty of other jobs but the thing is they are very hard to automate. you kinda need to distance yourself from the notion of full automation, because no matter how precise your robots and decision making ais are, they will always need supervision and maintenance.
we will get to a point where society is still dependent on a handful of highly paid, very highly skilled workers doing essential jobs, but where we will have a bunch of ballast jobs that dont really accomplish anything in particular, but are just used to give society something to do to keep them happy/occupied and to give them money to live on. you know, the kinds of jobs politicians like to create. of course paying people to expend effort accomplishing nothing is wasteful. so it would be simpler to just entitle everyone to a basic standard of living: food, shelter, basic medical, etc., where work grants you extra resources. one big problem in the us is that working a minimum wage job pretty much forfeits most government services. so making the transition from entitlement case to paid worker involves a decrease in your standard of living. its not just the employers, its the landlords too. i had a situation once where despite being employed i couldn't afford the cheapest apartment in town, so i ended up moving (and havent had a job since. you might note that im not homeless and actually have faster internet, which wouldn't be possible if i still had that job).
so it might be better to consider something where some percentage (say 25% as an example) of the population are working essential jobs, the few jobs needed to keep an automated society running. these are the people who thrive in a work environment and wouldn't be happy just being idle. you would have people who do non essential jobs, stuff that enriches society but isnt essential, they would get paid grants to continue on with their works, this is where artists, musicians, designers, athletes, intellectuals, etc would thrive. everyone else has their basic human needs taken care of, but are not required to work. currency would only exist as a way to ration resources, everyone (even people who work) is granted a basic standard of living allowance. children also have an allowance paid to their parents for child rearing expenses. if you work an essential job you earn a larger ration of resources (more money) in addition to your basic allowance. if you want to contribute to society but need funding, you can apply for a grant to pay for it (much like how research is funded), you might also be supported by contributions from individuals (crowd funding). being a non contributing member of society would be tolerated, treating basic human needs as a civil right.
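A minimal sketch of the rationing scheme outlined above - a flat living allowance for everyone, a premium for essential workers, grants for non-essential contributors, and a per-child supplement. The role names and every number are invented purely for illustration:

```python
# Toy model of the allowance scheme: everyone gets the basic ration,
# essential workers earn a premium, grant-funded contributors get their
# stipend, and parents receive a per-child supplement. All numbers invented.

BASIC_ALLOWANCE   = 1_200   # monthly living allowance, everyone gets this
ESSENTIAL_PREMIUM = 2_000   # extra ration for essential-job workers
GRANT_STIPEND     = 800     # typical grant for artists, researchers, etc.
CHILD_SUPPLEMENT  = 400     # per child, paid to the parents

def monthly_ration(role: str, children: int = 0, grant: float = 0.0) -> float:
    """Total monthly resource ration for one adult under the sketch above."""
    total = BASIC_ALLOWANCE + children * CHILD_SUPPLEMENT
    if role == "essential":          # the ~25% keeping automation running
        total += ESSENTIAL_PREMIUM
    elif role == "grant-funded":     # artists, musicians, intellectuals...
        total += grant
    # role == "idle" gets the basic allowance only, and that's fine
    return total

if __name__ == "__main__":
    print(monthly_ration("essential", children=2))              # 1200 + 800 + 2000 = 4000
    print(monthly_ration("grant-funded", grant=GRANT_STIPEND))  # 1200 + 800 = 2000
    print(monthly_ration("idle"))                                # 1200
```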
-
I think you're probably right, Nuke - some degree of government supported quasi-socialism along the lines of the Scandinavian states (http://en.wikipedia.org/wiki/Nordic_model) is probably where we'll end up. It's the getting there that worries me.
-
I agree with what is in the article. Ever since I heard about the wonders of 3D printing, I've wondered what effect it would have on our manufacturing sector, and the economy in general. It seemed to me that we'd have to rethink how we run the economy.
I also like the idea of a basic income guarantee. Rewarding hard work is one thing, but forcing poverty as a punishment? As an artist, I know that I work hard on my artwork. I know of artists who spend more time working on art than most people put into their day job. Yet they're forced to live in poverty because in our system what they do doesn't make money. I know a lot of artists would be happy not being rich, and just making art. The only problem is that our system will kill you if you try that.
BIG would allow people to make the things that most societies claim to value (art, volunteering, motherhood) yet don't actually support in our current system.
Because most artists are hopelessly stupid when it comes to advertising, promoting and selling their work. I'm not ****ting you: the more talented artists tend not to make it as artists because they don't have a strong work ethic, business acumen, or the connections. Many of the "Great Masters" were either in constant debt or on the run; few were actually good businessmen or promoters of their work. That didn't change until the 1880s, when art brokers permanently crippled the "Academic" mentality that had dominated the Western market for years.
I could expound for hours why artists are stupid, but I'll keep it simple.
As an "aspiring artist" the biggest thing I say is that most artists are business idiots. As an artist, you aren't handed a job insomuch as you literally have to make one. Want to sell your art? Guess what, you have to find the means!
Many talented individuals I know failed at the most vital part of success: contact lists, connections, etc. Want to be successful in the creative field? Guess what, you'd better start meeting people and networking. Networking = Opportunities.
-
I think you're missing two critical points.
Firstly, I don't think anybody's saying that this will end the world, or completely collapse society. Society and the human species will make the best of it and go on, obviously. But. Think about the closest analogue we have to the (almost inevitably) coming AI/robot/computerized workforce - the industrial revolution and mechanization meant that huge swathes of the workforce were no longer required. This led to the events you're talking about - increased productivity, no money going to the new machines, and the surplus spreading out into the economy to do other things (mostly supporting the extravagant lifestyles of the super rich). But it didn't enhance the lives of the lower classes, not by a long shot. As a direct result of the industrial revolution, England saw the Dickensian nightmare of poorhouses, extreme poverty and social disaster that took decades - generations, really - to sort itself out. And that was without the other likely serious social issues we're going to have to deal with in the next fifty to one hundred years (massive overpopulation, global warming etc.).
This is rewriting history. There was a reason why people flooded into industrial cities from the countryside, and that reason was mostly that it was better to live in the "Dickensian nightmare" than in the poverty-ridden hellscape that was the countryside then. Today we even have a direct analogue of what happened: just look at the Chinese exodus from the rural zones to the cities. They are willing to endure the ****ty jobs that the likes of Foxconn and others give them, for those jobs are ten times better than what they had in the countryside.
This idea that it will get worse before getting better is the usual cliché of a crisis and we should question it.
I am also extremely sceptical about these so-called "serious social issues" you are discussing here. I feel like I'm in 1913 and people are discussing the terrible problems that horse manure is going to create in cities given the "current trends". Trends never continue linearly, and I laugh at anyone who really thinks they can predict what will happen in 50 years (quote me anyone who, circa 1913, correctly predicted the kind of world we would have in 1963).
I have no doubt whatsoever that we'd eventually sort ourselves out, find a new balance. What I'm worried about is the period between then and now - it's not likely to be pretty.
Here we agree. It's never pretty. That's the usual problem of capitalism... it doesn't have a "human face", so to speak. It is relentless, cruel, rational and pitiless. It's a damned rollercoaster and we have to fasten our seatbelts.
The second point I think you're missing is that while the coming changes are similar to previous technological leaps (agricultural mechanization, the industrial revolution etc.), this will be the first time (with a few exceptions) that we're replacing the human ability to do things like think, solve novel problems and interact with other people. The machines of the future won't need us to do that. Sure, it's primitive now, but NASA already has self-diagnostic and repair systems on things like Curiosity. It won't be that long before similar stuff makes its way into the industrial world, with the machine pre-programmed to solve 95% of the issues it'll come up against in the real world. Sure, people will still be needed to solve the truly novel, unanticipated problems, but how many of them are there likely to be, really? Certainly not enough to require anything like the workforce today.
Again, I did not miss a single thing. You, however, did miss what I said. You failed to recognize that before the 20th century, most people were just farmers or factory workers. If you had told them that factories and farms would be mostly mechanized by the middle of the 20th century, they would have given you the exact same speech you wrote just up here, claiming that these "mechanized machines" (ah) would solve "95%" of the issues. What you miss is that this nightmare scenario you describe has already happened in our world. Agriculture was "95%" of the main issue of the 19th century, and that's now simply solved.
As recently as the late 1970s, I know the Carter administration took measures to limit research into the mechanization of agriculture because it was "killing jobs". It was bollocks economics then, and it is still bollocks economics now.
And remember, we aren't talking just about robots here, but a general technological tide that is affecting everything, all at once. Online shopping, 3d printing, self-driving cars and self-operating machinery, 3d-printed food, possibly even lab grown meat - all of those, plus all the dozens I can't think of, plus the hundreds of innovations that haven't been invented yet, all happening within the same short time period. Our technology is advancing way, way faster than our society, and people - lots and lots of people - will inevitably pay some kind of a price for that.
The price being that these services get overwhelmingly deflated in value and become accessible to everyone. Yeah, the horror. The kind of fear you are describing here is equivalent to the Luddites'. Taxis becoming accessible to more people because they are automated; trucks becoming more accessible to small companies because they are automated; shopping becoming fully automated and brought to everyone's door by robots (the nightmare!); "tele jobs" (or whatever you call them) disappearing, substituted by plain AIs (I can already see the tears shed by those who love that kind of job); the manufacturing jobs (those where people repeat the same actions a thousand times per day... what a loss to the world!); accounting and spreadsheet management (the horror!).
People will save money from all these services becoming cheaper and spend it on new things. Those "new things" will become the markets and jobs of the future.
Sure, we're not there yet for every situation. Maybe humans are still better in a lot of places. But that is changing, and quickly.
Just a quickie here. I do agree with Black Wolf that the jobs of pilots, drivers, a lot of managing jobs, omg, I could make a list of hundreds of jobs that are absolutely threatened by the coming of AI and robots.
And that's just amazingly good.
-
The change is slow and will be slow. Media and various futurists love to paint 3D printers, self-driving cars and advanced AI as being part of a revolution just around the corner. That is more sci-fi than reality. Automation has been here for a long time already, and it will take many decades until all the menial tasks are truly eliminated, if ever. There is more than enough time to adapt, and lots of jobs won't be replaced. Some expansion of welfare systems or a basic income is a good idea, though.
When we develop general AI similar to a human brain, that's the time to start worrying. But I think that won't happen for a century, at least.
-
General strong AI will be here within 20 years, at most.
-
General strong AI will be here within 20 years, at most.
As has been tradition for the past 40 years.
So I guess we'll get a package deal of strong AGI and commercial fusion reactors?
-
That was too facetious, especially coming from you, The_E, from whom I am not generally expecting that kind of stuff.
Fusion is harder than was expected, and AI will probably bring unexpected difficulties as well. However, I see no real barriers here. AI "already" exists in very fragmented low brow applications (like Siri or Search or Auto driving, etc.), and it will only improve exponentially. We do not need "strong AI" for AI to be useful. It is useful in the entirety of its "ramping up", which is only a very big incentive to R&D it.
"Strong AI" is a simple name for a complex feat, and we will slowly recognize that the practical AIs that we have on our hands and around us are increasingly smarter and adaptive until we suddenly realise we can actually call them "strong AI".
I do think such an AI will be built before 2030 in the lab, it will probably just reach commercial applicability ten years later, etc. (Just like "Watson" will only reach the rest of us in a few years, perhaps 10).
-
I'm a strong believer in the saying that "If we know how to do it, it isn't AI".
Yes, we can expect our software agents to get ever more helpful and sophisticated, but equating that with strong AI is, in my opinion, the wrong way to think about it. (There are a lot of really interesting ethics debates to be had on the question of whether or not developing an AI would be a moral thing to do, and a lot of ancillary questions about whether or not an AI patterned on human consciousness would even be willing (or able) to do useful work.)
AI, as in "superhuman superfast intelligence that improves itself as needed" is imho a pipe dream; whether it is a dream or a nightmare depends entirely on how dystopian you're feeling.
-
I think that's the Hollywood vision of what an AI is. Let me express better what I mean. I envision that current AIs are going to get better and better. We will laugh at calling them anywhere near "Strong AIs". All along the way we will just say "Oh, this isn't AI at all, it's just Siri, a "natural language user interface"", "Oh, that ain't AI, it's just an auto-driver", "Oh, that ain't AI, it's just a program that is managing the back store of Amazon".
We will get increasingly smart and intuitive AIs, ever more aware of what we mean when we say things and why we say them, and increasingly able to respond to our more complex queries and so on, and all along that ramp we will still laugh at the prospect of calling these things "Strong AI". Of course not! This is just the maid bot that knows how to clean my house from start to finish. That is just the pilot of the plane; it only knows how to fly planes! Etc., etc.
And then you'll have more general-purpose AIs that serve you as a kind of software butler, that do understand your personality and tastes, that know many general things that humans know too. Even then people will not call them "Strong AI".
And then suddenly you'll have strong AIs, and you'll wonder how on Earth they did that so quickly and unexpectedly. :D Well, they did it by fooling you with the "unsmart" things AIs are doing today and will do tomorrow.
-
That was too facetious, especially coming from you, The_E, from whom I am not generally expecting that kind of stuff.
Fusion is harder than was expected, and AI will probably bring unexpected difficulties as well. However, I see no real barriers here. AI "already" exists in very fragmented low brow applications (like Siri or Search or Auto driving, etc.), and it will only improve exponentially. We do not need "strong AI" for AI to be useful. It is useful in the entirety of its "ramping up", which is only a very big incentive to R&D it.
"Strong AI" is a simple name for a complex feat, and we will slowly recognize that the practical AIs that we have on our hands and around us are increasingly smarter and adaptive until we suddenly realise we can actually call them "strong AI".
I do think such an AI will be built before 2030 in the lab, it will probably just reach commercial applicability ten years later, etc. (Just like "Watson" will only reach the rest of us in a few years, perhaps 10).
the only reason fusion is taking so long is because all the money got dumped on the most expensive, most complex system possible, the mother****ing tokamak. i have a feeling small fusion (polywell/dpf/the unnamed thing the skunkworks is working on) will have a working reactor in less than iter's timeframe (much less iter and demo). oh and it will be an american reactor that goes to the military first.
i dont think ai will come as quickly either. all predictions will require moore's law to remain accurate for another 25 years. i have a feeling that we will hit the semiconductor wall first. we dont know if the transition to and miniaturization of quantum computers will follow moore's law. it might take a while to catch up to semiconductor tech, so we might have a period of time without significant computer tech increases.
-
I think that's the Hollywood vision of what an AI is. Let me express better what I mean. I envision that current AIs are going to get better and better. We will laugh at calling them anywhere near "Strong AIs". All along the way we will just say "Oh, this isn't AI at all, it's just Siri, a "natural language user interface"", "Oh, that ain't AI, it's just an auto-driver", "Oh, that ain't AI, it's just a program that is managing the back store of Amazon".
We will get increasingly smart and intuitive AIs, ever more aware of what we mean when we say things and why we say them, and increasingly able to respond to our more complex queries and so on, and all along that ramp we will still laugh at the prospect of calling these things "Strong AI". Of course not! This is just the maid bot that knows how to clean my house from start to finish. That is just the pilot of the plane; it only knows how to fly planes! Etc., etc.
And then you'll have more general-purpose AIs that serve you as a kind of software butler, that do understand your personality and tastes, that know many general things that humans know too. Even then people will not call them "Strong AI".
And then suddenly you'll have strong AIs, and you'll wonder how on Earth they did that so quickly and unexpectedly. :D Well, they did it by fooling you with the "unsmart" things AIs are doing today and will do tomorrow.
I've been thinking along these lines as well. I agree.
-
all predictions will require moore's law to remain accurate for another 25 years. i have a feeling that we will hit the semiconductor wall first. we dont know if the transition to and miniaturization of quantum computers will follow moore's law. it might take a while to catch up to semiconductor tech, so we might have a period of time without significant computer tech increases.
Well, they just built the first carbon nanotube computer...
-
well, have fun making connections smaller when wires are already an atom thick. a silicon atom is 111 picometers across and intel's current process is 22nm, so thats just over two orders of magnitude of room in which to shrink. we will reach physical limits at some point. its a limitation of matter and it will kill moore's law.
even if you do get a nanotube computer, you will need to play catchup for several years before you can surpass the capabilities of silicon devices. it does however allow for more 3-dimensional cpu design, and cnts are really good at getting rid of heat.
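quick arithmetic on the headroom claim above, using the usual ~0.7x linear shrink per process node as a rule of thumb (all numbers are back-of-the-envelope only):

```python
# compare a 22 nm process against the ~0.111 nm diameter of a silicon atom
# to see how much linear shrink is left before wires are one atom wide.
import math

feature_nm = 22.0     # intel's current process, as above
atom_nm = 0.111       # approximate diameter of a silicon atom

ratio = feature_nm / atom_nm                  # ~198x of linear headroom
orders = math.log10(ratio)                    # ~2.3 orders of magnitude
nodes = math.log(ratio) / math.log(1 / 0.7)   # nodes left at ~0.7x shrink each

print(f"linear headroom: {ratio:.0f}x (~{orders:.1f} orders of magnitude)")
print(f"roughly {nodes:.0f} more process nodes before hitting atomic scale")
```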
-
Old bump, but relevant:
http://news.yahoo.com/blogs/sideshow/switzerland-to-vote-on--2-800-monthly-%E2%80%98basic-income%E2%80%99-minimum-for-adults-181937885.html