Tagged: artificial intelligence
September 26, 2016 at 10:49 am #10459 | Tabitha (Keymaster)
If you believe everything you read, you could probably be quite worried about the prospect of a super intelligent, killer AI. The Guardian, a British newspaper, warned recently that “we’re like children playing with a bomb,” and a recent Newsweek headline reads, “Artificial Intelligence Is Coming, and It Could Wipe Us Out.”
Nick Bostrom, an Oxford philosopher, aggregates the results of four different surveys of groups such as participants in a conference called "Philosophy and Theory of AI," held in 2011 in Thessaloniki, Greece, and members of the Greek Association for Artificial Intelligence. His findings are presented as the probability that human-level AI will be attained by a given date:
By 2022: 10%
By 2040: 50%
By 2075: 90%
When do you expect we’ll see human general intelligence?
September 26, 2016 at 12:55 pm #10472 | Paul (Keymaster)
The first mistake Nick Bostrom made with his survey is assuming we can properly quantify human intelligence. I discuss this in more detail in my blog, but in short, we can’t, given our current understanding of the brain. The measures used by people like Bostrom and Ray Kurzweil are too simplistic to serve as a good proxy for intelligence.
Secondly, as discussed in a post by Oren Etzioni for MIT Technology Review, Bostrom’s survey is not representative of expert opinion. Etzioni’s own survey of the Fellows of the American Association for Artificial Intelligence suggests that 92.5% of experts think “superintelligence is beyond the foreseeable horizon.” One respondent went so far as to refer to Bostrom as the “Donald Trump of AI.”
So, in summary, human-level intelligence (however you want to quantify it) is likely centuries away, if it is achievable at all.
September 26, 2016 at 5:24 pm #10501 | Robin (Participant)
I think he said general intelligence, so he means intelligence as flexible as a human’s. My guess is 2040, and before then we can discuss what non-general intelligence is.
September 27, 2016 at 5:30 pm #10571 | Shaun (Participant)
I agree that one of the major challenges here is defining what the finish line is. Sam Harris talks a lot about how impossible it is to determine whether we have produced a conscious being versus an intelligent machine, and I think it is damn near as hard to confirm we have produced general AI.
People have been hoping for this for decades, and I think it will continue to be a thing of science fiction for my lifetime.
September 27, 2016 at 7:17 pm #10575 | Paul (Keymaster)
So I discussed this with Dr Hauser after the talk, and he gave an interesting justification for his view that it will be decades, not centuries, until AI surpasses human intelligence. He agreed with me that the measures of human intelligence used by people like Ray Kurzweil and Nick Bostrom are far too simplistic. However, the brain is still of finite complexity, so his argument still stands; the relationship between AI capability and human intelligence is simply shifted to the right, i.e. human-level AI arrives at a later date. In his opinion that date might be pushed back only five or ten years, since the rate of improvement is so great.
In my opinion this will be more like centuries. I think there will be a long plateau between mastering narrow AI and building a general AI (however you define it), since I don’t think they lie on the same exponential growth curve.