Panel: Creating Human-level AI: How and When?

This topic contains 1 reply, has 2 voices, and was last updated by  Kevin 1 year, 3 months ago.


    Check out this panel from the Future of Life Institute with some of the foremost thinkers in the field, including Demis Hassabis and Jürgen Schmidhuber. They discuss how we can make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked. What do you think? Whose opinion do you relate to the most and why?


    It seems that a lot of the current AI work, and especially the current practical implementations, are focussed on specific tasks. To the IBM panelist’s point, they seem to be good at augmenting or improving on us in specific areas such as assessing health risks or looking for potential security issues.

    I see these fields evolving and maturing fairly rapidly, and in my space (security stuff) they are likely to become a great help over the next few years in terms of providing greater insights, finding more and more subtle security issues, and automating security actions and processes. These developments actually help both organisations and people – we find and respond to more issues more quickly, and we can free up our operational teams to focus on harder, more in-depth problems rather than dealing with ‘low-hanging fruit’ – more security and better careers.

    I’d guess similar trends will be mirrored in other areas such as healthcare.

    Regarding general, fully context-aware AI that can handle completely abstract concepts, I think this is quite a way off – but we could all be surprised!
