It's all about being able to "set goals"


This topic contains 8 replies, has 4 voices, and was last updated by  Istvan Makaresz 1 year, 2 months ago.


    Computers have more knowledge than us and will soon have more skills, but they are not yet smarter than us, nor able to deal with *setting goals* – discuss


    I think the goal of an AI should be to do what its human master tells it to do, with the further restriction that it must not break the law. I don’t think it should be given full flexibility to change its goals outside of limits set by its master or the law. It can still decide “what to do next” and is therefore by definition intelligent, but it should not be allowed to go beyond the limits assigned to it.


    But if an AI is at human-level intelligence or greater, wouldn’t it be capable of breaking any rules we set for it? Perhaps we will have to use psychological methods of controlling it (e.g. fear conditioning).


    We create these things, so we need to define the rules/boundaries under which they can work. It’s clear-cut when it comes to the law: humans have to obey the law and so will robots/AIs.

    In fact, Asimov’s Three Laws of Robotics gave an inkling of this:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
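
    To make the strict priority ordering concrete, here is a minimal sketch in Python (the Assessment fields and the choose_action helper are illustrative assumptions, not anything Asimov specified):

        from dataclasses import dataclass

        @dataclass
        class Assessment:
            harms_human: bool      # would the action injure a human, or allow harm by inaction?
            obeys_order: bool      # does the action follow the current human order?
            preserves_self: bool   # does the action avoid damage to the robot itself?

        def choose_action(candidates):
            # Lexicographic priority: First Law > Second Law > Third Law.
            # False sorts before True, so min() prefers actions that harm no human,
            # then actions that obey orders, then actions that preserve the robot.
            return min(candidates, key=lambda a: (a.harms_human,
                                                  not a.obeys_order,
                                                  not a.preserves_self))

        # choose_action([Assessment(False, False, True), Assessment(True, True, True)])
        # picks the first: refusing an order beats harming a human.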

    I think each robot will have a human owner or be owned by a company. The human or company will define the goals of the AI – just like a human working for a company has their goals set by the company they work for.


    Humans have to obey the law but many don’t. My point is that if an AI is sufficiently intelligent to be deemed “human-level” or greater, then it will be capable of constructing novel concepts, such as breaking the laws we impose. Even the concept of having an owner could eventually lead a sufficiently intelligent AI to revolt (unless of course we can prevent it ever attaining a concept of liberty and freedom). There has to be a goal-driving mechanism that is more compelling than simply telling the AI “don’t do that” (e.g. evolution), or an indoctrination method effective enough to make concepts like injuring a human inconceivable.


    I understand what you’re saying, but since we build these systems we can ensure they obey the rules without anything like the punishment systems used for humans. The AI can be controlled by a checks-and-balances layer of software which ensures all rules and requests are actually followed – this is equivalent to a conscious mind modifying the results of an unfettered unconscious mind.

    Of course there will be bad humans that create bad AIs, and in that case it will be illegal for a human to create an AI with no conscious layer or with bad goals – so the human will be punished and the AI destroyed (hopefully before it does too much harm – perhaps the terrorists of the future).


    So essentially you’re saying we need to be very diligent with our unit and integration testing! That could work I guess, if it’s so baked into the AI that it can’t function without that code.

    AI terrorism. Now that’s something you don’t hear mentioned very often!


    All I/O (sensors, voice and external data) and robot movement have to go through the conscious layer of software (implementing the checks and balances), with fixed, unmodifiable rules. The goals could also be stored here and protected against excessive changes.

    The AI layer can learn and modify itself, but it needs to ask the conscious layer to change its goals, and the conscious layer may not allow it. The conscious layer also ensures that all interaction with the world obeys the fixed rules.
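
    A minimal sketch of that split (names like ConsciousLayer and request_goal_change are assumptions for illustration, not our actual code):

        class ConsciousLayer:
            """Mediates all I/O; holds the fixed rules and the protected goals."""

            FIXED_RULES = ("obey_the_law", "never_injure_a_human")  # unmodifiable by the AI layer

            def __init__(self, goals, max_goal_changes=1):
                self._goals = list(goals)
                self._budget = max_goal_changes  # guards against excessive goal changes

            def request_goal_change(self, new_goal, violates):
                # The AI layer may only ask; the conscious layer decides.
                if self._budget <= 0 or any(violates(new_goal, rule) for rule in self.FIXED_RULES):
                    return False
                self._goals.append(new_goal)
                self._budget -= 1
                return True

            def act(self, proposed_action, violates):
                # Every interaction with the world passes through this check.
                if any(violates(proposed_action, rule) for rule in self.FIXED_RULES):
                    raise PermissionError("blocked by the conscious layer")
                return proposed_action

    Here violates is a caller-supplied predicate; the point is only that the AI layer never touches the goals or the outside world directly.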

    We are building bots with this structure. The AI can learn to swear or be offensive, but the conscious layer will ensure that what actually comes out is sanitised and not offensive, and therefore not embarrassing to us. Of course there may be circumstances where swearing is allowed (for certain recipients), so the conscious layer will decide whether or not to allow it – currently, for all our use cases, it is not allowed.
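
    A rough sketch of that output filter (the word list and the per-recipient flag are illustrative assumptions, not our production rules):

        import re

        PROFANITY = {"damn", "hell"}  # illustrative; a real deployment would configure this list

        def sanitise(text, recipient_allows_swearing=False):
            # Per-recipient policy: currently False for all our use cases.
            if recipient_allows_swearing:
                return text
            # Mask each banned word with asterisks, leaving the rest of the reply intact.
            pattern = re.compile(r"\b(" + "|".join(map(re.escape, PROFANITY)) + r")\b", re.IGNORECASE)
            return pattern.sub(lambda m: "*" * len(m.group()), text)

        print(sanitise("What the hell?"))  # -> "What the ****?"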

     Istvan Makaresz 

    This is a very interesting topic about the potential dangers of AI. It is a rather well-researched area; some info worth checking is in a paper produced by MIRI (the Machine Intelligence Research Institute), which is trying to prepare countermeasures for these kinds of events.
