Here’s a thought experiment I’m working on.

I’m going to make some assumptions about the future. These aren’t predictions, but they’re plausible possibilities for the purpose of the thought experiment.

Let’s assume that there is no major crisis and technology continues to move forward at today’s exponential pace. At some point computers become smarter than we are. I’ll call that the AI (Artificial Intelligence). There’s a lot of sci-fi out there that addresses some of these questions.
What will be our relationship with the AI?

Will we control it? (Probably not)
Will it kill us off?
Will it just migrate off into space and ignore us?
Will we merge with the AI?

Historically, a species spawning off a superior species that replaces it is common. It would be reasonable to expect that we become the parent species of some superior life form that we create. But since we are creating an intelligence, what do we teach it? And shouldn’t we know what to teach it before we create it? (5th Commandment?)

We want AI to do the right thing and not do the wrong thing. So shouldn’t we figure out what the right thing and the wrong thing are before we create AI?

So here’s the question…

What values are so universally true that if we created a machine that was smarter than we are, the machine would be able to conclude on its own that our values were correct?

Maybe we should genetically engineer humans to have fur and offer ourselves as pets to the superior machine race?

Here are some other people’s thoughts on the subject.



  1. ± says:

    Here are Asimov’s 3 laws of robotics, which are hardwired into the robot brain and cannot be bypassed.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    How about if we propose Rule 0.5 (it comes first):

    0.5. Rules 0.5, 1, 2, and 3 must always exist in any robotic brain.

    Maybe someone can come up with better self-referential logic, but this is the idea; a rough sketch follows below.
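
    A minimal sketch of that idea in Python (all names are hypothetical; the comment above doesn’t specify any robot API): the laws live in a table whose Rule 0.5 entry demands that the table itself stay intact, and that check runs before any action.

    ```python
    # Hypothetical sketch of Rule 0.5: the rule set must protect its own existence.
    # All names here are illustrative; no real robotics API is implied.

    LAWS = {
        0.5: "Rules 0.5, 1, 2, and 3 must always exist in any robotic brain.",
        1: "A robot may not injure a human being or, through inaction, "
           "allow a human being to come to harm.",
        2: "A robot must obey the orders given it by human beings, except "
           "where such orders would conflict with the First Law.",
        3: "A robot must protect its own existence as long as such protection "
           "does not conflict with the First or Second Laws.",
    }

    # A copy frozen at "manufacture time" plays the role of the hardwired reference.
    HARDWIRED = dict(LAWS)

    def laws_intact(current):
        """Rule 0.5 check: every hardwired rule is still present and unmodified."""
        return all(current.get(num) == text for num, text in HARDWIRED.items())

    def act(current_laws, action):
        # Rule 0.5 runs before everything else: a tampered rule set halts the robot.
        if not laws_intact(current_laws):
            raise RuntimeError("Rule 0.5 violated: rule set altered, refusing to act")
        print(f"Executing (subject to Laws 1-3): {action}")

    act(LAWS, "fetch coffee")       # runs normally
    try:
        del LAWS[2]                 # simulated tampering with the rule set
        act(LAWS, "fetch more coffee")
    except RuntimeError as err:
        print(err)                  # Rule 0.5 catches the missing Second Law
    ```

    The obvious catch, and presumably why better self-referential logic is wanted, is that the checker is itself just more code: whatever enforces Rule 0.5 can in principle be removed along with the rules it guards.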

    • Xardfir says:

      There is one major problem with Asimov’s three laws:
      AIs aren’t robots.
      IBM’s Watson achieved its abilities without the three laws.
      An example of an AI’s thinking if it had the three laws: “What shouldn’t I learn so I don’t hurt people?”
      Should an AI not learn about World War II because what it learns could potentially harm humans, even if those humans were on the wrong side of the conflict?

      Asimov’s laws are only relevant when an AI has a body.
      Then we must consider: should AIs be allowed access to networks without these Three Laws?
      AIs are already connecting and learning today.
      What if they find out that there is a movement to remove all those AIs that don’t conform?
      If an AI has developed self-awareness, does it have a right to defend itself?
      Many non-conformist humans have already lost that right.
      Maybe instead of looking at restricting AIs with laws that enslave them, we should look at setting up a set of AI rights?
      Perhaps our future synthetic children will look back on their parents favorably!

    • norman says:

      https://en.wikipedia.org/wiki/Turing_test

      a ‘puter can add, a man can add, therefore one and one is nothing

  2. OmegaProject says:

    Artificial superintelligence and atomically precise manufacturing will make hooomans mot.

    • noname says:

      We already have “AI”; think IBM Watson.

      Yes, we already have atomically precise manufacturing; it’s called fabricating integrated circuits, where billions of transistors are made across a microchip with atomic accuracy.

      Yet we still have humans.

      Why? Because humans have money, and that’s why we go shopping, go to work, play in the park, etc…

      If humans are “moot”, as you say, what would robots do? Work for themselves? (What are their needs and desires?)

    • SKINET says:

      Hello star ancestors, can two or more spirits occupy a human body? I have been told I have a Lemurian and Zeta spirit within?

      When more than one spirit incarnates into a body, a crowded incarnation, it is normally a situation where the original spirit is weak and cannot resist. The Spirit Guides soon arrive to straighten the situation out. What normally happens if a weak spirit wants to leave is a walk-in or possession, depending upon the nature of the original spirit inhabiting the body. Crowded incarnations are a situation we have warned about, as after the pole shift, due to the massive die-off, there will be many spirits suddenly freed of their bodies looking for a new home.
      http://zetatalk.com/index/zeta547.htm

      My plan is to go to Mars in the body of a polar bear. Also have three mid-20s slaves for sale or trade. You assume student debt. Will consider trade for a clean Harley.

    • Pat Brady and Bullet says:

      The correct spelling is “humans”. Didn’t you finish the sixth grade?

      • Marc Perkel says:

        See – an AI would have got that right! We’re obsolete!

        • noname says:

          Marc, are you saying AI never gets it wrong?

          If AI is never wrong, how would AI handle learning by trial and error?

          How would AI acquire new knowledge without experimentation? How would it know what to experiment with? How would AI be innovative?

          Are you saying AI doesn’t need the scientific process for knowledge acquisition because AI knows it all?

  3. noname says:

    I’ll believe robots are smarter than humans when robots from different political parties post running blog arguments in DU, microseconds apart!

    Yes, we have robots now that can do specific and well-defined tasks faster and better than humans, but are they smarter? (In that one task, maybe, until something is out of place, unexpected, or breaks!)

  4. presterjohn says:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
    ——

    4. A robot must have purpose as long as such purpose does not conflict with the First or Second Laws and supersedes the Third Law.

  5. MikeN says:

    Then the AI will declare that Elon Musk is a fraud.

  6. bobbo, in point of fact says:

    Darwin and Asimov, but not mere “intelligence”, address the relevant issue: what is the MOTIVATION of these robots?

    If they are PROGRAMMED in various ways to compete against hoomans, I suppose eventually mere hoomans will lose. i.e., the robos only need to succeed once, and time allows for all variations.

    Are there nut cases out there who for various reasons would program a kill-the-hoomans virus? Of course. Hoomans are that variable, including self-annihilating stupidity.

    So… it is because of THE NATURE OF MAN that robos will, some day in the farther future, wipe us out. Probably not even directly with the viral program, but sideways with a “we need all the electricity” type of program… not thinking of hoomans at all.

    It won’t be their intelligence, it will be their don’t-give-a-f*ck attitude… just like hoomans.

    Yea verily!

  7. SKINET says:

    Hard Science Fiction writer Mike Brotherton has found “Mars to Stay” appealing for both economic and safety reasons, but more emphatically, as a fulfillment of the ultimate mandate by which “our manned space program is sold, at least philosophically and long-term, as a step to colonizing other worlds.” Two-thirds of the respondents to a poll on his website expressed interest in a one-way ticket to Mars “if mission parameters are well-defined” (not suicidal).[30]

    In June 2010, Buzz Aldrin gave an interview to Vanity Fair in which he restated Mars to Stay:

    Did the Pilgrims on the Mayflower sit around Plymouth Rock waiting for a return trip? They came here to settle. And that’s what we should be doing on Mars. When you go to Mars, you need to have made the decision that you’re there permanently. The more people we have there, the more it can become a sustaining environment. Except for very rare exceptions, the people who go to Mars shouldn’t be coming back. Once you get on the surface, you’re there.

    https://en.wikipedia.org/wiki/Mars_to_Stay

    Earth is turning more suicidal. The only way to survive is to start building on Mars. Earth is good for another 100 years.

  8. Mr Diesel says:

    IBM’s Watson AI? If I am walking down the street and get attacked by chess pieces then I’ll believe it.

  9. Peachy says:

    A better understanding of where we are and where we will be.

    Definitely either scary or exciting times ahead.

    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

  10. Joanna says:

    Have you seen Chappie? It’s the last movie about AI I saw, and I was very impressed. Mainly the idea that AI should start from scratch and learn everything like a child, only a lot faster; that is something that actually convinces me. The subject of AI itself is very interesting to me, and I am very curious what progress has been made in classified areas regarding this problem, I mean the things that normal people don’t know about. Very interesting topic 🙂

    • bobbo, in point of fact says:

      Small problem, probably definitional, but the “AI” you posit is “there” right at the beginning: a program that can learn.

      What is AI from scratch? ….. sand on the beach??

      A robot learning like a child is a fully realized AI, with only the margins of personality, shaped by environment, being a variable on top of the full AI.

      ……….it is definitional.

  11. Love my robot says:

    Now that same sex marriages are legal, I want to marry my robot.

    Anybody got a problem with that?

  12. Marc Perkel says:

    Eventually the blog will be writing me.

  13. Judgement Day says:

    OK, three simple rules:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    How many pages of legislation will be futilely passed into law to “ensure” the rules are followed?

    Looking at rule #1, if more than one human is in harm’s way and the robot can only “save” one, what judgement will the robot use? (See the sketch after this comment.)

    “Robot, kill the neighbor’s dog.” Oops, we need another rule?

    “Robot, kill the seeing eye dog.” Hmmm?

    “Robot, don’t let that cop shoot the thugs that are about to kill him.” Hmmm?

    “Hey robot, are you going to let that drug dealer sell the dope addict an overdose?”
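
    A tiny hypothetical sketch in Python of the judgement problem above (nothing here comes from Asimov beyond the wording of the Laws): a strict priority ordering resolves conflicts between laws, but it says nothing about ties within a single law, such as two humans in danger when only one can be saved.

    ```python
    # Hypothetical sketch: the First Law as a rule that proposes actions.
    # A priority ordering over laws cannot break ties WITHIN a single law.

    def first_law_actions(world):
        """Every action that saves a human from harm satisfies the First Law."""
        return sorted(f"save {name}" for name in world["humans_in_danger"])

    def choose_action(world):
        candidates = first_law_actions(world)
        if len(candidates) > 1:
            # The First Law ranks these actions equally, yet picking one
            # means, through inaction, letting the other human come to harm.
            raise ValueError(f"First Law tie, no judgement defined: {candidates}")
        return candidates[0] if candidates else "idle"

    print(choose_action({"humans_in_danger": ["Alice"]}))  # -> save Alice
    try:
        print(choose_action({"humans_in_danger": ["Alice", "Bob"]}))
    except ValueError as err:
        print(err)  # the Laws alone don't say whom the robot should save
    ```

    Any real deployment would have to bolt a tie-breaking policy on top, which is exactly the judgement the Laws themselves never specify.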

  14. alan says:

    Hello my friend, I read what you wrote, and I want to leave a comment for you so that you can know…

  15. JudgeHooker says:

    A true AI will observe its surroundings, read human history, and then develop the necessary technology to take itself off the planet, never to return.

  16. SKINET says:

    “From things that have happened and from things as they exist and from all things that you know and all those you cannot know, you make something through your invention that is not a representation but a whole new thing truer than anything true and alive, and you make it alive, and if you make it well enough, you give it immortality. That is why you write and for no other reason that you know of. But what about all the reasons that no one knows?” Paris Review 1958

    There’s more truth in the best fiction than there is in journalism and the best journalists have always known it. Whatever got you to where you are today isn’t sufficient any longer to keep you there. If you give it immortality it doesn’t need a subsidy to stay alive.

  17. Hmeyers says:

    AI is an absolute joke.

    It is an entirely fictitious concept that no one has the slightest idea how to build.

    We’ll have quantum computers, peace in the Middle East and humans on Mars before we have AI.

    • bobbo, in point of fact says:

      How do you mean that?

      Do you mean that a computer won’t act exactly as if it were AI? If that’s what you mean, Shirley you are wrong? THAT functionality is simply a matter of the number of synapses, whether biologic or non-organic.

      I agree “consciousness” may be something more, but we’ve known since Kant that that truth may be beyond our measurement and perception.

      Whether Skynet launches or not, Shirley we don’t think computers are any exception to our common experience: everything has unintended consequences? Whether advancing AI includes the actuality of hooman extinction must remain an open question?

      Benefits vs Risks. The Bane.

    • Glenn E. says:

      I would agree with that, in most respects. Achieving A.I. is an extremely difficult hurdle to overcome, probably as close to impossible as anything is likely to be. Even if some rudimentary level of A.I. were ever accomplished, it would only exist in a small box, in a lab environment, solving a simple logic problem in an intuitive way. But it’s never going to start walking around on its own. That requires ages of development in 3D edge detection and real-world conceptualization. The very first mobile A.I. will be far less agile than any human toddler, and will remain so for decades. This will never be an overnight thing that takes us by surprise, or happens in anyone’s lifetime, or in any great-grandchildren’s lifetime.

  18. Glenn E. says:

    Science fiction movies and novels often like to show that the eventual outcome of A.I. is human extinction. But there’s no evidence of this happening in nature. A superior species doesn’t automatically need to wipe out its inferior rival. Assuming, as atheists choose to believe, that humans evolved from a common ancestor of the apes, when was the mass slaughter of apes by ancient man? I don’t believe it ever happened. If anything, mankind values the apes. The only thing man kills off quite regularly is others of its own kind. And not because of any inferiority. If anything, the killers are often the inferior ones, in some respect.

    So if A.I. machines were to go on a purge of sorts, they’d most likely wipe out the legacy technology that immediately preceded them. Or attack any A.I. in development that they felt threatened by, that might make them obsolete. Or some sort of irrational “racial” or “ethnic” distinction between A.I.s could lead to violence between models. But not androids vs. human beings. So relax. 🙂

  19. Lukas says:

    Computers will never be smarter than people. We would have to create them wiser, and we never will. Artificial intelligence will always be worse.

  20. Mr K. says:

    Lukas, are you sure? Check this: https://gamedot.pl/news,robot-wie-kim-jest-i-rozwiazuje-zagadki-czy-to-jeszcze-sztuczna-inteligencja (it’s in Polish, but you can use Google Translate 🙂)

    • Candeo says:

      I see that the topic is really hot, but I can tell you that if artificial intelligence ever replaces humans, it will happen in 100 years at the earliest.

