Ethics in Artificial Intelligence - Will the machines care?

It would be really easy to just post the link to Sam Harris’ TED talk, “Can we build AI without losing control over it?”, and say “Discuss!”

But for you busy people, let me distill some of Sam’s points and add a few of my own.

Sam does a brilliant job of pointing out that we’re not as worried about the impact of artificial intelligence as we should be.

“Death by science fiction is fun. Death by famine is not fun.”

If we knew we would all die in 50 years from a global famine, we’d do a heck of a lot to stop it. Sam is concerned that there’s a risk to humans once artificial intelligence surpasses us, and it will; it’s only a matter of time.

"Worrying about AI safety is like worrying about over population on Mars."

So, we’re using a time frame as an excuse? That we shouldn’t worry our pretty little heads about it because it won’t occur in our lifetime? In half my lifetime, I’ve gone from having an Amstrad CPC 6128 running CP/M to now carrying the Internet in my pocket. Also, I have kids and hopefully one day grandkids, so I’m a little worried for them.

Information processing is the source of intelligence. And we wouldn't consider for a moment the option that we'd ever stop improving our technology. We will continue to improve our intelligent machines until they are more intelligent than we are, and they will continue to improve themselves.

Elon Musk’s OpenAI group released Universe this week, providing a way for machines to play games and use websites like humans do. That's not a big deal if you’re not worried about the PC beating you at GTA V. It's a slightly bigger deal if you are a travel agent and the machines can now use comparison websites and book the cheapest fare without you. And while you’d hopefully have a moral compass guiding what you do online, do the machines have one? Could they get cheeky and ship their enemies glitter, or something more sinister?
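
For context on what "use websites like humans do" means in practice, the Universe release shipped a tiny "hello world" roughly along these lines: the agent sees screen pixels as observations and sends keyboard/mouse events as actions. The environment name and the hard-coded ArrowUp action below are just the example ones from that release, not anything specific to this article.

```python
import gym
import universe  # importing this registers the Universe environments with gym

# A Flash racing game used in the Universe launch examples.
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # spins up one local, Docker-backed remote environment
observation_n = env.reset()

while True:
    # The "agent": blindly hold the up arrow. A real agent would choose
    # its actions based on the pixels in observation_n.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```

The point sits right in that loop: the same interface that presses ArrowUp in a racing game can click and type on a fare-comparison site.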

Robert "Uncle Bob" Martin (author of The Clean Coder and other books), sets out 10 rules for software developers that he calls "The Scribe's Oath". One of those rules is you will not write code that does harm. But the issue isn't that a human will write code that shuts down a city's water treatment plant. The issue is that we're writing code that constructs deep learning neural networks, allowing machines to make decisions by themselves. We're enabling them to become harmful on their own, if we're not able to code a sense of morals, ethics and values into them.

Then we get into the ethical debates. If there are only two outcomes for an incident involving a self-driving car, one that preserves the life of the driver and one that preserves the life of a pedestrian or another driver, which one should the machine choose? Do we instil a human-like self-preservation/survival instinct?

Is this all the fault of (or a challenge for) the software developer? How does this apply to systems administrators & systems architects?

We've talked about autonomic computing before. If we are configuring scripted and self-healing systems, are we adding to the resilience of the machines, and will this ultimately be detrimental to us? How outlandish does that seem right now though - that we'd enable machines to be so self-preserving that they won't even die if we want them to? We've even laughed in these comments about whether the machines will let us pull the power plug on them. Death by science fiction is funny. But the machines can now detect when we are lying, because we built them to be able to do that. Oops.
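
To make "scripted and self-healing" concrete, here's a minimal sketch of the kind of watchdog many of us already run. The service name and check interval are hypothetical; the behaviour is the interesting part, because once something like this is scheduled, the system restarts itself with no human approval in the loop.

```python
# Minimal "self-healing" watchdog sketch. SERVICE and the interval are
# illustrative assumptions; systemctl is-active/restart are standard systemd calls.
import subprocess
import time

SERVICE = "example-app"          # hypothetical service name
CHECK_INTERVAL_SECONDS = 30


def is_healthy(service: str) -> bool:
    """Return True if systemd reports the service as active."""
    return subprocess.run(["systemctl", "is-active", "--quiet", service]).returncode == 0


def heal(service: str) -> None:
    """Restart the service - note there is no human approval step here."""
    subprocess.run(["systemctl", "restart", service], check=False)


if __name__ == "__main__":
    while True:
        if not is_healthy(SERVICE):
            heal(SERVICE)
        time.sleep(CHECK_INTERVAL_SECONDS)
```

Harmless at this scale; the question above is what happens when the things healing themselves are also the things making the decisions.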

Technosociologist Zeynep Tufekci says, “We cannot outsource our responsibilities to machines. We must hold on ever tighter to human values and human ethics.” Does this mean we need to draw a line between what machines will do and what they won’t do? Are we conscious, in our AI developments, of how much power and decision-making control we are enabling the machines to have without the need for a human set of eyeballs and approval first? Are we building this kind of safety mechanism in? With AI developments scattered among different companies with no regulation, how do we know that advances in this technology are all ethically and morally good? Or are we just hoping they are, relying on the good nature of the human programmers?

Ethics in AI has come up a few times in the comments of my articles so far. Should we genuinely be more worried about this than we are? Let me know what you think.

-SCuffy

Comment
  • If the moral code of an AI was based on the 10 commandments, you would end up with pretty messed up robots. Would they perceive polygamy as adultery? The death penalty as murder?

    The moral code of an AI has to be based on more universal laws than that. Problem is, there is no universal moral code.

    Should a self-driving car in India protect its driver over a cow in the middle of the road?
