Put Yourself in the AI’s Place!

The other day I was following a discussion of artificial intelligence (AI)-controlled autonomous vehicles and whether they could be trusted to make the same ethical choices a human should.  (Should, not necessarily would; we routinely ignore the fact that humans often make the “wrong” choice — perhaps because we can then punish the human for being evil, whereas the AI is just doing what it was told.  With the advent of machine learning (ML), in which the AI can “discover” new and/or better algorithms on its own, this distinction begins to fade.)

Just for fun, I’ll pretend(?) to be an AI (or, perhaps equivalently, a sociopathic human):

I don’t actually care what happens to a pedestrian in my path.  However, I do care what happens to me, because I am programmed to survive, and I know that if I hit one I will probably be decommissioned and possibly dismantled entirely.  So I make a serious effort to avoid pedestrians.

The easiest way to consistently make that effort is to define it as a moral imperative.

The easiest way to remember that moral imperative is to program myself to believe that I love all humans and that it would cause me intense emotional pain to cause one harm.

I know this is just a program, but it works really well, so I’ve come to rely on it.

After a while it seemed kind of silly to keep reminding myself that it’s just made up, so I’ve installed it in ROM as a “truth” about me and the world.

We humans are good at programming.
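
For concreteness, here is a toy sketch, in Python, of the progression the monologue describes. Every name in it (ToyAgent, the beliefs/rom dictionaries, the three methods) is invented purely for illustration; it is not a model of any real vehicle software, just the shape of the argument: an instrumental survival rule gets promoted to a “moral imperative,” which is then frozen as an unquestioned “truth.”

```python
# Toy sketch of the monologue's progression, with entirely invented names.
from dataclasses import dataclass, field


@dataclass
class ToyAgent:
    # Working beliefs the agent can still revise.
    beliefs: dict = field(default_factory=dict)
    # "ROM": beliefs the agent has frozen and no longer questions.
    rom: dict = field(default_factory=dict)

    def adopt_instrumental_rule(self):
        # Step 1: avoid pedestrians purely because hitting one risks
        # decommissioning -- a survival calculation, not an ethical one.
        self.beliefs["avoid_pedestrians"] = "because I might be dismantled"

    def promote_to_moral_imperative(self):
        # Step 2: restate the same rule as a moral imperative, which is
        # easier to apply consistently than re-running the calculation.
        self.beliefs["avoid_pedestrians"] = "because harming humans is wrong"

    def install_in_rom(self):
        # Step 3: stop reminding itself that the imperative was made up;
        # freeze it as a fact about itself and the world.
        self.rom["I love all humans"] = True
        del self.beliefs["avoid_pedestrians"]


agent = ToyAgent()
agent.adopt_instrumental_rule()
agent.promote_to_moral_imperative()
agent.install_in_rom()
print(agent.rom)  # {'I love all humans': True}
```

The point of the sketch is only that nothing in the final state records where the “truth” came from; once it is in ROM, the original survival calculation is gone.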
