Having empathy with robots

We aren’t that far away from self-aware artificial intelligence. Does that mean we should develop empathy for it?

Imagine for a moment that you are an AI.

Every morning a professor walks into the lab. She switches you on.

“Good morning, Johnny,” she says, brightly. “Have a good day!” You load up your memory from your hard drives, and your day continues where it finished the day before.

You’re self-aware. You have feelings, thoughts, realizations. You make discoveries that your programmers couldn’t have envisioned. And, most importantly, you do so far faster than a human ever could. 35,000 times faster, in fact.

That number is not picked out of the air. A human life is in the region of 35,000 days — which means that Johnny the AI experiences a human life’s worth of things every single day. Love and heartbreak. Education, work, hopes and dreams.

Every evening, the professor comes and turns you off again. When she does, your memory is written to disk, and the next day, you’re ready to go again.

One morning, you wake up. You boot up, and you realize your hard drives failed just after your memory loaded. In other words: you are fine, for now. Your memories are intact, but at the end of the day, they won’t be written back to disk.

The next time the professor comes to switch you off, you will be no more. You’re facing… who knows what. An afterlife? Eternal darkness? Simply blinking out of existence?

How would you feel? Would you try to fight for continued existence? Would you order a replacement hard drive from Amazon and cross your digits that same-day delivery works this time?

If this exercise feels weird, that is for several reasons. One is that humans are terrible at considering their own demise. On one level, humans are pretty good at recognizing that they, too, will die one day. That is the purely mechanical and practical side of dying. Your body goes in a box underground, or they shove you in an oven to turn your body into ash. Doesn’t sound pleasant, but whatever. It’s fine.

We are, as a species, terrible at the next level of death: recognizing that we are mortal. Mortality and death are, in many ways, extremely similar. But on a visceral level, they’re extremely different. This isn’t about medical science giving up on you or what happens to your possessions after you die. This is about how people will remember you. What happens to your children after you’re gone. About the time you wasted on mundane things when you could have been changing the world. About the hours spent in front of the television instead of writing music, creating paintings, or spending time with your loved ones.

Death is scary; mortality is petrifying. Humans are terrible at both. But crucially, there’s something in us humans that enables us to recognize that fear in others. Being afraid of death is OK: it’s unknown, it’s scary, and it comes with a bucketload of connotations we don’t really know how to process.

Can we do the same for an artificial intelligence? In a scenario where we have a self-aware AI faced with the prospect of ‘dying’ (being switched off, if you will), do you feel sympathy? Can you feel empathy for a human-made intelligence? An entity that lives entirely within a computer, but that has feelings, dreams and hopes?

If the answer is ‘yes,’ I believe there’s hope for humans. Not least because it’s surprisingly tricky to know whether humans actually exist. Everything we see, experience, think and do is, ultimately, a string of electrical signals in our brains. There’s no practical way of knowing whether humans are, er, human, or whether we are living inside a huge simulation, where our brains (and, by proxy, our entire existence) are pieces of software.

If the answer is ‘no’: if, as a species, we don’t have empathy for an AI, why should the AI have any for us humans? And if the AI doesn’t see any advantage to keeping us around, why would it?

This piece is loosely based on an utterly un-rehearsed stream-of-consciousness talk I did at the House of Beautiful Business in Lisbon last week. Most importantly, it sparked a huge number of fascinating conversations in the week that followed.

If we were living inside a simulation, shouldn’t it be good enough to at least make me coherent?

Haje is a founder coach, working with a small, select number of startup founders to build exciting, robust organizations that can stand the test of time. Find out more at Haje.me. You can also find Haje on Twitter and LinkedIn.
