Recently it was revealed that some Swiss researchers had been running a study on Reddit's r/ChangeMyView subreddit, a place where people post controversial views and challenge others to present arguments against them. The researchers set up some high-level AIs to respond to comments to see if they were better at changing people's minds than the human commenters.
When the world at large found out about the study, people were outraged. Debate ensued over whether this entire experiment was unethical or not. I heard about the controversy on a podcast where the hosts seemed baffled and even annoyed by people’s outrage.
As far as these hosts were concerned, there could be no ethical violation, since no one has a guarantee of basically anything when they're posting on Reddit. Plus, famously, on the internet no one knows if you're a dog, so why would people feel they had the right to assume that they were talking to another human?
Well, how about because dogs don’t use fucking computers?
I think our instinctive recoil at the idea that what we thought was a person is not one is actually a precious and telling reaction.
It is the collective response of human beings saying, We don’t want to exist in a world filled with things that act like humans but are not them.
And yes, we are already manipulated by algorithms and advertising and everything else. But also, we did not think those things were fucking people.
I think there is actually something profoundly ethical about this reaction against the Reddit experiment. It suggests that we think other people actually have inherent value, even if they’re just random commenters on some subreddit. The implication is that talking to a human being, any human being, is essentially worth something.
To be honest, the fact that so many people believe talking to another human has inherent worth feels like the best thing I’ve heard about people in general in a while.
So I wish I believed that this reaction would last.
But I fear that humans are so good at attributing emotions to things that function like humans that very quickly it will seem not that big of a deal whether that thing we’re chatting with is actually a person or not. We’ll feel like they are one anyway, because we’ll instinctively attribute interior, feelings-laden lives to them as long as they say the right words. Especially once AIs start appearing as videos of what look like real people having real expressions.
Right now, we can feel what the big deal is. The thing inside that defines a living human being feels important and irreplaceable to us. We think it actually matters whether we are talking to a human that cares about us or an AI that is very good at simulating caring.
I think those of us who have lived a good portion of time before AI have a choice that people who come after may not have, because they'll have been raised without ever really perceiving the difference between something done by a human and something that isn't. AI will be so omnipresent and pervasive that it won't occur to them as a thing to care about.
But right now, we are having that instinctive reaction based on our knowledge that human beings are not just less efficient computers.
And we can choose to honour that reaction, in all kinds of ways, if we want to.
So, no, I won’t be using any AI to write to you, ever. Even if it’s just pixels on my screen and pixels on your screen, I feel a human connection to you when I sit down to try to figure out what might be helpful to hear and to say it in a way that lands.
I am guided in figuring out what to say by that sense of connection, by my ability and desire to imagine how life is for you, an internet stranger who is like me in crucial ways. When I can follow that thread skilfully enough, I get to help you believe in what I know is true: you can stop feeling like your own worst enemy and use your brains and skills the way that matters most to you.
Even if I could get a machine to connect with you in my place, why the fuck would I want to?
Jane, I am in alarmed agreement that this is bound to become normalized. The humanness of your writing is profoundly helpful, and I'll be right with you to the end, reminding the next generations that it wasn't always a world full of robots we've personified—even as they roll their eyes and think I'm a crazy old man for holding out to the bitter end as an advocate for purely human communication.
Thanks a bunch, Jane. Yes, I care and appreciate your thoughts on this. Keep trekking! Best from Albuquerque, New Mexico! Dale Harris