Plastic Touch

Artur Olesch
Apr 3, 2020 · 8 min read

Can robots supplement the ‘human touch’ of real doctors? Are we able to forgive machines that make medical mistakes? Can a chatbot offer an empathetic, compassionate level of care? An interview with Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, affiliated with the Department of Computer Science at the University of Bath.

In healthcare, there is a widespread fear of Artificial Intelligence replacing doctors. Do you think it will one day be possible to build an AI-based system that makes decisions as well as, or even better than, doctors do?

There are two really different parts to that question. We already know that we can use AI to support better decision-making in very specific areas. So, like, you can have a better memory, or you can combine information more coherently. There is absolutely no question that, for specific decisions, you can build specific systems that might be able to find the right information and suggest appropriate steps. But there is more to being a doctor than making decisions. In fact, part of it is just being accountable for the combination of information that you use when you are diagnosing a patient. So, for example, Geoffrey Hinton famously said that we didn’t need any more radiologists because deep learning is better than humans at spotting things in X-rays. A few months ago I was invited to the annual meeting of Norwegian radiologists, and apparently what’s really happening is that more radiologists are being hired, because every radiologist is more valuable. They provide more value now because the AI makes them better at their job. The mistake is thinking that because AI is good at helping you with the decision, AI is making the decision.

There’s a lot more to healthcare than only knowing the right answers. There’s also convincing people to do things, and there’s being, as I said, accountable to the insurance industry and to the medical profession, and ultimately only humans can be accountable. Now, you could imagine a company that is accountable for the machines it made that were making the diagnosis. Say, because of the virus, everybody decides to stay home and just send their symptoms over the internet. You can imagine that kind of diagnosis, but you can’t imagine an entire healthcare system like that. I mean, there is just a considerable amount of physical interaction with doctors, both in terms of the tests that are taken and in terms of therapy.

Healthcare is about human touch and empathy. Can we basically teach AI such abilities? Can artificial emotional intelligence supplement human emotional intelligence?

You are never going to have empathy in the machine based on the machine’s experience. We have trouble generating empathy from humans even towards other humans. And machines are far different, so there is no commonality in experience, or only a very limited one. Having said that, what we do see in terms of artificial empathy is using machines to store the experience of other humans and then bring that experience to a problem. For example, if you’re buying a book or a movie, a system finds other people that have the same preferences as you do and then suggests that, since they enjoyed this book, you will too. You can call that empathy, but it’s interesting because it’s really matching up two different models of people. Actually, as an academic, I routinely do that matching between models of students, and I’m sure doctors do that too. They could think: “Okay, I wouldn’t feel that way, but I have another patient who felt this way. Do you feel like that?” So that’s the kind of thing we could potentially be using AI for.

We could also be helping improve how doctors do their job by giving them access to those kinds of attitudes through the machines.

A lot of organizations, when they get AI, already realize that their employees are a huge asset. And so, what they want to do is grow, keeping the employees they already know are good, rather than just figuring out ways to get rid of them.
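(Editor’s note: to make the “matching up models of people” idea mentioned above a bit more concrete, here is a minimal, hypothetical sketch of user-based collaborative filtering, the kind of technique behind “people with tastes like yours also enjoyed this book.” The data and function names are invented for illustration and are not from the interview.)

```python
# Illustrative sketch of "matching models of people": user-based collaborative
# filtering. All names and ratings below are made up for demonstration.

import numpy as np

# Rows are users, columns are items (e.g. books); 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 0],
    [1, 0, 5, 4],
])

def cosine_similarity(a, b):
    """Similarity between two users' rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, ratings):
    """Suggest the unrated item whose ratings by similar users are highest."""
    similarities = np.array([
        cosine_similarity(ratings[user_idx], other) for other in ratings
    ])
    similarities[user_idx] = 0.0  # ignore the user's similarity to themselves
    # Weight every item's ratings by how similar each rater is to our user.
    scores = similarities @ ratings
    scores[ratings[user_idx] > 0] = -np.inf  # only recommend unseen items
    return int(np.argmax(scores))

print(recommend(0, ratings))  # index of an item the first user may enjoy
```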

What do you think about so-called ‘synthetic emotions’? Is a robot with a built-in ‘compassion’ system a compassionate robot or just a programmed machine?

It’s important to understand that every single thing that’s AI is an artifact that someone built. You can say: “Oh, but it learned by itself.” No, it doesn’t learn by itself. Somebody designed it to learn. And, generally speaking, with any software, any system that exists, they not only create it to learn but train it, they adjust the data, they set the parameters, they’ve tried it a million times, they’ve finally found something that kind of works, and they’ve released it. So, it’s a very, very human process, developing something that is definitely programmed.

When that thing that’s developed does show compassion, we have to go back to the question of responsibility. I would say the compassion is the compassion of the programmer, or of the healthcare practitioner who is using it, or of the organization that developed the AI and its compassion, or lack of it, towards its customers. But that compassion can be expressed through a machine. I don’t think it makes sense to talk about the machine itself as a compassionate entity, but it is the thing that represents the compassionate act.

I usually talk about this in terms of morality, but compassion is the same. For example, choosing a moral action doesn’t necessarily make you a moral agent. Maybe that moral action was part of your job; you were told what to do. If you program the robot to do something compassionate or moral, you could say that the robot is the moral agent. You can make it legally responsible, but my argument is that if you decide to assign responsibility that way, you are going to wind up with a very incoherent system of justice that people are going to exploit, because robots don’t call themselves into existence. It doesn’t make sense for people to think: “Oh, this robot really cares about me” when they should be thinking: “Oh, my mum really cared about me when she bought the robot,” or “This is a pretty good healthcare plan,” or “It’s a bad healthcare plan.”

Some patients complain that doctors don’t understand their needs. Is it ethically right when a patient trusts the machine more than a doctor because it can recognize their needs more precisely using Big Data analysis?

If somebody comes and says: “I’ve looked this up on Google, and it seems to me that I’ve got X,” doctors will take that seriously and will listen and talk to their patient. But they actually know better because they have had a medical education. We do have evidence that people sometimes take directions from machines, ignoring expert humans because they think machines are infallible.

If you think humans are fallible, then you should realize that the machines they build are also fallible. But people just have this myth of perfect machines. I think it’s because we’re used to thinking of computation as math, which is logically coherent and accurate. But computation is not just math.

There may be people who would have better outcomes with a machine just because they don’t get as stressed around the machine. An example is a story about Syrian refugees who preferred to have AI therapists because they felt guilty telling another human the things that they had seen; their tragedies were so terrible. They knew they needed help to get through their problems, but they didn’t want to talk to humans because they felt they shouldn’t traumatize another human. So that’s another compelling reason not to want to speak to a human doctor.

Already today, social robots are used in therapies for people with Alzheimer’s disease or children with autism. Some patients are convinced that, for example, a robotic baby seal is a real animal. Are we cheating patients in this way, or do the ends justify the means, in this case, patients who feel better and calmer?

Is it wrong to tell children about Santa Claus? About God? Different people feel very strongly about when we should and shouldn’t tell children about different things. And the other end of life is really very different: children will one day go out into the world and come to their own conclusions.

I don’t see a reason not to give morphine to people who are dying of cancer, and I don’t see a reason not to tell somebody in that terminal situation whatever makes them as happy as they can be. But this is not up to me. This is something that people have to decide individually. I had a friend who died about a week after a plane crash. To my knowledge, he was never told that his brother had been killed in the same crash, because he was in intensive care.

We expect machines to be perfect. So, will we be able to forgive AI-driven robots that make mistakes and harm patients?

I don’t think forgiveness is the right term. Again, it’s not about the machine, the robot. It’s about who is actually at fault. The fault is there in the healthcare system. The point is that you need to look through the system. You need to look, transparently, at who created the system, who is at fault, what the cause was, and whether or not it was a justifiable risk. Don’t think that because robots have come along, everything will be reinvented. It’s similar to driverless cars. About two million people per year are killed by vehicles. We can bring that down to two hundred thousand. But with autonomous cars, it will be a different two hundred thousand people. So how are we going to handle this? If you talk to lawyers, it turns out it isn’t a new problem; the same thing happens when you introduce a new traffic sign. So, I think that sometimes the AI people have to get over themselves and realize that there is a whole system of government out there handling these kinds of problems.

What does worry you most when it comes to Artificial Intelligence in healthcare?

In general, I’m mostly worried about nudging, the division of responsibility, and privacy. We have to make sure that there is a clear division of responsibility, that the robots themselves aren’t blamed or credited when, in fact, it is the practitioners who choose to use them and the corporations that develop them who are responsible.

Another thing that worries me is democracy.

How will AI change us as people and as patients?

What digital technology, in general, does is change our societies. It changes what we can and cannot do, and it changes how much we know. But fundamentally, as people, our needs and desires don’t really change that much.

Should we be pessimistic or somewhat optimistic when it comes to the digital revolution?

This is not about the technology itself. It’s about how the technology is deployed. We have to be more aware of privacy and of how money is being invested: is it benefiting everyone? I would expect the digital revolution to improve outcomes. Still, that has very little to do with the digital technologies themselves and much more to do with healthcare policy, with how healthcare is provided, and with making sure that all citizens are appropriately taken care of.

Thank you for your time.

Originally published at https://aboutdigitalhealth.com/2020/03/25/plastic-touch. Photo: Urs Jaudas (edited)
