Recently, Blake Lemoine, a Google AI engineer, caught the attention of the tech world by claiming that an AI computer program is sentient. The claim, concerning an AI named LaMDA (short for Language Model for Dialogue Applications), called into question the ethics and understanding behind the development of a computer that can feel, think and express emotion, and what it actually means to be fully sentient.
The ongoing development of, and focus on, engineered machine sentience takes us back to the 1950s and the Turing test, in which a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human was tested and examined. We've come on in leaps and bounds since then, and whilst it's extremely exciting to see the ongoing development of technology such as AI and the efficiencies it can bring, there's an element of ethics that cannot be ignored, especially when it comes to healthcare.
When thinking about sentient AI, should people be able to understand and recognise when they are talking to a computer versus when they are interacting with a human being? Patients may behave differently depending on the type of interaction they are having, and being able to distinguish who you're talking to is important. Ensuring an element of transparency and openness will therefore remain vital.
Likewise, trust is an extremely important part of implementing such technology on a broad scale, especially when it comes to something as personal and confidential as healthcare. Society trusts the training and expertise of its doctors and GPs and understands the provenance of the advice given, with many basing treatment and life-saving diagnoses on 40+ years of experience. Will patients trust a computer or algorithm to have this same nuance and human connection? It's fair to say there's an element of nuance in healthcare that can't be made explicit or coded up, no matter how much certification the software may have.
It's important to note, however, that similar hesitations were expressed when online banking and social media were first going mainstream, and now people accept them into their lives willingly. As such, there is likely room to manoeuvre and progress when it comes to tech; many patients are probably open to some extra technology within the healthcare domain, particularly if it can be proven to reduce waiting times and backlogs caused by the pandemic.
Healthcare is inherently human and personal, and in person we are able to probe and critique the advice we are given and ask questions about the rationale behind the options we are presented with. Systems like Babylon and chatbots have been a great addition to the healthcare ecosystem, helping to triage effectively and provide support by picking up on certain triggers and key words in a conversation, but they are missing the emotional and human touchpoints that so many rely on. When you consider mental health on top of this, with all the nuances it brings in terms of sensitive discussions and all-important rapport with highly trained counsellors, it gets trickier, and you can see why a face-to-face meeting would be needed over anything virtual or machine-led in this scenario.
The solution is not simple, and it's unlikely we'll ever find a one-size-fits-all answer. Higher-stakes conversations, where difficult news may need to be delivered or symptoms checked manually and in person, mean a flexible, hybrid approach is needed, as has become the way of the world post-Covid. Whilst there's no denying that AI and technology can bring an incredible amount of efficiency, speed and innovation to the healthcare industry, they must be monitored and balanced, with flexibility at the forefront to ensure patient needs are met, whether in person, virtually, or a mix of both.
By Dr Chris Vincent, Principal – Human Factors and Ergonomics (HFE) Sector Lead Healthcare at PDD