AI in Healthcare - Part III

Gubing Wang
5 min read · Jul 22, 2020
Photo by h heyerlein on Unsplash

“How can we find the synergy between AI and humans?”

This was one of the key topics at the International Conference on Machine Learning (ICML 2020). While machine learning (ML) is based on the idea that machines should be able to learn and adapt through experience, AI refers to a broader idea in which machines can execute tasks “smartly.” Artificial intelligence applies machine learning and other techniques to solve actual problems.

I am specifically interested in the application of AI in the healthcare sector, so I focused on the keynotes, panels, workshops and socials related to healthcare during ICML 2020, and I have summarised my learnings and reflections (through the lens of design) below.

This article is about some applications of AI in assistive autonomy and dementia care.

Brenna Argall works with patients with spinal cord injuries; depending on the severity of the injury, patients need different levels of assistance with moving around, eating, and carrying out other daily activities. Brenna’s lab has been developing assistive robots (assistive machine + sensing + computing + AI) to help these patients regain more independence. Throughout, they have been investigating how patients can collaborate smoothly with AI, to ensure the patients’ safety and quality of life.

Source: presentation slides of Brenna Argall in ICML 2020

Brenna pinpointed four research directions in developing ML for assistive autonomy:

  1. Feedback signal for learning: should it be a reward? A supervised label? A correction? A demonstration? Explicit or implicit?
  2. Masked and filtered information: can the autonomy trust the human?
  3. Temporal considerations: as a smart system used over a long period, how might the autonomy adjust as time goes on?
  4. Human-autonomy co-adaptation: how might the human and the autonomy adapt to each other smoothly?

I find that these four directions all investigate the interactions between AI and humans, each from a different angle.

As technology develops, the feedback generated by a patient could become more diverse. For instance, the patient might be able to speak to his/her smart wheelchair: how might the AI make use of this qualitative data? And how might the AI distinguish between “stop” and “stop!!!”? Classifying emotion from tone of voice is nothing new; for example, a neural network named SoundNet can classify anger from audio data within 1.2 seconds. Yet, as far as I am aware, publications on voice-controlled wheelchairs have not touched upon emotion classification.
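To make the “stop” vs. “stop!!!” distinction concrete, here is a toy sketch of my own (not SoundNet’s architecture, which is a deep network): a shouted command carries far more acoustic energy than a calm one, so even a crude root-mean-square energy score can separate the two. The threshold value is an illustrative assumption.

```python
import numpy as np

def urgency_score(waveform: np.ndarray, frame_len: int = 512) -> float:
    """Toy urgency score: mean RMS energy across audio frames.

    A shouted "stop!!!" carries far more energy than a calm "stop",
    so thresholding this score is a crude stand-in for a real
    emotion classifier.
    """
    n_frames = len(waveform) // frame_len
    frames = waveform[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))  # energy per frame
    return float(rms.mean())

def classify_command(waveform: np.ndarray, threshold: float = 0.3) -> str:
    return "urgent stop" if urgency_score(waveform) > threshold else "normal stop"

# Synthetic stand-ins for microphone input: a quiet tone and a loud
# tone at the same pitch (1 second at a 16 kHz sample rate).
t = np.linspace(0, 1, 16000)
calm = 0.1 * np.sin(2 * np.pi * 220 * t)
shout = 0.8 * np.sin(2 * np.pi * 220 * t)
```

A real system would of course use spectral features and a learned model rather than a fixed energy threshold, but the design question stays the same: what should the wheelchair do differently once urgency is detected?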

“Can the AI trust the human?” has also been discussed a lot in autonomous driving. How might the AI tell the user that his/her command is not appropriate without undermining the user’s confidence, especially when the users are patients who might suffer from a progressive disease?

Over time, the physiology of the patient changes, and so does his/her familiarity with the smart wheelchair. Possibly, the cognitive ability of the patient also changes, which might affect the patient’s rate of adaptation. The wheelchair should ideally be dynamic, adapting to these changes seamlessly. Brenna also highlighted the importance of a user-friendly interface: for both joystick and head-array users, a well-designed interface reduced response time significantly.

For co-adaptation, as the patient learns how to use the smart wheelchair (or robotic autonomy in general) over time, the wheelchair will gradually hand control back to the patient (as shown in the figure below). In the following panel discussion, Brenna shared that all the patients she has worked with would like to have control over their wheelchair; this might change as the technology develops, e.g., as people become more familiar with autonomous driving.

Source: presentation slides of Brenna Argall in ICML 2020

The projects in Brenna’s lab share a common theme: customization. The assistive robots can be customized to each individual’s capabilities and preferences, and as these aspects change over time, the assistive robots can adapt promptly to keep fitting the individual.

This flexible customization is enabled by sensing technologies and AI: the sensors continuously gather data for the AI to process, so the system keeps learning about the user. Accommodating differences between individuals has also gained attention in design for dementia care. In addition, since dementia is progressive, the capabilities and preferences of people living with dementia will change over time. Therefore, AI has been increasingly incorporated into assistive technologies designed for dementia care. Here are some examples:

  1. cognitive games which are customized to each individual based on their health status, habits, medications, etc., and can adapt over time;
  2. ambient assistive technology: for example, the AI learns the daily routine of a person living with dementia from his/her location data and alerts the caregiver if a deviation from the routine is detected;
  3. cognitive assistive technology (also referred to as zero-effort technology): for example, giving stepwise instructions to the person living with dementia about how to wash hands at the appropriate moments;
  4. prognosis, to help the person living with dementia and their family members prepare for what will happen next.
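The ambient-assistive-technology example above can be sketched very simply. The version below is my own minimal illustration, assuming location logs of the form (hour, room): it learns the most frequent room for each hour of the day and flags a deviation when the observed room does not match. Real systems would use richer models (and handle sparse or ambiguous hours), but the core loop is the same.

```python
from collections import Counter, defaultdict

def learn_routine(logs):
    """logs: iterable of (hour, room) observations.

    Returns the most frequently observed room for each hour,
    i.e. a simple model of the person's daily routine.
    """
    by_hour = defaultdict(Counter)
    for hour, room in logs:
        by_hour[hour][room] += 1
    return {h: counts.most_common(1)[0][0] for h, counts in by_hour.items()}

def check_deviation(routine, hour, room):
    """True if the observed room differs from the learned routine.

    Hours with no learned routine never raise an alert.
    """
    expected = routine.get(hour)
    return expected is not None and room != expected
```

A caregiver-facing system would then decide how (and how urgently) to deliver the alert, which, as the next paragraph notes, is itself a delicate design question.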

It is worth mentioning that some caregivers said they would feel undervalued if a system told them directly what to do; instead, they would like to be involved in the decision-making process [source: prototype evaluation sessions in my PhD].

As Brenna pointed out, in the future, patients with spinal cord injuries might become used to delegating full control to their smart wheelchairs. What would that imply for dementia care?

I find that delegating some of the desktop work caregivers currently have to do to AI could free them up for more person-to-person contact with people living with dementia, and this might be the new domain in which caregivers find their value.

Indeed, caregivers will still have a role to play in care planning before general AI becomes a reality, and people living with dementia (if they are willing) could also be empowered by getting involved in care planning.

If we zoom out from dementia care to healthcare as a whole: as the false positives and false negatives of AI decrease with technological advancement, would the working culture in healthcare change accordingly? Will healthcare workers feel comfortable carrying out instructions given by an AI? What new tasks will healthcare workers find their value in?
