Striking a balance between Autonomy and Safety

First Published: Tue May 11 2021
Last Updated: Mon Jun 20 2022

InteliCare’s 24/7, in-home technology gives you smart insights, direct to your phone, that help detect and prevent falls and health issues before they happen.

That’s the pitch given by InteliCare, a five-year-old company that is hoping to use Artificial Intelligence and in-home IoT devices to create a sort of virtual care facility.

The vision is to use devices that track a person around the house (via a pendant worn by that person) to build up a data set of “normal activity”, which can then be used to catch changes of behaviour down the track. Things like changes in bathroom activity could indicate the onset of a Urinary Tract Infection, while an extended lack of movement where there really shouldn’t be any might indicate that the person has had a fall. This would allow the person’s family or care agency to be more proactive in providing the care that an elderly person or person with disability might require in home, without the need for frequent visits from human staff.
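InteliCare hasn’t published how its model works, but the general idea — learn a baseline of “normal activity”, then flag days that deviate from it — can be sketched in a few lines. Everything here (the daily bathroom-visit counts, the two-standard-deviation threshold) is an illustrative assumption, not the company’s actual method:

```python
from statistics import mean, stdev

def build_baseline(daily_visits):
    """Summarise historical daily visit counts as (mean, std dev)."""
    return mean(daily_visits), stdev(daily_visits)

def is_anomalous(today, baseline, threshold=2.0):
    """Flag a day whose count sits more than `threshold` standard
    deviations away from the learned baseline."""
    mu, sigma = baseline
    return abs(today - mu) > threshold * sigma

# Ten days of hypothetical "normal" bathroom activity.
history = [4, 5, 4, 6, 5, 4, 5, 6, 5, 4]
baseline = build_baseline(history)

print(is_anomalous(5, baseline))   # typical day -> False
print(is_anomalous(11, baseline))  # sudden spike -> True (possible UTI flag)
```

A real system would need far richer features (time of day, duration, movement between rooms), but the shape of the problem — baseline plus deviation alert — is the same.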

Combine this with a suite of tools for agencies to manage their clients, plus plans for ever more complex monitoring devices, and you’ve got a recipe for the Virtual Care Facility of the future!

On the face of it, it sounds pretty good. Let’s face it: we want to keep our ageing and disabled populations in their own homes as much as possible. Developing a system that is unobtrusive and provides health data that can prevent the escalation of issues which, left unchecked, can lead to institutionalisation can only be a good thing.

On the other hand, there are some issues that need to be addressed.

Firstly, there’s the subject of autonomy. If a person agrees to the installation of this service and is comfortable with their movements and activities being tracked, that’s one thing, but what if the system becomes mandatory? InteliCare is already talking about its system as being similar to a house alarm for insurance agencies. Will this or similar systems become requirements for health insurance in the future? Will in-home care agencies require this or systems like it?

What ability does the person being monitored have to turn the system off if they so desire? The elderly and people with disability might not want the system to know when they’re engaging in intimate activities.

Secondly, there’s the subject of what data is being captured and how it’s being used. InteliCare describes itself as “an AI company, coupled with healthcare”. Its services depend on a constant flow of data building up an intimate picture of the activities of the people being monitored. At first this will be via sensors that monitor when a person enters a room (basically, each room has a beacon that senses when the pendant gets to within a certain range). However, InteliCare is already talking about expanding its IoT range to include devices that it describes as “radar like” that can monitor heart rates remotely. This would remove the need for the pendant, as the “people radars” would be able to track from second to second the exact location of everyone in range.
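The pendant-and-beacon scheme described above amounts to simple proximity sensing: each room’s beacon reports how strongly it hears the pendant, and the strongest reading wins. A minimal sketch, assuming hypothetical room names, RSSI values in dBm, and an arbitrary in-range threshold (none of which come from InteliCare):

```python
IN_RANGE_DBM = -70  # assumed cutoff: weaker readings are treated as "not in range"

def locate(readings, threshold=IN_RANGE_DBM):
    """Return the room whose beacon hears the pendant most strongly,
    or None if no beacon has the pendant in range.

    `readings` maps room name -> received signal strength in dBm
    (closer to 0 means a stronger signal).
    """
    in_range = {room: rssi for room, rssi in readings.items()
                if rssi >= threshold}
    if not in_range:
        return None
    return max(in_range, key=in_range.get)

print(locate({"kitchen": -55, "bathroom": -82, "bedroom": -90}))  # kitchen
print(locate({"kitchen": -88, "bathroom": -85}))                  # None
```

Note how coarse this is — it yields room-level presence only, which is exactly why the “people radar” devices, with second-by-second positioning, represent such a step change in the intimacy of the data collected.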

The data generated is currently used to inform the company’s clients (the families, care agencies or possibly insurance agencies) of the health and status of each person under their care. However, what else could that data be used for? Does InteliCare intend to monetise the data further by generating insights that it can sell to third parties? What happens when a person leaves the scheme? Does their data remain, or does InteliCare delete it? Is there provision for the use of this data in academic research?

These are all questions that need answers if we’re going to be placing ourselves into the hands of Machine Learning. No matter how fancy the tech, Informed Consent should always be required for stuff like this.