The field of artificial intelligence (AI) has made great leaps in recent decades, and the world has begun to pay ever more attention to its capabilities. However, this new interest comes mixed with its own share of mistrust and concern. As evidence, a study by Melissa D. McCradden, Tasmie Sarker, and P. Alison Paprica examined popular perceptions of AI in areas as vital and sensitive as healthcare.
Healthcare in particular has taken giant steps by drawing on the myriad uses of AI to improve its services. Its evolutionary path, however, is only beginning. For this reason, the researchers found it relevant to investigate how the population perceives these changes and how likely it is that their use will come to be accepted as normal in the future.
To do this, they prepared a qualitative research design that worked with six focus groups in two cities in Ontario, Canada: Sudbury and Mississauga. The data, collected during 2019, led to the recent publication of their study and a complementary explainer that McCradden and Paprica presented in The Conversation.
General opinions about AI in our daily lives
To start the interviews in the six groups, the researchers chose to first listen to the attendees' opinions about artificial intelligence in general. This established a baseline against which to compare whether perceptions of AI in healthcare turned out to be more or less favorable than the general view.
Among the responses received, one from a Mississauga focus group participant caught the attention of Paprica and McCradden, so much so that they couldn't help but highlight it:
“You can literally create a Terminator, something that is artificially intelligent, or the Matrix… [then] it goes wrong, it tries to take over the world and humans have to fight it. Or it can go the absolute opposite way, where [it provides] help … androids … implants … Like I said, it’s unlimited both ways.”
This broadly captures the vision of the sampled participants. Despite having no particular knowledge of the area, everyone was able to recognize the benefits that AI could bring to society. However, they were just as quick to point out the negative possibilities that could come hand in hand with the arrival of artificial intelligence in our daily lives.
“It comes across as friendly and helpful, but it’s always watching and listening… So I’m excited about the possibilities, but concerned about the implications and personal privacy,” commented another participant, this time from the Sudbury focus group.
In total, 41 carefully selected participants took part in the focus groups. The sample was almost evenly split, with 21 men and 20 women. Ages, by contrast, were far more dispersed, ranging from 25 to 65 years. None were AI experts or had significant prior knowledge of the area.
Perception of AI in healthcare: a less threatening possibility
Once the initial perception of AI had been collected, the researchers wanted to determine whether it changed when linked to healthcare. For this, they designed not one but three different hypothetical scenarios, each with a different level of AI involvement in health systems.
To the researchers' surprise, concern was far less common in these scenarios. In fact, the majority of participants agreed that the use of artificial intelligence could unleash changes and improvements highly beneficial for medicine and for scientific study.
Overall, respondents considered the use of health data in conjunction with AI to be broadly beneficial. They also recognized that AI has big-data analysis and management capabilities that humans can only dream of. As a consequence, its implementation in medicine could make studies, diagnoses, and treatments much more successful and effective.
Everything has its ‘but’
“We found that members of the public supported the use of health data in three realistic health AI research scenarios, but their approval had conditions and limits,” commented Paprica and McCradden.
Despite initially favorable reactions, focus group participants generally attached ‘buts’ to their acceptance of artificial intelligence. In other words, while their overall perception of AI in healthcare is mostly positive, this does not mean they have no concerns, and those concerns must be addressed before they are fully satisfied with the implementation of the new technology.
Privacy: a decisive factor for the favorable perception of AI in medicine
One of the main points on which all six focus groups agreed was the handling of privacy. In general, participants worried that medical data obtained for one purpose might later end up being used for others, and without the original owners of that information knowing.
Participants also raised concerns about the continuous surveillance that AI could subject us to, something several of them noted is already a problem today.
Finally, the participants highlighted that, in the words of the lead author duo, “it is important to be sure that privacy and transparency about how data is used in health AI research will be protected.”
On the development of AI in medicine and its possible autonomy
On the other hand, another detail that worried participants was the possibility of becoming too dependent on artificial intelligence, as we could end up losing skills by entrusting processes to automation.
Here everyone agreed that their perception of AI in healthcare would be favorable only if it is kept as a tool. In short, they consider that the operation of artificial intelligence and the final conclusions should remain in human hands. In this way, AI becomes a facilitator of work, but does not monopolize or hinder human work and interaction.
Learn and grow
“The focus group participants – none of whom were AI experts – had some important ideas and concrete suggestions on how to make AI research in health more accountable and acceptable to the public,” commented the authors.
Indeed, the individuals selected for this study were not experts in the area. Their perspectives nonetheless proved enriching in the researchers' eyes, since they expressed the concerns and fears of the general public.
In short, thanks to their study, the scientists were able to delineate the areas in which AI applied to healthcare must improve in order to increase its acceptance among the general public.
“By understanding and addressing public concerns, we can establish reliable and socially beneficial ways to use health data in AI research,” conclude Paprica and McCradden.
Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research (BMJ Open): http://dx.doi.org/10.1136/bmjopen-2020-039798