We invite contributions for a thematic issue focused on artificial intelligence (AI) and safety and quality in health care.
We are interested in papers on the conceptual, computational, empirical, ethical and legal dimensions of AI as they relate to patient safety and quality in health care. We welcome contributions from all disciplines and encourage all modalities of research, including but not limited to systematic or scoping reviews, epidemiological surveys, randomized controlled trials, observational studies, mixed-methods studies and case series.
For empirical papers, especially those using AI methods such as machine learning or deep learning, we ask authors to include a statement reflecting on the potential biases in the data they present. Perspectives articles are also welcome.
While we are interested in papers speaking to all dimensions of AI and health-care safety and quality, we are particularly interested in contributions that focus on the following four domains:
- AI in health care and the risks related to patient privacy, safety and quality: The rate of innovation in AI technologies outpaces the evolution of standards of care and training. This dynamic has implications for technology adoption decisions, integration within clinical workflows, clinician and patient acceptance, perceptions of quality and equity in the consult experience (e.g. when an AI chatbot is substituted for a human clinician) and the legal liability exposure of clinicians or caregivers.
- Ethical issues related to the use of AI-enhanced decision support systems: Because AI algorithms are usually trained and validated on existing data, the biases built into those data, from the way they are collected to the way they are shaped by human decisions, are likely to be amplified when models are derived from them. For example, most large clinical trials do not oversample under-represented groups, because they are harder to reach than majority groups. When such data are used to discover new therapeutic targets, the results may inadvertently miss minority populations with specific genetic variations that mute the efficacy of the drugs being developed.
- The appropriate use of AI in the clinical decision-making process: AI can bring clarity to the risk–benefit assessment of treatment options during staging and prognostication of illness, given its ability to predict an individual’s disease course. However, where the emotional elements of a decision, such as transfer to palliative care, are significant, interpersonal factors such as the patient–doctor relationship may weigh more heavily than a rational AI recommendation. For example, when AI solutions are hardwired into the electronic medical record system, the usual point of integration for hospital decision support systems, junior clinicians may hesitate to ignore recommendations that carry the imprimatur of evidence. Hence, the interactions between the socioemotional aspects of decision-making and data-driven AI solutions deserve more exploration.
- The impact of AI-enhanced data from consumer wearables on health-seeking behaviors: The proliferation and consumer acceptance of connected wearables that produce vital-sign data could improve daily health behaviors, such as ambulation, that translate into higher quality of life for individuals. However, data from such devices are of varying quality. The Apple Watch, for example, has received validation for its electrocardiogram AI algorithm in the USA, China and 28 other countries (https://9to5mac.com/2021/07/19/here-are-all-the-countries-that-now-support-ecg-app-on-apple-watch-with-watchos-7-6/ (accessed 16 September 2022)). Most devices, however, are not similarly certified, nor have many jurisdictions worldwide validated the clinical quality of such data for themselves. Thus, still to be explored are device-enabled patient health-seeking behaviors around the world, the quality of health care in primary care settings where such devices are prevalent and the extension of hospital care to the home, based on the use of such devices for patient monitoring.
The deadline for submissions to the special issue is 1 May 2023.
Please contact Prof. Phillip Phan (pphan@jhu.edu) or Dr. Cybele Lara Abad (crabad@up.edu.ph) to discuss potential paper ideas.