open access publication

Article, 2024

A scientometric analysis of fairness in health AI literature

PLOS Global Public Health, ISSN 2767-3375, Volume 4, Issue 1, DOI 10.1371/journal.pgph.0002513

Contributors

Alberto I.R.I. [1] Alberto N.R.I. [1] Altinel Y. [2] Blacker S. [3] Binotti W.W. [4] Celi L.A. [5] [6] Chua T. [7] Fiske A. [8] Griffin M. [6] Karaca G. [9] Mokolo N. [10] Naawu D.K.N. [10] Patscheider J. (Corresponding author) Petushkov A. [11] Quion J.M. (Corresponding author) [12] Senteio C. [13] Taisbak S. Tirnova I. [14] Tokashiki H. Velasquez A. [15] Yaghy A. [4] Yap K. [16]

Affiliations

  [1] College of Medicine [NORA names: Philippines; Asia, South]
  [2] University of Health Sciences [NORA names: Turkey; Asia, Middle East; OECD]
  [3] York University [NORA names: Canada; America, North; OECD]
  [4] Center for Ophthalmic Drug Delivery [NORA names: United States; America, North; OECD]
  [5] Harvard School of Public Health [NORA names: United States; America, North; OECD]

Abstract

Artificial intelligence (AI) and machine learning are central components of today’s medical environment. The fairness of AI, i.e., its freedom from bias, has repeatedly come into question. This study investigates the diversity of the members of academia whose scholarship poses questions about the fairness of AI. Articles combining the topics of fairness, artificial intelligence, and medicine were selected from PubMed, Google Scholar, and Embase using keywords. Eligibility screening and data extraction were done manually and cross-checked by a second author for accuracy. Selected articles were cleaned and organized in Microsoft Excel; spatial diagrams were generated using Tableau Public, and additional graphs were generated using Matplotlib and Seaborn. Linear and logistic regressions were conducted in Python to measure the relationships among funding status, number of citations, and the gender demographics of the authorship team. We identified 375 eligible publications, including research and review articles on AI and fairness in healthcare. Analysis of the bibliographic data revealed an overrepresentation of authors who are white, male, and based in high-income countries, especially in the first- and last-author roles. Papers whose authors are based in higher-income countries also tended to be cited more often and to be published in higher-impact journals. These findings highlight the lack of diversity among the authors in the AI fairness community whose work gains the largest readership, potentially compromising the very impartiality that the community is working towards.
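The abstract states that linear and logistic regressions were run in Python over bibliographic variables (funding status, citation counts, authorship demographics). The sketch below is not the authors' actual code; it only illustrates, on synthetic data, how such regressions might be set up. The feature names (`high_income`, `prop_women`) and the outcome generation are assumptions introduced here for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 375  # number of eligible publications reported in the abstract

# Hypothetical predictors: country income group (1 = higher income) and
# proportion of women on the authorship team. Both are synthetic.
high_income = rng.integers(0, 2, size=n)
prop_women = rng.uniform(0.0, 1.0, size=n)
X = np.column_stack([high_income, prop_women])

# Synthetic outcomes, for illustration only.
citations = 10 + 5 * high_income + rng.poisson(3, size=n)          # count outcome
funded = (rng.uniform(size=n) < 0.3 + 0.2 * high_income).astype(int)  # binary outcome

# Linear regression: citation count vs. authorship-team characteristics.
lin = LinearRegression().fit(X, citations)

# Logistic regression: funding status vs. the same characteristics.
log = LogisticRegression().fit(X, funded)

print("linear coefficients:", lin.coef_)
print("logistic coefficients:", log.coef_)
```

In a real scientometric analysis the predictors would come from the extracted bibliographic records rather than a random generator, and a statistics-oriented library such as statsmodels could be used instead when p-values and confidence intervals are needed.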

Data Provider: Elsevier