Article Type: Perspective, Volume 2, Issue 1

Perspectives on the potential of generative AI to alleviate human idiosyncratic editorial decision-making: If it is broken, fix it now

Malik Sallam1,2*

1Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan.
2Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, Jordan.

*Corresponding author: Malik Sallam
Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan.

Email: malik.sallam@ju.edu.jo
Received: May 28, 2025
Accepted: Jun 13, 2025
Published Online: Jun 20, 2025
Journal: Journal of Artificial Intelligence & Robotics

Copyright: © Sallam M (2025). This Article is distributed under the terms of Creative Commons Attribution 4.0 International License.

Citation: Sallam M. Perspectives on the potential of generative AI to alleviate human idiosyncratic editorial decision-making: If it is broken, fix it now. J Artif Intell Robot. 2025; 2(1): 1021.

Abstract

The scientific publishing enterprise claims to be a bastion of objectivity and integrity. In theory, it is governed by meritocracy, expertise, and impartial editorial oversight and peer review. In practice, however, it is sustained by a system that has always relied on subjective human judgment, which is often compromised by ego, emotion, institutional bias, and cognitive inconsistency. This Perspective article does not mourn a golden age lost; rather, it confronts the truth that editorial decisions have always been vulnerable to the flaws of the human mind. The manuscript argues that the recent emergence of Generative Artificial Intelligence (genAI) presents a historically necessary correction and a chance to realign publishing with its own ideals. Not because genAI is perfect, but because the human editorial class has never been. Given the inevitability of biased human judgment, the introduction of machine referees—cold, consistent, and blind to prestige—may be the fairest innovation of all in scientific publishing.

Keywords: Scientific publishing; Human-AI collaboration; AI tools; ChatGPT; DeepSeek.

Introduction

Ideally, science is the most rational of human pursuits. Yet, the system that governs how it is evaluated—the editorial and peer review processes—remains stubbornly entangled in the very irrationalities it claims to transcend [5,20,34,39]. From its inception, the peer review system has rested on the fragile hope that human editors and reviewers would consistently act as stewards of merit; nevertheless, this hope has proven naïve to say the least [25,41].

Scientific journal editors are often not elected, rarely audited, and seldom held to clearly defined standards of fairness [57]. Additionally, peer reviewers are mostly selected not for neutrality but for availability [58]. In light of this, bias may not always be an anomaly within the scientific publishing system, but at times a feature subtly embedded within it [29]. Prestige often facilitates publication, and publication, in turn, reinforces prestige—creating a self-perpetuating cycle that can unintentionally marginalize less established voices [35,37,50]. For every manuscript that earns recognition on merit, there may be others—equally worthy—that are quietly sidelined by opacity, unconscious favoritism, or the variable attentiveness of editors and reviewers. This concern is not speculative; it is supported by empirical evidence. A recent study by [29] demonstrated a statistically significant bias favoring manuscripts from prestigious institutions, as reflected in higher acceptance rates and shorter review times. Similarly, an earlier study by [56] highlighted how editorial affiliation bias may contribute to—and potentially exacerbate—existing inequalities within academia.

The annals of scientific history are replete with instances where groundbreaking research faced initial rejection by human editors, only to later achieve monumental recognition [6,7]. Such cases expose a deeper, systemic vulnerability: while the editorial and peer review processes were designed to safeguard scientific rigor, they remain inherently susceptible to the limitations of human judgment [13]. Editorial and peer review decisions—shaped by subjective factors such as personal biases, disciplinary preferences, and institutional hierarchies—have too often delayed or obstructed the dissemination of transformative work [6,18,31,59]. These were not isolated misjudgments, but recurring patterns that point to enduring structural flaws in the current scientific publishing system [52,54]. A reexamination of the peer review model, including its susceptibility to bias, is therefore not only timely but necessary [44].

The recent rise of generative artificial intelligence (genAI) marks not merely a technical innovation, but a challenge to long-held assumptions about human judgment [19,30]. Once regarded as little more than imitation, genAI tools like ChatGPT or DeepSeek now demonstrate an ability to write, summarize, and evaluate text with cognitive capabilities that were once believed to be uniquely human [17,47,48]. Among their most unsettling capabilities is their encroachment into domains traditionally guarded by prestige, including editorial evaluation of scholarly work and peer review [8,14,32].

To propose that a machine might one day judge scientific work is not to indulge in futurism—it is to confront a sobering truth: that the human-led model of editorial and peer review, for all its noble intentions, has too often failed to live up to its own ideals [1,23]. To challenge this system is not to romanticize genAI, nor to ignore its frequently cited limitations [45,46]. It is to acknowledge, with intellectual honesty, that inconsistency, bias, and opacity are not occasional flaws but enduring features of the current scientific evaluation process. Therefore, the scientific community is compelled to ask the once-unthinkable: Could genAI—precisely because it lacks ego, fatigue, and favoritism—offer a more impartial standard in editorial and peer review processes? Might the very absence of human qualities make it, in some contexts, more just? This Perspective explores a fundamental question: could Large Language Models (LLMs), including genAI tools, act as more objective peer reviewers and editorial decision-makers than the humans they were designed to assist?

The irrational human: Anatomy of editorial bias

To entrust the fate of scientific inquiry to human editors is, in theory, an appeal to expert judgment. In practice, it is an invitation to inconsistency. The assumption that editors and peer reviewers operate under the banner of pure reason ignores an inconvenient truth: human cognition is innately biased [60]. Unlike machines, human beings are not neutral processors of information—they are emotional, fallible, and often blind to their own blind spots [11]. The current editorial evaluation and peer review practices, despite their best intentions, are saturated with well-documented biases that systematically distort editorial judgment [3,24,55]. These distortions are not rare aberrations—they are the rule masquerading as rigor [22]. Figure 1 outlines the principal biases that continue to shape—and at times distort—editorial and peer review decision-making.

Figure 1: A summary of common human factors influencing editorial and peer review decisions.

Foremost among human-related biases is confirmation bias, the tendency to favor information that supports pre-existing beliefs and to dismiss contradictory evidence [40]. Academic editors and peer reviewers, consciously or not, are drawn to papers that align with their worldview, methodological preferences, or theoretical frameworks [31]. Novel or paradigm-challenging work often suffers—not for its quality, but for its audacity to question orthodoxy [2]. Closely related is the status quo bias, the aversion to novelty simply because it deviates from familiar territory [49]. This conservatism, often concealed behind the rhetoric of rigor, serves not to elevate science, but to entrench it. A personal example may suffice to illustrate the point: following the submission of a manuscript exploring the early potential use of ChatGPT in promoting vaccination, I received a formal editorial reply from a Scopus- and Web of Science (WoS)–indexed journal stating, “After confirming with the office, we think the software mentioned in this article is really too novel. It was released less than five months ago. We have reviewed articles from WOS and various websites, it is few to talk about the effectiveness of the software. The office felt that there was some risk, so it was not able to continue processing. We hope you can understand” [sic]. The decision to reject a manuscript solely due to the novelty of its subject—as though recency were a weakness—should not be seen as an isolated misjudgment. It is a symptom of an evaluative culture that has conflated caution with integrity, and orthodoxy with quality. In such a landscape, innovation does not fail the test of science; it fails the temperament of the gatekeeper.

Institutional bias—most visibly expressed through the halo effect—remains a pervasive and often unexamined distortion in editorial judgment. Manuscripts originating from prestigious universities or bearing the names of well-established authors are frequently judged more favorably than identical submissions from lesser-known institutions, a pattern demonstrated by [27]. The opposite is equally true: authors affiliated with institutions outside the recognized academic elite often face steeper hurdles to publication—not for deficiencies in their science, but for the perceived modesty of their affiliation [25]. This bias is further compounded by name recognition, which subtly privileges submissions from previously published or frequently cited authors, thereby entrenching existing visibility gaps and systematically excluding early-career researchers from equal consideration.

Another persistent distortion arises from the exhaustion of academic editors and peer reviewers [42]. Faced with mounting submission volumes and limited time, many resort to triage by impression—skimming rather than scrutinizing. In such conditions, superficial markers such as writing style, formatting, and linguistic fluency disproportionately influence editorial decisions. This tendency gives rise to linguistic bias, wherein non-native English speakers are subtly penalized, not for the scientific merit of their work, but for the manner in which it is expressed [43]. The result is a systematic disadvantage that privileges fluency over substance. Compounding this is scope bias, a more evasive form of editorial exclusion. The familiar phrase “not within scope” is often less a reflection of actual misalignment than a euphemism for discomfort with unfamiliar, interdisciplinary, or emerging subject matter—particularly when it falls outside an editor’s immediate disciplinary or ideological domain. These biases, born not of malice but of cognitive economy, nonetheless distort the fairness and inclusivity of the scientific record.

Taken together, these human biases construct a landscape in which fairness becomes the exception rather than the norm. The most perilous assumption in scientific publishing is not that errors occur, but that human judgment is inherently reliable. In such a system, the most rational act of dissent is to seek an arbiter that neither feels, forgets, nor favors—one governed not by intuition, but by impartial design.

The machine we fear may be the fairer judge

Throughout every episode of scientific revolution, societies have feared that machines might encroach not only upon human labor but upon human judgment [53]. That fear—so often dramatized in fiction and resisted in academia—is not without reason [10,62]. Yet in the context of scientific publishing, it is not the prospect of machines supplanting human editors that should alarm us; rather, it is the sobering recognition that human judgment, long held as the gold standard, may never have been as rational, objective, or fair as we assumed. Unlike its human counterpart, a genAI model does not know who you are. It does not care whether your institution is prestigious or peripheral, whether your name appears frequently in citation indexes or not at all. It does not harbor unconscious biases about geography, gender, or affiliation. It cannot be flattered. It cannot be provoked. It evaluates structure, not status; coherence, not charisma.

The strength of genAI lies not in creativity or moral sensitivity, but in its very dispassion. It is, paradoxically, its emotional indifference that allows it to serve justice where human judgment often falters [21]. Unlike human editors, who may be overworked, distracted, or anchored by the opinions of the first reviewer, a well-trained genAI tool would apply editorial policies with consistency and precision. If a journal requires structured abstracts, ethical declarations, or adherence to reporting guidelines, the genAI model will enforce these requirements with unerring uniformity. It does not forget, improvise, or selectively apply rules. It does not guess what is “within scope” based on intuition or disciplinary familiarity—it calculates alignment using text analysis and semantic modeling [9,33].
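To make the contrast with intuition concrete, a scope check of this kind can be reduced to a transparent similarity computation. The sketch below is a deliberately minimal illustration: it uses a simple bag-of-words vector and cosine similarity as a stand-in for the learned embeddings and semantic models that production systems would actually use, and the `0.15` threshold is an arbitrary illustrative value, not a recommended setting.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Build a bag-of-words vector (a crude stand-in for learned embeddings)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def within_scope(abstract: str, aims_and_scope: str, threshold: float = 0.15) -> bool:
    """Flag a submission as in scope when its similarity to the journal's
    stated aims exceeds a fixed threshold. Unlike an editor's intuition,
    the threshold is explicit, uniform across submissions, and auditable."""
    return cosine_similarity(vectorize(abstract), vectorize(aims_and_scope)) >= threshold
```

The point of the sketch is not the particular similarity measure, but that the decision rule is explicit: every submission is scored against the same published criterion, and a rejected author can be shown the number that produced the decision.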

That said, no fair assessment would claim that genAI is immune to bias [63]. On the contrary, genAI is shaped by the data on which it is trained and by the assumptions embedded in its design. If those inputs reflect existing inequities—such as the overrepresentation of English-language, Western-centric, or male-dominated citation networks—then the outputs, too, may mirror those distortions. A genAI model trained on such patterns might undervalue innovation from low-income countries or overlook emerging disciplines. However, the distinction lies in accountability. GenAI bias, though real, can be diagnosed, measured, and corrected [28,61]. Unlike human editors, whose biases are often implicit, unacknowledged, and beyond audit, genAI tools can be governed through transparent metrics, feedback loops, and continual retraining. Bias in machines is a technical challenge. Bias in humans is a cultural norm—rarely admitted, much less addressed.
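The claim that machine bias can be "diagnosed, measured, and corrected" can itself be illustrated with a minimal audit, assuming a hypothetical log of decisions tagged by author group. Real fairness audits use richer statistics, but even this toy metric shows what is impossible with implicit human bias: a number that can be tracked over time and trigger retraining.

```python
from collections import defaultdict

def acceptance_rates(decisions):
    """Compute per-group acceptance rates from a decision log.
    `decisions` is a list of (group, accepted) pairs, e.g. ("elite", True),
    where the group labels are hypothetical audit categories."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, was_accepted in decisions:
        totals[group] += 1
        accepted[group] += int(was_accepted)
    return {g: accepted[g] / totals[g] for g in totals}

def disparity(rates):
    """A simple audit metric: the gap between the most- and least-favored
    groups. Near zero suggests parity; a large gap flags the system for
    review and retraining."""
    return max(rates.values()) - min(rates.values())
```

For example, a log in which "elite"-affiliated submissions are accepted at 0.8 and "peripheral" ones at 0.4 yields a disparity of 0.4, a concrete figure that an oversight board can set limits on, which no comparable instrument exists for inside a human editor's head.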

In contrast to the human editor—who may skim an abstract after a long day, bristle at unfamiliar terminology, or dismiss a manuscript based on unconscious assumptions—AI offers a different kind of fallibility. It may be limited, but it is limited predictably. It may make mistakes, but those mistakes are replicable, auditable, and open to correction. This is no small virtue in a process where many authors receive desk rejections without meaningful explanation, or where editorial outcomes depend more on the alignment of personalities than the rigor of content. A genAI editor does not reward prestige, does not retaliate for past disputes, does not forget policy, and does not hide behind vague appeals to “scope” when rejecting unfamiliar work. Even if flawed, it is flawed systematically. And in a system where human judgment is too often governed by fatigue, favoritism, or fragility, consistency is not a limitation—it is a moral improvement [26].

To prefer an AI editor, then, is not to exalt technology above humanity; it is to recognize that dispassion can be a form of fairness, and that the greatest threat to equity in science may not come from machines but from the very human hands that have long claimed to protect it. In this context, the machine we once feared may, in fact, be the fairer judge—not because it is perfect, but because it does not pretend to be.

The age of genAI-assisted publishing has already begun

Contrary to the view of defenders of scientific publishing traditions, genAI is not a distant possibility—it is already woven into the fabric of academic publishing [4,12]. The transformation is not theoretical, and it is no longer preliminary. AI is not merely on the horizon; it is already in the room. Leading academic publishers have embraced it, quietly and pragmatically [16,36]. These AI technologies do not represent science fiction; they are functioning tools in an evolving scientific publishing landscape [51]. In the very near future, editorial boards will likely be assisted by genAI models capable of detecting statistical inconsistencies, identifying undisclosed conflicts of interest, flagging citation padding or reviewer manipulation, and estimating the likely societal or clinical impact of a manuscript using semantic analysis [15]. These capabilities are not designed to replace the human editor, but to fortify the process against the inconsistencies and oversights to which human judgment is chronically susceptible. They are not a threat to scientific integrity—they are its recalibration.

When wielded responsibly and governed transparently, genAI tools do what humans, even the most well-meaning among them, often fail to do: apply standards without prejudice, forgetfulness, or fatigue. They do not skip over methodology sections because they are behind on deadlines. They do not allow personal familiarity to influence reviewer invitations. They do not dismiss submissions because the conclusions feel counterintuitive. In short, they bring structure to a process increasingly held hostage by subjectivity.

The age of AI-assisted publishing has begun, not as a rupture with human scholarship, but as a rebalancing of power in its favor. The question is no longer whether genAI will play a role in shaping what is published, but whether we will use its capabilities to correct the known failures of human editorial judgment—or simply replicate those failures in code. The opportunity lies before us. And for those who have spent years navigating an opaque, inconsistent, and often indifferent system, that opportunity feels less like a threat and more like overdue progress.

Toward a rational hybrid scientific publishing future

The future of scientific publishing does not need to be imagined as a contest between man and machine. Rather, it can—and must—be a principled collaboration; a model in which human expertise is guided, rather than replaced, by AI. In such a future, genAI models would serve as guardians of consistency and fairness. These models would handle the initial triage of manuscripts, checking for scope alignment, structural completeness, and adherence to reporting standards. They would suggest reviewers not based on editorial memory or personal familiarity, but on objective expertise and semantic proximity to the manuscript’s content. They would generate structured, first-pass evaluations, reducing the burden of reviewer fatigue and ensuring that every submission receives a baseline level of scrutiny grounded in predefined criteria. The role of the human editor, far from being displaced, would become more focused—reserved for complex interpretative decisions, ethical considerations, and the final arbitration of reviewer disagreement, all made transparently, with editorial names and reasoning documented.
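Reviewer suggestion by "objective expertise and semantic proximity" can likewise be sketched as a ranking problem. The snippet below is a hedged illustration only: it scores hypothetical reviewer profiles against a manuscript using Jaccard overlap of keywords, a crude proxy for the embedding-based semantic matching a real system would employ, and the reviewer names are invented placeholders.

```python
def keyword_set(text: str) -> set:
    """Reduce free text to a set of lowercase keywords."""
    return set(text.lower().split())

def suggest_reviewers(manuscript: str, reviewer_profiles: dict, top_n: int = 2) -> list:
    """Rank candidate reviewers by Jaccard overlap between the manuscript's
    keywords and each reviewer's published-work profile. The same scoring
    rule applies to every candidate, replacing editorial memory or personal
    familiarity with a uniform, explainable criterion."""
    ms = keyword_set(manuscript)

    def score(profile: str) -> float:
        rs = keyword_set(profile)
        union = ms | rs
        return len(ms & rs) / len(union) if union else 0.0

    ranked = sorted(reviewer_profiles.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [name for name, _ in ranked[:top_n]]
```

In this design the matching criterion is the same for every candidate and every manuscript, which is precisely the consistency the hybrid model asks the triage layer to guarantee before human editors weigh in on nuance.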

Such a system does not eliminate the human element—it constrains its fallibility. It protects the process from unconscious bias, arbitrary decisions, and the silent erosion of standards by making editorial judgment more accountable. It reestablishes consistency not by algorithmic tyranny, but by systematic fairness. It ensures that every manuscript—whether authored by a Nobel laureate or a postgraduate student in a lesser-known institution—is afforded the same structural entry point into review. It would no longer be acceptable for a manuscript to be rejected based on mood, memory, or institutional prestige. Editorial subjectivity would still exist, but within boundaries defined by transparent inputs and explainable outputs.

This hybrid model—AI as the engine of equity and human editors as the arbiters of nuance—would revitalize scientific publishing rather than threaten it. It returns to the foundational promise that publication should be governed not by hierarchy, but by merit; not by opacity, but by process. It restores something many have come to doubt ever existed: the right to a fair read.

Conclusion

Choosing the lesser evil, and perhaps the greater good

The editorial process in scientific publishing has long cloaked itself in the language of reason, fairness, and rigor. But when we shift our gaze from its ideals to its actual behavior, a more sobering portrait emerges—one in which the editorial process, rather than embodying objectivity, often reflects patterns of inconsistency. It is a system shaped as much by inertia and gatekeeping as by scholarly excellence. In this light, the proposal to prefer genAI over human editorial judgment is not an act of technological triumphalism. It is, rather, a plea for minimal justice. It is a recognition that the status quo is not only inefficient—it is often arbitrary, unaccountable, and quietly corrosive to the very spirit of inquiry it claims to uphold. To prefer genAI is to choose the lesser evil, and perhaps the greater good. It is to demand consistency where there is now chaos, transparency where there is obscurity, and fairness where bias too often prevails. It is not a call to dismantle human involvement, but to relegate it to the spheres where discernment, not discretion, is truly needed. It is, at its core, a rebellion—not loud, not theatrical, but rational. And if the virtues that science demands—equity, clarity, speed, and accountability—must come from a machine rather than from a man, then let it be so. Let the machine speak, at least because it has no reason to lie.

Declarations

Conflict of interest: The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Funding: The author declares that no financial support was received for the research, authorship, and/or publication of this article.

References

  1. Aczel B, Barwich AS, Diekman AB, Fishbach A, Goldstone RL, Gomez P, et al. The present and future of peer review: Ideas, interventions, and evidence. Proc Natl Acad Sci USA. 2025; 122: e2401232121.
  2. Akasofu S-I. Why Is It so Difficult to Make a Paradigm Change? J Res Philos Hist. 2021; 4: 1.
  3. Argilés-Bosch JM, Kasperskaya Y, Garcia-Blandon J, Ravenda D. The interplay of author and editor gender in acceptance delays: evidence from accounting journals. Scientometrics. 2025.
  4. Bagenal J. Generative artificial intelligence and scientific publishing: urgent questions, difficult answers. Lancet. 2024; 403: 1118-1120.
  5. Banks D. Thoughts on Publishing the Research Article over the Centuries. Publications. 2018; 6: 10.
  6. Campanario JM. Rejecting and resisting Nobel class discoveries: accounts by Nobel Laureates. Scientometrics. 2009; 81: 549-565.
  7. Campanario JM, Acedo E. Rejecting highly cited papers: The views of scientists who encounter resistance to their discoveries from other scientists. J Am Soc Inf Sci Technol. 2007; 58: 734-743.
  8. Cheng K, Sun Z, Liu X, Wu H, Li C. Generative artificial intelligence is infiltrating peer review process. Crit Care. 2024; 28: 149.
  9. Chou HM, Cho TL. Utilizing Text Mining for Labeling Training Models from Futures Corpus in Generative AI. Appl Sci. 2023; 13: 9622.
  10. Clarke R. Why the world wants controls over Artificial Intelligence. Comput Law Secur Rev. 2019; 35: 423-433.
  11. Colapietro VM. Human Emotions and Fallible Judgments: A Pragmatist Sketch of Cognitivism. J Specul Philos. 2021; 35: 289-303.
  12. Conroy G. How ChatGPT and other AI tools could disrupt scientific publishing. Nature. 2023; 622: 234-236.
  13. Drozdz JA, Ladomery MR. The Peer Review Process: Past, Present, and Future. Br J Biomed Sci. 2024; 81: 12054.
  14. Ebadi S, Nejadghanbar H, Salman AR, Khosravi H. Exploring the Impact of Generative AI on Peer Review: Insights from Journal Reviewers. J Acad Ethics. 2025.
  15. Fricke S. Semantic Scholar. J Med Libr Assoc. 2018; 106.
  16. Frontiers Media. Artificial Intelligence to help meet global demand for high-quality, objective peer-review in publishing. 2020. Available from: https://www.frontiersin.org/news/2020/07/01/artificial-intelligence-to-help-meet-global-demand-for-high-quality-objective-peer-review-in-publishing
  17. Galatzer-Levy IR, Munday D, McGiffin J, Liu X, Karmon D, Labzovsky I, et al. The Cognitive Capabilities of Generative AI: A Comparative Analysis with Human Benchmarks. arXiv. 2024.
  18. Gans JS, Shepherd GB. How Are the Mighty Fallen: Rejected Classic Articles by Leading Economists. J Econ Perspect. 1994; 8: 165-179.
  19. Gonsalves C. Generative AI’s Impact on Critical Thinking: Revisiting Bloom’s Taxonomy. J Mark Educ. 2024.
  20. Haffar S, Bazerbachi F, Murad MH. Peer Review Bias: A Critical Review. Mayo Clin Proc. 2019: 94.
  21. Hagendorff T. Machine Psychology: Investigating Emergent Capabilities and Behavior in Large Language Models Using Psychological Methods. arXiv. 2023.
  22. Horbach S, Halffman W. Journal Peer Review and Editorial Evaluation: Cautious Innovator or Sleepy Giant? Minerva. 2020; 58.
  23. Horbach S, Halffman WW. The changing forms and expectations of peer review. Res Integr Peer Rev. 2018; 3: 8.
  24. Horchani R. Impact of institutional affiliation bias in the peer review process. Insights UKSG J. 2025; 38: 6.
  25. Horta H, Jung J. The crisis of peer review: Part of the evolution of science. High Educ Q. 2024; 78: e12511.
  26. Hosseini M, Horbach S. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res Integr Peer Rev. 2023; 8: 4.
  27. Huber J, Inoua S, Kerschbamer R, König-Kersting C, Palan S, Smith VL. Nobel and novice: Author prominence affects peer review. Proc Natl Acad Sci USA. 2022; 119: e2205779119.
  28. Kartal E. A Comprehensive Study on Bias in Artificial Intelligence Systems: Biased or Unbiased AI, That’s the Question! Int J Intell Inf Technol. 2022; 18: 1-23.
  29. Kulal A, Abhishek N, PS, Dinesh S. Unmasking Favoritism and Bias in Academic Publishing: An Empirical Study on Editorial Practices. Public Integrity. 2025: 1-22.
  30. Larson BZ, Moser C, Caza A, Muehlfeld K, Colombo LA. Critical Thinking in the Age of Generative AI. Acad Manag Learn Educ. 2024; 23: 373-378.
  31. Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. J Am Soc Inf Sci Technol. 2013; 64: 2-17.
  32. Leung TI, de Azevedo Cardoso T, Mavragani A, Eysenbach G. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J Med Internet Res. 2023; 25: e51584.
  33. Liang C, Du H, Sun Y, Niyato D, Kang J, Zhao D, et al. Generative AI-Driven Semantic Communication Networks: Architecture, Technologies, and Applications. IEEE Trans Cogn Commun Netw. 2025; 11: 27-47.
  34. Maron N, Smith K. Current Models of Digital Scholarly Communication: Results of an Investigation Conducted by Ithaka Strategic Services for the Association of Research Libraries. J Electron Publ. 2009; 12.
  35. Matías-Guiu J, García-Ramos R. Sesgos en la edición de las publicaciones científicas. Neurología. 2011; 26: 1-5.
  36. McKenna J. AI Tools for Researchers in Scientific Publishing. MDPI Blog. 2024. Available from: https://blog.mdpi.com/2024/04/04/ai-tools-for-researchers/ (accessed 2025).
  37. Medoff MH. Editorial Favoritism in Economics? South Econ J. 2003; 70: 425-434.
  38. Palmer K. Publishers Embrace AI as Research Integrity Tool. Inside Higher Ed. 2025. Available from: https://www.insidehighered.com/news/faculty-issues/research/2025/03/18/publishers-adopt-ai-tools-bolster-research-integrity (accessed 2025).
  39. Peh WC. Peer review: concepts, variants and controversies. Singapore Med J. 2022; 63: 55-60.
  40. Peters U. What Is the Function of Confirmation Bias? Erkenntnis. 2022; 87: 1351-1376.
  41. Petrescu M, Krishen AS. The evolving crisis of the peer-review process. J Mark Anal. 2022; 10: 185-186.
  42. Phuljhele S. Reviewer fatigue is real. Indian J Ophthalmol. 2024; 72: S719-S720.
  43. Politzer-Ahles S, Girolamo T, Ghali S. Preliminary evidence of linguistic bias in academic reviewing. J Engl Acad Purp. 2020; 47: 100895.
  44. Rye KA, Davidson NO, Burlingame AL, Guengerich FP. Working toward reducing bias in peer review. J Biol Chem. 2021; 297: 101243.
  45. Sallam M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare (Basel). 2023; 11: 887.
  46. Sallam M, Al-Mahzoum K, Marzoaq O, Alfadhel M, Al-Ajmi A, Al-Ajmi M, et al. Evident gap between generative artificial intelligence as an academic editor compared to human editors in scientific publishing. Edelweiss Appl Sci Technol. 2024; 8: 960-979.
  47. Sallam M, Al-Mahzoum K, Sallam M, Mijwil MM. DeepSeek: Is it the End of Generative AI Monopoly or the Mark of the Impending Doomsday? Mesopotamian J Big Data. 2025; 2025: 26-34.
  48. Sallam M, Al-Salahat K, Eid H, Egger J, Puladi B. Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions. Adv Med Educ Pract. 2024; 15: 857-871.
  49. Samuelson W, Zeckhauser R. Status quo bias in decision making. J Risk Uncertain. 1988; 1: 7-59.
  50. Si K, Li Y, Ma C, Guo F. Affiliation bias in peer review and the gender gap. Res Policy. 2023; 52: 104797.
  51. Silverman JA, Ali SA, Rybak A, van Goudoever JB, Leleiko NS. Generative AI: Potential and Pitfalls in Academic Publishing. JPGN Rep. 2023; 4: e387.
  52. Smith R. Peer review: a flawed process at the heart of science and journals. JR Soc Med. 2006; 99: 178-182.
  53. Spencer DA. Fear and hope in an age of mass automation: debating the future of work. New Technol Work Employ. 2018; 33: 1-12.
  54. Strauss D, Gran-Ruaz S, Osman M, Williams MT, Faber SC. Racism and censorship in the editorial and peer review process. Front Psychol. 2023; 14: 1120938.
  55. Street C, Ward K. Cognitive Bias in the Peer Review Process: Understanding a Source of Friction between Reviewers and Researchers. ACM SIGMIS Database. 2019; 50: 52-70.
  56. Sverdlichenko I, Xie S, Margolin E. Impact of institutional affiliation bias on editorial publication decisions: A bibliometric analysis of three ophthalmology journals. Ethics Med Public Health. 2022; 21: 100758.
  57. Teixeira da Silva JA, Al-Khatib A. How are Editors Selected, Recruited and Approved? Sci Eng Ethics. 2017; 23: 1801-1804.
  58. Tennant JP, Ross-Hellauer T. The limitations to our understanding of peer review. Res Integr Peer Rev. 2020; 5: 6.
  59. Tomkins A, Zhang M, Heavlin WD. Reviewer bias in single- versus double-blind peer review. Proc Natl Acad Sci USA. 2017; 114: 12708-12713.
  60. Van Eyghen H. Cognitive Bias: Phylogenesis or Ontogenesis? Front Psychol. 2022; 13: 892829.
  61. Varsha PS. How can we manage biases in artificial intelligence systems – A systematic literature review. Int J Inf Manag Data Insights. 2023; 3: 100165.
  62. Yampolskiy RV. AI: Unexplainable, Unpredictable, Uncontrollable. 2024.
  63. Zhou M, Abhishek V, Derdenger TP, Kim J, Srinivasan K. Bias in Generative AI. arXiv. 2024.