Scientific Neutrality in the Age of Artificial Intelligence: A Critical Analysis of the Value-Free Ideal

Authors

  • Budi Harto, Sekolah Tinggi Ilmu Ekonomi El Hakim, Indonesia
  • Ridha Ahida, Universitas Islam Negeri Sjech M. Djamil Djambek Bukittinggi, Indonesia

DOI:

https://doi.org/10.58485/elrusyd.v10i2.486

Keywords:

Scientific neutrality, value-free ideal, artificial intelligence (AI), Thomas Kuhn, critical analysis

Abstract

The debate over the neutrality of scientific knowledge, that is, whether science is value-free or value-laden, has been central to the philosophy of science from the era of logical positivism to the present. The positivist tradition of Carnap and Reichenbach, along with Popper’s falsificationism, holds that the process of scientific justification must be separated from non-epistemic values in order to secure objectivity. However, the use of artificial intelligence (AI) in contemporary scientific research provides empirical evidence that challenges this ideal of value-free science. This study critically examines how the use of AI in science supports the value-laden position advocated by Thomas Kuhn, Helen Longino, and feminist epistemology. The study employs a qualitative method with a content analysis approach, and the material is interpreted through a philosophical analytical framework. The findings identify three major positions: neopositivism, which defends the value-free ideal; the Kuhnian position, which acknowledges the role of epistemic values; and the radical value-laden position. The discussion demonstrates that AI substantiates the value-laden view through four dimensions: algorithmic bias as a manifestation of social values, value-laden design choices in AI systems, the incommensurability of AI paradigms, and situated objectivity, which requires explicit recognition of embedded values. The study concludes that AI not only confirms but also strengthens the argument that the value-free ideal is a philosophical illusion, and that responsible science requires critical reflexivity toward the values embedded in scientific practice.

Published

2025-12-09

How to Cite

Harto, B., & Ahida, R. (2025). Scientific Neutrality in the Age of Artificial Intelligence: A Critical Analysis of the Value-Free Ideal. El-Rusyd, 10(2), 179–190. https://doi.org/10.58485/elrusyd.v10i2.486

Section

Articles