The Impact of Artificial Intelligence on Ethical Decision-Making in Intelligent Systems

Authors

  • Rafid Khaleefah, Basrah University, Iraq

DOI:

https://doi.org/10.61856/6kyazw30

Keywords:

Artificial Intelligence (AI); Ethical Decision-Making; Algorithmic Bias; AI Governance; Moral Agency.

Abstract

This study investigates the relationship between artificial intelligence (AI) and ethical decision-making in intelligent systems. As AI technologies are increasingly deployed in critical domains such as healthcare, law, and autonomous systems, the ethical risks they introduce, such as bias, opacity, and the absence of emotional or contextual judgement, require immediate attention. These risks are classified across three dimensions: technological uncertainty, limitations in human morality, and complex interactions between human and non-human agents. The paper examines the feasibility of embedding ethical reasoning into AI systems using normative theories such as utilitarianism, deontology, and virtue ethics. It also analyses global regulatory frameworks and emerging interdisciplinary approaches, highlighting the importance of culturally responsive, transparent, and accountable AI governance. A multi-domain ethical risk analysis framework is proposed to help developers, policymakers, and ethicists evaluate and mitigate ethical concerns throughout the AI lifecycle. The study concludes with recommendations for future interdisciplinary research, including operationalising ethics in AI design and developing anticipatory governance models. This work aims to support the creation of intelligent systems that are not only technically robust but also ethically aligned with human values.

References

Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261. https://doi.org/10.1080/09528130050111428

Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. MIT Press. ISBN: 9780262048613. https://fairmlbook.org/

Batool, A., Zowghi, D., & Bano, M. (2025). AI governance: A systematic literature review. AI and Ethics, 5, 3265–3279. https://doi.org/10.1007/s43681-024-00653-w

Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer. https://doi.org/10.1007/978-3-319-60648-4

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings of the Conference on Fairness, Accountability and Transparency (pp. 77–91). https://proceedings.mlr.press/v81/buolamwini18a.html

Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563. https://doi.org/10.2139/ssrn.2402972

Cave, S., Dihal, K., & Dillon, S. (2018). Portrayals and perceptions of AI and why they matter. Nature Machine Intelligence, 1(2), 1–3. https://www.lcfi.ac.uk/resources/portrayals-and-perceptions-ai-and-why-they-matter

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313. https://doi.org/10.1038/538311a

Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & Society, 36, 1–12. https://doi.org/10.1007/s00146-021-01162-8

Daly, A., Hagendorff, T., Hui, L., Mann, M., Marda, V., Wagner, B., Wang, W., & Witteborn, S. (2019). Artificial intelligence, governance and ethics: Global perspectives. SSRN. https://doi.org/10.2139/ssrn.3414805

Dent, K. (2020). Ethical considerations for AI researchers. https://doi.org/10.48550/arXiv.2006.07558

Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. https://doi.org/10.1007/978-3-030-30371-6

Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://arxiv.org/abs/1702.08608

Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7

European Commission – High-Level Expert Group on AI. (2019). Ethics guidelines for trustworthy artificial intelligence. European Commission. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52019DC0168

Farisco, M., Evers, K., & Salles, A. (2020). Towards establishing criteria for the ethical analysis of artificial intelligence. Science and Engineering Ethics, 26, 2001–2026. https://doi.org/10.1007/s11948-020-00238-w

Ferrer, X., van Nuenen, T., Such, J. M., Coté, M., & Criado, N. (2020). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 39(3), 72–80. https://doi.org/10.1109/MTS.2021.3056293

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center Research Publication, (2020-1). https://doi.org/10.2139/ssrn.3518482

Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x

Floridi, L. (2023). AI as agency without intelligence: On ChatGPT, large language models, and other generative models. Philosophy & Technology, 36(1), 15–38. https://doi.org/10.1007/s13347-023-00643-6

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Schafer, B. (2018). AI4People - An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Gasser, U., & Almeida, V. A. F. (2017). A layered model for AI governance. IEEE Internet Computing, 21(6), 58–62. https://doi.org/10.1109/MIC.2017.4180835

Guan, H., Dong, L., & Zhao, A. (2022). Ethical risk factors and mechanisms in artificial intelligence decision making. Behavioral Sciences, 12(9), 343. https://doi.org/10.3390/bs12090343

Herrera Poyatos, A., Del Ser, J., López de Prado, M., Wang, F.-Y., Herrera Viedma, E., & Herrera, F. (2025). Responsible artificial intelligence systems: A roadmap to society's trust through trustworthy AI, auditability, accountability, and governance. arXiv preprint. https://doi.org/10.48550/arXiv.2503.04739

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2018). Ethically Aligned Design: A vision for prioritizing human well-being with autonomous and intelligent systems (Version 2). IEEE Standards Association. https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2

Kluge Corrêa, N., Galvão, C., Santos, J. W., Del Pino, C., Pontes Pinto, E., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & de Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, 4(10), Article 100857. https://doi.org/10.1016/j.patter.2023.100857

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Mennella, C., Maniscalco, U., De Pietro, G., & Esposito, M. (2024). Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon, 10(4), Article e26297. https://doi.org/10.1016/j.heliyon.2024.e26297

Metzinger, T. (2020). Europe's approach to regulating AI. In J. B. Bullock, Y.-C. Chen, J. Himmelreich, V. M. Hudson, A. Korinek, M. M. Young, & B. Zhang (Eds.), The Oxford Handbook of AI Governance. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780197579329.001.0001

Miller, T., Howe, P., & Sonenberg, L. (2020). Explainable AI: Understanding, trust, and acceptance. Artificial Intelligence, 287, Article 103385. https://doi.org/10.1016/j.artint.2020.103385

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679

Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4), 18–21. https://doi.org/10.1109/MIS.2006.80

Nagel, E., & Newman, J. R. (2008). Gödel's proof (Rev. ed.). New York University Press. ISBN: 9780814758373.

Radanliev, P., Santos, O., Brandon-Jones, A., & Joinson, A. (2024). Ethics and responsible AI deployment. Frontiers in Artificial Intelligence, 7, Article 1377011. https://doi.org/10.3389/frai.2024.1377011

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., ... & Lazer, D. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y

Russell, S. J., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson Education. ISBN: 9780134610993

Schrage, M., & Kiron, D. (2025, January). The great power shift: How intelligent choice architectures rewrite decision rights. MIT Sloan Management Review. https://sloanreview.mit.edu/article/the-great-power-shift-how-intelligent-choice-architectures-rewrite-decision-rights/

Seeamber, R., & Badea, C. (2023). If we aim to build morality into an artificial agent, how might we begin to go about doing so? IEEE Intelligent Systems, 38(6), 35–41. https://doi.org/10.1109/MIS.2023.3320875

Srivastava, B., & Rossi, F. (2018). Towards a composable bias rating of AI services. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 284–289). https://doi.org/10.1145/3278721.3278744

Tigard, D. W. (2021). Responsible AI and moral responsibility: A common appreciation. AI and Ethics, 1(2), 113–117. https://doi.org/10.1007/s43681-020-00009-0

Tractenberg, R. E. (2023). What is ethical AI? Leading or participating in an ethical team and/or working in statistics, data science, and artificial intelligence. SocArXiv. https://doi.org/10.31235/osf.io/8e6pv

Vakkuri, V., Jantunen, M., Halme, E., Kemell, K. K., Nguyen-Duc, A., Mikkonen, T., & Abrahamsson, P. (2021). The time for the AI (Ethics) maturity model is now. In Proceedings of the 53rd Hawaii International Conference on System Sciences. arXiv preprint arXiv:2101.12701. https://doi.org/10.48550/arXiv.2101.12701

van Wynsberghe, A. (2013). Designing robots for care: Care-centred value-sensitive design. Science and Engineering Ethics, 19(2), 407–433. https://doi.org/10.1007/s11948-011-9343-6

Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press. https://academic.oup.com/book/10768

Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. Center for the Governance of AI, Future of Humanity Institute, University of Oxford. https://doi.org/10.7910/DVN/SGFRYA

Zhang, Y., Wu, J., Yu, F., & Xu, L. (2023). Moral judgments of human vs. AI agents in moral dilemmas. Behavioral Sciences, 13(2), 181. https://doi.org/10.3390/bs13020181

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs. ISBN-13: 9781610395694.

Published

08/27/2025

How to Cite

Khaleefah, R. (2025). The Impact of Artificial Intelligence on Ethical Decision-Making in Intelligent Systems. Gateway Journal for Modern Studies and Research (GJMSR), 2(3). https://doi.org/10.61856/6kyazw30