Machine Learning in Automated Decision Systems with Fairness Awareness

Steven Henry
Ladoke Akintola University of Technology


Abstract

Machine learning is increasingly used in high-stakes domains such as employment, healthcare, criminal justice, and banking. However, concerns have been raised that these systems may reinforce or amplify existing social biases. This paper discusses Fairness-Aware Machine Learning (FAML), an emerging field that aims to improve the fairness of machine learning models by incorporating explicit fairness constraints and fairness-aware design principles. We examine key fairness concepts, including equal opportunity and equalized odds, and review algorithmic strategies for reducing bias at the pre-processing, in-processing, and post-processing stages. The study also explores the relationship between fairness, transparency, and accountability in automated decision-making systems. Through case studies and theoretical analysis, we argue that integrating fairness considerations into machine learning systems can substantially reduce bias, improve public trust, and strengthen system accountability. Finally, we outline key challenges and propose future research directions to support the development of ethical and socially responsible automated decision systems.
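
As an illustration of the group fairness metrics named above, the following minimal sketch computes equal-opportunity and equalized-odds gaps from binary predictions. It is not taken from the paper: the function names, toy data, and the restriction to binary labels and a binary sensitive attribute are assumptions made for the example.

    import numpy as np

    def group_rates(y_true, y_pred, group_mask):
        """True-positive and false-positive rates within one group (binary labels/predictions)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tpr = y_pred[group_mask & (y_true == 1)].mean()  # P(y_hat = 1 | y = 1, group)
        fpr = y_pred[group_mask & (y_true == 0)].mean()  # P(y_hat = 1 | y = 0, group)
        return tpr, fpr

    def fairness_gaps(y_true, y_pred, sensitive):
        """Equal-opportunity gap (TPR difference) and equalized-odds gap
        (max of TPR and FPR differences) between the two groups."""
        group_a = np.asarray(sensitive) == 1
        tpr_a, fpr_a = group_rates(y_true, y_pred, group_a)
        tpr_b, fpr_b = group_rates(y_true, y_pred, ~group_a)
        eo_gap = abs(tpr_a - tpr_b)                   # equal opportunity
        eodds_gap = max(eo_gap, abs(fpr_a - fpr_b))   # equalized odds
        return eo_gap, eodds_gap

    # Hypothetical toy data: labels, model predictions, and a binary sensitive attribute.
    y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred    = [1, 0, 0, 1, 1, 1, 0, 0]
    sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
    print(fairness_gaps(y_true, y_pred, sensitive))

A gap of zero on the first value corresponds to equal opportunity; zero on both true-positive and false-positive rate differences corresponds to equalized odds.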

Keywords

Fairness-aware machine learning, Algorithmic bias, Automated decision systems, Discrimination mitigation, Ethical AI, Fairness metrics, Socio-technical systems, Responsible AI.
