Ethics and bias mitigation in AI systems: Technical and regulatory trends
Abstract
This article critically analyzes technical and regulatory strategies for mitigating bias in artificial intelligence (AI) systems, an urgent challenge given the social impact of these technologies. Through a documentary review of Scopus-indexed articles (2018–2022) in Spanish and English, four thematic axes were identified: bias detection techniques, mitigation methods (data rebalancing, adversarial debiasing), international regulatory frameworks (EU, U.S., OECD), and challenges of real-world implementation (trade-offs between fairness and performance). The results reveal that, despite advances in fairness-aware algorithms, gaps persist between theory and practice, especially in industrial settings. It is concluded that ethics in AI requires multidisciplinary approaches that integrate technical solutions with adaptable governance, community participation, and continuous audits.
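To make the abstract's terminology concrete, the following minimal sketch (hypothetical, not taken from the reviewed articles) illustrates in Python, assuming only NumPy, two of the ideas named above: a demographic-parity gap as a simple bias-detection check, and reweighing-style instance weights as one form of data rebalancing. All variable names and data are illustrative assumptions.

# Minimal illustrative sketch (hypothetical, not from the article): a demographic-parity
# check as a simple bias-detection signal, and reweighing-style instance weights as one
# form of data rebalancing. Assumes only NumPy; all names and data are made up.
import numpy as np


def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-outcome rates between two groups (0 means parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())


def rebalancing_weights(y, group):
    """Per-instance weights proportional to expected/observed frequency of each
    (group, label) cell, so no cell is over- or under-represented in training."""
    y, group = np.asarray(y), np.asarray(group)
    weights = np.zeros(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            if mask.any():
                expected = (group == g).mean() * (y == label).mean()
                weights[mask] = expected / mask.mean()
    return weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)  # hypothetical protected attribute
    y = (rng.random(1000) < np.where(group == 1, 0.3, 0.6)).astype(int)  # skewed outcomes
    print("Base-rate gap between groups:", round(demographic_parity_difference(y, group), 3))
    w = rebalancing_weights(y, group)
    for g in (0, 1):
        m = group == g
        # After reweighting, each group's weighted positive rate matches the overall rate.
        print(f"group {g}: weighted positive rate = {np.average(y[m], weights=w[m]):.3f}")

With these independence-based weights, each (group, label) cell contributes in proportion to its expected frequency, so the weighted positive rate equalizes across groups; adversarial debiasing, also named in the abstract, pursues a comparable goal during model training rather than in preprocessing.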
References
Addey, C. (2021). Passports to the Global South, UN flags, favourite experts: understanding the interplay between UNESCO and the OECD within the SDG4 context. Globalisation, Societies and Education, 19, 593 - 604. https://doi.org/10.1080/14767724.2020.1862643
Akintola, B., Jagboro, G., Ojo, G., & Odediran, S. (2020). Effectiveness of Mechanisms for Enforcement of Ethical Standards in the Construction Industry. Journal of Construction Business and Management, 4(1), 1–12. https://doi.org/10.15641/JCBM.4.1.530
Auld, E., Rappleye, J., & Morris, P. (2018). PISA for Development: how the OECD and World Bank shaped education governance post-2015. Comparative Education, 55, 197 - 219. https://doi.org/10.1080/03050068.2018.1538635
Balabin, A. (2019). The Implementation of Corporate Governance Standards in Large Russian Companies. Proceedings of the International Scientific Conference "Far East Con" (ISCFEC 2018). https://doi.org/10.2991/iscfec-18.2019.24
Baros, J., Sotola, V., Bilik, P., Martínek, R., Jaros, R., Danys, L., & Simoník, P. (2022). Review of Fundamental Active Current Extraction Techniques for SAPF. Sensors (Basel, Switzerland), 22. https://doi.org/10.3390/s22207985
Baykurt, B. (2022). Algorithmic accountability in U.S. cities: Transparency, impact, and political economy. Big Data & Society, 9. https://doi.org/10.1177/20539517221115426
Borges Machín, A. Y. y González Bravo, Y. L. (2022). Educación comunitaria para un envejecimiento activo: experiencia en construcción desde el autodesarrollo. Región Científica, 1(1), 202212. https://doi.org/10.58763/rc202213
Busuioc, M. (2020). Accountable Artificial Intelligence: Holding Algorithms to Account. Public Administration Review, 81, 825 - 836. https://doi.org/10.1111/puar.13293
Carter, E., Onyeador, I., & Lewis, N. (2020). Developing & delivering effective anti-bias training: Challenges & recommendations. Behavioral Science & Policy, 6, 57 - 70. https://doi.org/10.1177/237946152000600106
Chalmers, P. (2018). Model-Based Measures for Detecting and Quantifying Response Bias. Psychometrika, 83, 696 - 732. https://doi.org/10.1007/s11336-018-9626-9
Cruz, I., Troffaes, M., Lindström, J., & Sahlin, U. (2022). A robust Bayesian bias‐adjusted random effects model for consideration of uncertainty about bias terms in evidence synthesis. Statistics in Medicine, 41, 3365 - 3379. https://doi.org/10.1002/sim.9422
Czarnowska, P., Vyas, Y., & Shah, K. (2021). Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. Transactions of the Association for Computational Linguistics, 9, 1249-1267. https://doi.org/10.1162/tacl_a_00425
De Paolis Kaluza, M., Jain, S., & Radivojac, P. (2022). An Approach to Identifying and Quantifying Bias in Biomedical Data. Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 28, 311 - 322. https://doi.org/10.1142/9789811270611_0029
Delobelle, P., Tokpo, E., Calders, T., & Berendt, B. (2022). Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1693-1706. https://doi.org/10.18653/v1/2022.naacl-main.122
Dobler, C. C., Morrow, A. S., & Kamath, C. C. (2019). Clinicians' cognitive biases: a potential barrier to implementation of evidence-based clinical practice. BMJ Evidence-Based Medicine, 24(4), 137–140. https://doi.org/10.1136/bmjebm-2018-111074
Fernández-Castilla, B., Declercq, L., Jamshidi, L., Beretvas, S., Onghena, P., & Van Den Noortgate, W. (2019). Detecting Selection Bias in Meta-Analyses with Multiple Outcomes: A Simulation Study. The Journal of Experimental Education, 89, 125-144. https://doi.org/10.1080/00220973.2019.1582470
Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Intelligence, 1, 261-262. https://doi.org/10.1038/S42256-019-0055-Y
Goldfarb-Tarrant, S., Marchant, R., Sánchez, R., Pandya, M., & Lopez, A. (2020). Intrinsic Bias Metrics Do Not Correlate with Application Bias. ArXiv, abs/2012.15859. https://doi.org/10.18653/v1/2021.acl-long.150
Gómez Cano, C. A. (2022). Ingreso, permanencia y estrategias para el fomento de los Semilleros de Investigación en una IES de Colombia. Región Científica, 1(1), 20226. https://doi.org/10.58763/rc20226
Gómez Miranda, O. M. (2022). La franquicia: de la inversión al emprendimiento. Región Científica, 1(1), 20229. https://doi.org/10.58763/rc20229
Gómez-Cano, C. y Sánchez-Castillo, V. (2021). Evaluación del nivel de madurez en la gestión de proyectos de una empresa prestadora de servicios públicos. Económicas CUC, 42(2), 133-144. https://doi.org/10.17981/econcuc.42.2.2021.Org.7
Gray, C. (2022). Overcoming Political Fragmentation: The Potential of Meso-Level Mechanisms. International Journal of Health Policy and Management, 12. https://doi.org/10.34172/ijhpm.2022.7075
Guzmán, D. L., Gómez-Cano, C. y Sánchez-Castillo, V. (2022). Construcción del Estado a partir de la participación ciudadana. Revista Academia & Derecho, 14(25). https://doi.org/10.18041/2215-8944/academia.25.10601
Han, X., Baldwin, T., & Cohn, T. (2022). Towards Equal Opportunity Fairness through Adversarial Learning. ArXiv, abs/2203.06317. https://doi.org/10.48550/arXiv.2203.06317
Heiden, B., Tonino-Heiden, B., Obermüller, T., Loipold, C., & Wissounig, W. (2020). Rising from systemic to industrial artificial intelligence applications (AIA) for predictive decision making (PDM): Four examples. In Y. Bi, R. Bhatia & S. Kapoor (Eds.), Intelligent systems and applications. IntelliSys 2019 (Advances in Intelligent Systems and Computing, 1038, pp. 1222–1233). Springer. https://doi.org/10.1007/978-3-030-29513-4_94
Higuera Carrillo, E. L. (2022). Aspectos clave en agroproyectos con enfoque comercial: Una aproximación desde las concepciones epistemológicas sobre el problema rural agrario en Colombia. Región Científica, 1(1), 20224. https://doi.org/10.58763/rc20224
Hoyos Chavarro, Y. A., Melo Zamudio, J. C., & Sánchez Castillo, V. (2022). Sistematización de la experiencia de circuito corto de comercialización estudio de caso Tibasosa, Boyacá. Región Científica, 1(1), 20228. https://doi.org/10.58763/rc20228
Kelly, C., Karthikesalingam, A., Suleyman, M., Corrado, G., & King, D. (2019). Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17. https://doi.org/10.1186/s12916-019-1426-2
Kimura, A., Antón-Oldenburg, M., & Pinderhughes, E. (2021). Developing and Teaching an Anti-Bias Curriculum in a Public Elementary School: Leadership, K-1 Teachers’, and Young Children’s Experiences. Journal of Research in Childhood Education, 36, 183 - 202. https://doi.org/10.1080/02568543.2021.1912222
Kinavey, H., & Cool, C. (2019). The Broken Lens: How Anti-Fat Bias in Psychotherapy is Harming Our Clients and What To Do About It. Women & Therapy, 42, 116 - 130. https://doi.org/10.1080/02703149.2018.1524070
Langenkamp, M., Costa, A., & Cheung, C. (2020). Hiring Fairly in the Age of Algorithms. ArXiv, abs/2004.07132. https://doi.org/10.2139/ssrn.3723046
Ledesma, F. y Malave-González, B. E. (2022). Patrones de comunicación científica sobre E-commerce: un estudio bibliométrico en la base de datos Scopus. Región Científica, 1(1), 202214. https://doi.org/10.58763/rc202214
Lin, L., & Chu, H. (2018). Quantifying publication bias in meta‐analysis. Biometrics, 74. https://doi.org/10.1111/biom.12817
Lyu, Y., Lu, H., Lee, M., Schmitt, G., & Lim, B. (2022). IF-City: Intelligible Fair City Planning to Measure, Explain and Mitigate Inequality. IEEE Transactions on Visualization and Computer Graphics, 30, 3749-3766. https://doi.org/10.1109/TVCG.2023.3239909
Madaio, M., Egede, L., Subramonyam, H., Vaughan, J., & Wallach, H. (2021). Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support. Proceedings of the ACM on Human-Computer Interaction, 6, 1 - 26. https://doi.org/10.1145/3512899
Mazen, J., & Tong, X. (2020). Bias Correction for Replacement Samples in Longitudinal Research. Multivariate Behavioral Research, 56, 805 - 827. https://doi.org/10.1080/00273171.2020.1794774
McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68, 309 - 343. https://doi.org/10.1017/S0020589319000046
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys (CSUR), 54, 1 - 35. https://doi.org/10.1145/3457607
Miroshnikov, A., Kotsiopoulos, K., Franks, R., & Kannan, A. (2020). Wasserstein-based fairness interpretability framework for machine learning models. Machine Learning, 111, 3307 - 3357. https://doi.org/10.1007/s10994-022-06213-9
Mogrovejo Andrade, J. M. (2022). Estrategias resilientes y mecanismos de las organizaciones para mitigar los efectos ocasionados por la pandemia a nivel internacional. Región Científica, 1(1), 202211. https://doi.org/10.58763/rc202211
Mökander, J., Juneja, P., Watson, D., & Floridi, L. (2022). The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?. Minds and Machines, 32, 751 - 758. https://doi.org/10.1007/s11023-022-09612-y
Morgan, A., Chaiyachati, K., Weissman, G., & Liao, J. (2018). Eliminating Gender-Based Bias in Academic Medicine: More Than Naming the “Elephant in the Room”. Journal of General Internal Medicine, 33, 966-968. https://doi.org/10.1007/s11606-018-4411-0
Ngxande, M., Tapamo, J., & Burke, M. (2019). Bias Remediation in Driver Drowsiness Detection Systems Using Generative Adversarial Networks. IEEE Access, 8, 55592-55601. https://doi.org/10.1109/ACCESS.2020.2981912
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernández, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, G., Tiropanis, T., & Staab, S. (2020). Bias in data‐driven artificial intelligence systems—An introductory survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 10. https://doi.org/10.1002/widm.1356
Orozco Castillo, E. A. (2022). Experiencias en torno al emprendimiento femenino. Región Científica, 1(1), 20225. https://doi.org/10.58763/rc20225
Pérez Gamboa, A. J., García Acevedo, Y. y García Batán, J. (2019). Proyecto de vida y proceso formativo universitario: un estudio exploratorio en la Universidad de Camagüey. Transformación, 15(3), 280-296. http://scielo.sld.cu/scielo.php?script=sci_arttext&pid=S2077-29552019000300280
Pérez-Gamboa, A. J., Gómez-Cano, C., & Sánchez-Castillo, V. (2022). Decision making in university contexts based on knowledge management systems. Data & Metadata, 2, 92. https://doi.org/10.56294/dm202292
Peters, U. (2022). Algorithmic Political Bias in Artificial Intelligence Systems. Philosophy & Technology, 35. https://doi.org/10.1007/s13347-022-00512-8
Petersen, J., Ranker, L., Barnard-Mayers, R., Maclehose, R., & Fox, M. (2021). A systematic review of quantitative bias analysis applied to epidemiological research. International Journal of Epidemiology. https://doi.org/10.1093/ije/dyab061
Petrenko, A. (2020). OECD acts as instruments of soft international law. Law Review of Kyiv University of Law. https://doi.org/10.36695/2219-5521.3.2020.74
Pospisil, D., & Bair, W. (2022). Accounting for Bias in the Estimation of r2 between Two Sets of Noisy Neural Responses. The Journal of Neuroscience, 42, 9343 - 9355. https://doi.org/10.1523/JNEUROSCI.0198-22.2022
Ricardo Jiménez, L. S. (2022). Dimensiones de emprendimiento: Relación educativa. El caso del programa cumbre. Región Científica, 1(1), 202210. https://doi.org/10.58763/rc202210
Ringe, W., & Ruof, C. (2020). Regulating Fintech in the EU: the Case for a Guided Sandbox. European Journal of Risk Regulation, 11, 604 - 629. https://doi.org/10.1017/err.2020.8
Rodríguez-Torres, E., Gómez-Cano, C., & Sánchez-Castillo, V. (2022). Management information systems and their impact on business decision making. Data & Metadata, 1, 21. https://doi.org/10.56294/dm202221
Royal, K. (2019). Survey research methods: A guide for creating post-stratification weights to correct for sample bias. Education in the Health Professions, 2, 48 - 50. https://doi.org/10.4103/EHP.EHP_8_19
Rus, C., Luppes, J., Oosterhuis, H., & Schoenmacker, G. (2022). Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation. ArXiv, abs/2209.09592. https://doi.org/10.48550/arXiv.2209.09592
Sanabria Martínez, M. J. (2022). Construir nuevos espacios sostenibles respetando la diversidad cultural desde el nivel local. Región Científica, 1(1), 20222. https://doi.org/10.58763/rc20222
Shen, A., Han, X., Cohn, T., Baldwin, T., & Frermann, L. (2022). Does Representational Fairness Imply Empirical Fairness? Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, 81-95. https://doi.org/10.18653/v1/2022.findings-aacl.8
Sherman, L., Cantor, A., Milman, A., & Kiparsky, M. (2020). Examining the complex relationship between innovation and regulation through a survey of wastewater utility managers. Journal of Environmental Management, 260, 110025. https://doi.org/10.1016/j.jenvman.2019.110025
Simpson, A., & Dervin, F. (2019). Global and intercultural competences for whom? By whom? For what purpose?: an example from the Asia Society and the OECD. Compare: A Journal of Comparative and International Education, 49, 672 - 677. https://doi.org/10.1080/03057925.2019.1586194
Thompson, J. (2021). Mental Models and Interpretability in AI Fairness Tools and Code Environments. In: Stephanidis, C., et al. HCI International 2021 - Late Breaking Papers: Multimodality, eXtended Reality, and Artificial Intelligence. HCII 2021. Lecture Notes in Computer Science, vol 13095. Springer, Cham. https://doi.org/10.1007/978-3-030-90963-5_43
Vaccari, V., & Gardinier, M. (2019). Toward one world or many? A comparative analysis of OECD and UNESCO global education policy documents. International Journal of Development Education and Global Learning. https://doi.org/10.18546/IJDEGL.11.1.05
Wexler, J., Pushkarna, M., Bolukbasi, T., Wattenberg, M., Viégas, F., & Wilson, J. (2019). The What-If Tool: Interactive Probing of Machine Learning Models. IEEE Transactions on Visualization and Computer Graphics, 26, 56-65. https://doi.org/10.1109/TVCG.2019.2934619
Yam, J., & Skorburg, J. (2021). From human resources to human rights: Impact assessments for hiring algorithms. Ethics and Information Technology, 23, 611 - 623. https://doi.org/10.1007/s10676-021-09599-7
Yarborough, M. (2021). Moving towards less biased research. BMJ Open Science, 5. https://doi.org/10.1136/bmjos-2020-100116
Zahid, A., Khan, M., Khan, A., Kamiran, F., & Nasir, B. (2020). Modeling, Quantifying and Visualizing Media Bias on Twitter. IEEE Access, 8, 81812-81821. https://doi.org/10.1109/ACCESS.2020.2990800
Zapp, M. (2020). The authority of science and the legitimacy of international organisations: OECD, UNESCO and World Bank in global education governance. Compare: A Journal of Comparative and International Education, 51, 1022 - 1041. https://doi.org/10.1080/03057925.2019.1702503