Responsible AI

We are committed to advancing the field of Responsible Artificial Intelligence (AI) by prioritizing two critical dimensions: Explainable AI (XAI) and bias reduction in machine learning algorithms. Our research and development endeavors are firmly grounded in addressing the ethical, social, and technical challenges associated with AI systems, ensuring that they are transparent, fair, and accountable.

Faculty: Corey Jackson, Jacob Thebault-Spieker
Students: Tallal Ahmad, Yaxuan Yin

Learn about our Responsible AI projects:  


Socio-technical Audits in ML Decision-making

This research aims to mitigate algorithmic bias in machine learning by applying a socio-technical framework to algorithmic audits. The project cultivates collaboration between machine learning developers and the broader public, with the goal of minimizing adverse outcomes for all demographic groups when machine learning is applied in decision-making contexts. It redefines fairness as inherently contextual and adaptable, actively integrating public opinions and attitudes into the audit process, and positions fairness as a multifaceted phenomenon with social, historical, contextual, and geographical dimensions.
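One quantitative step in an algorithmic audit is comparing a model's decision rates across demographic groups. The sketch below illustrates this idea with a common group-fairness measure (the demographic parity gap); the data, group labels, and function names are illustrative assumptions, not the project's actual methodology.

```python
# Illustrative sketch: measuring group-level disparity in binary decisions.
# Data and names are hypothetical, not drawn from the project.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the favorable-decision rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in favorable-decision rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = favorable outcome
groups    = ["a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)
gap = demographic_parity_gap(rates)
```

A socio-technical audit goes beyond a number like this gap: which groups to compare, and what disparity is acceptable, are exactly the contextual questions the framework opens to public input.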


Jackson, C. B., Ahmad, T., & Saxena, D. (2023). Re-imagining Fairness in Machine Learning: A Framework for Building in Socio-cultural and Contextual Awareness. [Position paper presented at the Supporting User Engagement in Testing, Auditing, and Contesting AI workshop]. CSCW 2023, Minneapolis, MN, USA.