Fairness in AI

Galdi, Chiara
Cybersecurity Seminar Series, Distributed Team Algorithms, Aix-Marseille University, Campus Luminy, 31 March 2026, Marseille, France

Artificial intelligence and big data systems increasingly influence decisions in areas such as security, finance, and public services. While often perceived as objective, these systems can inherit and amplify biases present in the data and in the design choices made during development. This presentation explores how bias originates in society and progressively propagates into technological systems through human cognitive biases, data collection and labeling practices, and algorithmic design decisions. By examining different sources of bias and common fairness metrics, we discuss how unintended discrimination can emerge even in well-intentioned systems. A case study on face recognition illustrates how performance disparities across demographic groups reveal these underlying mechanisms. Ultimately, the talk highlights that achieving fairness in AI requires understanding the entire pipeline, from societal context and data generation to model design and deployment, rather than treating bias solely as a technical problem.
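The abstract refers to common fairness metrics without detailing them; as an illustration only, the sketch below shows two widely used group-fairness measures, demographic parity difference and equal opportunity difference. The function names, threshold-free formulation, and synthetic data are assumptions for this example and are not taken from the talk.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across demographic groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Largest gap in true-positive rates (recall) across demographic groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Hypothetical binary predictions and a synthetic demographic attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))           # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))
```

A value of zero on either metric means the groups are treated identically under that criterion; the two criteria generally cannot be satisfied simultaneously, which is one reason the talk frames fairness as a pipeline-wide concern rather than a single technical fix.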


Type:
Talk
City:
Marseille
Date:
2026-03-31
Department:
Sécurité numérique
Eurecom Ref:
8709
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Cybersecurity Seminar Series, Distributed Team Algorithms, Aix-Marseille University, Campus Luminy, 31 March 2026, Marseille, France and is available at:
See also:

PERMALINK : https://www.eurecom.fr/publication/8709