Katrine Bay and Gabriella A. M. Kierulff successfully defended their Master's thesis, "Enhancing Face Recognition Models with Synthetic Child Data"

Despite the widespread adoption of facial recognition technology in various applications,
significant challenges persist in ensuring fair and unbiased performance across different
age groups. These challenges are particularly pronounced in scenarios involving children,
who are often underrepresented in training datasets, leading to biased algorithms that
perform more accurately on adults than on children.
This thesis investigates the enhancement of demographic fairness, particularly for
children, in state-of-the-art face recognition models. It further seeks to align with
a responsible development framework, following the new regulations for biometric
identification systems in the EU AI Act and relevant ISO/IEC standards. The research
addresses these issues through a bias mitigation technique: fine-tuning pre-trained face
recognition models on synthetic images of children's faces, an approach intended to
mitigate bias while preserving privacy.
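As a rough illustration of this approach, the sketch below fine-tunes a pre-trained
backbone on a folder of synthetic child identities. It is a minimal PyTorch example under
stated assumptions: the ResNet-50 stand-in backbone, the `synthetic_children` directory
layout, the softmax identity head, and the hyperparameters are all illustrative, not the
models, loss, or settings used in the thesis (which may, for instance, use a dedicated
margin-based face recognition loss).

```python
# Minimal sketch, not the thesis' actual training code: fine-tune a pre-trained
# backbone on a folder of synthetic child face images, one sub-folder per
# synthetic identity (e.g. synthetic_children/id_0001/*.png).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((112, 112)),              # common face-crop resolution
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
data = datasets.ImageFolder("synthetic_children", transform=tfm)
loader = DataLoader(data, batch_size=64, shuffle=True, num_workers=4)

# ResNet-50 as a stand-in for a pre-trained face recognition backbone; the
# original classifier is replaced by an identity head over the synthetic IDs.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # small LR: fine-tune, not retrain
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```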
The thesis evaluates the fairness of these models using performance metrics, including
the False Positive Identification Rate (FPIR), the False Negative Identification Rate
(FNIR), and the demographic fairness metric False Negative Differential (FND).
Performance is assessed across different age subgroups, categorising children
into age ranges from 1 to 15 years and comparing these groups with adults.
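For reference, the identification error rates can be written roughly as below. The exact
operating point (decision threshold, candidate-list rank) follows the thesis and the
ISO/IEC biometric testing methodology it cites; in particular, writing the False Negative
Differential as the difference in FNIR between a child subgroup g and the adult reference
group is an assumption made here for illustration, not necessarily the thesis' precise
definition.

```latex
% Identification error rates at decision threshold \tau (sketch):
\mathrm{FPIR}(\tau) =
  \frac{\#\{\text{non-mated searches returning any candidate at } \tau\}}
       {\#\{\text{non-mated searches}\}}
\qquad
\mathrm{FNIR}(\tau) =
  \frac{\#\{\text{mated searches missing the enrolled identity at } \tau\}}
       {\#\{\text{mated searches}\}}

% Assumed form of the False Negative Differential for a child subgroup g,
% relative to the adult reference group:
\mathrm{FND}_{g}(\tau) = \mathrm{FNIR}_{g}(\tau) - \mathrm{FNIR}_{\mathrm{adult}}(\tau)
```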
Key findings reveal significant disparities in the baseline performance of pre-trained face
recognition models, with adult faces being identified more accurately than children’s
faces. Fine-tuning with synthetic child data generally improved recognition for both
children and adults. However, the improvement in adult performance was significantly
greater than that for children, ultimately widening the gap in identification performance
and resulting in face recognition models that were less fair towards children than the
baseline models.
Further analysis revealed that image quality significantly affects recognition performance
across demographic groups, emphasising the importance of controlling image quality to
confidently isolate and address biases. When controlling for image quality, FNIR was
observed to decrease with age, indicating that the face recognition models identify
older children more reliably than younger ones.
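As a hedged illustration of what controlling for image quality can look like in practice,
the sketch below restricts mated-search results to a fixed quality band before computing
FNIR per age group. The file name, column names, quality measure, and band are
illustrative assumptions, not the thesis' actual evaluation pipeline.

```python
# Hedged sketch: compare FNIR across age groups within a fixed quality band,
# so that image-quality differences do not confound the age comparison.
# File name, columns and thresholds are illustrative assumptions.
import pandas as pd

# One row per mated identification search: the subject's age group, a face
# image quality score (assumed to lie in [0, 100]) and whether the search
# missed the enrolled identity.
results = pd.read_csv("mated_searches.csv")  # columns: age_group, quality, missed

# Keep only searches whose probe image falls in a fixed quality band.
band = results[(results["quality"] >= 60) & (results["quality"] <= 80)]

# FNIR per age group = fraction of mated searches that missed the true identity.
fnir_by_age = band.groupby("age_group")["missed"].mean()
print(fnir_by_age.sort_index())
```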
These insights contribute to enhancing fairness in face identification systems for children,
suggesting that while fine-tuning on synthetic data improves the overall recognition
performance for children, it does not enhance the demographic fairness of the face
recognition system. To advance the development of fair face identification systems,
this thesis discusses alternative bias mitigation strategies, carefully balancing
the trade-off between privacy and performance. These strategies are evaluated within
the context of compliance with regulatory frameworks governing the field.