Muhammad Haseeb Kamal successfully defended his MSc thesis on Defending Face Impersonation Detectors against Adversarial Attacks

In April 2024, Muhammad Haseeb Kamal successfully defended his MSc thesis titled "Defending Face Impersonation Detectors against Adversarial Attacks".

Abstract:
Face recognition systems are a popular choice for biometric authentication. However, they are known to be vulnerable to presentation and morphing attacks. To mitigate the security risks posed by these attacks, face impersonation detectors are deployed in conjunction with face recognition systems to detect presentation or morphing attempts. However, since these detectors are based on deep neural networks, they are prone to another class of attacks known as adversarial attacks. The vulnerability of face impersonation detectors to adversarial attacks is not well researched. This thesis aimed to assess this vulnerability through a variety of black-box adversarial attacks. The thesis also investigated the impact of the impersonation attacks on multiple face recognition systems before and after the adversarial attacks were applied to fool the impersonation detectors. A secondary aim was to evaluate defence mechanisms that mitigate the risk of adversarial attacks.
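For readers unfamiliar with the attack class, the sketch below illustrates a query-based (score-only) black-box attack in the style of SimBA: the attacker repeatedly queries the detector and keeps any single-coordinate perturbation that lowers the detector's attack score, requiring no access to gradients or model internals. The `detector` callable, the image encoding, and all parameter values are illustrative assumptions; the thesis evaluated a variety of such attacks, not this particular implementation.

```python
import numpy as np

def query_based_attack(detector, x, epsilon=0.05, max_queries=1000, rng=None):
    """Score-based black-box attack sketch (SimBA-style).

    `detector` is a hypothetical callable returning the probability that
    the image `x` (a float array in [0, 1]) is an impersonation attack.
    One random coordinate is perturbed at a time; a step is kept only
    if it lowers the detector's score.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_adv = x.copy()
    best = detector(x_adv)
    indices = rng.permutation(x.size)  # random order of pixel coordinates
    queries = 0
    for idx in indices:
        if queries >= max_queries:
            break
        for step in (+epsilon, -epsilon):
            candidate = x_adv.copy()
            candidate.flat[idx] = np.clip(candidate.flat[idx] + step, 0.0, 1.0)
            score = detector(candidate)
            queries += 1
            if score < best:  # keep the perturbation if the score drops
                x_adv, best = candidate, score
                break
    return x_adv, best, queries
```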

Experimental results show that both presentation attack detectors and morphing attack detectors are vulnerable to query-based black-box attacks, with the morphing attack detectors proving more vulnerable than the tested presentation attack detectors. The attacks also exposed the vulnerability of the face recognition systems themselves. These vulnerabilities were quantified through standardised metrics such as the detection equal error rate and the impostor attack presentation accept rate. Finally, both detector types could be defended against the attacks with varying success: defensive distillation was applied to the presentation attack detectors, while adversarial training was used on the morphing attack detectors, with the former yielding the greater increase in model robustness. A minimal sketch of both defence techniques follows.
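As a rough illustration of the two defences, the sketch below shows a defensive-distillation loss (the student is trained to match a teacher's temperature-softened outputs, after Papernot et al.) and a single FGSM-based adversarial-training step. It is written in PyTorch; the function names, temperature, and perturbation budget are assumptions for illustration and do not reproduce the thesis' exact setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=20.0):
    """Defensive distillation: the student matches the teacher's
    temperature-softened output distribution instead of hard labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T**2

def fgsm_adversarial_batch(model, x, y, epsilon=8 / 255):
    """Adversarial training step: augment the batch with FGSM examples
    crafted against the current model, then train on the mixture."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
    return torch.cat([x.detach(), x_adv]), torch.cat([y, y])
```

In practice, the distilled student is deployed at temperature 1, and an adversarially trained detector regenerates the adversarial half of each batch against its current weights as training progresses.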