Javad Rzayev successfully defended his master's thesis "Deep Learning-based Face Image Quality Assessment"
Abstract
Face recognition now underpins everyday tasks such as unlocking phones, crossing borders, and verifying IDs. Its reliability, however, depends strongly on the quality of the captured face image: blurred, poorly lit, or partially occluded photos make recognition errors more likely. Face image quality assessment therefore plays an essential role behind the scenes. Recently, deep-learning-based methods have taken over this task, learning on their own to predict whether a given face image is likely to be matched correctly later on.
This thesis investigates modern deep learning techniques for face image quality assessment. It first surveys established methods and recent work in the field. Several open-source tools are then evaluated on a common image dataset, the central question being how well each approach estimates an image's utility for face recognition. For this purpose, Error-versus-Discard Characteristic (EDC) curves show how effectively each method identifies and discards poor images, thereby improving recognition performance, while violin plots reveal how the quality estimates are distributed and uncover differences across demographic groups.
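The EDC evaluation described above can be sketched in a few lines: samples are sorted by their predicted quality, the lowest-quality fraction is discarded, and the recognition error rate on the remaining samples is recomputed. This is a minimal illustrative sketch, not the thesis' actual evaluation code; the function name and the simplified per-sample error flags are assumptions for illustration.

```python
import numpy as np

def edc_curve(quality, comparison_errors, discard_fractions):
    """Illustrative Error-versus-Discard Characteristic computation.

    quality            : per-sample quality scores (higher = better)
    comparison_errors  : per-sample 0/1 flags, 1 if the sample's genuine
                         comparison failed at a fixed decision threshold
    discard_fractions  : fractions of lowest-quality samples to discard
    Returns the error rate among the samples kept at each fraction.
    """
    order = np.argsort(quality)                       # worst quality first
    errors = np.asarray(comparison_errors, float)[order]
    n = len(errors)
    rates = []
    for f in discard_fractions:
        kept = errors[int(f * n):]                    # drop the worst f·n samples
        rates.append(kept.mean() if len(kept) else 0.0)
    return np.array(rates)
```

If a quality estimator is useful for recognition, the resulting curve falls steeply: discarding the lowest-rated images removes most of the failing comparisons.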
A closer comparison of the evaluated face image quality assessment tools reveals clear differences in their quality judgments. The EDC curves show that some approaches identify problematic images considerably better than others, and images that degrade recognition performance are labeled more consistently by certain models. Moreover, rating patterns vary significantly across demographic groups, and these variations stem from group characteristics rather than randomness. Not all deep learning methods are therefore equally capable of producing fair quality ratings. The results clarify where current tools succeed and where they fall short: the right choice of method depends on system goals and user diversity, and the main value of the analysis lies in exposing hidden behaviors of automated quality assessment systems.
