N Alsufyani is a research student in the School of Engineering and Digital Arts.
Conference or workshop item
Alsufyani, N. et al. (2018). Biometric Presentation Attack Detection using Gaze Alignment. in: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA). Available at: https://doi.org/10.1109/ISBA.2018.8311472.

Face recognition systems have improved rapidly in recent decades, but their wide deployment has been hindered by their vulnerability to spoofing attacks. In this paper, we present a challenge-response method for detecting attacks on face recognition systems by recording the gaze of a user in response to a moving stimulus. The proposed system extracts eye centres from the captured frames and computes features from these landmarks to ascertain whether the gaze aligns with the challenge trajectory, thereby detecting spoofing attacks. The system is tested on a new database simulating mobile device use, with 70 subjects attempting three types of spoofing attack (a projected photo, looking through a 2D mask, or wearing a 3D mask). Evaluations on the collected database show that the proposed approach performs favourably compared with state-of-the-art methods.
Ali, A. et al. (2017). Biometric Counter-spoofing for Mobile Devices using Gaze Information. in: 7th International Conference on Pattern Recognition and Machine Intelligence. Springer, pp. 11-18. Available at: https://doi.org/10.1007/978-3-319-69900-4_2.

With the rise in the use of biometric authentication on mobile devices, it is important to address the security vulnerability of spoofing attacks, in which an attacker attempts to subvert the system using an artefact that represents the biometric features of a genuine user. In this paper, techniques for presentation attack detection based on gaze information are presented, with a focus on their applicability to mobile devices. Novel features that rely on directing the user's gaze and establishing its behaviour are explored for detecting spoofing attempts. The attack scenarios considered include projected photos, 2D masks, and 3D masks. The proposed features, and systems based on them, were extensively evaluated using data captured from volunteers performing genuine and spoofing attempts. The results indicate that gaze-based features have the potential to discriminate between genuine attempts and impostor attacks on mobile devices.