Professor Farzin Deravi

Professor of Information Engineering
Head of School

About

Farzin Deravi obtained his first degree in Engineering and Economics from the University of Oxford in 1981 and his M.Sc. in Electronic Engineering from Imperial College, University of London, in 1982. From 1983 to 1987 he worked as a research assistant at the University of Wales, Swansea, where he obtained his Ph.D. In 1987 he joined the academic staff at Swansea, where he was active in teaching and research in the Department of Electrical and Electronic Engineering. In 1998 he joined the Department of Electronics at the University of Kent, where he is Professor of Information Engineering.

His current research interests include pattern recognition, information fusion, computer vision, image processing, fractals and self-similarity, biometrics, bio-signals and assistive technologies.

Professor Deravi is a Member of the Institute of Electrical and Electronics Engineers, the Institution of Engineering and Technology, and the British Machine Vision Association. He was the founding chair of the IET Professional Network on Visual Information Engineering and is currently Editor-in-Chief of the IET Image Processing journal. He also serves on BSI and ISO committees on biometric standardisation.

Research interests

Pattern Recognition, Image Processing, Computer Vision

Publications

Article

  • Ali, A., Deravi, F. and Hoque, S. (2018). Gaze Stability for Liveness Detection. Pattern Analysis and Applications [Online] 21:437-449. Available at: http://dx.doi.org/10.1007/s10044-016-0587-2.
    Spoofing attacks on biometric systems are one of the major impediments to their use for secure unattended applications. This paper explores features for face liveness detection based on tracking the gaze of the user. In the proposed approach, a visual stimulus is placed on the display screen, at apparently random locations, which the user is required to follow while their gaze is measured. This visual stimulus appears in such a way that it repeatedly directs the gaze of the user to specific positions on the screen. Features extracted from sets of collinear and colocated points are used to estimate the liveness of the user. Data is collected from genuine users tracking the stimulus with natural head/eye movements and impostors holding a photograph, looking through a 2D mask or replaying the video of a genuine user. The choice of stimulus and features are based on the assumption that natural head/eye coordination for directing gaze results in a greater accuracy and thus can be used to effectively differentiate between genuine and spoofing attempts. Tests are performed to assess the effectiveness of the system with these features in isolation as well as in combination with each other using score fusion techniques. The results from the experiments indicate the effectiveness of the proposed gaze-based features in detecting such presentation attacks.
  • Putjorn, P. et al. (2018). Investigating the use of sensor-based IoET to facilitate learning for children in rural Thailand. PLOS ONE [Online] 13:e0201875. Available at: https://doi.org/10.1371/journal.pone.0201875.
    A novel sensor-based Internet of Educational Things (IoET) platform named OBSY was iteratively designed, developed and evaluated to support education in rural regions of Thailand. To assess the effectiveness of this platform, a study was carried out at four primary schools located near the northern Thai border with 244 students and 8 teachers. Participants were asked to carry out three science-based learning activities and were measured for improvements in learning outcomes and learning engagement. Overall, the results showed that students in the IoET group who had used OBSY to learn showed significantly higher learning outcomes and better learning engagement than those in the control condition. In addition, for those in the IoET group, there was no significant effect of gender, home location (urban or rural), age, prior experience with technology or ethnicity on learning outcomes. For learning engagement, only age was found to influence interest/enjoyment. The study demonstrated the potential of IoET technologies in underprivileged areas, through a co-design approach with teachers and students, taking into account the local contexts.
  • Yang, S. and Deravi, F. (2017). On the Usability of Electroencephalographic Signals for Biometric Recognition: A Survey. IEEE Transactions on Human-Machine Systems [Online]. Available at: https://dx.doi.org/10.1109/THMS.2017.2682115.
    Research on using electroencephalographic signals for biometric recognition has made considerable progress and is attracting growing attention in recent years. However, the usability aspects of the proposed biometric systems in the literature have not received significant attention. In this paper, we present a comprehensive survey to examine the development and current status of various aspects of electroencephalography (EEG)-based biometric recognition. We first compare the characteristics of different stimuli that have been used for evoking biometric information-bearing EEG signals. This is followed by a survey of the reported features and classifiers employed for EEG biometric recognition. To highlight the usability challenges of using EEG for biometric recognition in real-life scenarios, we propose a novel usability assessment framework which combines a number of user-related factors to evaluate the reported systems. The evaluation scores indicate a pattern of increasing usability, particularly in recent years, of EEG-based biometric systems as efforts have been made to improve the performance of such systems in realistic application scenarios. We also propose how this framework may be extended to take into account ageing effects as more performance data becomes available.
  • Douglas, K., Ang, C. and Deravi, F. (2017). Reclaiming the truth. The Psychologist [Online] 30:36-42. Available at: https://thepsychologist.bps.org.uk/volume-30/june-2017/reclaiming-truth.
  • Yang, S., Deravi, F. and Hoque, S. (2016). Task sensitivity in EEG biometric recognition. Pattern Analysis and Applications [Online]:1-13. Available at: http://dx.doi.org/10.1007/s10044-016-0569-4.
    This work explores the sensitivity of electroencephalographic-based biometric recognition to the type of task subjects are required to perform while their brain activity is being recorded. A novel wavelet-based feature is used to extract identity information from a database of 109 subjects who performed four different motor movement/imagery tasks while their data was recorded. Training and testing of the system were performed using a number of experimental protocols to establish whether training with one type of task and testing with another would significantly affect the recognition performance. Also, experiments were conducted to evaluate the performance when a mixture of data from different tasks was used for training. The results suggest that performance is not significantly affected when there is a mismatch between training and test tasks. Furthermore, as the amount of data used for training is increased using a combination of data from several tasks, the performance can be improved. These results indicate that a more flexible approach may be incorporated in data collection for EEG-based biometric systems, which could facilitate their deployment and improved performance.
  • Deravi, F. et al. (2015). Usability and Performance Measure of a Consumer-grade Brain Computer Interface System for Environmental Control by Neurological Patients. International Journal of Engineering and Technology Innovation [Online] 5:165-177. Available at: http://sparc.nfu.edu.tw/~ijeti/download.php?file_id=103.
    With the increasing incidence and prevalence of chronic brain injury patients and the current financial constraints in healthcare budgets, there is a need for a more intelligent way to realise the current practice of neuro-rehabilitation service provision. Brain-computer interface (BCI) systems have the potential to address this issue to a certain extent, but only if carefully designed research can demonstrate that these systems are accurate, safe, cost-effective, able to increase patient/carer satisfaction and able to enhance their quality of life. Therefore, one of the objectives of the proposed study was to examine whether participants (patients with brain injury and a sample of the reference population) were able to use a low-cost BCI system (Emotiv EPOC) to interact with a computer and to communicate via spelling words. Patients who participated in the study did not have prior experience of using BCI headsets, so as to measure the user experience on first exposure to BCI training. To measure emotional arousal of participants we used an electrodermal activity sensor (Qsensor by Affectiva). For the signal processing and feature extraction of imagery controls, the Cognitive Suite of Emotiv's Control Panel was used. Our study reports the key findings based on data obtained from a group of patients and a sample reference population and presents the implications for the design and development of a BCI system for communication and control. The study also evaluates the performance of the system when used practically in the context of an acute clinical environment.
  • Wilkinson, D. et al. (2015). Emotional Correlates of Unirhinal Odor Identification. Laterality: Asymmetries of Body, Brain and Cognition [Online] 21:85-99. Available at: http://dx.doi.org/10.1080/1357650X.2015.1075546.
    It seems self-evident that smell profoundly shapes emotion, but less clear is the nature of this interaction. Here we sought to determine whether the ability to identify odors co-varies with self-reported feelings of empathy and emotional expression recognition, as predicted if the two capacities draw on a common resource. Thirty-six neurotypical volunteers were administered the Alberta Smell Test, the Interpersonal Reactivity Index and an emotional expression recognition task. Statistical analyses indicated that feelings of emotional empathy positively correlated with odor discrimination in the right nostril, while the recognition of happy and fearful facial expressions positively correlated with odor discrimination in the left nostril. These results uncover new links between olfactory discrimination and emotion which, given the ipsilateral configuration of the olfactory projections, point towards intra- rather than inter-hemispheric interaction. The results also provide novel support for the proposed lateralisation of emotional empathy and the recognition of facial expression, and give reason to further explore the diagnostic sensitivity of smell tests, because reduced sensitivity to others’ emotions can mark the onset of certain neurological diseases.
  • Putjorn, P., Ang, C. and Deravi, F. (2014). Understanding tablet computer usage among primary school students in underdeveloped areas: Students’ technology experience, learning styles and attitudes. Computers in Human Behavior [Online] 55:1131-1144. Available at: http://doi.org/10.1016/j.chb.2014.09.063.
    The need to provide low-cost learning technologies such as laptops or tablet computers in developing countries, with the aim of bridging the digital divide and addressing uneven standards of education quality, has been widely recognised by previous studies. With this aim in mind, the Thai Government launched the “One Tablet PC Per Child” (OTPC) policy and distributed 800,000 tablet computers to grade-one students nationwide in 2012. However, there is limited empirical evidence on the effectiveness of tablet computer use in the classroom. Our study examined students’ learning styles, attitudes towards tablet computer use and how these are linked to their academic performance. The study investigated 213 grade-two students in economically underprivileged regions of North Thailand. Data collection was based on questionnaires filled in by the students with the help of their teachers. Our results overall suggested that there were some key significant differences in relation to students’ gender and home locations (urban vs. rural). In contrast to existing studies, both genders at this stage had similar technology experience and positive attitudes towards tablet computer use. However, we found girls had a stronger visual learning style (M = 4.23, p < .032) than boys (M = 3.96). Where home location was concerned, rural students had higher learning competitiveness and higher levels of anxiety towards tablet use (M = 1.71, p < .028) than urban students (M = 1.33). Additionally, we also found that technology experience, collaborative learning style and anxiety affected students’ academic performance.
  • Radu, P. et al. (2013). A Colour Iris Recognition System Employing Multiple Classifier Techniques. ELCVIA Electronic Letters on Computer Vision and Image Analysis [Online] 12:54-65. Available at: http://elcvia.cvc.uab.es/article/view/520.
    The randomness of iris texture has allowed researchers to develop biometric systems with almost flawless accuracies. However, a common drawback of the majority of existing iris recognition systems is the constrained environment in which the user is enrolled and recognized. The iris recognition systems typically require a high quality iris image captured under near infrared illumination. A desirable property of an iris recognition system is to be able to operate on colour images, whilst maintaining a high accuracy. In the present work we propose an iris recognition methodology which is designed to cope with noisy colour iris images. There are two main contributions of this paper: first, we adapt standard iris features proposed in the literature for near infrared images by applying a feature selection method on features extracted from various colour channels; second, we introduce a Multiple Classifier System architecture to enhance the recognition accuracy of the biometric system. With a feature size of only 360 real-valued components, the proposed iris recognition system performs with a high accuracy on the UBIRISv1 dataset, in both identification and verification scenarios.
  • Radu, P. et al. (2013). A review of information fusion techniques employed in iris recognition systems. International Journal of Advanced Intelligence Paradigms [Online] 4:211-240. Available at: http://dx.doi.org/10.1504/IJAIP.2012.052067.
    Iris recognition has been shown to be one of the most reliable biometric authentication methods. The majority of iris recognition systems which have been developed require a constrained environment to enrol and recognise the user. If the user is not cooperative or the capture environment changes then the accuracy of the iris recognition system may decrease significantly. To minimise the effect of such limitations, possible solutions include the use of multiple channels of information such as using both eyes or extracting more iris feature types and subsequently employing an efficient fusion method. In this paper, we present a review of iris recognition systems using information from multiple sources that are fused in different ways or at different levels. A categorisation of the iris recognition systems incorporating multiple classifier systems is also presented. As a new desirable dimension of a biometric system, besides those proposed in the literature, the mobility of such a system is introduced in this work. The review charts the path towards greater flexibility and robustness of iris recognition systems through the use of information fusion techniques and points towards further developments in the future leading to mobile and ubiquitous deployment of such systems.
  • Sheng, W. et al. (2012). Reliable and Secure Encryption Key Generation from Fingerprints. Information Management & Computer Security [Online] 20:207-221. Available at: http://dx.doi.org/10.1108/09685221211247307.
    Purpose ‐ Biometric authentication, which requires storage of biometric templates and/or encryption keys, raises a matter of serious concern, since the compromise of templates or keys necessarily compromises the information secured by those keys. To address such concerns, efforts based on dynamic key generation directly from the biometrics have recently emerged. However, previous methods often have quite unacceptable authentication performance and/or small key spaces and therefore are not viable in practice. The purpose of this paper is to propose a novel method which can reliably generate long keys while requiring storage of neither biometric templates nor encryption keys. Design/methodology/approach ‐ This proposition is achieved by devising the use of fingerprint orientation fields for key generation. Additionally, the keys produced are not permanently linked to the orientation fields, hence allowing them to be replaced in the event of key compromise. Findings ‐ The evaluation demonstrates that the proposed method for dynamic key generation can offer both good reliability and security in practice, and outperforms other related methods. Originality/value ‐ In this paper, the authors propose a novel method which can reliably generate long keys while requiring storage of neither biometric templates nor encryption keys. This is achieved by devising the use of fingerprint orientation fields for key generation. Additionally, the keys produced are not permanently linked to the orientation fields, hence allowing them to be replaced in the event of key compromise.
  • Hoque, S., Azhar, M. and Deravi, F. (2011). ZOOMETRICS - Biometric Identification of Wildlife using Natural Body Marks. International Journal of Bio-Science and Bio-Technology 3:45-53.
    Physiological and behavioural characteristics have been used to identify humans for quite some time now. Many wild animals also show distinctive natural body marks that can be used to identify them individually. Scientists in conservation research often use this approach, but the process is manual and can be slow and error-prone. This paper reports on an investigation into using biometric techniques for the identification of an important endangered species – the Great Crested Newt. The paper reports on novel techniques for extraction of the belly patterns of these animals as a source of biometric information. Features and classification techniques used for their automatic recognition are presented. The proposed approach is tested on a database of newts under investigation by conservationists. Preliminary studies are also reported on the ageing effects when belly images are compared over a number of years. The results suggest that such biometric techniques may be suitable for developing effective and flexible identification of wildlife in the field.
  • McConnon, G. et al. (2011). An Investigation of Quality Aspects of Noisy Colour Images for Iris Recognition. International Journal of Signal Processing, Image Processing and Pattern Recognition [Online] 4:165-178. Available at: http://www.sersc.org/journals/IJSIP/vol4_no3.php.
    The UBIRIS.v2 dataset is a set of noisy colour iris images designed to simulate visible wavelength iris acquisition at-a-distance and on-the-move. This paper presents an examination of some of the characteristics that can impact the performance of iris recognition in the UBIRIS.v2 dataset. This dataset consists of iris images in the visible wavelength and was designed to be noisy. The quality and characteristics of these images are surveyed by examining seven different channels of information extracted from them: red, green, blue, intensity, value, lightness, and luminance. We present new quality metrics to assess the image characteristics with regard to focus, entropy, reflections, pupil constriction and pupillary boundary contrast. The results clearly suggest the existence of different characteristics for these channels and could be exploited for use in the design and evaluation of iris recognition systems.
  • Radu, P. et al. (2011). Information Fusion for Unconstrained Iris Recognition. International Journal of Hybrid Information Technology 4:1-12.
    The majority of the iris recognition algorithms available in the literature were developed to operate on near infrared images. A desirable feature of iris recognition systems with reduced constraints, such as potential operability on commonly available hardware, is the ability to work with images acquired under visible wavelength illumination. Unlike in near infrared images, in colour iris images the pigment melanin present in the iris tissue causes the appearance of reflections, which are one of the major noise factors in such images. In this paper we present an iris recognition system which is able to cope with noisy colour iris images by employing score-level fusion between different channels of the iris image. The robustness of the proposed approach is tested on three colour iris image datasets, ranging from images captured with professional cameras in both a constrained environment and a less cooperative scenario to iris images acquired with a mobile phone.
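
Several of the colour iris recognition papers listed above rely on score-level fusion of matching scores obtained from different colour channels, typically after normalisation and a weighted combination. The short Python sketch below illustrates that general idea under stated assumptions: the channel names, weights, min-max normalisation and function names are illustrative choices, not the configurations published in the papers.

    # Illustrative sketch of score-level fusion across colour channels
    # (assumed weights and normalisation; not the published configuration).
    from typing import Dict, List

    def min_max_normalise(scores: List[float]) -> List[float]:
        """Map raw matcher scores to [0, 1] so channels are comparable."""
        lo, hi = min(scores), max(scores)
        if hi == lo:                      # degenerate case: all scores identical
            return [0.5] * len(scores)
        return [(s - lo) / (hi - lo) for s in scores]

    def fuse_channel_scores(channel_scores: Dict[str, List[float]],
                            weights: Dict[str, float]) -> List[float]:
        """Weighted-average fusion of per-channel match scores for a set of comparisons."""
        channels = list(channel_scores)
        normalised = {c: min_max_normalise(channel_scores[c]) for c in channels}
        total_weight = sum(weights[c] for c in channels)
        n = len(next(iter(channel_scores.values())))
        return [sum(weights[c] * normalised[c][i] for c in channels) / total_weight
                for i in range(n)]

    # Example: three comparisons scored independently in the red, green and blue channels.
    scores = {"red": [0.62, 0.30, 0.81], "green": [0.58, 0.35, 0.77], "blue": [0.50, 0.40, 0.70]}
    print(fuse_channel_scores(scores, weights={"red": 0.4, "green": 0.4, "blue": 0.2}))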

Book section

  • Deravi, F. (2015). Biometric Systems, Agent-Based. in: Li, S. Z. and Jain, A. K. eds. Encyclopedia of Biometrics. Springer US, pp. 243-248. Available at: http://doi.org/10.1007/978-1-4899-7488-4_290.
    Agent-based biometric systems use the computational notion of intelligent autonomous agents, which assist users and act on their behalf, to build systems that intelligently facilitate biometrics-enabled transactions. Such systems can learn from their users and adapt to application needs, thus enhancing recognition performance and usability.
  • Deravi, F. et al. (2014). Multibiometrics and Data Fusion Standardization. in: Li, S. Z. and Jain, A. K. eds. Encyclopedia of Biometrics. Springer, pp. 1-10. Available at: http://dx.doi.org/10.1007/978-3-642-27733-7_229-2.
  • Deravi, F. (2012). Intelligent Biometrics. in: Mordini, E. and Tzovaras, D. eds. Second Generation Biometrics: The Ethical, Legal and Social Context. Springer. Available at: http://www.springer.com/social+sciences/applied+ethics/book/978-94-007-3891-1.

Conference or workshop item

  • Pan, S. and Deravi, F. (2018). Facial Spoofing Detection Using Temporal Texture Co-occurrence. in: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA). USA: IEEE. Available at: https://doi.org/10.1109/ISBA.2018.8311464.
    Biometric person recognition systems based on facial images are increasingly used in a wide range of applications. However, the potential for face spoofing attacks remains a significant challenge to the security of such systems, and finding better means of detecting such presentation attacks has become a necessity. In this paper, we propose a new spoofing detection method based on temporal changes in texture information. A novel temporal texture descriptor, the Temporal Co-occurrence Adjacent Local Binary Pattern (TCoALBP), is proposed to characterise the pattern of change in a short video sequence. Experimental results using the CASIA-FA, Replay-Attack and MSU-MFSD datasets show the effectiveness of the proposed technique on these challenging datasets.
  • Alsufyani, N. et al. (2018). Biometric Presentation Attack Detection using Gaze Alignment. in: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA). Available at: https://doi.org/10.1109/ISBA.2018.8311472.
    Face recognition systems have improved rapidly in recent decades. However, their wide deployment has been hindered by their vulnerability to spoofing attacks. In this paper, we present a challenge-response method to detect attacks on face recognition systems by recording the gaze of a user in response to a moving stimulus. The proposed system extracts eye centres in the captured frames and computes features from these landmarks to ascertain whether the gaze aligns with the challenge trajectory, in order to detect spoofing attacks. The system is tested using a new database simulating mobile device use, with 70 subjects attempting three types of spoof attack (projected photo, looking through a 2D mask or wearing a 3D mask). Evaluations on the collected database show that the proposed approach performs favourably when compared with state-of-the-art methods.
  • Shi, P. and Deravi, F. (2017). Facial Action Units for Presentation Attack Detection. in: 2017 Seventh International Conference on Emerging Security Technologies (EST). IEEE. Available at: https://doi.org/10.1109/EST.2017.8090400.
    This paper is concerned with biometric spoofing detection using the dynamics of natural facial movements as a feature. Facial muscle movement information can be extracted from video sequences and encoded using the Facial Action Coding System (FACS). The proposed feature constructs a Facial Action Units Histogram (FAUH) to encapsulate this information for the detection of biometric presentation attacks without the need for active user cooperation. The performance of the proposed system was tested on two datasets: CASIA-FASD and Replay Attack and produced encouraging results. Further improvements may be possible by integrating this source of information with other indicators for further protecting biometric systems from subversion.
  • Ali, A. et al. (2017). Biometric Counter-spoofing for Mobile Devices using Gaze Information. in: 7th International Conference on Pattern Recognition and Machine Intelligence. Springer, pp. 11-18. Available at: https://doi.org/10.1007/978-3-319-69900-4_2.
    With the rise in the use of biometric authentication on mobile devices, it is important to address the security vulnerability of spoofing attacks where an attacker using an artefact representing the biometric features of a genuine user attempts to subvert the system. In this paper, techniques for presentation attack detection are presented using gaze information with a focus on their applicability for use on mobile devices. Novel features that rely on directing the gaze of the user and establishing its behaviour are explored for detecting spoofing attempts. The attack scenarios considered in this work include the use of projected photos, 2D and 3D masks. The proposed features and the systems based on them were extensively evaluated using data captured from volunteers performing genuine and spoofing attempts. The results of the evaluations indicate that gaze-based features have the potential for discriminating between genuine attempts and imposter attacks on mobile devices.
  • Putjorn, P. et al. (2017). Designing a ubiquitous sensor-based platform to facilitate learning for young children in Thailand. in: MobileHCI 2017: 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. ACM. Available at: http://dx.doi.org/10.1145/3098279.3098525.
    Education plays an important role in helping developing nations reduce poverty and improve quality of life. Ubiquitous and mobile technologies could greatly enhance education in such regions by providing augmented access to learning. This paper presents a three-year iterative study in which a ubiquitous sensor-based learning platform was designed, developed and tested to support science learning among primary school students in underprivileged Northern Thailand. The platform is built upon the school’s existing mobile devices and was expanded to include sensor-based technology. Throughout the iterative design process, observations, interviews and group discussions were carried out with stakeholders. This led to key reflections and design concepts such as the value of injecting anthropomorphic qualities into the learning device and providing personally and culturally relevant learning experiences through technology. Overall, the results outlined in this paper help contribute to knowledge regarding the design, development and implementation of ubiquitous sensor-based technology to support learning.
  • Alsufyani, H., Hoque, S. and Deravi, F. (2017). Automated Skin Region Quality Assessment for Texture-based Biometrics. in: 2017 Seventh International Conference on Emerging Security Technologies (EST). IEEE, pp. 169-174. Available at: https://doi.org/10.1109/EST.2017.8090418.
    Designing a biometric system based solely on skin texture is of interest because the face is sometimes occluded by hair or artefacts in many real-world contexts. This work presents a novel framework for the assessment of skin-based biometric systems incorporating skin quality information. The quality or purity of the extracted skin region is automatically established using pixel colour models prior to biometric processing. Facial landmarks are detected to facilitate automated extraction of facial regions of interest. Although the present study is confined to the forehead region, the idea can be extended to other skin regions. Local Binary Patterns (LBP) and Gabor wavelet filters are utilised to extract skin features. Using the publicly available XM2VTS database, the experimental results show that the system provides promising performance when compared to other commonly used techniques.
  • Yassin, D., Hoque, S. and Deravi, F. (2016). Face Recognition Across Ages. in: 6th Brunei International Conference on Engineering and Technology 2016 (BICET2016).
    This paper is concerned with the effect of ageing on biometric systems and particularly its impact on face recognition systems. Being biological tissue, the facial biometric trait undergoes significant changes as a person ages. Consequently, developing biometric applications for long-term use becomes a particularly challenging task. The idea behind the investigation presented here is that biometric systems have uneven difficulty in recognising people of different ages. Some algorithms may perform better for certain age groups. Therefore, a carefully optimised multi-algorithmic system can reduce the error rates. A subset of 100 subjects from the MORPH-II database has been selected to test the performance of a face verification system. The population is split into 5 age bands (≤19, 20-29, 30-39, 40-49, ≥50 years) based on their age at enrolment. The facial image database used in the experiments contains images acquired over a period of five years. In the proposed multi-classifier scheme, features extracted from face images are transformed by different projection algorithms prior to matching. It was observed that all the age groups showed improved performance when compared to the single-classifier error rates. Of all the groups, the EER was highest for the younger population (≤19-year-olds).
  • Alsufyani, H., Hoque, S. and Deravi, F. (2016). Exploring the Potential of Facial Skin Regions for the Provision of Identity Information. in: The 7th IET International Conference on Imaging for Crime Detection and Prevention (ICDP-16). Available at: http://dx.doi.org/10.1049/ic.2016.0084.
    This work presents a novel framework to investigate the possibility of using texture information from facial skin regions for biometric person recognition. Such information will be practically useful when the entire facial image is not available for identifying individuals. Four facial regions have been investigated (i.e. forehead, right cheek, left cheek, and chin) since they are relatively easy to distinguish in frontal images. Facial landmarks are automatically detected to facilitate the extraction of these facial regions of interest. A new skin detection technique is applied to identify regions with significant skin content. Each such skin region is then processed independently using features based on Local Binary Patterns and Gabor wavelet filters. Feature fusion is then used prior to classification of the images. Experiments were carried out using the publicly available Skin Segmentation and XM2VTS databases to evaluate the skin detection technique and the biometric recognition performance respectively. The results indicate that the skin detection algorithm provided acceptable results when compared with other state-of-the-art skin detection algorithms. In addition, the forehead and chin regions were found to provide a rich source of biometric information.
  • Morgado, L. et al. (2015). Cities in citizens’ hands. in: 6th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2015).
  • Putjorn, P. et al. (2015). Exploring the Internet of "Educational Things" (IoET) in rural underprivileged areas. in: 2015 12th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, pp. 1-5. Available at: http://dx.doi.org/10.1109/ECTICon.2015.7207125.
  • Putjorn, P., Ang, C. and Deravi, F. (2015). Learning IoT without the "I" - Educational Internet of Things in a Developing Context. in: MobiSys 2015 - The 13th International Conference on Mobile Systems, Applications, and Services. New York, NY, USA: ACM, pp. 11-13. Available at: http://doi.org/10.1145/2753488.2753489.
    To provide better education to children from different socio-economic backgrounds, the Thai Government launched the "One Tablet PC Per Child" (OTPC) policy and distributed 800,000 tablet computers to first grade students across the country in 2012. This initiative is an opportunity to study how mobile learning and Internet of Things (IoT) technology can be designed for students in underprivileged areas of northern Thailand. In this position paper, we present a prototype, called OBSY (Observation Learning System), which targets primary science education. OBSY consists of i) a sensor device, developed with the low-cost open source single-board computer Raspberry Pi, housed in a 3D printed case, ii) a mobile device friendly graphical interface displaying visualisations of the sensor data, iii) a self-contained DIY Wi-Fi network which allows the system to operate in an environment with inadequate ICT infrastructure.
  • Yang, S. and Deravi, F. (2014). Novel HHT-Based Features for Biometric Identification Using EEG Signals. in: 22nd International Conference on Pattern Recognition. IEEE Computer Society, pp. 1922-1927. Available at: http://dx.doi.org/10.1109/ICPR.2014.336.
    In this paper we present a novel approach for biometric identification using electroencephalogram (EEG) signals based on features extracted with the Hilbert-Huang Transform (HHT). The instantaneous amplitude and the instantaneous frequency were computed after the HHT, and these were then used to generate the features for classification. The proposed system was evaluated using two publicly available databases in scenarios where only a single electrode is used to provide biometric information. One database (with 122 subjects) has the users viewing a series of pictures while the other one (with 109 subjects) has the users performing motor/imagery tasks. Average identification accuracies of 96% and 99% were reached for these two databases respectively using only a single electrode. These compare favourably with previously published results employing a variety of other features and classification approaches.
  • Yassin, D., Hoque, S. and Deravi, F. (2013). Age Sensitivity of Face Recognition Algorithms. in: 4th International Conference on Emerging Security Technologies (EST 2013). pp. 12-15. Available at: http://dx.doi.org/10.1109/EST.2013.8.
    This paper investigates the performance degradation of facial recognition systems due to the influence of age. A comparative analysis of verification performance is conducted for four subspace projection techniques combined with four different distance metrics. The experimental results based on a subset of the MORPH-II database show that the choice of subspace projection technique and associated distance metric can have a significant impact on the performance of the face recognition system for particular age groups.
  • Guness, S. et al. (2013). A Novel Depth-based Head Tracking and Gesture Recognition System. in: 12th European AAATE (Association for the Advancement of Assistive Technology in Europe) Conference. IOS Press EBooks, pp. 1021-1026. Available at: http://dx.doi.org/10.3233/978-1-61499-304-9-1021.
    This paper presents the architecture for a novel RGB-D based assistive device that incorporates depth as well as RGB data to enhance head tracking and facial gesture based control for severely disabled users. Using depth information it is possible to remove background clutter and therefore achieve a more accurate and robust performance. The system is compared with the CameraMouse, SmartNav and our previous 2D head tracking system. For the RGB-D system, the effective throughput of dwell clicking increased by a third (from 0.21 to 0.30 bits per second) and that of blink clicking doubled (from 0.15 to 0.28 bits per second) compared to the 2D system.
  • Ali, A., Deravi, F. and Hoque, S. (2013). Directional Sensitivity of Gaze-Collinearity Features in Liveness Detection. in: Emerging Security Technologies (EST), 2013 Fourth International Conference on. pp. 8-11. Available at: http://dx.doi.org/10.1109/EST.2013.7.
    To increase the trust in using face recognition systems, these need to be capable of differentiating between face images captured from a real person and those captured from photos or similar artifacts presented at the sensor. Methods have been published for face liveness detection that measure the gaze of a user while the user tracks an object on the screen, which appears randomly at pre-defined places. In this paper we explore the sensitivity of such a system to different stimulus alignments. The aim is to establish whether there is such sensitivity and, if so, to explore how this may be exploited to improve the design of the stimulus. The results suggest that collecting feature points along the horizontal direction is more effective than the vertical direction for liveness detection.
  • Radu, P. et al. (2013). A Multi-algorithmic Colour Iris Recognition System. in: Proceedings of the 5th International Workshop Soft Computing Applications (SOFA). Springer Berlin Heidelberg, pp. 45-56. Available at: http://dx.doi.org/10.1007/978-3-642-33941-7_7.
    The reported accuracies of iris recognition systems are generally higher on near infrared images than on colour RGB images. To increase a colour iris recognition system’s performance, a possible solution is a multi-algorithmic approach with an appropriate fusion mechanism. In the present work, this approach is investigated by fusing three algorithms at the score level to enhance the performance of a colour iris recognition system. The contribution of this paper consists of two novel feature extraction methods for colour iris images, one based on a 3-bit encoding of the 8-neighbourhood and the other based on the gray level co-occurrence matrix. The third algorithm employed uses classical Gabor filters and phase encoding for feature extraction. A weighted average is used for matching score fusion. The efficiency of the proposed iris recognition system is demonstrated on the UBIRISv1 dataset.
  • Radu, P. et al. (2013). Optimizing 2D Gabor Filters for Iris Recognition. in: 4th International Conference on Emerging Security Technologies (EST 2013). IEEE, pp. 47-50. Available at: http://dx.doi.org/10.1109/EST.2013.15.
    The randomness and richness present in the iris texture make 2D Gabor filter bank analysis a suitable technique for iris recognition systems. To accurately characterize complex texture structures using 2D Gabor filters it is necessary to use multiple sets of parameters for this type of filter. This paper proposes a technique for optimizing multiple sets of 2D Gabor filter parameters to gradually enhance the accuracy of an iris recognition system. The proposed methodology is suitable for application to both near infrared and visible spectrum iris images. To illustrate the efficiency of the filter bank design technique, the UBIRISv1 database was used for benchmarking.
  • Radu, P. et al. (2013). A Novel Iris Clustering Approach Using LAB Color Features. in: 4th IEEE International Symposium on Electrical And Electronics Engineering (ISEEE 2013). IEEE, pp. 1-4. Available at: http://dx.doi.org/10.1109/ISEEE.2013.6674362.
    Interesting results of color clustering for the iris images in the UBIRISv1 database are presented. The iris colors are characterized by feature vectors with 80 components corresponding to histogram bins computed in the CIELAB color space. The feature extraction is applied to the first session eye images after undergoing an iris segmentation process. An agglomerative hierarchical algorithm is used to organize 1,205 segmented iris images into 8 clusters based on their color content.
  • Ali, A., Deravi, F. and Hoque, S. (2013). Spoofing attempt detection using gaze colocation. in: Biometrics Special Interest Group (BIOSIG), 2013 International Conference. pp. 1-12.
    Spoofing attacks on biometric systems are one of the major impediments to their use for secure unattended applications. This paper presents a novel method for face liveness detection by tracking the gaze of the user with an ordinary webcam. In the proposed system, an object appears randomly on the display screen, which the user is required to look at while their gaze is measured. The visual stimulus appears in such a way that it repeatedly directs the gaze of the user to specific points on the screen. Features extracted from images captured at these sets of colocated points are used to estimate the liveness of the user. A scenario is investigated where genuine users track the challenge with head/eye movements whereas the impostors hold a photograph of the target user and attempt to follow the stimulus during simulated spoofing attacks. The results from the experiments indicate the effectiveness of the gaze colocation feature in detecting spoofing attacks.
  • Azhar, M., Hoque, S. and Deravi, F. (2012). Automatic identification of wildlife using local binary patterns. in: IET Conference on Image Processing (IPR 2012). pp. B5-B5. Available at: http://dx.doi.org/10.1049/cp.2012.0454.
    Recognition of individuals is necessary for accurate estimation of wildlife population dynamics for effective management and conservation. Identifying individual wildlife by their distinctive body marks is one of the least invasive methods available. Although widely practiced, this method is mostly manual where newly captured images are compared with those in the library of previously captured images. The ability to do so automatically using computer vision techniques can improve speed and accuracy, facilitate on-field matching, and so on. This paper reports the results of using a texture based image feature descriptor, the Local Binary Patterns (LBP), for the automatic identification of an important endangered species — The Great Crested Newt (GCN). The proposed approach is tested on a database of newts' distinctive belly images which are treated as a source of biometric information. Results indicate that when both appearance and spatial information of newt belly patterns are encoded into a composite LBP feature vector, the discriminating power of the system can improve significantly.
  • Radu, P. et al. (2012). Image Enhancement vs Feature Fusion in Colour Iris Recognition. in: Emerging Security Technologies (EST), 2012 Third International Conference. IEEE, pp. 53-57. Available at: http://dx.doi.org/10.1109/EST.2012.33.
    In iris recognition, most research has been conducted on operation under near infrared illumination. For an iris recognition system to be deployed on common hardware devices, such as laptops or mobile phones, the ability to work with visible spectrum iris images is necessary. Two of the main possible approaches to coping with noisy images in a colour iris recognition system are either to apply image enhancement techniques or to extract multiple types of features and subsequently employ an efficient fusion mechanism. The contribution of the present paper consists of comparing which of the two above-mentioned approaches is best in both identification and verification scenarios of a colour iris recognition system. The efficiency of the two approaches is demonstrated on the UBIRISv1 dataset.
  • Ali, A., Deravi, F. and Hoque, S. (2012). Liveness Detection Using Gaze Collinearity. in: 2012 Third International Conference on Emerging Security Technologies. IEEE, pp. 62-65. Available at: http://dx.doi.org/10.1109/EST.2012.12.
    This paper presents a liveness detection method based on tracking the gaze of the user of a face recognition system using a single camera. The user is required to follow a visual animation of a moving object on a display screen while his/her gaze is measured. The visual stimulus is designed to direct the gaze of the user to sets of collinear points on the screen. Features based on the measured collinearity of the observed gaze are then used to discriminate between live attempts at responding to this challenge and those conducted by “impostors” holding photographs and attempting to follow the stimulus. An initial set of experiments is reported that indicates the effectiveness of the proposed method in detecting this class of spoofing attacks. (A minimal illustrative sketch of such a collinearity measure is given after this list.)
  • Radu, P. et al. (2012). A Visible Light Iris Recognition System using Colour Information. in: 9th IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2012). Acta Press. Available at: http://dx.doi.org/10.2316/P.2012.778-019.
    The iris has been shown to be a highly reliable biometric modality with almost perfect authentication accuracy. However, a classical iris recognition system operates under near infrared illumination, which is a major constraint for a range of applications. In this paper, we propose an iris recognition system which is able to cope with noisy colour iris images by employing image processing techniques together with a Multiple Classifier System to fuse the information from various colour channels. There are two main contributions in the present work: first, we adapt standard iris features, proposed in the literature for near infrared images, to match the characteristics of colour iris images; second, we introduce a robust fusion mechanism to combine the features from various colour channels. With a feature size of only 360 real numbers, the efficiency of the proposed biometric system is demonstrated on the UBIRISv1 dataset for both identification and verification scenarios.
  • Guness, S. et al. (2012). Evaluation of vision-based head-trackers for assistive devices. in: 34th Annual International Conference of the IEEE EMBS. pp. 4804-4807. Available at: http://dx.doi.org/10.1109/EMBC.2012.6347068.
    This paper presents a new evaluation methodology for assistive devices employing head-tracking systems based on an adaptation of the Fitts Test. This methodology is used to compare the effectiveness and performance of a new vision-based head tracking system using face, skin and motion detection techniques with two existing head tracking devices and a standard mouse. The application context and the abilities of the user are combined with the results from the modified Fitts Test to help determine the most appropriate devices for the user. The results suggest that this modified form of the Fitts test can be effectively employed for the comparison of different access technologies.
  • McConnon, G. et al. (2012). Impact of common ophthalmic disorders on iris segmentation. in: Biometrics (ICB), 2012 5th IAPR International Conference. IEEE, pp. 277-282. Available at: http://dx.doi.org/10.1109/ICB.2012.6199820.
    As iris recognition moves from constrained indoor and near-infrared systems towards unconstrained on-the-move and at-a-distance systems, possibly using visible light illumination, interest in measurement of the fidelity of the acquired images and their impact on recognition performance has grown. However, the impact of the subject's physiological characteristics on the nature of the acquired images has received little attention. In this paper we catalog a selection of the most common ophthalmic disorders and investigate some of their characteristics including their prevalence and possible impact on recognition performance. The paper also includes an experimental exploration of the effect of such conditions on segmentation of the iris image.
  • Yang, S. and Deravi, F. (2012). On the Effectiveness of EEG Signals as a Source of Biometric Information. in: 3rd International Conference on Emerging Security Technologies. pp. 49-52. Available at: http://dx.doi.org/10.1109/EST.2012.8.
    This paper presents a biometric person recognition system using electroencephalogram (EEG) signals as the source of identity information. The wavelet transform is used for extracting features from raw EEG signals, which are then classified using a support vector machine and a k-nearest-neighbour classifier to recognize the individuals. A number of stimuli are explored using up to 18 subjects to generate person-specific EEG patterns and to explore which type of stimulus may achieve better recognition rates. A comparison between two kinds of tasks - motor movement and motor imagery - appears to indicate that imagery tasks show better and more stable performance than movement tasks. The paper also reports on the impact of the number and positioning of the electrodes on performance.
  • Guness, S. et al. (2012). Developing a vision based gesture recognition system to control assistive technology in neuro-disability. in: 2012 Annual Conference, American Congress of Rehabilitation Medicine (2012 ACRM-ASNR). Elsevier Science B.V., p. e1. Available at: http://dx.doi.org/10.1016/j.apmr.2012.08.202.
  • Radu, P. et al. (2011). A Versatile Iris Segmentation Algorithm. in: 2011 BIOSIG Conference on Biometrics and Security.
  • Radu, P. et al. (2011). Information Fusion for Unconstrained Iris Recognition. in: International Conference on Emerging Security Technologies (EST 2011).
  • Deravi, F. and Guness, S. (2011). Gaze Trajectory as a Biometric Modality. in: Biosignals 2011.
  • Sirlantzis, K. et al. (2010). Nomad Biometric Authentication (NOBA): Towards Mobile and Ubiquitous Person Identification. in: International Conference on Emerging Security Technologies (EST 2010).
  • McConnon, G. et al. (2010). A Novel Interactive Biometric Passport Photograph Alignment System. in: 18th European Symposium on Artificial Neural Networks (ESANN).
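
As referenced above, the gaze-based liveness detection papers in this list use the collinearity (or colocation) of measured gaze points, recorded while a stimulus visits known screen positions, as the cue for separating live users from spoofing attempts. The sketch below shows one simple way such a collinearity feature could be computed: fit a straight line to the gaze estimates and use the residual as the feature. The least-squares formulation, function name and example values are assumptions for illustration, not the published feature definitions.

    # Illustrative collinearity residual for gaze-based liveness detection
    # (assumed formulation; not the published feature).
    import numpy as np

    def collinearity_residual(gaze_x: np.ndarray, gaze_y: np.ndarray) -> float:
        """RMS distance of gaze points from their best-fit straight line."""
        a, b = np.polyfit(gaze_x, gaze_y, deg=1)              # fit y = a*x + b
        distances = np.abs(a * gaze_x - gaze_y + b) / np.hypot(a, 1.0)
        return float(np.sqrt(np.mean(distances ** 2)))

    # Example: near-collinear gaze points (live attempt) vs. scattered points (poor tracking).
    live = collinearity_residual(np.array([0.0, 1, 2, 3, 4]), np.array([0.1, 1.0, 2.1, 2.9, 4.0]))
    spoof = collinearity_residual(np.array([0.0, 1, 2, 3, 4]), np.array([0.1, 2.5, 0.8, 3.9, 1.2]))
    print(live, spoof)   # a threshold on the residual could then act as the liveness score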

Forthcoming

  • Ellavarason, E., Guest, R. and Deravi, F. (2018). A Framework for Assessing Factors Influencing User Interaction for Touch-based Biometrics. in: 26th European Signal Processing Conference (Eusipco 2018).
    Touch-based behavioural biometrics is an emerging technique for passive and transparent user authentication on mobile devices. It utilises dynamics mined from users’ touch actions to model behaviour. The interaction of the user with the mobile device using touch is an important aspect to investigate, as interaction errors can influence the stability of sample donation and the overall performance of the implemented biometric authentication system. In this paper, we outline a data collection framework for touch-based behavioural biometric modalities (signature, swipe and keystroke dynamics) that will enable us to study the influence of environmental conditions and body movement on touch interaction. In order to achieve this, we have designed a multi-modal behavioural biometric data capturing application, “Touchlogger”, that logs touch actions exhibited by the user on the mobile device. The novelty of our framework lies in the collection of users’ touch data under various usage scenarios and environmental conditions. We aim to collect touch data in two different environments, indoors and outdoors, along with different usage scenarios: whilst the user is seated at a desk, walking on a treadmill, walking outdoors and seated on a bus. The range of collected data may include swiping, signatures using finger and stylus, alphabetic and numeric keystroke data, and writing patterns using a stylus.
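
To make the planned data collection more concrete, the sketch below shows the kind of record a touch-logging tool such as the “Touchlogger” application described above might store, together with the modality and usage-scenario labels needed for the indoor/outdoor comparisons. The field names, label values and CSV layout are illustrative assumptions, not the authors' actual schema.

    # Illustrative touch-event log record and CSV writer (assumed schema).
    import csv
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class TouchEvent:
        timestamp_ms: int        # capture time in milliseconds
        x: float                 # touch coordinates in screen pixels
        y: float
        pressure: float          # normalised pressure reported by the touch screen
        action: str              # e.g. "down", "move", "up"
        modality: str            # e.g. "swipe", "signature", "keystroke"
        scenario: str            # e.g. "seated-at-desk", "walking-outdoors"

    def append_events(path: str, events: list) -> None:
        """Append touch events to a CSV log, writing a header if the file is new."""
        fieldnames = list(TouchEvent.__dataclass_fields__)
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            if f.tell() == 0:            # empty file: write the header row first
                writer.writeheader()
            writer.writerows(asdict(e) for e in events)

    # Example: log a short swipe captured while the user is seated at a desk.
    now = int(time.time() * 1000)
    swipe = [TouchEvent(now + i * 16, 100 + 5 * i, 400 - 3 * i, 0.6, "move", "swipe", "seated-at-desk")
             for i in range(5)]
    append_events("touch_log.csv", swipe)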