Dr Sanaul Hoque

Lecturer in Secure Systems Engineering


Research interests

Computer vision, OCR, biometrics, security and encryption, multi-expert fusion, document modelling.


Showing 50 of 66 total publications in the Kent Academic Repository.

Article

  • Ali, A., Deravi, F. and Hoque, S. (2018). Gaze Stability for Liveness Detection. Pattern Analysis and Applications [Online] 21:437-449. Available at: http://dx.doi.org/10.1007/s10044-016-0587-2.
    Spoofing attacks on biometric systems are one of the major impediments to their use for secure unattended applications. This paper explores features for face liveness detection based on tracking the gaze of the user. In the proposed approach, a visual stimulus is placed on the display screen, at apparently random locations, which the user is required to follow while their gaze is measured. This visual stimulus appears in such a way that it repeatedly directs the gaze of the user to specific positions on the screen. Features extracted from sets of collinear and colocated points are used to estimate the liveness of the user. Data is collected from genuine users tracking the stimulus with natural head/eye movements and impostors holding a photograph, looking through a 2D mask or replaying the video of a genuine user. The choice of stimulus and features is based on the assumption that natural head/eye coordination for directing gaze results in greater accuracy and thus can be used to effectively differentiate between genuine and spoofing attempts. Tests are performed to assess the effectiveness of the system with these features in isolation as well as in combination with each other using score fusion techniques. The results from the experiments indicate the effectiveness of the proposed gaze-based features in detecting such presentation attacks.
  • Yang, S., Deravi, F. and Hoque, S. (2016). Task sensitivity in EEG biometric recognition. Pattern Analysis and Applications [Online]:1-13. Available at: http://dx.doi.org/10.1007/s10044-016-0569-4.
    This work explores the sensitivity of electroencephalographic-based biometric recognition to the type of tasks subjects are required to perform while their brain activity is being recorded. A novel wavelet-based feature is used to extract identity information from a database of 109 subjects who performed four different motor movement/imagery tasks while their data was recorded. Training and testing of the system were performed using a number of experimental protocols to establish whether training with one type of task and testing with another would significantly affect the recognition performance. Also, experiments were conducted to evaluate the performance when a mixture of data from different tasks was used for training. The results suggest that performance is not significantly affected when there is a mismatch between training and test tasks. Furthermore, as the amount of data used for training is increased using a combination of data from several tasks, the performance can be improved. These results indicate that a more flexible approach may be incorporated in data collection for EEG-based biometric systems, which could facilitate their deployment and improve performance.
  • Radu, P. et al. (2013). A review of information fusion techniques employed in iris recognition systems. International Journal of Advanced Intelligence Paradigms [Online] 4:211-240. Available at: http://dx.doi.org/10.1504/IJAIP.2012.052067.
    Iris recognition has been shown to be one of the most reliable biometric authentication methods. The majority of iris recognition systems that have been developed require a constrained environment to enrol and recognise the user. If the user is not cooperative or the capture environment changes, then the accuracy of the iris recognition system may decrease significantly. To minimise the effect of such limitations, possible solutions include the use of multiple channels of information, such as using both eyes or extracting more iris feature types, and subsequently employing an efficient fusion method. In this paper, we present a review of iris recognition systems using information from multiple sources that are fused in different ways or at different levels. A categorisation of the iris recognition systems incorporating multiple classifier systems is also presented. As a new desirable dimension of a biometric system, besides those proposed in the literature, the mobility of such a system is introduced in this work. The review charts the path towards greater flexibility and robustness of iris recognition systems through the use of information fusion techniques and points towards further developments leading to mobile and ubiquitous deployment of such systems.
  • Radu, P. et al. (2013). A Colour Iris Recognition System Employing Multiple Classifier Techniques. ELCVIA Electronic Letters on Computer Vision and Image Analysis [Online] 12:54-65. Available at: http://elcvia.cvc.uab.es/article/view/520.
    The randomness of iris texture has allowed researchers to develop biometric systems with almost flawless accuracies. However, a common drawback of the majority of existing iris recognition systems is the constrained environment in which the user is enrolled and recognised. Iris recognition systems typically require a high-quality iris image captured under near infrared illumination. A desirable property of an iris recognition system is the ability to operate on colour images whilst maintaining a high accuracy. In the present work we propose an iris recognition methodology which is designed to cope with noisy colour iris images. There are two main contributions of this paper: first, we adapt standard iris features proposed in the literature for near infrared images by applying a feature selection method on features extracted from various colour channels; second, we introduce a Multiple Classifier System architecture to enhance the recognition accuracy of the biometric system. With a feature size of only 360 real-valued components, the proposed iris recognition system performs with a high accuracy on the UBIRISv1 dataset, in both identification and verification scenarios.
  • Radu, P. et al. (2011). Information Fusion for Unconstrained Iris Recognition. International Journal of Hybrid Information Technology 4:1-12.
    The majority of the iris recognition algorithms available in the literature were developed to operate on near infrared images. A desirable feature of iris recognition systems with reduced constraints, such as potential operability on commonly available hardware, is the ability to work with images acquired under visible wavelength. Unlike in near infrared images, in colour iris images the pigment melanin present in the iris tissue causes the appearance of reflections, which are one of the major noise factors present in such images. In this paper we present an iris recognition system which is able to cope with noisy colour iris images by employing score-level fusion between different channels of the iris image. The robustness of the proposed approach is tested on three colour iris image datasets, ranging from images captured with professional cameras in both constrained environments and less cooperative scenarios, to iris images acquired with a mobile phone.
  • McConnon, G. et al. (2011). An Investigation of Quality Aspects of Noisy Colour Images for Iris Recognition. International Journal of Signal Processing, Image Processing and Pattern Recognition [Online] 4:165-178. Available at: http://www.sersc.org/journals/IJSIP/vol4_no3.php.
    The UBIRIS.v2 dataset is a set of noisy colour iris images designed to simulate visible wavelength iris acquisition at-a-distance and on-the-move. This paper presents an examination of some of the characteristics that can impact the performance of iris recognition on this dataset. The quality and characteristics of these images are surveyed by examining seven different channels of information extracted from them: red, green, blue, intensity, value, lightness, and luminance. We present new quality metrics to assess the image characteristics with regard to focus, entropy, reflections, pupil constriction and pupillary boundary contrast. The results clearly suggest the existence of different characteristics for these channels, which could be exploited in the design and evaluation of iris recognition systems.
  • Hoque, S., Azhar, M. and Deravi, F. (2011). ZOOMETRICS - Biometric Identification of Wildlife using Natural Body Marks. International Journal of Bio-Science and Bio-Technology 3:45-53.
    Physiological and behavioural characteristics have been used to identify humans for quite some time. Many wild animals also show distinctive natural body marks that can be used to identify them individually. Scientists in conservation research often use this approach, but the process is manual and can be slow and error prone. This paper reports on an investigation into the use of biometric techniques for the identification of an important endangered species – the Great Crested Newt. The paper reports on novel techniques for extraction of the belly patterns of these animals as a source of biometric information. Features and classification techniques used for their automatic recognition are presented. The proposed approach is tested on a database of newts under investigation by conservationists. Preliminary studies are also reported on ageing effects when belly images are compared over a number of years. The results suggest that such biometric techniques may be suitable for developing effective and flexible identification of wildlife in the field.
  • Sirlantzis, K., Hoque, S. and Fairhurst, M. (2008). Diversity in multiple classifier ensembles based on binary feature quantisation with application to face recognition. Applied Soft Computing [Online] 8:437-445. Available at: http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6W86-4N919HX-4&_user=125871&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000010239&_version=1&_urlVersion=0&_userid=125871&md5=65de7ea4b9a8b9f9dd824973e90ec1c4.
    In this paper we present two methods to create multiple classifier systems based on an initial transformation of the original features to the binary domain and subsequent decompositions (quantisation). Both methods are generally applicable although in this work they are applied to grey-scale pixel values of facial images which form the original feature domain. We further investigate the issue of diversity within the generated ensembles of classifiers which emerges as an important concept in classifier fusion and propose a formal definition based on statistically independent classifiers using the kappa statistic to quantitatively assess it. Results show that our methods outperform a number of alternative algorithms applied on the same dataset, while our analysis indicates that diversity among the classifiers in a combination scheme is not sufficient to guarantee performance improvements. Rather, some type of trade-off seems to be necessary between participant classifiers' accuracy and ensemble diversity in order to achieve maximum recognition gains.
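As an illustrative aside (this is not code from the paper, and the function name is my own), the kappa statistic used above to quantify pairwise diversity is Cohen's kappa computed over the label outputs of two classifiers; a value near zero indicates near-independent, i.e. diverse, classifiers:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa between two classifiers' predicted labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance
    given each classifier's label frequencies.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of samples both classifiers label alike.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from the marginal label distributions.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Identical outputs give kappa = 1, while agreement at chance level gives kappa = 0, matching the paper's use of low kappa as a marker of ensemble diversity.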
  • Hoque, S. et al. (2005). Feasibility of Generating Biometric Encryption Keys. Electronics Letters 41:309-311.

Book section

  • O'Brien, J. et al. (2017). Automated Cell Segmentation of Fission Yeast Phase Images - Segmenting Cells from Light Microscopy Images. in: Silveira, M. et al. eds. Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies. Scitepress, pp. 92-99. Available at: http://dx.doi.org/10.5220/0006149100920099.
    Robust image analysis is an important aspect of all cell biology studies. The geometries of cells are critical for developing an understanding of biological processes. Time constraints placed on researchers lead to a narrower focus on what data are collected and recorded from an experiment, resulting in a loss of data. Currently, preprocessing of microscope images is followed by the utilisation and parameterisation of inbuilt functions of various software packages to obtain information. Using the fission yeast Schizosaccharomyces pombe, we propose novel, fully automated segmentation software for cells, with a significantly lower rate of segmentation errors than PombeX on the same dataset.

Conference or workshop item

  • Alsufyani, N. et al. (2018). Biometric Presentation Attack Detection using Gaze Alignment. in: 2018 IEEE 4th International Conference on Identity, Security, and Behavior Analysis (ISBA). Available at: https://doi.org/10.1109/ISBA.2018.8311472.
    Face recognition systems have improved rapidly in recent decades. However, their wide deployment has been hindered by their vulnerability to spoofing attacks. In this paper, we present a challenge-response method to detect attacks on face recognition systems by recording the gaze of a user in response to a moving stimulus. The proposed system extracts eye centres in the captured frames and computes features from these landmarks to ascertain whether the gaze aligns with the challenge trajectory, in order to detect spoofing attacks. The system is tested using a new database simulating mobile device use, with 70 subjects attempting three types of spoof attack (projected photo, looking through a 2D mask, or wearing a 3D mask). Evaluations on the collected database show that the proposed approach performs favourably when compared with state-of-the-art methods.
  • Ali, A. et al. (2017). Biometric Counter-spoofing for Mobile Devices using Gaze Information. in: 7th International Conference on Pattern Recognition and Machine Intelligence. Springer, pp. 11-18. Available at: https://doi.org/10.1007/978-3-319-69900-4_2.
    With the rise in the use of biometric authentication on mobile devices, it is important to address the security vulnerability of spoofing attacks where an attacker using an artefact representing the biometric features of a genuine user attempts to subvert the system. In this paper, techniques for presentation attack detection are presented using gaze information with a focus on their applicability for use on mobile devices. Novel features that rely on directing the gaze of the user and establishing its behaviour are explored for detecting spoofing attempts. The attack scenarios considered in this work include the use of projected photos, 2D and 3D masks. The proposed features and the systems based on them were extensively evaluated using data captured from volunteers performing genuine and spoofing attempts. The results of the evaluations indicate that gaze-based features have the potential for discriminating between genuine attempts and imposter attacks on mobile devices.
  • Alsufyani, H., Hoque, S. and Deravi, F. (2017). Automated Skin Region Quality Assessment for Texture-based Biometrics. in: 2017 Seventh International Conference on Emerging Security Technologies (EST). IEEE, pp. 169-174. Available at: https://doi.org/10.1109/EST.2017.8090418.
    Designing a biometric system based solely on skin texture is of interest because the face is sometimes occluded by hair or artefacts in many real-world contexts. This work presents a novel framework for the assessment of skin-based biometric systems incorporating skin quality information. The quality or purity of the extracted skin region is automatically established using pixel colour models prior to biometric processing. Facial landmarks are detected to facilitate automated extraction of facial regions of interest. Although the present study is confined to the forehead region, the idea can be extended to other skin regions. Local Binary Patterns (LBP) and Gabor wavelet filters are utilised to extract skin features. Using the publicly available XM2VTS database, the experimental results show that the system provides promising performance when compared to other commonly used techniques.
  • Yassin, D., Hoque, S. and Deravi, F. (2016). Face Recognition Across Ages. in: 6th Brunei International Conference on Engineering and Technology 2016 (BICET2016).
    This paper is concerned with the effect of ageing on biometric systems and particularly its impact on face recognition systems. Being biological tissue in nature, the facial biometric trait undergoes significant changes as a person ages. Consequently, developing biometric applications for long-term use becomes a particularly challenging task. The idea behind the investigation presented here is that biometric systems have uneven difficulty in recognising people of different ages. Some algorithms may perform better for certain age groups. Therefore, a carefully optimised multi-algorithmic system can reduce the error rates. A subset of 100 subjects from the MORPH-II database has been selected to test the performance of a face verification system. The population is split into 5 age bands (≤19, 20-29, 30-39, 40-49, ≥50 years) based on their age during enrolment. The facial image database used in the experiments here contains images acquired over a period of five years. In the proposed multi-classifier scheme, features extracted from face images are transformed by different projection algorithms prior to matching. It has been observed that all the age groups showed improved performances when compared to the single classifier error rates. Of all the groups, the EER was highest for the younger population (≤19-year-olds).
  • Alsufyani, H., Hoque, S. and Deravi, F. (2016). Exploring the Potential of Facial Skin Regions for the Provision of Identity Information. in: The 7th IET International Conference on Imaging for Crime Detection and Prevention (ICDP-16). Available at: http://dx.doi.org/10.1049/ic.2016.0084.
    This work presents a novel framework to investigate the possibility of using texture information from facial skin regions for biometric person recognition. Such information will be practically useful when the entire facial image is not available for identifying individuals. Four facial regions have been investigated (i.e. forehead, right cheek, left cheek, and chin) since they are relatively easy to distinguish in frontal images. Facial landmarks are automatically detected to facilitate the extraction of these facial regions of interest. A new skin detection technique is applied to identify regions with significant skin content. Each such skin region is then processed independently using features based on Local Binary Patterns and Gabor wavelet filters. Feature fusion is then used prior to classification of the images. Experiments were carried out using the publicly available Skin Segmentation database and the XM2VTS database to evaluate the skin detection technique and the biometric recognition performance, respectively. The results indicate that the skin detection algorithm provided acceptable results when compared with other state-of-the-art skin detection algorithms. In addition, the forehead and the chin regions were found to provide a rich source of biometric information.
  • Yassin, D., Hoque, S. and Deravi, F. (2013). Age Sensitivity of Face Recognition Algorithms. in: 4th International Conference on Emerging Security Technologies (EST 2013). pp. 12-15. Available at: http://dx.doi.org/10.1109/EST.2013.8.
    This paper investigates the performance degradation of facial recognition systems due to the influence of age. A comparative analysis of verification performance is conducted for four subspace projection techniques combined with four different distance metrics. The experimental results based on a subset of the MORPH-II database show that the choice of subspace projection technique and associated distance metric can have a significant impact on the performance of the face recognition system for particular age groups.
  • Ali, A., Deravi, F. and Hoque, S. (2013). Spoofing attempt detection using gaze colocation. in: Biometrics Special Interest Group (BIOSIG), 2013 International Conference. pp. 1-12.
    Spoofing attacks on biometric systems are one of the major impediments to their use for secure unattended applications. This paper presents a novel method for face liveness detection by tracking the gaze of the user with an ordinary webcam. In the proposed system, an object appears randomly on the display screen which the user is required to look at while their gaze is measured. The visual stimulus appears in such a way that it repeatedly directs the gaze of the user to specific points on the screen. Features extracted from images captured at these sets of colocated points are used to estimate the liveness of the user. A scenario is investigated where genuine users track the challenge with head/eye movements whereas the impostors hold a photograph of the target user and attempt to follow the stimulus during simulated spoofing attacks. The results from the experiments indicate the effectiveness of the gaze colocation feature in detecting spoofing attacks.
  • Radu, P. et al. (2013). A Multi-algorithmic Colour Iris Recognition System. in: Proceedings of the 5th International Workshop Soft Computing Applications (SOFA). Springer Berlin Heidelberg, pp. 45-56. Available at: http://dx.doi.org/10.1007/978-3-642-33941-7_7.
    The reported accuracies of iris recognition systems are generally higher on near infrared images than on colour RGB images. To increase a colour iris recognition system’s performance, a possible solution is a multi-algorithmic approach with an appropriate fusion mechanism. In the present work, this approach is investigated by fusing three algorithms at the score level to enhance the performance of a colour iris recognition system. The contribution of this paper consists of proposing two novel feature extraction methods for colour iris images, one based on a 3-bit encoder of the 8-neighbourhood and the other based on the grey-level co-occurrence matrix. The third algorithm employed uses the classical Gabor filters and phase encoding for feature extraction. A weighted average is used as the matching-score fusion mechanism. The efficiency of the proposed iris recognition system is demonstrated on the UBIRISv1 dataset.
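As a generic illustration of score-level fusion with a weighted average (the paper's exact normalisation and weight values are not reproduced here; all names are my own), per-algorithm matching scores are first mapped to a common range and then combined:

```python
def minmax_normalise(raw, lo, hi):
    """Map a raw matching score into [0, 1] given its observed range."""
    return (raw - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Weighted-average fusion of normalised per-algorithm scores.

    `scores` and `weights` map an algorithm name to its [0, 1]
    matching score and its fusion weight; weights are renormalised
    so the fused score also lies in [0, 1].
    """
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total
```

For example, fusing hypothetical scores from a Gabor-based and a co-occurrence-based matcher with weights 3:1 yields `fuse_scores({"gabor": 0.8, "glcm": 0.6}, {"gabor": 3.0, "glcm": 1.0})`, i.e. 0.75.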
  • Radu, P. et al. (2013). A Novel Iris Clustering Approach Using LAB Color Features. in: 4th IEEE International Symposium on Electrical And Electronics Engineering (ISEEE 2013). IEEE, pp. 1-4. Available at: http://dx.doi.org/10.1109/ISEEE.2013.6674362.
    Interesting results of colour clustering for the iris images in the UBIRISv1 database are presented. The iris colours are characterised by feature vectors with 80 components corresponding to histogram bins computed in the CIELAB colour space. The feature extraction is applied to the first-session eye images after undergoing an iris segmentation process. An agglomerative hierarchical algorithm is used to organise 1,205 segmented iris images into 8 clusters based on their colour content.
  • Radu, P. et al. (2013). Optimizing 2D Gabor Filters for Iris Recognition. in: 4th International Conference on Emerging Security Technologies (EST 2013). IEEE, pp. 47-50. Available at: http://dx.doi.org/10.1109/EST.2013.15.
    The randomness and richness present in the iris texture make 2D Gabor filter bank analysis a suitable technique for iris recognition systems. To accurately characterize complex texture structures using 2D Gabor filters it is necessary to use multiple sets of parameters for this type of filter. This paper proposes a technique of optimizing multiple sets of 2D Gabor filter parameters to gradually enhance the accuracy of an iris recognition system. The proposed methodology is suitable for application to both near infrared and visible spectrum iris images. To illustrate the efficiency of the filter bank design technique, the UBIRISv1 database was used for benchmarking.
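For illustration only (the optimised parameter sets from the paper are not reproduced here, and the function is a generic textbook formulation), the real part of one 2D Gabor kernel in such a filter bank can be generated from its wavelength, orientation and bandwidth parameters:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=1.0, psi=0.0):
    """Real part of a 2D Gabor filter kernel as a list of lists.

    g(x, y) = exp(-(x'^2 + gamma^2 * y'^2) / (2 * sigma^2))
              * cos(2 * pi * x' / lam + psi),
    where x', y' are the coordinates rotated by orientation theta.
    `size` is the (odd) kernel width; lam is the wavelength, sigma
    the Gaussian envelope width, gamma the aspect ratio, psi the
    phase offset - the kind of knobs such an optimisation searches.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate the sampling grid by theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                / (2.0 * sigma * sigma))
            row.append(envelope * math.cos(2.0 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel
```

A bank of such kernels at several orientations and wavelengths would be convolved with the unwrapped iris texture; the filter responses are then phase-encoded into the iris code.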
  • Ali, A., Deravi, F. and Hoque, S. (2013). Directional Sensitivity of Gaze-Collinearity Features in Liveness Detection. in: Emerging Security Technologies (EST), 2013 Fourth International Conference on. pp. 8-11. Available at: http://dx.doi.org/10.1109/EST.2013.7.
    To increase the trust in using face recognition systems, these need to be capable of differentiating between face images captured from a real person and those captured from photos or similar artefacts presented at the sensor. Methods have been published for face liveness detection by measuring the gaze of a user while the user tracks an object on the screen which appears randomly at pre-defined places. In this paper we explore the sensitivity of such a system to different stimulus alignments. The aim is to establish whether there is such sensitivity and, if so, to explore how this may be exploited for improving the design of the stimulus. The results suggest that collecting feature points along the horizontal direction is more effective than the vertical direction for liveness detection.
  • McConnon, G. et al. (2012). Impact of common ophthalmic disorders on iris segmentation. in: Biometrics (ICB), 2012 5th IAPR International Conference. IEEE, pp. 277-282. Available at: http://dx.doi.org/10.1109/ICB.2012.6199820.
    As iris recognition moves from constrained indoor and near-infrared systems towards unconstrained on-the-move and at-a-distance systems, possibly using visible light illumination, interest in measurement of the fidelity of the acquired images and their impact on recognition performance has grown. However, the impact of the subject's physiological characteristics on the nature of the acquired images has received little attention. In this paper we catalog a selection of the most common ophthalmic disorders and investigate some of their characteristics including their prevalence and possible impact on recognition performance. The paper also includes an experimental exploration of the effect of such conditions on segmentation of the iris image.
  • Radu, P. et al. (2012). A Visible Light Iris Recognition System using Colour Information. in: 9th IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2012). Acta Press. Available at: http://dx.doi.org/10.2316/P.2012.778-019.
    The iris has been shown to be a highly reliable biometric modality with almost perfect authentication accuracy. However, a classical iris recognition system operates under near infrared illumination, which is a major constraint for a range of applications. In this paper, we propose an iris recognition system which is able to cope with noisy colour iris images by employing image processing techniques together with a Multiple Classifier System to fuse the information from various colour channels. There are two main contributions in the present work: first, we adapt standard iris features, proposed in the literature for near infrared images, to match the characteristics of colour iris images; second, we introduce a robust fusion mechanism to combine the features from various colour channels. With a feature size of only 360 real numbers, the efficiency of the proposed biometric system is demonstrated on the UBIRISv1 dataset for both identification and verification scenarios.
  • Azhar, M., Hoque, S. and Deravi, F. (2012). Automatic identification of wildlife using local binary patterns. in: IET Conference on Image Processing (IPR 2012). pp. B5-B5. Available at: http://dx.doi.org/10.1049/cp.2012.0454.
    Recognition of individuals is necessary for accurate estimation of wildlife population dynamics for effective management and conservation. Identifying individual wildlife by their distinctive body marks is one of the least invasive methods available. Although widely practised, this method is mostly manual, where newly captured images are compared with those in the library of previously captured images. The ability to do so automatically using computer vision techniques can improve speed and accuracy and facilitate on-field matching. This paper reports the results of using a texture-based image feature descriptor, Local Binary Patterns (LBP), for the automatic identification of an important endangered species — the Great Crested Newt (GCN). The proposed approach is tested on a database of newts' distinctive belly images which are treated as a source of biometric information. Results indicate that when both appearance and spatial information of newt belly patterns are encoded into a composite LBP feature vector, the discriminating power of the system can improve significantly.
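As a rough sketch of the basic LBP operator underlying such descriptors (this is the generic 3x3 formulation, not the paper's composite feature; the names are my own), each pixel is encoded by thresholding its eight neighbours against it, and a histogram of the codes serves as the texture feature:

```python
def lbp_code(img, r, c):
    """Basic 3x3 LBP code for pixel (r, c) of a 2D grey-level grid.

    Each of the 8 neighbours is thresholded against the centre pixel
    and the resulting bits are packed, clockwise from the top-left
    neighbour, into one code in [0, 255].
    """
    centre = img[r][c]
    # Neighbour offsets, clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

Appending per-block histograms rather than pooling one global histogram is one common way to retain the spatial information that the paper finds important for discrimination.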
  • Ali, A., Deravi, F. and Hoque, S. (2012). Liveness Detection Using Gaze Collinearity. in: 2012 Third International Conference on Emerging Security Technologies. IEEE, pp. 62-65. Available at: http://dx.doi.org/10.1109/EST.2012.12.
    This paper presents a liveness detection method based on tracking the gaze of the user of a face recognition system using a single camera. The user is required to follow a visual animation of a moving object on a display screen while his/her gaze is measured. The visual stimulus is designed to direct the gaze of the user to sets of collinear points on the screen. Features based on the measured collinearity of the observed gaze are then used to discriminate between live attempts at responding to this challenge and those conducted by “impostors” holding photographs and attempting to follow the stimulus. An initial set of experiments is reported that indicates the effectiveness of the proposed method in detecting this class of spoofing attacks.
  • Radu, P. et al. (2012). Image Enhancement vs Feature Fusion in Colour Iris Recognition. in: Emerging Security Technologies (EST), 2012 Third International Conference. IEEE, pp. 53-57. Available at: http://dx.doi.org/10.1109/EST.2012.33.
    In iris recognition, most research has been conducted on systems operating under near infrared illumination. For an iris recognition system to be deployed on common hardware devices, such as laptops or mobile phones, the ability to work with visible-spectrum iris images is necessary. Two of the main possible approaches to coping with noisy images in a colour iris recognition system are either to apply image enhancement techniques or to extract multiple types of features and subsequently employ an efficient fusion mechanism. The contribution of the present paper consists of comparing which of these two approaches performs best in both the identification and verification scenarios of a colour iris recognition system. The efficiency of the two approaches is demonstrated on the UBIRISv1 dataset.
  • Radu, P. et al. (2011). A Versatile Iris Segmentation Algorithm. in: 2011 BIOSIG Conference on Biometrics and Security.
  • Radu, P. et al. (2011). Information Fusion for Unconstrained Iris Recognition. in: International Conference on Emerging Security Technologies (EST 2011).
  • McConnon, G. et al. (2010). A Novel Interactive Biometric Passport Photograph Alignment System. in: 18th European Symposium on Artificial Neural Networks (ESANN).
  • Radu, P. et al. (2010). On Combining Information from Both Eyes to Cope with Motion Blur in Iris Recognition. in: 4th International Workshop on Soft Computing. IEEE, pp. 175-181.
    Iris recognition has emerged as one of the best biometric authentication techniques in recent years. However, a significant drawback of this biometric modality is the constrained environment in which the user is enrolled and recognised. It typically requires the user to be very cooperative for good-quality images to be captured. If this limitation could be effectively addressed, it would be possible to employ iris recognition in environments where images incorporating increased noise and distortions were present, whilst maintaining high recognition accuracy. In the present paper, we explore how the effect of image distortions caused by motion blur may be significantly reduced by using iris information from both eyes of the user.
  • Sirlantzis, K. et al. (2010). Nomad Biometric Authentication (NOBA): Towards Mobile and Ubiquitous Person Identification. in: International Conference on Emerging Security Technologies (EST 2010).
  • McConnon, G. et al. (2010). A Survey of Point-Source Specular Reflections in Noisy Iris Images. in: Emerging Security Technologies (EST), 2010 International Conference. pp. 13-17. Available at: http://dx.doi.org/10.1109/EST.2010.33.
    This paper presents an examination of a selection of images taken from the UBIRIS.v2 dataset to explore the characteristics of point-source reflections present in the images. These reflections were among the most commonly found sources of noise in iris images acquired under visible-wavelength light and clearly impact the accuracy of iris recognition systems. The spatial and intensity distributions of these reflections are studied and results are presented that can be used to model their behaviour. This information can be helpful for developing more accurate iris synthesis techniques, for the study of iris image focus assessment, and for developing better matching algorithms for iris recognition.
  • Radu, P. et al. (2010). Can Dual Iris Help with Motion Blur? in: 4th IEEE Int. Workshop on Soft Computing Applications (SOFA2010).
  • Radu, P. et al. (2010). Are Two Eyes Better Than One? in: International Conference on Emerging Security Technologies (EST 2010).
  • Hoque, S., Fairhurst, M. and Howells, G. (2008). Evaluating Biometric Encryption Key Generation using Handwritten Signatures. in: 2008 ECSIS Symposium on Bio-Inspired, Learning and Intelligent Systems for Security (BLISS 2008). IEEE Computer Society, pp. 17-22.
    In traditional cryptosystems, user authentication is based on the possession of secret keys/tokens. Such keys can be forgotten, lost, stolen, or may be illegally shared, but an ability to relate a cryptographic key to biometric data can enhance the trustworthiness of a system. In this paper, we demonstrate how biometric keys can be generated directly from live biometrics, under certain conditions, by partitioning feature space into subspaces and partitioning these into cells, where each cell subspace contributes to the overall key generated. We evaluate the proposed scheme on real biometric data, representing both genuine samples and attempted imitations. Experimental results then demonstrate the extent to which the proposed technique can be implemented reliably in possible practical scenarios.
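The key-generation idea described in the abstract above, deriving a stable key from the cells into which a sample's features fall, can be illustrated with a minimal sketch. The function name, the per-dimension uniform quantisation, and the hashing step are all illustrative assumptions, not the paper's actual partitioning scheme.

```python
# Hypothetical sketch: a key derived from feature-space cell indices.
# Genuine samples with small variation land in the same cells and so
# reproduce the same key; impostor samples generally do not.
import hashlib

def feature_to_key(features, mins, maxs, cells_per_dim=4):
    """Quantise each feature into one of `cells_per_dim` cells and
    derive a key from the concatenated cell indices."""
    indices = []
    for x, lo, hi in zip(features, mins, maxs):
        # Clamp into [lo, hi], then map to a cell index in [0, cells_per_dim).
        frac = min(max((x - lo) / (hi - lo), 0.0), 1.0)
        indices.append(min(int(frac * cells_per_dim), cells_per_dim - 1))
    # Hash the index sequence so short index strings still give a full-length key.
    return hashlib.sha256(bytes(indices)).hexdigest()

# Two genuine samples with small measurement noise yield the same key.
k1 = feature_to_key([0.31, 0.72, 0.11], [0.0] * 3, [1.0] * 3)
k2 = feature_to_key([0.33, 0.70, 0.12], [0.0] * 3, [1.0] * 3)
assert k1 == k2
```

The robustness/entropy trade-off lives in `cells_per_dim`: coarser cells tolerate more within-user variation but shrink the key space.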
  • Howells, G. et al. (2008). A Securable Autonomous Generalised Document Model (SAGENT). in: Stoica, A. et al. eds. 2008 ECSIS Symposium on Bio-Inspired, Learning and Intelligent Systems for Security (BLISS 2008). IEEE, pp. 136-141.
    A generalised modelling system for handling multimedia documents is introduced, capable of allowing documents authored in differing formats to be efficiently manipulated, compared and analysed. The model represents a document as a secure autonomous object possessing the ability to represent and modify its own metadata whilst maintaining the autonomy of individual document components. The paper presents the model by means of a case study of the existing prototype implementation, followed by a detailed presentation of the implementation of the model using object-oriented technology, showing how the model addresses the key issues of security, generality and autonomy.
  • Osoka, A., Fairhurst, M. and Hoque, S. (2007). A Novel Approach to Quantifying Risk in Biometric Systems Performance. in: Proc. 4th Visual Engineering (VIE) Conference.
  • Hoque, S. and Fairhurst, M. (2006). A Novel Scheme for Implementation of the Scanning nTuple Classifier in a Constrained Environment. in: Proc. 10th International Workshop on Frontiers in Handwriting Recognition (IWFHR). pp. 127-132.
    The scanning n-tuple classifier is an efficient and accurate classifier for handwriting recognition. One of the major difficulties in implementing this scheme is its demand for a very large memory space, making it unsuitable for resource-constrained systems such as embedded applications. This paper proposes modifications to the basic sn-tuple algorithm which eliminate the need to normalize the chain-code length, by adjusting the memory cell increments as an inverse function of the chain length. The resulting system performance is shown to be superior to the standard sn-tuple configuration in both speed and accuracy when smaller and fewer sn-tuples are used, a configuration which also reduces the demand for memory.
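As an illustration of the general scheme the abstract describes, a minimal scanning n-tuple classifier over chain codes, with an increment inversely proportional to chain length instead of length normalization, might look like the following. The class layout, smoothing constant, and scoring rule are assumed details for the sketch, not the paper's implementation.

```python
# Illustrative scanning n-tuple sketch (not the paper's exact algorithm).
# A chain code is a sequence of direction codes 0-7; each window of n
# consecutive codes indexes a per-class frequency table.
import math
from collections import defaultdict

class ScanningNTuple:
    def __init__(self, n=3, num_codes=8):
        self.n, self.num_codes = n, num_codes
        self.tables = {}  # class label -> {tuple index: weighted count}

    def _indices(self, chain):
        # Slide a window of length n over the chain code.
        for i in range(len(chain) - self.n + 1):
            idx = 0
            for c in chain[i:i + self.n]:
                idx = idx * self.num_codes + c
            yield idx

    def train(self, chain, label):
        table = self.tables.setdefault(label, defaultdict(float))
        # Increment inversely proportional to chain length, so long chains
        # do not dominate the tables (in place of length normalization).
        inc = 1.0 / len(chain)
        for idx in self._indices(chain):
            table[idx] += inc

    def classify(self, chain):
        def score(table):
            # Sum of smoothed log-frequencies over all windows.
            return sum(math.log(table[idx] + 1e-6) for idx in self._indices(chain))
        return max(self.tables, key=lambda lab: score(self.tables[lab]))
```

Memory pressure comes from the `num_codes**n` possible tuple indices per class; the sparse dictionary here sidesteps that for a toy example, whereas an embedded implementation would face it directly.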
  • Fairhurst, M. et al. (2005). Evaluating Biometric Encryption Key Generation. in: Proceedings of Third Cost 275 Workshop, Biometrics on the Internet - http://www.fub.it/cost275. pp. 93-96.
    In traditional cryptosystems, user authentication is based on possession of secret keys/tokens. Such keys can be forgotten, lost, stolen, or may be illegally shared, but an ability to relate a cryptographic key to biometric data can enhance the trustworthiness of a system. This paper presents a new approach to generating encryption keys directly from
  • Fairhurst, M., Hoque, S. and Boyle, T. (2005). Assessing Behavioural Characteristics of Dyspraxia through on-line Drawing Analysis. in: Proceedings of the 12th Conference of the International Graphonomics Society (IGS2005).
  • Howells, G. et al. (2005). SAGENT: A Model for Exploiting Biometric-Based Security for Distributed Multimedia Documents. in: Proceedings of Third Cost 275 Workshop, Biometrics on the Internet - http://www.fub.it/cost275.
    Issues concerning the security of heterogeneous documents in a distributed environment are addressed by introducing a novel document model capable of incorporating any desired, and in particular biometrically based, security techniques within existing documents without compromising the original document structure.
  • Fairhurst, M., Razian, M. and Hoque, S. (2004). Intelligent Assessment of Drawing Task Execution for Behavioural Analysis in Dyspraxia. in: Proceedings of the IEEE SMC UK-RI Chapter Conference on Intelligent Cybernetic Systems.
  • Fairhurst, M., Hoque, S. and Razian, M. (2004). Improved Screening of Developmental Dyspraxia using On-Line Image Analysis. in: the 8th World Multi-Conference on Systemics, Cybernetics and Informatics (SCI2004). INT, pp. 160-165.
    Dyspraxia is a neurological impairment of the organization of movement. In this paper, we present a process for automated diagnosis of developmental dyspraxia in children using dynamic attributes from Beery's VMI test drawings. The test environment is identical to that of the conventional VMI tests and all procedural modifications are transparent to the children. Empirical investigations reveal interesting results despite limited data availability. This study has exposed limitations of the conventional VMI analysis, and the proposed approach can produce a much richer analysis even when operated by non-experts.
  • Chindaro, S. et al. (2004). Diversity-Performance Relationship in Bit-plane Decomposition Based Handwriting Recognition System. in: 9th International Workshop on Frontiers in Handwriting Recognition (IWFHR-9).
  • Hoque, S., Sirlantzis, K. and Fairhurst, M. (2003). A New Chain-Code Quantization Approach Enabling High Performance Handwriting Recognition Based on Multiple Classifier Schemes. in: Proceedings of the 7th International Conference on Document Analysis and Recognition (ICDAR 2003). pp. 834-838.
  • Sirlantzis, K., Hoque, S. and Fairhurst, M. (2003). Input Space Transformations for Multi-Classifier Systems based on n-tuple Classifiers with Application to Handwriting Recognition. in: Multiple Classifier Systems, Fourth International Workshop, MCS 2003. Springer, pp. 356-365.
    In this paper we investigate the properties of novel systems for handwritten character recognition which are based on input space transformations to exploit the advantages of multiple classifier structures. These systems provide an effective solution to the problem of utilising the power of n-tuple based classifiers while simultaneously addressing the trade-off between the memory requirements and the accuracy achieved. Utilizing the flexibility offered by multi-classifier schemes, we can exploit the complementarity of different transformations of the original feature space while at the same time decomposing it into simpler input spaces, thus reducing the resource requirements of the sn-tuple classifiers used. Our analysis of the observed behaviour, based on Mutual Information estimators between the original and the transformed input spaces, showed a direct correspondence between the values of this information measure and the accuracy obtained. This suggests Mutual Information as a useful tool for the analysis and design of multi-classifier systems. The paper concludes with a number of comparisons with results on the same data set achieved by a diverse set of classifiers. Our findings clearly demonstrate the significant gains that can be obtained, simultaneously in performance and memory space reduction, by the proposed systems.
  • Sirlantzis, K., Hoque, S. and Fairhurst, M. (2003). Genetic Algorithms for Handwriting Recognition: From Structure Optimisation to Information Measures. in: British Machine Vision Association (BMVA) Symposium on Document and Text Recognition in Images and Video Sequences.
  • Sirlantzis, K., Hoque, S. and Fairhurst, M. (2002). Trainable Multiple Classifier Schemes for Handwritten Character Recognition. in: Kittler, J. and Roli, F. eds. Third International Workshop on Multiple Classifier Systems. Springer-Verlag, London, England, pp. 169-178.
  • Sirlantzis, K. et al. (2002). Fusion of n-Tuple Based Classifiers for High Performance Handwritten Character Recognition. in: Caelli, T. et al. eds. Joint IAPR Int. Workshop on Syntactical and Structural Pattern Recognition (SSPR 2002) and Statistical Pattern Recognition (SPR 2002). pp. 770-778. Available at: http://dx.doi.org/10.1007/3-540-70659-3_81.


  • Yang, S., Hoque, S. and Deravi, F. (2019). Improved time-frequency features and electrode placement for EEG-based biometric person recognition. IEEE ACCESS.
    This work introduces a novel feature extraction method for biometric recognition using EEG data and provides an analysis of the impact of electrode placements on performance. The feature extraction method is based on the wavelet transform of the raw EEG signal. The logarithms of wavelet coefficients are further processed using the discrete cosine transform (DCT). The DCT coefficients from each wavelet band are used to form the feature vectors for classification. As an application in the biometrics scenario, the effectiveness of the electrode locations on person recognition is also investigated, and suggestions are made for electrode positioning to improve performance. The effectiveness of the proposed feature was investigated in both identification and verification scenarios. Identification results of 98.24% and 93.28% were obtained using the EEG Motor Movement/Imagery Dataset (MM/I) and the UCI EEG Database Dataset respectively, which compares favorably with other published reports while using a significantly smaller number of electrodes. The performance of the proposed system also showed substantial improvements in the verification scenario when compared with some similar systems from the published literature. A multi-session analysis is simulated using eyes-open and eyes-closed recordings from the MM/I database. It is found that the proposed feature is less influenced by time separation between training and testing compared with a conventional feature based on power spectral analysis.
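The pipeline sketched in the abstract, wavelet decomposition of the raw signal, logarithm of the coefficients, then a DCT per band, can be illustrated as follows. The Haar wavelet, the log-of-squared-coefficient step, and the band/coefficient counts are assumptions for the sketch; the paper's exact wavelet family and parameters are not given here.

```python
# Illustrative wavelet-band log + DCT feature extraction for a 1-D signal.
# Pure-Python Haar DWT and naive DCT-II, for clarity rather than speed.
import math

def haar_dwt(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def dct2(x):
    """Naive (unnormalised) DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def eeg_features(signal, levels=3, coeffs_per_band=4):
    """Per wavelet band: log of squared coefficients, then DCT;
    keep the leading DCT coefficients of each band."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        logs = [math.log(c * c + 1e-12) for c in detail]  # log-energy per coefficient
        feats.extend(dct2(logs)[:coeffs_per_band])
    return feats
```

With `levels=3` and `coeffs_per_band=4` a 64-sample window yields a 12-dimensional feature vector; the low-order DCT coefficients compactly summarise the log-energy profile of each band.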