
Dr Konstantinos Sirlantzis

Senior Lecturer in Intelligent Systems
Director of Graduate Studies (Research)


Research interests

Pattern Recognition; Multiple Classifier Systems; Artificial Intelligence techniques; Neural Networks, Genetic Algorithms, and other Biologically Inspired Computing Paradigms; Image Processing; Multimodal Biometric Models; Handwriting Recognition; Numerical Stochastic Optimisation Algorithms; Non-linear Dynamics and Chaos Theory; Markov Chain Monte Carlo (MCMC) methods for Sensor Data Fusion.

Showing 50 of 120 total publications in the Kent Academic Repository.

Article

  • Dib, J., Sirlantzis, K. and Howells, G. (2020). A Review on Negative Road Anomaly Detection Methods. IEEE Access [Online]. Available at: https://doi.org/10.1109/ACCESS.2020.2982220.
    The main limitation to obstacle avoidance nowadays is negative road anomalies, the term we use to refer to potholes and cracks because of their negative drop from the surface of the road. These have long been a limitation because they occur in varied, random and unpredictable shapes. Current sensor technology cannot detect negative road anomalies efficiently, as such anomalies exceed the sensors' limitations and render the sensing techniques inaccurate. As the topic has attracted growing attention, a significant amount of research has focused on the detection of negative road anomalies. In this paper, the existing techniques are reviewed, their limitations are highlighted, and they are assessed against a set of performance indicators and selection criteria which we introduce.
  • Hu, Y., Sirlantzis, K. and Howells, G. (2016). Optimal Generation of Iris Codes for Iris Recognition. IEEE Transactions on Information Forensics and Security [Online] 12:157-171. Available at: http://dx.doi.org/10.1109/TIFS.2016.2606083.
    The calculation of binary iris codes from feature values (e.g., the output of a Gabor transform) is a key step in iris recognition systems. The traditional binarization method based on the sign of the feature values has achieved very promising performance. However, little research has examined this binarization method in depth. In this paper, we view iris code calculation from the perspective of optimization. We demonstrate that the traditional iris code is the solution of an optimization problem which minimizes the distance between the feature values and iris codes. Furthermore, we show that more effective iris codes can be obtained by adding terms to the objective function of this optimization problem. We investigate two additional objective terms. The first exploits the spatial relationships of the bits in different positions of an iris code. The second mitigates the influence of less reliable bits in iris codes. The two objective terms can be applied to the optimization problem individually or in a combined scheme. We conduct experiments on four benchmark datasets with varying image quality. The experimental results demonstrate that the iris code produced by solving the optimization problem with the two additional objective terms achieves generally improved performance in comparison to the traditional iris code calculated by binarizing feature values based on their signs.
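As an illustration of the traditional baseline this paper reformulates, the following is a minimal sketch of sign-based iris code generation and fractional Hamming distance matching; the feature values are made-up stand-ins for real Gabor filter outputs:

```python
# Sketch of the traditional sign-based iris code baseline (not the paper's
# optimisation-based method). Feature values below are hypothetical.

def binarize(features):
    """Traditional iris code: bit = 1 where the feature value is >= 0."""
    return [1 if f >= 0 else 0 for f in features]

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits; lower means a better match."""
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

# Two noisy captures of the "same eye": small perturbations rarely flip signs.
probe   = [0.8, -1.2, 0.1, -0.4, 2.0, -0.9]
gallery = [0.7, -1.0, 0.2, -0.6, 1.8, -1.1]
print(binarize(probe))                                       # -> [1, 0, 1, 0, 1, 0]
print(hamming_distance(binarize(probe), binarize(gallery)))  # -> 0.0
```

The additional objective terms the paper proposes would change which bits are emitted; the sign rule above is only the starting point it generalises.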
  • Hu, Y., Sirlantzis, K., Howells, G., Ragot, N. and Rodríguez, P. (2016). An online background subtraction algorithm deployed on a NAO humanoid robot based monitoring system. Robotics and Autonomous Systems [Online] 85:37-47. Available at: http://dx.doi.org/10.1016/j.robot.2016.08.013.
    In this paper, we design a fast background subtraction algorithm and deploy it on a monitoring system based on the NAO humanoid robot. The proposed algorithm detects a contiguous foreground via a contiguously weighted linear regression (CWLR) model. It consists of a background model and a foreground model. The background model is a regression-based low rank model. It seeks a low rank background subspace and represents the background as a linear combination of the basis spanning the subspace. The foreground model promotes contiguity in the foreground detection: it encourages the foreground to be detected as whole regions rather than separated pixels. We formulate the background and foreground models as a contiguously weighted linear regression problem. This problem can be solved efficiently via an alternating optimization approach over continuous and discrete variables. Given an image sequence, we use the first few frames to incrementally initialize the background subspace, and we determine the background and foreground in the following frames in an online scheme using the proposed CWLR model, with the background subspace continuously updated using the detected background information. The proposed algorithm is implemented in Python on a monitoring system based on the NAO humanoid robot. This system consists of a control station and a NAO robot. The NAO robot acts as a mobile probe: it captures image sequences and sends them to the control station. The control station serves as a control terminal: it sends commands to control the behaviour of the NAO robot, and it processes the image data sent by the robot. This system can be used for living environment monitoring and forms the basis for many vision-based applications such as fall detection and scene understanding. Experimental comparisons with the most recent algorithms on both a benchmark dataset and NAO captures demonstrate the high effectiveness of the proposed algorithm.
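A minimal sketch of the underlying idea, not the paper's CWLR model itself: learn a low-rank background subspace from training frames, then label pixels with a large reconstruction residual as foreground. The synthetic scene, rank, and threshold below are all hypothetical:

```python
# Illustrative low-rank background subtraction (simplified; the paper's CWLR
# model additionally enforces contiguous foreground regions).
import numpy as np

def fit_background_basis(frames, rank=2):
    """Learn an orthonormal low-rank background subspace (pixels x frames input)."""
    u, _, _ = np.linalg.svd(frames, full_matrices=False)
    return u[:, :rank]

def detect_foreground(frame, basis, threshold=0.5):
    """Regress the frame onto the background subspace; large residuals are foreground."""
    coeffs = basis.T @ frame            # least-squares coefficients (orthonormal basis)
    residual = np.abs(frame - basis @ coeffs)
    return residual > threshold

rng = np.random.default_rng(0)
background = rng.uniform(0.0, 0.2, 100)                  # a static 100-pixel "scene"
train = np.stack([background + rng.normal(0.0, 0.01, 100) for _ in range(10)], axis=1)
basis = fit_background_basis(train)

test_frame = background.copy()
test_frame[40:50] += 1.0                                 # a bright object enters
mask = detect_foreground(test_frame, basis)
print(mask[40:50].all(), mask[:40].any())                # object detected, no false alarms
```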
  • Hu, Y., Sirlantzis, K. and Howells, G. (2016). A novel iris weight map method for less constrained iris recognition based on bit stability and discriminability. Image and Vision Computing [Online] 58:168-180. Available at: http://dx.doi.org/10.1016/j.imavis.2016.05.003.
    In this paper, we propose and investigate a novel iris weight map method for the iris matching stage to improve less constrained iris recognition. The proposed iris weight map considers both the intra-class bit stability and the inter-class bit discriminability of iris codes. We model the intra-class bit stability in a stability map to improve intra-class matching. The stability map assigns more weight to bits whose values are more consistent with their noiseless and stable estimates, obtained using a low rank approximation from a set of noisy training images. Also, we express the inter-class bit discriminability in a discriminability map to enhance inter-class separation. We calculate the discriminability map using a 1-to-N strategy, emphasizing the bits with more discriminative power in iris codes. The final iris weight map is the combination of the stability map and the discriminability map. We conduct experimental analysis on four publicly available datasets captured in varying less constrained conditions. The experimental results demonstrate that the proposed iris weight map achieves generally improved identification and verification performance compared to state-of-the-art methods.
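The matching step can be illustrated with a weighted fractional Hamming distance, where a weight map down-weights unreliable bits. The codes and weights below are hypothetical; the paper derives its maps from bit stability and discriminability rather than setting them by hand:

```python
# Illustrative weighted matching step; the weight values are made up.

def weighted_hamming(code_a, code_b, weights):
    """Weighted fractional Hamming distance between two binary codes."""
    num = sum(w for a, b, w in zip(code_a, code_b, weights) if a != b)
    return num / sum(weights)

probe   = [1, 0, 1, 1, 0]
gallery = [1, 0, 0, 1, 0]             # bit 2 disagrees
uniform  = [1.0] * 5
weighted = [1.0, 1.0, 0.1, 1.0, 1.0]  # bit 2 judged unreliable

print(weighted_hamming(probe, gallery, uniform))   # -> 0.2
print(weighted_hamming(probe, gallery, weighted))  # -> ~0.024
```

Down-weighting the unstable bit shrinks the intra-class distance, which is the effect the stability map is designed to produce.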
  • Hu, Y., Sirlantzis, K. and Howells, G. (2016). Signal-Level Information Fusion for Less Constrained Iris Recognition using Sparse-Error Low Rank Matrix Factorization. IEEE Transactions on Information Forensics and Security [Online] 11:1549-1564. Available at: http://dx.doi.org/10.1109/TIFS.2016.2541612.
    Iris recognition systems working in less constrained environments, with the subject at-a-distance and on-the-move, suffer from noise and degradations in the iris captures that significantly deteriorate recognition performance. In this paper, we propose a novel signal-level information fusion method to mitigate the influence of noise and degradations for less constrained iris recognition systems. The proposed method is based on low rank approximation (LRA). Given multiple noisy captures of the same eye, we assume that: 1) the potential noiseless images lie in a low rank subspace and 2) the noise is spatially sparse. Based on these assumptions, we seek an LRA of the noisy captures to separate the noiseless images and noise for information fusion. Specifically, we propose a sparse-error low rank matrix factorization model to perform LRA, decomposing the noisy captures into a low rank component and a sparse error component. The low rank component estimates the potential noiseless images, while the error component models the noise. The low rank and error components are then used to perform signal-level fusion separately, producing two individually fused images. Finally, we combine the two fused images at the code level to produce one iris code as the final fusion result. Experiments on benchmark datasets demonstrate that the proposed signal-level fusion method achieves generally improved iris recognition performance in less constrained environments in comparison with existing iris recognition algorithms, especially for iris captures with heavy noise and low quality.
  • Nasri, Y., Vauchey, V., Khemmar, R., Ragot, N., Sirlantzis, K. and Ertaud, J. (2016). ROS-based Autonomous Navigation Wheelchair using Omnidirectional Sensor. International Journal of Computer Applications [Online] 133. Available at: http://dx.doi.org/10.5120/ijca2016907533.
  • Hu, Y., Sirlantzis, K. and Howells, G. (2015). Iris liveness detection using regional features. Pattern Recognition Letters [Online]. Available at: https://doi.org/10.1016/j.patrec.2015.10.010.
    In this paper, we exploit regional features for iris liveness detection. Regional features are designed based on the relationship of the features in neighbouring regions; they essentially capture the feature distribution among neighbouring regions. We construct the regional features via two models, the spatial pyramid and the relational measure, which seek the feature distributions in regions of varying size and shape respectively. The spatial pyramid model extracts features from coarse-to-fine grid regions, and it models a local-to-global feature distribution. The local distribution captures the local feature variations, while the global distribution includes information that is more robust to translational transforms. The relational measure is based on a feature-level convolution operation defined in this paper. By varying the shape of the convolution kernel, we are able to obtain the feature distribution in regions with different shapes. To combine the feature distribution information in regions of varying size and shape, we fuse the results of the two models at the score level. Experimental results on benchmark datasets demonstrate that the proposed method achieves improved performance compared to state-of-the-art features.
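The spatial pyramid idea can be sketched on a 1-D feature map: pool features over coarse-to-fine regions and concatenate the results, so the descriptor combines global and local distribution information. This is purely illustrative; the paper applies the idea to 2-D iris features with histogram-style statistics:

```python
# Spatial-pyramid pooling sketch on a toy 1-D feature map (per-region means
# stand in for the richer regional statistics used in the paper).

def spatial_pyramid(features, levels=3):
    """Concatenate per-region means over 1, 2, 4, ... regions per level."""
    descriptor = []
    for level in range(levels):
        n_regions = 2 ** level
        size = len(features) // n_regions
        for i in range(n_regions):
            region = features[i * size:(i + 1) * size]
            descriptor.append(sum(region) / len(region))
    return descriptor

feats = [0, 0, 0, 0, 1, 1, 1, 1]
print(spatial_pyramid(feats))
# level 0: global mean 0.5; level 1: [0.0, 1.0]; level 2: [0.0, 0.0, 1.0, 1.0]
```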
  • Hu, Y., Sirlantzis, K. and Howells, G. (2015). Improving colour iris segmentation using a model selection technique. Pattern Recognition Letters [Online] 57:24-32. Available at: http://doi.org/10.1016/j.patrec.2014.12.012.
    In this paper, we propose a novel method to improve the reliability and accuracy of colour iris segmentation for captures from both static and mobile devices. Our method is a fusion strategy based on selection among the segmentation outcomes of different segmentation methods or models. First, we present and analyse an iris segmentation framework which uses three different models to show that improvements can be obtained by selecting among the outcomes generated by the three models. Then, we introduce a model selection method which identifies the optimal segmentation based on a ring-shaped region around the outer segmentation boundary identified by each model. We use the histogram of oriented gradients (HOG) as features extracted from the ring-shaped region, and train an SVM-based classifier which provides the selection decision. Experiments on colour iris datasets, captured by mobile devices and a static camera, show that the proposed method achieves improved performance compared to the individual iris segmentation models and existing algorithms.
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2013). A Colour Iris Recognition System Employing Multiple Classifier Techniques. ELCVIA Electronic Letters on Computer Vision and Image Analysis [Online] 12:54-65. Available at: http://elcvia.cvc.uab.es/article/view/520.
    The randomness of iris texture has allowed researchers to develop biometric systems with almost flawless accuracies. However, a common drawback of the majority of existing iris recognition systems is the constrained environment in which the user is enrolled and recognized. Iris recognition systems typically require a high quality iris image captured under near infrared illumination. A desirable property of an iris recognition system is the ability to operate on colour images whilst maintaining high accuracy. In the present work we propose an iris recognition methodology designed to cope with noisy colour iris images. There are two main contributions: first, we adapt standard iris features proposed in the literature for near infrared images by applying a feature selection method to features extracted from various colour channels; second, we introduce a Multiple Classifier System architecture to enhance the recognition accuracy of the biometric system. With a feature size of only 360 real valued components, the proposed iris recognition system performs with high accuracy on the UBIRISv1 dataset, in both identification and verification scenarios.
  • Radu, P., Sirlantzis, K., Howells, G., Deravi, F. and Hoque, S. (2013). A review of information fusion techniques employed in iris recognition systems. International Journal of Advanced Intelligence Paradigms [Online] 4:211-240. Available at: http://dx.doi.org/10.1504/IJAIP.2012.052067.
    Iris recognition has been shown to be one of the most reliable biometric authentication methods. The majority of iris recognition systems which have been developed require a constrained environment to enrol and recognise the user. If the user is not cooperative or the capture environment changes, the accuracy of the iris recognition system may decrease significantly. To minimise the effect of such limitations, possible solutions include the use of multiple channels of information, such as using both eyes or extracting more iris feature types, and subsequently employing an efficient fusion method. In this paper, we present a review of iris recognition systems using information from multiple sources that is fused in different ways or at different levels. A categorisation of the iris recognition systems incorporating multiple classifier systems is also presented. As a new desirable dimension of a biometric system, besides those proposed in the literature, the mobility of such a system is introduced in this work. The review charts the path towards greater flexibility and robustness of iris recognition systems through the use of information fusion techniques and points towards further developments leading to mobile and ubiquitous deployment of such systems.
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2011). Information Fusion for Unconstrained Iris Recognition. International Journal of Hybrid Information Technology 4:1-12.
    The majority of the iris recognition algorithms available in the literature were developed to operate on near infrared images. A desirable feature of iris recognition systems with reduced constraints, such as potential operability on commonly available hardware, is the ability to work with images acquired in the visible wavelength. Unlike in near infrared images, in colour iris images the pigment melanin present in the iris tissue causes the appearance of reflections, which are one of the major noise factors in such images. In this paper we present an iris recognition system which is able to cope with noisy colour iris images by employing score level fusion between different channels of the iris image. The robustness of the proposed approach is tested on three colour iris image datasets, ranging from images captured with professional cameras in both a constrained environment and a less cooperative scenario, to iris images acquired with a mobile phone.
  • McConnon, G., Deravi, F., Hoque, S., Sirlantzis, K. and Howells, G. (2011). An Investigation of Quality Aspects of Noisy Colour Images for Iris Recognition. International Journal of Signal Processing, Image Processing and Pattern Recognition [Online] 4:165-178. Available at: http://www.sersc.org/journals/IJSIP/vol4_no3.php.
    The UBIRIS.v2 dataset is a set of noisy colour iris images designed to simulate visible wavelength iris acquisition at-a-distance and on-the-move. This paper presents an examination of some of the characteristics that can impact the performance of iris recognition on the UBIRIS.v2 dataset. The quality and characteristics of these images are surveyed by examining seven different channels of information extracted from them: red, green, blue, intensity, value, lightness, and luminance. We present new quality metrics to assess the image characteristics with regard to focus, entropy, reflections, pupil constriction and pupillary boundary contrast. The results clearly suggest that these channels have different characteristics, which could be exploited in the design and evaluation of iris recognition systems.
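One of the surveyed quality dimensions, entropy, can be sketched as the Shannon entropy of a channel's grey-level histogram, a rough measure of how much texture information a channel carries. The metrics actually used in the paper are defined differently; the pixel data here is synthetic:

```python
# Illustrative channel-entropy quality measure (hypothetical stand-in for
# the paper's entropy metric).
import math

def channel_entropy(pixels, n_bins=256):
    """Shannon entropy (bits) of the pixel-intensity histogram."""
    counts = [0] * n_bins
    for p in pixels:
        counts[p] += 1
    total = len(pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

flat = [128] * 1000                  # constant channel: carries no information
textured = list(range(256)) * 4      # uniform spread of grey levels
print(channel_entropy(flat))
print(channel_entropy(textured))     # -> 8.0, the maximum for 256 bins
```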

Book section

  • Mohamed, E., Sirlantzis, K. and Howells, G. (2019). Application of Transfer Learning for Object Detection on Manually Collected Data. In: Intelligent Systems and Applications Volume 1. Springer, pp. 913-931. Available at: http://dx.doi.org/10.1007/978-3-030-29516-5_69.
    This paper investigates the use of pre-trained deep learning neural networks for object detection on a manually collected dataset of real-life indoor objects. Availability of object-specific datasets is a great challenge, and the unavoidable task of collecting, processing and annotating ground truth data is laborious and time-consuming. In this paper, two well-known models (AlexNet and VGG16) are evaluated as feature extractors in a Faster R-CNN network. The network models have been trained end-to-end on the collected dataset. The study highlights the poor performance of state-of-the-art systems when dealing with small objects; modifying the detector design by redesigning the system's anchor boxes may help to tackle this problem. Detector results on the proposed dataset are collected and compared, and limitations and future work are discussed.
  • Yang, Y., Yan, X., Sirlantzis, K. and Howells, G. (2019). Application of Sliding Mode Trajectory Tracking Control Design for Two-Wheeled Mobile Robots. In: 2019 NASA/ESA Conference on Adaptive Hardware and Systems (AHS). IEEE. Available at: http://dx.doi.org/10.1109/AHS.2019.00012.
    A trajectory tracking controller is proposed to drive a wheeled mobile robot (WMR) to follow a predefined trajectory robustly within finite time in the presence of uncertainties. The two-wheeled mobile robot and the tracking error system are modelled by kinematic equations, and the stability and reachability of the sliding mode controller are analysed based on the system models. A two-wheeled mobile robot is built using an STM32F407 (ARM Cortex-M4 microcontroller) board; a MATLAB GUI and a cooperative real-time operating system are implemented in the C programming language to provide convenient system configuration and improve the overall tracking performance. It is demonstrated that line and circular trajectories are well tracked in both simulation and experiment.
  • Chatzidimitriadis, S., Oprea, P., Gillham, M. and Sirlantzis, K. (2017). Evaluation of 3D obstacle avoidance algorithm for smart powered wheelchairs. In: 2017 Seventh International Conference on Emerging Security Technologies (EST). IEEE, pp. 157-162. Available at: https://dx.doi.org/10.1109/EST.2017.8090416.
    This research investigates the feasibility of developing a novel 3D collision avoidance system for smart powered wheelchairs (PWCs) operating in a cluttered setting, using a scenario generated in a simulated environment built on the Robot Operating System development framework. We constructed an innovative interface to a commercially available powered wheelchair system in order to extract joystick data as the input for interacting with the simulation. By integrating with a standard PWC control system, the user can operate the PWC joystick with the model responding in real time. The wheelchair model was equipped with a Kinect depth sensor segmented into three layers, two representing the upper body and torso, and a third layer fused with a LIDAR for the leg section. When using the assisted driving algorithm there was a 91.7% reduction in collisions, and the course completion rate was 100% compared to 87.5% without the algorithm.
  • O’Brien, J., Hoque, S., Mulvihill, D. and Sirlantzis, K. (2017). Automated Cell Segmentation of Fission Yeast Phase Images - Segmenting Cells from Light Microscopy Images. In: Silveira, M., Fred, A., Gamboa, H. and Vaz, M. eds. Proceedings of the 10th International Joint Conference on Biomedical Engineering Systems and Technologies. Scitepress, pp. 92-99. Available at: http://dx.doi.org/10.5220/0006149100920099.
    Robust image analysis is an important aspect of all cell biology studies. The geometries of cells are critical for developing an understanding of biological processes. Time constraints placed on researchers lead to a narrower focus on what data are collected and recorded from an experiment, resulting in a loss of data. Currently, preprocessing of microscope images is followed by the use and parameterisation of built-in functions of various software packages to obtain information. Using the fission yeast, Schizosaccharomyces pombe, we propose novel, fully automated segmentation software for cells, with a significantly lower rate of segmentation errors than PombeX on the same dataset.
  • Hu, Y., Sirlantzis, K. and Howells, G. (2016). A study on iris textural correlation using steering kernels. In: 2016 IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEE. Available at: https://dx.doi.org/10.1109/BTAS.2016.7791160.
    Research on iris recognition has observed that iris texture has inherent radial correlation. However, a deeper insight into iris textural correlation is currently lacking, and little research has focused on a quantitative and comprehensive analysis of this correlation. In this paper, we perform a quantitative analysis of iris textural correlation. We employ steering kernels to model the textural correlation in images. We conduct experiments on three benchmark datasets covering iris captures of varying quality. We find that the local textural correlation varies due to local characteristics in iris images, while the general trend of textural correlation follows the radial direction. Moreover, we demonstrate that information on iris textural correlation can be utilized to improve iris recognition. We employ this information to produce iris codes, and we show that iris codes incorporating textural correlation information achieve improved performance compared to traditional iris codes.
  • Spanogianopoulos, S. and Sirlantzis, K. (2016). Car-Like Mobile Robot Navigation: A Survey. In: Tsihrintzis, G., Virvou, M. and Jain, L. C. eds. Intelligent Computing Systems: Emerging Application Areas. Berlin Heidelberg: Springer-Verlag Berlin Heidelberg. Available at: http://dx.doi.org/10.1007/978-3-662-49179-9_14.
    Car-like mobile robot navigation has been an active and challenging field both in academic research and in industry over the last few decades, and it has opened the way to building and testing (recently) autonomously driven robotic cars which can negotiate the complexity and uncertainties introduced by real urban and suburban environments. In this chapter, we review the basic principles and discuss the corresponding categories into which current methods and associated algorithms for car-like vehicle autonomous navigation fall. These methods are used especially for outdoor activities and have to account for the constraints imposed by the non-holonomic type of movement allowable for car-like mobile robots. In addition, we present a number of projects from various application areas in industry that are using these technologies. Our review starts with a description of a very popular and successful family of algorithms, namely the Rapidly-exploring Random Tree (RRT) planning method. After discussing the great variety of modifications proposed for the basic RRT algorithm, we turn our focus to versions which can address highly dynamic environments, especially those which become increasingly uncertain due to the limited accuracy of the sensors used. We subsequently explore methods which use Fuzzy Logic to address the uncertainty, and methods which consider navigation solutions within the holistic approach of a Simultaneous Localization and Mapping (SLAM) framework. Finally, we conclude with some remarks and thoughts about the current state of research and possible future developments.
  • Spanogianopoulos, S. and Sirlantzis, K. (2015). Non-holonomic Path Planning of Car-like Robot using RRT*FN. In: 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). IEEE, pp. 53-57. Available at: http://dx.doi.org/10.1109/URAI.2015.7358927.
    Path planning for car-like robots can be done using RRT and RRT*. Instead of generating the non-holonomic path between two sampled configurations, as in RRT, our approach finds a small incremental step towards the next configuration. Since the incremental step can be in any direction, we use RRT to guide the robot from the start configuration to the end configuration. Moreover, an effective variant of RRT* called RRT*-Fixed Nodes (RRT*FN) is used to demonstrate path planning for a non-holonomic car-like robot. The algorithm is further tested in different static environments. The results show that RRT*FN implemented with non-holonomic constraints is able to find a feasible solution at the cost of an increased number of iterations, while maintaining a fixed number of nodes.
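The incremental stepping idea can be sketched with simple bicycle-model kinematics: rather than connecting two sampled configurations exactly, the tree grows by short steered arcs towards the sample, respecting the steering limit that encodes the non-holonomic constraint. All parameter values here are hypothetical:

```python
# Illustrative non-holonomic incremental step (bicycle kinematics), not the
# paper's planner; speed, wheelbase, steering limit and dt are made up.
import math

def step_towards(state, target, speed=1.0, wheelbase=1.0, max_steer=0.5, dt=0.1):
    """One bicycle-model step (x, y, heading) steered towards a target (x, y)."""
    x, y, theta = state
    desired = math.atan2(target[1] - y, target[0] - x)
    # wrap the heading error to [-pi, pi], then clamp to the steering limit
    err = (desired - theta + math.pi) % (2 * math.pi) - math.pi
    steer = max(-max_steer, min(max_steer, err))
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    theta += speed / wheelbase * math.tan(steer) * dt
    return (x, y, theta)

goal = (2.0, 2.0)
state = (0.0, 0.0, 0.0)          # start at the origin, facing along +x
for _ in range(20):
    state = step_towards(state, goal)

d0 = math.hypot(goal[0], goal[1])
d1 = math.hypot(goal[0] - state[0], goal[1] - state[1])
print(d1 < d0)  # -> True: the short steered steps make progress towards the sample
```

In an RRT-style planner, each such step would become a new tree node, checked for collisions before being added.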
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2013). A Multi-algorithmic Colour Iris Recognition System. In: Soft Computing Applications Proceedings of the 5th International Workshop Soft Computing Applications (SOFA). Berlin, Germany: Springer, pp. 45-56. Available at: http://dx.doi.org/10.1007/978-3-642-33941-7_7.
    The reported accuracies of iris recognition systems are generally higher on near infrared images than on colour RGB images. To increase a colour iris recognition system's performance, a possible solution is a multi-algorithmic approach with an appropriate fusion mechanism. In the present work, this approach is investigated by fusing three algorithms at the score level to enhance the performance of a colour iris recognition system. The contribution of this paper consists of proposing two novel feature extraction methods for colour iris images, one based on a 3-bit encoder of the 8-neighbourhood and the other based on the grey-level co-occurrence matrix. The third algorithm employed uses the classical Gabor filters and phase encoding for feature extraction. A weighted average is used for matching score fusion. The efficiency of the proposed iris recognition system is demonstrated on the UBIRISv1 dataset.
  • Gherman, B. and Sirlantzis, K. (2013). Polynomial Order Prediction Using a Classifier Trained on Meta-Measurements. In: 2013 Fourth International Conference on Emerging Security Technologies. IEEE, pp. 117-120. Available at: http://dx.doi.org/10.1109/EST.2013.26.
    Polynomial regression is still widely used in engineering and economics, where polynomials of low order (usually less than tenth order) are fitted to experimental data. However, selecting the optimal order of the polynomial to be fitted to experimental data is not a straightforward problem. This paper investigates the performance of automated methods for predicting the order of the polynomial that can be fitted to the decision boundary formed between two classes in a pattern recognition problem. We have investigated statistical methods and proposed a machine learning method of predicting the order of the polynomial. Our proposed method computes a number of measurements on the input data, which are used by a classifier trained offline to predict the order of the polynomial that should be fitted to the decision boundary. We have considered two matching scenarios: one in which only exact matches are counted as correct, and another in which an exact match or a higher predicted order is counted as correct. Experimental results on synthetic data show that our proposed method predicts the exact order of the polynomial with 31.90% accuracy, as opposed to 13.22% for the best statistical method, but it also under-estimates the true order almost twice as often as the statistical methods on the same data points.
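A simplified analogue of the meta-measurement idea (not the paper's method): describe each noisy point set by how the least-squares residual falls as the candidate polynomial order grows, and let a nearest-neighbour rule stand in for the offline-trained classifier. The features, generator, and classifier here are all hypothetical stand-ins:

```python
# Toy order prediction from "meta-measurements" (log residuals of candidate
# fits); illustrative only, and much simpler than the paper's pipeline.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 60)

def meta_measurements(y, max_order=5):
    """Log residual norm of least-squares polynomial fits of order 0..max_order."""
    return np.array([
        np.log(np.linalg.norm(y - np.polyval(np.polyfit(x, y, d), x)) + 1e-12)
        for d in range(max_order + 1)
    ])

def noisy_poly(order):
    """Noisy samples of a random polynomial with a well-scaled leading coefficient."""
    coeffs = rng.uniform(0.5, 1.5, order + 1)
    return np.polyval(coeffs, x) + rng.normal(0.0, 0.001, x.size)

# "Training set" of meta-measurement vectors with known generating orders.
train = [(order, meta_measurements(noisy_poly(order)))
         for order in (1, 2, 3, 4) for _ in range(20)]

def predict(y):
    """1-nearest-neighbour prediction of the polynomial order."""
    m = meta_measurements(y)
    return min(train, key=lambda t: np.linalg.norm(t[1] - m))[0]

correct = sum(predict(noisy_poly(k)) == k for k in (1, 2, 3, 4))
print(correct, "of 4 orders recovered")
```

The residual profile steps down sharply once the candidate order reaches the true order, which is what makes it a usable meta-measurement.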
  • McConnon, G., Deravi, F., Hoque, S., Sirlantzis, K. and Howells, G. (2012). Impact of common ophthalmic disorders on iris segmentation. In: 2012 5th IAPR International Conference on Biometrics (ICB). IEEE, pp. 277-282. Available at: http://dx.doi.org/10.1109/ICB.2012.6199820.
    As iris recognition moves from constrained indoor and near-infrared systems towards unconstrained on-the-move and at-a-distance systems, possibly using visible light illumination, interest in measurement of the fidelity of the acquired images and their impact on recognition performance has grown. However, the impact of the subject's physiological characteristics on the nature of the acquired images has received little attention. In this paper we catalog a selection of the most common ophthalmic disorders and investigate some of their characteristics including their prevalence and possible impact on recognition performance. The paper also includes an experimental exploration of the effect of such conditions on segmentation of the iris image.
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2012). Image Enhancement vs Feature Fusion in Colour Iris Recognition. In: 2012 Third International Conference on Emerging Security Technologies. IEEE, pp. 53-57. Available at: http://dx.doi.org/10.1109/EST.2012.33.
    In iris recognition, most research has been conducted on operation under near infrared illumination. For an iris recognition system to be deployed on common hardware devices, such as laptops or mobile phones, the ability to work with visible spectrum iris images is necessary. Two of the main possible approaches to coping with noisy images in a colour iris recognition system are to apply image enhancement techniques, or to extract multiple types of features and subsequently employ an efficient fusion mechanism. The contribution of the present paper consists of comparing which of these two approaches performs best in both the identification and verification scenarios of a colour iris recognition system. The efficiency of the two approaches is demonstrated on the UBIRISv1 dataset.

Conference or workshop item

  • Oprea, P., Sirlantzis, K., Chatzidimitriadis, S., Doumas, O. and Howells, G. (2019). Artificial intelligence for safe assisted driving based on user head movements in robotic wheelchairs. In: 15th Conference on Global Challenges in Assistive Technology: Research, Policy & Practice. IOS Press. Available at: https://aaate2019.eu/aaate-2019-proceedings/.
    Wheelchair users do not always have the ability to control a powered wheelchair using a standard joystick due to factors that restrict the use of their arms and hands. For a certain number of these individuals, who still retain mobility of their head, alternative methods have been devised, such as chin-joysticks, head switches or sip-and-puff control. Such solutions can be bulky, cumbersome, unintuitive or simply uncomfortable and taxing for the user. This work presents an alternative head-based drive-control system for wheelchair users.
  • Mohamed, E., Dib, J., Sirlantzis, K. and Howells, G. (2019). Integrating ride dynamics measurements and user comfort assessment to smart robotic wheelchairs. In: 15th Conference on Global Challenges in Assistive Technology: Research, Policy & Practice. IOS Press. Available at: https://aaate2019.eu/aaate-2019-proceedings/.
    Individuals relying on wheelchairs for mobility are subject to the risk of injury due to their exposure to whole-body vibrations for long periods of time, as per ISO 2631-1. Our study evaluates the feasibility of integrating ride dynamics measurements (i.e. vertical accelerations), as expressions of user travel comfort assessment, into smart robotic wheelchairs. This will also help to mitigate the injury risk caused by continuous exposure to vibrations, using real-time electronic measurement systems to ensure that the wheelchair's movement dynamics (acceleration and speed) and the user's comfort are adapted to the surrounding environment, specifically the type of ground surface, as per the ISO standard mentioned previously.
  • Canoz, V., Gillham, M., Oprea, P., Chaumont, P., Bodin, A., Laux, P., Lebigre, M., Howells, G. and Sirlantzis, K. (2017). Embedded hardware for closing the gap between research and industry in the assistive powered wheelchair market. In: IEEE/SICE International Symposium on System Integration (SII 2016). IEEE. Available at: https://doi.org/10.1109/SII.2016.7843983.
    The literature abounds with smart wheelchair platforms at various stages of development, yet to date little of this technology has found its way to the marketplace. Many trials and much research have taken place over the last few decades, yet the end user has benefited precious little. There exist two fundamental difficulties when developing a smart powered wheelchair assistive system: the first is the need for the system to be fully compatible with all of the manufacturers, and the second is to produce a technology and business model which is marketable and therefore desirable to the manufacturers. This requires researchers to have access to hardware which can be used to develop practical systems that integrate and communicate seamlessly with current manufacturers' wheelchair systems. We present our powered wheelchair system, which integrates with 95% of the powered wheelchair controller market; our system allows researchers to access the low-level embedded system with more powerful computational devices running sophisticated software, enabling rapid development of algorithms and techniques. Once these have been evaluated, they can easily be ported to the embedded processor for real-time evaluation and clinical trial.
  • Hu, Y., Sirlantzis, K. and Howells, G. (2015). Exploiting stable and discriminative iris weight map for iris recognition under less constrained environment. In: 7th IEEE International Conference on Biometrics: Theory, Applications and Systems.
  • Hu, Y., Sirlantzis, K., Howells, G., Nicolas, R. and Rodriguez, P. (2015). An online background subtraction algorithm using contiguously weighted linear regression. In: EUSIPCO 2015 - European Signal Processing Conference. Available at: http://www.eusipco2015.org/.
  • Ragot, N., Caron, G., Sakel, M. and Sirlantzis, K. (2015). COALAS: An EU multidisciplinary research project for assistive robotics neuro-rehabilitation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  • Khemmar, R., Ertaud, J., Sirlantzis, K. and Savatier, X. (2015). V2G-based Smart Autonomous Vehicle For Urban Mobility using Renewable Energy. In: SMART 2015, the Fourth International Conference on Smart Systems, Devices and Technologies: URBAN COMPUTING 2015, the International Symposium on Emerging Frontiers of Urban Computing. Brussels, Belgium, pp. 62-68.
    IRSEEM is the coordinator of a research program, Savemore [19], aiming to develop and demonstrate the viability and effectiveness of systems for electrical transport and urban logistics based on autonomous robotic electric vehicles operating within a smart grid electrical power distribution framework. As part of this project, our work focuses on the study of the coupling of electric vehicles with renewable energy. At the scale of a city, electric vehicles can be considered a means of intermittent storage of electric power which can be returned to the network when required (e.g., at times of the day when demand spikes). When these vehicles belong to a controlled and intelligent fleet, the network organization is dynamic and leads to a smart grid. The widespread use of electric vehicles in cities, coupled with renewable energy, appears to be a powerful tool to help local and regional authorities implement the European agenda for low-carbon transport, reduced air pollution and energy savings. In this paper, we present a Vehicle to Grid model which implements the interaction between an electric vehicle and a smart grid. The model takes into account several kinds of parameters related to the battery, the charging station, the size of the fleet and the power grid, such as the expansion coefficient. A statistical approach is adopted for the setting of these parameters to determine the significant ones. Several simulations are performed to validate the model. As a first step, we studied the behavior of the model on a typical day of a person travelling from home to work (in France). As a second step, in order to study the power consumption behavior of the model, we tested it during several seasons. The results show the effectiveness of the developed model.
  • Catley, E., Sirlantzis, K., Howells, G. and Kelly, S. (2014). Preliminary investigations into human fall verification in static images using the NAO humanoid robot. In: CareTECH.
  • Motoc, I., Sirlantzis, K., Spurgeon, S. and Lee, P. (2014). A Stable and Robust Walking Algorithm for the Humanoid Robot NAO based on the Zero Moment Point. In: CareTECH.
  • Spanogianopoulos, S., Sirlantzis, K., Mentzelopoulos, M. and Protopsaltis, A. (2014). Human computer interaction using gestures for mobile devices and serious games: A review. In: 2014 International Conference on Interactive Mobile Communication Technologies and Learning (IMCL). IEEE, pp. 310-314. Available at: http://doi.org/10.1109/IMCTL.2014.7011154.
    Human-Computer Interaction (HCI) through novel interfaces has been an active challenge in industry over the past decades and has opened the way to communicating by means of verbal, hand and body gestures, using the latest technologies, in a variety of application areas such as video games, training and simulation. However, accurate recognition of gestures is still a challenge. In this paper, we review the basic principles and current methodologies used for collecting raw gesture data from the user and recognizing the actions the user performs, as well as the technologies currently used for gesture-based HCI in the games industry. In addition, we present a set of projects from various applications in the games industry that use gestural interaction.
  • Hu, Y., Sirlantzis, K. and Howells, G. (2014). A Robust Algorithm for Colour Iris Segmentation Based on 1-norm Regression. In: International Joint Conference on Biometrics.
  • Catley, E., Sirlantzis, K., Kelly, S. and Howells, G. (2014). Non-overlapping dual camera fall detection using the NAO humanoid robot. In: 5th International Conference on Emerging Security Technologies. pp. 67-70.
    With an aging population and a greater desire for independence, the dangers of falling incidents in the elderly have become particularly pronounced. In light of this, several technologies have been developed with the aim of preventing or monitoring falls. Failure to strike the balance between several factors, including reliability, complexity and invasion of privacy, has proven prohibitive to the uptake of these systems. Some systems rely on cameras being mounted in all rooms of a user's home, while others require a device to be worn 24 hours a day. This paper explores a system using a humanoid NAO robot with dual vertically mounted cameras to perform the task of fall detection.
  • Motoc, I., Sirlantzis, K., Spurgeon, S. and Lee, P. (2014). Zero Moment Point/Inverted Pendulum-Based Walking Algorithm for the NAO Robot. In: 2014 Fifth International Conference on Emerging Security Technologies (EST). IEEE, pp. 63-66. Available at: http://doi.org/10.1109/EST.2014.34.
    Bipedal walking is a difficult task for a robot to execute. Factors such as arm movement or the constant shifting of the Center of Mass from one foot to the other can lead to an unstable gait, which is why the trajectory of the Center of Mass should be calculated before making the next step. This paper presents a walking algorithm based on the Zero Moment Point for the NAO robot, a 58 cm tall humanoid bipedal robot produced by the French company Aldebaran Robotics, for which walking is an even more difficult task due to its hardware limitations. The Zero Moment Point-based algorithm is used to calculate the trajectory of the Center of Mass and obtain a stable and robust walk for NAO. The algorithm was evaluated in a simulated environment using the NAO robot.
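As a rough illustration of the balance criterion behind the abstract above, here is a minimal sketch of the Zero Moment Point for a one-dimensional linear inverted pendulum model. The function names and numbers are illustrative only and are not taken from the paper.

```python
def zmp_x(com_x, com_z, com_x_acc, g=9.81):
    """x-coordinate of the Zero Moment Point for a linear inverted
    pendulum: the Centre of Mass ground projection, shifted against
    the direction of the CoM's horizontal acceleration."""
    return com_x - (com_z / g) * com_x_acc

def statically_stable(zmp, foot_min, foot_max):
    """The gait is balanced while the ZMP stays inside the support
    polygon (reduced here to a 1-D foot interval)."""
    return foot_min <= zmp <= foot_max

# With no CoM acceleration the ZMP sits directly under the CoM.
print(zmp_x(0.02, 0.26, 0.0))
```

A walking planner of this kind chooses CoM trajectories so that the resulting ZMP never leaves the support foot's footprint during each step.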
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2013). Optimizing 2D Gabor Filters for Iris Recognition. In: 4th International Conference on Emerging Security Technologies (EST 2013). IEEE, pp. 47-50. Available at: http://dx.doi.org/10.1109/EST.2013.15.
    The randomness and richness present in the iris texture make 2D Gabor filter bank analysis a suitable technique for iris recognition systems. To accurately characterize complex texture structures using 2D Gabor filters, it is necessary to use multiple sets of parameters for this type of filter. This paper proposes a technique for optimizing multiple sets of 2D Gabor filter parameters to gradually enhance the accuracy of an iris recognition system. The proposed methodology is suitable for both near infrared and visible spectrum iris images. To illustrate the efficiency of the filter bank design technique, the UBIRISv1 database was used for benchmarking.
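For readers unfamiliar with the filters in question, the following is a minimal sketch of a real-valued 2D Gabor kernel (a cosine carrier under a Gaussian envelope). The parameterisation is a common textbook form, not the specific optimized sets from the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter: a cosine carrier of the given
    wavelength, oriented at angle theta, under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

kernel = gabor_kernel(size=15, wavelength=8, theta=0.0, sigma=4.0)
# Iris features are typically derived from the responses of a bank of
# such kernels (several wavelengths and orientations) applied to a
# normalised iris image.
```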
  • Ragot, N., Bouzbouz, F., Khemmar, R., Ertaud, J., Kokosy, A., Labbani-Igbida, O., Sajous, P., Niyonsaba, E., Reguer, D., Hu, H., McDonald-Maier, K., Sirlantzis, K., Howells, G., Pepper, M. and Sakel, M. (2013). Enhancing the Autonomy of Disabled Persons: Assistive Technologies Directed by User Feedback. In: 4th International Conference on Emerging Security Technologies (EST 2013). IEEE, pp. 71-74. Available at: http://dx.doi.org/10.1109/EST.2013.20.
    Europe faces a major and growing healthcare problem due to population growth, increasing longevity and an aging population living with disability. Dependent, elderly, disabled and vulnerable persons are affected, since they wish to live at home as long as possible. This aspiration is also shared by national policies and communities across the EU. To ensure the optimum care of dependent people, innovative solutions that maintain an independent lifestyle are encouraged. This paper outlines two projects, SYSIASS and COALAS, which aim to develop a set of technology-based solutions to meet the needs of, and empower, these people by enhancing mobility and communication.
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2013). A Novel Iris Clustering Approach Using LAB Color Features. In: 4th IEEE International Symposium on Electrical And Electronics Engineering (ISEEE 2013). IEEE, pp. 1-4. Available at: http://dx.doi.org/10.1109/ISEEE.2013.6674362.
    Interesting results of color clustering for the iris images in the UBIRISv1 database are presented. The iris colors are characterized by feature vectors with 80 components corresponding to histogram bins computed in the CIELAB color space. The feature extraction is applied to the first-session eye images after they undergo an iris segmentation process. An agglomerative hierarchical algorithm is used to organize 1,205 segmented iris images into 8 clusters based on their color content.
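A minimal sketch of an 80-component colour histogram feature of the kind described above. The 16/32/32 split of bins across the L*, a* and b* channels is an assumption made for illustration; the abstract does not state the exact layout.

```python
import numpy as np

def lab_histogram_features(lab_pixels, bins=(16, 32, 32)):
    """80-component colour feature: concatenated per-channel histograms
    of the segmented iris pixels in CIELAB, normalised so that images
    of different sizes are comparable."""
    L, a, b = lab_pixels[:, 0], lab_pixels[:, 1], lab_pixels[:, 2]
    h_L, _ = np.histogram(L, bins=bins[0], range=(0.0, 100.0))
    h_a, _ = np.histogram(a, bins=bins[1], range=(-128.0, 128.0))
    h_b, _ = np.histogram(b, bins=bins[2], range=(-128.0, 128.0))
    features = np.concatenate([h_L, h_a, h_b]).astype(float)
    return features / features.sum()
```

Feature vectors like this can then be handed to an agglomerative hierarchical clustering routine (e.g. linkage over pairwise distances) to group the images by colour content.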
  • Guness, S., Deravi, F., Sirlantzis, K., Pepper, M. and Sakel, M. (2013). A Novel Depth-based Head Tracking and Gesture Recognition System. In: 12th European AAATE (Association for the Advancement of Assistive Technology in Europe) Conference. IOS Press EBooks, pp. 1021-1026. Available at: http://dx.doi.org/10.3233/978-1-61499-304-9-1021.
    This paper presents the architecture for a novel RGB-D based assistive device that incorporates depth as well as RGB data to enhance head tracking and facial gesture based control for severely disabled users. Using depth information it is possible to remove background clutter and therefore achieve a more accurate and robust performance. The system is compared with the CameraMouse, SmartNav and our previous 2D head tracking system. For the RGB-D system, the effective throughput of dwell clicking increased by a third (from 0.21 to 0.30 bits per second) and that of blink clicking doubled (from 0.15 to 0.28 bits per second) compared to the 2D system.
  • Guness, S., Deravi, F., Sirlantzis, K., Pepper, M. and Sakel, M. (2012). Developing a vision based gesture recognition system to control assistive technology in neuro-disability. In: 2012 Annual Conference, American Congress of Rehabilitation Medicine (2012 ACRM-ASNR). Elsevier Science B.V., p. e1. Available at: http://dx.doi.org/10.1016/j.apmr.2012.08.202.
  • Guness, S., Deravi, F., Sirlantzis, K., Pepper, M. and Sakel, M. (2012). Evaluation of vision-based head-trackers for assistive devices. In: 34th Annual International Conference of the IEEE EMBS. pp. 4804-4807. Available at: http://dx.doi.org/10.1109/EMBC.2012.6347068.
    This paper presents a new evaluation methodology for assistive devices employing head-tracking systems based on an adaptation of the Fitts Test. This methodology is used to compare the effectiveness and performance of a new vision-based head tracking system using face, skin and motion detection techniques with two existing head tracking devices and a standard mouse. The application context and the abilities of the user are combined with the results from the modified Fitts Test to help determine the most appropriate devices for the user. The results suggest that this modified form of the Fitts test can be effectively employed for the comparison of different access technologies.
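The Fitts-style measure underlying this kind of evaluation can be sketched in a few lines. The Shannon formulation used here is the standard one; the paper's exact adaptation of the Fitts Test may differ.

```python
import math

def fitts_throughput(distance, width, movement_time_s):
    """Effective throughput in bits per second: the Shannon-form
    index of difficulty, ID = log2(D/W + 1), divided by the time
    taken to acquire the target."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    return index_of_difficulty / movement_time_s

# A 300 px movement to a 100 px target acquired in 2 s:
# ID = log2(4) = 2 bits, so throughput = 1 bit/s.
print(fitts_throughput(300, 100, 2.0))
```

Comparing devices by throughput rather than raw speed rewards head trackers that are both fast and accurate across targets of varying difficulty.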
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2012). A Visible Light Iris Recognition System using Colour Information. In: 9th IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA 2012). Acta Press. Available at: http://dx.doi.org/10.2316/P.2012.778-019.
    The iris has been shown to be a highly reliable biometric modality with almost perfect authentication accuracy. However, a classical iris recognition system operates under near infrared illumination, which is a major constraint for a range of applications. In this paper, we propose an iris recognition system which is able to cope with noisy colour iris images by employing image processing techniques together with a Multiple Classifier System to fuse the information from various colour channels. There are two main contributions in the present work: first, we adapt standard iris features, proposed in the literature for near infrared images, to match the characteristics of colour iris images; second, we introduce a robust fusion mechanism to combine the features from various colour channels. With a feature size of only 360 real numbers, the efficiency of the proposed biometric system is demonstrated on the UBIRISv1 dataset for both identification and verification scenarios.
  • Radu, P., Sirlantzis, K., Howells, G., Hoque, S. and Deravi, F. (2011). A Versatile Iris Segmentation Algorithm. In: 2011 BIOSIG Conference on Biometrics and Security.
  • Radu, P., Sirlantzis, K., Howells, G., Deravi, F. and Hoque, S. (2011). Information Fusion for Unconstrained Iris Recognition. In: International Conference on Emerging Security Technologies (EST 2011).

Thesis

  • Spanogianopoulos, S. (2017). A New Approach towards Non-Holonomic Path Planning of Car-Like Robots Using Rapidly Random Tree Fixed Nodes (RRT*FN).
    Autonomous car driving is gaining attention in industry and is an ongoing research topic in the scientific community. Assuming that the cars moving on the road are all autonomous, this thesis introduces an elegant approach to generating non-holonomic collision-free motion of a car connecting any two poses (configurations) set by the user. In particular, this thesis focuses on "path-planning" of car-like robots in the presence of static obstacles.
    Path planning of car-like robots can be done using RRT and RRT*. Instead of generating the non-holonomic path between two sampled configurations in RRT, our approach finds a small incremental step towards the next random configuration. Since the incremental step can be in any direction we use RRT to guide the robot from start configuration to end configuration.
    This "easy-to-implement" mechanism provides the flexibility to enable standard planners to solve for non-holonomic robots without many modifications. Thus, the strength of such planners for car path planning can easily be realized. This thesis demonstrates this point by applying the mechanism to an effective variant of RRT* called RRT*-Fixed Nodes (RRT*FN).
    Experiments are conducted by incorporating our mechanism into RRT*FN (termed RRT*FN-NH) to show the effectiveness and quality of the non-holonomic paths generated. The experiments are conducted on typical benchmark static environments, and the results indicate that RRT*FN-NH mostly finds feasible non-holonomic solutions with a fixed number of nodes (satisfying memory requirements) at the cost of an increased number of iterations, in multiples of 10k.
    Thus, this thesis proves the applicability of the mechanism to a highly constrained planner like RRT*FN, where the path needs to be found with a fixed number of nodes. Although comparing the algorithm (RRT*FN-NH) with other existing planners is not the focus of this thesis, there are considerable advantages to the mechanism when applied to a planner: a) instantaneous non-holonomic path generation using the strengths of that particular planner, b) the ability to modify non-holonomic paths on-the-fly, and c) simple integration with most existing planners.
    Moreover, the applicability of this mechanism using RRT*FN for non-holonomic path generation of a car is shown for more realistic urban environments with typical narrow curved roads. The experiments were done on actual road maps obtained from Google Maps, and the feasibility of non-holonomic path generation was shown for such environments. The typical number of iterations needed to find feasible solutions was also in multiples of 10k. Increasing speed profiles of the car were tested by limiting maximum speed and acceleration to see the effect on the number of iterations.
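The incremental, steering-limited extension step described in the abstract can be sketched as follows. This simple bicycle-like update is an illustrative stand-in for the thesis's mechanism, not its actual implementation.

```python
import math

def nonholonomic_step(x, y, heading, target, step=0.5,
                      max_turn=math.radians(25)):
    """One incremental move towards a sampled configuration under a
    steering limit, so the resulting path stays drivable by a
    car-like robot."""
    desired = math.atan2(target[1] - y, target[0] - x)
    # shortest signed angle to the desired bearing, then clamp it
    turn = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    turn = max(-max_turn, min(max_turn, turn))
    heading += turn
    return (x + step * math.cos(heading),
            y + step * math.sin(heading),
            heading)
```

An RRT/RRT*FN-style planner would call this repeatedly from the nearest tree node towards each random sample, adding only the reachable intermediate configurations to the tree.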
  • Hu, Y. (2017). Improving Less Constrained Iris Recognition.
    The iris has been one of the most reliable biometric traits for automatic human authentication due to its highly stable and distinctive patterns. Traditional iris recognition algorithms have achieved remarkable performance in strictly constrained environments, with the subject standing still and with the iris captured at a close distance. This enables the wide deployment of iris recognition systems in applications such as border control and access control. However, in less constrained environments with the subject at-a-distance and on-the-move, iris recognition performance deteriorates significantly, since such environments induce noise and degradation in the captured images. This restricts the applicability and practicality of iris recognition technology for some real-world applications with more open capturing conditions, such as surveillance, forensic and mobile device security applications. Therefore, robust algorithms for less constrained iris recognition are desirable for the wider deployment of iris recognition systems.

    This thesis focuses on improving less constrained iris recognition. Five methods are proposed to improve the performance of different stages in less constrained iris recognition. First, a robust iris segmentation algorithm is developed using l1-norm regression and model selection. This algorithm formulates iris segmentation as robust l1-norm regression problems. To further enhance the robustness, multiple segmentation results are produced by applying l1-norm regression to different models, and a model selection technique is used to select the most reliable result. Second, an iris liveness detection method using regional features is investigated. This method seeks not only low level features, but also high level feature distributions for more accurate and robust iris liveness detection. Third, a signal-level information fusion algorithm is presented to mitigate the noise in less constrained iris captures. With multiple noisy iris captures, this algorithm proposes a sparse-error low rank matrix factorization model to separate noiseless iris structures and noise. The noiseless structures are preserved and emphasised during the fusion process, while the noise is suppressed, in order to obtain more reliable signals for recognition. Fourth, a method to generate optimal iris codes is proposed. This method considers iris code generation from the perspective of optimization. It formulates traditional iris code generation method as an optimization problem; an additional objective term modelling the spatial correlations in iris codes is applied to this optimization problem to produce more effective iris codes. Fifth, an iris weight map method is studied for robust iris matching. This method considers both intra-class bit stability and inter-class bit discriminability in iris codes. It emphasises highly stable and discriminative bits for iris matching, enhancing the robustness of iris matching.

    Comprehensive experimental analyses are performed on benchmark datasets for each of the above methods. The results indicate that the presented methods are effective for less constrained iris recognition, generally improving on state-of-the-art performance.
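The first of the methods above relies on l1-norm (least absolute deviations) regression, which can be sketched via iteratively reweighted least squares. This generic solver illustrates why the fit resists outliers such as eyelash or reflection pixels on an iris boundary; it is not the thesis's exact model-selection formulation.

```python
import numpy as np

def l1_regression(X, y, iters=50, eps=1e-6):
    """Least absolute deviations fit via iteratively reweighted least
    squares: each pass re-weights points by 1/|residual|, so gross
    outliers are progressively ignored."""
    w = np.ones(len(y))
    for _ in range(iters):
        Xw = X * w[:, None]                      # forms X^T W X, X^T W y
        beta = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
    return beta

# Nine points on y = 2x + 1 plus one gross outlier: the l1 fit
# recovers the underlying line, where an l2 fit would be dragged off.
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[5] += 100.0
beta = l1_regression(np.column_stack([np.ones(10), x]), y)
```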
  • Gherman, B. (2016). Modular Neural Networks Applied to Pattern Recognition Tasks.
    Pattern recognition has become an accessible tool in developing advanced adaptive products. The need for such products is not diminishing; on the contrary, requirements for systems that are more and more aware of their environmental circumstances are constantly growing. Feed-forward neural networks are used to learn patterns in their training data without the need to discover the relationships present in the data by hand. However, the problem of estimating the required size of the neural network is still not solved. If we choose a neural network that is too small for a given task, the network is unable to "comprehend" the intricacies of the data. On the other hand, if we choose a network that is too big for the task, there are too many parameters to tune, we can fall into the "curse of dimensionality", or, even worse, the training algorithm can easily be trapped in local minima of the error surface. Therefore, we investigate possible ways to find the 'Goldilocks' size for a feed-forward neural network (one that is just right in some sense), given a training set. Furthermore, we use a paradigm attributed to the Roman Empire and employed on a wide scale in computer programming, the "Divide-et-Impera" approach: divide a given dataset into multiple sub-datasets, solve the problem for each sub-dataset, and fuse the results of all the sub-problems to form the result for the initial problem as a whole. To this effect we investigated modular neural networks and their performance.
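The "Divide-et-Impera" pipeline described above can be sketched with deliberately tiny stand-in modules; nearest-centroid "experts" replace the sub-networks here purely for illustration, and the interleaved split is an assumption so every module sees every class.

```python
import numpy as np

class CentroidModule:
    """A tiny 'expert' standing in for one sub-network: classifies by
    the nearest class centroid of its own data slice."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.labels])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[d.argmin(axis=1)]

def divide_et_impera(X, y, n_modules=3):
    """Split the training set into sub-datasets, train one module per
    slice, and fuse the module outputs by majority vote."""
    slices = [np.arange(i, len(y), n_modules) for i in range(n_modules)]
    modules = [CentroidModule().fit(X[s], y[s]) for s in slices]

    def predict(Xq):
        votes = np.stack([m.predict(Xq) for m in modules])  # modules x samples
        return np.array([np.bincount(col).argmax() for col in votes.T])
    return predict
```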
  • Guness, S. (2015). Development and Evaluation of Facial Gesture Recognition and Head Tracking for Assistive Technologies.
    Globally, the World Health Organisation estimates that there are about 1 billion people living with disabilities; the UK alone has about 10 million people with neurological disabilities. In extreme cases, individuals with disabilities such as Motor Neuron Disease (MND), Cerebral Palsy (CP) and Multiple Sclerosis (MS) may only be able to perform limited head movement, move their eyes or make facial gestures. The aim of this research is to investigate low-cost and reliable assistive devices using automatic gesture recognition systems that will enable the most severely disabled users to access electronic assistive technologies and communication devices, thus enabling them to communicate with friends and relatives.

    The research presented in this thesis is concerned with the detection of head movements, eye movements and facial gestures through the analysis of video and depth images. The proposed system, using web cameras or an RGB-D sensor coupled with computer vision and pattern recognition techniques, must be able to detect the movement of the user and calibrate it to facilitate communication. The system also lets the user choose the sensor to be used, i.e. the web camera or the RGB-D sensor, and the interaction or switching mechanism, i.e. eye blink or eyebrow movement. This ability of the system to let users select according to their needs makes it easier on them, as they do not have to learn to operate a new system as their condition changes.

    This research aims to explore in particular the use of depth data for head-movement-based assistive devices and the usability of different gesture modalities as switching mechanisms. The proposed framework consists of a facial feature detection module, a head tracking module and a gesture recognition module. Techniques such as Haar cascades and skin detection were used to detect facial features such as the face, eyes and nose. The depth data from the RGB-D sensor was used to segment the area nearest to the sensor. Both the head tracking module and the gesture recognition module rely on the facial feature module, as it provides data such as the location of the facial features. The head tracking module uses the facial feature data to calculate the centroid of the face, the distance to the sensor, and the location of the eyes and the nose, to detect head motion and translate it into pointer movement. The gesture detection module uses features such as the location of the eyes, the location and size of the pupil, and the interocular distance to detect blinks or eyebrow movements and perform a click action. The research resulted in the creation of four assistive devices based on the combinations of the sensors (web camera and RGB-D sensor) and facial gestures (blink and eyebrow movement): Webcam-Blink, Webcam-Eyebrows, Kinect-Blink and Kinect-Eyebrows. Another outcome of this research has been the creation of an evaluation framework based on Fitts' Law, with a modified multi-directional task including a central location, and a dataset consisting of both colour images and depth data of people performing head movements towards different directions and performing gestures such as eye blinks, eyebrow movements and mouth movements.

    The devices have been tested with healthy participants. From the observed data, it was found that both Kinect-based devices have lower Movement Time and higher Index of Performance and Effective Throughput than the web camera-based devices, showing that the introduction of the depth data has had a positive impact on the head tracking algorithm. The usability assessment survey suggests that there is a significant difference in eye fatigue experienced by the participants; the blink gesture was less tiring to the eyes than the eyebrow movement gesture. The analysis of the gestures also showed that the Index of Difficulty has a large effect on the error rates of gesture detection: the smaller the Index of Difficulty, the higher the error rate.
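As an illustration of how the interocular distance mentioned above makes a gesture test scale-invariant, consider this minimal blink check. The function, its inputs and the 0.05 threshold are hypothetical, not values from the thesis.

```python
def blink_detected(eye_openness_px, interocular_px, ratio_threshold=0.05):
    """Scale-invariant blink test: eye openness in pixels is divided
    by the interocular distance, so one threshold works regardless of
    the user's distance from the camera."""
    return (eye_openness_px / interocular_px) < ratio_threshold
```

Because both measurements shrink together as the user moves away from the camera, their ratio (and hence the decision) stays the same.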


  • Motoc, I., Sirlantzis, K. and Spurgeon, S. (2016). A novel robust arm movement algorithm for humanoid robots based on finite time control. Journal of Intelligent & Robotic Systems.