
Dr Stuart Gibson

Senior Lecturer in Forensic Science
Director of Innovation and Enterprise

About

Dr Stuart Gibson was appointed Lecturer in the School of Physical Sciences at the University of Kent in 2007. He is co-inventor of the EFIT-V facial composite system, which is currently used by the majority of UK police constabularies and in numerous other countries.

Research interests

The main theme of Dr Stuart Gibson’s research is forensic applications of digital image processing and machine learning. Specific areas of expertise include:

  • facial composites for use in criminal investigations
  • digital image forensics
  • interactive evolutionary computation.

Teaching

Stuart teaches numerical and computational methods, mathematical techniques for the physical sciences, and digital forensics.

Publications

Article

  • Liu, J. et al. (2019). Dynamic spectrum matching with one-shot learning. Chemometrics and Intelligent Laboratory Systems [Online] 184:175-181. Available at: https://doi.org/10.1016/j.chemolab.2018.12.005.
    Convolutional neural networks (CNN) have been shown to provide a good solution for classification problems that utilize data obtained from vibrational spectroscopy. Moreover, CNNs are capable of identifying substances from noisy spectra without the need for additional preprocessing. However, their application in practical spectroscopy is restricted for two reasons. First, the effectiveness of classification using CNNs diminishes rapidly when only a small number of spectra per substance are available for training (which is a typical situation in real applications). Second, to accommodate new, previously unseen substance classes, the network must be retrained, which is computationally intensive. Here we address these issues by reformulating a multi-class classification problem with a large number of classes as a binary classification problem for which the available data is sufficient for representation learning. Hence, we define the learning task as identifying pairs of inputs as belonging to the same class or different classes. We achieve this using a Siamese convolutional neural network. A novel sampling strategy is proposed to address the imbalance problem in training the Siamese network. The trained network can classify samples of previously unseen substance classes using just a single reference sample (termed one-shot learning in the machine learning community). Our results on three independent Raman datasets demonstrate much better accuracy than other practical systems to date, while allowing effortless updates of the system's database with new substance classes.
    (A minimal illustrative code sketch of this one-shot matching approach appears after this publication list.)
  • Alsufyani, A. et al. (2018). Breakthrough Percepts of Famous Faces. Psychophysiology [Online]. Available at: https://doi.org/10.1111/psyp.13279.
    Recently, we showed that presenting salient names (i.e. a participant’s first name) on the fringe of awareness (in Rapid Serial Visual Presentation) breaks through into awareness, resulting in the generation of a P3, which (if concealed information is presented) could be used to differentiate between deceivers and non-deceivers (Bowman et al., 2013; Bowman, Filetti, Alsufyani, Janssen, & Su, 2014). The aim of the present study was to explore whether face stimuli can be used in an ERP-based RSVP paradigm to infer recognition of broadly familiar faces. To do this, we explored whether famous faces differentially break into awareness when presented in RSVP and, importantly, whether ERPs can be used to detect these ‘breakthrough’ events on an individual basis. Our findings provide evidence that famous faces are differentially perceived and processed by participants’ brains as compared to novel (or unfamiliar) faces. EEG data revealed large differences in brain responses between these conditions.
  • Liu, J. et al. (2017). Deep Convolutional Neural Networks for Raman Spectrum Recognition: A Unified Solution. Analyst [Online] 142:4067-4074. Available at: http://dx.doi.org/10.1039/C7AN01371J.
    Machine learning methods have found many applications in Raman spectroscopy, especially for the identification of chemical species. However, almost all of these methods require non-trivial preprocessing such as baseline correction and/or PCA as an essential step. Here we describe our unified solution for the identification of chemical species in which a convolutional neural network is trained to automatically identify substances according to their Raman spectrum without the need for preprocessing. We evaluated our approach using the RRUFF spectral database, comprising mineral sample data. Superior classification performance is demonstrated compared with other frequently used machine learning algorithms including the popular support vector machine method.
  • Mididoddi, C. et al. (2017). High throughput photonic time stretch optical coherence tomography with data compression. IEEE Photonics Journal [Online]. Available at: https://doi.org/10.1109/JPHOT.2017.2716179.
    Photonic time stretch enables real time high throughput optical coherence tomography (OCT), but with massive data volume being a real challenge. In this paper, data compression in high throughput optical time stretch OCT has been explored and experimentally demonstrated. This is made possible by exploiting spectral sparsity of encoded optical pulse spectrum using compressive sensing (CS) approach. Both randomization and integration have been implemented in the optical domain avoiding an electronic bottleneck. A data compression ratio of 66% has been achieved in high throughput OCT measurements with 1.51 MHz axial scan rate using greatly reduced data sampling rate of 50 MS/s. Potential to improve compression ratio has been exploited. In addition, using a dual pulse integration method, capability of improving frequency measurement resolution in the proposed system has been demonstrated. A number of optimization algorithms for the reconstruction of the frequency-domain OCT signals have been compared in terms of reconstruction accuracy and efficiency. Our results show that the L1 Magic implementation of the primal-dual interior point method offers the best compromise between accuracy and reconstruction time of the time-stretch OCT signal tested.
  • Hernández-Castro, C. et al. (2017). Using machine learning to identify common flaws in CAPTCHA design: FunCAPTCHA case analysis. Computers and Security [Online] 70:744-756. Available at: https://doi.org/10.1016/j.cose.2017.05.005.
    Human Interactive Proofs (HIPs or CAPTCHAs) have become a first-level security measure on the Internet to avoid automatic attacks or minimize their effects. All the most widespread, successful or interesting CAPTCHA designs put to scrutiny have been successfully broken. Many of these attacks have been side-channel attacks. New designs are proposed to tackle these security problems while improving the human interface. FunCAPTCHA is the first commercial implementation of a gender classification CAPTCHA, with reported improvements in conversion rates. This article finds weaknesses in the security of FunCAPTCHA and uses simple machine learning (ML) analysis to test them. It shows a side-channel attack that leverages these flaws and successfully solves FunCAPTCHA on 90% of occasions without using meaningful image analysis. This simple yet effective security analysis can be applied with minor modifications to other HIP proposals, allowing one to check whether they leak enough information that would in turn allow for simple side-channel attacks.
  • Osadchy, M. et al. (2017). No Bot Expects the DeepCAPTCHA! Introducing Immutable Adversarial Examples, with Applications to CAPTCHA Generation. IEEE Transactions on Information Forensics and Security [Online]. Available at: http://dx.doi.org/10.1109/TIFS.2017.2718479.
    Recent advances in Deep Learning (DL) allow for solving complex AI problems that used to be considered very hard. While this progress has advanced many fields, it is considered to be bad news for CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), the security of which rests on the hardness of some learning problems.
    In this paper we introduce DeepCAPTCHA, a new and secure CAPTCHA scheme based on adversarial examples, an inherent limitation of current Deep Learning networks. These adversarial examples are constructed inputs, either synthesized from scratch or computed by adding a small and specific perturbation called adversarial noise to correctly classified items, causing the targeted DL network to misclassify them. We show that plain adversarial noise is insufficient to achieve secure CAPTCHA schemes, which leads us to introduce immutable adversarial noise, an adversarial noise that is resistant to removal attempts. In this work we implement a proof-of-concept system, and its analysis shows that the scheme offers high security and good usability compared to the best previously existing CAPTCHAs.
  • Davis, J. et al. (2016). Holistic facial composite construction and subsequent lineup identification accuracy: Comparing adults and children. Journal of Psychology [Online] 150:102-118. Available at: http://dx.doi.org/10.1080/00223980.2015.1009867.
    When the police have no suspect, they may ask an eyewitness to construct a facial composite of that suspect from memory. Faces are primarily processed holistically, and recently developed computerized holistic facial composite systems (e.g., EFIT-V) have been designed to match these processes. The reported research compared children aged 6–11 years with adults on their ability to construct a recognizable EFIT-V composite. Adult constructors' EFIT-Vs received significantly higher composite-suspect likeness ratings from assessors than children's, although there were some notable exceptions. In comparison to adults, the child constructors also overestimated the composite-suspect likeness of their own EFIT-Vs. In a second phase, there were no differences between adult controls and constructors in correct identification rates from video lineups. However, correct suspect identification rates by child constructors were lower than those of child controls, suggesting that a child's memory for the suspect can be adversely influenced by composite construction. Nevertheless, all child constructors coped with the demands of the EFIT-V system, and the implications for research, theory, and criminal justice system practice are discussed.
  • Davis, J. et al. (2015). Holistic facial composite creation and subsequent video line-up eyewitness identification paradigm. Journal of Visualized Experiments [Online] 106. Available at: http://dx.doi.org/10.3791/53298.
    The paradigm detailed in this manuscript describes an applied experimental method based on real police investigations during which an eyewitness or victim to a crime may create from memory a holistic facial composite of the culprit with the assistance of a police operator. The aim is that the composite is recognized by someone who believes that they know the culprit. For this paradigm, participants view a culprit actor on video and following a delay, participant-witnesses construct a holistic system facial composite. Controls do not construct a composite. From a series of arrays of computer-generated, but realistic faces, the holistic system construction method primarily requires participant-witnesses to select the facial images most closely meeting their memory of the culprit. Variation between faces in successive arrays is reduced until ideally the final image possesses a close likeness to the culprit. Participant-witness directed tools can also alter facial features, configurations between features and holistic properties (e.g., age, distinctiveness, skin tone), all within a whole face context. The procedure is designed to closely match the holistic manner by which humans process faces. On completion, based on their memory of the culprit, ratings of composite-culprit similarity are collected from the participant-witnesses. Similar ratings are collected from culprit-acquaintance assessors, as a marker of composite recognition likelihood. Following a further delay, all participants — including the controls — attempt to identify the culprit in either a culprit-present or culprit-absent video line-up, to replicate circumstances in which the police have located the correct culprit, or an innocent suspect. Data of control and participant-witness line-up outcomes are presented, demonstrating the positive influence of holistic composite construction on identification accuracy. Correlational analyses are conducted to measure the relationship between assessor and participant-witness composite-culprit similarity ratings, delay, identification accuracy, and confidence to examine which factors influence video line-up outcomes.
  • Davis, J. et al. (2015). An evaluation of post-production facial composite enhancement techniques. Journal of Forensic Practice [Online] 17:307-318. Available at: http://www.emeraldinsight.com/journal/jfp.
    Purpose – The purpose of this paper is to describe four experiments evaluating post-production enhancement techniques with facial composites mainly created using the EFIT-V holistic system. Design/methodology/approach – Experiments 1–4 were conducted in two stages. In Stage 1, constructors created between one and four individual composites of unfamiliar targets. These were merged to create morphs. Additionally in Experiment 3, composites were vertically stretched. In Stage 2, participants familiar with the targets named or provided target-similarity ratings to the images. Findings – In Experiments 1–3, correct naming rates were significantly higher to between-witness 4-morphs, within-witness 4-morphs and vertically stretched composites than to individual composites. In Experiment 4, there was a positive relationship between composite-target similarity ratings and between-witness morph-size (2–, 4–, 8–, 16-morphs). Practical implications – The likelihood of a facial composite being recognised can be improved by morphing and vertical stretch. Originality/value – This paper improves knowledge of the theoretical underpinnings of these facial composite post-production enhancement techniques. This should encourage acceptance by the criminal justice system, and lead to better detection outcomes
  • Sandoval Orozco, A. et al. (2015). Smartphone image acquisition forensics using sensor fingerprint. IET Computer Vision [Online] 9:723-731. Available at: http://dx.doi.org/10.1049/iet-cvi.2014.0243.
    The forensic analysis of digital images from mobile devices is particularly important given their quick expansion and everyday use in society. A further consequence of digital images' widespread use is that they are used today as silent witnesses in legal proceedings, as crucial evidence of the crime. This study specifically addresses the description of a technique that allows the identification of the image source acquisition, for the specific case of mobile device images. This approach is to extract wavelet-based ...
  • Mist, J., Gibson, S. and Solomon, C. (2015). Comparing Evolutionary Operators, Search Spaces, and Evolutionary Algorithms in the Construction of Facial Composites. Informatica [Online]:135-145. Available at: http://www.informatica.si/index.php/informatica.
    Facial composite construction is one of the most successful applications of interactive evolutionary computation. In spite of this, previous work in the area of composite construction has not investigated the algorithm design options in detail. We address this issue with four experiments. In the first experiment a sorting task is used to identify the 12 most salient dimensions of a 30-dimensional search space. In the second experiment the performances of two mutation and two recombination operators for interactive genetic algorithms are compared. In the third experiment three search spaces are compared: a 30-dimensional search space, a mathematically reduced 12-dimensional search space, and a 12-dimensional search space formed from the 12 most salient dimensions. Finally, we compare the performances of an interactive genetic algorithm to interactive differential evolution. Our results show that the facial composite construction process is remarkably robust to the choice of evolutionary operator(s), the dimensionality of the search space, and the choice of interactive evolutionary algorithm. We attribute this to the imprecise nature of human face perception and differences between the participants in how they interact with the algorithms.
  • Davis, J., Gibson, S. and Solomon, C. (2014). The positive influence of creating a holistic facial composite on video lineup identification. Applied Cognitive Psychology [Online] 28:634-639. Available at: http://dx.doi.org/10.1002/acp.3045.
  • Salahioglu, F., Went, M. and Gibson, S. (2013). Application of Raman spectroscopy for the differentiation of lipstick traces. Analytical Methods 5:5392-5401.
  • Solomon, C., Gibson, S. and Mist, J. (2013). Interactive evolutionary generation of facial composites for locating suspects in criminal investigations. Applied Soft Computing [Online] 13:3298-3306. Available at: http://dx.doi.org/10.1016/j.asoc.2013.02.010.
    Statistical appearance models have previously been used for computer face recognition applications in which an image patch is synthesized and morphed to match a target face image using an automated iterative fitting algorithm. Here we describe an alternative use for appearance models, namely for producing facial composite images (sometimes referred to as E-FIT or PhotoFIT images). This application poses an interesting real-world optimization problem because the target face exists in the mind of the witness and not in a tangible form such as a digital image. To solve this problem we employ an interactive evolutionary algorithm that allows the witness to evolve a likeness to the target face. A system based on our approach, called EFIT-V, is used frequently by three quarters of UK police constabularies.
    (A minimal code sketch of this interactive evolutionary approach appears after this publication list.)
  • French, H., Went, M. and Gibson, S. (2013). Graphite Furnace Atomic Absorption Elemental Analysis of Ecstasy Tablets. Forensic Science International [Online] 231:88-91. Available at: http://dx.doi.org/10.1016/j.forsciint.2013.04.021.
    Six metals (Cu, Mg, Ba, Ni, Cr, Pb) were determined in two separate batches of seized ecstasy tablets by graphite furnace atomic absorption spectroscopy (GFAAS) following digestion with nitric acid and hydrogen peroxide. Large intra-batch variations were found as expected for tablets produced in clandestine laboratories. For example, nickel in batch 1 was present in the range 0.47-13.1 ppm and in batch 2 in the range 0.35-9.06 ppm. Although batch 1 had significantly higher MDMA content than batch 2, barium was the only element which discriminated between the two ecstasy seizures (batch 1: 0.19-0.66 ppm, batch 2: 3.77-5.47 ppm).
  • Valentine, T. et al. (2010). Evolving and combining facial composites: Between-witness and within-witness morphs compared. Journal of Experimental Psychology: Applied [Online] 16:72-86. Available at: http://dx.doi.org/10.1037/a0018801.
  • Gibson, S. et al. (2009). New methodology in facial composite construction: from theory to practice. International Journal of Electronic Security and Digital Forensics [Online] 2:156-168. Available at: http://dx.doi.org/10.1504/IJESDF.2009.024900.
    Existing commercial, computerised techniques for constructing facial composites generated from eyewitness memory are essentially electronic versions of the original, mechanical feature-based systems such as PhotoFIT and Identikit. The effectiveness of this feature-based approach is fundamentally limited by the witness's ability to recall and verbalise accurate descriptions of facial features from memory. Recent advances in facial composite methodology have led to software systems that do not rely on this process but instead exploit a cognitively less demanding process of recognition. We provide a technical overview of the EFIT-V system, currently being used by a number of police services in the UK.
  • Gibson, S. et al. (2009). Computer Assisted Age Progression. Journal of Forensic Science, Medicine and Pathology [Online] 5:174-181. Available at: http://dx.doi.org/10.1007/s12024-009-9102-z.
    A computer assisted method for altering the perceived age of a human face is presented. Our technique is based on calculating a trajectory or axis within a multidimensional space that captures the changes in large scale facial structure, shading and complexion associated with aging. Fine facial details associated with increasing age, such as wrinkles, are added to the aged face using a variation on a standard image processing technique called high boost filtering. The method is successfully applied to two-dimensional photographic images exhibiting uncontrolled variations in pose and illumination. Unlike our previous work on automated age progression, here the objective is to allow a certain degree of manual control over the process by the adjustment of three key progression-control parameters. In the future this work may form the basis for a software tool to be used by forensic artists.
    (A brief code sketch of the high-boost filtering step appears after this publication list.)
  • Scandrett (nee Hill), C., Solomon, C. and Gibson, S. (2006). A Person-Specific, Rigorous Aging Model of the Human Face. Pattern Recognition Letters [Online] 27:1776-1787. Available at: http://dx.doi.org/10.1016/j.patrec.2006.02.007.
    We present a statistically rigorous approach to the aging of digitised images of the human face. Our methodology is based on the calculation of optimised aging trajectories in a model space and aged images can be obtained through a fast, semi-automatic procedure. In addition, person-specific information about the subject at previous ages is included, allowing aging to proceed in the most appropriate direction in the model space. The theoretical basis is introduced and experimental results from our implementation are presented and discussed.
  • Gibson, S., Solomon, C. and Pallares-Bejarano, A. (2005). Non Linear, Near Photo-Realistic Caricatures using a Parametric Facial Appearance Model. Behavior Research Methods [Online] 37:170-181. Available at: http://dx.doi.org/10.3758/BF03206412.
    A mathematical model previously developed for use in computer vision applications is presented as an empirical model for face space. The term appearance space is used to distinguish this from previous models. Appearance space is a linear vector space that is dimensionally optimal, enables us to model and describe any human facial appearance, and possesses characteristics that are plausible for the representation of psychological face space. Randomly sampling from a multivariate distribution for a location in appearance space produces entirely plausible faces, and manipulation of a small set of defining parameters enables the automatic generation of photo-realistic caricatures. The appearance space model leads us to the new concept of nonlinear caricatures, and we show that the accepted linear method for caricature is only a special case of a more general paradigm. Nonlinear methods are also viable, and we present examples of photographic quality caricatures, using a number of different transformation functions. Results of a simple experiment are presented that suggest that nonlinear transformations can accurately capture key aspects of the caricature effect. Finally, we discuss the relationship between appearance space, caricature, and facial distinctiveness. On the basis of our new theoretical framework, we suggest an experimental approach that can yield new evidence for the plausibility of face space and its ability to explain processes of recognition.
  • Johnston, V. et al. (2003). Human facial beauty: Current theories and methodologies. Archives of Facial Plastic Surgery [Online] 5:371-377. Available at: http://dx.doi.org/10.1001/archfaci.5.5.446.
    This article examines current theories of beauty and describes recent progress in the ability to generate photorealistic faces using a computer. First, we describe a novel experimental tool, FacePrints, that allows a user to "evolve" an attractive face using a computer. We discuss the use of this program for research on human beauty and review the main experimental studies that have led to our current theoretical perspective: beauty is a product of sexual selection. Second, we outline major improvements to the FacePrints program and demonstrate the near photographic quality of facial composites that can be obtained by combining the Face-Prints algorithm with a principal components analysis-based facial appearance model. The technical basis for a possible computer-planning system that could help the patient and surgeon define reasonable and desirable surgical outcomes is also outlined. Finally, we summarize the current state of the art and examine the issues that need to be addressed for developing the current program into a practical experimental and/or clinical tool. © 2003 American Medical Association. All rights reserved.
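
A minimal, illustrative sketch of the one-shot spectrum-matching idea in Liu et al. (2019), not the authors' implementation: a small Siamese 1D convolutional network scores whether two Raman spectra belong to the same substance, so a query spectrum can be classified against a single reference spectrum per class. The architecture, layer sizes, spectrum length and use of PyTorch are assumptions made for this sketch.

    import torch
    import torch.nn as nn

    class SpectrumEncoder(nn.Module):
        """1D convolutional embedding network shared by both Siamese branches."""
        def __init__(self, embed_dim: int = 64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
                nn.AdaptiveAvgPool1d(8),
            )
            self.fc = nn.Linear(32 * 8, embed_dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, n_bins) intensity values of one spectrum
            return self.fc(self.features(x).flatten(1))

    class SiameseMatcher(nn.Module):
        """Scores a pair of spectra; a high output means 'same substance'."""
        def __init__(self, embed_dim: int = 64):
            super().__init__()
            self.encoder = SpectrumEncoder(embed_dim)
            self.head = nn.Linear(embed_dim, 1)

        def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            za, zb = self.encoder(a), self.encoder(b)
            return self.head(torch.abs(za - zb)).squeeze(-1)  # logit for "same class"

    def one_shot_classify(model, query, references):
        """Assign the query to the reference class with the highest match score."""
        model.eval()
        with torch.no_grad():
            scores = {label: model(query, ref).item() for label, ref in references.items()}
        return max(scores, key=scores.get)

    if __name__ == "__main__":
        n_bins = 1024                                 # number of Raman shift bins (illustrative)
        model = SiameseMatcher()                      # would be trained on same/different pairs
        query = torch.randn(1, 1, n_bins)             # stand-in for a measured spectrum
        refs = {"quartz": torch.randn(1, 1, n_bins),  # one reference spectrum per class
                "calcite": torch.randn(1, 1, n_bins)}
        print(one_shot_classify(model, query, refs))

Training such a matcher would present balanced same-class and different-class spectrum pairs with a binary loss, following the pairwise reformulation described in the abstract.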
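
A minimal sketch of the interactive evolutionary loop behind Solomon, Gibson and Mist (2013), not the EFIT-V implementation: candidate faces are coefficient vectors in a parametric appearance space, the witness repeatedly selects the closest likeness, and the next generation is bred around that selection. The rendering stub, the simulated witness function, the population size and the mutation schedule are placeholders introduced for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    N_DIMS = 30        # dimensionality of the appearance space (illustrative)
    POP_SIZE = 9       # faces shown per generation
    SIGMA0 = 1.0       # initial mutation scale, annealed over generations

    def render_face(params: np.ndarray) -> np.ndarray:
        """Placeholder: map appearance-space coefficients to a face image."""
        return params  # a real system would synthesise shape and texture here

    def witness_selects(population: list[np.ndarray]) -> int:
        """Placeholder for human feedback: index of the face judged most like the target."""
        target = np.zeros(N_DIMS)             # in reality the target exists only in memory
        dists = [np.linalg.norm(p - target) for p in population]
        return int(np.argmin(dists))

    def evolve(generations: int = 20) -> np.ndarray:
        parent = rng.standard_normal(N_DIMS)
        for g in range(generations):
            sigma = SIGMA0 * (0.9 ** g)       # shrink variation as the likeness converges
            population = [parent] + [parent + sigma * rng.standard_normal(N_DIMS)
                                     for _ in range(POP_SIZE - 1)]
            images = [render_face(p) for p in population]     # shown to the witness
            parent = population[witness_selects(population)]  # selection seeds the next generation
        return parent

    if __name__ == "__main__":
        composite = evolve()
        print("final appearance-space coefficients:", np.round(composite[:5], 2))

In a real system the selection step is performed by the eyewitness viewing the rendered faces, so the target face never needs to exist as a digital image.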
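
A short sketch of high-boost filtering, the standard image-processing operation that Gibson et al. (2009) adapt for adding fine, age-related detail such as wrinkles; the aging-trajectory part of the method is not reproduced here. The blur scale, boost factor and NumPy/SciPy routines are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def high_boost(image: np.ndarray, sigma: float = 3.0, k: float = 1.5) -> np.ndarray:
        """Return image + k * (image - lowpass), i.e. the image with amplified fine detail."""
        lowpass = gaussian_filter(image.astype(float), sigma=sigma)
        detail = image - lowpass            # high-frequency component (fine detail)
        boosted = image + k * detail
        return np.clip(boosted, 0.0, 255.0)

    if __name__ == "__main__":
        face = np.random.uniform(0, 255, size=(128, 128))   # stand-in for a grey-level face image
        emphasised = high_boost(face, sigma=2.0, k=2.0)
        print(emphasised.shape, emphasised.min(), emphasised.max())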

Book section

  • Solomon, C. and Gibson, S. (2014). Developments in Forensic Facial Composites. in: Mallett, X., Blythe, T. and Berry, R. eds. Advances in Forensic Human Identification. CRC Press.
  • Gibson, S. (2012). Computer-assisted age progression. in: Wilkinson, C. and Rynn, C. eds. Craniofacial Identification. Cambridge University Press, pp. 76-85.
  • Solomon, C., Gibson, S. and Maylin, M. (2012). EFIT-V: evolutionary algorithms and computer composites. in: Wilkinson, C. and Rynn, C. eds. Craniofacial Identification. Cambridge University Press, pp. 24-41.

Conference or workshop item

  • Bai, F. et al. (2018). Superpixel guided active contour segmentation of retinal layers in OCT volumes. in: Podoleanu, A. G. H. and Bang, O. eds. Second Canterbury Conference on Optical Coherence Tomography, 2017, Canterbury, United Kingdom. SPIE. Available at: http://dx.doi.org/10.1117/12.2282326.
    Retinal OCT image segmentation is a precursor to subsequent medical diagnosis by a clinician or machine learning algorithm. In the last decade, many algorithms have been proposed to detect retinal layer boundaries and simplify the image representation. Inspired by the recent success of superpixel methods for pre-processing natural images, we present a novel framework for segmentation of retinal layers in OCT volume data. In our framework, the region of interest (e.g. the fovea) is located using an adaptive-curve method. The cell layer boundaries are then robustly detected, first using 1D superpixels applied to A-scans and then by fitting active contours in B-scan images. Thereafter the 3D cell layer surfaces are efficiently segmented from the volume data. The framework was tested on healthy eye data and we show that it is capable of segmenting up to 12 layers. The experimental results imply the effectiveness of the proposed method and indicate its robustness to low image resolution and intrinsic speckle noise.
  • Xavier, I. et al. (2016). A Photo-Realistic Generator of Most Expressive and Discriminant Changes in 2D Face Images. in: 2015 Sixth International Conference on Emerging Security Technologies EST 2015. IEEE, pp. 80-85. Available at: http://doi.org/10.1109/EST.2015.17.
    This work describes a photo-realistic generator that creates semi-automatically face images of unseen subjects. Unlike previously described methods for generating face imagery, the approach described herein incorporates texture and shape information in a single computational framework based on high dimensional encoding of variance and discriminant information from sample groups. The method produces realistic, frontal pose, images with minimum manual intervention. We believe that the work presented describes a useful tool for face perception applications where privacy-preserving analysis might be an issue and the goal is not the recognition of the face itself, but rather its characteristics like gender, age or race, commonly explored in social and forensic contexts.
  • Mist, J., Gibson, S. and Solomon, C. (2014). A comparison of search spaces and evolutionary operators in facial composite construction. in: Šilc, J. and Zamuda, A. eds. Bioinspired Optimization Methods and their Applications (BIOMA). Available at: http://bioma.ijs.si/conference/2014/?more=home.
    In this paper a series of experiments concerning the use of IEAs in the creation of facial composites are reported. A human evaluation based search space, which is itself a subspace of a larger search space, is created. The human reduced search space is used to compare two mutation operators and two recombination operators in an IEA. A mathematically reduced search space is constructed from the larger search space. The facial composite process is performed in the three search spaces. No statistically significant differences are found between the performances of the operators or the search spaces.
  • Corripio, J. et al. (2013). Source Smartphone Identification Using Sensor Pattern Noise and Wavelet Transform. in: The 5th International Conference on Imaging for Crime Detection and Prevention. Available at: http://www.icdp-conf.org/.
    The ability to identify the source camera for an image has application in the areas of digital forensics and multimedia data mining. The majority of previous research in this area has focused on primary function imaging devices (i.e. digital cameras). In this work we use the pattern noise of an imaging sensor to classify digital photographs according to the source smartphone from which they originated. This is timely work as new smartphone models have large imaging sensors, affording significant improvements in classification rates using pattern noise. Our approach is to extract wavelet-based features which are then classified using a support vector machine. We show that this method generalises well when the number of source cameras is increased.
    (A minimal code sketch of this pipeline appears after this conference list.)
  • Mist, J. and Gibson, S. (2013). Optimization of Weighted Vector Directional Filters Using an Interactive Evolutionary Algorithm. in: Blum, C. and Alba, E. eds. Genetic and Evolutionary Computation Conference, GECCO '13. ACM, pp. 1691-1694. Available at: http://www.sigevo.org/gecco-2013/.
    Weighted vector directional filters are used to enhance multi-channel image data and have attracted a lot of interest from researchers in the image processing community. This paper describes a novel method for deriving the weights of a vector directional filter that uses an interactive evolution strategy. We performed an empirical study in which 30 participants each developed two filters using our approach. Each participant compared the performance of his/her filters to the basic vector directional filter and a filter that had previously been developed using a genetic algorithm. Of the filters studied, our interactive approach was the most effective at removing salt and pepper noise for the case when the percentage of corrupt image pixels was low.
  • Solomon, C. and Gibson, S. (2012). EFIT-V - Interactive Evolutionary Generation of Facial Composites for Criminal Investigations. in: Gibson, S. J. ed. BIOMA 2012:The 5th International Conference on Bioinspired Optimization Methods and their Applications.
  • Welford, S., Gibson, S. and Payne, A. (2011). Digital Image Analysis and Evaluation (DIAnE): A Forensic Image Processing Tool using MATLAB. in: Gibson, S. J. ed. The 5th Cybercrime Forensics Education & Training Conference.
  • Davis, J. et al. (2010). A Comparison of Individual and Morphed Facial Composites Created Using Different Systems. in: 2010 International Conference on Emerging Security Technologies. Washington, DC, USA: IEEE Computer Society, pp. 56-60. Available at: http://dx.doi.org/10.1109/EST.2010.29.
    An evaluation of individual and morphed composites created using the E-FIT and EFIT-V production systems was conducted. With the assistance of trained police staff, composites of unfamiliar targets were constructed from memory following a Cognitive Interview. EFIT-V composite production followed either a two-day delay, or on the same day as viewing a video of the target. E-FIT composites were created on the same day as viewing the target video. Morphs were produced from merging either two, or three composites created by the same witness, but with the assistance of a different operator. Participants familiar with the targets supplied similarity-to-target photograph ratings. No differences were found in the rated quality of composites created using E-FIT or EFIT-V, although a two-day delay in production resulted in inferior images. Morphs were rated as better likenesses than individual composites, although the benefits were greater with EFIT-Vs. Encouraging witnesses to create more than one composite image for subsequent morphing might enhance the likelihood of recognition of facial composites of criminals.
  • Clarke, D., Riggs, M. and Gibson, S. (2010). Prototyping Perceptions of Health for Inclusion in Facial Composite Systems. in: International Conference on Emerging Security Technologies (EST 2010). Washington, DC, USA: IEEE Computer Society, pp. 61-66. Available at: http://dx.doi.org/10.1109/EST.2010.36.
    A method for altering the perceived health of a human face is presented. Participants were asked to score a random sample of thirty face images for gauntness/fullness, facial symmetry, complexion, age and overall health. Healthy and unhealthy prototype face images were constructed by forming a weighted average of the sample faces according to their mean health score. The study highlights the difficulty in forming reliable unhealthy prototypes using image averaging. Another study was undertaken in which images of methamphetamine users were averaged to form lifestyle specific prototypes. This research was motivated by the need for a better understanding of perceived poor health and how best to model this trait in holistic facial composite systems.
  • Solomon, C., Gibson, S. and Maylin, M. (2009). New computational methodology for the recovery of facial images retained in human memory. in: Signal Recovery and Synthesis. Available at: http://www.scopus.com/inward/record.url?eid=2-s2.0-84898078559&partnerID=40&md5=829eefc183285353c246c99c603f34c2.
    We present a new computational methodology for the construction of facial composites from eyewitness memory for application to criminal investigation. The conceptual and theoretical basis is described and results from both laboratory and real-world applications are presented. © 2009 Optical Society of America.
  • George, B. et al. (2008). EFIT-V: interactive evolutionary strategy for the construction of photo-realistic facial composites. in: Keijzer, M. ed. GECCO '08: 10th Annual Conference on Genetic and Evolutionary Computation. New York, NY, USA: ACM, pp. 1485-1490. Available at: http://dx.doi.org/10.1145/1389095.1389384.
    Facial composite systems are used to create a likeness to a suspect in criminal investigations. Traditional, feature-based facial composite systems rely on the witness' ability to recall individual features, provide verbal descriptions and then select them from stored libraries of labelled features - a task which witnesses often find difficult. The EFIT-V facial composite system is based on different principles, employing a holistic (whole face) approach to construction. The witness is shown a number of randomly generated faces and is asked to select the one that best resembles the target. A genetic algorithm is then used to breed a new generation of faces based upon the selected individual. This process is repeated until the user is satisfied with the composite generated. This paper describes the main components and methodology of EFIT-V and showcases the strengths of the system. Copyright 2008 ACM.
  • Gibson, S. et al. (2006). The Generation of Facial Composites using an Evolutionary Algorithm. in: Gibson, S. J. and Solomon, C. J. eds. 6th International Conference on Recent Advances in Soft Computing (best paper award).
  • Scandrett (nee Hill), C., Solomon, C. and Gibson, S. (2006). Towards a Semi-automatic Method for the Statistically Rigorous Aging of the Human Face. in: Gibson, S. J. and Solomon, C. J. eds. pp. 639-649. Available at: http://dx.doi.org/10.1049/ip-vis:20050027.
  • Gibson, S. et al. (2006). Innovations in facial composite systems: EigenFIT. in: Workshop on Eyewitness Identification Evidence. Available at: http://www.valentinemoore.co.uk/idworkshop/abstracts.pdf.
  • Maylin, M., Solomon, C. and Gibson, S. (2005). Model-based deconvolution of the human face. in: Seventh IASTED International Conference on Signal and Image Processing. pp. 548-553. Available at: http://www.scopus.com/inward/record.url?eid=2-s2.0-33644540512&partnerID=40&md5=7de3f8e32fc7ec7ba10c3a39a284faca.
    Many practical deconvolution problems arise in which explicit knowledge of both the system PSF and the spectral characteristics of the noise are unknown. We describe and present an approach to deconvolution in this situation which is specifically matched to the forensically important problem of face identification. Our approach is to model both human faces and image aberrations in a statistical appearance framework using a representative sample of faces. Deconvolution is then achieved experimentally by moving along known transition curves in a parametric face space. Our preliminary studies demonstrate that the accuracy of the method is superior to maximum-likelihood blind deconvolution at low signal-noise ratios. A hybrid method in which the noisy face image is first projected into the model space and blind deconvolution then applied yields the best overall performance.
  • Hill, C., Solomon, C. and Gibson, S. (2005). Aging the human face - A statistically rigorous approach. in: IEE International Symposium on Imaging for Crime Detection and Prevention. pp. 89-94. Available at: http://www.scopus.com/inward/record.url?eid=2-s2.0-27644598550&partnerID=40&md5=89c8e1859d4a218faba8d0a18757444c.
    Forensic age progression for the purpose of aging a missing child is a discipline currently dominated by artistic methodologies. In order to improve on these techniques, we present a statistically rigorous approach to the aging of the human face. The technique is based upon a Principal Component Analysis and involves the definition of an aging direction through the model space, using an age-weighted combination of model parameters. Pose and expression compensation methods are also incorporated, allowing faces at a wide variety of pose orientations and expressions to be aged accurately. Near photo-quality images are obtained quickly and the resultant aging effects are realistic and plausible.
    (A minimal code sketch of the aging-direction idea appears after this conference list.)
  • Hill, C., Solomon, C. and Gibson, S. (2005). Towards a semi-automatic method for the statistically rigorous aging of the human face. in: IEE International Conference on Visual Information Engineering. pp. 9-15. Available at: http://www.scopus.com/inward/record.url?eid=2-s2.0-27744599180&partnerID=40&md5=c0b6b813a8343eb0eb0ac4c9030f84fe.
    Forensic age progression for the purpose of aging a missing child is a discipline currently dominated by artistic methodologies. In order to improve on these techniques, we present a statistically rigorous approach to the aging of the human face. The technique is based upon a Principal Component Analysis and involves the definition of an aging direction through the model space, using an age-weighted combination of model parameters. Pose and expression compensation methods are also incorporated, allowing faces at a wide variety of pose orientations and expressions to be aged accurately. Near photo-quality images are obtained quickly and the resultant aging effects are realistic and plausible.
  • Hill, C., Solomon, C. and Gibson, S. (2004). Plausible aging of the human face using a statistical model. in: Proceedings of the Seventh IASTED International Conference on Computer Graphics and Imaging. pp. 56-60. Available at: http://www.scopus.com/inward/record.url?eid=2-s2.0-10444280891&partnerID=40&md5=6ddf18b017efaa4f4a9e8cecfac38a60.
    The ability to accurately age a human face has a significant potential application in Forensic Science, in particular for attempts to locate and identify missing children. There may also be an application in the field of facial synthesis with an emphasis on facial composite construction [1]. With these uses in mind, a new technique for aging a face in an image has been developed. The technique uses Principal Components Analysis (PCA) on an equal number of male and female faces of different ages to produce separate models in shape and texture. The aging direction was identified for each gender subspace and in-sample male and female faces aged by 20 years according to their Euclidean distance from each aging axis. An out-of-sample male face was also aged. In addition, aging was computed for the male and female faces by producing a model trained only on the appropriate gender sample and using multiples of the aging axis in order to age the face.
  • Gibson, S., Solomon, C. and Pallares-Bejarano, A. (2003). Synthesis of Photographic Quality Facial Composites using Evolutionary Algorithms. in: British Machine Vision Conference 2003. pp. 221-230.
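
A minimal sketch, with assumed details, of the source-identification pipeline in Corripio et al. (2013): a noise residual is estimated for each photograph, summarised by wavelet-subband statistics, and classified with a support vector machine. The median-filter denoiser, the particular subband statistics and the SVM settings are illustrative stand-ins for the paper's wavelet-based features.

    import numpy as np
    import pywt
    from scipy.ndimage import median_filter
    from sklearn.svm import SVC

    def noise_residual(image: np.ndarray) -> np.ndarray:
        """Approximate the sensor pattern noise as image minus a denoised version."""
        return image.astype(float) - median_filter(image.astype(float), size=3)

    def wavelet_features(residual: np.ndarray, wavelet: str = "db4", levels: int = 3) -> np.ndarray:
        """Simple statistics of each detail subband of the residual's wavelet decomposition."""
        coeffs = pywt.wavedec2(residual, wavelet, level=levels)
        feats = []
        for detail_level in coeffs[1:]:              # skip the approximation coefficients
            for band in detail_level:                # horizontal, vertical, diagonal subbands
                feats += [band.mean(), band.std(), np.mean(np.abs(band))]
        return np.array(feats)

    def train_source_classifier(images: list[np.ndarray], labels: list[str]) -> SVC:
        X = np.stack([wavelet_features(noise_residual(im)) for im in images])
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # illustrative settings
        clf.fit(X, labels)
        return clf

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Stand-in data: in practice these would be crops from photographs of known phones.
        imgs = [rng.integers(0, 256, size=(64, 64)).astype(float) for _ in range(20)]
        labs = ["phone_A"] * 10 + ["phone_B"] * 10
        clf = train_source_classifier(imgs, labs)
        print(clf.predict([wavelet_features(noise_residual(imgs[0]))]))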
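
A minimal sketch of the model-space "aging direction" idea in Hill, Solomon and Gibson (2005): a PCA model is fitted to vectorised face data, an axis associated with increasing age is estimated, and a face is aged by moving its coefficients along that axis. The regression-style estimate of the direction and the per-year scaling are simplifications; shape/texture separation and pose/expression compensation are omitted.

    import numpy as np
    from sklearn.decomposition import PCA

    def fit_aging_model(faces: np.ndarray, ages: np.ndarray, n_components: int = 20):
        """faces: (n_samples, n_pixels) vectorised, aligned face images; ages: (n_samples,)."""
        pca = PCA(n_components=n_components).fit(faces)
        coeffs = pca.transform(faces)
        # Direction in model space most associated with age (least-squares style estimate).
        centred_age = ages - ages.mean()
        direction = coeffs.T @ centred_age
        direction /= np.linalg.norm(direction)
        # Model-space step corresponding to roughly one year of aging.
        per_year = (coeffs @ direction).std() / (ages.std() + 1e-9)
        return pca, direction, per_year

    def age_face(face: np.ndarray, years: float, pca, direction, per_year) -> np.ndarray:
        """Move the face's model-space coefficients along the aging axis and reconstruct."""
        c = pca.transform(face[None, :])[0]
        c_aged = c + years * per_year * direction
        return pca.inverse_transform(c_aged[None, :])[0]

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        faces = rng.standard_normal((100, 32 * 32))      # stand-in for aligned face images
        ages = rng.uniform(5, 60, size=100)
        pca, direction, per_year = fit_aging_model(faces, ages)
        aged = age_face(faces[0], years=20, pca=pca, direction=direction, per_year=per_year)
        print(aged.shape)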

Other

  • Solomon, C., Gibson, S. and Maylin, M. (2009). A new computational methodology for the construction of forensic facial composites. 3rd International Workshop on Computational Forensics, IWCF 2009, Lecture Notes in Computer Science [Online] 5718:67-77. Available at: http://dx.doi.org/10.1007/978-3-642-03521-0_7.
    A facial composite generated from an eyewitness's memory often constitutes the first and only means available for police forces to identify a criminal suspect. To date, commercial computerised systems for constructing facial composites have relied almost exclusively on a feature-based, 'cut-and-paste' method whose effectiveness has been fundamentally limited by both the witness's limited ability to recall and verbalise facial features and by the large dimensionality of the search space. We outline a radically new approach to composite generation which combines a parametric, statistical model of facial appearance with a computational search algorithm based on interactive, evolutionary principles. We describe the fundamental principles on which the new system has been constructed, outline recent innovations in the computational search procedure and also report on the real-world experience of UK police forces who have been using a commercial version of the system. © 2009 Springer Berlin Heidelberg.

Forthcoming

  • Urquhart, J. et al. (2019). ATLASGAL — Molecular fingerprints of a sample of massive star-forming clumps? Monthly Notices of the Royal Astronomical Society.
    We have conducted a 3-mm molecular-line survey towards 570 high-mass star-forming clumps, using the Mopra telescope. The sample is selected from the 10,000 clumps identified by the ATLASGAL survey and includes all of the most important embedded evolutionary stages associated with massive star formation, classified into five distinct categories (quiescent, protostellar, young stellar objects, HII regions and photo-dominated regions). The observations were performed in broadband mode with frequency coverage of 85.2 to 93.4 GHz and a velocity resolution of ~0.9 km s⁻¹.
  • Thorniley, S. et al. (2014). The influence of creating a holistic facial composite on children’s and adults’ video lineup identifications. Applied Cognitive Psychology [Online] 30. Available at: http://eu.wiley.com/WileyCDA/WileyTitle/productCd-ACP.html.