About

VLSI/ASIC design. Neural Networks, Optical Sensor Systems and Applications, Image processing using VLSI.

Research interests

Reconfigurable Architectures, FPGAs, Low-Power, Adiabatic Circuits, Logarithmic Signal Processing, Image Processing and Computer Vision, Embedded Systems.

Publications

Showing 50 of 60 total publications in the Kent Academic Repository.

Article

  • Vandenbussche, J., Peuteman, J. and Lee, P. (2015). Multiplicative finite impulse response filters: implementations and applications using field programmable gate arrays. IET Signal Processing [Online] 9:449-456. Available at: http://doi.org/10.1049/iet-spr.2014.0143.
    This paper describes how modern field programmable gate array (FPGA) technology can be used to build practical and efficient multiplicative finite impulse response (MFIR) filters with low-pass, high-pass, band-pass and band-stop characteristics. This paper explains how MFIR structures can be built with or without linear phase characteristics and implemented efficiently on modern FPGA architectures using fixed-point arithmetic without incurring stability problems or limit cycles which commonly occur when using equivalent infinite impulse response structures. These properties have a particular importance for applications such as tunable resonators, narrow band rejectors and linear phase filters which have demanding, narrow transition band requirements. The results presented in this paper indicate that MFIR filters are, for some applications, a viable alternative to existing filter structures when implemented on an FPGA.
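As an illustration of the idea above, a single IIR pole 1/(1 - a·z⁻¹) can be approximated by the multiplicative cascade (1 + a·z⁻¹)(1 + a²·z⁻²)(1 + a⁴·z⁻⁴)…, built entirely from FIR sections. The sketch below is plain Python, not the paper's FPGA implementation; the cascade form is the standard MFIR construction, and the function name is illustrative.

```python
# Sketch of a multiplicative FIR (MFIR) approximation of the single-pole
# IIR filter 1/(1 - a z^-1), using the cascade
# (1 + a z^-1)(1 + a^2 z^-2)(1 + a^4 z^-4)...  -- names are illustrative.

def mfir_pole(x, a, stages=4):
    """Filter the sequence x through the MFIR cascade."""
    y = list(x)
    delay = 1
    coeff = a
    for _ in range(stages):
        prev = y
        # Each stage is a 2-tap FIR section: y[n] = prev[n] + coeff*prev[n-delay]
        y = [prev[n] + (coeff * prev[n - delay] if n >= delay else 0.0)
             for n in range(len(prev))]
        delay *= 2       # delays double each stage: 1, 2, 4, 8, ...
        coeff *= coeff   # coefficients square each stage: a, a^2, a^4, ...
    return y

# The impulse response matches the ideal pole response a**n up to
# lag 2**stages - 1, with no feedback anywhere in the structure.
impulse = [1.0] + [0.0] * 15
h = mfir_pole(impulse, a=0.5, stages=4)
ideal = [0.5 ** n for n in range(16)]
```

Because the structure contains no feedback, it cannot exhibit the instability or limit cycles of the equivalent IIR realisation, which is the property the paper exploits.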
  • Chaudhary, M. and Lee, P. (2015). An Improved Two-Step Binary Logarithmic Converter for FPGAs. IEEE Transactions on Circuits and Systems II: Express Briefs [Online] 62:476-480. Available at: http://doi.org/10.1109/TCSII.2014.2386252.
    This brief describes an improved binary linear-to-log (Lin2Log) conversion algorithm that has been optimized for implementation on a field-programmable gate array. The algorithm is based on a piecewise linear (PWL) approximation of the transform curve combined with a PWL approximation of a scaled version of a normalized segment error. The architecture presented achieves 23 bits of fractional precision while using just one 18K-bit block RAM (BRAM), and synthesis results indicate operating frequencies of 93 and 110 MHz when implemented on Xilinx Spartan3 and Spartan6 devices, respectively. Memory requirements are reduced by exploiting the symmetrical properties of the normalized error curve, allowing it to be implemented more efficiently using the combinatorial logic available in the reconfigurable fabric instead of a second BRAM. The same principles can also be adapted to applications where higher accuracy is needed.
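A minimal model of the piecewise-linear linear-to-log idea, assuming a small uniform segment table; the segment count and interpolation scheme here are illustrative, not the brief's exact BRAM layout or error-correction stage.

```python
import math

# Piecewise-linear (PWL) linear-to-log conversion for a normalised input
# x in [1, 2), using a small lookup table of segment endpoints.
# Segment count and interpolation are illustrative assumptions.

SEGMENTS = 64
TABLE = [math.log2(1 + i / SEGMENTS) for i in range(SEGMENTS + 1)]

def lin2log(x):
    """Approximate log2(x) for 1 <= x < 2 by table lookup + interpolation."""
    frac = x - 1.0                  # fractional part in [0, 1)
    idx = int(frac * SEGMENTS)      # which segment the input falls in
    t = frac * SEGMENTS - idx       # position within that segment
    return TABLE[idx] + t * (TABLE[idx + 1] - TABLE[idx])
```

With 64 segments the worst-case PWL error is already below 2⁻¹⁴; the brief's contribution is correcting the residual segment error cheaply enough to reach 23 fractional bits from a single BRAM.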
  • Moser, S., Lee, P. and Podoleanu, A. (2015). An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients. Measurement Science Review [Online] 15:92-100. Available at: http://dx.doi.org/10.1515/msr-2015-0014.
    Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor and its real-time implementation using an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processor element internal word length requirements shows that 24-bit precision in precalculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
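The factorial cost mentioned above comes from the Zernike radial polynomial itself. A direct (naive) evaluation looks like the sketch below, which is exactly the computation the real-time design avoids by pre-computing and storing matrix subsections; this is illustrative code, not the paper's architecture.

```python
from math import factorial

# Direct evaluation of the Zernike radial polynomial R_n^m(rho),
# showing the factorial terms that make naive real-time evaluation costly.
# R_2^0(rho) = 2*rho**2 - 1, for example.

def zernike_radial(n, m, rho):
    m = abs(m)
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k)
                  * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))
```

All radial polynomials satisfy R_n^m(1) = 1, which is a convenient sanity check on any precomputed table.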
  • Vandenbussche, J., Lee, P. and Peuteman, J. (2013). On the coefficient quantization of Multiplicative FIR filters. Digital Signal Processing [Online] 23:689-700. Available at: http://dx.doi.org/10.1016/j.dsp.2012.09.020.
    This paper analyzes the effects of coefficient quantization of Multiplicative Finite Impulse Response (MFIR) filters used to approximate the behavior of pole filters. Statistical analysis, zero displacement sensitivity and frequency domain analysis are used as measures of the filter performance for different coefficient lengths. A practical expression for determining the required number of bits for the coefficient quantization as a function of a predefined maximum deviation in the magnitude response is proposed in combination with an alternative method based on a time domain analysis. The time domain analysis allows, for a specific pole approximation, to investigate the sensitivity of the MFIR structure to coefficient variations. The paper concludes that, statistically, the MFIR pole approximation filter does not require a larger number of quantization bits for its coefficients than the corresponding Infinite Impulse Response (IIR) filter.
  • Gao, L. et al. (2013). On-line particle sizing of pneumatically conveyed biomass particles using piezoelectric sensors. Fuel [Online] 113:810-816. Available at: http://dx.doi.org/10.1016/j.fuel.2012.12.029.
    In recent years the firing of biomass at existing or new power plants has been widely adopted as one of the main technologies for reducing greenhouse gas emissions from power generation. The particle size distribution of pneumatically conveyed biomass correlates closely with combustion efficiency and pollutant emissions and should therefore be monitored continuously.

    In this paper, an instrumentation system based on a polyvinylidene difluoride (PVDF) piezoelectric film sensor is proposed, to achieve on-line continuous measurement of biomass particle size distribution. The sensor is attached to an impact bar protruding into the biomass particle flow. The relationship between the resulting impact signals and particle size is modelled mathematically, allowing particle size distribution to be inferred. Experimental work, undertaken with willow chips and miscanthus chips, has allowed both the effectiveness of the on-line particle sizing system and the validity of the mathematical model to be evaluated. The results indicate that the system has good linearity and repeatability.
  • Chaudhary, M. and Lee, P. (2013). Two-stage logarithmic converter with reduced memory requirements. IET Computers & Digital Techniques [Online] 8:23-29. Available at: http://dx.doi.org/10.1049/iet-cdt.2012.0134.
    This study presents an efficient method for converting a normalised binary number x (1 ≤ x < 2) into a binary logarithm. The algorithm requires less memory and fewer arithmetic components to achieve 23 bits of fractional precision than other algorithms using uniform and non-uniform piecewise linear or piecewise polynomial techniques and requires less than 20 kbits of ROM and a maximum of three multipliers. It is easily extensible to higher numeric precision and has been implemented on Xilinx Spartan3 and Spartan6 field programmable gate arrays (FPGA) to show the effect of recent architectural enhancements to the reconfigurable fabric on implementation efficiency. Synthesis results confirm that the algorithm operates at a frequency of 42.3 MHz on a Spartan3 device and 127.8 MHz on a Spartan6 with a latency of two clocks. This increases to 71.4 and 160 MHz, respectively, when the latency is increased to eight clocks. On a Spartan6 XC6SLX16 device, the converter uses just 55 logic slices, three multipliers and 11.3kbits of Block RAM configured as ROM.
  • Uzenkov, O., Lee, P. and Webb, D. (2006). An FPGA-Based Measurement System for a Fibre Bragg Grating (FBG) Strain Sensor. IEEE IMTC [Online]:2364-2367. Available at: http://dx.doi.org/10.1109/IMTC.2006.328621.
    This paper presents an FPGA-based measurement system used to measure the output from a fibre-based Bragg grating strain sensor. The system is able to monitor the output of up to 10 demultiplexed light sources generated by the Bragg grating sensor and separated using a miniaturised monochromator. The centroids of the light sources are measured using a 2048 pixel linear CCD array controlled by a Xilinx FPGA. The system scans the linear array at a rate of 2 MHz and is able to obtain sub-pixel resolution, which equates to a resolution of 1 microstrain. The control circuit is implemented using less than 58% of the logic resources on an XCS40 device.
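The sub-pixel measurement rests on an intensity-weighted centroid over the CCD pixels; a minimal sketch of that calculation (window selection and thresholding, which the real system needs, are omitted) could be:

```python
# Sub-pixel centroid of a light source on a linear CCD array, as used to
# locate each demultiplexed FBG peak.  Minimal sketch only.

def centroid(pixels):
    """Intensity-weighted mean pixel position (sub-pixel resolution)."""
    total = sum(pixels)
    return sum(i * p for i, p in enumerate(pixels)) / total

# A symmetric spot centred on pixel 3 yields a centroid of exactly 3.0.
spot = [0, 1, 4, 9, 4, 1, 0]
```

Because the centroid is a real number rather than an integer pixel index, wavelength shifts far smaller than one pixel can be resolved, which is where the 1 microstrain figure comes from.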
  • Carter, R., Yan, Y. and Lee, P. (2006). On-Line Nonintrusive Measurement of Particle Size Distribution through Digital Imaging. IEEE Transactions on Instrumentation and Measurement [Online] 55:2034-2038. Available at: http://dx.doi.org/10.1109/TIM.2006.887039.
  • Yan, Y., Xu, L. and Lee, P. (2006). Mass Flow Measurement of Fine Particles in a Pneumatic Suspension using Electrostatic Sensing and Neural Network Techniques. IEEE Transactions on Instrumentation and Measurement [Online] 55:2330-2334. Available at: http://dx.doi.org/10.1109/TIM.2006.887040.
    In this paper, a novel approach is presented to the measurement of velocity and mass flow rate of pneumatically conveyed solids using electrostatic sensing and neural network techniques. A single ring-shaped electrostatic sensor is used to derive a signal, from which two crucial parameters - velocity and mass flow rate of solids - may be determined for the purpose of monitoring and control. It is found that the quantified characteristics of the signal are related to the velocity and mass flow rate of solids. The relationships between the signal characteristics and the two measurands are established through the use of backpropagation (BP) neural networks. Results obtained on a laboratory test rig suggest that an electrostatic sensor in conjunction with a trained neural network may provide a simple, practical solution to the long-standing industrial measurement problem.
  • Batchelor, J. et al. (2003). FM Signal Multipath Reduction using Multiplier-Less Adaptive Filters. Microwave and Optical Technology Letters [Online] 38:1-3. Available at: http://dx.doi.org/10.1002/mop.10954.
    Multipath interference can be removed from constant envelope signals using adaptive channel equalization based on the constant modulus algorithm (CMA). To be useful in a vehicular mobile radio system, adaptation must occur rapidly by using uncomplicated and cost-effective hardware. This paper presents a new multiplier-less filter structure based on logarithmic algebra with anticipated savings in computational complexity, and discusses the filter application for analogue FM multipath reduction. The simulated distortion is observed to be consistent with that of published measurements.
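For reference, the constant modulus algorithm the filter adapts under can be sketched as a real-valued tap update; the step size and single-tap demonstration below are illustrative, and the paper's contribution is performing this update multiplier-lessly in logarithmic algebra.

```python
# Minimal constant modulus algorithm (CMA) equalizer update for a
# constant-envelope (|s| = 1) signal.  Real-valued sketch; tap count
# and step size are illustrative.

def cma_update(w, x, mu=0.01):
    """One CMA step: w <- w - mu * (y^2 - 1) * y * x."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # equalizer output
    err = (y * y - 1.0) * y                    # constant-modulus error term
    return [wi - mu * err * xi for wi, xi in zip(w, x)]

# With a unit-modulus training input, the single tap converges to 1.
w = [0.5]
for _ in range(500):
    w = cma_update(w, [1.0], mu=0.1)
```

The cost function penalises any deviation of |y| from the constant envelope, so no training sequence is needed, which is what makes CMA attractive for broadcast FM reception.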
  • McBader, S. and Lee, P. (2003). Reducing Memory Bottlenecks in Embedded, Parallel Image Processors. IEE Electronics Letters [Online] 39:33-35. Available at: http://dx.doi.org/10.1049/el:20030020.
    Owing to the sequential nature of memory interfaces, as well as the growing processor-memory performance gap, the design of parallel image processors is often faced with a challenge in deciding memory organisation and distribution. This work addresses the problem of memory access bottlenecks in parallel digital image processors and presents one solution which demonstrates up to 93.4% reduction over standard sequential methods.
  • Paschalakis, S. and Lee, P. (2000). Statistical pattern recognition using the Normalized Complex Moment Components vector. in: Ferri, F. J. et al. eds. Advances in Pattern Recognition [Online] 1876:532-539. Available at: http://dx.doi.org/10.1007/3-540-44522-6_55.
    This paper presents a new feature vector for statistical pattern recognition based on the theory of moments, namely the Normalized Complex Moment Components (NCMC). The NCMC will be evaluated in the recognition of objects which share identical silhouettes using grayscale images and its performance will be compared with that of a commonly used moment based feature vector, the Hu moment invariants. The tolerance of the NCMC to random noise and the effect of using different orders of moments in its calculation will also be investigated.
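The raw ingredients of such a feature vector are complex image moments c_pq = Σ f(x,y)·(x+iy)^p·(x−iy)^q. The sketch below computes these directly; the centring convention is an assumption for illustration, and the NCMC's normalisation steps are omitted.

```python
# Complex image moments c_pq = sum f(x,y) (x+iy)^p (x-iy)^q over a
# grayscale image f.  Centring convention is illustrative; the NCMC's
# normalisation stages are not reproduced here.

def complex_moment(img, p, q):
    rows = len(img)
    cols = len(img[0])
    # centre coordinates so the moments tolerate translation of the object
    cy, cx = (rows - 1) / 2, (cols - 1) / 2
    return sum(img[y][x]
               * complex(x - cx, y - cy) ** p
               * complex(x - cx, -(y - cy)) ** q
               for y in range(rows) for x in range(cols))

# Tiny 2x2 "image": c_00 is just the total intensity.
img = [[1, 2], [3, 4]]
```

c_00 recovers total intensity and c_11 is real and non-negative (it is the intensity-weighted squared radius), which are useful checks on any moment implementation.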
  • Lee, P. and Sartori, A. (1998). Modular leading one detector for logarithmic encoder. Electronics Letters [Online] 34:727-728. Available at: http://dx.doi.org/10.1049/el:1998056.
    A modular circuit for determining the leading one in a binary word is described. The circuit, which was principally designed for encoding binary data into a binary logarithm format, can also be used for floating point normalisation. Its similarity to the Manchester carry adder can be exploited to provide fast 'look-ahead' or 'carry-skip' stages where necessary.
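Behaviourally, the circuit computes the position of the most significant set bit. A software model of the function only (not of the Manchester-carry-style hardware) might be:

```python
# Behavioural model of a leading-one detector: index of the most
# significant set bit of a binary word, the first step in binary-to-log
# conversion or floating point normalisation.

def leading_one(word, width=16):
    """Return the bit position of the leading one, or -1 for zero input."""
    for pos in range(width - 1, -1, -1):
        if word & (1 << pos):
            return pos
    return -1
```

In the logarithmic encoder this index becomes the integer part of the logarithm, with the bits below it supplying the fraction.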
  • Rahman, A., Fairhurst, M. and Lee, P. (1998). Design considerations in the real-time implementation of multiple expert image classifiers within a modular and flexible multiple-platform design environment. Real-Time Imaging [Online] 4:361-376. Available at: http://dx.doi.org/10.1016/S1077-2014(98)90005-5.
    A modular multiple platform design environment is proposed for the real-time implementation of image analysis systems suited to tasks such as visual inspection and other similar applications involving the analysis of two-dimensional (2D) shapes. The design strategies proposed are particularly suited to the implementation of high performance image classifiers based on the multiple expert paradigm. The unified configuration includes an integrated environment incorporating different software and hardware platforms to maximize the overall efficiency of the complete image processing or recognition task. One of the major application areas of such systems is the recognition of handwritten characters. In recent years, a new generation of handwritten recognition systems has been explored which is based on a multiple expert paradigm. The decision combination required for such configurations is a specialized data fusion process. It has been found that these multiple expert decision combination configurations can easily outperform most of the individual experts working on their own, but successful integration of decisions taken by multiple experts depends not only on access to different individual algorithms implemented and applied independently, but also on optimized implementations of these individual algorithms. It has been demonstrated that different image processing and recognition algorithms cannot be completely optimized on a single implementation platform. On the contrary, it has been found that different processes can be implemented with maximum efficiency on different platforms. In this paper, a comparative performance analysis of the same algorithms on different platforms has been carried out to select the optimum implementation platform for different algorithms in terms of complexity and execution time constraints, with the aim of implementing various multiple expert decision combination configurations, and very encouraging results have been achieved.

    This reasoning has been further explored to build a generalized, flexible and modular design environment to facilitate the incorporation of pipelined multiple platform implementations suitable for a range of image processing, image analysis and computer vision applications.
  • Lee, P. et al. (1997). Advances in the design of the TOTEM neurochip. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment [Online] 389:134-137. Available at: http://dx.doi.org/10.1016/S0168-9002(97)00063-6.
    The TOTEM neurochip has proved its viability as a system for real-time computation in HEP and space applications requiring high performance for event classification, data mining, and signal processing. ISA and VME boards integrating the TOTEM chip as a coprocessor have been made available to selected experimental groups, which reported satisfactory results. This paper presents a new architectural solution yielding higher performance and reduced silicon area. The on-chip computational structures have been entirely redesigned to take advantage of a novel approach to number representation that, at the cost of a provably bounded approximation, leads to a much-reduced silicon area, lower power dissipation, and faster computation. This approach is validated by simulation results on experimental data, as presented in the paper.

Book section

  • Lee, P. et al. (2006). A parallel processor for neural networks. in: Commun Engineers, J. ed. 1995 IEEE Symposium on VLSI Circuits. Piscataway NJ: IEEE Press, pp. 81-82.

Conference or workshop item

  • Alshammari, A., Sobhy, M. and Lee, P. (2017). Digital Communication System with High Security and High Noise Immunity: Security Analysis and Simulation. in: Barolli, L., Xhafa, F. and Conesa, J. eds. 12th International Conference on Broad-Band Wireless Computing, Communication and Applications (BWCCA-2017). Springer, pp. 469-481. Available at: https://doi.org/10.1007/978-3-319-69811-3_43.
    In this paper, our approach is to provide a cryptosystem that can be compared to a One-Time Pad. A new cryptosystem approach based on Lorenz chaotic systems is presented for secure data transmission. The system uses a stream cipher, in which the encryption key varies continuously. Furthermore, one or more of the parameters of the Lorenz generator is controlled by an auxiliary chaotic generator for increased security. The CDMA system for four users has been tested using MATLAB-SIMULINK. The system has achieved good performance in the presence of noise compared to other communication systems.
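A toy version of the keystream source can be had by Euler-integrating the Lorenz equations; the parameters, step size and byte-extraction rule below are assumptions for illustration, and this sketch is in no way a secure cipher on its own.

```python
# Euler-integrated Lorenz system used as a chaotic keystream source.
# Parameters, step size and the byte-extraction rule are illustrative
# assumptions; this is NOT a secure cipher.

def lorenz_stream(n, x=1.0, y=1.0, z=1.0,
                  sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01):
    out = []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        out.append(int(abs(x) * 1e6) % 256)   # crude byte extraction
    return out
```

Sensitivity to initial conditions is the relevant property: the same seed reproduces the keystream exactly, while a perturbation of one part in ten thousand produces a different stream, mirroring how the auxiliary generator's parameter control enlarges the effective key space.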
  • Hopkins, M. and Lee, P. (2015). High frequency amplifiers for piezoelectric sensors: noise analysis and reduction techniques. in: 2015 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). IEEE, pp. 893-898. Available at: http://dx.doi.org/10.1109/I2MTC.2015.7151387.
    The measurement and analysis of low level vibrations and Acoustic Emissions in components, fabrications and structures is often accomplished by the utilisation of multiple distributed piezoelectric sensors. The frequencies of interest usually commence in the upper audio range above 2 kHz, but are more typically ultrasonic frequencies (up to 1 MHz). Such measurements are intrinsically limited in terms of their dynamic range due to the signal to noise ratio of the overall system (comprising the transducer and front end signal amplification, together with associated interconnection cables). This paper demonstrates that the latest bipolar operational amplifier technologies (rather than traditional FET technologies) can provide a better solution at higher frequencies in these ultra low noise systems, whilst still delivering the high gain bandwidth needed. This paper also presents a comparative noise analysis for the three principal operational amplifier circuit topologies commonly utilised for piezoelectric sensors: single ended charge amplifiers, differential charge amplifiers and voltage mode amplifiers. The effects of transducer cables and system configuration are also considered from the noise perspective. The theoretical analysis has been verified by practical experiment.
  • Motoc, I. et al. (2014). Zero Moment Point/Inverted Pendulum-Based Walking Algorithm for the NAO Robot. in: 2014 Fifth International Conference on Emerging Security Technologies (EST). IEEE, pp. 63-66. Available at: http://doi.org/10.1109/EST.2014.34.
    Bipedal walking is a difficult task for a robot to execute. Factors such as arm movement and the constant shifting of the Center of Mass can lead to an unstable gait, which is why the trajectory of the Center of Mass should be calculated before the next step is made. This paper presents a walking algorithm based on the Zero Moment Point for the NAO robot, a 58 cm tall humanoid bipedal robot produced by the French company Aldebaran Robotics. For NAO, walking is an even more difficult task due to its physical limitations. The Zero Moment Point-based algorithm is used to calculate the trajectory of the Center of Mass and obtain a stable and robust walk; it was evaluated in a simulated environment using the NAO robot.
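For the commonly used cart-table/inverted-pendulum model, the Zero Moment Point reduces to x_zmp = x_com − (z_com/g)·ẍ_com. A minimal sketch with NAO-scale numbers follows; the paper's full gait algorithm is of course considerably richer than this relation.

```python
# Cart-table / inverted-pendulum relation used in ZMP-based gait planning:
#   x_zmp = x_com - (z_com / g) * x_com_ddot
# A gait is stable while the ZMP stays inside the support polygon.
# The numbers below are illustrative, not NAO's exact parameters.

G = 9.81  # gravitational acceleration, m/s^2

def zmp(x_com, x_com_ddot, z_com):
    """ZMP x-coordinate for CoM position, acceleration and height."""
    return x_com - (z_com / G) * x_com_ddot

# CoM 0.26 m high accelerating forward at 0.5 m/s^2:
zmp_offset = zmp(0.0, 0.5, 0.26)   # ZMP sits ~13 mm behind the CoM
```

Planning works in the opposite direction: the algorithm chooses a CoM trajectory whose implied ZMP never leaves the footprint of the supporting foot.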
  • Motoc, I. et al. (2014). A Stable and Robust Walking Algorithm for the Humanoid Robot NAO based on the Zero Moment Point. in: CareTECH.
  • Horne, R., Kelly, S. and Lee, P. (2013). A Framework for Mouse Emulation that Uses a Minimally Invasive Tongue Palate Control Device utilizing Resistopalatography. in: Humascend 2013.
    The ability to interface fluently with a robust Human Input Device is a major challenge facing patients with severe levels of disability. This paper describes a new method of computer interaction utilizing Force Sensitive Resistor Array Technology, embedded into an Intra-Oral device (Resistopalatography), to emulate a USB Human Interface Device using standard Drivers. The system is based around the patient using their tongue to manipulate these sensors in order to give a position and force measurement; these can then be analyzed to generate the necessary metrics to control a mouse for computer input.
  • Yemiscioglu, G. and Lee, P. (2012). 16-Bit Clocked Adiabatic Logic (CAL) Leading One Detector for a Logarithmic Signal Processor. in: 2012 8th Conference on Ph.D. Research in Microelectronics and Electronics (PRIME). pp. 1-4.
    This paper describes the architecture of a Leading-One Detector (LOD) and its implementation using Clocked Adiabatic Logic (CAL). This modular circuit has been designed for use in a 16-bit logarithmic signal processor but can easily be adapted for longer or shorter word lengths. The circuit can also be used as the first stage in a floating-point converter. It has been designed using an AMS 0.35 micrometer CMOS process and occupies an area of 0.02 mm². Spice simulations have shown that the circuit can operate at frequencies up to 250 MHz, and energy calculations indicate a consumption of 20.38 pJ at the maximum operating frequency.
  • Yemiscioglu, G. and Lee, P. (2012). 16-Bit Clocked Adiabatic Logic (CAL) logarithmic signal processor. in: 55th International Midwest Symposium on Circuits and Systems (MWSCAS). pp. 113-116. Available at: http://dx.doi.org/10.1109/MWSCAS.2012.6291970.
  • Vandenbussche, J., Lee, P. and Peuteman, J. (2012). Linear Phase Approximation of Real and Complex Pole IIR Filters using MFIR Structures. in: De Strycker, L. ed. 5th European Conference on the Use of Modern Information and Communication Technologies. pp. 221-231.
  • Lee, P., Adefila, K. and Yan, Y. (2012). An FPGA correlator for continuous real-time measurement of particulate flow. in: 2012 IEEE International Instrumentation and Measurement Technology Conference (I2MTC). IEEE, pp. 2183-2186. Available at: http://dx.doi.org/10.1109/I2MTC.2012.6229664.
    This paper describes an FPGA based system for monitoring the flow rate of pneumatically conveyed particulates by calculating the cross-correlation of signals generated by a pair of electrostatic sensors embedded in a pipeline. The architecture is capable of calculating a delay in the range of 0 to 20.48 ms with a resolution of 20.48 µs at the sampling rate of 48.8 kHz. The circuit is implemented using a Xilinx Spartan3 device operating at a frequency of 50 MHz and uses a novel logarithmic based arithmetic circuit for calculating normalized cross-correlation data at a rate 1000x higher than previously published microcontroller based solutions. The architecture is easily extensible to accommodate systems with more than 2 sensing elements and can operate at higher sampling rates as required. The current system uses just 12 multipliers, 12 BRAMS and 34% of the logic resources available on a XC3S700A device.
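The transit-time principle behind the correlator can be sketched directly: the lag that maximises the cross-correlation of the two sensor signals gives the particle transit time between the sensors. The sketch uses plain floating point, whereas the FPGA performs the same search with a logarithmic arithmetic pipeline.

```python
# Transit-time estimation by cross-correlating upstream and downstream
# electrostatic sensor signals; the maximising lag (times the sample
# period) is the particle transit time.  Direct, unoptimised computation.

def transit_lag(upstream, downstream, max_lag):
    def xcorr(lag):
        return sum(upstream[n] * downstream[n + lag]
                   for n in range(len(upstream) - lag))
    return max(range(max_lag + 1), key=xcorr)

# The same pulse shape seen 3 samples later on the downstream sensor:
sig = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
delayed = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]
```

Dividing the sensor spacing by the recovered delay gives the particle velocity, which is the quantity the flow monitor ultimately reports.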
  • Vandenbussche, J., Lee, P. and Peuteman, J. (2012). An FPGA based Digital Lock-In Amplifier Implemented using MFIR Resonators. in: Signal Processing, Pattern Recognition and Applications / 779: Computer Graphics and Imaging - 2012. Available at: http://dx.doi.org/10.2316/P.2012.778-034.
    This paper presents an alternative architecture for a digital lock-in amplifier that uses a linear phase digital low pass resonator built with a Multiplicative Finite Impulse Response (MFIR) filter. The paper compares the performance of the new architecture with traditional implementations that have been described in the literature. It shows that the MFIR resonator has superior performance due to its very small equivalent noise bandwidth. The system has been implemented and tested on a Xilinx Spartan3A DSP Field Programmable Gate Array (FPGA). The paper also shows that the MFIR filter uses only a small number of slices, which makes it perfectly suited for implementation in state-of-the-art mid-range FPGAs.
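The core lock-in operation is quadrature demodulation followed by low-pass filtering. In the sketch below a plain mean over the record stands in for the paper's MFIR resonator; it is purely illustrative.

```python
import math

# Core of a digital lock-in amplifier: multiply the input by quadrature
# references at the reference frequency and low-pass the products.
# A simple mean replaces the paper's MFIR resonator in this sketch.

def lock_in(signal, f_ref, fs):
    n = len(signal)
    i = sum(s * math.cos(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(signal)) * 2 / n   # in-phase component
    q = sum(s * math.sin(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(signal)) * 2 / n   # quadrature component
    return math.hypot(i, q)   # recovered amplitude at f_ref

# A 0.2-amplitude tone at the reference frequency is recovered exactly.
fs, f_ref = 1000.0, 50.0
sig = [0.2 * math.cos(2 * math.pi * f_ref * k / fs) for k in range(1000)]
```

The narrower the low-pass stage's equivalent noise bandwidth, the more out-of-band noise is rejected, which is precisely the advantage claimed for the MFIR resonator.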
  • Vandenbussche, J., Lee, P. and Peuteman, J. (2008). Analysis of Time and Frequency Domain Performance of MFIR Filters. in: ESA'08, 2008 International Conference on Embedded Systems and Applications.
  • Weston, J. and Lee, P. (2008). FPGA Implementation of Cellular Automata Spaces using a CAM Based Cellular Architecture. in: Keymeulen, D. et al. eds. 3rd NASA/ESA Conference on Adaptive Hardware and Systems. IEEE, pp. 315-322.
    This paper presents a content addressable memory (CAM) based architecture for implementing cellular automata (CA) spaces within a field programmable gate array (FPGA). CAMs have proved useful for implementing a number of applications that involve the need to match input data to stored data. This ability is a necessity when implementing cellular automata transition rule sets within hardware. A CAM matching process allows the next state of all cells in an automata space to be found efficiently in as little as a single clock cycle without the need for a complex memory searching algorithm. FPGAs are useful for creating cellular architectures as they are reconfigurable, making it possible to model fault tolerance. Research into cellular architectures which can be made fault tolerant is of importance in the current era, as faults are becoming increasingly common due to decreasing device dimensions and the increasing complexity of chips and the designs implemented with them. The cells within the CAM architecture on the FPGA can be configured in different ways, allowing it to adapt to varying system requirements and design density. This flexibility allows important factors such as look up table (LUT) usage and clock cycles per time step to be optimised during the design process.
  • Lee, P. and Alexiadis, E. (2008). An Implementation of a Multiplierless Hough Transform on an FPGA Platform using Hybrid-Log Arithmetic. in: Kehtarnavaz, N. and Carlsohn, M. F. eds. SPIE Conference on Real-Time Image Processing 2008. pp. U127-U136.
    This paper describes an implementation of the Hough Transform (HT) that uses a hybrid-log structure for the main arithmetic components instead of fixed or floating point architectures. A major advantage of this approach is a reduction in the overall computational complexity of the HT without adversely affecting its overall performance when compared to fixed point solutions. The proposed architecture is compatible with the latest FPGA architectures allowing multiple units to operate in parallel without exhausting the dedicated (but limited) on-chip signal processing resources that can instead be allocated to other image processing and classification tasks. The solution proposed is capable of performing a realtime HT on megapixel images at frame rates of up to 25 frames per second using a Xilinx Virtex4 (TM) architecture.
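For context, the underlying line Hough transform accumulates votes for ρ = x·cosθ + y·sinθ over an angle grid. A minimal floating point sketch follows; the paper's point is replacing exactly this arithmetic with hybrid-log components, which this sketch does not attempt.

```python
import math

# Minimal line Hough transform over a set of edge points.  Each point
# votes for every (rho, theta) bin it is consistent with; the strongest
# bin identifies the dominant line.  Bin sizes are illustrative.

def hough_lines(points, n_theta=180):
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)   # strongest (rho, theta-index) bin

# Points on the vertical line x = 10 vote for theta index 0, rho = 10.
pts = [(10, y) for y in range(20)]
```

The inner product per point per angle is what dominates the cost on megapixel images, hence the appeal of multiplierless arithmetic for the voting loop.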
  • Nnolim, U. and Lee, P. (2008). Homomorphic Filtering of Colour Images using a Spatial Filter Kernel in the HSI Colour Space. in: I2MTC 2008 - IEEE International Instrumentation and Measurement Technology Conference.
  • Nnolim, U. and Lee, P. (2008). A Review and Evaluation of Image Contrast Enhancement Algorithms based on Statistical Measures. in: 10th IASTED International Conference on Signal and Image Processing. Acta Press.
    This paper presents the evaluation and comparison of some popular image contrast enhancement algorithms using statistical measures as an indicator of enhancement quality. This is in addition to qualitative visual evaluation of image results. Furthermore, simple extensions of the statistical measurements are employed in evaluating the severity of the contrast enhancement provided by the various algorithms.
  • Lee, P. et al. (2007). LogTOTEM: A Logarithmic Neural Processor and its Implementation on an FPGA Fabric. in: International Joint Conference on Neural Networks.
  • Litchfield, C., Lee, P. and Langley, R. (2007). Logarithmic Codecs for Adaptive Beamforming in WCDMA Downlink Channels. in: 2007 IEEE International Symposium on Circuits and Systems.
  • Wisdom, M. and Lee, P. (2007). An Efficient Implementation of a 2D DWT on FPGA. in: International Conference on Field Programmable Logic and Applications.
  • Weston, J. and Lee, P. (2007). Cellular Automata Based Binary Arithmetic for Use on Self Repairing, Fault Tolerant Hardware. in: NASA/ESA Conference on Adaptive Hardware and Systems. IEEE, pp. 732-739.
    The use of cellular automata has long been identified as a method, and means of modelling different behaviours and systems that may occur across various subject fields. One such area that has not yet been fully explored and tested is the use of cellular automata as the basis for performing arithmetic operations capable of being transferred to different types of hardware. This area is vitally important as it could provide the foundation for the next generation of evolvable and adaptive hardware techniques as the approach of true nano computing comes ever closer. The main feature of this work is a cellular automata based multiplier. This follows on from, and interacts with a previously created cellular automata based binary tree adder. This model will be transferred onto a form of cellular hardware in the very near future. This will enable the exploration of key issues such as fault tolerance in hardware which is of significance in a time where devices are gradually being scaled down in size and are becoming increasingly complex.
  • Wisdom, M. and Lee, P. (2007). Evaluation of a Hybrid-Log 2D Wavelet Image Transform. in: IEEE DSP2007 15th International Conference on Digital Signal Processing.
  • Lee, P. (2007). An Estimation of Mismatch Error in IDCT Decoders Implemented with Hybrid-LNS Arithmetic. in: 15th European Signal Processing Conference (EUSIPCO 2007).
  • Lee, P. (2007). A VLSI Implementation of a Digital Hybrid-LNS Neuron. in: International Symposium on Integrated Circuits.
  • Weston, J. and Lee, P. (2007). An Unbounded Parallel Binary Tree Adder for use on a Cellular Platform. in: IEEE Symposium on Artificial Life.
    Cellular automata are by definition highly parallel structures and are therefore capable of giving rise to massively parallel systems. The highly parallel nature of the cellular automata framework permits the creation of a multitude of structures, endowed with the flexibility to perform vast amounts of calculations concurrently. This flexibility and parallelism is also now present in a number of hardware platforms, allowing for the adaptation of automata models into hardware. Presented herein is a binary tree adder implemented in cellular automata, able to perform substantial numbers of additions simultaneously; the number of calculations performed is limited only by the automata size. The binary tree adder is also simpler, in terms of both states (25 used in total) and structure, than those published before. Due to advances in hardware technology, representing the tree adder structure on a cellular platform such as an FPGA is a realistic ambition for the future, offering advantages such as increased robustness, an area regarded as vital for the future of electronics hardware.
  • Litchfield, C. et al. (2005). Least Squares Adaptive Algorithms Suitable for Multiplierless LMMSE Detection in 3rd Generation Mobile Systems. in: IEEE International Symposium on Personal, Indoor, and Mobile Radio Communications. pp. 1039-1044. Available at: http://dx.doi.org/10.1109/PIMRC.2005.1651599.
    This paper presents an evaluation of a simple adaptive FIR filter structure using a hybrid-logarithmic number system (H-LNS) to minimize the complexity of a decentralized LMMSE receiver in the WCDMA downlink. Non-linear arithmetic operations can be executed efficiently with the H-LNS architecture without the necessity of fixed or floating point array multiplications. The use of signed-LMS adaptive algorithms is included in this study since this significantly reduces the number of MAC operations required for conversion to and from the logarithmic domain. The multiplierless receiver has been simulated in Matlab® for the WCDMA-FDD downlink with macrodiversity, where the mantissa for each fractional logarithmic number was limited to 4, 6 and 8 binary address bits. The results indicate that the hybrid-logarithmic signed-LMS implementations are practical for slow fading channels, where the use of log-antilog conversion with small lookup tables is feasible with only minimal reduction in the signal to noise ratio.
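    The sign-based update that makes such a filter cheap can be sketched in a few lines. The snippet below is an illustrative sign-sign LMS step with a power-of-two step size (so the coefficient update needs only shifts and sign flips), not the paper's hybrid-LNS receiver; in the paper the remaining multiplications are handled through log/antilog lookup tables.

```python
def sign(v):
    # three-valued sign, as used by sign-based LMS variants
    return (v > 0) - (v < 0)

# Hypothetical sketch: one sign-sign LMS update with a power-of-two step
# size mu = 2**-mu_shift, so the coefficient update needs no multiplier.
# (The paper's receiver uses hybrid-LNS lookup tables instead; this
# variant only illustrates the multiplierless idea.)
def sign_sign_lms_step(w, x, d, mu_shift=4):
    y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
    e = d - y                                  # error against desired d
    step = 1.0 / (1 << mu_shift)               # power-of-two step size
    w_new = [wi + step * sign(e) * sign(xi) for wi, xi in zip(w, x)]
    return w_new, e
```

    Each call nudges every coefficient by ±2⁻⁴ towards reducing the error, which is why sign-based variants trade convergence speed for hardware simplicity.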
  • Litchfield, C. et al. (2005). The Use of Hybrid Logarithmic Arithmetic for Root Raised Cosine Matched Filters in WCDMA Downlink Receivers. in: IEEE Wireless Communications and Networking Conference. IEEE, pp. 596-600. Available at: http://dx.doi.org/10.1109/WCNC.2005.1424568.
    The paper compares and contrasts the performance of a root raised cosine matched filter implemented using hybrid logarithmic arithmetic with that of standard binary and floating point implementations. Hybrid logarithmic arithmetic is advantageous for FIR digital filters since it removes the necessity for the use of high speed array multipliers. These can be replaced by simple lookup table structures for conversion to and from the logarithmic domain. Matlab simulations of the hybrid logarithmic structure show that its performance is superior to that of recently published fixed point solutions, while offering a significantly reduced complexity when compared to floating point equivalents proposed for the WCDMA downlink in receiver applications. The use of hybrid logarithmic arithmetic also has the potential to reduce the power consumption, latency and hardware complexity for mobile handset applications.
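    The lookup-table replacement for multipliers can be sketched as follows. This is a hypothetical fixed-point model of a log-domain multiply (8 fractional bits, 8-bit positive integer inputs); the table sizes and indexing are illustrative, not those of the paper's filter.

```python
import math

FRAC_BITS = 8                      # fractional precision of the log
SCALE = 1 << FRAC_BITS

# "lin-to-log" table: fixed-point log2 of each 8-bit positive input
LOG_LUT = [0] + [round(math.log2(i) * SCALE) for i in range(1, 256)]
# "log-to-lin" (antilog) table for the fractional part of the log
ANTILOG_LUT = [2.0 ** (f / SCALE) for f in range(SCALE)]

def lns_mul(a, b):
    # multiply two values in 1..255 by ADDING their logs, then converting
    # back: antilog lookup for the fraction, a shift for the integer part
    s = LOG_LUT[a] + LOG_LUT[b]
    k, f = divmod(s, SCALE)
    return ANTILOG_LUT[f] * (1 << k)
```

    With 8 fractional bits, `lns_mul(7, 9)` lands within about 1% of the exact product 63, while exact powers of two such as `lns_mul(16, 16)` come out exactly.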
  • Lee, P. (2005). An Evaluation of a Hybrid-Logarithmic Number System DCT/IDCT Algorithm. in: IEEE International Symposium on Circuits and Systems. Available at: http://dx.doi.org/10.1109/ISCAS.2005.1465722.
    This paper presents an evaluation of an algorithm for performing the forward and inverse discrete cosine transforms (DCT) on digital images using a hybrid-logarithmic number system (hybrid-LNS) instead of linear binary arithmetic. The algorithm has been simulated using Matlab® where the accuracy of the fractional part of the logarithm has been limited to 8 bits and has been calculated using just 4, 6 or 8 binary address bits of the linear input data. The results show that it is possible to use this hybrid-LNS architecture to build multiplierless DCT and IDCT transforms having only a minimal reduction in image quality. The algorithm is suitable for implementation on existing mid-range FPGA technologies where there are limitations on the size of on-chip memory and high-speed computing elements.
  • McBader, S., Lee, P. and Sartori, A. (2004). The Impact of Modern FPGA Architectures on Neural Hardware: A Case Study of the TOTEM Neural Processor. in: Int. Joint Conf. on Neural Networks, Special Session on Dig. Imp. of Neural Nets - Invited Paper. IEEE, pp. 3149-3154. Available at: http://dx.doi.org/10.1109/IJCNN.2004.1381178.
    The implementation of neural processors in hardware is a very challenging task. However, recent advances in programmable architectures facilitate this task by providing the fundamental hardware blocks for building neural structures. Using the TOTEM neural processor as a case study, this paper reports on the main advantages of implementing neural hardware on programmable logic devices such as FPGAs.
  • McBader, S. and Lee, P. (2003). Vision Systems-on-Chip: General Purpose Architectures and Limitations. in: ECCTD 2003.
  • Paschalakis, S. and Lee, P. (2003). Double Precision Floating-Point Arithmetic on FPGAs. in: Proc. 2003 2nd International Conference on Field Programmable Technology (FPT2003). IEEE, pp. 352-358. Available at: http://dx.doi.org/10.1109/FPT.2003.1275775.
    We present low cost FPGA floating-point arithmetic circuits for all the common operations, i.e. addition/subtraction, multiplication, division and square root. Such circuits can be extremely useful in the FPGA implementation of complex systems that benefit from the reprogrammability and parallelism of the FPGA device but also require a general purpose arithmetic unit. While previous work has considered circuits for low precision floating-point formats, we consider the implementation of 64-bit double precision circuits that also provide rounding and exception handling.
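    For reference, the double-precision format these circuits implement packs a sign bit, an 11-bit biased exponent and a 52-bit fraction into 64 bits. A small Python helper (illustrative only, not part of the paper) unpacks those fields:

```python
import struct

# Unpack the IEEE-754 double-precision fields: 1 sign bit, 11-bit
# exponent (bias 1023), 52-bit fraction. Illustrative helper only.
def double_fields(x):
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF         # biased exponent
    fraction = bits & ((1 << 52) - 1)       # fraction; leading 1 implicit
    return sign, exponent, fraction
```

    For example, `double_fields(1.0)` gives `(0, 1023, 0)`: a true exponent of 0 stored with bias 1023, and an empty fraction because the leading 1 is implicit.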
  • El-Eraki, S. et al. (2002). A Multiplier-Less Adaptive CMA Equalizer. in: London Communications Symposium. pp. 253-256.
  • McBader, S. and Lee, P. (2002). A Programmable Image Signal Processing Architecture for Embedded Vision Systems. in: 14th IEEE International Conference on Digital Signal Processing, DSP 2002. pp. 1269-1272.
  • Paschalakis, S. and Lee, P. (2000). Combined geometric transformation and illumination invariant object recognition in RGB color images. in: Sanfeliu, A. et al. eds. 15th International Conference on Pattern Recognition (ICPR-2000). IEEE Computer Society, pp. 584-587.
    This paper presents a novel approach for object recognition in RGB color images using features based on the theories of geometric and complex moments. By effectively combining the properties of the RGB color space with the normalization procedures and properties of the geometric and complex moments, we have implemented a feature vector that is invariant to geometric transformations (i.e. translation, rotation and scale) and changes in both the illumination color and illumination intensity. The experimental results presented here demonstrate the performance of the proposed feature set and investigate its tolerance to image distortions.
  • Paschalakis, S. and Lee, P. (1999). Pattern recognition in grey level images using moment based invariant features. in: 7th IEE Conference on Image Processing and its Applications (IPA99). Inst Electrical Engineers Inspec Inc, pp. 245-249. Available at: http://dx.doi.org/10.1049/cp:19990320.
    Moment based invariants, in various forms, have been widely used over the years as features for recognition in many areas of image analysis. Typical examples include the use of moments for optical character recognition and shape identification. However, most of the work that has been carried out to date using moments and moment invariants is concerned with the identification of distinct shapes using binary images. There can be cases, though, where the different objects to be recognised share identical shapes and binary images fail to convey the necessary information to the recognition processes. The work presented in this paper not only looks at object recognition using binary images, but also addresses the issue of classification among objects which have identical shapes, using grey level images for the moment calculations. Two different moment based feature vectors that provide translation, scale, contrast and rotation invariance are used for the recognition of the different objects. These are the complex moment magnitudes and the Hu (1962) moment invariants. The performance of these two feature vectors is assessed both in the presence and absence of noise, and the effect of extending the order of the moments used in their calculations is investigated.
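    One of the two feature sets, the complex-moment magnitudes, can be sketched directly. The snippet below computes central complex moments c_pq of a grey-level image and keeps their magnitudes; taking moments about the centroid gives translation invariance, and the magnitude drops the phase that rotation introduces. The scale and contrast normalisation used in the paper is omitted for brevity, so this is an illustrative sketch rather than the paper's full feature vector.

```python
# Illustrative sketch: rotation-invariant complex-moment magnitudes of a
# grey-level image (scale/contrast normalisation from the paper omitted).
def complex_moment_magnitudes(img, max_order=3):
    h, w = len(img), len(img[0])
    m00 = sum(sum(row) for row in img)
    # centroid, so the moments are translation invariant
    xc = sum(x * img[y][x] for y in range(h) for x in range(w)) / m00
    yc = sum(y * img[y][x] for y in range(h) for x in range(w)) / m00
    feats = []
    for p in range(max_order + 1):
        for q in range(p + 1):
            c = sum(complex(x - xc, y - yc) ** p
                    * complex(x - xc, y - yc).conjugate() ** q
                    * img[y][x]
                    for y in range(h) for x in range(w))
            feats.append(abs(c))    # |c_pq| drops the rotation phase
    return feats
```

    Rotating an image by 90 degrees, for instance, leaves every magnitude unchanged, since c_pq only gains a phase factor under rotation.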