Dr Chee Siang (Jim) Ang

Senior Lecturer in Multimedia/Digital Systems
Director of Graduate Studies (Taught)
Director of Internationalisation

About

I am Chee Siang Ang (also known informally as Jim Ang), Senior Lecturer (equivalent to the North American rank of Associate Professor) in Multimedia and Digital Systems in the School of Engineering and Digital Arts, University of Kent. Before joining Kent, I was a research fellow at the Centre for Human-Computer Interaction Design, City University London, where I completed my PhD in the area of social gaming. I hold a Master's degree in Information Technology from Multimedia University, Malaysia, and a BSc in Computing from the University of Technology Malaysia.

My main research interest lies in the general area of HCI (human-computer interaction) with an emphasis on digital health. Specific areas include:

  • Games and immersive media (such as VR and AR).
  • Sensing technologies.

Due to the multidisciplinary nature of my work, I collaborate with exciting people from a wide range of areas, including electronic and mechanical engineering, medical science, psychology, sociology, and digital arts.

I am an investigator on several research projects:

  • Digital Brain Switch. Funded by the Engineering and Physical Sciences Research Council (EPSRC).
  • Kinetic User Interfaces and Multiuser 3D Virtual Worlds for Older People. Funded by EPSRC.
  • An interactive computer-based intervention to increase condom use: intervention development and pilot trial. Funded by the National Institute for Health Research (NIHR).
  • Epilepsy Networks – Joined-Up Thinking for Better Care. Funded by Innovate UK.

Research interests

My main research area is digital health, where I investigate, design and develop new technologies for the treatment and (self-)management of health conditions through effective prevention, early intervention, personalised treatment and continuous monitoring. I am particularly interested in immersive media technologies (virtual and augmented reality), computer games, and sensing technologies.

GAMES AND IMMERSIVE MEDIA

Computer applications today are not restricted to conventional 2D displays; they can take the form of 3D immersive visualisation and augmented information embedded in the physical world. This form of computing has traditionally been found mostly in entertainment applications such as games, but it is increasingly making an impact in more "serious" domains such as training and healthcare. I work with psychologists and psychiatrists to investigate how virtual reality (VR) and gaming technologies can be used for assessment and intervention in mental health. Recently, we have developed VR applications for anxiety disorder training, eating disorder therapy, pain management and emotion detection with VR eye-tracking.

SENSING TECHNOLOGIES

I collaborate closely with researchers in electronic and mechanical engineering to develop integrated hardware and software systems, with the aim of solving personal and societal problems through smart technology. For instance, I have developed a tangible interface using RFID tags on day-to-day objects that allows people with dementia to immerse themselves in a 3D virtual world for reminiscence. I have also worked on projects designing and developing low-cost monitoring devices that use skin-like sensors and 3D visualisation to provide biofeedback in dysphagia therapy. A recent project involves the use of skin-like EMG and EEG sensors for eating behaviour tracking and real-time wheelchair control.

Teaching

EL542 INTERACTIVE AND TANGIBLE MEDIA

This undergraduate module introduces practical techniques for creating interactive visual displays using Processing, a Java-based IDE. Students also develop tangible interfaces using the Arduino IDE with a range of sensors and actuators. Through a series of lectures and hands-on workshops, students learn to manipulate images, create realistic motion, and use motion sensing and speech recognition.
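
The module itself is taught in Processing and the Arduino IDE; purely as an illustration of the kind of pixel-level image manipulation covered, here is a minimal Python sketch using the Pillow library (the input file name is hypothetical and not part of the course material):

    from PIL import Image

    # Load an RGB image and convert it to greyscale by rewriting each pixel.
    img = Image.open("portrait.jpg").convert("RGB")   # hypothetical input file
    pixels = img.load()
    for y in range(img.height):
        for x in range(img.width):
            r, g, b = pixels[x, y]
            grey = int(0.299 * r + 0.587 * g + 0.114 * b)  # standard luminance weighting
            pixels[x, y] = (grey, grey, grey)
    img.save("portrait_grey.jpg")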

EL645 VIDEO GAME DEVELOPMENT

This undergraduate module covers a range of topics in video game design and development, including game physics, AI, level design, player behaviour, game rules and mechanics, as well as user interfaces. This module introduces students to game development using Unity3D and C#. Students will also learn about mobile game development and optimisation issues. 
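
The module is taught in Unity3D and C#; as a language-agnostic sketch of the kind of game-physics update loop it covers, the following Python snippet integrates a falling ball with a simple ground bounce (all constants are illustrative, not taken from the module):

    # Semi-implicit Euler integration of a bouncing ball at a fixed 60 Hz timestep.
    GRAVITY = -9.81       # m/s^2
    RESTITUTION = 0.6     # fraction of speed kept after each bounce
    DT = 1.0 / 60.0       # fixed timestep, as in a typical physics update loop

    def physics_step(pos_y, vel_y):
        vel_y += GRAVITY * DT          # update velocity first (semi-implicit Euler)
        pos_y += vel_y * DT            # then position
        if pos_y <= 0.0:               # collision with the ground plane at y = 0
            pos_y = 0.0
            vel_y = -vel_y * RESTITUTION
        return pos_y, vel_y

    pos, vel = 5.0, 0.0                # drop the ball from 5 m
    for _ in range(240):               # simulate four seconds of game time
        pos, vel = physics_step(pos, vel)
    print(f"height after 4 s: {pos:.2f} m, velocity: {vel:.2f} m/s")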

EL884 IMAGE ANALYSIS WITH SECURITY APPLICATION

This MSc module covers topics in computer vision, focusing on image analysis techniques and pattern recognition. I teach the section on pattern recognition and machine learning, covering topics including k-nearest neighbours (kNN), Bayes' rule, and linear and logistic regression.
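
As a flavour of the pattern-recognition material, the sketch below is a minimal k-nearest-neighbour classifier in plain Python; the toy data points and the choice of k are illustrative only, not taken from the module:

    import math
    from collections import Counter

    def knn_predict(train, query, k=3):
        """Classify `query` by majority vote among its k nearest training points."""
        # math.dist (Python 3.8+) gives the Euclidean distance between two points.
        nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
        labels = [label for _, label in nearest]
        return Counter(labels).most_common(1)[0][0]

    # Toy 2D data set with two classes separated along the x-axis.
    train = [((0.1, 0.2), "A"), ((0.3, 0.1), "A"),
             ((0.9, 0.8), "B"), ((1.0, 0.7), "B")]
    print(knn_predict(train, (0.2, 0.2)))   # -> "A"
    print(knn_predict(train, (0.95, 0.9)))  # -> "B"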

Supervision

  • Completed: Panote Siriaraya (currently assistant professor at Kyoto Institute of Technology, Japan), Investigation of Virtual Worlds as a Platform to Support Healthy Ageing.
  • Completed: Anthony Emeakaroha (currently deputy energy manager at Medway NHS), Analysis of Energy Conservation Through Product-Integrated Persuasive Feedback Using a Smart Sensor.
  • Completed: Pruet Putjorn (Royal Thai Scholar, currently Assistant Deputy President of Mae Fah Luang University, Thailand), Internet of Educational Things for Primary School Science Education in Rural Thailand. Winner of the Anglo-Thai Society Educational Awards for Excellence 2017.
  • Completed: Maria Matsangidou (currently postdoc at the University of Sheffield), Impact of Visual Imagery in Human Perceptions of Pain.
  • Completed: Ben Nicholls (EPSRC DTC, currently data scientist in FinTech), Skin Electronics and 3D Biofeedback for Swallowing and Chewing Detection.
  • Submitted: Boris Otkhmezuri, Design of Virtual Reality for psychological interventions.
  • 2015 – present: Luma Tabbaa, Virtual Reality and dementia care.
  • 2016 – present: Jittrapol Intarasirisawat (Royal Thai Scholar), Mobile games for cognitive assessment.
  • 2016 – present: Deogratias Mzurikwao (Commonwealth Scholar), Convolutional Neural Network for human physiological data analysis.
  • 2018 – present: Saber Mirzaee, Mask Region-based Convolutional Neural Network (Mask R-CNN) for microscopic cell segmentation and classification.
  • 2019 – present: Raya Al-Habsi (Royal Oman Scholar), Crowd-sourcing Virtual Reality content generation for dementia care.
  • 2019 – present: Ethan Cheung (EPSRC DTC), AI-driven Virtual Reality personalisation for dementia care for large scale deployment.
  • 2019 – present: Ryan Searle (EPSRC DTC), Deep Learning analysis of wearable sensor data for depression tracking.
  • 2019 – present: Derry Bass, Long-term large-scale evaluation of Virtual Reality use for dementia care in care homes.

I am currently interested in supervising PhD projects in all the above areas, specifically in: a) the study, design and evaluation of novel virtual reality, augmented reality and gaming technologies for health and wellbeing; and b) the creative and innovative use of integrated hardware-software systems in various domains, with a focus on healthcare.

Publications

Showing 50 of 98 total publications in the Kent Academic Repository.

Article

  • Mishra, S., Kim, Y., Intarasirisawat, J., Kwon, Y., Lee, Y., Mahmood, M., Lim, H., Yu, K., Chee Siang, A. and Yeo, W. (2020). Soft, wireless periocular wearable electronics for real-time detection of eye vergence in a virtual reality toward mobile eye therapies. Science Advances [Online] 6. Available at: http://dx.doi.org/10.1126/sciadv.aay1729.
    Ocular disorders are currently affecting the developed world, causing loss of productivity in adults and children. While the cause of such disorders is not clear, neurological issues are often considered as the biggest possibility. Treatment of strabismus and vergence requires an invasive surgery or clinic-based vision therapy that has been used for decades due to the lack of alternatives such as portable therapeutic tools. Recent advancement in electronic packaging and image processing techniques have opened the possibility for optics-based portable eye tracking approaches, but several technical and safety hurdles limit the implementation of the technology in wearable applications. Here, we introduce a fully wearable, wireless soft electronic system that offers a portable, highly sensitive tracking of eye movements (vergence) via the combination of skin-conformal sensors and a virtual reality system. Advancement of material processing and printing technologies based on aerosol jet printing enables reliable manufacturing of skin-like sensors, while a flexible electronic circuit is prepared by the integration of chip components onto a soft elastomeric membrane. Analytical and computational study of a data classification algorithm provides a highly accurate tool for real-time detection and classification of ocular motions. In vivo demonstration with 14 human subjects captures the potential of the wearable electronics as a portable therapy system, which can be easily synchronized with a virtual reality headset.
  • Mahmood, M., Mzurikwao, D., Kim, Y., Lee, Y., Mishra, S., Herbert, R., Duarte, A., Ang, C. and Yeo, W. (2019). Fully portable and wireless universal brain-machine interfaces enabled by flexible scalp electronics and deep-learning algorithm. Nature Machine Intelligence [Online] 1:412-422. Available at: http://dx.doi.org/10.1038/s42256-019-0091-7.
    Variation in human brains creates difficulty in implementing electroencephalography (EEG) into universal brain-machine interfaces (BMI). Conventional EEG systems typically suffer from motion artifacts, extensive preparation time, and bulky equipment, while existing EEG classification methods require training on a per-subject or per-session basis. Here, we introduce a fully portable, wireless, flexible scalp electronic system, incorporating a set of dry electrodes and flexible membrane circuit. Time domain analysis using convolutional neural networks allows for an accurate, real-time classification of steady-state visually evoked potentials on the occipital lobe. Simultaneous comparison of EEG signals with two commercial systems captures the improved performance of the flexible electronics with significant reduction of noise and electromagnetic interference. The two-channel scalp electronic system achieves a high information transfer rate (122.1 ± 3.53 bits per minute) with six human subjects, allowing for a wireless, real-time, universal EEG classification for an electronic wheelchair, motorized vehicle, and keyboard-less presentation.
  • Rose, V., Stewart, I., Jenkins, K., Tabbaa, L., Ang, C. and Matsangidou, M. (2019). Bringing the outside in: The feasibility of virtual reality with people with dementia in an inpatient psychiatric care setting. Dementia [Online]. Available at: https://dx.doi.org/10.1177/1471301219868036.
    Background and objectives: Emerging research supports virtual reality use with people with dementia in the community, but is limited to this area, warranting further investigation in different care settings. The feasibility of virtual reality within an inpatient psychiatric care setting was therefore explored.

    Research design and methods: Eight people with dementia and 16 caregivers were recruited in January and February 2018 from a UK hospital specialising in progressive neurological conditions. A mixed methods design measured affect and behaviour using the Observed Emotion Rating Scale, Overt Aggression Scale-Modified for Neurorehabilitation and St Andrew’s Sexual Behaviour Assessment. Thematic analysis was conducted following semi-structured interviews. Caregivers who worked at the hospital supported people with dementia throughout the process and were interviewed for their views on Head Mounted Display-Virtual Reality (HMD-VR) use with people with dementia.
    Results: HMD-VR was tried and accepted by people with dementia. Participants viewed HMD-VR positively as a ‘change in environment’ and would use it again. People with dementia experienced more pleasure during and after HMD-VR compared to before exposure, as well as increased alertness after. Three core themes emerged: ‘Virtual Reality Experiences’, ‘Impact of Virtual Reality’ and ‘Experiences within the Virtual Environment’. Caregivers discussed preconceptions about virtual reality use and how these changed.

    Discussion and implications: This is the first study to explore the feasibility of HMD-VR with people with mild to moderately severe dementia in hospital and found that overall HMD-VR is viable. Findings evidence the clinical feasibility of HMD-VR implementation in this environment and inform future research.
  • Intarasirisawat, J., Ang, C., Efstratiou, C., Dickens, L. and Page, R. (2019). Exploring the Touch and Motion Features in Game-Based Cognitive Assessments. Journal of Interactive, Mobile, Wearable and Ubiquitous Technologies [Online] 3. Available at: http://dx.doi.org/10.1145/3351245.
    Early detection of cognitive decline is important for timely intervention and treatment strategies to prevent further deterioration or development of more severe cognitive impairment, as well as identify at risk individuals for research. In this paper, we explore the feasibility of using data collected from built-in sensors of mobile phone and gameplay performance in mobile-game-based cognitive assessments. Twenty-two healthy participants took part in the two-session experiment where they were asked to take a series of standard cognitive assessments followed by playing three popular mobile games in which user-game interaction data were passively collected. The results from bivariate analysis reveal correlations between our proposed features and scores obtained from paper-based cognitive assessments. Our results show that touch gestural interaction and device motion patterns can be used as supplementary features on mobile game-based cognitive measurement. This study provides initial evidence that game related metrics on existing off-the-shelf games have potential to be used as proxies for conventional cognitive measures, specifically for visuospatial function, visual search capability, mental flexibility, memory and attention.
  • Douglas, K., Uscinski, J., Sutton, R., Cichocka, A., Nefes, T., Ang, C. and Deravi, F. (2019). Understanding conspiracy theories. Advances in Political Psychology [Online] 40:3-35. Available at: https://doi.org/10.1111/pops.12568.
    Scholarly efforts to understand conspiracy theories have grown significantly in recent years, and there is now a broad and interdisciplinary literature that we review in this article. We ask three specific questions. First, what are the factors that are associated with conspiracy theorizing? Our review of the literature shows that conspiracy beliefs result from a range of psychological, political and social factors. Next, how are conspiracy theories communicated? Here, we explain how conspiracy theories are shared among individuals and spread through traditional and social media platforms. Next, what are the risks and rewards associated with conspiracy theories? By focusing on politics and science, we argue that conspiracy theories do more harm than good. Finally, because this is a growing literature and many open questions remain, we conclude by suggesting several promising avenues for future research.
  • Otkhmezuri, B., Boffo, M., Siriaraya, P., Matsangidou, M., Wiers, R., Mackintosh, B., Ang, C. and Salemink, E. (2019). Believing Is Seeing: A Proof-of-Concept Semiexperimental Study on Using Mobile Virtual Reality to Boost the Effects of Interpretation Bias Modification for Anxiety. JMIR Mental Health [Online] 6. Available at: https://doi.org/10.2196/11517.
    Background: Cognitive Bias Modification of Interpretations (CBM-I) is a computerized intervention designed to change negatively biased interpretations of ambiguous information, which underlie and reinforce anxiety. The repetitive and monotonous features of CBM-I can negatively impact training adherence and learning processes.

    Objective: This proof-of-concept study aimed to examine whether performing a CBM-I training using mobile virtual reality technology (virtual reality Cognitive Bias Modification of Interpretations [VR-CBM-I]) improves training experience and effectiveness.

    Methods: A total of 42 students high in trait anxiety completed 1 session of either VR-CBM-I or standard CBM-I training for performance anxiety. Participants’ feelings of immersion and presence, emotional reactivity to a stressor, and changes in interpretation bias and state anxiety, were assessed.

    Results: The VR-CBM-I resulted in greater feelings of presence (P<.001, d=1.47) and immersion (P<.001, ηp2=0.74) in the training scenarios and outperformed the standard training in effects on state anxiety (P<.001, ηp2=0.3) and emotional reactivity to a stressor (P=.03, ηp2=0.12). Both training varieties successfully increased the endorsement of positive interpretations (P<.001, d repeated measures [drm]=0.79) and decreased negative ones (P<.001, drm=0.72). In addition, changes in the emotional outcomes were correlated with greater feelings of immersion and presence.

    Conclusions: This study provided first evidence that (1) the putative working principles underlying CBM-I trainings can be translated into a virtual environment and (2) virtual reality holds promise as a tool to boost the effects of CBM-I training for highly anxious individuals while increasing users’ experience with the training application.
  • Kanjo, E., Younis, E. and Ang, C. (2018). Deep Learning Analysis of Mobile Physiological, Environmental and Location Sensor Data for Emotion Detection. Information Fusion [Online] 49:46-56. Available at: https://dx.doi.org/10.1016/j.inffus.2018.09.001.
    The detection and monitoring of emotions are important in various applications, e.g. to enable naturalistic and personalised human-robot interaction. Emotion detection often requires modelling of various data inputs from multiple modalities, including physiological signals (e.g. EEG and GSR), environmental data (e.g. audio and weather), videos (e.g. for capturing facial expressions and gestures) and, more recently, motion and location data. Many traditional machine learning algorithms have been utilised to capture the diversity of multimodal data at the sensor and feature levels for human emotion classification. While the feature engineering processes often embedded in these algorithms are beneficial for emotion modelling, they inherit some critical limitations which may hinder the development of reliable and accurate models. In this work, we adopt a deep learning approach for emotion classification through an iterative process of adding and removing large numbers of sensor signals from different modalities. Our dataset was collected in a real-world study from smartphones and wearable devices. It merges local interactions of three sensor modalities (on-body, environmental and location) into a global model that represents signal dynamics along with the temporal relationships of each modality. Our approach employs a series of learning algorithms, including a hybrid approach using a Convolutional Neural Network and a Long Short-term Memory Recurrent Neural Network (CNN-LSTM) on the raw sensor data, eliminating the need for manual feature extraction and engineering. The results show that the adoption of deep-learning approaches is effective in human emotion classification when a large number of sensor inputs is utilised (average accuracy 95% and F-measure 95%), and the hybrid models outperform traditional fully connected deep neural networks (average accuracy 73% and F-measure 73%). Furthermore, the hybrid models outperform previously developed ensemble algorithms that utilise feature engineering to train the model (average accuracy 83% and F-measure 82%).
  • Matsangidou, M., Otterbacher, J., Ang, C. and Zaphiris, P. (2018). Can the Crowd Tell How I Feel? Trait Empathy and Ethnic Background in a Visual Pain Judgment Task. Universal Access in the Information Society [Online] 17:649-661. Available at: https://dx.doi.org/10.1007/s10209-018-0611-y.
    Many advocate for artificial agents to be empathic. Crowdsourcing could help, by facilitating human-in-the-loop approaches and dataset creation for visual emotion recognition algorithms. Although crowdsourcing has been employed successfully for a range of tasks, it is not clear how effective crowdsourcing is when the task involves subjective rating of emotions. We examined relationships between demographics, empathy and ethnic identity in pain emotion recognition tasks. Amazon MTurkers viewed images of strangers in painful settings, and tagged subjects’ emotions. They rated their level of pain arousal and confidence in their responses, and completed tests to gauge trait empathy and ethnic identity. We found that Caucasian participants were less confident than others, even when viewing other Caucasians in pain. Gender correlated to word choices for describing images, though not to pain arousal or confidence. The results underscore the need for verified information on crowdworkers, to harness diversity effectively for metadata generation tasks.
  • Putjorn, P., Siriaraya, P., Deravi, F. and Ang, C. (2018). Investigating the use of sensor-based IoET to facilitate learning for children in rural Thailand. PLOS ONE [Online] 13:e0201875. Available at: https://doi.org/10.1371/journal.pone.0201875.
    A novel sensor-based Internet of Educational Things (IoET) platform named OBSY was iteratively designed, developed and evaluated to support education in rural regions in Thailand. To assess the effectiveness of this platform, a study was carried out at four primary schools located near the Thai northern border with 244 students and 8 teachers. Participants were asked to carry out three science-based learning activities and were measured for improvements in learning outcome and learning engagement. Overall, the results showed that students in the IoET group who had used OBSY to learn showed significantly higher learning outcome and had better learning engagement than those in the control condition. In addition, for those in the IoET group, there was no significant effect regarding gender, home location (Urban or Rural), age, prior experience with technology and ethnicity on learning outcome. For learning engagement, only age was found to influence interest/enjoyment. The study demonstrated the potential of IoET technologies in underprivileged areas, through a co-design approach with teachers and students, taking into account the local contexts.
  • Matsangidou, M., Ang, C., Mauger, A., Intarasirisawat, J., Otkhmezuri, B. and Avraamides, M. (2018). Is Your Virtual Self as Sensational as Your Real? Virtual Reality: The Effect of Body Consciousness on the Experience of Exercise Sensations. Psychology of Sport & Exercise [Online]. Available at: https://doi.org/10.1016/j.psychsport.2018.07.004.
    Objectives: Past research has shown that Virtual Reality (VR) is an effective method for reducing the perception of pain and effort associated with exercise. As pain and effort are subjective feelings, they are influenced by a variety of psychological factors, including one’s awareness of internal body sensations, known as Private Body Consciousness (PBC). The goal of the present study was to investigate whether the effectiveness of VR in reducing the feeling of exercise pain and effort is moderated by PBC.
    Design and Methods: Eighty participants were recruited to this study and were randomly assigned to a VR or a non-VR control group. All participants were required to maintain a 20% 1RM isometric bicep curl, whilst reporting ratings of pain intensity and perception of effort. Participants in the VR group completed the isometric bicep curl task whilst wearing a VR device which simulated an exercising environment. Participants in the non-VR group completed a conventional isometric bicep curl exercise without VR. Participants’ heart rate was continuously monitored along with time to exhaustion. A questionnaire was used to assess PBC.
    Results: Participants in the VR group reported significantly lower pain and effort and exhibited longer time to exhaustion compared to the non-VR group. Notably, PBC had no effect on these measures and did not interact with the VR manipulation.
    Conclusions: Results verified that VR during exercise could reduce negative sensations associated with exercise regardless of the levels of PBC.
  • Kanjo, E., Kuss, D. and Ang, C. (2017). NotiMind: Utilizing Responses to Smart Phone Notifications as Affective sensors. IEEE Access [Online]. Available at: http://dx.doi.org/10.1109/ACCESS.2017.2755661.
    Today’s mobile phone users are faced with large numbers of notifications on social media, ranging from new followers on Twitter and emails to messages received from WhatsApp and Facebook. These digital alerts continuously disrupt activities through instant calls for attention. This paper examines closely the way everyday users interact with notifications and their impact on users’ emotion. Fifty users were recruited to download our application NotiMind and use it over a five-week period. Users’ phones collected thousands of social and system notifications along with affect data collected via self-reported PANAS tests three times a day. Results showed a noticeable correlation between positive affective measures and keyboard activities. When large numbers of Post and Remove notifications occur, a corresponding increase in negative affective measures is detected. Our predictive model has achieved a good accuracy level using three different “in the wild” classifiers (F-measure 74-78% within-subject model, 72-76% global model). Our findings show that it is possible to automatically predict when people are experiencing positive, neutral or negative affective states based on interactions with notifications. We also show how our findings open the door to a wide range of applications in relation to emotion awareness on social and mobile communication.
  • Chauhan, S., Bobrowicz, A. and Ang, C. (2017). Perception of Digital and Physical Sculpture by People with Dementia: An Investigation into Creative Potential. The International Journal of New Media, Technology and the Arts [Online] 12:11 -25. Available at: http://dx.doi.org/10.18848/2326-9987/CGP/v12i02/11-25.
    The perception of three-dimensional sculptural forms is quite different from two-dimensional art works such as painting and drawing. Though both are considered forms of artistic production, the distinction is the tactual and kinesthetic sensations of the three-dimensional sculptural forms. The understanding of the perception of sculptural forms adds another dimension to cognitive and emotive qualities embedded in art. The emotions evoked while observing, knowing, touching, and feeling a sculpture, as well as the experiences of working, creating, and producing one, affect an individual’s perception. People with dementia who develop visual and perceptual difficulties may gradually have a different experience of sculpture. The materiality of a sculpture and its tactile engagement have the capacity to influence their perception. With spatial errors, changes in colour, and misperceptions, there is a possibility that they see, appreciate, and experience, in a different way, both physical sculptural forms and those that are mediated through digital technology.
  • Douglas, K., Ang, C. and Deravi, F. (2017). Reclaiming the truth. The Psychologist [Online] 30:36-42. Available at: https://thepsychologist.bps.org.uk/volume-30/june-2017/reclaiming-truth.
  • Lee, Y., Nicholls, B., Lee, D., Chen, Y., Chun, Y., Ang, C. and Yeo, W. (2017). Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training. Scientific Reports [Online] 7. Available at: http://dx.doi.org/10.1038/srep46697.
    We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification captures the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.
  • Bailey, J., Webster, R., Hunter, R., Griffin, M., Freemantle, N., Rait, G., Estcourt, C., Michie, S., Anderson, J., Stephenson, J., Gerressu, M., Ang, C. and Murray, E. (2016). The Men’s Safer Sex project: intervention development and feasibility randomised controlled trial of an interactive digital intervention to increase condom use in men. Health Technology Assessment [Online] 20:1-115. Available at: http://dx.doi.org/10.3310/hta20910.
    Background: This report details the development of the Men’s Safer Sex website and the results of a feasibility randomised controlled trial (RCT), health economic assessment and qualitative evaluation.
    Objectives: (1) Develop the Men’s Safer Sex website to address barriers to condom use; (2) determine the best design for an online RCT; (3) inform the methods for collecting and analysing health economic data; (4) assess the Sexual Quality of Life (SQoL) questionnaire and European Quality of Life-5 Dimensions, three-level version (EQ-5D-3L) to calculate quality-adjusted life-years (QALYs); and (5) explore clinic staff and men’s views of online research methodology.
    Methods: (1) Website development: we combined evidence from research literature and the views of experts (n = 18) and male clinic users (n = 43); (2) feasibility RCT: 159 heterosexually active men were recruited from three sexual health clinics and were randomised by computer to the Men’s Safer Sex website plus usual care (n = 84) or usual clinic care only (n = 75). Men were invited to complete online questionnaires at 3, 6, 9 and 12 months, and sexually transmitted infection (STI) diagnoses were recorded from clinic notes at 12 months; (3) health economic evaluation: we investigated the impact of using different questionnaires to calculate utilities and QALYs (the EQ-5D-3L and SQoL questionnaire), and compared different methods to collect resource use; and (4) qualitative evaluation: thematic analysis of interviews with 11 male trial participants and nine clinic staff, as well as free-text comments from online outcome questionnaires.

    Results: (1) Software errors and clinic Wi-Fi access presented significant challenges. Response rates for online questionnaires were poor but improved with larger vouchers (from 36% with £10 to 50% with £30). Clinical records were located for 94% of participants for STI diagnoses. There were no group differences in condomless sex with female partners [incidence rate ratio (IRR) 1.01, 95% confidence interval (CI) 0.52 to 1.96]. New STI diagnoses were recorded for 8.8% (7/80) of the intervention group and 13.0% (9/69) of the control group (IRR 0.75, 95% CI 0.29 to 1.89). (2) Health-care resource data were more complete using patient files than questionnaires. The probability that the intervention is cost-effective is sensitive to the source of data used and whether or not data on intended pregnancies are included. (3) The pilot RCT fitted well around clinical activities but 37% of the intervention group did not see the Men’s Safer Sex website and technical problems were frustrating. Men’s views of the Men’s Safer Sex website and research procedures were largely positive.
    Conclusions: It would be feasible to conduct a large-scale RCT using clinic STI diagnoses as a primary outcome; however, technical errors and a poor response rate limited the collection of online self-reported outcomes. The next steps are (1) to optimise software for online trials, (2) to find the best ways to integrate digital health promotion with clinical services, (3) to develop more precise methods for collecting resource use data and (4) to work out how to overcome barriers to digital intervention testing and implementation in the NHS.
    Trial registration: Current Controlled Trials ISRCTN18649610.
    Funding: This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 20, No. 91. See the NIHR Journals Library website for further project information.
  • OTTERBACHER, J., Ang, C., LITVAK, M. and ATKINS, D. (2016). Show Me You Care: Trait Empathy, Linguistic Style and Mimicry on Facebook. ACM Transactions on Internet Technology [Online] 17. Available at: http://dx.doi.org/10.1145/2996188.
    Linguistic mimicry, the adoption of another’s language patterns, is a subconscious behavior with pro-social benefits. However, some professions advocate its conscious use in empathic communication. This involves mutual mimicry; effective communicators mimic their interlocutors, who also mimic them back. Since mimicry has often been studied in face-to-face contexts, we ask whether individuals with empathic dispositions have unique communication styles and/or elicit mimicry in mediated communication on Facebook. Participants completed Davis’ Interpersonal Reactivity Index and provided access to Facebook activity. We confirm that dispositional empathy is correlated to the use of particular stylistic features. In addition, we identify four empathy profiles and find correlations to writing style. When a linguistic feature is used, this often “triggers” use by friends. However, the presence of particular features, rather than participant disposition, best predicts mimicry. This suggests that machine-human communications could be enhanced based on recently used features, without extensive user profiling.
  • Farzin, D., Chee Siang, A., M A, H., Areej, A., Malcolm, P. and Mohamed, S. (2015). Usability and Performance Measure of a Consumer-grade Brain Computer Interface System for Environmental Control by Neurological Patients. International Journal of Engineering and Technology Innovation [Online] 5:165-177. Available at: http://sparc.nfu.edu.tw/~ijeti/download.php?file_id=103.
    With the increasing incidence and prevalence of chronic brain injury patients and the current financial constraints in healthcare budgets, there is a need for a more intelligent way to realise the current practice of neuro-rehabilitation service provision. Brain-computer interface (BCI) systems have the potential to address this issue to a certain extent, but only if carefully designed research can demonstrate that these systems are accurate, safe, cost-effective, able to increase patient/carer satisfaction and enhance their quality of life. Therefore, one of the objectives of the proposed study was to examine whether participants (patients with brain injury and a sample of the reference population) were able to use a low-cost BCI system (Emotiv EPOC) to interact with a computer and to communicate via spelling words. Patients who participated in the study did not have prior experience of using BCI headsets, so as to measure the user experience on first exposure to BCI training. To measure the emotional arousal of participants we used an ElectroDermal Activity Sensor (Q Sensor by Affectiva). For the signal processing and feature extraction of imagery controls, the Cognitive Suite of Emotiv's Control Panel was used. Our study reports the key findings based on data obtained from a group of patients and a sample reference population, and presents the implications for the design and development of a BCI system for communication and control. The study also evaluates the performance of the system when used practically in the context of an acute clinical environment.
  • Alelis, G., Bobrowicz, A. and Ang, C. (2015). Comparison of engagement and emotional responses of older and younger adults interacting with 3D cultural heritage artefacts on personal devices. Behaviour & Information Technology [Online]:1-15. Available at: http://dx.doi.org/10.1080/0144929X.2015.1056548.
    The availability of advanced software and less expensive hardware allows museums to preserve and share artefacts digitally. As a result, museums are frequently making their collections accessible online as interactive, 3D models. This could lead to the unique situation of viewing the digital artefact before the physical artefact. Experiencing artefacts digitally outside of the museum on personal devices may affect the user's ability to emotionally connect to the artefacts. This study examines how two target populations of young adults (18–21 years) and the elderly (65 years and older) responded to seeing cultural heritage artefacts in three different modalities: augmented reality on a tablet, 3D models on a laptop, and then physical artefacts. Specifically, the time spent, enjoyment, and emotional responses were analysed. Results revealed that regardless of age, the digital modalities were enjoyable and encouraged emotional responses. Seeing the physical artefacts after the digital ones did not lessen their enjoyment or emotions felt. These findings aim to provide an insight into the effectiveness of 3D artefacts viewed on personal devices and artefacts shown outside of the museum for encouraging emotional responses from older and younger people.
  • Bailey, J., Webster, R., Hunter, R., Freemantle, N., Rait, G., Michie, S., Estcourt, C., Anderson, J., Gerressu, M., Stephenson, J., Ang, C., Hart, G., Dhanjal, S. and Murray, E. (2015). The Men’s Safer Sex (MenSS) trial: protocol for a pilot randomised controlled trial of an interactive digital intervention to increase condom use in men. BMJ Open [Online] 5:e007552-e007552. Available at: http://dx.doi.org/10.1136/bmjopen-2014-007552.
  • Green, M., Bobrowicz, A. and Ang, C. (2015). The lesbian, gay, bisexual and transgender community online: discussions of bullying and self-disclosure in YouTube videos. Behaviour & Information Technology [Online]:1-9. Available at: http://dx.doi.org/10.1080/0144929X.2015.1012649.
    Computer-mediated communication has become a popular platform for identity construction and experimentation as well as social interaction for those who identify as lesbian, gay, bisexual or transgender (LGBT). The creation of user-generated videos has allowed content creators to share experiences on LGBT topics. With bullying becoming more common amongst LGBT youth, it is important to obtain a greater understanding of this phenomenon. In our study, we report on the analysis of 151 YouTube videos which were identified as having LGBT- and bullying-related content. The analysis reveals how content creators openly disclose personal information about themselves and their experiences in a non-anonymous rhetoric with an unknown public. These disclosures could indicate a desire to seek friendship, support and provide empathy.
  • Wilkinson, D., Moreno, S., Ang, C., Deravi, F., Sharma, D. and Sakel, M. (2015). Emotional Correlates of Unirhinal Odor Identification. Laterality: Asymmetries of Body, Brain and Cognition [Online] 21:85-99. Available at: http://dx.doi.org/10.1080/1357650X.2015.1075546.
    It seems self-evident that smell profoundly shapes emotion, but less clear is the nature of this interaction. Here we sought to determine whether the ability to identify odors co-varies with self-reported feelings of empathy and emotional expression recognition, as predicted if the two capacities draw on a common resource. Thirty-six neurotypical volunteers were administered the Alberta Smell Test, the Interpersonal Reactivity Index and an emotional expression recognition task. Statistical analyses indicated that feelings of emotional empathy positively correlated with odor discrimination in the right nostril, while the recognition of happy and fearful facial expressions positively correlated with odor discrimination in the left nostril. These results uncover new links between olfactory discrimination and emotion which, given the ipsilateral configuration of the olfactory projections, point towards intra- rather than inter-hemispheric interaction. The results also provide novel support for the proposed lateralisation of emotional empathy and the recognition of facial expression, and give reason to further explore the diagnostic sensitivity of smell tests because reduced sensitivity to others’ emotions can mark the onset of certain neurological diseases.
  • Pruet, P., Ang, C. and Farzin, D. (2014). Understanding tablet computer usage among primary school students in underdeveloped areas: Students’ technology experience, learning styles and attitudes. Computers in Human Behavior [Online] 55:1131-1144. Available at: http://doi.org/10.1016/j.chb.2014.09.063.
    The need to provide low-cost learning technologies such as laptops or tablet computers in developing countries with the aim to bridge the digital divide as well as addressing the uneven standards of education quality has been widely recognised by previous studies. With this aim in mind, the Thai Government has launched the “One Tablet PC Per Child” (OTPC) policy and distributed 800,000 tablet computers to grade-one students nationwide in 2012. However, there is limited empirical evidence on the effectiveness of tablet computer use in the classroom. Our study examined students’ learning styles, attitudes towards tablet computer use and how these are linked to their academic performance. The study has investigated 213 grade two students in economically underprivileged regions of North Thailand. Data collection was based on questionnaires filled in by the students with the help of their teachers. Our results overall suggested that there were some key significant differences in relation to students’ gender and home locations (urban vs. rural). In contrast to existing studies, both genders at this stage had similar technology experience and positive attitudes towards tablet computer use. However, we found girls had higher visual learning style (M = 4.23, p < .032) than boys (M = 3.96). Where home location was concerned, rural students had higher learning competitiveness and higher levels of anxiety towards tablet use (M = 1.71, p < .028) than urban students (M = 1.33). Additionally, we also found technology experiences, collaborative learning style and anxiety affected students’ academic performance.
  • Emeakaroha, A., Ang, C., Yan, Y. and Hopthrow, T. (2014). Integrating persuasive technology with energy delegates for energy conservation and carbon emission reduction in a university campus. Energy [Online] 76:357-374. Available at: http://dx.doi.org/10.1016/j.energy.2014.08.027.
    This paper presents the results of energy conservation strategies implemented in university residential halls to address energy consumption issues, using IPTED (Integration of Persuasive Technology and Energy Delegates). The results show that real-time energy feedback from a visual interface, when combined with an energy delegate, can provide significant energy savings. The intervention was conducted in student halls of residence comprising 16 halls with 112 students. Overall, the use of a real-time feedback system reduced energy consumption significantly when compared to baseline readings. Interestingly, we found that combining the real-time feedback system with a human energy delegate in 8 halls resulted in a higher reduction of 37% in energy consumption compared to the baseline, amounting to savings of 1360.49 kWh and 713.71 kg of CO2 in the experimental halls. In contrast, the 8 non-experimental halls, which were exposed only to the real-time feedback and weekly email alerts, achieved just a 3.5% reduction in energy consumption compared to the baseline, amounting to savings of only 165.00 kWh and 86.56 kg of CO2.
  • Emeakaroha, A., Ang, C., Yan, Y. and Hopthrow, T. (2014). A persuasive feedback support system for energy conservation and carbon emission reduction in campus residential buildings. Energy and Buildings [Online] 82:719-732. Available at: http://dx.doi.org/10.1016/j.enbuild.2014.07.071.
    There is a need for energy conservation mechanisms, especially in university campuses, as students do not have any direct feedback on their energy consumptions, which leads to excess usages. There are few existing approaches aiming to reduce electricity usages in higher education institutions through real-time feedback applications. These approaches mainly apply student experimental studies with incentives (gift reward). Their feedback systems present data only in near real time using data loggers and Modbus data collector, which are characterised with a slow and unstable data transfer rate. Furthermore, they are not designed for long-term deployment in a wider campus energy management environment. Thus, the challenges for reducing energy consumption and carbon emissions in the higher education sector still remain.

    To address these challenges, we have designed, configured and implemented a robust persuasive feedback support system (PFSS) to facilitate energy conservation and carbon emission reduction. This paper presents the complete architecture of the proposed PFSS, its system interface and the real time measurement output strategies. To demonstrate the applicability of the proposed system and to assess its performance in comparison with the previous

Book section

  • Mzurikwao, D., Williams Samuel, O., Grace Asogbon, M., Li, X., Li, G., Yeo, W., Efstratiou, C. and Siang Ang, C. (2019). A Channel Selection Approach Based on Convolutional Neural Network for Multi-channel EEG Motor Imagery Decoding. In: 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE). New York, USA: IEEE, pp. 195-202. Available at: https://doi.org/10.1109/AIKE.2019.00042.
    For many disabled people, a brain-computer interface (BCI) may be the only way to communicate with others and to control things around them. Using the motor imagery paradigm, one can decode an individual's intention from their brainwaves to help them interact with their environment without having to make any physical movement. For decades, machine learning models trained on features extracted from acquired electroencephalogram (EEG) signals have been used to decode motor imagery activities. This method has several limitations and constraints, especially during feature extraction. The large number of channels on current EEG devices makes them hard to use in real life, as they are bulky, uncomfortable to wear, and take a lot of time to prepare. In this paper, we introduce a technique to perform channel selection using a convolutional neural network (CNN) and to decode multiple classes of motor imagery intentions from four participants who are amputees. A CNN model trained on EEG data from 64 channels achieved a mean classification accuracy of 99.7% with five classes. Channel selection based on weights extracted from the trained model was performed, with subsequent models trained on eight selected channels achieving a reasonable accuracy of 91.5%. Training the model in the time domain and the frequency domain was also compared, and different window sizes were tested to explore the possibilities of real-time application. Our method of channel selection was then evaluated on a publicly available motor imagery EEG dataset.
  • Siriaraya, P. and Ang, C. (2019). The Social Interaction Experiences of Older People in a 3D Virtual Environment. In: Sayago, S. ed. Perspectives on Human-Computer Interaction Research With Older People. Springer, pp. 101-117. Available at: https://doi.org/10.1007/978-3-030-06076-3_7.
    Virtual worlds offer much potential in supporting social interaction for older adults, particularly as a platform which can provide an interactive and immersive social experience. Yet, not much work has been carried out to study the use, interaction and behaviour of older people in 3D virtual world systems, especially studies which investigate their interactions in a fully functional virtual world. Most focus on usability issues, such as cognitive difficulties when navigating in a 3D space, and we know little about their perceptions and preferences when socializing in a virtual space. In this chapter, we report an experimental study examining the various factors which affected the social experience of older users in virtual worlds. The study involved 38 older participants engaging with a 3D and non-3D virtual grocery store. A mixed method of questionnaire and contextual interview was used for data collection and analysis. Overall, we found that physical presence was a significant predictor of many measures defining the quality of social interaction, yet participants often reported a sense of artificiality in their virtual experience. Interestingly, avatars were not considered directly important for social interaction and instead were only seen as a “place holder” to complete the tasks. Two factors contributed to this: the lack of non-verbal communication and the perceived difficulty in embodying physical people with virtual avatars.
  • Putjorn, P., Siriaraya, P., Ang, C. and Deravi, F. (2017). Designing a ubiquitous sensor-based platform to facilitate learning for young children in Thailand. In: Proceedings of the 19th International Conference on Human-Computer Interaction With Mobile Devices and Services. New York, USA: ACM. Available at: http://dx.doi.org/10.1145/3098279.3098525.
    Education plays an important role in helping developing nations reduce poverty and improving quality of life. Ubiquitous and mobile technologies could greatly enhance education in such regions by providing augmented access to learning. This paper presents a three-year iterative study where a ubiquitous sensor-based learning platform was designed, developed and tested to support science learning among primary school students in underprivileged Northern Thailand. The platform is built upon the school’s existing mobile devices and was expanded to include sensor-based technology. Throughout the iterative design process, observations, interviews and group discussions were carried out with stakeholders. This led to key reflections and design concepts such as the value of injecting anthropomorphic qualities into the learning device and providing personally and culturally relevant learning experiences through technology. Overall, the results outlined in this paper help contribute to knowledge regarding the design, development and implementation of ubiquitous sensor-based technology to support learning.
  • Walid, N., Noor, N., Ibrahim, E. and Ang, C. (2017). Potential Motivational Factors of Technology Usage for Indigenous People in Peninsular Malaysia. In: 2016 4th International Conference on User Science and Engineering (i-USEr). IEEE, pp. 259-264. Available at: https://dx.doi.org/10.1109/IUSER.2016.7857971.
    The interrelationship between ethnicity, motivation and technology usage is an interesting scope of study, as it offers various dimensions to be explored, polished and refined. One of the areas of most interest is the motivational factors for technology usage among indigenous people from socio-structural, socioeconomic and sociocultural viewpoints. This paper presents the findings on the motivational factors of technology usage based on a socioeconomic and sociocultural situational study of the Orang Asli in Peninsular Malaysia. The study analysis was performed through the theoretical lens of the need-based theory of technology use to uncover constructs of needs that can be identified as motivational factors for technology use. From the study, it was found that the Orang Asli tend to hold on to the perceived idea of enjoyment, the preservation of culture and the delivery of benefits as motivations to use the technology. We propose affect-based needs and cultural needs as new constructs for technology use by indigenous people.
  • Litvak, M., Otterbacher, J., Ang, C. and Atkins, D. (2016). Social and Linguistic Behavior and its Correlation to Trait Empathy. In: Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media. Association for Computational Linguistics, pp. 128-137. Available at: https://peoples2016.github.io/accepted.html.
    A growing body of research exploits social media behaviors to gauge psychological character- istics, though trait empathy has received little attention. Because of its intimate link to the abil- ity to relate to others, our research aims to predict participants’ levels of empathy, given their textual and friending behaviors on Facebook. Using Poisson regression, we compared the vari- ance explained in Davis’ Interpersonal Reactivity Index (IRI) scores on four constructs (em- pathic concern, personal distress, fantasy, perspective taking), by two classes of variables: 1) post content and 2) linguistic style. Our study lays the groundwork for a greater understanding of empathy’s role in facilitating interactions on social media.
  • Putjorn, P., Ang, C. and Deravi, F. (2015). Learning IoT without the "I" - Educational Internet of Things in a Developing Context. In: Proceedings of the 2015 Workshop on Do-It-Yourself Networking: An Interdisciplinary Approach. New York, USA: ACM, pp. 11-13. Available at: http://doi.org/10.1145/2753488.2753489.
    To provide better education to children from different socio-economic backgrounds, the Thai Government launched the "One Tablet PC Per Child" (OTPC) policy and distributed 800,000 tablet computers to first grade students across the country in 2012. This initiative is an opportunity to study how mobile learning and Internet of Things (IoT) technology can be designed for students in underprivileged areas of northern Thailand. In this position paper, we present a prototype, called OBSY (Observation Learning System) which targets primary science education. OBSY consists of i) a sensor device, developed with the low-cost open source single-board computer Raspberry Pi, housed in a 3D printed case, ii) a mobile device friendly graphical interface displaying visualisations of the sensor data, iii) a self-contained DIY Wi-Fi network which allows the system to operate in an environment with inadequate ICT infrastructure.
  • Siriaraya, P. and Ang, C. (2014). Recreating living experiences from past memories through virtual worlds for people with dementia. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, USA: ACM, pp. 3977-3986. Available at: http://dx.doi.org/10.1145/2556288.2557035.
    This paper describes a study aimed to understand the use of 3D virtual world (VW) technology to support life engagement for people with dementia in long-term care. Three versions of VW prototypes (reminiscence room, virtual tour and gardening) utilising gesture-based interaction were developed iteratively. These prototypes were tested with older residents (80+) with dementia in care homes and their caregivers. Data collection was based on observations of how the residents and care staff interacted collaboratively with the VW. We discussed in depth the use of VWs in stimulating past memories and how this technology could help enhance their sense of self through various means. We also highlighted key approaches in designing VWs to sustain attention, create ludic experiences and facilitate interaction for older people with dementia.

Conference or workshop item

  • Tabbaa, L., Ang, C., Rose, V., Siriaraya, P., Stewart, I., Jenkins, K. and Matsangidou, M. (2019). Bring the Outside In: Providing Accessible Experiences Through VR for People with Dementia in Locked Psychiatric Hospitals. In: ACM CHI Conference on Human Factors in Computing Systems 2019. ACM. Available at: http://dx.doi.org/10.1145/3290605.3300466.
    Many people with dementia (PWD) residing in long-term care may face barriers in accessing experiences beyond their physical premises; this may be due to location, mobility constraints, legal mental health act restrictions, or offence-related restrictions. In recent years, there has been research interest in designing non-pharmacological interventions aiming to improve the Quality of Life (QoL) for PWD within long-term care. We explored the use of Virtual Reality (VR) as a tool to provide 360°-video-based experiences for individuals with moderate to severe dementia residing in a locked psychiatric hospital. We discuss in depth the appeal of using VR for PWD, and the observed impact of such interaction. We also present the design opportunities, pitfalls, and recommendations for future deployment in healthcare services. This paper demonstrates the potential of VR as a virtual alternative to experiences that may be difficult to reach for PWD residing within a locked setting.
  • Rose, V., Stewart, I., Jenkins, K., Ang, C. and Matsangidou, M. (2018). A Scoping Review Exploring the Feasibility of Virtual Reality Technology Use with Individuals Living with Dementia. In: ICAT-EGVE 2018 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments. The Eurographics Association. Available at: https://doi.org/10.2312/egve.20181325.
    The existing evidence base in relation to the feasibility of using Virtual Reality technology systems with individuals living with dementia appeared limited and was therefore explored. The literature was collected and reviewed in terms of the different types of Virtual Reality systems (equipment and levels of immersion), the feasibility of the technology within different stages of dementia, and the methodological limitations. A systematic search of the literature was conducted using the healthcare databases advanced search (Medline, PsychINFO, and EMBASE) and snowballing methods. The participants had a dementia diagnosis and the feasibility of Virtual Reality in terms of its acceptability and practicality was discussed. Only five articles met the eligibility criteria. Four included semi-immersive Virtual Reality with participants in the early stages of dementia. One included fully-immersive Virtual Reality where dementia stage ranged from ‘mild’ to ‘severe’. Based on available demographic information, study participants resided in residential care homes, alone in the community or with their spouse. The existing literature suggests that both semi and fully-immersive Virtual Reality technology use can be feasible amongst individuals living within the earlier stages of dementia outside of a hospital environment, with it being viewed as a welcomed distraction that increased alertness and pleasure. However, Virtual Reality was also found to increase fear and anxiety in one study, raising important ethical implications around the safety of the user. The current evidence base leaves a predominant gap in Virtual Reality technology system use for people within the moderate to later stages of dementia and those living in a hospital environment.
  • Mzurikwao, D., Ang, C., Samuel, O., Asogbon, M., Li, X. and Li, G. (2018). Efficient Channel Selection Approach for Motor Imaginary Classification based on Convolutional Neural Network. In: IEEE International Conference on Cyborg and Bionic Systems (CBS). IEEE, pp. 418-421. Available at: https://doi.org/10.1109/CBS.2018.8612157.
    For some people with disabilities, a Brain-Computer Interface (BCI) may be the only available means of communication and control. A person's intention can be decoded from their brainwaves during motor imagery, and this can be used to help them control their environment without making any physical movement. To decode intentions from brainwaves during motor imagery activities, machine learning models trained on features extracted from the acquired EEG signals have been used. Although the technique has been successful, it has encountered several limitations and difficulties, especially during feature extraction. Moreover, many current BCI systems rely on a large number of channels (e.g. 64) to capture the spatial information needed when training a machine learning model. In this study, a Convolutional Neural Network (CNN) is used to decode five motor imagery intentions from EEG signals obtained from four subjects using a 64-channel EEG device. A CNN model trained on raw EEG data achieved a mean classification accuracy of 99.7%. Channel selection was then performed using the learned weights extracted from the trained CNN model; subsequent models trained on only the two selected channels with the highest weights attained high accuracy (98% on average) for three of the four participants. (An illustrative channel-ranking sketch appears after this list.)
  • Matsangidou, M., Ang, C., Mauger, L., Otkhmezuri, B. and Tabbaa, L. (2017). How Real is Unreal? Virtual Reality and the Impact of Visual Imagery on the Experience of Exercise-Induced Pain. In: INTERACT 2017 Conference. Springer. Available at: https://doi.org/10.1007/978-3-319-68059-0_18.
    As a consequence of prolonged muscle contraction, acute pain arises during exercise due to a build-up of noxious biochemicals in and around the muscle. Specific visual cues, e.g., the size of the object in weight-lifting exercises, may reduce acute pain experienced during exercise. In this study, we examined how Virtual Reality (VR) can facilitate this “material-weight illusion”, influencing perception of task difficulty, which may reduce perceived pain. We found that when vision understated the real weight, the time to exhaustion was 2 minutes longer. Furthermore, participants’ heart rate was significantly lower, by 5-7 bpm, in the understated session. We concluded that visual-proprioceptive information modulated the individual’s willingness to continue to exercise for longer, primarily by reducing the intensity of negative perceptions of pain and effort associated with exercise. This result could inform the design of VR aimed at increasing the level of physical activity and thus promoting a healthier lifestyle.
  • Nicholls, B., Ang, C., Efstratiou, C., Lee, Y. and Yeo, W. (2017). Swallowing detection for game control: using skin-like electronics to support people with dysphagia. In: IEEE PerCom Workshop on Pervasive Health Technologies. IEEE. Available at: http://dx.doi.org/10.1109/PERCOMW.2017.7917598.
    In this paper, we explore the feasibility of developing a sensor-driven rehabilitation game for people suffering from dysphagia. This study utilizes skin-like electronics for unobtrusive, comfortable, continuous recording of surface electromyograms (EMG) during swallowing and uses them to drive game-based, user-controlled feedback. The experimental study includes the development and evaluation of a real-time swallow detection algorithm using skin-like sensors and a game-based human-computer interaction. The user evaluations support the ease of use of the skin-like electronics as a motivational tool for people with dysphagia. (An illustrative detection sketch appears after this list.)
  • Bai, L., Efstratiou, C. and Ang, C. (2016). weSport: Utilising Wrist-Band Sensing to Detect Player Activities in Basketball Games. In: WristSense 2016: Workshop on Sensing Systems and Applications Using Wrist Worn Smart Devices (co-Located With IEEE PerCom 2016). Available at: https://sites.google.com/site/wristsenseworkshop2016/.
    Wristbands have traditionally been designed to track the activities of a single person. However, there is an opportunity to utilize the sensing capabilities of wristbands to offer activity tracking services within the domain of team-based sports games. In this paper we demonstrate the design of an activity tracking system capable of detecting the players’ activities within a one-on-one basketball game. Relying on the inertial sensors of wristbands and smartphones, the system can capture the shooting attempts of each player and provide statistics about their performance. The system is based on a two-level classification architecture, combining data from both players in the game. We employ a technique for semi-automatic labelling of the ground truth that requires minimal manual input during a training game. Using a single game as a training dataset and applying the classifier to subsequent games, we demonstrate that the system achieves a good level of accuracy in detecting the shooting attempts of both players (precision 91.34%, recall 94.31%). (An illustrative sketch of such a two-level pipeline appears after this list.)
  • Chong, M., Whittle, J., Rashid, U. and Ang, C. (2015). Cue Now, Reflect Later: A Study of Delayed Reflection of Diary Events. In: 15th IFIP TC 13 International Conference on Human-Computer Interaction. Springer-Link, pp. 367-375. Available at: http://doi.org/10.1007/978-3-319-22698-9_24.
    Diary studies require participants to record entries at the moment of events, but the process often distracts the participants and disrupts the flow of the events. In this work, we explore the notion of delayed reflection for diary studies. Users quickly denote cues of diary events and only reflect on the cues later, when they are not busy. To minimize disruptions, we employed a squeeze gesture that is swift and discreet for denoting cues. We investigated the feasibility of delayed reflection and compared it against a conventional digital diary that requires users to reflect immediately at the time of entry. In a weeklong field study, we asked participants to record their daily experiences with both types of diaries. Our results show that users’ preference is context-dependent. Delayed reflection is favored in contexts where interruptions are deemed inappropriate (e.g. in meetings or lectures) or when the users are mobile (e.g. walking). In contrast, the users prefer immediate reflection when they are alone, such as during leisure and downtime.
  • Walid, N., Ibrahim, E., Ang, C. and Noor, N. (2015). Exploring Socioeconomic and Sociocultural Implications of ICT Use: An Ethnographic Study of Indigenous People in Malaysia. In: HCI International 2015. Springer, pp. 403-413. Available at: http://dx.doi.org/10.1007/978-3-319-20907-4_37.
    Studies in several countries have shown that ICT use by indigenous people is achievable and can deliver tangible benefits. For the development and advancement of the Orang Asli (one of the indigenous groups in Malaysia), and in support of the national aspirations in Vision 2020, ICT exposure for the Orang Asli requires holistic implementation. It is therefore imperative to understand their needs and requirements in terms of ICT acceptance, appropriation and barriers, as well as infrastructure and infostructure issues. In conclusion, we found four main aspects to be considered in research involving the Orang Asli's use of, and benefit from, ICT: (i) influential people, (ii) infrastructure barriers, (iii) social development issues, and (iv) motivational factors.
  • Chauhan, S., Bobrowicz, A. and Ang, C. (2015). Perception of Digital and Physical Sculpture by People with Dementia. In: Tenth International Conference on the Arts in Society.
    This research reviews the basic elements of sculpture in order to ascertain how people with dementia perceive them, examining their patterns of interaction and visual understanding, along with their tactile engagement.
  • Putjorn, P., Ang, C., Deravi, F. and Chaiwut, N. (2015). Exploring the Internet of "Educational Things"(IoET) in rural underprivileged areas. In: 2015 12th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). IEEE, pp. 1-5. Available at: http://dx.doi.org/10.1109/ECTICon.2015.7207125.
  • Morgado, L., Rodrigues, R., Coelho, A., Magano, O., Calcada, T., Trigueiros Cunha, P., Echave, C., Kordas, O., Sama, S., Oliver, J., Ang, C., Deravi, F., Bento, R. and Ramos, L. (2015). Cities in citizens’ hands. In: 6th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-Exclusion (DSAI 2015).
  • Henderson, M., Nicholls, B., Ang, C., Smithard, D. and Marcelli, G. (2015). A Digital System to Quantify Eating Behaviour. In: Centre for Behaviour Change (CBC) Conference 2015, Harnessing Digital Technology for Health Behaviour Change.
  • Nicholls, B., Henderson, M., Marcelli, G. and Ang, C. (2015). 3D visualisation of the human anatomy for biofeedback therapy in swallowing disorder. In: Centre for Behaviour Change (CBC) Conference 2015, Harnessing Digital Technology for Health Behaviour Change.
  • Chong, M., Rashid, U., Whittle, J. and Ang, C. (2014). SqueezeDiary: Using Squeeze Gesture as Triggers of Diary Events. In: MobileHCI 2014. pp. 427-429. Available at: http://dx.doi.org/10.1145/2628363.2633572.
    The diary method has been adopted for recording participants' behaviours. However, recording diary entries can be difficult or deemed inappropriate in certain situations, such as in a social group or in a meeting. In this demo we present SqueezeDiary, a tool that adopts squeeze gestures for users to denote triggers of diary events; the users then reflect on the triggers later, when they are not busy (e.g. during lunch). Our application enables delayed reflection, where users can revisit their recorded event instances retrospectively during their downtime.
  • Chong, M., Whittle, J., Rashid, U. and Ang, C. (2014). Squeeze the moment: denoting diary events by squeezing. In: ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2014). pp. 219-222. Available at: http://dx.doi.org/10.1145/2638728.2638734.
    In this demonstration, we showcase SqueezeDiary, a novel mobile diary application that uses squeeze gestures for denoting instances of events. SqueezeDiary consists of a mobile phone and a small squeeze sensor that communicate over a Bluetooth connection. To record an event instance, the user simply squeezes the sensor, and the phone records memory cues for review later. SqueezeDiary provides features for users to swiftly record instances as they continue to live through the experience, and only reflect on the instances during their downtime.
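
The channel-selection idea in the Mzurikwao et al. (2018) entry above can be illustrated with a short sketch: once a CNN has been trained on raw multi-channel EEG, each channel can be ranked by the energy of the first-layer weights attached to it, and a smaller model retrained on the top-ranked channels. The array shapes, channel names and ranking rule below are my assumptions for illustration, not the paper's exact method.

  # Rank EEG channels by the weight energy a trained CNN assigns to them.
  import numpy as np

  def rank_channels(first_layer_weights, channel_names):
      # first_layer_weights: array of shape (n_filters, n_channels, kernel_len)
      importance = np.sum(first_layer_weights ** 2, axis=(0, 2))
      order = np.argsort(importance)[::-1]
      return [(channel_names[i], float(importance[i])) for i in order]

  # Stand-in for the weights of an already-trained model (16 filters, 64 channels).
  weights = np.random.randn(16, 64, 25)
  names = ["EEG-%02d" % i for i in range(64)]

  # Keep the two highest-weight channels and retrain a reduced model on them.
  top_two = [name for name, _ in rank_channels(weights, names)[:2]]
  print("Channels selected for the reduced model:", top_two)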
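
Similarly, the swallow-detection entry by Nicholls et al. (2017) above lends itself to a simple illustration: rectify and smooth the surface EMG into an envelope, then flag bursts that exceed a multiple of the baseline level, with a refractory period to avoid re-triggering. The sampling rate, window, threshold and refractory period below are assumed for the sketch and may differ from the algorithm actually used in the paper.

  # Threshold-based swallow detection from a surface EMG trace.
  import numpy as np

  FS = 1000           # sampling rate in Hz (assumed)
  WINDOW = 0.2        # envelope smoothing window in seconds
  THRESHOLD = 3.0     # multiples of the baseline envelope level
  REFRACTORY = 1.0    # minimum gap between detected swallows, in seconds

  def detect_swallows(emg, baseline):
      # Rectify and smooth the signal into an amplitude envelope.
      n = int(FS * WINDOW)
      envelope = np.convolve(np.abs(emg), np.ones(n) / n, mode="same")
      above = envelope > THRESHOLD * baseline
      times, last = [], -np.inf
      for i in np.flatnonzero(above):
          t = i / FS
          if t - last >= REFRACTORY:   # ignore re-triggers within the refractory period
              times.append(t)
              last = t
      return times

  # Synthetic example: quiet baseline with two brief EMG bursts; each detected
  # swallow could then be mapped to an in-game event.
  signal = np.random.randn(10 * FS) * 0.1
  signal[2 * FS:2 * FS + 300] += 2.0
  signal[6 * FS:6 * FS + 300] += 2.0
  print(detect_swallows(signal, baseline=0.08))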
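
Finally, the two-level architecture described in the weSport entry above can be sketched as follows: a first-level classifier scores each player's wrist-IMU windows for shooting-like motion, and a second-level classifier combines both players' scores for the same window to decide whose attempt it was, if any. The synthetic features, random-forest classifiers and fusion rule are my assumptions for illustration, not the paper's exact pipeline.

  # Two-level classification sketch: per-player window scores, then fusion.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier

  rng = np.random.default_rng(0)

  # Level 1: per-player classifiers over windowed wrist-IMU features
  # (synthetic 12-dimensional feature vectors stand in for real data).
  X_a, X_b = rng.normal(size=(200, 12)), rng.normal(size=(200, 12))
  y_a, y_b = rng.integers(0, 2, 200), rng.integers(0, 2, 200)   # 1 = shooting-like motion
  clf_a = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_a, y_a)
  clf_b = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_b, y_b)

  # Level 2: fuse both players' per-window scores into one decision:
  # 0 = player A shot, 1 = player B shot, 2 = no shot in this window.
  scores = np.column_stack([clf_a.predict_proba(X_a)[:, 1],
                            clf_b.predict_proba(X_b)[:, 1]])
  who_shot = rng.integers(0, 3, 200)   # synthetic ground-truth labels
  fusion = RandomForestClassifier(n_estimators=50, random_state=0).fit(scores, who_shot)
  print(fusion.predict(scores[:5]))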

Thesis

  • Matsangidou, M. (2018). The Impact of Virtual Reality on the Experience of Exercise Pain.
    Exercise is essential for maintaining a healthy lifestyle, but intense or prolonged exercise can cause a degree of discomfort and pain. These negative exercise-based sensations have been considered as a limiter of exercise capacity and a potential barrier to physical activity. In recent years, computer technology has brought to light new opportunities for promoting physical activity. Virtual Reality (VR) is a representative example of this type of technology, since it allows users to experience a computer-simulated reality with visual, auditory, tactile and olfactory interactions, and distract them from perceiving nociceptive signals and pain.
    The present thesis aims to identify whether and how VR with or without psychological intervention strategies may affect the perception of Exercise Pain (EP). These questions are answered through a series of studies conducted on a large group of participants. As a first step, the effect VR might have on EP during a weight-lifting exercise in comparison to a non-VR weight-lifting exercise is investigated. Then, the effect that personal awareness and internal sensations might have on VR technology during weight-lifting EP is examined. Lastly, the effect of VR and different psychological intervention strategies on weight-lifting EP is considered through three studies.
    The findings of the present thesis extend our understanding of the physiological and psychological effects of VR, providing useful insights into the relationship of VR with the Heart Rate, the perception of task difficulty and the levels of pain and discomfort caused by an exhaustive muscle contraction. The main conclusion reached is that the use of VR during exercise can reduce physiological and psychological responses associated with negative sensations. This conclusion can be used as an informative input for the design of VR so that it can increase the level of physical activity and, by extension, promote a healthier lifestyle.
  • Green, M. (2017). Digitally Queer: The Use of Video-Mediated Communication Within the Gay and Lesbian Community.
    Computer-mediated communication has expanded the ways in which individuals can seek information and create content. Moreover, it allows for the forming of new connections between individuals that may otherwise be impossible. In the last decade, video-mediated communication has been adopted by the lesbian, gay, bisexual, and transgender (LGBT) community, as well as straight allies to share information and reach out to the wider community, particularly those who have been the victim of bullying. Despite this increase in video-mediated communication, most research in the area of gay men and lesbians has been focused on the construction of online identity and narratives of the coming-out journey. Therefore, it is necessary to investigate how video is utilised to disclose matters pertaining to lived experience to further understand this community, and identify how video could be used to better support this minority group.

    In the first stage of this research study, a qualitative analysis of online video was carried out to investigate how individuals engage with LGBT bullying content. The findings revealed that individuals openly disclose deeply personal and identifiable information to a global audience. Next, an empirical study was carried out with a sample of gay men and lesbians to allow for the close examination of verbal and visual content disclosed in offline video diaries. This was followed by an interview study to examine the practicalities of using wearable and handheld technologies to facilitate this disclosure. Content was found to vary between sexes and recording devices, with wearables facilitating a greater degree of discussion for certain topics. Moreover, the recording of point-of-view video diaries was found to be a useful tool for personal development.

    The findings from this thesis extend understanding of how gay men and lesbians engage in video-mediated communication. In addition, the findings reveal how wearable and handheld video recording can be used as a beneficial tool both for this group and the wider community.
  • Haji Matyassin, H. (2015). Enhancing Users’ Experience with Smart Mobile Technology.
    The aim of this thesis is to investigate mobile guides for use with smartphones. Mobile guides have been used successfully to provide information, personalisation and navigation for the user. The researcher also sought to ascertain how, and in what ways, mobile guides can enhance users' experience.

    This research involved designing and developing web based applications to run on smartphones. Four studies were conducted, two of which involved testing of the particular application. The applications tested were a museum mobile guide application and a university mobile guide mapping application. Initial testing examined the prototype work for the ‘Chronology of His Majesty Sultan Haji Hassanal Bolkiah’ application. The results were used to assess the potential of using similar mobile guides in Brunei Darussalam’s museums. The second study involved testing of the ‘Kent LiveMap’ application for use at the University of Kent. Students at the university tested this mapping application, which uses crowdsourcing of information to provide live data. The results were promising and indicate that users' experience was enhanced when using the application.

    Overall results from testing and using the two applications that were developed as part of this thesis show that mobile guides have the potential to be implemented in Brunei Darussalam’s museums and on campus at the University of Kent. However, modifications to both applications are required to fulfil their potential and take them beyond the prototype stage in order to be fully functioning and commercially viable.

Forthcoming

  • Matsangidou, M., Otkhmezuri, B., Ang, C., Avraamides, M., Riva, G., Gaggioli, A., Iosif, D. and Karekla, M. (2020). “Now I Can See Me: A Virtual Representation of Self-image”: Designing a Multi-User Virtual Reality Remote Psychotherapy for Body Shape and Size. Human-Computer Interaction.
    Recent years have seen growing research interest in designing computer-assisted health interventions aiming to improve mental health services. Digital technologies are becoming common methods for diagnosis, therapy, education, and training. With the advent of lower-cost VR head-mounted displays (HMDs) and high internet data transfer capacity, there is a new opportunity to apply immersive VR tools to augment existing interventions. This study is among the first to explore the use of a Multi-User Virtual Reality (MUVR) system as a therapeutic medium for participants at high risk of developing Eating Disorders. The goal of the study is to examine the opportunities VR could offer for interventions, capitalising on the success of past VR-based therapies, and to investigate the design opportunities, challenges, feasibility, and user acceptability of introducing MUVR to facilitate remote psychotherapy. The appeal of using VR for remote psychotherapy and its observed impact on both therapists and participants is discussed. In particular, this paper demonstrates the potential value of MUVR remote psychotherapy for people with body shape and weight concerns.