Globally, the World Health Organisation estimates that about 1 billion people live with disabilities, and the UK alone has about 10 million people with neurological conditions. In extreme cases, individuals with conditions such as Motor Neuron Disease (MND), Cerebral Palsy (CP) and Multiple Sclerosis (MS) may only be able to perform limited head movements, move their eyes or make facial gestures. The aim of this research is to investigate low-cost and reliable assistive devices using automatic gesture recognition systems that will enable the most severely disabled users to access electronic assistive technologies and communication devices, thus enabling them to communicate with friends and relatives.
The research presented in this thesis is concerned with the detection of head movements, eye movements and facial gestures through the analysis of video and depth images. The proposed system, using web cameras or an RGB-D sensor coupled with computer vision and pattern recognition techniques, must be able to detect the movement of the user and calibrate it to facilitate communication. The system also lets the user choose both the sensor to be used, i.e. the web camera or the RGB-D sensor, and the interaction or switching mechanism, i.e. eye blink or eyebrow movement. This ability of the system to let users select components according to their needs would make life easier for them, as they would not have to learn a new system as their condition changes.
This research aims in particular to explore the use of depth data for head-movement-based assistive devices and the usability of different gesture modalities as switching mechanisms. The proposed framework consists of a facial feature detection module, a head tracking module and a gesture recognition module. Techniques such as Haar cascades and skin detection were used to detect facial features such as the face, eyes and nose. The depth data from the RGB-D sensor was used to segment the area nearest to the sensor. Both the head tracking module and the gesture recognition module rely on the facial feature detection module, which provides data such as the locations of the facial features. The head tracking module uses these data to calculate the centroid of the face, the distance to the sensor and the locations of the eyes and nose in order to detect head motion and translate it into pointer movement. The gesture recognition module uses features such as the locations of the eyes, the location and size of the pupils and the interocular distance to detect a blink or eyebrow movement and perform a click action. The research resulted in the creation of four assistive devices based on the combinations of the sensors (web camera and RGB-D sensor) and facial gestures (blink and eyebrow movement): Webcam-Blink, Webcam-Eyebrows, Kinect-Blink and Kinect-Eyebrows. A further outcome of this research is an evaluation framework based on Fitts' law with a modified multi-directional task that includes a central location, together with a dataset consisting of both colour images and depth data of people moving their heads in different directions and performing gestures such as eye blinks, eyebrow movements and mouth movements.
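As a minimal sketch of the geometric quantities described above (assuming the facial feature module returns eye centres and facial landmark points as 2D pixel coordinates; the function names are illustrative, not taken from the implementation), the interocular distance and face centroid could be computed as follows:

```python
import math

def interocular_distance(left_eye, right_eye):
    # Euclidean distance between the two detected eye centres (pixels);
    # useful for normalising gesture features against head-to-sensor distance.
    return math.dist(left_eye, right_eye)

def face_centroid(points):
    # Centroid of the detected facial feature points; head tracking can map
    # its frame-to-frame displacement to pointer movement.
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```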
The devices have been tested with healthy participants. From the observed data, it was found that both Kinect-based devices have a lower Movement Time and a higher Index of Performance and Effective Throughput than the web camera-based devices, showing that the introduction of depth data has had a positive impact on the head tracking algorithm. The usability assessment survey suggests that there is a significant difference in the eye fatigue experienced by the participants: the blink gesture was less tiring to the eyes than the eyebrow movement gesture. The analysis of the gestures also showed that the Index of Difficulty has a large effect on the error rate of gesture detection, with smaller Indices of Difficulty producing higher error rates.
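For illustration, the Fitts' law quantities reported above could be computed along these lines. This is a hedged sketch following the common effective-width formulation of throughput (ISO 9241-9 style); the amplitude, endpoint deviations and movement times below are hypothetical inputs, not values from the study:

```python
import math
import statistics

def effective_width(endpoint_devs):
    # Effective target width We = 4.133 x SD of the endpoint deviations
    # along the task axis (standard effective-width adjustment).
    return 4.133 * statistics.stdev(endpoint_devs)

def effective_throughput(amplitude, endpoint_devs, movement_times):
    # Effective Index of Difficulty (Shannon formulation, in bits)
    # divided by the mean movement time (in seconds) -> bits per second.
    ide = math.log2(amplitude / effective_width(endpoint_devs) + 1)
    return ide / statistics.mean(movement_times)
```

A larger throughput indicates that the device lets users hit targets of a given difficulty faster, which is why it is the headline comparison metric between the devices.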
Patient monitoring has advanced over the years, from bedside monitors in the hospital to wearable devices that can monitor patients and communicate their data remotely to medical servers over wireless networks. It is a process that involves monitoring a patient's major vital signs to check whether their health is normal or deteriorating over a period of time. In a remote setting, vital signs information can help health care providers send help quickly to patients whose health is at immediate risk. The problem with this kind of remote monitoring system is that, most of the time, patients must be within a specified location to either monitor their health or receive emergency help. This paper presents a potential solution in the form of a global vital sign monitoring system, demonstrated through two components: a wearable wireless monitoring device that records the temperature and pulse rate of the patient wearing it, and a web application that allows the patient and the emergency response unit to interact over a cellular network.
The Internet of Things (IoT) aims at transforming everyday objects into smart or virtual objects, giving us control of these objects and keeping us informed of their condition. This idea is gaining traction thanks to the large number of devices connected to the web, ranging from cell phones to appliances. However, the development of the IoT gives rise to various security, privacy and ethical issues due to the complexity of the systems being implemented and the heterogeneity of these networks. It is furthermore important to stress that the greater part of connected objects are often left unattended or are not properly secured. In light of these issues, this study examines Internet of Things concepts through a deliberate review of academic research papers, corporate white papers and online databases. The primary goal of this study is to address the privacy and security challenges of the Internet of Things within some of its key application areas, namely Smart Homes, Smart Cities, Wearables, Smart Retail, Connected Cars and Health Care. Moreover, this paper provides a threat model for each of these areas together with extensive countermeasures, highlighting the benefits of security and privacy by design.
Building services have reached a stage where they are being closely integrated with ICT infrastructure. The aim of this research is to develop an innovative and affordable platform for building services interfaced with augmented reality. This paper provides an innovative way of interacting with building services lighting systems through augmented reality. The system allows users to show their availability status in real time using an intelligent lighting system projected into Augmented Reality (AR), which changes color according to their availability. Furthermore, the availability status can also be manipulated through an interactive interface, creating a smart space.
Air pollution is one of the great challenges facing modern cities. According to the World Health Organization (WHO), 80% of people living in cities with air quality monitoring facilities are exposed to air quality well beyond the limits set out in its air quality guidelines. With more and more people projected to move into urban areas by 2050, this problem is set to worsen. A possible solution could be the advent of Smart Cities, one of whose objectives is to provide a better living environment for their inhabitants. With the Internet of Things providing easily deployable, low-power, low-cost air quality monitoring sensors and the resources to process the huge amount of data collected, this objective could be reached. In this paper, we propose an evaluation of the power consumption of two low-cost air quality monitoring systems, one based on an Arduino and the other on a Raspberry Pi. The proposed systems are built from off-the-shelf hardware and are easy to assemble and maintain. They use Bluetooth Low Energy (BLE) to transmit data, which is collected through a mobile app on a smartphone. The data was collected over five days, and an ANOVA on the power consumption showed a significant difference in the mean energy consumption of the two systems.
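The comparison step can be sketched as follows: a minimal two-group one-way ANOVA on hypothetical daily energy readings (for two groups the F statistic equals the square of the two-sample t statistic). The readings shown are illustrative, not data from the study:

```python
import statistics

def one_way_anova_f(group_a, group_b):
    # F statistic for a one-way ANOVA with two groups, e.g. daily
    # energy consumption of the Arduino vs the Raspberry Pi system.
    grand = statistics.mean(group_a + group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    # Between-group sum of squares: group sizes weighted by the squared
    # distance of each group mean from the grand mean.
    ss_between = len(group_a) * (mean_a - grand) ** 2 \
               + len(group_b) * (mean_b - grand) ** 2
    # Within-group sum of squares: spread of readings around their own mean.
    ss_within = sum((x - mean_a) ** 2 for x in group_a) \
              + sum((x - mean_b) ** 2 for x in group_b)
    df_between, df_within = 1, len(group_a) + len(group_b) - 2
    return (ss_between / df_between) / (ss_within / df_within)
```

The resulting F value is then compared against the critical value for (1, n-2) degrees of freedom to decide whether the mean consumptions differ significantly.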
This paper presents the architecture for a novel RGB-D based assistive device that incorporates depth as well as RGB data to enhance head tracking and facial gesture based control for severely disabled users. Using depth information, it is possible to remove background clutter and therefore achieve more accurate and robust performance. The system is compared with the CameraMouse, SmartNav and our previous 2D head tracking system. For the RGB-D system, the effective throughput of dwell clicking increased from 0.21 to 0.30 bits per second and that of blink clicking nearly doubled, from 0.15 to 0.28 bits per second, compared to the 2D system.
The aim of this project is to develop a low-cost air quality monitoring system. The proposed system uses a Raspberry Pi board, an Arduino board, Grove sensors and Microsoft's Azure-based cloud service for data storage and analysis. The data was captured from the 10th of May to the 31st of August 2017 in Bonne Terre, Vacoas. It was found that during the data capture period there was one occurrence in May and two occurrences in August where the PM2.5 and PM10 concentrations exceeded the 25 µg/m³ and 50 µg/m³ limits, respectively, set out in the WHO guidelines.
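The exceedance check described above can be sketched as follows. The readings in the usage example are hypothetical; only the 25 µg/m³ PM2.5 and 50 µg/m³ PM10 limits are the WHO 24-hour guideline values cited in the text:

```python
# WHO 24-hour guideline limits in ug/m3, as cited above.
WHO_LIMITS = {"pm2_5": 25.0, "pm10": 50.0}

def exceedances(daily_means, pollutant):
    # Return the dates (sorted) whose 24-hour mean exceeds the WHO
    # guideline for the given pollutant ("pm2_5" or "pm10").
    limit = WHO_LIMITS[pollutant]
    return [date for date, value in sorted(daily_means.items()) if value > limit]

# Hypothetical daily means for illustration:
readings = {"2017-05-12": 30.0, "2017-05-13": 20.0, "2017-08-03": 27.5}
print(exceedances(readings, "pm2_5"))  # ['2017-05-12', '2017-08-03']
```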
This paper discusses the challenges associated with Cyber-Physical Systems (CPSs), which encompass a broad collection of components integrating cyberspace and mechanical elements. For a better understanding of these challenges, the paper provides a detailed foundation of CPSs, including definitions of their various elements. Subsequently, we provide a qualitative content analysis of current research in the field, highlighting issues related to technical development and economic policies and their effect on the design phase of a CPS. Finally, we provide several recommendations to improve the design and implementation of CPSs.
The aim of this research is to develop an innovative, low-cost and affordable platform for smart home control and energy monitoring interfaced with augmented reality. This approach will educate people about energy use at a time when fuel costs are rising and create novel methods of interaction for those with disabilities. In order to increase awareness of energy consumption, we have developed an interactive system using Augmented Reality to show the live energy usage of electrical components. The system allows users to view their real-time energy consumption and, at the same time, offers the possibility to interact with the device in Augmented Reality. The energy usage is captured and stored in a database which can be accessed for energy monitoring. We believe that the combination of complex smart home applications and a transparent, interactive user interface will increase awareness of energy consumption.