This paper presents a driver status recognition method based on data fusion that changes the autonomous driving mode in our co-pilot system. Our research offers two novelties: first, driver-status recognition that fuses a direct method, which uses the states of the driver's face and eyes, with an indirect method, which recognizes the driver's status from driving patterns derived from vehicle information; and second, the ability to transfer from the manual driving mode to an autonomous mode based on the fused information of the two methods. Four parameters are calculated in the fusion of the direct and indirect methods: the percentage of eye closure, the gaze direction, the steering wheel angle, and the vehicle speed. These parameters are combined to infer the driver's level of drowsiness and attention dispersion. The system was tested under both day and night driving conditions using different driving scenarios on a roadway. Our driver status recognition method used a smart device connected to our prototype autonomous vehicle.
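The paper does not publish its fusion rules or thresholds, so the following Python sketch is only an illustration of how the four parameters could be combined into a driver-status estimate; the function name, the threshold values, and the scoring scheme are all assumptions.

```python
# Hedged sketch: the paper does not disclose its fusion rules or thresholds.
# All threshold values and the weighting scheme below are illustrative assumptions.

def assess_driver_status(perclos, gaze_off_road_ratio, steering_std_deg, speed_kmh):
    """Combine direct (face/eye) and indirect (vehicle) cues into a status label."""
    drowsy_score = 0.0
    inattentive_score = 0.0

    # Direct cues from the driver-facing camera.
    if perclos > 0.3:                 # eyes closed >30% of the time window (assumed threshold)
        drowsy_score += 0.6
    if gaze_off_road_ratio > 0.4:     # gaze away from the road >40% of the window (assumed)
        inattentive_score += 0.6

    # Indirect cues from vehicle information; only meaningful at driving speed.
    if speed_kmh > 20.0:
        if steering_std_deg < 0.5:    # unusually little steering correction (assumed)
            drowsy_score += 0.4
        elif steering_std_deg > 8.0:  # erratic steering corrections (assumed)
            inattentive_score += 0.4

    if drowsy_score >= 0.6:
        return "drowsy"
    if inattentive_score >= 0.6:
        return "inattentive"
    return "normal"
```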
Many researchers have reported that a decline in driving concentration caused by drowsiness or inattentiveness is one of the primary sources of serious car accidents. One of the most well-known methods of measuring a driver's concentration is driver state monitoring, in which the driver is warned when he or she is falling asleep based on visual information from the face. Autonomous driving systems, on the other hand, have garnered attention in recent years as an alternative approach to reducing human-caused accidents, and they show the possibility of realizing a vehicle with no steering wheel or pedals. However, a lack of technical maturity, human acceptance problems, and the individual desire to drive highlight the need to keep human drivers in the loop. For these reasons, it is necessary to decide who is responsible for driving the vehicle and to adjust the vehicle control system accordingly; this responsibility is known as the driving control authority. In this paper, we present a system that suggests transitions among the driving control authority modes by sensing a decline in the human driver's performance caused by drowsiness or inattentiveness. More specifically, we identify the problems of the legacy driving control authority transition that relies only on vision-based driver state recognition. To address these shortcomings, we propose a new recommendation method that combines the vision-based driver state recognition results with the path suggestion of the autonomous system. Experimental results with simulated drowsy and inattentive drivers on an actual autonomous vehicle prototype show that our method achieves better transition accuracy with fewer false-positive errors than the legacy transition method that uses only vision-based driver state recognition.
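As a hedged illustration of the recommendation idea described above (the exact decision rule, confidence threshold, and interface names below are assumptions, not the paper's published method), a transition is suggested only when the vision-based state indicates impairment and the autonomous system can actually offer a path, which is the kind of conjunction that reduces false positives.

```python
# Illustrative sketch only: the decision rule, names, and confidence values are
# assumptions, not the paper's published method.

def recommend_transition(driver_state, state_confidence, autonomous_path_available):
    """Suggest a driving-mode transition from fused evidence.

    driver_state: "normal", "drowsy", or "inattentive" (from the vision-based recognizer)
    state_confidence: confidence of that classification in [0, 1]
    autonomous_path_available: True if the autonomous system currently offers a valid path
    """
    impaired = driver_state in ("drowsy", "inattentive") and state_confidence >= 0.7
    if impaired and autonomous_path_available:
        return "suggest_autonomous_mode"   # hand control to the autonomous system
    if impaired:
        return "warn_driver"               # no safe autonomous path; alert the driver instead
    return "keep_manual_mode"
```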
The Robot Operating System (ROS) is a widely used development and service platform for robot clusters and autonomous driving systems. ROS provides a data logging tool named ROSBAG to record messages passed between processes into permanent storage and to play them back. Despite its excellent support for distributed communication, ROS allows only a single logging node to gather the data streams. Such single-node logging is appropriate only for small systems and cannot cope with distributed systems that produce large volumes of data. In this paper, we present a distributed logging system for ROS-based systems. We wrapped the ROSBAG tools with a Python-based parallel SSH tool, pssh, to send the commands that start and stop logging on each node. We also support a synchronized replay method to play back the data streams stored separately on several storage devices. Our mechanism evenly distributes the storage and network bandwidth consumed while collecting and storing data. It also disperses the logged data across the storage devices of several computers, extending the available logging duration previously restricted by the capacity of a single device.
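A minimal sketch of the wrapping idea follows, assuming a hosts file that lists the logging machines and that pssh and rosbag are installed on each of them; the topic names, file paths, and helper names are placeholders rather than the paper's configuration.

```python
# Minimal sketch of starting and stopping rosbag recording on several machines with pssh.
# The hosts file, bag path prefix, and topic selection are illustrative assumptions.
import subprocess

HOSTS_FILE = "logging_hosts.txt"   # one hostname per line, reachable over SSH

def start_distributed_logging(topics, bag_prefix="/data/log"):
    # Launch "rosbag record" in the background on every host; each host stores
    # its own bag file locally, spreading disk and network load across machines.
    cmd = f"nohup rosbag record -o {bag_prefix} {' '.join(topics)} > /dev/null 2>&1 &"
    subprocess.run(["pssh", "-h", HOSTS_FILE, "-i", cmd], check=True)

def stop_distributed_logging():
    # Send SIGINT so rosbag closes its bag files cleanly on every host.
    subprocess.run(["pssh", "-h", HOSTS_FILE, "-i", "pkill -INT -f 'rosbag record'"],
                   check=True)

if __name__ == "__main__":
    start_distributed_logging(["/camera/image_raw", "/velodyne_points"])
    # ... collect data during the drive, then call stop_distributed_logging() ...
```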
Object detection is a technology that deals with recognizing classes of objects and their locations. It is used in many different areas, such as face-detecting digital cameras, surveillance tools, and self-driving cars. These days, deep learning-based object detection approaches achieve significantly better performance than classic feature-based algorithms. Darknet [1] is a deep learning-based object detection framework that is well known for its fast speed and simple structure. Unfortunately, like many other frameworks, Darknet supports only NVIDIA CUDA [2] for accelerating its calculations, so a user has limited options for graphics card selection. OpenCL (Open Computing Language) [3] is an open standard for cross-platform, parallel programming of heterogeneous systems. It is available not only for CPUs and GPUs (graphics processing units), but also for DSPs (digital signal processors), FPGAs (field-programmable gate arrays), and other hardware accelerators. In this paper, we present OpenCL-Darknet, which ports the CUDA-based Darknet to an open-standard OpenCL backend. Our goal was to implement a deep learning-based object detection framework that is available for general accelerator hardware and achieves competitive performance compared with the original CUDA version. We evaluated OpenCL-Darknet on an AMD APU (accelerated processing unit) with integrated R7 graphics and OpenCL 2.0 and on an AMD Radeon RX560 with OpenCL 1.2, using the VOC 2007 dataset [4]. We also compared its performance with that of the original Darknet on an NVIDIA GTX 1050 with CUDA 8.0 and cuDNN 6.0.
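The following pyopencl snippet is not taken from OpenCL-Darknet (which is implemented in C); it is only a minimal sketch of the vendor-neutral kernel dispatch that an OpenCL backend relies on, shown here with a leaky-ReLU activation of the kind Darknet's networks use.

```python
# Illustrative sketch only: a minimal pyopencl example of device-agnostic kernel
# dispatch, runnable on any OpenCL-capable GPU, APU, or CPU.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void leaky_relu(__global float *x, const int n) {
    int i = get_global_id(0);
    if (i < n) x[i] = x[i] > 0.0f ? x[i] : 0.1f * x[i];
}
"""

ctx = cl.create_some_context()          # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL_SRC).build()

data = np.random.randn(1024).astype(np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=data)

# One work-item per element; the same host code works regardless of the vendor.
prog.leaky_relu(queue, (data.size,), None, buf, np.int32(data.size))
out = np.empty_like(data)
cl.enqueue_copy(queue, out, buf)
```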
This paper presents the implementation of the motion control block for a driving computing system. The driving computing system provides recognition, decision-making, and control functions based on an integrated hardware platform for autonomous driving. Its purpose is autonomous driving in urban environments, which requires a variety of functions. The motion control block controls the behavior of the autonomous vehicle so that it follows the local path and target speed provided by the motion planner in the driving computing system [1]. This block consists of a driving mode decision module, a lateral controller, a longitudinal controller, and a driver intention decision module. For autonomous driving in urban environments, the lateral and longitudinal controllers of the motion control block are implemented, and the driving mode can be switched by the driver's operation. To verify the algorithms of the motion control block, experiments were conducted at ETRI (Electronics and Telecommunications Research Institute) and the results were confirmed.
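The paper does not specify which control laws the lateral and longitudinal controllers use, so the sketch below pairs a pure-pursuit-style lateral controller with a PI longitudinal controller purely to illustrate how a motion control block can track a local path and a target speed; the wheelbase and gains are assumed values.

```python
# Illustrative sketch: common lateral/longitudinal control laws, not the paper's design.
import math

WHEELBASE = 2.7   # vehicle wheelbase in meters (assumed)

def lateral_control(lookahead_x, lookahead_y):
    """Pure-pursuit steering toward a lookahead point on the local path
    (vehicle frame, x forward). Returns a steering angle in radians."""
    ld = math.hypot(lookahead_x, lookahead_y)
    if ld < 1e-3:
        return 0.0
    alpha = math.atan2(lookahead_y, lookahead_x)               # heading error to the point
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), ld)   # pure-pursuit law

class LongitudinalControl:
    """PI controller producing an acceleration command to track the target speed."""
    def __init__(self, kp=0.8, ki=0.1):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, target_speed, current_speed, dt):
        error = target_speed - current_speed
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral
```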