Autonomous Vehicle Traffic Accident Prevention using Mobile-Integrated Deep Reinforcement Learning Technique
1 Citation · 13 References · 10 Related Papers
Abstract:
In autonomous traffic management, decision-making reinforcement learning methods are widely used for vehicle control. Demanding circumstances, however, can aggravate collisions and, as a consequence, produce chain collisions. To offer guidance on eliminating or reducing the risk of chain collisions, we first examine the main types of chain collisions and how chain events typically unfold. This study then proposes mobile-integrated deep reinforcement learning (DRL) for autonomous vehicles to avoid collisions in an emergency. The proposed strategy fully accounts for three essential factors: accuracy, efficiency, and passenger comfort. Following this, we investigate the safety performance of current security-driving solutions by formulating the chain collision avoidance problem as a Markov Decision Process (MDP) and offering a decision-making strategy based on mobile-integrated reinforcement learning. The findings of this analysis aim to help academics and policymakers appreciate the benefits of a more reliable autonomous traffic infrastructure and to smooth the way for the actual adoption of a driverless traffic scenario.
Keywords:
Accident (philosophy)
Vehicle accident
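The abstract formulates chain-collision avoidance as a Markov Decision Process balancing accuracy, efficiency, and passenger comfort. As a hedged illustration of that formulation (not the paper's actual model), the sketch below trains a tabular Q-learning agent on a toy one-lane car-following MDP whose reward combines those three factors: a large penalty for a zero gap (collision), a mild penalty for drifting from a moderate gap (efficiency), and a small braking penalty (comfort). All state bins, actions, and reward weights are illustrative assumptions.

```python
import random

random.seed(0)

# Toy car-following MDP: states are discretized gaps to the lead vehicle
# (0 = collision), actions change the closing speed.
GAPS = 6                      # gap bins: 0 (collision) .. 5 (far)
ACTIONS = (-1, 0, 1)          # brake, hold, accelerate

def step(gap, action):
    """One transition; the lead vehicle's behavior appears as random noise."""
    noise = random.choice((-1, 0, 0, 1))
    new_gap = max(0, min(GAPS - 1, gap - action + noise))
    reward = -10.0 if new_gap == 0 else -abs(new_gap - 3) * 0.5
    reward -= 0.2 if action == -1 else 0.0   # comfort: mild braking penalty
    return new_gap, reward

def train(episodes=3000, alpha=0.1, gamma=0.9, eps=0.1):
    q = {(g, a): 0.0 for g in range(GAPS) for a in ACTIONS}
    for _ in range(episodes):
        gap = random.randrange(1, GAPS)
        for _ in range(30):
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(gap, x)]))
            nxt, r = step(gap, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(gap, a)] += alpha * (r + gamma * best_next - q[(gap, a)])
            gap = nxt
    return q

q = train()
# At the smallest non-zero gap, the learned policy should brake.
policy_at_1 = max(ACTIONS, key=lambda a: q[(1, a)])
```

Even in this toy setting, the learned policy brakes when the gap is dangerously small and holds a moderate gap otherwise, which is the qualitative behavior the abstract's three criteria describe.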
As litigation has mushroomed in the 1970s and '80s, more and more varied types of people have proclaimed their expertise to practice motor-vehicle accident reconstruction. A vast number of those who have claimed to be experts have nothing more than a high-school education and a short course in accident reconstruction. Unfortunately, the courts, more often than not, have qualified these people as experts. Another large group of practitioners are college educated, but come to accident reconstruction by way of education and experience in non-related fields such as chemistry, nuclear physics, aeronautical engineering, air-conditioning design, plastics manufacture, and other distant disciplines. These people usually know the basic physics associated with accident reconstruction, but often do not appreciate or understand the idiosyncrasies of motor-vehicle collisions. But they, too, are usually qualified as experts by the courts.
Reinforcement Learning continues to show promise in solving problems in new ways. Recent publications have demonstrated how a reinforcement learning approach can lead to a superior policy for optimization. While previous works have demonstrated the ability to train without gradients, most recent work has focused on simpler regression problems. This work shows how a Multi-Agent Reinforcement Learning approach can be used to optimize models during training without the gradient of the loss function, and how this approach can benefit defense applications.
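The abstract above does not specify its multi-agent scheme, but the core idea, optimizing model parameters from loss evaluations alone, with no gradient, can be sketched with a simple accept-if-better random search (a single-agent stand-in, not the paper's method). The target function, step size, and iteration count below are illustrative assumptions.

```python
import random

random.seed(1)

# Treat the model's weights as the "action" and the negative loss as the
# reward: propose a random perturbation, keep it only if the loss drops.
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]   # target: y = 2x + 1

def loss(w, b):
    """Mean squared error of the linear model w*x + b on the data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

w, b = 0.0, 0.0
best = loss(w, b)
for _ in range(5000):
    dw, db = random.gauss(0, 0.1), random.gauss(0, 0.1)
    cand = loss(w + dw, b + db)
    if cand < best:                 # reward signal: improvement only
        w, b, best = w + dw, b + db, cand
```

The optimizer never touches a derivative, yet it recovers parameters close to the true slope and intercept; this is the property that makes such methods attractive when the loss is non-differentiable or only observable as a black box.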
The Victorian road traffic authority has recently commenced detailed investigation of the behaviour of the motor vehicle in accidents. There is insufficient information on the behaviour of the vehicle in accident causation and, more particularly, in injury causation as a result of road accidents. This information can only be obtained from the investigation of actual accidents. The investigations make no attempt to attribute blame for the accidents. Particular case studies covering a range of vehicle types are examined. It is shown that reductions in accident severity and injury-causing potential are possible at very little cost by considering how the vehicle or its components may behave in an accident.
Learning a high-performance trade execution model via reinforcement learning (RL) requires interaction with the real dynamic market. However, the massive interactions required by direct RL would result in a significant training overhead. In this paper, we propose a cost-efficient reinforcement learning (RL) approach called Deep Dyna-Double Q-learning (D3Q), which integrates deep reinforcement learning and planning to reduce the training overhead while improving the trading performance. Specifically, D3Q includes a learnable market environment model, which approximates the market impact using real market experience, to enhance policy learning via the learned environment. Meanwhile, we propose a novel state-balanced exploration scheme to solve the exploration bias caused by the non-increasing residual inventory during the trade execution to accelerate model learning. As demonstrated by our extensive experiments, the proposed D3Q framework significantly increases sample efficiency and outperforms state-of-the-art methods on average trading cost as well.
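D3Q's central idea, augmenting scarce real experience with transitions sampled from a learned environment model, descends from tabular Dyna-Q. The sketch below illustrates that planning loop on a toy deterministic chain MDP; it is a stand-in for the idea only, not the paper's market model, exploration scheme, or trading task.

```python
import random

random.seed(2)

# Tiny chain MDP: states 0..5, reward 1.0 only at the goal state 5.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, 1)                 # move left, move right

def env_step(s, a):
    nxt = max(0, min(N_STATES - 1, s + a))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                        # learned model: (s, a) -> (next_state, reward)

def update(s, a, r, nxt, alpha=0.1, gamma=0.95):
    q[(s, a)] += alpha * (r + gamma * max(q[(nxt, x)] for x in ACTIONS)
                          - q[(s, a)])

for _ in range(200):              # real episodes
    s, done = 0, False
    while not done:
        a = (random.choice(ACTIONS) if random.random() < 0.5
             else max(ACTIONS, key=lambda x: q[(s, x)]))
        nxt, r, done = env_step(s, a)
        update(s, a, r, nxt)      # direct RL from the real transition
        model[(s, a)] = (nxt, r)  # learn the (here deterministic) model
        for _ in range(10):       # planning: replay simulated experience
            ps, pa = random.choice(list(model))
            pn, pr = model[(ps, pa)]
            update(ps, pa, pr, pn)
        s = nxt
```

Each real step funds ten simulated updates, which is exactly how a Dyna-style agent trades cheap model rollouts for expensive environment interaction, the sample-efficiency argument the abstract makes for D3Q.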
Vehicle accidents have become one of the major issues that cause the death of many people around the globe. Presently, India ranks among the highest in deaths due to road accidents. This is a serious matter that needs to be solved to save the lives of the many people injured in accidents. To address this problem, many automobile companies have developed systems such as safety airbags, seat belts, and camera sensors, but the effects of accidents still cannot be reduced sufficiently. One of the major solutions is to provide proper medical treatment to the victim on time. According to statistics, whenever any kind of accident happens, witnesses hesitate to help the victim due to the long procedure of reporting and inquiry with the police, and the victim is usually in no condition to ask others for help. In such a situation, the victim's life is in danger due to the lack of proper treatment and medical facilities in time. Solving this problem requires a system that automatically detects the accident and, based on that information, communicates the accident and its location to the hospital and relatives without delay. In this work, an Arduino-based Automatic Accident Detection and Location Communication System (AAADLCS) is developed that continuously tracks the location of the vehicle. When any kind of accident occurs, it automatically detects the accident and quickly sends the location to the hospital, relatives, and the police. The key benefits of this system are its low cost, ease of implementation, ease of use, processing speed, high accuracy, and self-reliance.
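The detect-then-notify loop described above can be sketched in a few lines. The AAADLCS itself runs on an Arduino with real sensors; the Python stand-in below only illustrates the control flow, and the impact threshold, sensor readings, and alert format are all illustrative assumptions, not the system's firmware.

```python
# Assumed impact threshold in g; a real system would calibrate this
# against accelerometer data from actual crash signatures.
CRASH_G_THRESHOLD = 4.0

def detect_crash(accel_g):
    """Flag a crash when the measured acceleration magnitude spikes."""
    return accel_g >= CRASH_G_THRESHOLD

def build_alert(lat, lon):
    """SMS-style payload for the hospital, relatives, and police."""
    return f"ACCIDENT DETECTED at https://maps.google.com/?q={lat},{lon}"

def monitor(samples, location):
    """Continuously scan readings; alert on the first crash signature."""
    for g in samples:
        if detect_crash(g):
            return build_alert(*location)
    return None    # normal driving: no alert sent

# Normal driving followed by a 6.2 g spike triggers an alert.
alert = monitor([0.9, 1.1, 6.2], (28.6139, 77.2090))
```

The point of the design is that the alert path needs no action from the victim or witnesses: detection, location lookup, and notification are all automatic.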
Based on information processing theory, the driver's reception and processing of information at the moment of an accident were analyzed. The driver's behavior triggered by the accident was discussed to support a scientific treatment of accidents.
Highway accident reconstructionists frequently may be able to reconstruct what happened during an accident event. However, the ultimate question of interest to the court may be why the accident happened. Highway accidents usually have more than one contributing factor. This paper briefly discusses human factors and traffic control measures as possible causative factors that may explain why the accident occurred.
The Unmanned Aerial Vehicle (UAV) is increasingly becoming an important tool for a variety of tasks, and Reinforcement Learning (RL) is a popular research topic. In this paper, the two fields are combined: we apply reinforcement learning to UAVs to promote its application in real life. We design a reinforcement learning framework named ROS-RL, based on the physical simulation platform Gazebo, which addresses the problem of UAV motion in continuous action space. Algorithms can be connected to this framework through ROS to train an agent that controls the drone to complete tasks. We realize the autonomous landing task of a UAV using three different reinforcement learning algorithms in this framework. The experimental results show the effectiveness of the algorithms in controlling a UAV flying in a simulation environment close to the real world.
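The landing task above runs in Gazebo through ROS; as a hedged, dependency-free illustration of the same idea, the sketch below trains a tabular Q-learning agent to land a drone in a toy one-dimensional world. The altitude bins, descent rates, and "soft touchdown" reward shaping are all illustrative assumptions, not the ROS-RL setup.

```python
import random

random.seed(3)

# Toy landing MDP: the drone sits at a discrete altitude and picks a
# descent rate each step; touching down exactly at altitude 0 is a soft
# landing (+1), overshooting is a hard landing (-1).
ALTS = 6                          # altitude bins 0 (ground) .. 5
ACTIONS = (1, 2)                  # descend slowly / quickly (bins per step)

def land_step(alt, a):
    nxt = max(0, alt - a)
    if nxt == 0:
        return 0, (1.0 if alt - a == 0 else -1.0), True
    return nxt, -0.05, False      # small fuel cost while still aloft

q = {(s, a): 0.0 for s in range(1, ALTS) for a in ACTIONS}
for _ in range(2000):
    alt, done = random.randrange(1, ALTS), False
    while not done:
        a = (random.choice(ACTIONS) if random.random() < 0.2
             else max(ACTIONS, key=lambda x: q[(alt, x)]))
        nxt, r, done = land_step(alt, a)
        target = r if done else r + 0.95 * max(q[(nxt, x)] for x in ACTIONS)
        q[(alt, a)] += 0.1 * (target - q[(alt, a)])
        alt = nxt
```

After training, the agent descends quickly while high but switches to the slow rate for the final bin, the discrete analogue of the smooth flare a real landing controller must learn.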