    A Novel Vehicle Destination Prediction Model With Expandable Features Using Attention Mechanism and Variational Autoencoder
    Abstract:
    Vehicles' daily trips generate a huge amount of location-aware social data, which provides a rich source for analyzing vehicle travel behavior. Accurately predicting the future destination of a vehicle trip has great economic value and social impact. High sparsity, few features, and erroneous records in real-world datasets made it difficult for previous models to converge. We therefore propose a Novel Vehicle Destination Prediction Model with Expandable Features Using Attention Mechanism and Variational Autoencoder (EFAMVA). By combining an autoencoder model with an attention mechanism, EFAMVA overcomes the problems above: the variational autoencoder extracts hidden features that fit the characteristics of the structured vehicle driving data, and the attention mechanism learns an appropriate combination of weight parameters. Comprehensive experiments against other comparison models show that EFAMVA achieved the best scores, with an MSE of 0.750, an RMSE of 1.215, and an MAE of 0.955. These results indicate that EFAMVA better predicts a vehicle's future destination.
    Keywords:
    Autoencoder
    Value (mathematics)
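The paper itself does not publish code, but the pipeline it describes (a variational encoder producing latent trip features, an attention mechanism weighting historical states, and a readout predicting destination coordinates) can be sketched in plain numpy. Everything below is an illustrative assumption: the dimensions, weight matrices, and wiring are invented for the sketch and are not the authors' EFAMVA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_encode(x, W_mu, W_logvar):
    """Map a trip feature vector to a sample from a latent Gaussian."""
    mu = x @ W_mu
    logvar = x @ W_logvar
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def attention_pool(H, q):
    """Softmax attention: weight each row of H by its similarity to query q."""
    scores = H @ q                        # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ H                    # (d,) weighted combination

# Toy dimensions (assumed): 8 driving features, 4 latent dims, 5 past trips.
d_in, d_z, T = 8, 4, 5
W_mu = rng.standard_normal((d_in, d_z)) * 0.1
W_logvar = rng.standard_normal((d_in, d_z)) * 0.1
W_out = rng.standard_normal((2 * d_z, 2)) * 0.1   # predict (lat, lon)

x = rng.standard_normal(d_in)             # structured driving features
H = rng.standard_normal((T, d_z))         # hidden states of past trips

z = vae_encode(x, W_mu, W_logvar)         # latent features from the VAE
context = attention_pool(H, z)            # attention-weighted trip history
dest = np.concatenate([z, context]) @ W_out
print(dest.shape)                         # (2,) predicted destination coords
```

The weights here are random, so the sketch only shows the data flow; a real model would train them end-to-end with the VAE's reconstruction-plus-KL loss and a regression loss on the destination.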
    A stacked autoencoder is a typical deep neural network whose hidden layers compress the input into a representation better than the raw data. A stacked autoencoder has several hidden layers, but the number of hidden layers is usually chosen empirically. In this paper, autoencoders with different numbers of hidden layers are discussed. Different depths of stacked autoencoder have different learning capability: deeper stacked autoencoders learn better, but need more training iterations and time.
    Autoencoder
    Feature Learning
    Representation
    Citations (20)
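The depth/capacity trade-off described above can be made concrete with a minimal numpy sketch: a symmetric stacked autoencoder built from a list of hidden-layer sizes, so shallow and deep variants can be compared by parameter count. The dimensions and tanh activations are assumptions for illustration, not taken from the surveyed paper.

```python
import numpy as np

def build_stacked_autoencoder(d_in, hidden_dims, rng):
    """Create weight matrices for a symmetric stacked autoencoder."""
    dims = [d_in] + hidden_dims
    enc = [rng.standard_normal((dims[i], dims[i + 1])) * 0.1
           for i in range(len(dims) - 1)]
    dec = [W.T.copy() for W in reversed(enc)]   # mirror-shaped decoder
    return enc, dec

def forward(x, enc, dec):
    h = x
    for W in enc:                 # each hidden layer compresses further
        h = np.tanh(h @ W)
    for W in dec:                 # the decoder reconstructs the input
        h = np.tanh(h @ W)
    return h

def n_params(enc, dec):
    return sum(W.size for W in enc + dec)

rng = np.random.default_rng(1)
x = rng.standard_normal(16)

shallow = build_stacked_autoencoder(16, [8], rng)          # 1 hidden layer
deep = build_stacked_autoencoder(16, [12, 8, 4], rng)      # 3 hidden layers

print(forward(x, *shallow).shape)         # (16,) reconstruction
print(n_params(*shallow), n_params(*deep))
```

The deeper variant has more parameters to fit, which is the sketch-level reason it needs more training iterations and time, as the abstract notes.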
    A neural network is one of the techniques by which we classify data. In this paper, we propose a more effective stacked autoencoder built with a modified sigmoid activation function. We built a two-layer stacked autoencoder with this modified sigmoid activation function and compared it to the existing autoencoder technique, which generally uses the log-sigmoid activation function. In many cases that technique cannot achieve good results, and our technique may be used instead. The reason is that our modified sigmoid activation function gives more variation across different input values. We tested our proposed autoencoder on the iris, glass, wine, ovarian, and digit image datasets for comparison purposes. The existing autoencoder technique achieved 96% accuracy on iris, 91% on wine, 95.4% on ovarian, 96.3% on glass, and 98.7% on the digit (image) dataset. Our proposed autoencoder achieved 100% accuracy on iris, wine, ovarian, and glass, and 99.4% on the digit (image) dataset. For further verification of the effectiveness of our proposed autoencoder, we took three more datasets: abalone, thyroid, and chemical. Our proposed autoencoder achieved 100% accuracy on abalone and chemical, and 96% on thyroid.
    Autoencoder
    Sigmoid function
    Activation function
    Citations (12)
    This paper presents a performance comparison of three types of autoencoders: the traditional autoencoder with a Restricted Boltzmann Machine (RBM), the stacked autoencoder without an RBM, and the stacked autoencoder with an RBM. The performances are compared based on the reconstruction error for face images, using the same values for parameters such as the number of neurons in the hidden layers, the training method, and the learning rate. The results show that the RBM stacked autoencoder gives better performance in terms of reconstruction error than the other two architectures.
    Autoencoder
    Restricted Boltzmann machine
    Word error rate
    Citations (28)
    The autoencoder is an excellent unsupervised learning algorithm; however, it cannot generate varied sample data in the decoding process. The variational autoencoder is a typical generative model that can generate various data to augment the sample data. In this paper, we study the information learned in the hidden layer. In simulation, we compare the hidden-layer learning of the conventional autoencoder and the variational autoencoder.
    Autoencoder
    Generative model
    Sample (material)
    Citations (22)
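The key contrast in the abstract above (a plain autoencoder decodes one fixed code and always reproduces the same output, while a VAE samples the latent code and so generates varied data) can be shown with a toy decoder. The decoder weights and dimensions below are arbitrary assumptions; the point is only the deterministic-versus-stochastic behavior.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy decoder (assumed): maps a 2-d latent code to a 6-d "sample".
W_dec = rng.standard_normal((2, 6))

def decode(z):
    return np.tanh(z @ W_dec)

# A plain autoencoder decodes one fixed code -> always the same output.
z_fixed = np.array([0.5, -0.3])
plain_a = decode(z_fixed)
plain_b = decode(z_fixed)

# A VAE samples z ~ N(mu, sigma^2) each time -> varied generated data.
mu, sigma = np.array([0.5, -0.3]), np.array([0.4, 0.4])
vae_a = decode(mu + sigma * rng.standard_normal(2))
vae_b = decode(mu + sigma * rng.standard_normal(2))

print(np.allclose(plain_a, plain_b))   # True: deterministic decoding
print(np.allclose(vae_a, vae_b))       # False: each draw differs
```

This is why the VAE is useful for data augmentation: drawing repeatedly from the latent Gaussian yields distinct but related samples around the same code.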
    Anomaly detection technology is the basis for ensuring the safe and stable operation of an on-orbit payload. Traditional threshold-based anomaly detection has low accuracy and poor flexibility, and cannot detect abnormalities in real time. In addition, due to the lack of abnormal samples, the distribution of positive and negative samples is extremely imbalanced, which increases the difficulty of anomaly detection. This paper therefore proposes an unsupervised learning method based on the autoencoder and its variants: the basic autoencoder, deep autoencoder, and sparse autoencoder are verified on three public datasets, and the same three algorithms are applied in a case study on a real payload dataset. The experiments show that, on both the public datasets and the real payload data, all three autoencoder methods achieve good results, proving that the autoencoder and its variants apply well to anomaly detection. At the same time, the three algorithms perform differently on different datasets, which shows that autoencoders with different characteristics need to be selected for different scenarios.
    Autoencoder
    Payload (computing)
    Anomaly (physics)
    Citations (0)
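The unsupervised recipe underlying this family of methods is: train an autoencoder on normal data only, then flag inputs whose reconstruction error exceeds a threshold calibrated on the normal set. A minimal sketch follows, using the fact that a linear autoencoder with MSE loss has a closed-form solution via the top principal components (SVD), so no iterative training is needed. The data, dimensions, and threshold rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Normal" telemetry (assumed): points near a 2-d subspace of 10-d space.
basis = rng.standard_normal((2, 10))
normal = (rng.standard_normal((200, 2)) @ basis
          + 0.05 * rng.standard_normal((200, 10)))

# A linear autoencoder trained with MSE has a closed form: the top
# principal components of the training data (computed here via SVD).
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
V = Vt[:2].T                      # encoder/decoder weights (10 -> 2 -> 10)

def recon_error(x):
    z = (x - mean) @ V            # encode
    x_hat = z @ V.T + mean        # decode
    return np.sum((x - x_hat) ** 2)

# Calibrate the threshold on normal data only (no abnormal samples needed).
threshold = max(recon_error(x) for x in normal)
anomaly = rng.standard_normal(10) * 3.0    # a point off the normal subspace

print(recon_error(anomaly) > threshold)    # flagged as anomalous
```

Because the threshold is set from normal data alone, the method sidesteps the class-imbalance problem the abstract describes: no labeled anomalies are required at training time.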
    The concept of the autoencoder was originally proposed by LeCun in 1987; early works on autoencoders used them for dimensionality reduction or feature learning. Recently, with the popularity of deep learning research, the autoencoder has been brought to the forefront of generative modeling. Many variants of the autoencoder have been proposed by different researchers and successfully applied in many fields, such as computer vision, speech recognition, and natural language processing. In this paper, we present a comprehensive survey of the autoencoder and its variants, and we also present the lineage of the surveyed autoencoders. This paper can provide valuable help to researchers engaged in related work.
    Autoencoder
    Popularity
    Feature (linguistics)
    Citations (209)
    The deep autoencoder has a powerful ability to learn features from a large number of unlabeled samples and a small number of labeled samples. In this work, we improve the network structure of the general deep autoencoder and apply it to auxiliary disease diagnosis: the network takes specific medical indicators as input and predicts whether the patient suffers from liver disease, using real physical examination data for training and verification. Compared with traditional semi-supervised machine learning algorithms, the deep autoencoder achieves higher accuracy.
    Autoencoder
    Training set
    A method for explaining a deep learning model's predictions is proposed. It uses a combination of a standard autoencoder and a variational autoencoder. The standard autoencoder is used to reconstruct original images and to produce hidden representation vectors. The variational autoencoder is trained to transform the deep learning model's outputs (embedding vectors) into the hidden representation vectors of the standard autoencoder. In the explaining or testing phase, the variational autoencoder produces a set of vectors based on the embedding of the image being explained; the trained decoder of the standard autoencoder then reconstructs a set of images that form a heatmap explaining the original image. In effect, the variational autoencoder plays the role of an image perturbation technique. Numerical experiments with the well-known MNIST and CIFAR10 datasets illustrate the proposed method.
    Autoencoder
    MNIST database
    Representation
    Anomaly detection is critical given the raft of cyber attacks on wireless communications today, and determining network anomalies accurately remains a challenging task. In this paper, we propose an autoencoder-based network anomaly detection method. The autoencoder captures non-linear correlations between features, increasing detection accuracy. We also apply a Convolutional Autoencoder (CAE) to perform dimensionality reduction; because the CAE has fewer parameters, it requires less training time than a conventional autoencoder. Evaluated on the NSL-KDD dataset, the CAE-based network anomaly detection method outperforms other detection methods.
    Autoencoder
    Anomaly (physics)
    Citations (276)
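The parameter-count argument for the CAE can be checked with simple arithmetic: a convolution's weights depend only on kernel size and channel counts, not on the input size, whereas a dense layer's weights grow with the full input dimension. The layer sizes below (32x32 input, 256 dense units, eight 3x3 filters) are assumed for illustration, not taken from the paper.

```python
# Parameter counts for one encoding layer on a 32x32 single-channel input:
# a dense layer to 256 hidden units vs. a 3x3 convolution with 8 filters.
dense_params = 32 * 32 * 256 + 256    # one weight per input pixel per unit, plus biases
conv_params = 3 * 3 * 1 * 8 + 8       # kernel weights shared across positions, plus biases

print(dense_params, conv_params)      # 262400 80
print(conv_params < dense_params)     # True
```

Fewer parameters means fewer gradients to compute and store per update, which is the sketch-level reason the CAE trains faster than a conventional fully connected autoencoder.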