DGCddG: Deep Graph Convolution for Predicting Protein-Protein Binding Affinity Changes Upon Mutations
Citations: 9 · References: 48 · Related Papers: 10
Abstract:
Accurately predicting how amino acid mutations affect interactions between proteins is a key problem for understanding the mechanisms of protein function and for drug design. In this study, we present a deep graph convolution (DGC) network-based framework, DGCddG, to predict changes in protein-protein binding affinity upon mutation. DGCddG incorporates multi-layer graph convolution to extract a deep, contextualized representation for each residue of the protein complex structure. The channels mined by the DGC at the mutation sites are then fitted to the binding affinity with a multi-layer perceptron. Experiments on multiple datasets show that our model achieves relatively good performance for both single- and multi-point mutations. In blind tests on datasets related to angiotensin-converting enzyme 2 (ACE2) binding with the SARS-CoV-2 virus, our method shows better results in predicting ACE2 changes and may help in finding favorable antibodies. Code and data availability: https://github.com/lennylv/DGCddG
Keywords: Convolution (computer science), Perceptron, Representation
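To make the abstract's pipeline concrete, the sketch below stacks graph-convolution layers over a residue-contact graph and regresses the affinity change from the mutation-site representation with an MLP. This is a minimal sketch of the general technique, not the authors' exact architecture; the class names, layer sizes, and the identity-matrix adjacency placeholder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = relu(A_norm @ H @ W), where
    A_norm is the normalized adjacency of the residue-contact graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, h):
        return torch.relu(self.linear(adj_norm @ h))

class DdgPredictor(nn.Module):
    """Stacked GCN layers over the complex, then an MLP on the
    mutation-site representation to regress the affinity change."""
    def __init__(self, in_dim=64, hidden=128, depth=4):
        super().__init__()
        dims = [in_dim] + [hidden] * depth
        self.gcn = nn.ModuleList(GCNLayer(a, b) for a, b in zip(dims, dims[1:]))
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, adj_norm, feats, mut_idx):
        h = feats
        for layer in self.gcn:
            h = layer(adj_norm, h)
        return self.mlp(h[mut_idx]).squeeze(-1)  # one ddG per mutation site

# toy usage: 50 residues, 64 input features, mutation at residue 7
adj = torch.eye(50)                # stand-in for a normalized contact map
feats = torch.randn(50, 64)        # per-residue input features
model = DdgPredictor()
print(model(adj, feats, torch.tensor([7])))
```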
Related Papers:

Deep neural networks have become the primary learning technique for object recognition. Videos, unlike still images, are temporally coherent, which makes the application of deep networks non-trivial. Here, we investigate how motion can aid object recognition in short videos. Our approach is based on Long Short-Term Memory (LSTM) deep networks. Unlike previous applications of LSTMs, we implement each gate as a convolution. We show that convolution-based LSTM models are capable of learning motion dependencies and are able to improve recognition accuracy when more frames in a sequence are available. We evaluate our approach on the Washington RGBD Object dataset and on the Washington RGBD Scenes dataset. Our approach outperforms deep nets applied to still images and sets a new state of the art in this domain.
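To illustrate "each gate as a convolution", here is a minimal ConvLSTM cell sketch in PyTorch. The cell, its channel sizes, and the toy clip are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose input, forget, output, and candidate gates are
    2-D convolutions over feature maps instead of dense layers."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # one conv produces all four gates at once from [x, h_prev]
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h_prev, c_prev):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h_prev], dim=1)),
                                 4, dim=1)
        c = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# toy usage: run the cell over a 5-frame RGB clip
cell = ConvLSTMCell(in_ch=3, hid_ch=16)
h = c = torch.zeros(1, 16, 32, 32)
for frame in torch.randn(5, 1, 3, 32, 32):
    h, c = cell(frame, h, c)
```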
Deep learning is a subfield of machine learning that has gained significant popularity in recent years due to its ability to achieve state-of-the-art results in a variety of applications, ranging from computer vision and natural language processing to robotics and gaming. It is based on artificial neural networks, which are designed to mimic the structure and function of the human brain. In this book, we provide a comprehensive overview of deep learning, including its definition, history, key characteristics, limitations, and applications. In Chapter 1, we delve into the fundamentals of deep learning, including its definition, a historical overview, and the differences between deep learning and machine learning. Additionally, we introduce Bayesian learning concepts, which are an important aspect of deep learning. We also cover the concept of decision surfaces and how they can be used to visualize and interpret the results of deep learning algorithms. Chapter 2 focuses on linear classifiers, including linear discriminant analysis, logistic regression, and the perceptron algorithm. We also cover linear machines with hinge loss, a loss function widely used in deep learning. Chapter 3 discusses various types of optimization techniques, including gradient descent and batch optimization. We provide an overview of each optimization method, as well as their variants, and explain how they work. In Chapter 4, we introduce neural networks, including their structure, how they work, and their key components. We then delve into the multilayer perceptron, one of the most commonly used neural network architectures, and the backpropagation learning algorithm, which is used to train neural networks. Keywords: Machine learning, Artificial neural networks, Computer vision, Natural language processing, Robotics, Gaming, State-of-the-art results, Human brain, Comprehensive overview, Bayesian learning, Decision surfaces, Linear classifiers, Linear discriminant analysis, Logistic regression, Perceptron algorithm, Linear machines, Hinge loss, Optimization techniques, Gradient descent, Batch optimization, Neural networks, Multilayer perceptron, Backpropagation learning algorithm, Key components, Training neural networks
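Since the perceptron algorithm recurs throughout this overview, here is a minimal NumPy sketch of Rosenblatt's update rule on toy data; the function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def perceptron_train(X, y, epochs=50, lr=1.0):
    """Rosenblatt's perceptron rule: nudge the weights whenever a
    sample falls on the wrong side of the decision surface. Labels
    y must be +/-1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified sample
                w += lr * yi * xi
                b += lr * yi
    return w, b

# linearly separable toy data: the class is the sign of x0 + x1
X = np.random.randn(200, 2)
y = np.sign(X[:, 0] + X[:, 1])
w, b = perceptron_train(X, y)
print(np.mean(np.sign(X @ w + b) == y))  # training accuracy
```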
Machine learning is a subfield of artificial intelligence (AI) that involves the development of algorithms that can learn patterns from data and make predictions or decisions without being explicitly programmed. Deep learning is a subset of machine learning that uses deep neural networks with multiple layers to learn and extract features from complex data. The history of machine learning can be traced back to the 1950s, with the development of perceptrons, a type of artificial neuron. However, progress was slow until the 1990s, when the availability of large datasets and more powerful computing resources enabled the development of more sophisticated algorithms. In recent years, deep learning has achieved remarkable success in a wide range of applications, such as image recognition, speech recognition, and natural language processing. Machine learning and deep learning are important because they enable computers to perform tasks that were previously thought to be the exclusive domain of humans. They have the potential to revolutionize many industries, such as healthcare, finance, and transportation, by automating tasks and improving decision-making. However, they also raise ethical and societal concerns, such as bias and job displacement, that need to be addressed.
Recent advances in machine learning, specifically in deep learning with neural networks, have made a profound impact on fields such as natural language processing, image classification, and language modeling; however, the feasibility and potential benefits of these approaches for metagenomic data analysis have been largely under-explored. Deep learning exploits many layers that learn nonlinear feature representations, typically in an unsupervised fashion, and recent results have shown outstanding generalization performance on previously unseen data. Furthermore, some deep learning methods can also represent the structure in a data set. Consequently, deep learning and neural networks may prove to be an appropriate approach for metagenomic data. To determine whether such approaches are indeed appropriate for metagenomics, we experiment with two deep learning methods: i) a deep belief network, and ii) a recursive neural network, the latter of which provides a tree representing the structure of the data. We compare these approaches to the standard multi-layer perceptron, which is well established in the machine learning community as a powerful prediction algorithm, though it is largely missing from the metagenomics literature. We find that traditional neural networks can be quite powerful classifiers on metagenomic data compared to baseline methods such as random forests. On the other hand, while the deep learning approaches did not improve classification accuracy, they do provide the ability to learn hierarchical representations of a data set that standard classification methods do not allow. Our goal in this effort is not to determine the best algorithm in terms of accuracy, as that depends on the specific application, but rather to highlight the benefits and drawbacks of each of the approaches we discuss and to provide insight into how they can be improved for predictive metagenomic analysis.
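The core comparison (MLP versus a random-forest baseline on tabular data) can be reproduced in miniature with scikit-learn. The synthetic matrix below is only a stand-in for real taxa-abundance features; the dataset, layer sizes, and cross-validation setup are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for a samples-by-taxa abundance matrix
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=15, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=1000, random_state=0))
rf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, clf in [("MLP", mlp), ("random forest", rf)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```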
Abstract: The functional impact of protein mutations is reflected in the alteration of the conformation and thermodynamics of protein-protein interactions (PPIs). Quantifying the changes of two interacting proteins upon mutations is commonly carried out by computational approaches. Hence, extensive research effort has been put into the extraction of energetic or structural features of proteins, followed by statistical learning methods to estimate the effects of mutations on PPI properties. Nonetheless, such features require extensive human labor and expert knowledge to obtain, and have a limited ability to reflect point mutations. We present an end-to-end deep learning framework, MuPIPR, to estimate the effects of mutations on PPIs. MuPIPR incorporates a contextualized representation mechanism of amino acids to propagate the effects of a point mutation to surrounding amino acid representations, thereby amplifying the subtle change in a long protein sequence. On top of that, MuPIPR leverages a Siamese residual recurrent convolutional neural encoder to encode a wild-type protein pair and its mutation pair. Multi-layer perceptron regressors are applied to the protein pair representations to predict the quantifiable changes of PPI properties upon mutations. Experimental evaluations show that MuPIPR outperforms various state-of-the-art systems on binding affinity change prediction and buried surface area prediction. The software implementation is available at https://github.com/guangyu-zhou/MuPIPR
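The Siamese idea (one shared encoder applied to the wild-type pair and the mutant pair, with a regressor on the difference) can be sketched as follows. This is a simplified illustration, using a plain GRU as a stand-in for MuPIPR's residual recurrent convolutional encoder; all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class SiameseDdg(nn.Module):
    """Shared sequence encoder applied to the wild-type and mutant
    protein pairs; an MLP regresses the change in a PPI property
    from the difference of the two pair embeddings."""
    def __init__(self, vocab=21, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)  # simplified encoder
        self.mlp = nn.Sequential(nn.Linear(2 * hid, hid), nn.ReLU(),
                                 nn.Linear(hid, 1))

    def encode_pair(self, seq_a, seq_b):
        _, ha = self.encoder(self.embed(seq_a))
        _, hb = self.encoder(self.embed(seq_b))
        return torch.cat([ha[-1], hb[-1]], dim=-1)   # pair embedding

    def forward(self, wt_a, wt_b, mut_a, mut_b):
        delta = self.encode_pair(mut_a, mut_b) - self.encode_pair(wt_a, wt_b)
        return self.mlp(delta).squeeze(-1)

# toy usage: a batch of 2 complexes, sequences of length 120 over 21 tokens
seqs = [torch.randint(0, 21, (2, 120)) for _ in range(4)]
print(SiameseDdg()(*seqs))
```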
The objectives of the proposed study are to develop a novel criterion-based method for classifying RBC images and to increase classification accuracy by utilizing Deep Convolutional Neural Networks instead of the conventional CNN algorithm. Materials and Methods: A Deep Convolutional Neural Network is applied to the dataset-master image dataset of 790 images. A comparison of the Convolutional Neural Network and the Deep Convolutional Neural Network has been proposed and developed to improve the classification accuracy of RBC images. Using G*Power, the sample size was calculated to be 27 per group. Results: Compared to the Convolutional Neural Network, the Deep Convolutional Neural Network had the highest accuracy in classifying blood cell images (95.2%) and the lowest mean error (85.8%). There is a statistically significant difference between the classifiers (p = 0.005). The study demonstrates that Deep Convolutional Neural Networks classify blood cell images more accurately than conventional Convolutional Neural Networks [1].
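A shallow-versus-deep CNN comparison of the kind described can be sketched compactly in PyTorch; the block structure, depths, channel sizes, and class count below are illustrative assumptions, not the study's actual networks.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

def make_cnn(depth, n_classes=4):
    """Stack `depth` conv blocks, then classify; depth=2 plays the
    shallow baseline, depth=4 a 'deep' variant."""
    chans = [3] + [32 * 2 ** i for i in range(depth)]
    blocks = [conv_block(a, b) for a, b in zip(chans, chans[1:])]
    return nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(chans[-1], n_classes))

shallow, deep = make_cnn(2), make_cnn(4)
x = torch.randn(8, 3, 64, 64)            # a batch of RGB cell images
print(shallow(x).shape, deep(x).shape)   # both: torch.Size([8, 4])
```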
Abstract: Artificial intelligence is a concept that includes machine learning and deep learning. The deep learning model used in this study is a DNN (deep neural network) with two or more hidden layers. In this study, an MLP (multi-layer perceptron) and machine learning models (XGBoost, LGBM) were used. An MLP consists of at least three layers: an input layer, a hidden layer, and an output layer. In general, tree-based or linear machine learning models are widely used for classification. We analyzed our data by applying deep learning (MLP) to improve performance, which showed good results. The deep learning and ML models showed differences in predictive power and disease-classification patterns. We used a confusion matrix and analyzed feature importance using the SHAP value method. Here, we present a protocol to confirm that deep learning can perform well in disease classification using numerical structured hospital data (laboratory tests).
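The evaluation steps (fit an MLP on numerical tabular data, inspect a confusion matrix, rank features) can be sketched with scikit-learn. Permutation importance is used here as a simpler stand-in for the SHAP value method the study uses; the dataset and hyperparameters are assumptions for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# numerical tabular data standing in for laboratory-test values
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32),
                                  max_iter=2000, random_state=0)).fit(X_tr, y_tr)

print(confusion_matrix(y_te, mlp.predict(X_te)))

# permutation importance as a simpler stand-in for SHAP values
imp = permutation_importance(mlp, X_te, y_te, n_repeats=10, random_state=0)
print(imp.importances_mean.argsort()[::-1][:5])  # top-5 feature indices
```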
Deep neural networks have recently become increasingly popular under the name of deep learning, owing to their success in challenging machine learning tasks. Although the popularity is mainly due to recent successes, the history of neural networks goes back as far as 1958, when Rosenblatt presented the perceptron learning algorithm. Since then, various kinds of artificial neural networks have been proposed. They include Hopfield networks, self-organizing maps, neural principal component analysis, Boltzmann machines, multi-layer perceptrons, radial-basis function networks, autoencoders, sigmoid belief networks, support vector machines, and deep belief networks.
The first part of this thesis investigates shallow and deep neural networks in search of principles that explain why deep neural networks work so well across a range of applications. The thesis starts from some of the earlier ideas and models in the field of artificial neural networks and arrives at autoencoders and Boltzmann machines, two of the most widely studied neural networks today. The author thoroughly discusses how those various neural networks are related to each other and how the principles behind those networks form a foundation for autoencoders and Boltzmann machines.
The second part is the collection of ten recent publications by the author. These publications mainly focus on learning and inference algorithms for Boltzmann machines and autoencoders. In particular, Boltzmann machines, which are known to be difficult to train, have been the main focus. Across several publications, the author and co-authors have devised a new set of learning algorithms that includes the enhanced gradient, an adaptive learning rate, and parallel tempering. These algorithms are further applied to a restricted Boltzmann machine with Gaussian visible units.
In addition to these algorithms for restricted Boltzmann machines, the author proposed a two-stage pretraining algorithm that initializes the parameters of a deep Boltzmann machine to match the variational posterior distribution of a similarly structured deep autoencoder. Finally, deep neural networks are applied to image denoising and speech recognition.
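For readers unfamiliar with why restricted Boltzmann machines are hard to train, the standard workhorse is contrastive divergence. Below is a minimal NumPy sketch of a Bernoulli RBM trained with CD-1; it shows the baseline algorithm the thesis improves upon (not the enhanced gradient, adaptive learning rate, or parallel tempering themselves), and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Bernoulli restricted Boltzmann machine trained with one step
    of contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)
        self.lr = lr

    @staticmethod
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(self, v0):
        # positive phase: sample hidden units given the data
        ph0 = self.sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # negative phase: one reconstruction step
        pv1 = self.sigmoid(h0 @ self.W.T + self.b_v)
        ph1 = self.sigmoid(pv1 @ self.W + self.b_h)
        # approximate gradient from the two phases
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# toy binary data: 100 samples of 20 visible units
data = (rng.random((100, 20)) < 0.3).astype(float)
rbm = RBM(n_vis=20, n_hid=8)
for _ in range(200):
    rbm.cd1_step(data)
```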
Abstract: Deep learning is a promising branch of machine learning. It uses artificial neural networks as the architecture to characterize and learn from data. In recent years, many companies, for example Google, Microsoft, and Baidu, have become interested in the field of deep learning and have set up many large-scale projects, such as Google's DeepMind, whose AlphaGo has achieved success in Go and e-sports. This article analyzes and summarizes the current research directions and approaches in deep learning, and discusses prospects for its future development. An overview of the three basic models of deep learning is given, namely perceptrons and multilayer perceptrons, convolutional neural networks, and recurrent neural networks. The benefits of deep learning algorithms are illustrated and compared with the conventional methodologies used in common applications. Further research on emerging types of convolutional and recurrent neural networks is introduced. Current applications of deep learning in various fields, such as artificial intelligence, computer vision, and natural language processing, are summarized, and some open problems for future research are analyzed. Finally, the significance and purpose of deep learning are discussed.