Deep learning plays an increasingly important role in industrial applications, such as the remaining useful life (RUL) prediction of machines. However, when dealing with multifeature data, most deep learning approaches lack effective mechanisms to weigh the input features adaptively. In this article, a novel feature-attention-based end-to-end approach is proposed for RUL prediction. First, the proposed feature-attention mechanism is applied directly to the input data, dynamically assigning greater attention weights to more important features during training. This helps the model focus on critical inputs and thereby improves prediction performance. Next, bidirectional gated recurrent units (BGRU) are used to extract long-term dependencies from the weighted input data, and convolutional neural networks are employed to capture local features from the output sequences of the BGRU. Finally, fully connected networks learn these abstract representations to predict the RUL. The proposed approach is validated in a case study of turbofan engines. The experimental results demonstrate that the proposed approach outperforms recent state-of-the-art approaches.
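A minimal PyTorch sketch of the pipeline this abstract describes (feature attention applied to the raw inputs, a bidirectional GRU, a 1-D convolution over its output sequence, and a fully connected head). The layer sizes, the softmax form of the attention, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FeatureAttentionRUL(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Learns one attention weight per input feature at each time step.
        self.attn = nn.Sequential(nn.Linear(n_features, n_features), nn.Softmax(dim=-1))
        self.bgru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, 32, kernel_size=3, padding=1)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, x):                                    # x: (batch, time, n_features)
        x = x * self.attn(x)                                 # re-weight input features adaptively
        seq, _ = self.bgru(x)                                # (batch, time, 2 * hidden)
        local = torch.relu(self.conv(seq.transpose(1, 2)))   # local features over the sequence
        return self.head(local)                              # predicted RUL, shape (batch, 1)


rul = FeatureAttentionRUL(n_features=14)(torch.randn(8, 30, 14))
print(rul.shape)  # torch.Size([8, 1])
```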
In visual servoing (VS) tasks, target objects with complex shapes make it challenging to extract effective visual information for robotic control. Appropriate image features describing the overall contour of the object are critical for implementing such tasks. In this work, we propose a contour VS method based on B-spline features. A quasi-uniform B-spline curve is employed to construct image features from control points extracted from contours. With their good shape-description capability and concise mathematical expression, B-spline features handle objects with complex shapes in a visually intuitive and efficient way. Moreover, to improve the VS system's robustness to dynamic environments and temporary occlusion, a real-time estimation framework composed of a B-spline features estimator (BFE) and a B-spline features predictor (BFP) is proposed. The BFE achieves optimal estimation of the current image features and the corresponding depth based on an adaptive extended Kalman filter (AEKF), where control points are regarded as observations. The BFP mainly tackles the problem of object occlusion, where the perspective projection invariance of B-spline features and a Lie algebra model are introduced to predict the occluded features through nonlinear optimization. The effectiveness of the proposed method is validated on VS tasks for objects with complex shapes through simulations and experiments.
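A small numpy sketch of the quasi-uniform (clamped) B-spline construction referenced above: a contour is described by a curve evaluated from a set of control points via the standard Cox-de Boor recursion. The cubic degree, sampling density, and function names are assumptions for illustration only.

```python
import numpy as np


def clamped_uniform_knots(n_ctrl: int, degree: int) -> np.ndarray:
    # Quasi-uniform knot vector: endpoints repeated (degree + 1) times, interior knots uniform.
    n_interior = n_ctrl - degree - 1
    interior = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    return np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])


def bspline_basis(i: int, k: int, t: float, knots: np.ndarray) -> float:
    # Cox-de Boor recursion for the i-th basis function of degree k at parameter t.
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0 if knots[i + k] == knots[i] else \
        (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    right = 0.0 if knots[i + k + 1] == knots[i + 1] else \
        (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, t, knots)
    return left + right


def contour_curve(ctrl_pts: np.ndarray, degree: int = 3, samples: int = 200) -> np.ndarray:
    # Sample the contour C(t) = sum_i N_{i,degree}(t) * P_i from the control points P_i.
    knots = clamped_uniform_knots(len(ctrl_pts), degree)
    ts = np.linspace(0.0, 1.0 - 1e-9, samples)  # keep t < 1 so the last basis stays defined
    basis = np.array([[bspline_basis(i, degree, t, knots) for i in range(len(ctrl_pts))] for t in ts])
    return basis @ ctrl_pts


# Example: eight 2-D control points taken from an image contour.
curve = contour_curve(np.random.rand(8, 2))
print(curve.shape)  # (200, 2)
```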
By means of estimators based on non-equilibrium work, equilibrium free energy differences or potentials of mean force (PMFs) of a system of interest can be computed from biased molecular dynamics (MD) simulations. The approach, however, is often plagued by slow conformational sampling and poor convergence, especially when solvent effects are taken into account. Here, as a possible way to alleviate this problem, several widely used implicit-solvent models, which are derived from the analytic generalized Born (GB) equation and implemented in the AMBER suite of programs, were employed in free energy calculations based on non-equilibrium work and evaluated for their ability to emulate explicit water. As a test case, pulling MD simulations were carried out on an alanine polypeptide with different solvent models and protocols, followed by comparisons of the reconstructed PMF profiles along the unfolding coordinate. The results show that, when employing the non-equilibrium work method, sampling with an implicit-solvent model is several times faster and, more importantly, converges more rapidly than sampling with explicit water, owing to reduced dissipation. Among the assessed GB models, the Neck variants outperform the OBC and HCT variants in terms of accuracy, whereas their computational costs are comparable. In addition, for the best-performing models, the impact of the solvent-accessible surface area (SASA)-dependent nonpolar solvation term was also examined. The present study highlights the advantages of implicit-solvent models for non-equilibrium sampling.
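For orientation, a minimal statement of the non-equilibrium work relation underlying such PMF reconstructions is sketched below: the Jarzynski equality and its common second-order cumulant approximation, in which the dissipated work appears explicitly. The particular estimator variant and pulling protocol used in the study are not restated here.

```latex
% Jarzynski equality: the free energy difference along the pulling coordinate is an
% exponential average of the non-equilibrium work W over repeated trajectories.
\Delta F = -k_{\mathrm{B}} T \,\ln \left\langle e^{-\beta W} \right\rangle,
\qquad \beta = \frac{1}{k_{\mathrm{B}} T}

% Second-order cumulant approximation, valid for near-Gaussian work distributions;
% the second term is the dissipated work, whose reduction speeds up convergence.
\Delta F \approx \langle W \rangle
- \frac{\beta}{2} \left( \langle W^{2} \rangle - \langle W \rangle^{2} \right)
```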
Organ-on-a-chip systems have been increasingly recognized as attractive platforms to assess toxicity and to develop new therapeutic agents. However, current organ-on-a-chip platforms are limited by a “single pot” design, which inevitably requires holistic analysis and limits parallel processing. Here, we developed a digital organ-on-a-chip by combining a microwell array with cellular microspheres, which significantly increased parallelism over the traditional organ-on-a-chip for drug development. Up to 127 uniform liver cancer microspheres in this digital organ-on-a-chip format served as individual analytical units, allowing for analysis with high consistency and quick response. Our platform displayed evident anticancer efficacy at a concentration of 10 μM for sorafenib and aligned more closely with a previous in vivo study than the “single pot” organ-on-a-chip did. In addition, this digital organ-on-a-chip demonstrated the treatment efficacy of natural killer cell-derived extracellular vesicles for liver cancer at 50 μg/mL. The successful development of this digital organ-on-a-chip platform provides a high-parallelism, low-variability analytical tool for toxicity assessment and the exploration of new anticancer modalities, thereby accelerating the joint endeavor to combat cancer.
Quality prediction, as the basis of quality control, is dedicated to predicting quality indices of the manufacturing process. In recent years, data-driven deep learning methods have received considerable attention for their accuracy, robustness, and convenience in predicting quality indices. However, existing studies mainly focus on the quality prediction of a single machine, while ignoring dependency relationships among multiple machines in a multistage manufacturing process. To tackle these issues, a novel path-enhanced bidirectional graph attention network (PGAT) is proposed in this article. PGAT models the dependencies among machines as directed graphs and introduces a graph attention network to encode them. Nonetheless, it is difficult for graph neural networks to encode long-distance dependencies. Hence, dependency path information is introduced into the features of machines. Moreover, in order to solve the label noise problem that often occurs in actual industrial datasets, a masked loss function is devised. With its help, batch training with noisy labels can be achieved, which improves the training efficiency. Furthermore, experiments are conducted on a public quality prediction dataset collected from an actual production line. PGAT achieves state-of-the-art results on this dataset, which confirms its effectiveness. Additionally, the experimental results demonstrate the critical role of modeling dependency relationships among machines.
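A hedged PyTorch sketch of two ingredients mentioned above: (i) a graph-attention encoding of the directed machine-dependency graph and (ii) a masked loss that skips samples flagged as noisy or unlabeled so that batch training still works. The layer sizes, the use of torch_geometric's GATConv, and the mask convention (1 = trusted label, 0 = noisy/missing) are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


class MachineGraphEncoder(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        # Two attention layers over the directed machine-dependency graph.
        self.gat1 = GATConv(in_dim, hidden, heads=4, concat=True)
        self.gat2 = GATConv(4 * hidden, hidden, heads=1, concat=False)
        self.out = nn.Linear(hidden, 1)          # one quality index per machine node

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))
        h = torch.relu(self.gat2(h, edge_index))
        return self.out(h).squeeze(-1)


def masked_mse(pred, target, mask):
    # Only trusted labels (mask == 1) contribute; noisy labels are ignored in the batch.
    mask = mask.float()
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
```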
Dependency-based models are widely used to extract semantic relations from text. Most existing dependency-based models establish stacked structures to merge contextual and dependency information, encoding the contextual information first and then the dependency information. However, this unidirectional information flow weakens the representation of words in the sentence, which further restricts the performance of existing models. To establish bidirectional information flow, a dual attention graph convolutional network (DAGCN) with a parallel structure is proposed. Most importantly, DAGCN can build multi-turn interactions between contextual and dependency information to imitate the multi-turn looking-back behavior of human readers. In addition, multi-layer adjacency-matrix-aware multi-head attention (AMAtt), comprising context-to-dependency attention and dependency-to-context attention, is carefully designed as a merge mechanism in the parallel structure to preserve the structural information of sentences and dependency trees during interactions. Furthermore, DAGCN is evaluated on the popular PubMed, TACRED, and SemEval 2010 Task 8 datasets to demonstrate its validity. Experimental results show that our model outperforms existing dependency-based models.
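A minimal sketch of one adjacency-matrix-aware cross-attention step of the kind the AMAtt merge describes: dependency-side representations attend to context-side representations, with attention restricted by the dependency-tree adjacency matrix. Dimensions, head count, and the exact masking rule are illustrative assumptions.

```python
import torch
import torch.nn as nn


class AdjacencyAwareCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, query_repr, key_repr, adjacency):
        # adjacency: (batch, seq, seq), 1 where a dependency edge (or self-loop) exists.
        # Positions without an edge are masked out so the tree structure is preserved.
        mask = adjacency.repeat_interleave(self.attn.num_heads, dim=0) == 0
        merged, _ = self.attn(query_repr, key_repr, key_repr, attn_mask=mask)
        return merged


ctx = torch.randn(2, 10, 128)                        # contextual (e.g., sequence-encoder) states
dep = torch.randn(2, 10, 128)                        # dependency (e.g., GCN) states
adj = torch.eye(10).expand(2, 10, 10)                # toy adjacency with self-loops only
out = AdjacencyAwareCrossAttention()(dep, ctx, adj)  # dependency-to-context attention
print(out.shape)  # torch.Size([2, 10, 128])
```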
The class imbalance problem has a huge impact on the performance of diagnostic models. When it occurs, minority samples are easily ignored by classification models. In addition, the distribution of class-imbalanced data differs from the actual data distribution, which makes it difficult for classifiers to learn an accurate decision boundary. To tackle these issues, this article proposes a novel imbalanced data classification method based on weakly supervised learning. First, the Bagging algorithm is employed to randomly sample the majority-class data and generate several relatively balanced subsets, which are then used to train several support vector machine (SVM) classifiers. Next, these trained SVM classifiers are used to predict the labels of unlabeled data, and samples predicted as the minority class are added to the original dataset to reduce the imbalance ratio. The critical idea of this article is to introduce real-world samples into the imbalanced dataset by virtue of weakly supervised learning. In addition, bidirectional gated recurrent units are used to construct a diagnostic model for fault diagnosis, and a new weighted cross-entropy function is proposed as the loss function to reduce the impact of noise; it also increases the model's attention to the original minority samples. Furthermore, experimental evaluations of the proposed method are conducted on two datasets, i.e., the Prognostics and Health Management 2008 and 2010 challenge datasets, and the experimental results demonstrate the effectiveness and superiority of the proposed method.
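A hedged scikit-learn sketch of the rebalancing step described above: several SVMs are trained on randomly drawn, roughly balanced subsets of the labeled data, and unlabeled samples that the ensemble predicts as the minority class are added to reduce the imbalance ratio. The voting rule, subset count, and default SVM settings are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC


def expand_minority(X, y, X_unlabeled, minority_label=1, n_subsets=5, rng=None):
    rng = np.random.default_rng(rng)
    minority = X[y == minority_label]
    majority = X[y != minority_label]
    majority_label = y[y != minority_label][0]

    votes = np.zeros(len(X_unlabeled))
    for _ in range(n_subsets):
        # Random majority subset of the same size as the minority class -> balanced subset.
        idx = rng.choice(len(majority), size=len(minority), replace=True)
        X_sub = np.vstack([minority, majority[idx]])
        y_sub = np.concatenate([np.full(len(minority), minority_label),
                                np.full(len(minority), majority_label)])
        votes += (SVC().fit(X_sub, y_sub).predict(X_unlabeled) == minority_label)

    # Keep unlabeled samples that a majority of the SVMs predict as the minority class.
    selected = X_unlabeled[votes > n_subsets / 2]
    X_new = np.vstack([X, selected])
    y_new = np.concatenate([y, np.full(len(selected), minority_label)])
    return X_new, y_new
```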