An unsupervised long short-term memory neural network for event detection in cell videos
1 Citation · 26 References · 10 Related Papers
Abstract:
We propose an automatic, unsupervised method for detecting and classifying cellular events in cell video sequences, which extends convolutional Long Short-Term Memory (LSTM) neural networks. Cells in images captured across biomedical applications vary in shape and motility, which makes automated event detection in cell videos difficult. Current methods for detecting cellular events are based on supervised machine learning and rely on tedious manual annotation by investigators with specific expertise. So that our LSTM network could be trained in an unsupervised manner, we designed it with a branched structure: one branch learns the frequent, regular appearance and movement of objects, while the second learns the stochastic events, which occur rarely and without warning in a cell video sequence. We tested our network on a publicly available dataset of densely packed stem cells undergoing division, imaged with phase-contrast microscopy; this dataset is considered more challenging than one with sparse cells. We compared our method to several published supervised methods evaluated on the same dataset, and to a supervised LSTM method with a design and configuration similar to our unsupervised method. We used the F1-score, a balanced measure of both precision and recall. Our results show that our unsupervised method achieves an F1-score higher than or similar to two fully supervised methods based on Hidden Conditional Random Fields (HCRF), and comparable accuracy to the current best supervised HCRF-based method. Our method also generalized: after being trained on one video, it could be applied to videos in which the cells were imaged under different conditions. Overall, the accuracy of our unsupervised method approached that of its supervised counterpart.
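The branched design can be made concrete with a small sketch. Below is a minimal PyTorch rendering of the idea, assuming illustrative module names, channel sizes, and a self-reconstruction objective; it is a sketch of the described architecture, not the authors' implementation. One convolutional LSTM branch tracks regular appearance and motion, a second tracks rare events, and their hidden states are fused to reconstruct each frame so the network can train without labels.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single convolutional LSTM cell (Shi et al. 2015 formulation)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, g, o = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class BranchedConvLSTM(nn.Module):
    """Two branches: regular appearance/motion vs. rare, stochastic events."""
    def __init__(self, in_ch=1, hid=16):
        super().__init__()
        self.hid = hid
        self.regular = ConvLSTMCell(in_ch, hid)
        self.event = ConvLSTMCell(in_ch, hid)
        self.recon = nn.Conv2d(2 * hid, in_ch, 1)  # fuse states, rebuild frame

    def forward(self, clip):                # clip: (time, batch, ch, H, W)
        t, b, _, hgt, wid = clip.shape
        hr = cr = he = ce = clip.new_zeros(b, self.hid, hgt, wid)
        outs = []
        for frame in clip:
            hr, cr = self.regular(frame, hr, cr)
            he, ce = self.event(frame, he, ce)
            outs.append(self.recon(torch.cat([hr, he], dim=1)))
        return torch.stack(outs)

clip = torch.randn(8, 2, 1, 64, 64)         # 8 frames, batch of 2, 64x64
recon = BranchedConvLSTM()(clip)
loss = nn.functional.mse_loss(recon, clip)  # label-free reconstruction loss
```

In this reading, frames that the regular branch explains poorly but the fused model reconstructs are candidates for rare events; the exact event-scoring rule is not reproduced here.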
Keywords:
Supervised Learning
Instance-based learning
Competitive learning
Citations (35)
Fraud is a significant issue for insurance companies, generating much interest in machine learning solutions. Although supervised learning for insurance fraud detection has long been a research focus, unsupervised learning has rarely been studied in this context, and there remains insufficient evidence to guide the choice between these branches of machine learning for insurance fraud detection. Accordingly, this study evaluates supervised and unsupervised learning using proprietary insurance claim data. Furthermore, we conduct a field experiment in cooperation with an insurance company to investigate the performance of each approach in identifying new fraudulent claims. We derive several important findings. Unsupervised learning, especially isolation forests, can successfully detect insurance fraud. Supervised learning also performs strongly, despite few labeled fraud cases. Interestingly, unsupervised and supervised learning detect new fraudulent claims based on different input information. For implementation, therefore, we suggest treating supervised and unsupervised methods as complements rather than substitutes.
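As a concrete illustration of the unsupervised side, here is a minimal isolation-forest sketch in scikit-learn. The claim features, sample sizes, and contamination rate are invented stand-ins for the study's proprietary data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy claims: columns = claim amount, days since policy start, prior claims.
claims = rng.normal(loc=[2_000, 400, 1], scale=[500, 120, 1], size=(1_000, 3))
claims[:10] *= [8, 0.05, 5]  # inject a few anomalous (fraud-like) claims

forest = IsolationForest(contamination=0.01, random_state=0).fit(claims)
scores = forest.score_samples(claims)   # lower score = more anomalous
flagged = np.argsort(scores)[:10]       # claims to route for manual review
print(flagged)
```

No fraud labels are used anywhere, which is the appeal of this branch: the forest isolates claims that are easy to separate from the bulk of the data.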
Supervised Learning
Citations (26)
Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid an expensive and complicated process of labeling the data. However, unsupervised learning of complex data is challenging, and even the best approaches show much weaker performance than their supervised counterparts. Self-supervised deep learning has become a strong instrument for representation learning in computer vision. However, those methods have not been evaluated in a fully unsupervised setting. In this paper, we propose a simple scheme for unsupervised classification based on self-supervised representations. We evaluate the proposed approach with several recent self-supervised methods showing that it achieves competitive results for ImageNet classification (39% accuracy on ImageNet with 1000 clusters and 46% with overclustering). We suggest adding the unsupervised evaluation to a set of standard benchmarks for self-supervised learning. The code is available at https://github.com/Randl/kmeans_selfsuper
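The proposed scheme is straightforward to reproduce in outline. The sketch below clusters frozen self-supervised features with k-means and scores unsupervised accuracy via Hungarian matching of clusters to classes; with overclustering, one would instead map each cluster to its majority class. The random features here are placeholders for real embeddings:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

n, dim, k = 600, 128, 10
rng = np.random.default_rng(0)
labels = rng.integers(0, k, n)
# Stand-in embeddings: a class centroid per label, plus noise.
feats = rng.normal(size=(k, dim))[labels] + 0.3 * rng.normal(size=(n, dim))

clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)

# Hungarian matching: best one-to-one assignment of clusters to classes.
cost = np.zeros((k, k))
for c, y in zip(clusters, labels):
    cost[c, y] -= 1  # negative counts, so minimisation finds max agreement
row, col = linear_sum_assignment(cost)
acc = -cost[row, col].sum() / n
print(f"unsupervised accuracy: {acc:.2%}")
```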
Supervised Learning
Feature Learning
Competitive learning
Representation
Code (set theory)
Citations (4)
To harness the value of data generated by IoT, new mechanisms are crucially required. Machine learning (ML) is among the computational paradigms best suited to embedding strong intelligence in IoT devices. Various ML techniques are widely utilised to improve network security in IoT, including reinforcement learning, semi-supervised learning, supervised learning, and unsupervised learning. This report critically analyses the roles played by supervised and unsupervised ML in enhancing IoT security.
Supervised Learning
Citations (37)
A dynamic growing neural network (DGNN) for supervised learning of pattern recognition or unsupervised learning of clustering is presented. The main ideas in DGNN are growing, resonance, and post-pruning. DGNN is called dynamic growing because it is based on the Hebbian learning rule and adds new neurons under certain conditions. When DGNN performs supervised learning, resonance occurs if the winner cannot match the training example; this rule combines the ART/ARTMAP neural network with the winner-take-all (WTA) learning rule. When DGNN performs unsupervised learning, post-pruning is carried out to prevent overfitting the training data, much as in decision tree learning. DGNN's pruning rule is based on a distance threshold. DGNN has several advantages: learning is stable, because the network grows only under certain conditions, and it is faster than back-propagation while achieving favorable predictive accuracy on small, noisy, online, or offline data sets. Three classes of simulations are performed on standard benchmarks: the circle-in-the-square and two-spirals-apart benchmarks are used to check DGNN's supervised learning and compare it with ARTMAP and BP neural networks, and DGNN's unsupervised learning is checked on the UCI Machine Learning Archive's Synthetic Control Chart Time Series data set.
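The growing rule described here can be sketched compactly: a winner-take-all layer adds a neuron whenever the nearest existing neuron is beyond a distance threshold (resonance fails), and otherwise nudges the winner toward the input. This is an illustrative reading, not the DGNN itself, and it omits the post-pruning step:

```python
import numpy as np

class GrowingNet:
    def __init__(self, threshold=1.0, lr=0.2):
        self.w = []                 # one weight vector per neuron
        self.threshold = threshold  # distance beyond which we grow
        self.lr = lr

    def learn(self, x):
        if not self.w:
            self.w.append(x.copy())
            return 0
        d = [np.linalg.norm(x - w) for w in self.w]
        win = int(np.argmin(d))      # WTA: the closest neuron wins
        if d[win] > self.threshold:
            self.w.append(x.copy())  # resonance fails -> grow a new neuron
            return len(self.w) - 1
        self.w[win] += self.lr * (x - self.w[win])  # move winner toward input
        return win

net = GrowingNet(threshold=0.8)
rng = np.random.default_rng(0)
for x in rng.normal(size=(200, 2)).clip(-2, 2):
    net.learn(x)
print(f"{len(net.w)} neurons grown")
```

A post-pruning pass would then drop neurons that rarely win, which is the overfitting control the abstract compares to decision-tree pruning.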
Competitive learning
Supervised Learning
Hebbian theory
Citations (6)
Real-world named entity recognition (NER) datasets are notorious for their noisy nature, attributable to annotation errors, inconsistencies, and subjective interpretations. Such noise presents a substantial challenge for traditional supervised learning methods. In this paper, we present a new and unified approach to tackling annotation noise for NER. Our method casts NER as a constituency tree parsing problem, utilizing tree-structured Conditional Random Fields (CRFs) with uncertainty evaluation for integration. Through extensive experiments on four real-world datasets, we demonstrate the effectiveness of our model in addressing both partial and incorrect annotation errors. Remarkably, our model exhibits strong performance even in extreme scenarios with 90% annotation noise.
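The paper's tree-structured CRF is involved, but the underlying idea, scoring whole structures rather than independent tags, can be shown with the simpler linear-chain case. The sketch below runs Viterbi decoding over toy NER scores; it is a stand-in for, not an instance of, the tree-structured model:

```python
import numpy as np

tags = ["O", "B-PER", "I-PER"]
emit = np.log(np.array([          # per-token tag scores, rows = 3 tokens
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.2],
    [0.3, 0.1, 0.6],
]))
trans = np.log(np.array([         # transition scores, trans[i, j] = i -> j
    [0.8, 0.15, 0.05],
    [0.3, 0.1, 0.6],
    [0.4, 0.1, 0.5],
]))

def viterbi(emit, trans):
    n, k = emit.shape
    score = emit[0].copy()
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = score[:, None] + trans + emit[t]  # (prev tag, next tag) scores
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [tags[i] for i in reversed(path)]

print(viterbi(emit, trans))  # globally optimal tag sequence
```

The tree-structured version generalizes this dynamic program from a chain to a constituency tree, and the paper additionally weights training instances by uncertainty.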
CRFs
Named Entity Recognition
Tree (set theory)
Structured prediction
Citations (2)
Machine learning is the field dedicated to the design and development of algorithms and techniques that allow computers to "learn". Two common types of learning are supervised learning and unsupervised learning. It is often understood that in supervised learning, the system is given the desired output and is required to produce the correct output for a given input, while in unsupervised learning the system is given only the input and the objective is to find the natural structure inherent in the input data. We suggest, however, that even with unsupervised learning, the information inside the input, the structure of the input, and the sequence in which the input is given to the system make the learning "supervised" in some way. Therefore, we recommend that in order to make the machine learn, even in a "supervised" manner, we should use an "unsupervised learning" model together with an appropriate way of presenting the input. We propose in this paper a simple plasticity neural network model that can store information as well as the association between a pair of inputs. We then introduce two simple unsupervised learning rules and a framework to supervise our neural network.
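The "store the association between a pair of inputs" idea can be illustrated with a classical Hebbian outer-product rule, as in hetero-associative memories. The specific rule below is an assumed stand-in, not the paper's plasticity model:

```python
import numpy as np

dim = 16
rng = np.random.default_rng(0)
W = np.zeros((dim, dim))

# Unsupervised Hebbian storage: strengthen weights between co-active units.
pairs = [(rng.choice([-1, 1], dim), rng.choice([-1, 1], dim)) for _ in range(3)]
for a, b in pairs:
    W += np.outer(b, a) / dim   # associate pattern a with pattern b

# Recall: presenting a noisy 'a' retrieves its associated 'b'.
a, b = pairs[0]
noisy = a * np.where(rng.random(dim) < 0.1, -1, 1)  # flip ~10% of the bits
recalled = np.sign(W @ noisy)
print("recall matches:", int((recalled == b).sum()), "of", dim)
```

In the paper's framing, the order and structure of the presented pairs is what makes such an unsupervised rule behave as if it were supervised.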
Competitive learning
Supervised Learning
Instance-based learning
Citations (6)
The brain performs unsupervised learning and (perhaps) simultaneous supervised learning. This raises the question of whether a hybrid of supervised and unsupervised methods would produce better learning. Inspired by the rich space of Hebbian learning rules, we set out to directly learn the unsupervised learning rule, operating on local information, that best augments a supervised signal. We present the Hebbian-augmented training algorithm (HAT), which combines gradient-based learning with an unsupervised rule on pre-synaptic activity, post-synaptic activity, and current weights. We test HAT's effect on a simple problem (Fashion-MNIST) and find consistently higher performance than supervised learning alone. This finding provides empirical evidence that unsupervised learning on synaptic activity provides a strong signal that can augment gradient-based methods. We further find that the meta-learned update rule is a time-varying function, making it difficult to pinpoint an interpretable Hebbian update rule that aids training. We do find that the meta-learner eventually degenerates into a non-Hebbian rule that preserves important weights so as not to disturb the learner's convergence.
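The HAT idea can be sketched as a gradient step followed by an unsupervised adjustment computed from pre-synaptic activity, post-synaptic activity, and the current weights. Here the Hebbian term is a fixed, hand-written stand-in for the meta-learned, time-varying rule the paper reports:

```python
import torch
import torch.nn as nn

layer = nn.Linear(784, 10)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
eta = 1e-3  # strength of the unsupervised augmentation

def hebbian_update(pre, post, w):
    # Plain Hebb (outer product of activities) with a small weight decay;
    # HAT instead meta-learns this function of (pre, post, w).
    return torch.einsum("bi,bj->ij", post, pre) / pre.shape[0] - 0.01 * w

x = torch.randn(32, 784)               # stand-in for a Fashion-MNIST batch
y = torch.randint(0, 10, (32,))

post = layer(x)
loss = nn.functional.cross_entropy(post, y)
opt.zero_grad()
loss.backward()
opt.step()                             # supervised, gradient-based step

with torch.no_grad():                  # unsupervised, Hebbian-style step
    layer.weight += eta * hebbian_update(x, post.detach(), layer.weight)
```

The paper's finding that the meta-learned rule drifts away from plain Hebb toward a weight-preserving update suggests the second term here does more of the useful work late in training.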
Hebbian theory
MNIST database
Learning rule
Competitive learning
Leabra
Supervised Learning
Citations (1)