The hidden Markov model (HMM) is a doubly stochastic process: the hidden process is a Markov process, and the observable process produces a sequence of observations. The standard HMM assumes that each observation occurs statistically independently of the others. To overcome this limitation, a temporal HMM is proposed. The hidden process in the temporal HMM is unchanged, but the observable process is now itself a Markov process: each observation in the training sequence is assumed to be statistically dependent on its predecessor, with codewords or Gaussian components serving as states of the observable Markov process. Speaker identification experiments on 138 Gaussian mixture speaker models from the YOHO database show better performance for the temporal HMM than for the standard HMM.
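The difference can be summarized in the observation term. In notation of our own choosing (hidden state path Q = q_1 ... q_T, observations o_1 ... o_T, transitions a_ij, emissions b_j), the standard and temporal likelihoods are, as a sketch,

P_{\mathrm{HMM}}(O \mid \lambda) = \sum_{Q} \prod_{t=1}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t), \qquad P_{\mathrm{temporal}}(O \mid \lambda) = \sum_{Q} \prod_{t=1}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t \mid o_{t-1}),

so each observation is additionally conditioned on its predecessor, with codewords or Gaussian components acting as the states of that observation-level Markov chain.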
    Hidden semi-Markov model
    Sequence (biology)
Deep architectures have recently been explored in the hybrid hidden Markov model/artificial neural network (HMM/ANN) framework, where the ANN outputs are usually the clustered states of context-dependent phones derived from the best-performing HMM/Gaussian mixture model (GMM) system. A hybrid HMM/ANN system can be viewed as a special case of the recently proposed Kullback-Leibler divergence based hidden Markov model (KL-HMM) approach, which models a probabilistic relationship between the ANN outputs and the context-dependent HMM states. In this paper, we show that the KL-HMM framework may not require as many clustered states in the ANN output layer as the best HMM/GMM system. Experimental results on the German part of the MediaParl database show that the KL-HMM system outperforms the hybrid HMM/ANN and HMM/GMM systems with far fewer clustered states than the HMM/GMM system requires. This reduction in the number of clustered states has broader implications for model complexity and data sparsity.
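As a hedged sketch of the KL-HMM local score (symbols are ours, and the direction of the divergence used in the paper may differ): each HMM state d holds a categorical distribution y_d over the K ANN output classes and is matched against the ANN posterior vector z_t at frame t by

D(y_d \,\|\, z_t) = \sum_{k=1}^{K} y_{dk} \log \frac{y_{dk}}{z_{tk}},

which plays the role that the GMM emission likelihood plays in an HMM/GMM system.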
    Divergence (linguistics)
    Citations (24)
Two effective models used in the field of protein secondary structure prediction, the hidden Markov model (HMM) and the input/output hidden Markov model (I/O HMM), are introduced in this paper. Their principles, algorithms, and applications are also presented. When applied to the problem of protein structure prediction with few training samples, both the HMM and the I/O HMM achieve higher performance than the other methods reported in the references.
    Maximum-entropy Markov model
    Forward algorithm
    Citations (0)
This paper proposes a new hidden Markov model (HMM) which we call the speaker-ensemble HMM (SE-HMM). An SE-HMM is a multi-path HMM in which each path is an HMM constructed from the training data of a different speaker. The SE-HMM may be considered a form of template-based acoustic model in which speaker-specific acoustic templates are compressed statistically into speaker-specific HMMs. However, one has the flexibility of building the SE-HMM at various levels of compression: an SE-HMM may be built for a triphone state, a triphone, a whole utterance, or any other convenient phonetic unit. As a result, the SE-HMM contains more detail than a conventional HMM, yet is much smaller than common template-based acoustic models. Furthermore, the construction of the SE-HMM is simple, and since it is still an HMM, its construction and computation are well supported by common HMM toolkits such as HTK. The proposed SE-HMM was evaluated on the Resource Management and Wall Street Journal tasks, where it consistently gives better word recognition results than a conventional HMM.
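In a hedged formulation (our notation and combination rule, not necessarily the paper's), an SE-HMM built from S speakers scores a segment O by combining the likelihoods of its speaker-specific paths λ_s, for instance with path weights w_s:

P(O \mid \lambda_{\mathrm{SE}}) = \sum_{s=1}^{S} w_s\, P(O \mid \lambda_s), \qquad \sum_{s=1}^{S} w_s = 1.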
In the genomics era, data-analysis models and algorithms that provide the means to reduce large, complex datasets to meaningful information are integral to further understanding complex biological systems. Hidden Markov Models are one such data-analysis technique and have become the basis of many bioinformatics tools. Their relative success is due mainly to their conceptual simplicity and strong statistical foundation. Although they are among the most popular modeling and data-analysis techniques for classifying data sequences, researchers have few software options available for quickly implementing the necessary modeling framework and algorithms. Most tools are still hand-coded, because current implementation solutions do not provide the ease or flexibility that would allow researchers to apply models in non-traditional ways. In this doctoral thesis, we have developed an open-source Java software package, called JUCHMME, which gives researchers the flexibility to apply Hidden Markov Models to sequence-analysis problems. It allows researchers to implement a model quickly from a simple text file while retaining the flexibility to adapt the model in non-traditional ways. In addition, we developed several features/extensions that are not available in any current HMM implementation tool, such as Hidden Neural Networks (HNNs), models that depend on previous observations, a method for semi-supervised HMM training that incorporates labeled, unlabeled, and partially labeled data, and several ways of integrating additional data sources to make better predictions. Using JUCHMME, we were able to apply HMM models to an important biological problem, the topology prediction of α-helical and β-barrel transmembrane proteins, obtaining encouraging results.
    Maximum-entropy Markov model
    Hidden semi-Markov model
    Forward algorithm
    Citations (0)
With the system call sequences of computer systems considered as a data source, this paper expounds how to use hidden Markov models (HMMs) for software behavior recognition and trend prediction. Because the HMM is sensitive to its initial parameters, and especially to the emission parameter B, which can cause the model to fall into a local optimum during training, this paper proposes a genetic algorithm (GA) approach to optimize the B parameter together with the HMM, establishing an optimal training model called GA-HMM. To remove the HMM's dependence on the characteristics of the observations, the paper also puts forward a new approach that recognizes software behavior from the hidden states.
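A minimal sketch of the GA-over-B idea, assuming a discrete-observation HMM whose fitness is the sequence log-likelihood under a candidate emission matrix B. The transition matrix A and initial distribution pi are held fixed here, and the population size, selection, and mutation scheme are illustrative choices of our own, not details from the paper:

import numpy as np

rng = np.random.default_rng(0)

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def random_B(n_states, n_symbols):
    """Random row-stochastic emission matrix."""
    B = rng.random((n_states, n_symbols))
    return B / B.sum(axis=1, keepdims=True)

def mutate(B, rate=0.1):
    """Perturb a candidate emission matrix and renormalise its rows."""
    B = B + rate * rng.random(B.shape)
    return B / B.sum(axis=1, keepdims=True)

def ga_optimize_B(obs, pi, A, n_symbols, pop=20, gens=50):
    population = [random_B(len(pi), n_symbols) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda B: forward_loglik(obs, pi, A, B), reverse=True)
        elite = scored[: pop // 2]                       # selection: keep the fitter half
        population = elite + [mutate(B) for B in elite]  # mutation fills the next generation (no crossover in this sketch)
    return max(population, key=lambda B: forward_loglik(obs, pi, A, B))

# toy usage: 3 hidden states, 4 observation symbols
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
obs = rng.integers(0, 4, size=200)
B_opt = ga_optimize_B(obs, pi, A, n_symbols=4)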
    Forward algorithm
    Citations (0)
Artificial neural networks are increasingly being applied to time series forecasting, but with mixed results; there appear to be as many methods as there are studies. This research investigates whether prior statistical deseasonalising of the data is necessary for producing accurate forecasts with neural networks, or whether the network can adequately model seasonality itself. Neural networks trained on deseasonalised data from [5] were compared with neural networks developed without prior deseasonalisation. Both sets of neural networks produced forecasts for the 68 monthly time series from the M-competition [7]. Results indicate that neural network forecasts from deseasonalised data are significantly more accurate than the forecasts produced by neural networks which modelled seasonality.
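As a hedged illustration of what "prior statistical deseasonalising" can look like for monthly data, here is a simple multiplicative seasonal-index adjustment; the paper's exact procedure is not specified here, and the function name and toy series are ours:

import numpy as np

def deseasonalise_monthly(y):
    """Remove a multiplicative monthly pattern: divide each value by its month's average ratio to the overall mean."""
    y = np.asarray(y, dtype=float)
    months = np.arange(len(y)) % 12
    indices = np.array([y[months == m].mean() for m in range(12)]) / y.mean()
    return y / indices[months], indices   # deseasonalised series + indices for re-seasonalising forecasts

# toy usage: a trending series with an annual cycle
t = np.arange(120)
series = (100 + t) * (1 + 0.3 * np.sin(2 * np.pi * t / 12))
deseasonalised, seasonal_indices = deseasonalise_monthly(series)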
    Seasonality
    Citations (9)
In this paper, we propose a new method for music identification based on the embedded hidden Markov model (EHMM). Unlike a conventional HMM, the EHMM estimates the emission probability of its external HMM from a second, state-specific HMM, referred to as the internal HMM. The EHMM clusters the feature blocks with its external HMM and describes the spectral and temporal structure of each feature block with its internal HMMs. Our analysis and experimental results show that the proposed method for music identification achieves higher accuracy and lower complexity than previous approaches.
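In a hedged notation of our own, the emission probability of external state j for a feature block B_t is itself an HMM likelihood,

b_j(B_t) = P\bigl(B_t \mid \lambda_j^{\mathrm{int}}\bigr),

evaluated with the forward algorithm of that state's internal HMM λ_j^int over the frames of the block.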
    Identification
    Feature (linguistics)
    Citations (0)
    A "mixed autoregressive hidden Markov model" (MAR-HMM) is proposed for modeling people's movements. MAR-HMM is equivalent to a special case of an autoregressive hidden Markov model (AR-HMM), which takes into account changes of people's internal properties. The number of parameters is thus reduced in the case of MAR-HMM. A dataset is applied to evaluate MAR-HMM in this study. The prediction rate of MAR-HMM is 56.8% and that of AR-HMM is 51.5%. It is therefore concluded that MAR-HMM is applicable to trajectory analysis of pedestrians.
    Hidden semi-Markov model
    Forward algorithm
    Citations (13)
This paper presents a hidden Markov model (HMM) approach for mid- to long-term load forecasting. HMMs have been used extensively for pattern recognition and classification problems because of their proven suitability for modeling dynamic systems. However, using an HMM for prediction is not straightforward. Here we use a single HMM trained on the historical dataset of the chosen load data. The trained HMM is then used to search the historical dataset for data patterns that behave like the variable of interest, and forecasts are prepared by interpolating the values neighboring these matched patterns. The results obtained using the HMM are encouraging, and the HMM offers a new paradigm for load forecasting, an area that has attracted much research interest lately.
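A hedged sketch of this likelihood-matching idea, assuming a Gaussian HMM from the hmmlearn library; the model class, window length, and interpolation rule below are our own illustrative choices, not details from the paper:

import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumption: hmmlearn is installed

def hmm_forecast(load, window=24, n_states=4):
    """Forecast the next load value by matching the log-likelihood of the latest window
    against past windows and interpolating the change that followed the closest match."""
    X = load.reshape(-1, 1)
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X)

    target = model.score(X[-window:])           # log-likelihood of the most recent window
    best_t, best_diff = None, np.inf
    for t in range(len(load) - 2 * window):      # scan past windows that still have a "next value"
        diff = abs(model.score(X[t:t + window]) - target)
        if diff < best_diff:
            best_t, best_diff = t, diff
    # interpolation rule (assumed): last observed value plus the step that followed the matched window
    return load[-1] + (load[best_t + window] - load[best_t + window - 1])

# toy usage
rng = np.random.default_rng(0)
load = 100 + 10 * np.sin(np.arange(500) / 24) + rng.normal(0, 1, 500)
print(hmm_forecast(load))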
    Citations (10)