Algorithms to determine the feasibilities and weights of multi-layer perceptrons with application to speech classification

1996 
In this dissertation, two algorithms are presented to determine the feasibility and weights of multi-layer perceptrons under different assumptions and network structures: the Weight Deletion/Selection (WDS) algorithm and the Weight Determination with Partial Training (WDPT) algorithm. The WDS algorithm examines feasibility and selects the weights of the second layer of a two-layer perceptron, without any training procedure, based on pre-determined decision regions. The WDPT algorithm produces decision boundaries by partially training, if necessary, the weights of the first layer of a three-layer perceptron, where the weights of the second and third layers are determined without any training procedure. In the WDS algorithm, we use the weight deletion procedure to determine whether a decision region can be implemented by a two-layer perceptron, and use the weight selection procedure to generate the weights of the second layer when the decision region is implementable. The algorithm is formulated for two-dimensional inputs and two-class classification problems and generalizes easily to multi-dimensional inputs and multi-class classification problems. The WDS algorithm applies only to pre-determined decision regions, whereas in practical pattern recognition problems decision regions are not pre-determined. Moreover, the algorithm examines the feasibility of two-layer perceptrons and generates weights only for implementable decision regions; if the decision regions are not implementable, the WDS algorithm cannot produce the weights. To overcome these limitations, we develop the WDPT algorithm, which handles practical pattern recognition problems by adding a third layer to the perceptron. The WDPT algorithm generates the weights of the first layer of a three-layer perceptron by grouping the original data into several subgroups and then examining the normality of each subgroup.
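As a hedged illustration of the kind of network the abstract describes (the abstract itself gives no code), the sketch below shows a two-layer perceptron with hard-threshold units in which each first-layer node realizes one half-plane and a single second-layer node combines them, with unit weights and a threshold equal to the number of first-layer nodes, into a convex decision region. All weights and the example region are illustrative assumptions, not the dissertation's actual WDS construction.

```python
import numpy as np

# Illustrative sketch (not the dissertation's construction): a two-layer
# perceptron with hard-threshold units. Each first-layer node tests one
# half-plane w.x - theta >= 0; the second-layer node ANDs the resulting
# bits by using unit weights and a threshold equal to the number of
# first-layer nodes, so the network outputs 1 exactly inside the region.

def step(v):
    return (v >= 0.0).astype(float)

def two_layer_perceptron(x, W1, theta1):
    """W1: (n_nodes, dim) first-layer weights; theta1: (n_nodes,) thresholds."""
    h = step(W1 @ x - theta1)          # first layer: one bit per half-plane
    n = len(theta1)
    return step(np.ones(n) @ h - n)    # second layer: AND of all bits

# Example: the unit square [0, 1]^2 as the intersection of four half-planes.
W1 = np.array([[ 1.0,  0.0],   # x >= 0
               [-1.0,  0.0],   # x <= 1
               [ 0.0,  1.0],   # y >= 0
               [ 0.0, -1.0]])  # y <= 1
theta1 = np.array([0.0, -1.0, 0.0, -1.0])
```

Because the second-layer weights and threshold are fixed by the number of first-layer nodes, only the first-layer hyperplanes depend on the data, which mirrors the abstract's point that the weights of the later layers can be pre-determined.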
If a subgroup is normally distributed, we approximate its decision boundaries with a hyper-octahedron covering a high percentage of the patterns in the subgroup, and thereby obtain the weights of the first layer of the perceptron. If the subgroup is not normally distributed, we use a fast training algorithm to obtain the weights of the first layer, and then form the decision boundaries from them. In the WDPT algorithm, all weights in the second and third layers are pre-determined. We also prove the feasibility of the WDPT algorithm by setting proper thresholds in the nodes of the second and third layers. Finally, we apply the WDPT algorithm to voiced-unvoiced-silence classification of speech signals and compare the experimental results with those of a counterpart algorithm, the back-propagation algorithm. The results show that the WDPT algorithm outperforms the back-propagation algorithm in both CPU time and recognition rate.
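The hyper-octahedron step above can be sketched as follows, under assumptions the abstract does not spell out: we take the hyper-octahedron to be an axis-aligned L1 ball around the subgroup's sample mean, scaled per dimension by the sample standard deviation, with the radius grown until the desired fraction of patterns is covered; each face of the octahedron then yields one first-layer hyperplane. The function names and the quantile-based coverage rule are illustrative, not the dissertation's exact procedure.

```python
import numpy as np
from itertools import product

def octahedron_first_layer(patterns, coverage=0.95):
    """Fit a hyper-octahedron (scaled L1 ball) around the patterns and
    return (weights, thresholds): one hyperplane w.x <= t per face."""
    X = np.asarray(patterns, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0.0] = 1.0                 # guard degenerate dimensions
    # standardized L1 distance of every pattern from the centre
    d = np.abs((X - mu) / sigma).sum(axis=1)
    r = np.quantile(d, coverage)              # radius covering the fraction
    dim = X.shape[1]
    # each sign combination s in {-1,+1}^dim gives one face:
    #   sum_i s_i * (x_i - mu_i) / sigma_i <= r
    signs = np.array(list(product([-1.0, 1.0], repeat=dim)))
    weights = signs / sigma                   # 2^dim face normals
    thresholds = r + weights @ mu             # w.x <= t holds inside
    return weights, thresholds

def inside(x, weights, thresholds):
    """A point lies inside the octahedron iff every face constraint holds."""
    return bool(np.all(weights @ np.asarray(x, float) <= thresholds + 1e-9))
```

Each row of `weights` with its threshold is a candidate first-layer node; a point is inside the octahedron exactly when all 2^dim face constraints are satisfied, which is what the pre-determined second-layer node can test.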