A Novel Method to Compute the Weights of Neural Networks

2020 
Abstract

Neural networks are a core strength of modern artificial intelligence and have demonstrated revolutionary performance in a wide range of applications. In practice, the weights of neural networks are generally obtained indirectly through iterative training methods. Such methods are inefficient and problematic in many respects. Moreover, neural networks trained end-to-end by such methods are typical black-box models that are hard to interpret. It would therefore be significantly better if the weights of a neural network could be calculated directly. In this paper, we identify the key to calculating the weights of a neural network directly: assigning proper targets to the hidden units. Furthermore, once such targets are assigned, the neural network becomes a white-box model that is easy to interpret. We thus propose a framework for solving the weights of a neural network and provide a sample implementation of the framework. The implementation was tested in various classification and regression experiments. Compared with neural networks trained using traditional methods, the networks constructed with solved weights achieved similar or better performance on many tasks while remaining interpretable. Given the early stage of the proposed approach, many improvements can be expected in future work.
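To make the core idea concrete, the following is a minimal sketch, assuming a single hidden layer with a tanh activation and a per-layer least-squares solve. The variable names, the toy data, and the arbitrary choice of hidden targets H are illustrative assumptions, not the paper's actual framework; the framework's central question is precisely how such hidden targets should be assigned.

```python
import numpy as np

def solve_layer(inputs, targets):
    """Solve weights W and bias b so that inputs @ W + b approximates targets
    in the least-squares sense (closed form, no iterative training)."""
    A = np.hstack([inputs, np.ones((inputs.shape[0], 1))])  # append bias column
    W_aug, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return W_aug[:-1], W_aug[-1]

# Toy regression data: 200 samples, 4 input features, 1 target (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))

# Assumed hidden-unit targets H (chosen arbitrarily here for illustration).
H = np.tanh(X @ rng.normal(size=(4, 8)))

# Hidden layer: invert the tanh activation, then solve for its weights directly.
W1, b1 = solve_layer(X, np.arctanh(np.clip(H, -0.999, 0.999)))

# Output layer: solve directly on the realized hidden activations.
hidden = np.tanh(X @ W1 + b1)
W2, b2 = solve_layer(hidden, Y)

pred = hidden @ W2 + b2
print("train MSE:", float(np.mean((pred - Y) ** 2)))
```

Under this sketch, every layer's weights come from a single linear solve, which is what makes the resulting network inspectable: each hidden unit has an explicit, assigned target rather than an emergent one.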