Methodical Design and Trimming of Deep Learning Networks: Enhancing External BP Learning with Internal Omnipresent-supervision Training Paradigm

2019 
Back-propagation (BP) is now a classic learning paradigm whose supervision comes exclusively from the external (input/output) nodes. Consequently, BP is vulnerable to the curse of depth in (very) Deep Learning Networks (DLNs). This prompts us to advocate Internal Neuron's Learnability (INL), with (1) internal teacher labels (ITL) and (2) internal optimization metrics (IOM) for evaluating hidden layers/nodes. Conceptually, INL is a step beyond the notion of Internal Neuron's Explainability (INE), championed by DARPA's XAI (or AI3.0). Practically, INL facilitates a structure/parameter NP-iterative learning scheme for (supervised) deep compression/quantization: simultaneously trimming hidden nodes and raising accuracy. Pursuant to our simulations, the NP-iteration appears to outperform several prominent pruning methods in the literature.
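The abstract describes an NP-iteration that alternates a structure step (trimming hidden nodes ranked by an internal metric) with a parameter step (ordinary BP retraining). The sketch below is a minimal illustration of that alternation, not the authors' implementation: the importance metric (outgoing-weight norm) and the helper names `node_importance` and `prune_hidden_nodes` are assumptions introduced here for clarity.

```python
# Minimal sketch of a structure/parameter pruning iteration (assumed details,
# not the paper's algorithm): rank hidden nodes by an internal metric, keep
# the strongest, then fine-tune the remaining parameters with BP.
import torch
import torch.nn as nn


def node_importance(layer: nn.Linear) -> torch.Tensor:
    # Assumed internal metric: L2 norm of each hidden node's incoming weights.
    return layer.weight.norm(dim=1)


def prune_hidden_nodes(layer: nn.Linear, next_layer: nn.Linear, keep: int):
    # Keep the `keep` most important hidden nodes and rebuild both the layer
    # producing them and the layer consuming them.
    idx = node_importance(layer).argsort(descending=True)[:keep]
    new_layer = nn.Linear(layer.in_features, keep)
    new_next = nn.Linear(keep, next_layer.out_features)
    with torch.no_grad():
        new_layer.weight.copy_(layer.weight[idx])
        new_layer.bias.copy_(layer.bias[idx])
        new_next.weight.copy_(next_layer.weight[:, idx])
        new_next.bias.copy_(next_layer.bias)
    return new_layer, new_next


# Usage: alternate the pruning ("structure") step with a few epochs of BP
# fine-tuning ("parameter" step) until the target width is reached.
hidden, out = nn.Linear(784, 256), nn.Linear(256, 10)
hidden, out = prune_hidden_nodes(hidden, out, keep=128)
```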