MotionNet architecture with Randomized Emission training strategy for adaptive bit rate video compression

2020 
Traditional video compression methods are hand-designed and fundamentally based on frame prediction strategies. Although they work efficiently, significant improvements have been observed when parts of the pipeline are replaced with deep learning based enhancements, and this commendable performance motivates further research into purely deep learning based approaches. An adaptive bit rate representation of video is preferred over a fixed bit rate representation because it allows video quality to be adjusted to varying bandwidth while also using bits efficiently. The proposed purely deep learning based video compression architecture comprises a frame autoencoder and a motion prediction network (MotionNet) that propagates motion information for the reconstruction of predicted frames. The network was trained independently with both fixed and random emission steps. The experimental results reveal that the randomized emission training strategy gives better results for adaptive bit rate video compression when compared on the SSIM, PSNR, EPE, and TPF performance parameters.
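The abstract does not give implementation details, but the randomized-emission idea can be illustrated with a minimal sketch: a frame autoencoder emits its latent code in several chunks, and during training the number of emitted chunks is sampled at random, so reconstruction quality degrades gracefully when fewer chunks (a lower bit rate) are transmitted. All module sizes, names, and hyperparameters below are illustrative assumptions, not the authors' code.

```python
import random
import torch
import torch.nn as nn

class EmissionAutoencoder(nn.Module):
    """Hypothetical frame autoencoder whose latent is split into `chunks` emission steps."""
    def __init__(self, channels=64, chunks=8):
        super().__init__()
        self.channels, self.chunks = channels, chunks
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels * chunks, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * chunks, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame, n_chunks):
        latent = self.encoder(frame)
        # Keep only the first n_chunks latent chunks and zero the rest,
        # emulating transmission of a truncated (lower bit rate) code.
        mask = torch.zeros_like(latent)
        mask[:, : n_chunks * self.channels] = 1.0
        return self.decoder(latent * mask)

model = EmissionAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 64, 64)            # dummy batch of frames
# Randomized emission: sample a different number of emitted chunks each training step,
# as opposed to fixed-emission training, which would always use model.chunks.
steps = random.randint(1, model.chunks)
recon = model(frames, steps)
loss = nn.functional.mse_loss(recon, frames)
loss.backward()
opt.step()
```

Under this reading, fixed-emission training always reconstructs from the full code, whereas randomized-emission training exposes the decoder to every truncation level, which is consistent with the reported advantage at adaptive bit rates.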