Motion perception based adaptive quantization for video coding
2005
A visual measure for the purpose of video compression is proposed in this paper. The novelty of the proposed scheme lies in combining three human perception models: a motion attention model, an eye-movement-based spatio-temporal visual sensitivity function, and a visual masking model. With the aid of the spatio-temporal visual sensitivity function, the visual sensitivities to DCT coefficients in less attended macroblocks are evaluated. Spatio-temporal distortion masking measures at the macroblock level are then estimated from the visual masking thresholds of the DCT coefficients with low sensitivities. Accordingly, macroblocks that can hide more distortion are assigned larger quantization parameters. Experiments conducted on the basis of H.264 demonstrate that this scheme effectively improves coding efficiency without degrading picture quality.
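The core idea of the final step — assigning larger quantization parameters to macroblocks whose masking measures indicate they can hide more distortion — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the masking measures, the normalization, and the `max_delta` cap are all assumed placeholders; only the H.264 QP range of [0, 51] is standard.

```python
import numpy as np

def masking_adaptive_qp(masking, base_qp=28, max_delta=6):
    """Map per-macroblock distortion-masking measures to QP values.

    Macroblocks with higher masking measures (able to hide more
    distortion) receive larger QPs, i.e. coarser quantization, while
    highly attended, sensitive macroblocks stay near the base QP.
    The specific linear mapping and cap are illustrative assumptions.
    """
    m = np.asarray(masking, dtype=float)
    # Normalize masking measures to [0, 1] over the frame.
    norm = (m - m.min()) / (m.max() - m.min() + 1e-12)
    # Larger masking -> larger QP offset, capped at max_delta.
    qp = base_qp + np.round(norm * max_delta).astype(int)
    # Clamp to the valid H.264 QP range [0, 51].
    return np.clip(qp, 0, 51)

# Example: two macroblocks, one easily masked, one highly visible.
print(masking_adaptive_qp([0.0, 1.0]))  # least-masked MB keeps base_qp
```

In a real encoder the offset would feed into rate control rather than being applied directly, so that the perceptual redistribution of bits stays within the target bitrate.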