Automatic posture change analysis of lactating sows by action localisation and tube optimisation from untrimmed depth videos

2020 
The automatic detection of postures and posture changes in sows using a computer-vision system has substantial potential for learning their maternal abilities, enhancing their welfare and productivity, and reducing the risk of piglet crushing. The objectives of this study are to (1) detect frame-level sow postures, (2) temporally localise posture change actions, and (3) generate spatio-temporal action tubes parsed from long, untrimmed segments of depth video. Depth videos of five batches of lactating sows were recorded on a commercial farm using a top-view Kinect. Three batches were used for training and validation, and the other two for testing. Four postures (standing, sitting, ventral lying, and lateral lying) were automatically detected with a mean average precision (mAP) of 0.927. Temporal localisation of eight posture change actions achieved a clip-level mAP of 0.774 at a temporal intersection over union (tIoU) ≥ 0.5. A tube optimisation algorithm was used to optimise and smooth the action tubes; at a mean tube IoU ≥ 0.8, video-level mAP improved significantly, from 0.313 to 0.796. An error analysis deepened the understanding of the causes of errors in action detection. The system was applied to two-day test videos of various sows, obtaining the regularity of posture change probability, comparing action characteristics, and discerning maternal differences among the sows. The methodology can be applied in large-scale deployments for learning livestock action preferences and behavioural traits, thereby enhancing welfare and productivity on a farm.
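As an illustration of the tIoU criterion used to score temporal localisation above, a minimal sketch follows; the function name and interval representation are assumptions for illustration, not the authors' code:

```python
def temporal_iou(pred, gt):
    """Temporal intersection over union of two (start, end) intervals in frames."""
    start_p, end_p = pred
    start_g, end_g = gt
    inter = max(0.0, min(end_p, end_g) - max(start_p, start_g))
    union = (end_p - start_p) + (end_g - start_g) - inter
    return inter / union if union > 0 else 0.0

# A predicted action clip counts as a true positive when its tIoU with a
# ground-truth posture change interval meets the threshold (0.5 in the study).
print(temporal_iou((10, 50), (20, 60)))  # overlap 30 / union 50 = 0.6
```

At threshold tIoU ≥ 0.5, the example prediction above would be matched to the ground-truth interval and scored as a true positive.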