A Task-Aware Network for Multi-task Learning

2020 
Creating a model capable of learning new tasks without deteriorating its performance on previously learned tasks has been a long-standing challenge in multi-task learning. Fine-tuning a pre-trained network for another task can alter the network in a way that degrades its performance on the original task. In this paper, we propose a novel deep network for learning multiple tasks based on extendable Dynamic Convolutional Blocks. Using dynamic residual connections between layers, our method adjusts the network depth to enable adaptability. An activation function selection strategy accommodates a variety of choices when training the network for a specific task. We evaluated our method on the publicly available ALFW dataset and conducted a comparison study against state-of-the-art methods. Our multi-task network outperforms existing single- and multi-task methods, reducing the average error by as much as 25%, and it also exhibits greater consistency across tasks. By training variations of the proposed MTN, we observed that MTN-3 achieved the best performance, with a cumulative error rate of 1.87% and a 5.7% reduction in average error.
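
The abstract describes task-aware blocks whose residual connections and activation functions adapt per task. As a rough illustration only, the sketch below shows one way such a block could be structured; the framework (PyTorch), the class name DynamicConvBlock, the sigmoid-gated residual formulation, and the fixed activation pool are all assumptions not specified by the paper.

```python
# Minimal sketch (assumptions, not the paper's actual implementation): a
# "dynamic convolutional block" with a task-conditioned residual gate that can
# effectively bypass the block (adjusting effective depth) and a per-task
# choice of activation function.
import torch
import torch.nn as nn


class DynamicConvBlock(nn.Module):
    """Conv block whose residual path and activation depend on the task id."""

    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        # One learnable residual gate per task; sigmoid(gate) near 0 makes the
        # block act as an identity, i.e. a shallower network for that task.
        self.gates = nn.Parameter(torch.zeros(num_tasks))
        # A small pool of candidate activations; each task selects one
        # (here assigned round-robin purely for illustration).
        self.activations = nn.ModuleList([nn.ReLU(), nn.PReLU(), nn.Tanh()])
        self.task_to_act = [i % len(self.activations) for i in range(num_tasks)]

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        act = self.activations[self.task_to_act[task_id]]
        out = act(self.bn(self.conv(x)))
        gate = torch.sigmoid(self.gates[task_id])
        # Gated residual connection: blend the block output with its input.
        return gate * out + (1.0 - gate) * x


if __name__ == "__main__":
    block = DynamicConvBlock(channels=16, num_tasks=3)
    feats = torch.randn(2, 16, 32, 32)
    print(block(feats, task_id=0).shape)  # torch.Size([2, 16, 32, 32])
```

Under these assumptions, stacking several such blocks and letting each task learn its own gates would let different tasks use different effective depths of the shared backbone.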