Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough.
2021
Optimizing the approximation of Average Precision (AP) has been widely studied for image retrieval. Such methods consider both negative and positive instances ranked before each positive instance. However, we claim that only penalizing negative instances before positive ones is enough, because the loss arises only from them. To this end, we propose a novel loss, namely Penalizing Negative instances before Positive ones (PNP), which directly minimizes the number of negative instances ranked before each positive one. Meanwhile, AP-based methods adopt a sub-optimal gradient assignment strategy. We systematically investigate different gradient assignment solutions by constructing derivative functions of the loss, resulting in PNP-I with increasing derivative functions and PNP-D with decreasing ones. PNP-I focuses more on hard positive instances by assigning larger gradients to them and tries to pull all relevant instances closer together. In contrast, considering that such instances may belong to another center of the corresponding category, PNP-D pays less attention to them and keeps them as they are. For most real-world data, one class usually contains several local clusters, so PNP-D is more suitable for such situations. Experiments on three standard retrieval datasets show results consistent with the above analysis. Extensive evaluations demonstrate that PNP-D achieves state-of-the-art performance.
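To make the idea of the loss concrete, below is a minimal PyTorch-style sketch of a PNP-like objective with a decreasing derivative (in the spirit of PNP-D). It is an illustration under assumptions, not the authors' exact formulation: the sigmoid temperature `tau` and the specific transform `1 - 1/(1 + n)` (chosen only because its derivative decreases as the count of offending negatives grows) are hypothetical choices for this sketch.

```python
import torch

def pnp_d_loss_sketch(sim, labels, tau=0.05):
    """Sketch of a PNP-style loss with a decreasing derivative (PNP-D flavor).

    sim:    (B, B) pairwise similarities within a batch, sim[q, i] = s(q, i).
    labels: (B,) integer class labels.
    tau:    sigmoid temperature smoothing the rank indicator (assumed value).
    """
    eq = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class matrix
    neg_mask = ~eq                                    # negatives of each query
    pos_mask = eq.clone()
    pos_mask.fill_diagonal_(False)                    # positives, excluding self

    # For every (query q, positive i) pair, softly count the negatives j of q
    # that are ranked above i: sigmoid((s(q, j) - s(q, i)) / tau).
    diff = sim.unsqueeze(1) - sim.unsqueeze(2)        # (B, B, B): s(q,j) - s(q,i)
    soft_rank = torch.sigmoid(diff / tau)
    n_neg_before = (soft_rank * neg_mask.unsqueeze(1)).sum(dim=2)  # (B, B)

    # Decreasing-derivative transform (illustrative): the gradient shrinks as the
    # count grows, so very hard positives receive less attention, matching the
    # PNP-D intuition described in the abstract.
    per_pair = 1.0 - 1.0 / (1.0 + n_neg_before)
    return per_pair[pos_mask].mean()
```

Swapping the final transform for one whose derivative increases with the count (e.g., a polynomial in `n_neg_before`) would instead emphasize hard positives, which is the PNP-I behavior contrasted in the abstract.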