D. Yoo and I. Kweon. (2019). arXiv:1905.03677. Comment: Accepted to CVPR 2019 (Oral).
Abstract
The performance of deep neural networks improves with more annotated data.
The problem is that the budget for annotation is limited. One solution to this
is active learning, where a model asks a human to annotate data that it
perceives as uncertain. A variety of recent methods have been proposed to apply
active learning to deep networks, but most of them are either designed
specifically for their target tasks or computationally inefficient for large
networks. In this paper, we propose a novel active learning method that is
simple but task-agnostic, and works efficiently with deep networks. We attach
a small parametric module, named "loss prediction module," to a target network,
and learn it to predict target losses of unlabeled inputs. This module can then
suggest data on which the target model is likely to produce a wrong prediction.
The method is task-agnostic because the module is learned from a single loss
regardless of the target task. We rigorously validate our method on image
classification, object detection, and human pose estimation with recent
network architectures. The results demonstrate that our method consistently
outperforms previous methods across all three tasks.
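
The abstract describes the mechanism concretely enough to sketch. Below is a
minimal, illustrative PyTorch sketch (not the authors' code) of the idea: a
small module reads an intermediate feature map of the target network, regresses
a scalar predicted loss per input, and the unlabeled inputs with the highest
predicted loss are selected for annotation. The names (LossPredictionModule,
backbone, head), the single feature tap (the paper attaches the module to
several intermediate layers), the MSE surrogate objective (the paper trains the
module with a pairwise ranking loss), and the loader that yields (index, input)
pairs are all assumptions made to keep the example self-contained.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LossPredictionModule(nn.Module):
    # Illustrative: map an intermediate feature map (B, C, H, W)
    # to one scalar predicted loss per input in the batch.
    def __init__(self, in_channels: int, hidden: int = 128):
        super().__init__()
        self.fc1 = nn.Linear(in_channels, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)    # (B, C)
        return self.fc2(F.relu(self.fc1(pooled))).squeeze(1)  # (B,)

def training_step(backbone, head, loss_module, x, y):
    feat = backbone(x)    # intermediate feature map
    logits = head(feat)   # task prediction
    # Per-sample target loss; classification is assumed here, but any
    # task loss computed with reduction="none" works, which is what
    # makes the approach task-agnostic.
    target_loss = F.cross_entropy(logits, y, reduction="none")
    pred_loss = loss_module(feat)
    # Surrogate objective: regress the detached target loss. The paper
    # itself uses a pairwise ranking loss; MSE is used here only to
    # keep the sketch short.
    module_loss = F.mse_loss(pred_loss, target_loss.detach())
    return target_loss.mean() + module_loss

@torch.no_grad()
def select_for_annotation(backbone, loss_module, unlabeled_loader, k):
    # Rank unlabeled inputs by predicted loss; the top-k are the points
    # the target model is most likely to get wrong, i.e. the most
    # informative ones to send to a human annotator.
    scores, indices = [], []
    for idx, x in unlabeled_loader:  # assumed to yield (index, input)
        scores.append(loss_module(backbone(x)))
        indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    return indices[scores.topk(k).indices]

One selection round would then look like: train jointly via training_step on
the labeled pool, call select_for_annotation on the unlabeled pool, have the
returned indices labeled, and repeat until the annotation budget is spent.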
@misc{yoo2019learning,
  author   = {Yoo, Donggeun and Kweon, In So},
  title    = {Learning Loss for Active Learning},
  year     = {2019},
  note     = {arXiv:1905.03677. Accepted to CVPR 2019 (Oral)},
  keywords = {activelearning al},
  url      = {http://arxiv.org/abs/1905.03677}
}