To Re(label), or Not To Re(label)

Authors

  • Christopher Lin, University of Washington
  • Mausam, Indian Institute of Technology, Delhi
  • Daniel Weld, University of Washington

DOI:

https://doi.org/10.1609/hcomp.v2i1.13167

Keywords:

relabeling, crowdsourcing, machine learning

Abstract

One of the most popular uses of crowdsourcing is to provide training data for supervised machine learning algorithms. Since human annotators often make errors, requesters commonly ask multiple workers to label each example. But is this strategy always the most cost-effective use of crowdsourced workers? We argue "No": often classifiers can achieve higher accuracies when trained with noisy "unilabeled" data. However, in some cases relabeling is extremely important. We discuss three factors that may make relabeling an effective strategy: classifier expressiveness, worker accuracy, and budget.

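The trade-off described in the abstract can be sketched as a small simulation: under a fixed labeling budget and a fixed per-label worker accuracy, a requester can either buy one noisy label for many examples ("unilabeling") or buy several labels for fewer examples and take a majority vote ("relabeling"), then train the same classifier on each resulting dataset. The sketch below is illustrative only; the budget, worker accuracy, number of votes, synthetic linear concept, and logistic-regression classifier are all assumptions for demonstration, not the paper's experimental setup.

```python
# Minimal sketch (not the authors' code) of the relabel-vs-unilabel trade-off.
# All parameters below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(budget=300, worker_acc=0.75, votes_per_example=3, n_test=2000, dim=10):
    # Ground truth: a random linear concept over Gaussian features.
    w = rng.normal(size=dim)

    def sample(n):
        X = rng.normal(size=(n, dim))
        y = (X @ w > 0).astype(int)
        return X, y

    def noisy_labels(y, k):
        # Each of k simulated workers reports the true label with
        # probability worker_acc; the majority vote over k labels is used.
        correct = rng.random((k, y.size)) < worker_acc
        noisy = np.where(correct, y, 1 - y)
        return (noisy.mean(axis=0) > 0.5).astype(int)

    X_test, y_test = sample(n_test)

    # Strategy 1: unilabeling -- spend the budget on `budget` examples, one label each.
    X_uni, y_uni = sample(budget)
    clf_uni = LogisticRegression().fit(X_uni, noisy_labels(y_uni, 1))

    # Strategy 2: relabeling -- fewer examples, several votes each, same total budget.
    X_re, y_re = sample(budget // votes_per_example)
    clf_re = LogisticRegression().fit(X_re, noisy_labels(y_re, votes_per_example))

    return clf_uni.score(X_test, y_test), clf_re.score(X_test, y_test)

uni_acc, re_acc = simulate()
print(f"unilabel accuracy: {uni_acc:.3f}   relabel (3 votes) accuracy: {re_acc:.3f}")
```

Varying the assumed worker accuracy, budget, and classifier in this sketch is a simple way to see the three factors the abstract names: with an expressive enough classifier, high worker accuracy, or a large budget, spending the budget on more unilabeled examples can beat relabeling, and vice versa.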

Published

2014-09-05

How to Cite

Lin, C., Mausam, & Weld, D. (2014). To Re(label), or Not To Re(label). Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 2(1), 151-158. https://doi.org/10.1609/hcomp.v2i1.13167