Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Jing | - |
dc.contributor.author | Ipeirotis, Panagiotis | - |
dc.date.accessioned | 2013-06-19T14:39:56Z | - |
dc.date.available | 2013-06-19T14:39:56Z | - |
dc.date.issued | 2013-06-19 | - |
dc.identifier.uri | http://hdl.handle.net/2451/31833 | - |
dc.description.abstract | The emergence of online paid micro-crowdsourcing platforms, such as Amazon Mechanical Turk (AMT), allows on-demand, at-scale distribution of tasks to human workers around the world. In such settings, online workers come and complete small tasks posted by a company, working as much or as little as they wish. Such temporary employer-employee relationships give rise to adverse selection, moral hazard, and many other challenges. How can we ensure that the submitted work is accurate, especially when the cost of verification is comparable to the cost of performing the task? How can we estimate the quality exhibited by the workers? What pricing strategies should be used to induce effort from workers with varying ability levels? We develop a comprehensive framework for managing quality in such micro-crowdsourcing settings. First, we describe an algorithm for estimating the error rates of the participating workers and show how to separate systematic worker biases from unrecoverable errors, generating an unbiased “worker quality” measurement. Next, we present a selective repeated-labeling algorithm that acquires labels so that quality requirements are met at minimum cost. Then, we propose a quality-adjusted pricing scheme that adjusts the payment level according to the value contributed by each worker. We test our compensation scheme in a principal-agent setting in which workers respond to incentives by varying their effort. Our simulation results demonstrate that the proposed pricing scheme induces workers to exert higher levels of effort and yields larger profits for employers than commonly adopted uniform pricing schemes. We also describe strategies that build on our quality control and pricing framework to tackle crowdsourced tasks of increasing complexity while still maintaining tight quality control over the process. | en_US |
dc.relation.ispartofseries | CBA-13-06; | - |
dc.title | A Framework for Quality Assurance in Crowdsourcing | en_US |
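The abstract above describes separating a worker's systematic bias from unrecoverable error and turning the result into an unbiased “worker quality” measurement. The following is a minimal illustrative sketch of that general idea, not the authors' algorithm: it assumes a worker's confusion matrix and the class priors are already available (in practice they could be estimated, e.g., with an EM-style procedure or from gold labels), converts each assigned label into a bias-corrected soft label, and scores the worker by the expected misclassification cost of those soft labels. All function names, matrices, and priors below are hypothetical.

```python
# Hedged sketch: cost-based worker quality from a confusion matrix.
# A perfectly biased worker (labels always flipped) is fully recoverable
# and scores 1.0; a worker whose labels carry no information scores 0.0.
import numpy as np

def soft_label(confusion, priors, assigned):
    """Posterior over true classes given the worker assigned class `assigned`.
    confusion[t, a] = P(worker says a | true class t); priors[t] = P(true class t)."""
    joint = priors * confusion[:, assigned]        # P(true = t, assigned = a)
    return joint / joint.sum()                     # P(true = t | assigned = a)

def expected_cost(posterior, cost):
    """Expected cost of the cost-minimizing prediction under this soft label.
    cost[t, p] = cost of predicting p when the true class is t."""
    return min((posterior * cost[:, p]).sum() for p in range(len(posterior)))

def worker_quality(confusion, priors, cost):
    """1 minus the worker's average expected cost, normalized by the cost of
    an uninformative ('spammer') worker whose posterior equals the prior."""
    p_assigned = priors @ confusion                # P(worker assigns a)
    worker_cost = sum(
        p_assigned[a] * expected_cost(soft_label(confusion, priors, a), cost)
        for a in range(len(p_assigned))
    )
    spammer_cost = expected_cost(priors, cost)
    return 1.0 - worker_cost / spammer_cost

priors = np.array([0.5, 0.5])
cost = np.array([[0.0, 1.0],                       # symmetric 0/1 misclassification cost
                 [1.0, 0.0]])

flipper = np.array([[0.0, 1.0],                    # always flips the label: pure bias
                    [1.0, 0.0]])
random_worker = np.array([[0.5, 0.5],              # labels carry no information
                          [0.5, 0.5]])

print(worker_quality(flipper, priors, cost))        # -> 1.0 (bias is recoverable)
print(worker_quality(random_worker, priors, cost))  # -> 0.0 (pure error)
```

The same cost-based score could, under these assumptions, feed the quality-adjusted payment and selective repeated-labeling steps the abstract mentions, though the paper's actual estimation and pricing procedures are not reproduced here.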
Appears in Collections: Center for Business Analytics Working Papers
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Framework for QA in Crowdsourcing.pdf | | 1.61 MB | Adobe PDF |
Items in FDA are protected by copyright, with all rights reserved, unless otherwise indicated.