Full metadata record
DC Field: Value
dc.contributor.author: Chen, Daizhuo
dc.contributor.author: Fraiberger, Samuel P.
dc.contributor.author: Moakler, Robert
dc.contributor.author: Provost, Foster
dc.date.accessioned: 2015-05-27T13:47:27Z
dc.date.available: 2015-05-27T13:47:27Z
dc.date.issued: 2015-05-27
dc.identifier.uri: http://hdl.handle.net/2451/33969
dc.description.abstract: Recent studies show the remarkable power of information disclosed by users on social network sites to infer the users' personal characteristics via predictive modeling. In response, attention is turning increasingly to the transparency that sites provide to users as to what inferences are drawn and why, as well as to what sort of control users can be given over inferences that are drawn about them. We draw on the evidence counterfactual as a means for providing transparency into why particular inferences are drawn about users. We then introduce the idea of a "cloaking device" as a vehicle to provide (and to study) control. Specifically, the cloaking device provides a mechanism for users to inhibit the use of particular pieces of information in inference; combined with the transparency provided by the evidence counterfactual, a user can control model-driven inferences while minimizing the disruption to her normal activity. Using these analytical tools we ask two main questions: (1) How much information must users cloak in order to significantly affect inferences about their personal traits? We find that usually a user must cloak only a small portion of her actions in order to inhibit inference. We also find that, encouragingly, false positive inferences are significantly easier to cloak than true positive inferences. (2) Can firms change their modeling behavior to make cloaking more difficult? The answer is a definitive yes. In our main results we replicate the methodology of Kosinski et al. (2013) for modeling personal traits; then we demonstrate a simple modeling change that still gives accurate inferences of personal traits, but requires users to cloak substantially more information to affect the inferences drawn.
The upshot is that organizations can provide transparency and control even into complicated, predictive model-driven inferences, but they also can make modeling choices that render such control easier or harder for their users.
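The cloaking mechanism described in the abstract can be sketched for the simplest case, a linear model over binary actions (e.g., page Likes): the evidence counterfactual asks which active features, if removed, would change the inference, and cloaking greedily removes the strongest positive evidence until the score drops below the decision threshold. This is a minimal illustrative sketch, not the paper's implementation; the feature names, weights, and greedy strategy here are invented assumptions.

```python
# Hypothetical sketch of cloaking against a linear classifier.
# All feature names and weights below are made up for illustration;
# the paper's actual models and data differ.

def cloak_counterfactual(active, weights, bias, threshold=0.0):
    """Greedily cloak active features with the largest positive weights
    until the linear score (sum of weights of active features + bias)
    falls below `threshold`. Returns (cloaked_features, final_score)."""
    score = bias + sum(weights[f] for f in active)
    # Only positively weighted evidence pushes the inference over the
    # threshold, so consider it first, strongest first.
    candidates = sorted((f for f in active if weights[f] > 0),
                        key=lambda f: weights[f], reverse=True)
    cloaked = []
    for f in candidates:
        if score < threshold:
            break  # inference already inhibited
        score -= weights[f]
        cloaked.append(f)
    return cloaked, score

# Example: a user with four actions, one of which argues against the trait.
weights = {"likes_A": 1.2, "likes_B": 0.8, "likes_C": 0.3, "likes_D": -0.5}
active = ["likes_A", "likes_B", "likes_C", "likes_D"]
cloaked, new_score = cloak_counterfactual(active, weights, bias=-1.0)
print(cloaked)  # → ['likes_A']  (one cloaked action suffices here)
print(new_score)
```

The greedy order mirrors the abstract's finding that users typically need to cloak only a small portion of their actions: removing the highest-weight evidence first minimizes the number of actions disrupted.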
dc.description.sponsorship: Columbia University, New York University, NYU Stern School of Business, NYU Center for Data Science
dc.language.iso: en_US
dc.relation.ispartofseries: CBA-15-01
dc.title: Enhancing Transparency and Control when Drawing Data-Driven Inferences about Individuals
dc.type: Working Paper
Appears in Collections: Center for Business Analytics Working Papers

Files in This Item:
File: CBA-15-01.pdf | Size: 835.88 kB | Format: Adobe PDF


Items in FDA are protected by copyright, with all rights reserved, unless otherwise indicated.