Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Weiss, Gary | - |
dc.contributor.author | Provost, Foster | - |
dc.date.accessioned | 2008-11-19T21:39:23Z | - |
dc.date.available | 2008-11-19T21:39:23Z | - |
dc.date.issued | 2003-10-01 | - |
dc.identifier.citation | Volume 19 (2003), pp. 315-345 | en |
dc.identifier.uri | http://hdl.handle.net/2451/27769 | - |
dc.description.abstract | For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them. In such circumstances, one question of practical importance is: if only n training examples can be selected, in what proportion should the classes be represented? In this article we help to answer this question by analyzing, for a fixed training-set size, the relationship between the class distribution of the training data and the performance of classification trees induced from this data. We study twenty-six data sets and, for each, determine the best class distribution for learning. The naturally occurring class distribution is shown to generally perform well when classifier performance is evaluated using undifferentiated error rate (0/1 loss). However, when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well. Since neither of these choices for class distribution always generates the best-performing classifier, we introduce a "budget-sensitive" progressive sampling algorithm for selecting training examples based on the class associated with each example. An empirical analysis of this algorithm shows that the class distribution of the resulting training set yields classifiers with good (nearly-optimal) classification performance. | en |
dc.description.sponsorship | NYU, Stern School of Business, IOMS Department, Center for Digital Economy Research | en |
dc.format.extent | 275693 bytes | - |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en_US | en |
dc.publisher | Journal of Artificial Intelligence Research | en |
dc.relation.ispartofseries | CeDER-PP-2003-04 | en |
dc.title | Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction | en |
dc.type | Article | en |
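The abstract above describes a "budget-sensitive" progressive sampling algorithm that selects which class to draw each training example from under a fixed budget n. As a rough, hypothetical illustration of the underlying idea only (not the authors' algorithm, which grows the training set progressively rather than re-sampling from scratch), the sketch below tries a small grid of candidate minority-class proportions at a fixed budget, induces a decision tree for each, and keeps the proportion with the best validation AUC. The data set, budget, candidate grid, and helper name are all invented for the example.

```python
# Hypothetical sketch: picking a class distribution under a fixed
# training-set budget, loosely inspired by the abstract above. This is
# NOT the paper's budget-sensitive progressive sampling algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic imbalanced two-class pool standing in for a costly labeled source.
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_pool, X_val, y_pool, y_val = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

def sample_with_distribution(X, y, n, minority_frac, rng):
    """Draw n examples with roughly `minority_frac` of class 1 (illustrative helper)."""
    n_min = int(round(n * minority_frac))
    n_maj = n - n_min
    idx_min = rng.choice(np.flatnonzero(y == 1), size=n_min, replace=False)
    idx_maj = rng.choice(np.flatnonzero(y == 0), size=n_maj, replace=False)
    idx = np.concatenate([idx_min, idx_maj])
    return X[idx], y[idx]

budget = 400                               # fixed training-set size n
candidates = [0.1, 0.2, 0.3, 0.4, 0.5]     # minority-class proportions to try

best_frac, best_auc = None, -np.inf
for frac in candidates:
    X_tr, y_tr = sample_with_distribution(X_pool, y_pool, budget, frac, rng)
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_val, tree.predict_proba(X_val)[:, 1])
    if auc > best_auc:
        best_frac, best_auc = frac, auc
    print(f"minority fraction {frac:.1f}: validation AUC = {auc:.3f}")

print(f"best class distribution under budget: {best_frac:.1f} (AUC {best_auc:.3f})")
```

Per the abstract, one would expect the natural distribution to fare well when scoring by error rate and a balanced distribution (0.5 here) to fare well under AUC; the brute-force loop above trades the paper's budget sensitivity for simplicity.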
Appears in Collections: CeDER Published Papers
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
CPP-04-03.pdf | - | 269.23 kB | Adobe PDF |
Items in FDA are protected by copyright, with all rights reserved, unless otherwise indicated.