Abstract
One-class support vector machine (OCSVM) has demonstrated superior performance in one-class classification problems. However, its training is impractical for large-scale datasets owing to high computational complexity with respect to the number of training instances. In this study, we propose an approximate training method based on the concept of expected margin to obtain a model identical to full training with reduced computational burden. The proposed method selects prospective support vectors using multiple OCSVM models trained on small bootstrap samples of an original dataset. The final OCSVM model is trained using only the selected instances. The proposed method is not only simple and straightforward but also considerably effective in improving the training efficiency of OCSVM. Preliminary experiments are conducted on large-scale benchmark datasets to examine the effectiveness of the proposed method in terms of approximation performance and computational efficiency.
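The selection scheme described in the abstract can be sketched in Python with scikit-learn's `OneClassSVM`. This is only an illustrative simplification, not the paper's method: the expected-margin criterion is replaced here by simply taking the union of support vectors found across the bootstrap models, and all parameter names (`n_bootstrap`, `sample_size`) and values are assumptions for the sketch.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def approximate_ocsvm(X, n_bootstrap=10, sample_size=200,
                      nu=0.1, gamma="scale", seed=0):
    """Sketch of bootstrap-based instance selection for OCSVM training.

    Trains several OCSVMs on small bootstrap samples, collects the
    instances that became support vectors, and fits the final model
    on that reduced subset only.
    """
    rng = np.random.default_rng(seed)
    selected = set()
    for _ in range(n_bootstrap):
        # Draw a small bootstrap sample of the original dataset.
        idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=True)
        model = OneClassSVM(nu=nu, gamma=gamma).fit(X[idx])
        # model.support_ indexes into the bootstrap sample; map back
        # to indices of the original dataset.
        selected.update(idx[model.support_])
    selected = np.fromiter(selected, dtype=int)
    # Final model is trained on the selected instances only.
    final = OneClassSVM(nu=nu, gamma=gamma).fit(X[selected])
    return final, selected
```

Because each bootstrap model sees only `sample_size` instances, the per-model cost stays small, and the final fit runs on far fewer points than the full dataset.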
| Original language | English |
|---|---|
| Pages (from-to) | 772-778 |
| Number of pages | 7 |
| Journal | Computers and Industrial Engineering |
| Volume | 130 |
| DOIs | |
| State | Published - Apr 2019 |
Bibliographical note
Funding Information: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT; Ministry of Science and ICT) (No. NRF-2017R1C1B5075685). This work was also supported by Chungnam National University (No. 2018-0611-01).
Publisher Copyright:
© 2019 Elsevier Ltd
Keywords
- Approximate training
- Data selection
- Expected margin
- One-class support vector machine
- Support vector data description