Abstract
Yoo (Statistics 50:1086–1099, 2016) recently defined an informative predictor subspace that contains the central subspace. Estimating the informative predictor subspace does not require any of the conditions assumed in standard sufficient dimension reduction methodologies. However, like sliced inverse regression (Li in J Am Stat Assoc 86:316–342, 1991) and sliced average variance estimation (Cook and Weisberg in J Am Stat Assoc 86:328–332, 1991), its finite-sample estimation behavior is sensitive to how the predictors and the response are categorized. This paper develops an estimation approach that is robust to these categorization choices by combining sample kernel matrices in two ways. Numerical studies and a real data analysis confirm the potential usefulness of the proposed approach in practice.
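The fusing idea the abstract alludes to can be illustrated with the classical sliced inverse regression kernel: instead of committing to one number of slices, kernel matrices from several slicing choices are summed before extracting eigenvectors. The sketch below is a minimal illustration of this generic fused-slicing strategy, not the paper's exact estimator; the function names, the slice choices `(3, 4, 5, 10)`, and the simple summation rule are all assumptions for exposition.

```python
import numpy as np

def sir_kernel(X, y, n_slices):
    """SIR candidate matrix: between-slice covariance of standardized predictors.

    Illustrative implementation; the paper's informative-predictor-subspace
    kernel differs, but the slicing sensitivity it addresses arises here too.
    """
    n, p = X.shape
    # Standardize: Z = (X - mean) Sigma^{-1/2}, via eigendecomposition of Sigma
    Xc = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ inv_sqrt
    # Categorize the response by sorting y and splitting into n_slices groups
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)          # within-slice mean of Z
        M += (len(idx) / n) * np.outer(m, m)
    return M

def fused_sir(X, y, slice_choices=(3, 4, 5, 10), d=1):
    """Fuse kernels over several slicing choices, then eigendecompose.

    Summing the kernels avoids committing to a single categorization,
    which is the robustness motivation described in the abstract.
    """
    M = sum(sir_kernel(X, y, h) for h in slice_choices)
    evals, evecs = np.linalg.eigh(M)
    # Leading d eigenvectors span the estimated subspace (in Z-scale)
    return evecs[:, -d:]
```

For a single-index model such as y = x1 + error with standard normal predictors, the leading fused eigenvector closely aligns with the true direction regardless of which individual slicing choice would have worked best.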
Original language | English |
---|---|
Pages (from-to) | 350-363 |
Number of pages | 14 |
Journal | Journal of the Korean Statistical Society |
Volume | 49 |
Issue number | 2 |
DOIs | |
State | Published - 1 Jun 2020 |
Bibliographical note
Funding Information: For Jae Keun Yoo, this work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean Ministry of Education (NRF-2019R1F1A1050715/2019R1A6A1A11051177).
Publisher Copyright:
© 2020, Korean Statistical Society.
Keywords
- Clustering mean method
- Fused estimation
- Informative predictor subspace
- K-means clustering
- Sufficient dimension reduction