Abstract
Autoencoders are deep neural networks widely used for unsupervised learning, particularly in tasks that require feature extraction and dimensionality reduction. While most research focuses on compressing input data, less attention has been given to reducing the size and complexity of the autoencoder model itself, which is crucial for deployment on resource-constrained edge devices. This paper introduces a layer-wise pruning algorithm designed specifically for multilayer perceptron-based autoencoders. The resulting pruned model is referred to as a Shapley Value-based Sparse AutoEncoder (SV-SAE). Drawing on cooperative game theory, the proposed algorithm models the autoencoder as a coalition of interconnected units and links, where the Shapley value quantifies each component's contribution to overall performance. This enables the selective removal of less important components, achieving an optimal balance between sparsity and accuracy. Experimental results confirm that the SV-SAE reaches an accuracy of 99.25% while retaining only 10% of the original links. Notably, the SV-SAE remains robust under high sparsity levels with minimal performance degradation, whereas other algorithms experience sharp declines as the pruning ratio increases. Designed for edge environments, the SV-SAE offers an interpretable framework for controlling layer-wise sparsity while preserving essential features in latent representations. The results highlight its potential for efficient deployment in resource-constrained scenarios, where model size and inference speed are critical factors.
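The abstract's core idea, treating each link (weight) as a player in a cooperative game and scoring it by its Shapley value before pruning, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy 4-2-4 architecture, the characteristic function `value` (negative reconstruction MSE with only a coalition of links active), and the Monte Carlo permutation estimator are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP autoencoder, 4 -> 2 -> 4, with fixed random weights
# (illustrative only; the paper prunes trained autoencoders).
W_enc = rng.normal(size=(4, 2))
W_dec = rng.normal(size=(2, 4))
X = rng.normal(size=(32, 4))

# Each "player" in the coalition is a single link (weight).
links = [("enc", i, j) for i in range(4) for j in range(2)] + \
        [("dec", i, j) for i in range(2) for j in range(4)]

def value(active):
    """Characteristic function v(S): negative reconstruction MSE
    when only the links in `active` are kept (others zeroed)."""
    We, Wd = np.zeros_like(W_enc), np.zeros_like(W_dec)
    for layer, i, j in active:
        if layer == "enc":
            We[i, j] = W_enc[i, j]
        else:
            Wd[i, j] = W_dec[i, j]
    Z = np.tanh(X @ We)      # latent representation
    X_hat = Z @ Wd           # linear decoder
    return -np.mean((X - X_hat) ** 2)

def shapley(links, n_perm=200):
    """Monte Carlo permutation estimate of each link's Shapley value:
    average marginal contribution over random join orders."""
    phi = {l: 0.0 for l in links}
    for _ in range(n_perm):
        coalition, v_prev = [], value([])
        for idx in rng.permutation(len(links)):
            coalition.append(links[idx])
            v_new = value(coalition)
            phi[links[idx]] += (v_new - v_prev) / n_perm
            v_prev = v_new
    return phi

phi = shapley(links)
# Prune: keep only the top 25% of links by estimated Shapley value.
keep = sorted(links, key=lambda l: phi[l], reverse=True)[: len(links) // 4]
```

Because each permutation's marginal contributions telescope, the estimated Shapley values sum exactly to `value(links) - value([])` (the efficiency property), which makes the attribution interpretable: every link's score is its share of the total reconstruction quality.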
| Original language | English |
|---|---|
| Pages (from-to) | 75666-75678 |
| Number of pages | 13 |
| Journal | IEEE Access |
| Volume | 13 |
| DOIs | |
| State | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- Edge computing
- Shapley value
- feature importance
- latent representation
- layer-wise pruning
- lightweight model
- sparse autoencoder
- unstructured pruning