Conflict Relaxation of Activation-Based Regularization for Neural Network

Kangil Kim, Junhyug Noh, Dong Kyun Kim, Minhyeok Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Neural networks often penalize their loss functions with a regularization or constraint term that depends on the training data. These penalty terms are defined on the activation values of hidden vectors and are reduced together with the loss during training. By reducing the activations, networks condense the hidden vectors, and because the penalty terms take a simple form they often over-compress specific regions of the hidden vector space even after converging to an optimal penalty value. This over-compression can restrict accurate training, an unnecessary negative side effect of penalization. In this paper, we propose an approach that controls penalty values with respect to geometric density in order to reduce the risk of such compression. We provide an example of a data-dependent penalty form designed by estimating dense regions and assigning a near-zero penalty to them. In practical experiments on time series regression, the proposed approach improved training and validation accuracy without a significant loss of test accuracy. This result implies that the proposed method expands the range of samples that are accurately forecast.
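The abstract describes weighting an activation penalty by geometric density so that hidden vectors in already-dense regions receive a near-zero penalty. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch under assumed choices: a Gaussian-kernel density estimate over a batch of hidden vectors, the `bandwidth` parameter, and the `density_weighted_penalty` function name are all hypothetical, not taken from the paper.

```python
import numpy as np

def density_weighted_penalty(H, bandwidth=1.0):
    """Hypothetical sketch of a density-relaxed activation penalty.

    H: (n, d) array of hidden activation vectors from one batch.
    Each vector gets a Gaussian-kernel density score; vectors in the
    densest region get a penalty weight near zero, so the usual L2
    activation penalty is relaxed exactly where hidden vectors are
    already condensed.
    """
    # Pairwise squared distances between hidden vectors.
    sq = ((H[:, None, :] - H[None, :, :]) ** 2).sum(axis=-1)
    # Kernel density score per vector (higher = denser neighborhood).
    density = np.exp(-sq / (2.0 * bandwidth ** 2)).mean(axis=1)
    # Weight in [0, 1): zero for the densest vector, largest for outliers.
    weight = 1.0 - density / density.max()
    # Density-weighted mean L2 activation penalty.
    return float((weight * (H ** 2).sum(axis=1)).mean())
```

Because every weight lies below one, this penalty is never larger than the plain mean L2 activation penalty, which is the intended "relaxation" effect in dense regions.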

Original language: English
Article number: 8466565
Pages (from-to): 52510-52518
Number of pages: 9
Journal: IEEE Access
Volume: 6
DOIs
State: Published - 14 Sep 2018

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Conflict reduction
  • cost penalization
  • geometric sparsity
  • long short term memory
  • neural network
  • power consumption
  • recurrent neural network
  • water quality
