Deep learning-based solar image captioning

Ji Hye Baek, Sujin Kim, Seonghwan Choi, Jongyeob Park, Dongil Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Solar images are essential for identifying and predicting solar phenomena, and have been used as key information for analyzing space weather. In this paper, we propose a solar image captioning method that applies a transformer-based deep learning (DL) natural language processing method. In addition, we provide a new DeepSDO description dataset for training solar image captioning models. First, we develop the DeepSDO description dataset using solar image data from the Korean Data Center for the Solar Dynamics Observatory (SDO) and scripts from the National Aeronautics and Space Administration (NASA) SDO gallery website. The DeepSDO description dataset includes nine solar events: sunspots, flares, prominences, prominence eruptions, coronal holes, coronal loops, filaments, active regions, and eclipses. Second, we train a DL-based image captioning model, the meshed-memory transformer, on the DeepSDO description dataset. The experimental results show that the proposed method outperforms other benchmark methods in terms of four evaluation metrics. This study demonstrates that DL-based image captioning can successfully generate solar image captions for multiple solar features, and could potentially be applied to other topics in solar physics and space weather.
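The abstract reports that captions are scored against reference descriptions using four evaluation metrics, though the record does not name them. As an illustration only, the sketch below implements clipped unigram precision, the core of BLEU-1, which is among the standard captioning metrics; the example captions, whitespace tokenization, and the omission of the brevity penalty are all simplifying assumptions, not details from the paper.

```python
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram precision (simplified BLEU-1, no brevity penalty).

    Each candidate word is counted at most as many times as it
    appears in the reference ("clipping"), then divided by the
    candidate length.
    """
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(cand_tokens)
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    return clipped / len(cand_tokens)

# Hypothetical generated caption vs. reference description for a solar image
generated = "a large sunspot appears on the solar disk"
reference = "a large sunspot is visible on the solar disk"
score = bleu1(generated, reference)  # 7 of 8 candidate words match -> 0.875
```

Full BLEU additionally combines higher-order n-gram precisions and a brevity penalty; library implementations (e.g. NLTK's `sentence_bleu`) are normally used in practice rather than a hand-rolled version like this.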

Original language: English
Pages (from-to): 3270-3281
Number of pages: 12
Journal: Advances in Space Research
Volume: 73
Issue number: 6
State: Published - 15 Mar 2024

Bibliographical note

Publisher Copyright:
© 2024 COSPAR

Keywords

  • Deep learning
  • Image captioning
  • Solar dynamics observatory
  • Solar event
  • Solar image
